\section{Introduction}
Basketball is a global and growing sport with interest from fans of all ages. This growth has coincided with a rise in data availability and innovative methodology that has inspired fans to study basketball through a statistical lens. Many of the approaches in basketball analytics can be traced to pioneering work in baseball~\citep{schwartz_2013}, beginning with Bill James' publications of \emph{The Bill James Baseball Abstract} and the development of the field of ``sabermetrics''~\citep{james1984the-bill, james1987bill, james2010new}. James' sabermetric approach captivated the larger sports community when the 2002 Oakland Athletics used analytics to win a league-leading 103 regular season games despite a prohibitively small budget. Chronicled in Michael Lewis' \textit{Moneyball}, this story demonstrated the transformative value of analytics in sports~\citep{lewis2004moneyball}.
In basketball, Dean Oliver and John Hollinger were early innovators who argued for evaluating players on a per-minute basis rather than a per-game basis and developed measures of overall player value, like Hollinger's Player Efficiency Rating (PER)~\citep{oliver2004basketball, hollingerper, hollinger2004pro}. The field of basketball analytics has expanded tremendously in recent years, even extending into popular culture through books and articles by data-journalists like Nate Silver and Kirk Goldsberry, to name a few~\citep{silver2012signal, goldsberry2019sprawlball}. In academia, interest in basketball analytics transcends the game itself, due to its relevance in fields such as psychology \citep{gilovich1985hot, vaci2019large, price2010racial}, finance and gambling \citep{brown1993fundamentals, gandar1998informed}, economics (see, for example, the Journal of Sports Economics), and sports medicine and health \citep{drakos2010injury, difiori2018nba}.
Sports analytics also has immense value for statistical and mathematical pedagogy. For example, \citet{drazan2017sports} discuss how basketball can broaden the appeal of math and statistics among young students. At more advanced levels, there is also a long history of motivating statistical methods using examples from sports, dating back to techniques like shrinkage estimation \citep[e.g.][]{efron1975data} up to the emergence of modern sub-fields like deep imitation learning for multivariate spatio-temporal trajectories \citep{le2017data}. Adjusted plus-minus techniques (Section \ref{reg-section}) can be used to motivate important ideas like regression adjustment, multicollinearity, and regularization \citep{sill2010improved}.
\subsection{This review}
Our review builds on the early work of \citet{kubatko2007starting} in ``A Starting Point for Basketball Analytics,'' which aptly establishes the foundation for basketball analytics. In this review, we focus on modern statistical and machine learning methods for basketball analytics and highlight the many developments in the field since their publication nearly 15 years ago. Although we reference a broad array of techniques, methods, and advancements in basketball analytics, we focus primarily on understanding team and player performance in gameplay situations. We exclude important topics related to drafting players~\citep[e.g.][]{mccann2003illegal,groothuis2007early,berri2011college,arel2012NBA}, roster construction, win probability models, tournament prediction~\citep[e.g.][]{brown2012insights,gray2012comparing,lopez2015building, yuan2015mixture, ruiz2015generative, dutta2017identifying, neudorfer2018predicting}, and issues involving player health and fitness~\citep[e.g.][]{drakos2010injury,mccarthy2013injury}. We also note that much of the literature pertains to data from the National Basketball Association (NBA). Nevertheless, most of the methods that we discuss are relevant across all basketball leagues; where appropriate, we make note of analyses using non-NBA data.
We assume some basic knowledge of the game of basketball, but for newcomers, \url{NBA.com} provides a useful glossary of common NBA terms~\citep{nba_glossary}. We begin in Section~\ref{datatools} by summarizing the most prevalent types of data available in basketball analytics. The online supplementary material highlights various data sources and software packages. In Section~\ref{teamsection} we discuss methods for modeling team performance and strategy. Section~\ref{playersection} follows with a description of models and methods for understanding player ability. We conclude the paper with a brief discussion of our view on the future of basketball analytics.
\subsection{Data and tools}
\label{datatools}
\noindent \textbf{Box score data:} The most widely available data type is the box score. Box scores, which were introduced by Henry Chadwick in the 1800s~\citep{pesca_2009}, summarize games across many sports. In basketball, the box score includes summaries of discrete in-game events that are largely discernible by eye: shots attempted and made, points, turnovers, personal fouls, assists, rebounds, blocked shots, steals, and time spent on the court. Box scores are referenced often in post-game recaps.
\url{Basketball-reference.com}, the professional basketball subsidiary of \url{sports-reference.com}, contains box score data on the NBA and its precursors, the ABA, BAA, and NBL, dating back to the 1946-1947 season, although early seasons are incomplete; rebounds, for example, first appear for every player in the 1959-60 NBA season \citep{nbaref}. There are also options for variants on traditional box score data, including statistics on a per 100-possession, per game, or per 36-minute basis, as well as an option for advanced box score statistics. Basketball-reference additionally provides data on the WNBA and numerous international leagues. Data on further aspects of the NBA are also available, including information on the NBA G League, NBA executives, referees, salaries, contracts, and payrolls. One can find similar college basketball information on the \url{sports-reference.com/cbb/} site, the college basketball subsidiary of \url{sports-reference.com}.
For NBA data in particular, \url{NBA.com} contains a breadth of data beginning with the 1996-97 season~\citep{nbastats}. This includes a wide range of summary statistics, including those based on tracking information, a defensive dashboard, ``hustle''-based statistics, and other options. \url{NBA.com} also provides a variety of tools for comparing various lineups, examining on-off court statistics, and measuring individual and team defense segmented by shot type, location, etc. The tools provided include the ability to plot shot charts for any player on demand.
\hfill
\noindent \textbf{Tracking data}: Around 2010, the emergence of ``tracking data,'' which consist of spatially and temporally referenced player and game data, began to transform basketball analytics. Tracking data in basketball fall into three categories: player tracking, ball tracking, and data from wearable devices. Most of the basketball literature that pertains to tracking data has made use of optical tracking data from SportVU (via STATS LLC) and from Second Spectrum, the current data provider for the NBA. Optical data are derived from raw video footage from multiple cameras in basketball arenas, and typically include timestamped $(x, y)$ locations for all 10 players on the court as well as $(x, y, z)$ locations for the basketball at over 20 frames per second.\footnote{A sample of SportVU tracking data can currently be found on Github \citep{github-tracking}.} Many notable papers from the last decade use tracking data to solve a range of problems: evaluating defense \citep{franks2015characterizing}, constructing a ``dictionary'' of play types \citep{miller2017possession}, evaluating the expected value of a possession \citep{cervonepointwise}, and constructing deep generative models of spatio-temporal trajectory data \citep{yu2010hidden, yue2014learning, le2017data}. See \citet{bornn2017studying} for a more in-depth introduction to methods for player tracking data.
Recently, high resolution technology has enabled $(x,y,z)$ tracking of the basketball to within one centimeter of accuracy. Researchers have used data from NOAH~\citep{noah} and RSPCT~\citep{rspct}, the two largest providers of basketball tracking data, to study several aspects of shooting performance~\citep{marty2018high, marty2017data, bornn2019using, shah2016applying, harmon2016predicting}; see Section \ref{sec:shot_efficiency}. Finally, we also note that many basketball teams and organizations are beginning to collect biometric data on their players via wearable technology. These data are generally unavailable to the public, but can help improve understanding of player fitness and motion~\citep{smith_2018}. Because there are few publications on wearable data in basketball to date, we do not discuss them further.
\hfill
\noindent \textbf{Data sources and tools:} For researchers interested in basketball, we have included two tables in the supplementary material. Table 1 contains a list of R and Python packages developed for scraping basketball data, and Table 2 enumerates a list of relevant basketball data repositories.
\section{Team performance and strategy}
\label{teamsection}
Sportswriters often discuss changes in team rebounding rate or assist rate after personnel or strategy changes, but these discussions are rarely accompanied by quantitative analyses of how these changes actually affect the team's likelihood of winning. Several researchers have attempted to address these questions by investigating which box score statistics are most predictive of team success, typically with regression models \citep{hofler2006efficiency, melnick2001relationship, malarranha2013dynamic, sampaio2010effects}. Unfortunately, the practical implications of such regression-based analyses remain unclear, due to two related difficulties in interpreting predictors of team success: 1) multicollinearity leads to high variance estimators of regression coefficients~\citep{ziv2010predicting}; and 2) confounding and selection bias make it difficult to draw any causal conclusions. In particular, predictors that are correlated with success may not be causal when there are unobserved contextual factors or strategic effects that explain the association (see Figure \ref{fig:simpsons} for an interesting example). More recent approaches leverage spatio-temporal data to model team play within individual possessions. These approaches, which we summarize below, can lead to a better understanding of how teams achieve success.
\label{sec:team}
\subsection{Network models}
One common approach to characterizing team play involves modeling the game as a network and/or modeling transition probabilities between discrete game states. For example, \citet{fewell2012basketball} define players as nodes and ball movement as edges and compute network statistics like degree and flow centrality across positions and teams. They differentiate teams based on the propensity of the offense to either move the ball to their primary shooters or distribute the ball unpredictably.~\citet{fewell2012basketball} suggest conducting these analyses over multiple seasons to determine if a team's ball distribution changes when faced with new defenses.~\citet{xin2017continuous} use a similar framework in which players are nodes and passes are transactions that occur on edges. They use more granular data than \citet{fewell2012basketball} and develop an inhomogeneous continuous-time Markov chain to accurately characterize players' contributions to team play.
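To make the network construction concrete, the following Python sketch builds a toy passing network and computes weighted degree along with a betweenness-based proxy for flow centrality; the player labels, pass counts, and the use of the networkx library are our own illustrative assumptions rather than a reproduction of the analysis in \citet{fewell2012basketball}.
\begin{verbatim}
# Toy passing network: players are nodes, passes are weighted edges.
# Pass counts below are hypothetical, not real data.
import networkx as nx

passes = [  # (passer, receiver, count)
    ("PG", "SG", 42), ("PG", "C", 18), ("SG", "SF", 25),
    ("SF", "PF", 12), ("PF", "C", 30), ("C", "PG", 22),
]

G = nx.DiGraph()
for passer, receiver, count in passes:
    # betweenness treats weights as distances, so also store 1/count
    G.add_edge(passer, receiver, weight=count, distance=1.0 / count)

volume = dict(G.degree(weight="weight"))                # ball volume per player
flow = nx.betweenness_centrality(G, weight="distance")  # flow-centrality proxy
for player in G.nodes:
    print(player, volume[player], round(flow[player], 3))
\end{verbatim}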
\citet{skinner2015method} motivate their model of basketball gameplay with a traffic network analogy, where possessions start at Point A, the in-bounds, and work their way to Point B, the basket. With a focus on understanding the efficiency of each pathway, Skinner proposes that taking the highest percentage shot in each possession may not lead to the most efficient possible game. He also offers a mathematical justification of the ``Ewing Theory,'' which holds that a team inexplicably plays better when its star player is injured or leaves the team~\citep{simmons}, by drawing an analogy to a well-known traffic congestion paradox~\citep{skinner2010price}. See \citet{skinner2015optimal} for a more thorough discussion of optimal strategy in basketball.
\subsection{Spatial perspectives}
Many studies of team play also focus on the importance of spacing and spatial context.~\citet{metulini2018modelling} try to identify spatial patterns that improve team performance on both the offensive and defensive ends of the court. The authors use a two-state Hidden Markov Model to model changes in the surface area of the convex hull formed by the five players on the court. The model describes how changes in the surface area are tied to team performance, on-court lineups, and strategy.~\citet{cervone2016NBA} explore a related problem of assessing the value of different court-regions by modeling ball movement over the course of possessions.
Their court-valuation framework can be used to identify teams that effectively suppress their opponents' ability to control high value regions.
Spacing also plays a crucial role in generating high-value shots.~\citet{lucey2014get} examine almost 20,000 3-point shot attempts from the 2012-2013 NBA season and find that defensive factors, including a ``role swap'' where players change roles, help generate open 3-point looks.
In related work, \citet{d2015move} stress the importance of ball movement in creating open shots in the NBA. They show that ball movement adds unpredictability to offenses, which can create better offensive outcomes. The work of D'Amour and Lucey could be reconciled by recognizing that unpredictable offenses are likely to lead to ``role swaps'', but this would require further research.~\citet{sandholtz2019measuring} also consider the spatial aspect of shot selection by quantifying a team's ``spatial allocative efficiency,'' a measure of how well teams allocate shot attempts among players and court locations. They use a Bayesian hierarchical model to estimate player FG\% at every location in the half court and compare the estimated FG\%s with empirical field goal attempt rates. In particular, the authors identify a proposed optimum shot distribution for a given lineup and compare the true point total with the proposed optimum point total. Their metric, termed Lineup Points Lost (LPL), identifies which lineups and players have the most efficient shot allocation.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figs/playbook}
\caption{Unsupervised learning for play discovery \citep{miller2017possession}. A) Individual player actions are clustered into a set of discrete actions. Cluster centers are modeled using Bezier curves. B) Each possession is reduced to a set of co-occurring actions. C) By analogy, each possession is treated as a ``document'' whose ``words'' are the pairs of co-occurring actions, modeled with a bag-of-words representation. D) Possessions are clustered using Latent Dirichlet Allocation (LDA). After clustering, each possession can be represented as a mixture of strategies or play types (e.g. a ``weave'' or ``hammer'' play).}
\label{fig:playbook}
\end{figure}
\subsection{Play evaluation and detection}
Finally, \citet{lamas2015modeling} examine the interplay between offensive actions, or space creation dynamics (SCDs), and defensive actions, or space protection dynamics (SPDs). In their video analysis of six F.C. Barcelona matches from Liga ACB, they find that setting a pick was the most frequently used SCD but that it did not result in the highest probability of an open shot, since picks are most often used to initiate an offense, resulting in a new SCD. Instead, the SCD that led to the highest proportion of shots was off-ball player movement. They also find that the employed SPDs affected the success rate of the SCD, demonstrating that offense-defense interactions need to be considered when evaluating outcomes.
Lamas' analysis is limited by the need to watch games and manually label plays. Miller and Bornn address this common limitation by proposing a method for automatically clustering possessions using player trajectories computed from optical tracking data~\citep{miller2017possession}. First, they segment individual player trajectories around periods of little movement and probabilistically cluster the segments into one of over 200 discrete actions, with cluster centers modeled using Bezier curves. These actions serve as inputs to a probabilistic clustering model at the possession level. For the possession-level clustering, they propose Latent Dirichlet Allocation (LDA), a common method in the topic modeling literature~\citep{blei2003latent}. LDA is traditionally used to represent a document as a mixture of topics, but in this application, each possession (``document'') can be represented as a mixture of strategies/plays (``topics''). Individual strategies consist of a set of co-occurring individual actions (``words''). The approach is summarized in Figure \ref{fig:playbook}. This approach to unsupervised learning from possession-level tracking data can be used to characterize plays or motifs which are commonly used by teams. As the authors note, it could be used to ``steal the opponent's playbook'' or to automatically annotate and evaluate the efficiency of different team strategies. Deep learning models \citep[e.g.][]{le2017data, shah2016applying} and variational autoencoders could also be effective for clustering plays using spatio-temporal tracking data.
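The bag-of-words analogy can be illustrated with a short Python sketch: each possession is reduced to pairs of co-occurring action labels and passed to an off-the-shelf LDA implementation. The action labels and the use of scikit-learn are illustrative assumptions; the original work relies on a custom trajectory-clustering step to produce the actions.
\begin{verbatim}
# Possessions as "documents" of co-occurring action pairs ("words"),
# clustered with LDA. Action labels below are invented placeholders.
from itertools import combinations
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

possessions = [
    {"wing_pindown", "corner_spotup", "ball_screen_right"},
    {"ball_screen_right", "roll_to_rim", "corner_spotup"},
    {"dribble_handoff", "wing_pindown", "post_up_left"},
]

docs = [
    " ".join("|".join(sorted(pair)) for pair in combinations(sorted(p), 2))
    for p in possessions
]

X = CountVectorizer(token_pattern=r"\S+").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X))  # each possession as a mixture of latent "plays"
\end{verbatim}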
It may also be informative to apply some of these techniques to quantify differences in strategies and styles around the world. For example, although the US and Europe are often described as exhibiting different styles~\citep{hughes_2017}, this has not yet been studied statistically. Similarly, though some lessons learned from NBA studies may apply to the EuroLeague, the aforementioned conclusions about team strategy and the importance of spacing may vary across leagues.
\section{Player performance}
\label{playersection}
In this section, we focus on methodologies aimed at characterizing and quantifying different aspects of individual performance. These include metrics which reflect both the overall added value of a player and specific skills like shot selection, shot making, and defensive ability.
When analyzing player performance, one must recognize that variability in metrics for player ability is driven by a combination of factors. These factors include sampling variability, player development, injury, aging, and changes in strategy (see Figure \ref{fig:player_variance}). Although measurement error is usually not a big concern in basketball analytics, scorekeepers and referees can introduce bias \citep{van2017adjusting, price2010racial}. We also emphasize that basketball is a team sport, and thus metrics for individual performance are impacted by the abilities of a player's teammates. Since observed metrics are influenced by many factors, the first step when devising a method targeted at a specific quantity is to clearly distinguish the relevant sources of variability from the irrelevant nuisance variability.
To characterize the effect of these sources of variability on existing basketball metrics, \citet{franks2016meta} proposed a set of three ``meta-metrics'': 1) \emph{discrimination}, which quantifies the extent to which a metric actually reflects true differences in player skill rather than chance variation; 2) \emph{stability}, which characterizes how a player-metric evolves over time due to development and contextual changes; and 3) \emph{independence}, which describes redundancies in the information provided across multiple related metrics. Arguably, the most useful measures of player performance are metrics that are discriminative and reflect robust measurement of the same (possibly latent) attributes over time.
One of the most important tools for minimizing nuisance variability in characterizing player performance is shrinkage estimation via hierarchical modeling. In their seminal paper, \citet{efron1975data} provide a theoretical justification for hierarchical modeling as an approach for improving estimation in low sample size settings, and demonstrate the utility of shrinkage estimation for estimating batting averages in baseball. Similarly, in basketball, hierarchical modeling is used to leverage commonalities across players by imposing a shared prior on parameters associated with individual performance. We repeatedly return to these ideas about sources of variability and the importance of hierarchical modeling below.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figs/sources_of_variance2}
\caption{Diagram of the sources of variance in basketball season metrics. Metrics reflect multiple latent player attributes but are also influenced by team ability, strategy, and chance variation. Depending on the question, we may be interested primarily in differences between players, differences within a player across seasons, and/or the dependence between metrics within a player/season. Player 2 in 2018-2019 has missing values (e.g. due to injury) which emphasizes the technical challenge associated with irregular observations and/or varying sample sizes.}
\label{fig:player_variance}
\end{figure}
\subsection{General skill}
\label{sec:general_skill}
One of the most common questions across all sports is ``who is the best player?'' This question takes many forms, ranging from who is the ``most valuable'' in MVP discussions, to who contributes the most to helping his or her team win, to who puts up the most impressive numbers. Some of the most popular metrics for quantifying player-value are constructed using only box score data. These include Hollinger's PER \citep{kubatko2007starting}, Wins Above Replacement Player (WARP)~\citep{pelton}, Berri's quantification of a player's win production~\citep{berri1999most}, Box Plus-Minus (BPM), and Value Over Replacement Player (VORP)~\citep{myers}. These metrics are particularly useful for evaluating historical player value for players who pre-dated play-by-play and tracking data. In this review, we focus our discussion on more modern approaches like the regression-based models for play-by-play data and metrics based on tracking data.
\subsubsection{Regression-based approaches}
\label{reg-section}
One of the first and simplest play-by-play metrics aimed at quantifying player value is known as ``plus-minus''. A player's plus-minus is computed by adding all of the points scored by the player's team and subtracting all of the points scored by the opposing team while that player was on the court. However, plus-minus is particularly sensitive to teammate contributions, since a less-skilled player may commonly share the floor with a more-skilled teammate, thus benefiting from the better teammate's effect on the game. Several regression approaches have been proposed to account for this problem. \citet{rosenbaum} was one of the first to propose a regression-based approach for quantifying overall player value, which he terms adjusted plus-minus, or APM. In the APM model, Rosenbaum posits that
\begin{equation}
\label{eqn:pm}
D_i = \beta_0 + \sum_{p=1}^P\beta_p x_{ip} + \epsilon_i
\end{equation}
\noindent where $D_i$ is the point differential per 100 possessions between the home and away teams in stint $i$; $x_{ip} \in \lbrace 1, -1, 0 \rbrace $ indicates whether player $p$ is at home, away, or not playing, respectively; and $\epsilon_i$ is the residual. Each stint is a stretch of time without substitutions. Rosenbaum also develops statistical plus-minus and overall plus-minus, which reduce some of the noise in pure adjusted plus-minus~\citep{rosenbaum}. However, the major challenge with APM and related methods is multicollinearity: when groups of players are typically on the court at the same time, we do not have enough data to accurately distinguish their individual contributions using plus-minus data alone. As a consequence, the inferred regression coefficients, $\hat \beta_p$, typically have very large variance and are not reliably informative about player value.
APM can be improved by adding a penalty via ridge regression~\citep{sill2010improved}. The penalization framework, known as regularized APM, or RAPM, reduces the variance of resulting estimates by biasing the coefficients toward zero~\citep{jacobs_2017}. In RAPM, $\hat \beta$ is the vector which minimizes the following expression
\begin{equation}
\mathbf{\hat \beta} = \underset{\beta}{\argmin}\; (\mathbf{D} - \mathbf{X} \beta)^T (\mathbf{D} - \mathbf{X}\beta) + \lambda \beta^T \beta
\end{equation}
\noindent where $\mathbf{D}$ is the vector of point differentials, $\mathbf{X}$ is the design matrix whose rows correspond to possessions, and $\beta$ is the vector of skill-coefficients for all players. $\lambda \beta^T \beta$ represents a penalty on the magnitude of the coefficients, with $\lambda$ controlling the strength of the penalty. The penalty ensures the existence of a unique solution and reduces the variance of the inferred coefficients. Under the ridge regression framework, $\hat \beta = (X^T X + \lambda I)^{-1}X^T D$, with $\lambda$ typically chosen via cross-validation. An alternative formulation uses the lasso penalty, $\lambda \sum_p |\beta_p|$, instead of the ridge penalty~\citep{omidiran2011pm}, which encourages many players to have an adjusted plus-minus of exactly zero.
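As a concrete illustration of the estimator, the following Python sketch builds a synthetic design matrix of signed player indicators and fits a RAPM-style ridge regression with a cross-validated penalty; the stint construction, noise levels, and penalty grid are arbitrary assumptions for illustration and do not reproduce any published implementation.
\begin{verbatim}
# RAPM-style ridge regression on synthetic stints.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_stints, n_players = 500, 40

# x_ip in {1, -1, 0}: player p at home, away, or off the floor in stint i.
X = np.zeros((n_stints, n_players))
for i in range(n_stints):
    home = rng.choice(n_players, size=5, replace=False)
    rest = np.setdiff1d(np.arange(n_players), home)
    away = rng.choice(rest, size=5, replace=False)
    X[i, home], X[i, away] = 1, -1

true_beta = rng.normal(0, 2, n_players)          # latent player values
D = X @ true_beta + rng.normal(0, 12, n_stints)  # noisy point margins

model = RidgeCV(alphas=np.logspace(-1, 3, 20)).fit(X, D)
rapm = model.coef_                               # shrunken player effects
print(rapm[:5].round(2))
\end{verbatim}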
Regularization penalties can equivalently be viewed from the Bayesian perspective, where ridge regression estimates are equivalent to the posterior mode when assuming mean-zero Gaussian prior distributions on $\beta_p$ and lasso estimates are equivalent to the posterior mode when assuming mean-zero Laplace prior distributions. Although adding shrinkage priors ensures identifiability and reduces the variance of resulting estimates, regularization is not a panacea: the inferred value of players who often share the court is sensitive to the precise choice of regularization (or prior) used. As such, careful consideration should be given to choosing appropriate priors, beyond common defaults like the mean-zero Gaussian or Laplace prior. More sophisticated informative priors could be used; for example, a prior with right skewness to reflect beliefs about the distribution of player value in the NBA, or player- and position-specific priors which incorporate expert knowledge. Since coaches give more minutes to players who are perceived to provide the most value, a prior on $\beta_p$ which is a function of playing time could provide less biased estimates than standard regularization techniques, which shrink all player coefficients in exactly the same way. APM estimates can also be improved by incorporating data across multiple seasons, and/or by separately inferring each player's defensive and offensive contributions, as explored in \citet{fearnhead2011estimating}.
Several variants and alternatives to the RAPM metrics exist. For example,~\citet{page2007using} use a hierarchical Bayesian regression model to identify a position's contribution to winning games, rather than for evaluating individual players.~\citet{deshpande2016estimating} propose a Bayesian model for estimating each player's effect on the team's chance of winning, where the response variable is the home team's win probability rather than the point spread. Models which explicitly incorporate the effect of teammate interactions are also needed. \citet{piette2011evaluating} propose one approach based on modeling players as nodes in a network, with edges between players that shared the court together. Edge weights correspond to a measure of performance for the lineup during their shared time on the court, and a measure of network centrality is used as a proxy for player importance. An additional review with more detail on possession-based player performance can be found in \citet{engelmann2017possession}.
\subsubsection{Expected Possession Value}
\label{sec:epv}
The purpose of the Expected Possession Value (EPV) framework, as developed by~\citet{cervone2014multiresolution}, is to infer the expected value of the possession at every moment in time. Ignoring free throws for simplicity, a possession can take on values $Z_i \in \{0, 2, 3\}$. The EPV at time $t$ in possession $i$ is defined as
\begin{equation}
\label{eqn:epv}
v_{it}=\mathbb{E}\left[Z_i | X_{i0}, ..., X_{it}\right]
\end{equation}
\noindent where $X_{i0}, ..., X_{it}$ contain all available covariate information about the game or possession for the first $t$ timestamps of possession $i$. The EPV framework is quite general and can be applied in a range of contexts, from evaluating strategies to constructing retrospectives on the key points or decisions in a possession. In this review, we focus on its use for player evaluation and provide a brief high-level description of the general framework.
~\citet{cervone2014multiresolution} were the first to propose a tractable multiresolution approach for inferring EPV from optical tracking data in basketball. They model the possession at two separate levels of resolution. The \emph{micro} level includes all spatio-temporal data for the ball and players, as well as annotations of events, like a pass or shot, at all points in time throughout the possession. Transitions from one micro state to another are complex due to the high level of granularity in this representation. The \emph{macro} level represents a coarsening of the raw data into a finite collection of states. The macro state at time $t$, $C_t = C(X_t)$, is the coarsened state of the possession at time $t$ and can be classified into one of three state types: $\mathcal{C}_{poss}, \mathcal{C}_{trans},$ and $\mathcal{C}_{end}.$ The information used to define $C_t$ varies by state type. For example,
$\mathcal{C}_{poss}$ is defined by the ordered triple containing the ID of the player with the ball, the location of the ball in a discretized court region, and an indicator for whether the player has a defender within five feet of him or her. $\mathcal{C}_{trans}$ corresponds to ``transition states'' which are typically very brief in duration, as they include moments when the ball is in the air during a shot, pass, turnover, or immediately prior to a rebound: $\mathcal{C}_{trans} = $\{shot attempt from $c \in \mathcal{C}_{poss}$, pass from $c \in \mathcal{C}_{poss}$ to $c' \in \mathcal{C}_{poss}$, turnover in progress, rebound in progress\}. Finally, $\mathcal{C}_{end}$ corresponds to the end of the possession, and simply encodes how the possession ended and the associated value: a made field goal, worth two or three points, or a missed field goal or a turnover, worth zero points. Working with macrotransitions facilitates inference, since the macro states are assumed to be semi-Markov, which means the sequence of new states forms a homogeneous Markov chain~\citep{bornn2017studying}.
Let $C_t$ be the current state and $\delta_t > t$ be the time that the next non-transition state begins, so that $C_{\delta_t} \notin \mathcal{C}_{trans}$ is the next possession state or end state to occur after $C_t$. If we assume that coarse states after time $\delta_t$ do not depend on the data prior to $\delta_t$, that is
\begin{equation}
\textrm{for } s>\delta_{t}, P\left(C_s \mid C_{\delta_{t}}, X_{0}, \ldots, X_{t}\right)=P\left(C_{s} | C_{\delta_{t}}\right),
\end{equation}
\noindent then EPV can be defined in terms of macro and micro factors as
\begin{equation}
v_{it}=\sum_{c} \mathbb{E}\left[Z_i | C_{\delta_{t}}=c\right] P\left(C_{\delta_{t}}=c | X_{i0}, \ldots, X_{it}\right)
\end{equation}
\noindent since the coarsened Markov chain is time-homogeneous. $\mathbb{E}\left[Z | C_{\delta_{t}}=c\right]$ is macro only, as it does not depend on the full resolution spatio-temporal data. It can be inferred by estimating the transition probabilities between coarsened-states and then applying standard Markov chain results to compute absorbing probabilities. Inferring macro transition probabilities could be as simple as counting the observed fraction of transitions between states, although model-based approaches would likely improve inference.
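The macro computation can be illustrated with a small Python sketch: given an estimated transition matrix over a handful of coarsened states, the expected possession value of each transient state follows from standard absorbing Markov chain algebra, $v = (I - Q)^{-1} R z$. The states and transition probabilities below are invented for illustration and are far coarser than the state space used by \citet{cervone2014multiresolution}.
\begin{verbatim}
# Expected possession value of coarse states via absorbing-chain algebra.
import numpy as np

transient = ["ball_top_key", "ball_corner"]   # toy possession states
absorbing_values = np.array([2.0, 3.0, 0.0])  # made 2, made 3, no points

# Q: transient -> transient, R: transient -> absorbing (e.g. estimated
# from observed transition frequencies); each row of [Q | R] sums to 1.
Q = np.array([[0.55, 0.20],
              [0.30, 0.40]])
R = np.array([[0.15, 0.05, 0.05],
              [0.05, 0.20, 0.05]])

v = np.linalg.solve(np.eye(len(transient)) - Q, R @ absorbing_values)
print(dict(zip(transient, v.round(3))))       # E[Z | C = c] for each c
\end{verbatim}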
The micro models for inferring the next non-transition state (e.g. shot outcome, new possession state, or turnover) given the full resolution data, $P(C_{\delta_{t}}=c | X_{i0}, \ldots, X_{it}),$ are more complex and vary depending on the state-type under consideration.~\citet{cervone2014multiresolution} use log-linear hazard models~\citep[see][]{prentice1979hazard} for modeling both the time of the next major event and the type of event (shot, pass to a new player, or turnover), given the locations of all players and the ball. \citet{sicilia2019deephoops} use a deep learning representation to model these transitions. The details of each transition model depend on the state type: models for the case in which $C_{\delta_t}$ is a shot attempt or shot outcome are discussed in Sections \ref{sec:shot_efficiency} and \ref{sec:shot_selection}. See~\citet{masheswaran2014three} for a discussion of factors relevant to modeling rebounding and the original EPV papers for a discussion of passing models~\citep{cervone2014multiresolution, bornn2017studying}.
~\citet{cervone2014multiresolution} suggest two metrics for characterizing player ability that can be derived from EPV: Shot Satisfaction (described in Section \ref{sec:shot_selection}) and EPV Added (EPVA), a metric quantifying the overall contribution of a player. EPVA quantifies a player's value relative to the league-average player receiving the ball in a similar situation. A player $p$ who possesses the ball starting at time $t_s$ and ending at time $t_e$ contributes value $v_{t_e} - v_{t_s}^{r(p)}$ over the league-average replacement player, $r(p)$. Thus, the EPVA for player $p$, or EPVA$(p)$, is calculated as the average value that this player brings over the course of all times that player possesses the ball:
\begin{equation}
\text{EPVA(p)} = \frac{1}{N_p}\sum_{\{t_s, t_e\} \in \mathcal{T}^{p}} v_{t_e} - v_{t_s}^{r(p)}
\end{equation}
\noindent where $N_p$ is the number of games played by $p$, and $\mathcal{T}^{p}$ is the set of starting and ending ball-possession times for $p$ across all games. Averaging over games, instead of by touches, rewards high-usage players. Other ways of normalizing EPVA, e.g. by dividing by $|\mathcal{T}^p|$, are also worth exploring.
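As a toy illustration of this aggregation, the following sketch computes EPVA from a handful of invented touch records; in practice $v_{t_e}$, $v_{t_s}^{r(p)}$, and the set $\mathcal{T}^{p}$ come from the fitted EPV model applied to a full season of tracking data.
\begin{verbatim}
# EPVA as the per-game sum of (EPV at end of touch) minus
# (league-average EPV at start of touch). All values are invented.
touches = [  # (v_end, v_start_replacement)
    (1.08, 0.98), (0.94, 1.01), (1.21, 1.02), (1.05, 0.99),
]
n_games = 2

epva = sum(v_end - v_start for v_end, v_start in touches) / n_games
print(round(epva, 3))
\end{verbatim}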
Unlike RAPM-based methods, which only consider changes in the score and the identities of the players on the court, EPVA leverages the high resolution optical data to characterize the precise value of specific decisions made by the ball carrier throughout the possession. Although this approach is powerful, it still has some crucial limitations for evaluating overall player value. The first is that EPVA measures the value added by a player only when that player touches the ball. As such, specialists, like three point shooting experts, tend to have high EPVA because they most often receive the ball in situations in which they are uniquely suited to add value. However, many players around the NBA add significant value by setting screens or making cuts which draw defenders away from the ball. These actions are hard to measure and thus not included in the original EPVA metric proposed by \citet{cervone2014multiresolution}. In future work, some of these effects could be captured by identifying appropriate ways to measure a player's ``gravity''~\citep{visualizegravity} or through new tools which classify important off-ball actions. Finally, EPVA only represents contributions on the offensive side of the ball and ignores a player's defensive prowess; as noted in Section~\ref{defensive ability}, a defensive version of EPVA would also be valuable.
In contrast to EPVA, the effects of off-ball actions and defensive ability are implicitly incorporated into RAPM-based metrics. As such, RAPM remains one of the key metrics for quantifying overall player value. EPVA, on the other hand, may provide better contextual understanding of how players add value, but a less comprehensive summary of each player's total contribution. A more rigorous comparison between RAPM, EPVA and other metrics for overall ability would be worthwhile.
\subsection{Production curves}
\label{sec:production_curves}
A major component of quantifying player ability involves understanding how ability evolves over a player's career. To predict and describe player ability over time, several methods have been proposed for inferring the so-called ``production curve'' for a player\footnote{Production curves are also referred to as ``player aging curves'' in the literature, although we prefer ``production curves'' because it does not imply that changes in these metrics over time are driven exclusively by age-related factors.}. The goal of a production curve analysis is to provide predictions about the future trajectory of a current player's ability, as well as to characterize similarities in production trajectories across players. These two goals are intimately related, as the ability to forecast production is driven by assumptions about historical production from players with similar styles and abilities.
Commonly, in a production curve analysis, a continuous measurement of aggregate skill (e.g. RAPM or VORP), denoted $Y_{pt}$, is considered for player $p$ at time $t$:
$$Y_{pt} = f_p(t) + \epsilon_{pt}$$
\noindent where $f_p$ describes player $p$'s ability as a function of time, $t$, and $\epsilon_{pt}$ reflects irreducible errors which are uncorrelated over time, e.g. due to unobserved factors like minor injury, illness and chance variation. Athletes not only exhibit different career trajectories, but their careers occur at different ages, can be interrupted by injuries, and include different amounts of playing time. As such, the statistical challenge in production curve analysis is to infer smooth trajectories $f_p(t)$ from sparse irregular observations of $Y_{pt}$ across players \citep{wakim2014functional}.
There are two common approaches to modeling production curves: 1) Bayesian hierarchical modeling and 2) methods based on functional data analysis and clustering. In the Bayesian hierarchical paradigm, \citet{berry1999bridging} developed a flexible hierarchical aging model to compare player abilities across different eras in three sports: hockey, golf, and baseball. Although not explored in their paper, their framework can be applied to basketball to account for player-specific development and age-related declines in performance. \citet{page2013effect} apply a similar hierarchical method based on Gaussian process regression to infer how production evolves across different basketball positions. They find that production varies across player type and show that point guards (i.e. agile ball-handlers) generally spend a longer fraction of their career improving than other player types. \citet{vaci2019large} also use Bayesian hierarchical modeling, with distinct parametric curves describing trajectories before and after peak performance. They assume pre-peak performance reflects development whereas post-peak performance is driven by aging. Their findings suggest that athletes who develop more quickly also exhibit slower age-related declines, an observation which does not appear to depend on position.
In contrast to hierarchical Bayesian models, \citet{wakim2014functional} discuss how the tools of functional data analysis can be used to model production curves. In particular, functional principal components metrics can be used in an unsupervised fashion to identify clusters of players with similar trajectories. Others have explicitly incorporated notions of player similarity into functional models of production. In this framework, the production curve for any player $p$ is then expressed as a linear combination of the production curves from a set of similar players: $f_p(t) \approx \sum_{k \neq p} \alpha_{pk} f_k(t)$. For example, in their RAPTOR player rating system, \url{fivethirtyeight.com} uses a nearest neighbor algorithm to characterize similarity between players~\citep{natesilver538_2015, natesilver538_2019}. The production curve for each player is an average of historical production curves from a distinct set of the most similar athletes. A related approach, proposed by \citet{vinue2019forecasting}, employs the method of archetypoids \citep{vinue2015archetypoids}. Loosely speaking, the archetypoids consist of a small set of players, $\mathcal{A}$, that represent the vertices in the convex hull of production curves. Different from the RAPTOR approach, each player's production curve is represented as a convex combination of curves from the \emph{same set} of archetypes, that is, $\alpha_{pk} = 0 \; \forall \ k \notin \mathcal{A}$.
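A minimal sketch of this similarity-weighted idea is given below: a player's projected curve is a convex combination of smoothed curves from comparable players. The curves, ages, and weights are synthetic placeholders and do not correspond to the actual estimation procedures behind RAPTOR or the archetypoid method.
\begin{verbatim}
# Production curve as a weighted average of comparable players' curves.
import numpy as np

ages = np.arange(19, 36)

# Hypothetical smoothed curves f_k(t) for three comparable players.
comparables = {
    "player_A": 2.0 + 0.30 * (ages - 19) - 0.020 * (ages - 19) ** 2,
    "player_B": 1.0 + 0.45 * (ages - 19) - 0.025 * (ages - 19) ** 2,
    "player_C": 3.0 + 0.10 * (ages - 19) - 0.010 * (ages - 19) ** 2,
}

# alpha_pk: similarity weights for the player of interest (sum to one).
alpha = {"player_A": 0.5, "player_B": 0.3, "player_C": 0.2}

f_p = sum(alpha[k] * curve for k, curve in comparables.items())
print(np.round(f_p[:5], 2))  # projected production at ages 19-23
\end{verbatim}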
One often unaddressed challenge is that athlete playing time varies across games and seasons, which means sampling variability is non-constant. Whenever possible, this heteroskedasticity in the observed outcomes should be incorporated into the inference, either by appropriately controlling for minutes played or by using other relevant notions of exposure, like possessions or attempts.
Finally, although the precise goals of these production curve analyses differ, most current analyses focus on aggregate skill. More work is needed to capture what latent player attributes drive these observed changes in aggregate production over time. Models which jointly infer how distinct measures of athleticism and skill co-evolve, or models which account for changes in team quality and adjust for injury, could lead to further insight about player ability, development, and aging (see Figure \ref{fig:player_variance}). In the next sections we largely set aside how performance evolves over time and focus instead on quantifying specific aspects of basketball ability, including shot making and defense.
\subsection{Shot modeling}
\label{sec:shooting}
Arguably the most salient aspect of player performance is the ability to score. There are two key factors which drive scoring ability: the ability to selectively identify the highest value scoring options (shot selection) and the ability to make a shot, conditioned on an attempt (shot efficiency). A player's shot attempts and his or her ability to make them are typically related. In \emph{Basketball on Paper}, Dean Oliver proposes the notion of a ``skill curve,'' which roughly reflects the inverse relationship between a player's shot volume and shot efficiency \citep{oliver2004basketball, skinner2010price, goldman2011allocative}. Goldsberry and others gain further insight into shooting behavior by visualizing how both player shot selection and efficiency vary spatially with a so-called ``shot chart.'' (See \citet{goldsberry2012courtvision} and \citet{goldsberry2019sprawlball} for examples.) Below, we discuss statistical models for inferring how both shot selection and shot efficiency vary across players, over space, and in defensive contexts.
\subsubsection{Shot efficiency}
\label{sec:shot_efficiency}
Raw FG\% is usually a poor measure for the shooting ability of an athlete because chance variability can obscure true differences between players. This is especially true when conditioning on additional contextual information like shot location or shot type, where sample sizes are especially small. For example, \citet{franks2016meta} show that the majority of observed differences in 3PT\% are due to sampling variability rather than true differences in ability, and thus raw 3PT\% is a poor metric for discriminating between players. They demonstrate how these issues can be mitigated by using hierarchical models which shrink empirical estimates toward more reasonable prior means. These shrunken estimates are both more discriminative and more stable than the raw percentages.
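The flavor of this shrinkage can be conveyed with a simple empirical-Bayes sketch, in which each player's 3PT\% is pulled toward a league-wide Beta prior; the counts and prior parameters below are assumptions for illustration, and the hierarchical models cited above are considerably richer.
\begin{verbatim}
# Beta-binomial shrinkage of 3PT% toward a league-wide prior mean.
import numpy as np

makes = np.array([10, 45, 150])
attempts = np.array([20, 110, 400])

a, b = 36.0, 64.0  # assumed Beta prior with mean 0.36

raw = makes / attempts
shrunken = (makes + a) / (attempts + a + b)  # posterior means

for r, s, n in zip(raw, shrunken, attempts):
    print(f"attempts={n:4d}  raw={r:.3f}  shrunken={s:.3f}")
\end{verbatim}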
With the emergence of tracking data, hierarchical models have been developed which target increasingly context-specific estimands. \citet{franks2015characterizing} and \citet{cervone2014multiresolution} propose similar hierarchical logistic regression models for estimating the probability of making a shot given the shooter identity, defender distance, and shot location. In their models, they posit the logistic regression model
\begin{equation}
E[Y_{ip} \mid \ell_{i}, X_{i1}, \ldots, X_{iJ}] = \textrm{logit}^{-1} \big( \alpha_{\ell_i,p} + \sum_{j=1}^J \beta_{j} X_{ij} \big)
\end{equation} where $Y_{ip}$ is the outcome of the $i$th shot by player $p$ given $J$ covariates $X_{ij}$ (e.g. defender distance) and $\alpha_{\ell_i, p}$ is a spatial random effect describing the baseline shot-making ability of player $p$ in location $\ell_i$. As shown in Figure \ref{fig:simpsons}, accounting for spatial context is crucial for understanding defensive impact on shot making.
Given high resolution data, more complex hierarchical models which capture similarities across players and space are needed to reduce the variance of resulting estimators. Franks et al. propose a conditional autoregressive (CAR) prior distribution for $\alpha_{\ell_i,p}$ to describe similarity in shot efficiencies between players. The CAR prior is simply a multivariate normal prior distribution over player coefficients with a structured covariance matrix. The prior covariance matrix is structured to shrink the coefficients of players with low attempts in a given region toward the FG\%s of players with similar styles and skills. The covariance is constructed from a nearest-neighbor similarity network on players with similar shooting preferences. These prior distributions improve out-of-sample predictions for shot outcomes, especially for players with fewer attempts. To model the spatial random effects, they represent a smoothed spatial field as a linear combination of functional bases following a matrix factorization approach proposed by \citet{miller2013icml} and discussed in more detail in Section \ref{sec:shot_selection}.
More recently, models which incorporate the full 3-dimensional trajectories of the ball have been proposed to further improve estimates of shot ability. Data from SportVU, Second Spectrum, NOAH, or RSPCT include the location of the ball in space as it approaches the hoop, including left/right accuracy and the depth of the ball once it enters the hoop.~\citet{marty2017data} and~\citet{marty2018high} use ball tracking data from over 20 million attempts taken by athletes ranging from high school to the NBA. From their analyses, \citet{marty2018high} and \citet{daly2019rao} show that the optimal entry location is about 2 inches beyond the center of the basket, at an entry angle of about $45^{\circ}$.
Importantly, this trajectory information can be used to improve estimates of shooter ability from a limited number of shots. \citet{daly2019rao} use trajectory data and a technique known as Rao-Blackwellization to generate lower error estimates of shooting skill. In this context, the Rao-Blackwell theorem implies that one can achieve lower variance estimates of the sample frequency of made shots by conditioning on sufficient statistics; here, the probability of making the shot. Instead of taking the field goal percentage as $\hat \theta_{FG} = \sum Y_{i} / n$, they infer the percentage as $\hat \theta_{FG\text{-}RB} = \sum p_{i} / n$, where $p_i = E[Y_i \mid X]$ is the inferred probability that shot $i$ goes in, as inferred from trajectory data $X$. The shot outcome is not a deterministic function of the observed trajectory information due to the limited precision of spatial data and the effect of unmeasured factors, like ball spin. They estimate the make probabilities, $p_i$, from the ball entry location and angle using a logistic regression.
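A simplified version of this estimator is sketched below: fitted make probabilities from a logistic regression on simulated entry-location and entry-angle features are averaged in place of raw makes. The simulated features, coefficients, and in-sample fitting are illustrative assumptions rather than the exact procedure of \citet{daly2019rao}.
\begin{verbatim}
# Rao-Blackwellized FG%: average fitted make probabilities p_i
# instead of observed makes/misses. Data are simulated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.normal(0.0, 2.0, n),   # left/right entry error (inches)
    rng.normal(45.0, 5.0, n),  # entry angle (degrees)
])
true_logit = 1.2 - 0.4 * np.abs(X[:, 0]) + 0.05 * (X[:, 1] - 45.0)
Y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

p_hat = LogisticRegression().fit(X, Y).predict_proba(X)[:, 1]

theta_raw = Y.mean()     # standard make percentage
theta_rb = p_hat.mean()  # Rao-Blackwellized estimate
print(round(theta_raw, 3), round(theta_rb, 3))
\end{verbatim}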
~\citet{daly2019rao} demonstrate that Rao-Blackwellized estimates are better at predicting end-of-season three point percentages from limited data than empirical make percentages. They also integrate the RB approach into a hierarchical model to achieve further variance reduction. In a follow-up paper, they focus on the effect that defenders have on shot trajectories~\citep{bornn2019using}. Unsurprisingly, they demonstrate an increase in the variance of shot depth, left-right location, and entry angle for highly contested shots, but they also show that players are typically biased toward short-arming when heavily defended.
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figs/court_colored.pdf}
\end{subfigure}
~~
\begin{subfigure}[b]{0.55 \textwidth}
\includegraphics[width=\textwidth]{figs/bball_simpsons.pdf}
\end{subfigure}
\caption{Left) The five highest-volume shot regions, inferred using the NMF method proposed by \citet{miller2013icml}. Right) Fitted values in a logistic regression of shot outcome given defender distance and NMF shot region from over 115,000 shot attempts in the 2014-2015 NBA season \citep{franks2015characterizing, simpsons_personal}. The make probability increases approximately linearly with increasing defender distance in all shot locations. The number of observed shots at each binned defender distance is indicated by the point size. Remarkably, when ignoring shot region, defender distance has a slightly \emph{negative} coefficient, indicating that the probability of making a shot appears to increase slightly as the defender gets closer (gray line). This effect, which occurs because defender distance is also dependent on shot region, is an example of a ``reversal paradox''~\citep{tu2008simpson} and highlights the importance of accounting for spatial context in basketball. It also demonstrates the danger of making causal interpretations without carefully considering the role of confounding variables.}
\label{fig:simpsons}
\end{figure}
\subsubsection{Shot selection}
\label{sec:shot_selection}
Where and how a player decides to shoot is also important for determining scoring ability. Player shot selection is driven by a variety of factors including individual ability, teammate ability, and strategy~\citep{goldman2013live}. For example, \citet{alferink2009generality} study the psychology of shot selection and how the positive ``reward'' of shot making affects the frequency of attempted shot types. The log ratio of two-point to three-point shot attempts is approximately linear in the log ratio of the player's success rates on those shot types, a relationship known to psychologists as the generalized matching law~\citep{poling2011matching}. \citet{neiman2011reinforcement} study this phenomenon from a reinforcement learning perspective and demonstrate that a made three-point shot increases the probability of a subsequent three-point attempt. Shot selection is also driven by situational factors, strategy, and the ability of a player's teammates. \citet{zuccolotto2018big} use nonparametric regression to infer how shot selection varies as a function of the shot clock and score differential, whereas \citet{goldsberry2019sprawlball} discusses the broader strategic shift toward high-volume three-point shooting in the NBA.
The availability of high-resolution spatial data has spurred the creation of new methods to describe shot selection.~\citet{miller2013icml} use a non-negative matrix factorization (NMF) of player-specific shot patterns across all players in the NBA to derive a low dimensional representation of a pre-specified number of approximately disjoint shot regions. These identified regions correspond to interpretable shot locations, including three-point shot types and mid-range shots, and can even reflect left/right bias due to handedness. See Figure \ref{fig:simpsons} for the results of a five-factor NMF decomposition. With the inferred representation, each player's shooting preferences can be approximated as a linear combination of the canonical shot ``bases.'' The player-specific coefficients from the NMF decomposition can be used as a lower dimensional characterization of the shooting style of that player \citep{bornn2017studying}.
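A stripped-down version of this decomposition can be run with an off-the-shelf NMF implementation, as in the Python sketch below; the simulated player-by-court-cell count matrix and the choice of five bases are assumptions, and the approach of \citet{miller2013icml} operates on smoothed shot intensity surfaces rather than raw counts.
\begin{verbatim}
# NMF of a (players x court cells) shot matrix into spatial bases
# and player loadings. The matrix here is simulated.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
n_players, n_cells, n_bases = 100, 300, 5

true_bases = rng.gamma(1.0, 1.0, (n_bases, n_cells))
loadings = rng.gamma(0.5, 1.0, (n_players, n_bases))
counts = rng.poisson(loadings @ true_bases)

model = NMF(n_components=n_bases, init="nndsvda", max_iter=500,
            random_state=0)
W = model.fit_transform(counts)  # player loadings on each shot "basis"
H = model.components_            # spatial bases over court cells
print(W.shape, H.shape)
\end{verbatim}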
While the NMF approach can generate useful summaries of player shooting styles, it incorporates neither contextual information, like defender distance, nor hierarchical structure to reduce the variance of inferred shot selection estimates. As such, hierarchical spatial models for shot data, which allow for spatially varying effects of covariates, are warranted \citep{reich2006spatial, franks2015characterizing}. \citet{franks2015characterizing} use a hierarchical multinomial logistic regression to predict who will attempt a shot and where the attempt will occur given defensive matchup information. They consider a 26-outcome multinomial model, where the outcomes correspond to shot attempts by one of the five offensive players in any of five shot regions, with regions determined \textit{a priori} using the NMF decomposition described above. The last outcome corresponds to a possession that does not lead to a shot attempt. Let $\mathcal{S}(p, b)$ be an indicator for a shot by player $p$ in region $b$. The shot attempt probabilities are modeled as
\begin{equation}
\label{eqn:shot_sel}
E[\mathcal{S}(p, b) \mid F] = \frac{\exp \left(\alpha_{p b}+\sum_{j=1}^{5} F(j, p) \beta_{j b}\right)}{1+\sum_{\tilde p,\tilde b} \exp \left(\alpha_{\tilde p \tilde b}+\sum_{j=1}^{5} F(j, \tilde p) \beta_{j \tilde b}\right)}
\end{equation}
\noindent where $\alpha_{pb}$ is the propensity of player $p$ to shoot from region $b$, $F(j, p)$ is the fraction of time in the possession that player $p$ was guarded by defender $j$, and $\beta_{jb}$ captures the effect of defender $j$ on shot attempts from region $b$ (see Section \ref{defensive ability}). Shrinkage priors based on player similarity are again used for the coefficients.
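A drastically simplified sketch of a multinomial attempt model in the spirit of Equation \ref{eqn:shot_sel} is shown below, with simulated defender-matchup fractions as the only covariates and a generic softmax (multinomial logistic) fit in place of the hierarchical model with shrinkage priors described above.
\begin{verbatim}
# Multinomial model of (shooter, region) outcomes given matchup data.
# All data are simulated; 26 outcomes mimic 5 shooters x 5 regions + no shot.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_poss, n_outcomes, n_defenders = 2000, 26, 5

F = rng.dirichlet(np.ones(n_defenders), size=n_poss)  # guarding fractions
true_W = rng.normal(0, 1.0, (n_outcomes, n_defenders))
y = (F @ true_W.T + rng.gumbel(size=(n_poss, n_outcomes))).argmax(axis=1)

# With the default lbfgs solver this fits a softmax (multinomial) model.
model = LogisticRegression(max_iter=1000).fit(F, y)
print(model.predict_proba(F[:1]).round(3))  # attempt probabilities, possession 1
\end{verbatim}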
Beyond simply describing the shooting style of a player, we can also assess the degree to which players attempt high value shots. \citet{chang2014quantifying} define effective shot quality (ESQ) in terms of the league-average expected value of a shot given the shot location and defender distance.~\citet{shortridge2014creating} similarly characterize how expected points per shot (EPPS) varies spatially. These metrics are useful for determining whether a player is taking shots that are high or low value relative to some baseline, i.e., the league average player.
\citet{cervonepointwise} and \citet{cervone2014multiresolution} use the EPV framework (Section \ref{sec:epv}) to develop a more sophisticated measure of shot quality termed ``shot satisfaction''. Shot satisfaction incorporates both offensive and defensive contexts, including shooter identity and all player locations and abilities, at the moment of the shot. The ``satisfaction'' of a shot is defined as the conditional expectation of the possession value at the moment the shot is taken, $v_{it}$, minus the expected value of the possession conditional on a counterfactual in which the player did not shoot, but passed or dribbled instead. The shot satisfaction for player $p$ is then defined as the average satisfaction, averaging over all shots attempted by the player:
$$\textrm{Satis}(p)=\frac{1}{\left|\mathcal{T}_{\textrm{shot}}^{p}\right|} \sum_{(i, t) \in \mathcal{T}_{\textrm{shot}}^{p}} \left(\nu_{it}-E\left[Z_i \mid X_{it}, C_{t} \textrm{ is a non-shooting state} \right]\right)$$
\noindent where $\mathcal{T}_{\textrm{shot}}^{p}$ is the set of all possessions and times at which player $p$ took a shot, $Z_i$ is the point value of possession $i$, $X_{it}$ is the state of the game at time $t$ (player locations, shot clock, etc.), and $C_t$ is a non-shooting macro-state. $\nu_{it}$ is the inferred EPV of possession $i$ at time $t$, as defined in Equation \ref{eqn:epv}. Satisfaction is low if the shooter has poor shooting ability, takes difficult shots, or has teammates who are better scorers. As such, unlike other metrics, shot satisfaction measures an individual's decision making and implicitly accounts for the shooting ability of both the shooter \emph{and} their teammates. However, since shot satisfaction only averages differential value over the set $\mathcal{T}_{\textrm{shot}}^{p}$, it does not account for situations in which the player passes up a high-value shot. Additionally, although shot satisfaction is aggregated over all shots, exploring spatial variability in shot satisfaction would be an interesting extension.
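Given possession-level EPV output, shot satisfaction reduces to a simple average of differences. The sketch below assumes hypothetical arrays holding $\nu_{it}$ at each of a player's shot attempts and the corresponding counterfactual non-shooting values; producing those inputs requires the full EPV machinery of \citet{cervonepointwise}.

\begin{verbatim}
import numpy as np

# Hypothetical EPV model output for one player's shot attempts:
# epv_at_shot[k]        ~ nu_{it} at the moment of the k-th attempt
# epv_counterfactual[k] ~ E[Z_i | X_{it}, C_t is a non-shooting state]
epv_at_shot = np.array([1.12, 0.95, 1.30, 0.88])
epv_counterfactual = np.array([1.05, 1.01, 1.10, 0.96])

# Average gap between shooting and the non-shooting counterfactual,
# taken only over possessions in which the player actually shot.
satisfaction = np.mean(epv_at_shot - epv_counterfactual)
\end{verbatim}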
\subsubsection{The hot hand}
\label{hothand}
One of the most well-known and debated questions in basketball analytics concerns the existence of the so-called ``hot hand''. At a high level, a player is said to have a ``hot hand'' if the conditional probability of making a shot increases given a preceding sequence of makes. More formally, given $k$ previous makes, the hot hand effect is negligible if $E[Y_{p,t}|Y_{p, t-1}=1, ..., Y_{p, t-k}=1, X_t] \approx E[Y_{p,t}| X_t]$, where $Y_{p, t}$ is the outcome of the $t$th shot by player $p$ and $X_t$ represents contextual information at time $t$ (e.g. shot type or defender distance). In their seminal paper,~\citet{gilovich1985hot} argued that the hot hand effect is negligible; instead, they claim that streaks of made shots arising by chance are misinterpreted by fans and players \textit{ex post facto} as reflecting a short-term improvement in ability. Extensive research following the original paper has found modest, but sometimes conflicting, evidence for the hot hand~\citep[e.g.][]{bar2006twenty, yaari2011hot, hothand93online}.
Remarkably, 30 years after the original paper,~\citet{miller2015surprised} demonstrated a bias in the estimators used in the original and most subsequent hot hand analyses. The bias, which attenuates estimates of the hot hand effect, arises from the way in which shot sequences are selected and is closely related to the infamous Monty Hall problem~\citep{sciam, miller2017bridge}. After correcting for this bias, they estimate an 11\% increase in the probability of making a three-point shot given a streak of previous makes, a substantially larger hot hand effect than had previously been reported.
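The selection bias is easy to reproduce by simulation. In the sketch below (a simplified illustration, not the estimator or correction of \citet{miller2015surprised}), each simulated sequence consists of independent 50/50 ``shots,'' yet the average within-sequence proportion of makes immediately following three consecutive makes falls below 0.5.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_seq, seq_len, k = 10_000, 100, 3      # sequences of 100 shots, streaks of 3
estimates = []
for _ in range(n_seq):
    shots = rng.integers(0, 2, size=seq_len)    # i.i.d. makes with p = 0.5
    # Outcomes of shots that immediately follow k consecutive makes.
    eligible = [shots[t] for t in range(k, seq_len) if shots[t - k:t].all()]
    if eligible:                                # conditioning causes the bias
        estimates.append(np.mean(eligible))

# No hot hand exists by construction, yet the average within-sequence
# estimate is below 0.5, illustrating the attenuation bias.
print(np.mean(estimates))
\end{verbatim}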
Relatedly,~\citet{stone2012measurement} describes the effect of a form of ``measurement error'' on hot hand estimates, arguing that it is more appropriate to condition on the \emph{probabilities} of previous makes, $E\left[Y_{p,t}|E[Y_{p, t-1}], ..., E[Y_{p, t-k}], X_t\right]$, rather than on the observed makes and misses themselves -- a subtle but important distinction. From this perspective, the work of \citet{marty2018high} and \citet{daly2019rao} on using ball tracking data to improve estimates of shooting ability could provide a fruitful new lens on the hot hand phenomenon, by exploring autocorrelation in shot trajectories rather than in makes and misses. To our knowledge this has not yet been studied. For a more thorough review and discussion of the extensive work on statistical modeling of streak shooting, see \citet{lackritz2017probability}.
\subsection{Defensive ability}
\label{defensive ability}
Individual defensive ability is extremely difficult to quantify because 1) defense inherently involves team coordination and 2) there are relatively few box score statistics related to defense. This difficulty recently led Jackie MacMullan, a prominent NBA journalist, to proclaim that ``measuring defense effectively remains the last great frontier in analytics''~\citep{espnmac}. Early attempts at quantifying aggregate defensive impact include Defensive Rating (DRtg), Defensive Box Plus/Minus (DBPM), and Defensive Win Shares, each of which can be computed entirely from box score statistics \citep{oliver2004basketball, bbref_ratings}. DRtg is a metric meant to quantify the ``points allowed'' by an individual while on the court (per 100 possessions). Defensive Win Shares is a measure of the wins added by the player due to defensive play, and is derived from DRtg. However, all of these measures are particularly sensitive to teammate performance, and thus are not reliable measures of individual defensive ability.
Recent analyses have targeted more specific descriptions of defensive ability by leveraging tracking data, but still face some of the same difficulties. Understanding defense requires as much an understanding of what \emph{does not} happen as of what does happen. What shots were not attempted, and why? Who \emph{did not} shoot, and who was guarding them? \citet{goldsberry2013dwight} were among the first to use spatial data to characterize the absence of shot outcomes in different contexts. In one notable example from their work, they demonstrated that when Dwight Howard was on the court, the number of opponent shot attempts in the paint dropped by 10\% (``The Dwight Effect'').
More refined characterizations of defensive ability require some understanding of the defender's goals. \citet{franks2015characterizing} take a limited view of defenders' intent by focusing on inferring whom each defender is guarding. Using tracking data, they developed an unsupervised algorithm, i.e., one requiring no ground truth matchup data, to identify likely defensive matchups at each moment of a possession. They posited that a defender guarding offensive player $k$ at time $t$ is normally distributed about the point $\mu_{t k}=\gamma_{o} O_{t k}+\gamma_{b} B_{t}+\gamma_{h} H$, where $O_{tk}$ is the location of offensive player $k$, $B_t$ is the location of the ball, and $H$ is the location of the hoop. They use a hidden Markov model to infer the weights $\mathbf{\gamma}$ and subsequently the evolution of defensive matchups over time. They find that, on average, defenders position themselves about 2/3 of the way along the segment connecting the hoop to the offensive player they are guarding, while shading about 10\% of the way toward the ball.~\citet{keshri2019automatic} extend this model by allowing $\mathbf{\gamma}$ to depend on player identities and court locations, yielding a more accurate characterization of defensive play that also accounts for the ``gravity'' of dominant offensive players.
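As a concrete illustration of the model's mean structure, the sketch below computes the expected defender location for one offensive player; the court coordinates are made up, the weights are illustrative values consistent with the two-thirds/ten-percent summary above (not the fitted values from the paper), and the full model wraps this mean in a Gaussian emission within a hidden Markov model over matchups.

\begin{verbatim}
import numpy as np

# Illustrative weights (roughly consistent with the reported summary):
# about two-thirds of the way from the hoop to the attacker, shaded ~10%
# toward the ball.
gamma_o, gamma_b, gamma_h = 0.62, 0.11, 0.27

O = np.array([22.0, 10.0])   # offensive player location (feet, hypothetical)
B = np.array([30.0, 25.0])   # ball location (hypothetical)
H = np.array([5.25, 25.0])   # hoop location

# mu_{tk} = gamma_o * O_{tk} + gamma_b * B_t + gamma_h * H
mu = gamma_o * O + gamma_b * B + gamma_h * H
\end{verbatim}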
Defensive matchup data, as derived from these algorithms, is essential for characterizing the effectiveness of individual defensive play. For example, \citet{franks2015characterizing} use matchup data to describe the ability of individual defenders both to suppress shot attempts and to disrupt attempted shots at different locations. To do so, they include defender identities and defender distance in the shot outcome and shot attempt models described in Sections \ref{sec:shot_efficiency} and \ref{sec:shot_selection}. The inferred coefficients describe the ability of a defensive player either to reduce the probability that a shot is made, given that it is attempted, or to reduce the likelihood that the shot is attempted in the first place.
These coefficients can be summarized in several ways. For example,~\citet{franks2015characterizing} introduce the defensive analogue of the shot chart by visualizing where on the court defenders reduce shot attempts and affect shot efficiency. They found that in the 2013-2014 season, Kawhi Leonard reduced the percentage of opponent three-point attempts more than any other perimeter defender; Roy Hibbert, a dominant interior defender that year, faced more shots in the paint than any other player, but also did the most to reduce his opponents' shooting efficiency. In~\citet{franks2015counterpoints}, matchup information is used to derive a notion of ``points against'' -- the number of points scored by offensive players when guarded by a specific defender. Such a metric can be useful for identifying the weak links in a team defense, although it is very sensitive to the skill of the offensive players being guarded.
Ultimately, the best matchup defenders are those who encourage the offensive player to make a low-value decision. The EPVA metric discussed in Section \ref{sec:general_skill} characterizes the value of offensive decisions by the ball handler, but a similar defender-centric metric could be derived by focusing on changes in EPV when ball handlers are guarded by a specific defender. Such a metric could be a fruitful direction for future research and provide insight into defenders who affect the game in unique ways. Finally, we note that a truly comprehensive understanding of defensive ability must go beyond matchup defense and incorporate aspects of defensive team strategy, including zone defense. Without direct information from teams and coaches, this is an immensely challenging task. Perhaps some of the methods for characterizing team play discussed in Section \ref{sec:team} could be useful in this regard. An approach which incorporates more domain expertise about team defensive strategy could also improve upon existing methods.
\section{Discussion}
Basketball is a game with complex spatio-temporal dynamics and strategies. With the availability of new sources of data, increasing computational capability, and methodological innovation, our ability to characterize these dynamics with statistical and machine learning models is improving. In line with these trends, we believe that basketball analytics will continue to move away from a focus on box-score based metrics and towards models for inferring (latent) aspects of team and player performance from rich spatio-temporal data. Structured hierarchical models which incorporate more prior knowledge about basketball and leverage correlations across time and space will continue to be an essential part of disentangling player, team, and chance variation. In addition, deep learning approaches for modeling spatio-temporal and image data will continue to develop into major tools for modeling tracking data.
However, we caution that more data and new methods do not automatically imply more insight. Figure \ref{fig:simpsons} depicts just one example of the ways in which erroneous conclusions may arise when not controlling for confounding factors related to space, time, strategy, and other relevant contextual information. In that example, we are able to control for the relevant spatial confounder, but in many other cases, the relevant confounders may not be observed. In particular, strategic and game-theoretic considerations are of immense importance, but are typically unobserved. As a related, simple example, when estimating field goal percentage as a function of defender distance, defenders may strategically give more space to the weakest shooters. Without accounting for this strategic behavior, defender distance would appear to be \emph{negatively} correlated with the probability of making the shot, even though more space presumably helps any given shooter.
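This pitfall is easy to make precise with a small, entirely synthetic simulation: in the sketch below, defenders give more space to weaker shooters, and although extra space helps every individual shooter, the unadjusted association between defender distance and makes is negative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
skill = rng.normal(0.0, 1.0, n)                    # latent shooting skill
# Strategic confounding: worse shooters are given more space.
defender_dist = 4.0 - 1.5 * skill + rng.normal(0.0, 1.0, n)
# Within shooter, more space genuinely increases the make probability.
p_make = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * skill + 0.1 * defender_dist)))
make = rng.binomial(1, p_make)

# Unadjusted association between distance and makes is negative ...
naive_slope = np.polyfit(defender_dist, make, 1)[0]
# ... while adjusting for the confounder (skill) recovers a positive slope.
X = np.column_stack([np.ones(n), defender_dist, skill])
adjusted_slope = np.linalg.lstsq(X, make, rcond=None)[0][1]
print(naive_slope, adjusted_slope)
\end{verbatim}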
As such, we believe that causal thinking will be an essential component of the future of basketball analytics, precisely because many of the most important questions in basketball are causal in nature. These questions involve a comparison between an observed outcome and a counterfactual outcome, or require reasoning about the effects of strategic intervention: ``What would have happened if the Houston Rockets had not adopted their three-point shooting strategy?'' or ``How many games would the Bucks have won in 2018 if Giannis Antetokounmpo were replaced with an `average' player?'' Metrics like Wins Above Replacement Player are ostensibly aimed at answering the latter question, but are not given an explicitly causal treatment. Tools from causal inference should also help us reason more soundly about questions of extrapolation, identifiability, uncertainty, and confounding, which are all ubiquitous in basketball. Based on our literature review, this need for causal thinking in sports remains largely unmet: we found few works that explicitly focus on causal and/or game-theoretic analyses, with the exception of a handful in basketball \citep{skinner2015optimal, sandholtz2018transition} and in sports more broadly \citep{lopez2016persuaded, yamlost, gauriot2018fooled}.
Finally, although new high-resolution data has enabled increasingly sophisticated methods to address previously unanswerable questions, many of the richest data sources are not openly available. Progress in statistical and machine learning methods for sports is hindered by the lack of publicly available data. We hope that data providers will consider publicly sharing some historical spatio-temporal tracking data in the near future. We also note that there is potential for enriching partnerships between data providers, professional leagues, and the analytics community. Existing contests hosted by professional leagues, such as the National Football League's ``Big Data Bowl''~\citep[open to all,][]{nfl_football_operations}, and the NBA Hackathon~\citep[by application only,][]{nbahack}, have been very popular. Additional hackathons and open data challenges in basketball would certainly be well-received.
\section*{DISCLOSURE}
Alexander Franks is a consultant for a basketball team in the National Basketball Association. This relationship did not affect the content of this review. Zachary Terner is not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
\section*{ACKNOWLEDGMENTS}
The authors thank Luke Bornn, Daniel Cervone, Alexander D'Amour, Michael Lopez, Andrew Miller, Nathan Sandholtz, Hal Stern, and an anonymous reviewer for their useful comments, feedback, and discussions.
\newpage
\section*{SUPPLEMENTARY MATERIAL}
\input{supplement_content.tex}
\section*{LITERATURE\ CITED}
\renewcommand{\section}[2]{}%
\bibliographystyle{ar-style1.bst}
\section*{Response to comments}
We thank the reviewer for their thoughtful and helpful comments. As per the reviewer's suggestions, our major revisions were to: 1) reduce the length of the sections suggested by the reviewer, 2) clarify the discussion on EPV and the comparison to APM, and 3) update some of the text in the production curves sections. Our detailed responses are below.
\begin{itemize}
\item The article provides a comprehensive review of the fast-changing field of basketball analytics. It is very well written and addresses all of the main topics. I have identified a few areas where I think additional discussion (or perhaps rewriting) would be helpful. In addition, the paper as a whole is longer than we typically publish by 5-10\%. We would encourage the authors to examine with an eye towards areas that can be cut. Some suggestions for editing down include Sections 2.1, 2.2.1, 2.2.2, and 3.3.3.
\emph{We cut close to 500 words (approx 5\%) by reducing the text in the suggested sections. However, after editing sections related to the reviewer comments, this length reduction was partially offset. According to the LaTeX editor we are using, there are now 8800 words in the document, less than the 9500-word limit listed by the production editor. If the reviewer still finds the text too long, we will likely cut the hot hand discussion, limiting it to a few sentences in the section on shot efficiency. This would further reduce the length by about 350 words. Our preference, however, is to keep it.}
\item Vocabulary – I understand the authors decision to not spend much time describing basketball in detail. I think there would be value on occasion though in spelling out some of the differences (e.g., when referring to types of players as 'bigs' or 'wings'). I suppose another option is to use more generic terms like player type.
\emph{We have made an attempt to reduce basketball jargon and clarify where necessary.}
\item Page 8, para 4 – "minimizing nuisance variability in characterizing player performance"
\emph{We have corrected this.}
\item Page 8, para 4, line 4 – Should this be "low-sample size settings"?
\emph{We have corrected this.}
\item Page 10, line 7 – Should refer to the "ridge regression model" or "ridge regression framework"
\emph{We now refer to the "ridge regression framework"}
\item Page 11 – The EPV discussion was not clear to me. To start it is not clear what information is in $C_t$. It seems that the information in $C_t$ depends on which of the type we are in. Is that correct? So $C_t$ is a triple if we are in $C_{poss}$ but it is only a singleton if it is in $C_{trans}$ or $C_{end}?$ Then in the formula that defines EPV there is some confusion about $\delta_t$; is $\delta_t$ the time of the next action ends (after time $t$)? If so, that should be made clearer.
\emph{We agree this discussion may not have been totally clear. There are many details and we have worked hard to revise and simplify the discussion as much as possible. It is probably better to think about $C_t$ as a discrete state that is \emph{defined} by the full continuous data $X$, rather than one that ``carries information''. $C_{poss}$ is a state defined by the ball carrier and their location, whereas $C_{end}$ is defined only by the outcome that ends the possession (a made or missed shot or a turnover) and is associated with a value of 0, 2, or 3. The game is only in $C_{trans}$ states for short periods of time, e.g. when the ball is traveling in the air during a pass or shot attempt. $\delta_t$ is the start time of the next non-transition state. The micromodels are used to predict both $\delta_t$ and the identity of the next non-transition state (e.g. who possesses the ball next and where, or the outcome ending the possession) given the full-resolution tracking data. We have worked to clarify this in the text.}
\item Page 12 – The definition of EPVA is interesting. Is it obvious that this should be defined by dividing the total by the number of games? This weights games by the number of possessions. I'd say a bit more discussion would help.
\emph{This is a great point. Averaging over games implicitly rewards players who have high usage, even if their value added per touch might be low. This is mentioned in their original paper and we specifically highlight this choice in the text.}
\item Page 12 – End of Section 3.1 .... Is it possible to talk about how APM and EPV compare? Has anyone compared the results for a season? Seems like that would be very interesting.
\emph{We have added some discussion about EPVA and APM in the text, and also recommended that a more rigorous comparison would be worthwhile. In short, EPVA is limited by its focus on the ball handler: off-ball actions are not rewarded. These can be a significant source of value added and are implicitly included in APM. The original EPV authors also note that one-dimensional offensive players often accrue the most EPVA per touch, since they only handle the ball when they are uniquely suited to scoring.}
\item Page 13 – Production curves – The discussion of $\epsilon_{pt}$ is a bit confusing. What do you mean by random effects? This is often characterized as variation due to other variables (e.g., physical condition). Also, a reference to Berry et al. (JASA, 1999) might be helpful here. They did not consider basketball but have some nice material on different aging functions.
\emph{We have changed this to read: ``$\epsilon_{pt}$ reflects irreducible error which is independent of time, e.g. due to unobserved factors like injury, illness and chance variation.'' The suggestion to include Berry et al is a great one. This is closely related to the work of Page et al and Vaci et al and is now included in the text.}
\item Page 13 – middle – There is a formula here defining $f_p(t)$ as an average over similar players. I think it should be made clear that you are summing over $k$ here.
\emph{We have made this explicit.}
\item Page 16 – Figure 3 – Great figure. Seems odd though to begin your discussion with the counterintuitive aggregate result. I'd recommend describing the more intuitive results in the caption. Perhaps you want to pull the counterintuitive out of the figure and mention it only in text (perhaps only in conclusion?).
\emph{The reviewer's point is well taken. We have restructured the caption to start with the more intuitive results. However, we kept the less intuitive result in the caption, since we wanted to highlight that incorporating spatial context is essential for making the right conclusions. }
\end{itemize}
\end{document}
\section{Tags}
Data Tags: \\
\#spatial \#tracking \#college \#nba \#intl \#boxscore \#pbp (play-by-play) \#longitudinal \#timeseries \\
Goal Tags: \\
\#playereval \#defense \#lineup \#team \#projection \#behavioral \#strategy \#rest \#health \#injury \#winprob \#prediction \\
Miscellaneous: \\
\#clustering \#coaching \#management \#refs \#gametheory \#intro \#background \\
\section{Summaries}
\subsection{Introduction}
Kubatko et al. "A starting point for analyzing basketball statistics." \cite{kubatko}
\newline
Tags: \#intro \#background \#nba \#boxscore
\begin{enumerate}
\item Basics of the analysis of basketball. Provide a common starting point for future research in basketball
\item Define a general formulation for how to estimate the number of possessions.
\item Provide a common basis for future possession estimation
\item Also discuss other concepts and methods: per-minute statistics, pace, four factors, etc.
\item Contains other breakdowns, such as rebound rate, plays, etc.
\end{enumerate}
John Hollinger \newline
Pro Basketball Forecast, 2005-06 \newline
\cite{hollinger2004pro}
This is Hollinger's yearly published forecast of how NBA players will perform.
Can go in an intro section or in a forecast section.
Tags: \#nba \#forecast \#prediction
Handbook of Statistical Methods and Analyses in Sports \newline
Albert, Jim and Glickman, Mark E and Swartz, Tim B and Koning, Ruud H \newline
\cite{albert2017handbook} \newline
Tags: \#intro \#background
\begin{enumerate}
\item This handbook will provide both overviews of statistical methods in sports and in-depth
treatment of critical problems and challenges confronting statistical research in sports. The
material in the handbook will be organized by major sport (baseball, football, hockey,
basketball, and soccer) followed by a section on other sports and general statistical design
and analysis issues that are common to all sports. This handbook has the potential to
become the standard reference for obtaining the necessary background to conduct serious...
\end{enumerate}
Basketball on paper: rules and tools for performance analysis \newline
Dean Oliver \newline
\cite{oliver2004basketball}
Seems like a useful reference / historical reference for analyzing basketball performance?
tags: \#intro \#background
\subsection{Networks/Player performance}
Evaluating Basketball Player Performance via Statistical Network Modeling
Piette, Pham, Anand
\cite{piette2011evaluating}
Tags: \#playereval \#lineup \#nba \#team
\begin{enumerate}
\item Players are nodes, edges are if they played together in the same five-man unit
\item Adapting a network-based algorithm to estimate centrality scores
\end{enumerate}
Darryl Blackport, Nylon Calculus / fansided \newline
\cite{threeStabilize} \newline
Discusses
\begin{itemize}
\item Three-point shots are high-variance shots, leading television analysts to use the phrase "live by the three, die by the three", so when a career 32\% three point shooter suddenly shoots 38\% for a season, do we know that he has actually improved? How many attempts are enough to separate the signal from the noise? I decided to apply techniques used to calculate how long various baseball stats take to stabilize to see how long it takes for three-point shooting percentage to stabilize.
\item \#shooting \#playereval
\end{itemize}
Andrew Patton / Nylon Calculus / fansided \newline
\cite{visualizegravity} \newline
\begin{itemize}
\item Contains interesting plots to show the gravity of players at different locations on the court
\item Clever surface plots
\#tracking \#shooting \#visualization
\end{itemize}
The price of anarchy in basketball \\
Brian Skinner \\
\cite{skinner2010price} \\
\begin{enumerate}
\item Treats basketball like a network problem
\item each play represents a "pathway" through which the ball and players may move from origin (the inbounds pass) to goal (the basket). Effective field goal percentages from the resulting shot attempts can be used to characterize the efficiency of each pathway. Inspired by recent discussions of the "price of anarchy" in traffic networks, this paper makes a formal analogy between a basketball offense and a simplified traffic network.
\item The analysis suggests that there may be a significant difference between taking the highest-percentage shot each time down the court and playing the most efficient possible game. There may also be an analogue of Braess's Paradox in basketball, such that removing a key player from a team can result in the improvement of the team's offensive efficiency.
\item If such thinking [meaning that one should save their best plays for later in the game] is indeed already in the minds of coaches and players, then it should probably be in the minds of those who do quantitative analysis of sports as well. It is my hope that the introduction of ``price of anarchy" concepts will constitute a small step towards formalizing this kind of reasoning, and in bringing the analysis of sports closer in line with the playing and coaching of sports.
\end{enumerate}
Basketball teams as strategic networks \\
Fewell, Jennifer H and Armbruster, Dieter and Ingraham, John and Petersen, Alexander and Waters, James S \\
\cite{fewell2012basketball} \\
Added late
\subsection{Team performance}
Tags: \#team \#boxscore \#longitudinal
Efficiency in the National Basketball Association: A Stochastic Frontier Approach with Panel Data
Richard A. Hofler, and James E. Payneb
\cite{hofler2006efficiency}
\begin{enumerate}
\item Shooting, rebounding, stealing, blocking shots help teams' performance
\item Turnovers lower it
\item We also learn that better coaching and defensive prowess raise a team's win efficiency.
\item Uses a stochastic production frontier model
\end{enumerate}
Predicting team rankings in basketball: The questionable use of on-court performance statistics \\
Ziv, Gal and Lidor, Ronnie and Arnon, Michal \\
\cite{ziv2010predicting} \\
\begin{enumerate}
\item Statistics on on-court performances (e.g. free-throw shots, 2-point shots, defensive and offensive rebounds, and assists) of basketball players during actual games are typically used by basketball coaches and sport journalists not only to assess the game performance of individual players and the entire team, but also to predict future success (i.e. the final rankings of the team). The purpose of this correlational study was to examine the relationships between 12 basketball on-court performance variables and the final rankings of professional basketball teams, using information gathered from seven consecutive seasons and controlling for multicollinearity.
\item Data analyses revealed that (a) some on-court performance statistics can predict team rankings at the end of a season; (b) on-court performance statistics can be highly correlated with one another (e.g. 2-point shots and 3-point shots); and (c) condensing the correlated variables (e.g. all types of shots as one category) can lead to more stable regressional models. It is recommended that basketball coaches limit the use of individual on-court statistics for predicting the final rankings of their teams. The prediction process may be more reliable if on-court performance variables are grouped into a large category of variables.
\end{enumerate}
Relationship between Team Assists and Win-Loss Record in the National Basketball Association \\
Merrill J Melnick \\
\cite{melnick2001relationship} \\
\begin{enumerate}
\item Using research methodology for analysis of secondary data, statistical data for five National Basketball Association (NBA) seasons (1993–1994 to 1997–1998) were examined to test for a relationship between team assists (a behavioral measure of teamwork) and win-loss record. Rank-difference correlation indicated a significant relationship between the two variables, the coefficients ranging from .42 to .71. Team assist totals produced higher correlations with win-loss record than assist totals for the five players receiving the most playing time ("the starters").
\item A comparison of "assisted team points" and "unassisted team points" in relationship to win-loss record favored the former and strongly suggested that how a basketball team scores points is more important than the number of points it scores. These findings provide circumstantial support for the popular dictum in competitive team sports that "Teamwork Means Success—Work Together, Win Together."
\end{enumerate}
\subsection{Shot selection}
Quantifying shot quality in the NBA
Chang, Yu-Han and Maheswaran, Rajiv and Su, Jeff and Kwok, Sheldon and Levy, Tal and Wexler, Adam and Squire, Kevin
Tags: \#playereval \#team \#shooting \#spatial \#nba
\cite{chang2014quantifying}
\begin{enumerate}
\item Separately characterize the difficulty of shots and the ability to make them
\item ESQ (Effective Shot Quality) and EFG+ (EFG - ESQ)
\item EFG+ is shooting ability above expectations
\item Addresses problem of confounding two separate attributes that EFG encounters
\begin{itemize}
\item quality of a shot and the ability to make that shot
\end{itemize}
\end{enumerate}
The problem of shot selection in basketball \\
Brian Skinner
\cite{skinner2012problem} \\
\begin{enumerate}
\item In this article, I explore the question of when a team should shoot and when they should pass up the shot by considering a simple theoretical model of the shot selection process, in which the quality of shot opportunities generated by the offense is assumed to fall randomly within a uniform distribution. Within this model I derive an answer to the question ''how likely must the shot be to go in before the player should take it?''
\item The theoretical prediction for the optimal shooting rate is compared to data from the National Basketball Association (NBA). The comparison highlights some limitations of the theoretical model, while also suggesting that NBA teams may be overly reluctant to shoot the ball early in the shot clock.
\end{enumerate}
Generality of the matching law as a descriptor of shot selection in basketball \newline
Alferink, Larry A and Critchfield, Thomas S and Hitt, Jennifer L and Higgins, William J \newline
\cite{alferink2009generality}
Studies the matching law to explains shot selection
\begin{enumerate}
\item Based on a small sample of highly successful teams, past studies suggested that shot selection (two- vs. three-point field goals) in basketball corresponds to predictions of the generalized matching law. We examined the generality of this finding by evaluating shot selection of college (Study 1) and professional (Study 3) players. The matching law accounted for the majority of variance in shot selection, with undermatching and a bias for taking three-point shots.
\item Shotselection matching varied systematically for players who (a) were members of successful versus unsuccessful teams, (b) competed at different levels of collegiate play, and (c) served as regulars versus substitutes (Study 2). These findings suggest that the matching law is a robust descriptor of basketball shot selection, although the mechanism that produces matching is unknown.
\end{enumerate}
Basketball Shot Types and Shot Success in
Different Levels of Competitive Basketball \newline
Er{\v{c}}ulj, Frane and {\v{S}}trumbelj, Erik \newline
\cite{ervculj2015basketball} \newline
\begin{enumerate}
\item Relative frequencies of shot types in basketball
\item The purpose of our research was to investigate the relative frequencies of different types of basketball shots (above head, hook shot, layup, dunk, tip-in), some details about their technical execution (one-legged, two-legged, drive, cut, ...), and shot success in different levels of basketball competitions. We analysed video footage and categorized 5024 basketball shots from 40 basketball games and 5 different levels of competitive basketball (National Basketball Association (NBA), Euroleague, Slovenian 1st Division, and two Youth basketball competitions).
\item Statistical analysis with hierarchical multinomial logistic regression models reveals that there are substantial differences between competitions. However, most differences decrease or disappear entirely after we adjust for differences in situations that arise in different competitions (shot location, player type, and attacks in transition).
\item In the NBA, dunks are more frequent and hook shots are less frequent compared to European basketball, which can be attributed to better athleticism of NBA players. The effect situational variables have on shot types and shot success are found to be very similar for all competitions.
\item tags: \#nba \#shotselection
\end{enumerate}
A spatial analysis of basketball shot chart data \\
Reich, Brian J and Hodges, James S and Carlin, Bradley P and Reich, Adam M \\
\cite{reich2006spatial} \\
\begin{enumerate}
\item Uses spatial methods (like CAR, etc) to understand shot chart of NBA player (Sam Cassell)
\item Has some shot charts and maps
\item Fits an actual spatial model to shot chart data
\item Basketball coaches at all levels use shot charts to study shot locations and outcomes for their own teams as well as upcoming opponents. Shot charts are simple plots of the location and result of each shot taken during a game. Although shot chart data are rapidly increasing in richness and availability, most coaches still use them purely as descriptive summaries. However, a team's ability to defend a certain player could potentially be improved by using shot data to make inferences about the player's tendencies and abilities.
\item This article develops hierarchical spatial models for shot-chart data, which allow for spatially varying effects of covariates. Our spatial models permit differential smoothing of the fitted surface in two spatial directions, which naturally correspond to polar coordinates: distance to the basket and angle from the line connecting the two baskets. We illustrate our approach using the 2003–2004 shot chart data for Minnesota Timberwolves guard Sam Cassell.
\end{enumerate}
Optimal shot selection strategies for the NBA \\
Fichman, Mark and O'Brien, John Robert \\
\cite{fichman2019optimal}
Three point shooting and efficient mixed strategies: A portfolio management approach \\
Fichman, Mark and O'Brien, John \\
\cite{fichman2018three}
Rao-Blackwellizing field goal percentage \\
Daniel Daly-Grafstein and Luke Bornn\\
\cite{daly2019rao}
How to get an open shot: Analyzing team movement in basketball using tracking data \\
Lucey, Patrick and Bialkowski, Alina and Carr, Peter and Yue, Yisong and Matthews, Iain \\
\cite{lucey2014get} \\
Big data analytics for modeling scoring probability in basketball: The effect of shooting under high-pressure conditions \\
Zuccolotto, Paola and Manisera, Marica and Sandri, Marco \\
\cite{zuccolotto2018big}
\subsection{Tracking}
Miller and Bornn \newline
Possession Sketches: Mapping NBA Strategies
\cite{miller2017possession}
Tags: \#spatial \#clustering \#nba \#coaching \#strategy
\begin{enumerate}
\item Use LDA to create a database of movements of two players
\item Hierarchical model to describe interactions between players
\item Group together possessions with similar offensive structure
\item Uses tracking data to study strategy
\item Depends on \cite{blei2003latent}
\end{enumerate}
Nazanin Mehrasa*, Yatao Zhong*, Frederick Tung, Luke Bornn, Greg Mori
Deep Learning of Player Trajectory Representations for Team Activity Analysis
\cite{mehrasa2018deep}
Tags: \#tracking \#nba \#player
\begin{enumerate}
\item Use deep learning to learn player trajectories
\item Can be used for event recognition and team classification
\item Uses convolutional neural networks
\end{enumerate}
A. Miller and L. Bornn and R. Adams and K. Goldsberry \newline
Factorized Point Process Intensities: A Spatial Analysis of Professional Basketball \newline
~\cite{miller2013icml}
Tags: \#tracking \#nba \#player
\begin{enumerate}
\item We develop a machine learning approach to represent and analyze the underlying spatial structure that governs shot selection among professional basketball players in the NBA. Typically, NBA players are discussed and compared in an
heuristic, imprecise manner that relies on unmeasured intuitions about player behavior. This makes it difficult to draw comparisons between players and make accurate player specific predictions.
\item Modeling shot attempt data as a point process, we create a low dimensional representation of offensive player types in the NBA. Using non-negative matrix factorization (NMF), an unsupervised dimensionality reduction technique, we show that a low-rank spatial decomposition summarizes the shooting habits of NBA players. The spatial representations discovered by the algorithm correspond to intuitive descriptions of NBA player types, and can be used to model other spatial effects, such as shooting accuracy
\end{enumerate}
Creating space to shoot: quantifying spatial relative field goal efficiency in basketball \newline
Shortridge, Ashton and Goldsberry, Kirk and Adams, Matthew \newline
\cite{shortridge2014creating} \newline
\begin{enumerate}
\item This paper addresses the
challenge of characterizing and visualizing relative spatial
shooting effectiveness in basketball by developing metrics
to assess spatial variability in shooting. Several global and
local measures are introduced and formal tests are proposed to enable the comparison of shooting effectiveness
between players, groups of players, or other collections of
shots. We propose an empirical Bayesian smoothing rate
estimate that uses a novel local spatial neighborhood tailored for basketball shooting. These measures are evaluated
using data from the 2011 to 2012 NBA basketball season in
three distinct ways.
\item First we contrast nonspatial and spatial
shooting metrics for two players from that season and then
extend the comparison to all players attempting at least 250
shots in that season, rating them in terms of shooting effectiveness.
\item Second, we identify players shooting significantly better than the NBA average for their shot constellation, and formally compare shooting effectiveness of different players.
\item Third, we demonstrate an approach to map spatial shooting effectiveness. In general, we conclude that these measures are relatively straightforward to calculate with the right input data, and they provide distinctive and useful information about relative shooting ability in basketball.
\item We expect that spatially explicit basketball metrics will be useful additions to the sports analysis toolbox
\item \#tracking \#shooting \#spatial \#efficiency
\end{enumerate}
Courtvision: New Visual and Spatial Analytics for the NBA \newline
Goldsberry, Kirk \newline
\cite{goldsberry2012courtvision}
\begin{enumerate}
\item This paper investigates spatial and visual analytics as means to enhance basketball expertise. We
introduce CourtVision, a new ensemble of analytical techniques designed to quantify, visualize, and
communicate spatial aspects of NBA performance with unprecedented precision and clarity. We propose
a new way to quantify the shooting range of NBA players and present original methods that measure,
chart, and reveal differences in NBA players' shooting abilities.
\item We conduct a case study, which applies
these methods to 1) inspect spatially aware shot site performances for every player in the NBA, and 2) to
determine which players exhibit the most potent spatial shooting behaviors. We present evidence that
Steve Nash and Ray Allen have the best shooting range in the NBA.
\item We conclude by suggesting that
visual and spatial analysis represent vital new methodologies for NBA analysts.
\item \#tracking \#shooting \#spatial
\end{enumerate}
Cervone, Daniel and D'Amour, Alex and Bornn, Luke and Goldsberry, Kirk \newline
A Multiresolution Stochastic Process Model for Predicting Basketball Possession Outcomes \newline
\cite{cervone2014multiresolution}
\begin{enumerate}
\item In this article, we propose a framework for using optical player tracking data to estimate, in real time, the expected number of points obtained by the end of a possession. This quantity, called expected possession value (EPV),
derives from a stochastic process model for the evolution of a basketball possession.
\item We model this process
at multiple levels of resolution, differentiating between continuous, infinitesimal movements of players,
and discrete events such as shot attempts and turnovers. Transition kernels are estimated using hierarchical spatiotemporal models that share information across players while remaining computationally tractable
on very large data sets. In addition to estimating EPV, these models reveal novel insights on players' decisionmaking tendencies as a function of their spatial strategy. In the supplementary material, we provide a data sample and R code for further exploration of our model and its results.
\item This article introduces a new quantity, EPV, which represents a paradigm shift in the possibilities for statistical inferences about basketball. Using high-resolution, optical tracking data, EPV
reveals the value in many of the schemes and motifs that characterize basketball offenses but are omitted in the box score.
\item \#tracking \#expectedvalue \#offense
\end{enumerate}
POINTWISE: Predicting Points and Valuing Decisions in Real Time with NBA Optical Tracking Data \newline
Cervone, Dan and D'Amour, Alexander and Bornn, Luke and Goldsberry, Kirk \newline
\cite{cervonepointwise} \newline
\begin{enumerate}
\item EPV paper
\item . We propose a framework for using player-tracking data to assign a point value to each moment of a possession by computing how many points the offense is expected to score by the end of the possession, a quantity we call expected possession value (EPV).
\item EPV allows analysts to evaluate every decision made during a basketball game – whether it is to pass, dribble, or shoot – opening the door for a multitude of new metrics and analyses of basketball that quantify value in terms of points.
\item In this paper, we propose a modeling framework for estimating EPV, present results of EPV computations performed using playertracking data from the 2012-13 season, and provide several examples of EPV-derived metrics that answer real basketball questions.
\end{enumerate}
A method for using player tracking data in basketball to learn player skills and predict team performance \\
Brian Skinner and Stephen Guy\\
\cite{skinner2015method} \\
\begin{enumerate}
\item Player tracking data represents a revolutionary new data source for basketball analysis, in which essentially every aspect of a player's performance is tracked and can be analyzed numerically. We suggest a way by which this data set, when coupled with a network-style model of the offense that relates players' skills to the team's success at running different plays, can be used to automatically learn players' skills and predict the performance of untested 5-man lineups in a way that accounts for the interaction between players' respective skill sets.
\item After developing a general analysis procedure, we present as an example a specific implementation of our method using a simplified network model. While player tracking data is not yet available in the public domain, we evaluate our model using simulated data and show that player skills can be accurately inferred by a simple statistical inference scheme.
\item Finally, we use the model to analyze games from the 2011 playoff series between the Memphis Grizzlies and the Oklahoma City Thunder and we show that, even with a very limited data set, the model can consistently describe a player's interactions with a given lineup based only on his performance with a different lineup.
\item Tags \#network \#playereval \#lineup
\end{enumerate}
Exploring game performance in the National Basketball Association using player tracking data \\
Sampaio, Jaime and McGarry, Tim and Calleja-Gonz{\'a}lez, Julio and S{\'a}iz, Sergio Jim{\'e}nez and i del Alc{\'a}zar, Xavi Schelling and Balciunas, Mindaugas \\
\cite{sampaio2015exploring}
\begin{enumerate}
\item Recent player tracking technology provides new information about basketball game performance. The aim of this study was to (i) compare the game performances of all-star and non all-star basketball players from the National Basketball Association (NBA), and (ii) describe the different basketball game performance profiles based on the different game roles. Archival data were obtained from all 2013-2014 regular season games (n = 1230). The variables analyzed included the points per game, minutes played and the game actions recorded by the player tracking system.
\item To accomplish the first aim, the performance per minute of play was analyzed using a descriptive discriminant analysis to identify which variables best predict the all-star and non all-star playing categories. The all-star players showed slower velocities in defense and performed better in elbow touches, defensive rebounds, close touches, close points and pull-up points, possibly due to optimized attention processes that are key for perceiving the required appropriate environmental information.
\item The second aim was addressed using a k-means cluster analysis, with the aim of creating maximal different performance profile groupings. Afterwards, a descriptive discriminant analysis identified which variables best predict the different playing clusters.
\item The results identified different playing profile of performers, particularly related to the game roles of scoring, passing, defensive and all-round game behavior. Coaching staffs may apply this information to different players, while accounting for individual differences and functional variability, to optimize practice planning and, consequently, the game performances of individuals and teams.
\end{enumerate}
Modelling the dynamic pattern of surface area in basketball and its effects on team performance \\
Metulini, Rodolfo and Manisera, Marica and Zuccolotto, Paola \\
\cite{metulini2018modelling} \\
Deconstructing the rebound with optical tracking data \\ Maheswaran, Rajiv and Chang, Yu-Han and Henehan, Aaron and Danesis, Samantha \\
\cite{maheswaran2012deconstructing}
NBA Court Realty \\
Cervone, Dan and Bornn, Luke and Goldsberry, Kirk \\
\cite{cervone2016nba} \\
Applying deep learning to basketball trajectories \\
Shah, Rajiv and Romijnders, Rob \\
\cite{shah2016applying} \\
Predicting shot making in basketball using convolutional neural networks learnt from adversarial multiagent trajectories \\
Harmon, Mark and Lucey, Patrick and Klabjan, Diego \\
\cite{harmon2016predicting}
\subsection{Defense}
Characterizing the spatial structure of defensive skill in professional basketball
Franks, Alexander and Miller, Andrew and Bornn, Luke and Goldsberry, Kirk and others
\cite{franks2015characterizing}
Tags: \#tracking \#nba \#playereval \#defense
\begin{enumerate}
\item This paper attempts to fill this void, combining spatial and spatio-temporal processes, matrix factorization techniques and hierarchical regression models with player tracking data to advance the state of defensive analytics in the NBA.
\item Our approach detects, characterizes and quantifies multiple aspects of defensive play in basketball, supporting some common understandings of defensive effectiveness, challenging others and opening up many new insights into the defensive elements of basketball.
\item Specifically, our approach allows us to characterize how players affect both shooting frequency and efficiency of the player they are guarding.
\item By using an NMF-based decomposition of the court, we find an efficient and data-driven characterization of common shot regions which naturally corresponds to common basketball intuition.
\item Additionally, we are able to use this spatial decomposition to simply characterize the spatial shot and shot-guarding tendencies of players, giving a natural low-dimensional representation of a player's shot chart.
\end{enumerate}
Counterpoints: Advanced defensive metrics for nba basketball
Franks, Alexander and Miller, Andrew and Bornn, Luke and Goldsberry, Kirk
\cite{franks2015counterpoints}
Tags: \#tracking \#nba \#playereval \#defense
\begin{enumerate}
\item This paper bridges this gap, introducing a new suite of defensive metrics that aim to progress the field of basketball analytics by enriching the measurement of defensive play.
\item Our results demonstrate that the combination of player tracking, statistical modeling, and visualization enable a far richer characterization of defense than has previously been possible.
\item Our method, when combined with more traditional offensive statistics, provides a well-rounded summary of a player's contribution to the final outcome of a game.
\item Using optical tracking data and a model to infer defensive matchups at every moment throughout, we are able to provide novel summaries of defensive performance, and report "counterpoints" - points scored against a particular defender.
\item One key takeaway is that defensive ability is difficult to quantify with a single value. Summaries of points scored against and shots attempted against can say more about the team's defensive scheme (e.g. the Pacers funneling the ball toward Hibbert) than the individual player's defensive ability.
\item However, we argue that these visual and statistical summaries provide a much richer set of measurements for a player's performance, particularly those that give us some notion of shot suppression early in the possession.
\end{enumerate}
The Dwight effect: A new ensemble of interior defense analytics for the NBA
Goldsberry, Kirk and Weiss, Eric
Tags: \#tracking \#nba \#playereval \#defense
\cite{goldsberry2013dwight}
\begin{enumerate}
\item This paper introduces new spatial and visual analytics capable of assessing and characterizing the nature of interior defense in the NBA. We present two case studies that each focus on a different component of defensive play. Our results suggest that the integration of spatial approaches and player tracking data promise to improve the status quo of defensive analytics but also reveal some important challenges associated with evaluating defense.
\item Case study 1: basket proximity
\item Case study 2: shot proximity
\item Despite some relevant limitations, we contend that our results suggest that interior defensive abilities vary considerably across the league; simply stated, some players are more effective interior defenders than others. In terms of affecting shooting, we evaluated interior defense in 2 separate case studies.
\end{enumerate}
Automatic event detection in basketball using HMM with energy based defensive assignment \\
Keshri, Suraj and Oh, Min-hwan and Zhang, Sheng and Iyengar, Garud \\
\cite{keshri2019automatic}
\subsection{Player Eval}
Estimating an NBA player's impact on his team's chances of winning
Deshpande, Sameer K and Jensen, Shane T
\cite{deshpande2016estimating}
Tags: \#playereval \#winprob \#team \#lineup \#nba
\begin{enumerate}
\item We instead use a win probability framework for evaluating the impact NBA players have on their teams' chances of winning. We propose a Bayesian linear regression model to estimate an individual player's impact, after controlling for the other players on the court. We introduce several posterior summaries to derive rank-orderings of players within their team and across the league.
\item This allows us to identify highly paid players with low impact relative to their teammates, as well as players whose high impact is not captured by existing metrics.
\end{enumerate}
Who is 'most valuable'? Measuring the player's production of wins in the National Basketball Association
David J. Berri*
\cite{berri1999most}
Tags: \#playereval \#team \#nba
\begin{enumerate}
\item The purpose of this inquiry is to answer this question via an econometric model that links the player's statistics in the National Basketball Association (NBA) to team wins.
\item The methods presented in this discourse take the given NBA data and provide an accurate answer to this question. As noted, if the NBA tabulated a wider range of statistics, this accuracy could be improved. Nevertheless, the picture painted by the presented methods does provide a fair evaluation of each player's contribution to his team's relative success or failure. Such evaluations can obviously be utilized with respect to free agent signings, player-for-player trades, the allocation of minutes, and also to determine the impact changes in coaching methods or strategy have had on an individual's productivity.
\end{enumerate}
On estimating the ability of nba players
Fearnhead, Paul and Taylor, Benjamin Matthew
\cite{fearnhead2011estimating}
Tags: \#playereval \#nba \#lineup
\begin{enumerate}
\item This paper introduces a new model and methodology for estimating the ability of NBA players. The main idea is to directly measure how good a player is by comparing how their team performs when they are on the court as opposed to when they are off it. This is achieved in a such a way as to control for the changing abilities of the other players on court at different times during a match. The new method uses multiple seasons' data in a structured way to estimate player ability in an isolated season, measuring separately defensive and offensive merit as well as combining these to give an overall rating. The use of game statistics in predicting player ability will be considered
\item To the knowledge of the authors, the model presented here is unique in providing a structured means of updating player abilities between years. One of the most important findings here is that whilst using player game statistics and a simple linear model to infer offensive ability may be okay, the very poor fit of the defensive ratings model suggests that defensive ability depends on some trait not measured by the current range of player game statistics.
\end{enumerate}
Add \cite{macdonald} as a reference? (Adj. PM for NHL players)
A New Look at Adjusted Plus/Minus for Basketball Analysis \newline
Tags: \#playereval \#nba \#lineups \newline
D. Omidiran \newline
\cite{omidiran2011pm}
\begin{enumerate}
\item We interpret the Adjusted Plus/Minus (APM) model as a special case of a general penalized regression problem indexed by the parameter $\lambda$. We provide a fast technique for solving this problem for general values of $\lambda$. We
then use cross-validation to select the parameter $\lambda$ and demonstrate that this choice yields substantially better prediction performance than APM.
\item Paper uses cross-validation to choose $\lambda$ and shows they do better with this.
\end{enumerate}
Improved NBA adjusted plus-minus using regularization and out-of-sample testing \newline
Joseph Sill \newline
\cite{adjustedpm}
\begin{enumerate}
\item Adjusted +/- (APM) has grown in popularity as an NBA player evaluation technique in recent years. This paper presents a framework for evaluating APM models and also describes an enhancement to APM which nearly doubles its accuracy. APM models are evaluated in terms of their ability to predict the outcome of future games not included in the model's training data.
\item This evaluation framework provides a principled way to make choices about implementation details. The enhancement is a Bayesian technique called regularization (a.k.a. ridge regression) in which the data is combined with a priori beliefs regarding reasonable ranges for the parameters in order to
produce more accurate models.
\end{enumerate}
Measuring How NBA Players Help Their Teams Win \newline
Article by Rosenbaum on 82games.com \newline
\cite{rosenbaum} \newline
Effects of season period, team quality, and playing time on basketball players' game-related statistics \newline
Sampaio, Jaime and Drinkwater, Eric J and Leite, Nuno M \newline
\cite{sampaio2010effects}
Tags: \#playereval \#team
\begin{enumerate}
\item The aim of this study was to identify within-season differences in basketball players' game-related statistics according to team quality and playing time. The sample comprised 5309 records from 198 players in the Spanish professional basketball league (2007-2008).
\item Factor analysis with principal components was applied to the game-related statistics gathered from the official box-scores, which limited the analysis to five factors (free-throws, 2-point field-goals, 3-point field-goals, passes, and errors) and two variables (defensive and offensive rebounds). A two-step cluster analysis classified the teams as stronger (69$\pm$8\% winning percentage), intermediate (43$\pm$5\%), and weaker teams (32$\pm$5\%); individual players were classified based on playing time as important players (28$\pm$4 min) or less important players (16$\pm$4 min). Seasonal variation was analysed monthly in eight periods.
\item A mixed linear model was applied to identify the effects of team quality and playing time within the months of the season on the previously identified factors and game-related statistics. No significant effect of season period was observed. A team quality effect was identified, with stronger teams being superior in terms of 2-point field-goals and passes. The weaker teams were the worst at defensive rebounding (stronger teams: 0.17$\pm$0.05; intermediate teams: 0.17$\pm$0.06; weaker teams: 0.15$\pm$0.03; $P \leq 0.001$). While playing time was significant in almost all variables, errors were the most important factor when contrasting important and less important players, with fewer errors being made by important players. The trends identified can help coaches and players to create performance profiles according to team quality and playing time. However, these performance profiles appear to be independent of season period.
\item Identify effects of strength of team on players' performance
\item Conclusion: There appears to be no seasonal variation in high level basketball performances. Although offensive plays determine success in basketball, the results of the current study indicate that securing more defensive rebounds and committing fewer errors are also important. Furthermore, the identified trends allow the creation of performance profiles according to team quality and playing time during all seasonal periods. Therefore, basketball coaches (and players) will benefit from being aware of these results, particularly when designing game strategies and when taking tactical decisions.
\end{enumerate}
Forecasting NBA Player Performance using a Weibull-Gamma Statistical Timing Model \newline
Douglas Hwang \newline
\cite{hwang2012forecasting} \newline
\begin{enumerate}
\item Uses a Weibull-Gamma statistical timing model
\item Fits a player's performance over time to a Weibull distribution, and accounts for unobserved heterogeneity by fitting the parameters of the Weibull distribution to a gamma distribution
\item This will help predict performance over the next season, estimate contract value, and quantify the potential ``aging'' effects of a certain player (a rough two-stage sketch is given below)
\item Tags: \#forecasting \#playereval \#performance
\end{enumerate}
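
As a rough illustration of the two-stage idea only, and not of Hwang's actual estimation procedure, the sketch below fits a Weibull model to each player's per-game production and then fits a gamma distribution across the player-level scale parameters to capture unobserved heterogeneity; the input data are synthetic placeholders.
\begin{verbatim}
import numpy as np
from scipy import stats

# Placeholder data: games[p] holds one player's per-game production values.
rng = np.random.default_rng(1)
games = [rng.weibull(1.5, size=70) * rng.gamma(2.0, 5.0) for _ in range(40)]

# Stage 1: fit a Weibull to each player's production (location fixed at 0).
scales = []
for x in games:
    shape, loc, scale = stats.weibull_min.fit(x, floc=0)
    scales.append(scale)

# Stage 2: account for heterogeneity across players by fitting a gamma
# distribution to the player-level Weibull scale parameters.
a, loc, b = stats.gamma.fit(scales, floc=0)
print("gamma shape %.2f and scale %.2f across players" % (a, b))
\end{verbatim}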
\subsubsection{Position analysis?}
Using box-scores to determine a position's contribution to winning basketball games
Page, Garritt L and Fellingham, Gilbert W and Reese, C Shane
\cite{page2007using}
\begin{enumerate}
\item Tried to quantify importance of different positions in basketball
\item Used hierarchical Bayesian model at five positions
\item One result is that defensive rebounds from point and shooting guards are important
\item While it is generally recognized that the relative importance of different skills is not constant across different positions on a basketball team, quantification of the differences has not been well studied. 1163 box scores from games in the National Basketball Association during the 1996-97 season were used to study the relationship of skill performance by position and game outcome as measured by point differentials.
\item A hierarchical Bayesian model was fit with individual players viewed as a draw from a population of players playing a particular position: point guard, shooting guard, small forward, power forward, center, and bench. Posterior distributions for parameters describing position characteristics were examined to discover the relative importance of various skills as quantified in box scores across the positions.
\item Results were consistent with expectations, although defensive rebounds from both point and shooting guards were found to be quite important.
\item In summary, the point spread of a basketball game increases if all five positions have more offensive rebounds, out-assist, have a better field goal percentage, and fewer turnovers than their positional opponent. These results are certainly not surprising. Some trends that were somewhat more surprising were the importance of defensive rebounding by the guard positions and offensive rebounding by the point guard. These results also show the emphasis the NBA places on an all-around small forward.
\end{enumerate}
\subsubsection{Player production curves}
Effect of position, usage rate, and per game minutes played on NBA player production curves \\
Page, Garritt L and Barney, Bradley J and McGuire, Aaron T \\
\cite{page2013effect}
\begin{enumerate}
\item Production related to games played
\item Production curve modeled using GPs
\item Touches on deterioration of production relative to age
\item Learn how minutes played and usage rate affect production curves
\item One good nugget from conclusion: The general finding from this is that aging trends do impact natural talent evaluation of NBA players. Point guards appear to exhibit a different aging pattern than wings or bigs when it comes to Hollinger's Game Score, with a slightly later peak and a much steeper decline once that peak is met. The fact that the average point guard spends more of his career improving his game score is not lost on talent evaluators in the NBA, and as such, point guards are given more time on average to perform the functions of a rotation player in the NBA.
\end{enumerate}
Functional data analysis of aging curves in sports \\
Alex Wakim and Jimmy Jin \\
\cite{wakim2014functional} \\
\begin{enumerate}
\item Uses FDA and FPCA to study aging curves in sports
\item Includes NBA and finds age patterns among NBA players with different scoring abilities
\item It is well known that athletic and physical condition is affected by age. Plotting an individual athlete's performance against age creates a graph commonly called the player's aging curve. Despite the obvious interest to coaches and managers, the analysis of aging curves so far has used fairly rudimentary techniques. In this paper, we introduce functional data analysis (FDA) to the study of aging curves in sports and argue that it is both more general and more flexible compared to the methods that have previously been used. We also illustrate the rich analysis that is possible by analyzing data for NBA and MLB players.
\item In the analysis of MLB data, we use functional principal components analysis (fPCA) to perform functional hypothesis testing and show differences in aging curves between potential power hitters and potential non-power hitters. The analysis of aging curves in NBA players illustrates the use of the PACE method. We show that there are three distinct aging patterns among NBA players and that player scoring ability differs across the patterns. We also show that aging pattern is independent of position.
\end{enumerate}
Large data and Bayesian modeling—aging curves of NBA players \\
Vaci, Nemanja and Coci{\'c}, Dijana and Gula, Bartosz and Bilali{\'c}, Merim \\
\cite{vaci2019large} \\
\begin{enumerate}
\item Researchers interested in changes that occur as people age are faced with a number of methodological problems, starting with the immense time scale they are trying to capture, which renders laboratory experiments useless and longitudinal studies rather rare. Fortunately, some people take part in particular activities and pastimes throughout their lives, and often these activities are systematically recorded. In this study, we use the wealth of data collected by the National Basketball Association to describe the aging curves of elite basketball players.
\item We have developed a new approach rooted in the Bayesian tradition in order to understand the factors behind the development and deterioration of a complex motor skill. The new model uses Bayesian structural modeling to extract two latent factors, those of development and aging. The interaction of these factors provides insight into the rates of development and deterioration of skill over the course of a player's life. We show, for example, that elite athletes have different levels of decline in the later stages of their career, which is dependent on their skill acquisition phase.
\item The model goes beyond description of the aging function, in that it can accommodate the aging curves of subgroups (e.g., different positions played in the game), as well as other relevant factors (e.g., the number of minutes on court per game) that might play a role in skill changes. The flexibility and general nature of the new model make it a perfect candidate for use across different domains in lifespan psychology.
\end{enumerate}
\subsection{Predicting winners}
Modeling and forecasting the outcomes of NBA basketball games
Manner, Hans
Tags: \#hothand \#winprob \#team \#multivariate \#timeseries
\cite{manner2016modeling}
\begin{enumerate}
\item This paper treats the problem of modeling and forecasting the outcomes of NBA basketball games. First, it is shown how the benchmark model in the literature can be extended to allow for heteroscedasticity and treat the estimation and testing in this framework. Second, time-variation is introduced into the model by (i) testing for structural breaks in the model and (ii) introducing a dynamic state space model for team strengths.
\item The in-sample results based on eight seasons of NBA data provide some evidence for heteroscedasticity and a few structural breaks in team strength within seasons. However, there is no evidence for persistent time variation and therefore the hot hand belief cannot be confirmed. The models are used for forecasting a large number of regular season and playoff games and the common finding in the literature that it is difficult to outperform the betting market is confirmed. Nevertheless, it turns out that a forecast combination of model based forecasts with betting odds can outperform either approach individually in some situations.
\end{enumerate}
Basketball game-related statistics that discriminate between teams' season-long success \newline
~\cite{ibanez2008basketball} \newline
Tags: \#prediction \#winner \#defense \newline
Ibanez, Sampaio, Feu, Lorenzo, Gomez, and Ortega
\begin{enumerate}
\item The aim of the present study was to identify the game-related statistics that discriminate between season-long successful and unsuccessful basketball teams participating in the Spanish Basketball League (LEB1). The sample included all 145 average records per season from the 870 games played between the 2000-2001 and the 2005-2006 regular seasons.
\item The following game-related statistics were gathered from the official box scores of the Spanish Basketball Federation: 2- and 3-point field-goal attempts (both successful and unsuccessful), free-throws (both successful and unsuccessful), defensive and offensive rebounds, assists, steals, turnovers, blocks (both made and received), and fouls (both committed and received). To control for season variability, all results were normalized to minutes played each season and then converted to z-scores.
\item The results allowed discrimination between best and worst teams' performances through the following game-related statistics: assists (SC = 0.47), steals (SC = 0.34), and blocks (SC = 0.30). The function obtained correctly classified 82.4\% of the cases. In conclusion, season-long performance may be supported by players' and teams' passing skills and defensive preparation.
\item Our results suggest a number of differences between best and worst teams' game-related statistics, but globally the offensive (assists) and defensive (steals and blocks) actions were the most powerful factors in discriminating between groups. Therefore, game winners and losers are discriminated by defensive rebounding and field-goal shooting, whereas season-long performance is discriminated by players' and teams' passing skills and defensive preparation. Players should be better informed about these results and it is suggested that coaches pay attention to guards' passing skills, to forwards' stealing skills, and to centres' blocking skills to build and prepare offensive communication and overall defensive pressure.
\end{enumerate}
Simulating a Basketball Match with a Homogeneous Markov Model and Forecasting the Outcome \newline
Strumbelj and Vracar \newline
\#prediction \#winner \newline
~\cite{vstrumbelj2012simulating}
\begin{enumerate}
\item We used a possession-based Markov model to model the progression of a basketball match. The model's transition matrix was estimated directly from NBA play-by-play data and indirectly from the teams' summary statistics. We evaluated both this approach and other commonly used forecasting approaches: logit regression of the outcome, a latent strength rating method, and bookmaker odds. We found that the Markov model approach is appropriate for modelling a basketball match and produces forecasts of a quality comparable to that of other statistical approaches, while giving more insight into basketball. Consistent with previous studies, bookmaker odds were the best probabilistic forecasts.
\item Using summary statistics to estimate Shirley's Markov model for basketball produced a model for a match between two specific teams. The model was used to simulate the match and produce outcome forecasts of a quality comparable to that of other statistical approaches, while giving more insight into basketball. Due to its homogeneity, the model is still limited with respect to what it can simulate, and a non-homogeneous model is required to deal with these issues. As far as basketball match simulation is concerned, more work has to be done, with an emphasis on making the transition probabilities conditional on the point spread and the game time. A stripped-down simulation in this spirit is sketched below.
\end{enumerate}
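
A heavily stripped-down version of this idea is sketched below: instead of the full transition matrix estimated from play-by-play data, each possession simply ends in 0, 2, or 3 points with team-specific probabilities (placeholder values), and repeated simulation of the match yields an outcome forecast.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Placeholder per-possession profiles: P(0 points), P(2 points), P(3 points).
profiles = {"A": [0.52, 0.36, 0.12], "B": [0.55, 0.34, 0.11]}

def simulate_match(n_possessions=95):
    """Alternate possessions between the two teams and tally the points."""
    score = {"A": 0, "B": 0}
    for i in range(2 * n_possessions):
        team = "A" if i % 2 == 0 else "B"
        score[team] += rng.choice([0, 2, 3], p=profiles[team])
    return score

n_sims = 5000
wins_a = 0
for _ in range(n_sims):
    s = simulate_match()
    wins_a += s["A"] > s["B"]
print("forecast P(team A wins):", wins_a / n_sims)
\end{verbatim}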
Differences in game-related statistics of basketball performance by game location for men's winning and losing teams \\
G{\'o}mez, Miguel A and Lorenzo, Alberto and Barakat, Rub{\'e}n and Ortega, Enrique and Jos{\'e} M, Palao \\
\cite{gomez2008differences} \\
\#location \#winners \#boxscore
\begin{enumerate}
\item The aim of the present study was to identify game-related statistics that differentiate winning and losing teams according to game location. The sample included 306 games of the 2004–2005 regular season of the Spanish professional men's league (ACB League). The independent variables were game location (home or away) and game result (win or loss). The game-related statistics registered were free throws (successful and unsuccessful), 2- and 3-point field goals (successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, steals, and turnovers. Descriptive and inferential analyses were done (one-way analysis of variance and discriminate analysis).
\item The multivariate analysis showed that winning teams differ from losing teams in defensive rebounds (SC = .42) and in assists (SC = .38). Similarly, winning teams differ from losing teams when they play at home in defensive rebounds (SC = .40) and in assists (SC = .41). On the other hand, winning teams differ from losing teams when they play away in defensive rebounds (SC = .44), assists (SC = .30), successful 2-point field goals (SC = .31), and unsuccessful 3-point field goals (SC = –.35). Defensive rebounds and assists were the only game-related statistics common to all three analyses.
\end{enumerate}
\subsubsection{NCAA bracket}
Predicting the NCAA basketball tournament using isotonic least squares pairwise comparison model \\
Neudorfer, Ayala and Rosset, Saharon \\
\cite{neudorfer2018predicting}
Identifying NCAA tournament upsets using Balance Optimization Subset Selection \\
Dutta, Shouvik and Jacobson, Sheldon H and Sauppe, Jason J \\
\cite{dutta2017identifying}
Building an NCAA men's basketball predictive model and quantifying its success
Lopez, Michael J and Matthews, Gregory J
\cite{lopez2015building}
Tags: \#ncaa \#winprob \#outcome \#prediction
\begin{enumerate}
\item This manuscript both describes our novel predictive models and quantifies the possible benefits, with respect to contest standings, of having a strong model. First, we describe our submission, building on themes first suggested by Carlin (1996) by merging information from the Las Vegas point spread with team-based possession metrics. The success of our entry reinforces longstanding themes of predictive modeling, including the benefits of combining multiple predictive tools and the importance of using the best possible data.
\end{enumerate}
Introduction to the NCAA men's basketball prediction methods issue
Glickman, Mark E and Sonas, Jeff \\
\cite{glickman2015introduction} \\
This is a whole issue of JQAS so not so important to include the intro cited here specifically \\
A generative model for predicting outcomes in college basketball
Ruiz, Francisco JR and Perez-Cruz, Fernando
\cite{ruiz2015generative}
Tags: \#ncaa \#prediction \#outcomes \#team
\begin{enumerate}
\item We show that a classical model for soccer can also provide competitive results in predicting basketball outcomes. We modify the classical model in two ways in order to capture both the specific behavior of each National Collegiate Athletic Association (NCAA) conference and different strategies of teams and conferences. Through simulated bets on six online betting houses, we show that this extension leads to better predictive performance in terms of the profit we make. We compare our estimates with the probabilities predicted by the winner of the recent Kaggle competition on the 2014 NCAA tournament, and conclude that our model tends to provide results that differ more from the implicit probabilities of the betting houses and, therefore, has the potential to provide higher benefits.
\item In this paper, we have extended a simple soccer model for college basketball. Outcomes at each game are modeled as independent Poisson random variables whose means depend on the attack and defense coefficients of teams and conferences. Our conference-specific coefficients account for the overall behavior of each conference, while the per-team coefficients provide more specific information about each team. Our vector-valued coefficients can capture different strategies of both teams and conferences. We have derived a variational inference algorithm to learn the attack and defense coefficients, and have applied this algorithm to four March Madness Tournaments. We compare our predictions for the 2014 Tournament to the recent Kaggle competition results and six online betting houses. Simulations show that our model identifies weaker but undervalued teams, which results in a positive mean profit in all the considered betting houses. We also outperform the Kaggle competition winner in terms of mean profit.
\end{enumerate}
A new approach to bracket prediction in the NCAA Men's Basketball Tournament based on a dual-proportion likelihood
Gupta, Ajay Andrew
\cite{gupta2015new}
Tags: \#ncaa \#prediction
\begin{enumerate}
\item This paper reviews relevant previous research, and then introduces a rating system for teams using game data from that season prior to the tournament. The ratings from this system are used within a novel, four-predictor probability model to produce sets of bracket predictions for each tournament from 2009 to 2014. This dual-proportion probability model is built around the constraint of two teams with a combined 100\% probability of winning a given game.
\item This paper also performs Monte Carlo simulation to investigate whether modifications are necessary from an expected value-based prediction system such as the one introduced in the paper, in order to have the maximum bracket score within a defined group. The findings are that selecting one high-probability ``upset'' team for one to three late-round games is likely to outperform other strategies, including one with no modifications to the expected value, as long as the upset choice overlaps a large minority of competing brackets while leaving the bracket some distinguishing characteristics in late rounds.
\end{enumerate}
Comparing Team Selection and Seeding for the 2011 NCAA Men's Basketball Tournament
Gray, Kathy L and Schwertman, Neil C
\cite{gray2012comparing}
Tags: \#ncaa \#seeding \#selection
\begin{enumerate}
\item In this paper, we propose an innovative heuristic measure of team success, and we investigate how well the NCAA committee seeding compares to the computer-based placements by Sagarin and the rating percentage index (RPI). For the 2011 tournament, the NCAA committee selection process performed better than those based solely on the computer methods in determining tournament success.
\item This analysis of 2011 tournament data shows that the Selection Committee in 2011 was quite effective in the seeding of the tournament and that basing the seeding entirely on computer ratings was not an advantage since, of the three placements, the NCAA seeding had the strongest correlation with team success points and the fewest upsets. The incorporation of some subjectivity into their seeding appears to be advantageous. The committee is able to make an adjustment, for example, if a key player is injured or unable to play for some reason. From this analysis of the data for the 2011 tournament, there is ample evidence that the NCAA selection committee was proficient in the selection and seeding of the tournament teams.
\end{enumerate}
Joel Sokol's group on LRMC method: \\
\cite{brown2010improved} \\
\cite{kvam2006logistic} \\
\cite{brown2012insights} \\
\subsection{Uncategorized}
Hal Stern in JASA on Brownian motion model for progress of sports scores: \\
\cite{stern1994brownian} \\
Random walk picture of basketball scoring \\
Gabel, Alan and Redner, Sidney \\
\cite{gabel2012random}\\
\begin{enumerate}
\item We present evidence, based on play-by-play data from all 6087 games from the 2006/07--2009/10 seasons of the National Basketball Association (NBA), that basketball scoring is well described by a continuous-time anti-persistent random walk. The time intervals between successive scoring events follow an exponential distribution, with essentially no memory between different scoring intervals.
\item By including the heterogeneity of team strengths, we build a detailed computational random-walk model that accounts for a variety of statistical properties of scoring in basketball games, such as the distribution of the score difference between game opponents, the fraction of game time that one team is in the lead, the number of lead changes in each game, and the season win/loss records of each team.
\item In this work, we focus on the statistical properties of scoring during each basketball game. The scoring data are consistent with the scoring rate being described by a continuous-time Poisson process. Consequently, apparent scoring bursts or scoring droughts arise from Poisson statistics rather than from a temporally correlated process.
\item Our main hypothesis is that the evolution of the score difference between two competing teams can be accounted for by a continuous-time random walk (simulated in the sketch below).
\item However, this competitive rat race largely eliminates systematic advantages between teams, so that all that remains, from a competitive standpoint, are small surges and ebbs in performance that arise from the underlying stochasticity of the game.
\end{enumerate}
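
The mechanism described above is straightforward to simulate. The sketch below uses made-up parameter values rather than the estimates reported in the paper: scoring events arrive as a Poisson process, so waiting times are exponential, and an anti-persistence parameter below one half makes the team that just scored less likely to score next.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def simulate_score_difference(rate_per_min=2.1, game_min=48.0, anti=0.36):
    """Continuous-time, anti-persistent random walk for the score difference.

    The event rate and anti-persistence parameter are illustrative values;
    free throws are ignored for simplicity.
    """
    t, diff, last = 0.0, 0, None
    while True:
        t += rng.exponential(1.0 / rate_per_min)  # memoryless waiting time
        if t > game_min:
            return diff
        # Anti-persistence: the team that just scored is less likely (< 1/2)
        # to score the next basket.
        if last is None:
            p_home = 0.5
        else:
            p_home = anti if last == "home" else 1 - anti
        scorer = "home" if rng.random() < p_home else "away"
        points = 2 if rng.random() < 0.8 else 3
        diff += points if scorer == "home" else -points
        last = scorer

diffs = [simulate_score_difference() for _ in range(10000)]
print("mean absolute final score difference:", np.mean(np.abs(diffs)))
\end{verbatim}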
Importance of Free-Throws at Various Stages of Basketball Games \\
\cite{kozar1994importance}
\begin{enumerate}
\item Basketball coaches often refer to their teams' success or failure as a product of their players' performances at the free-throw line. In the present study, play-by-play records of 490 NCAA Division I men's basketball games were analyzed to assess the percentage of points scored from free-throws at various stages of the games.
\item About 20\% of all points were scored from free-throws. Free-throws comprised a significantly higher percentage of total points scored during the last 5 minutes than the first 35 minutes of the game for both winning and losing teams. Also, in the last 5 minutes of 246 games decided by 9 points or less and 244 decided by 10 points or more, winners scored a significantly higher percentage of points from free-throws than did losers. Suggestions for structuring practice conditions are discussed.
\end{enumerate}
Dynamic modeling of performance in basketball \\
\cite{malarranha2013dynamic} \\
\begin{enumerate}
\item The aim of this study was to identify the intra-game variation of four performance indicators that determine the outcome of basketball games, controlling for quality of opposition. All seventy-four games of the Basketball World Championship (Turkey 2010) were analyzed to calculate the performance indicators in eight 5-minute periods. A repeated measures ANOVA was performed to identify differences in time and game outcome for each performance indicator. The quality of opposition was included in the models as a covariate.
\item The effective field goal percentage ($F=14.0$, $p<.001$, $\eta^2=.09$) influenced the game outcome throughout the game, while the offensive rebounds percentage ($F=7.6$, $p<.05$, $\eta^2=.05$) had greater influence in the second half. The offensive ($F=6.3$, $p<.05$, $\eta^2=.04$) and defensive ($F=12.0$, $p<.001$, $\eta^2=.08$) ratings also influenced the outcome of the games. These results may give coaches more accurate information when preparing their teams for competition.
\end{enumerate}
Modeling the offensive-defensive interaction and resulting outcomes in basketball \\
Lamas, Leonardo and Santana, Felipe and Heiner, Matthew and Ugrinowitsch, Carlos and Fellingham, Gilbert \\
\cite{lamas2015modeling} \\
\subsubsection{Sampaio section}
Discriminative power of basketball game-related statistics by level of competition and sex \\
\cite{sampaio2004discriminative} \\
Discriminative game-related statistics between basketball starters and nonstarters when related to team quality and game outcome \\
\cite{sampaio2006discriminative} \\
Discriminant analysis of game-related statistics between basketball guards, forwards and centres in three professional leagues \\
\cite{sampaio2006discriminant} \\
Statistical analyses of basketball team performance: understanding teams' wins and losses according to a different index of ball possessions \\
\cite{sampaio2003statistical}
Game related statistics which discriminate between winning and losing under-16 male basketball games \\
\cite{lorenzo2010game} \\
Game location influences basketball players' performance across playing positions \\
\cite{sampaio2008game} \\
Identifying basketball performance indicators in regular season and playoff games \\
\cite{garcia2013identifying} \\
\subsection{General references}
Basketball reference: \cite{bballref} \newline
Moneyball: \cite{lewis2004moneyball} \newline
NBA website: \cite{nba_glossary} \newline
Spatial statistics: \cite{diggle2013statistical} and \cite{ripley2005spatial} \newline
Statistics for spatial data: \cite{cressie93} \newline
Efron and Morris on Stein's estimator? \cite{efron1975data} \\
Bill James?: \cite{james1984the-bill} and \cite{james2010the-new-bill}
I think Cleaning the Glass would be a good reference, especially the stats component of its site. The review article should mention somewhere that the growth of basketball analytics in academic articles - and its growth in industry - occurred simultaneously with its use on different basketball stats or fan sites. Places like CTG, HoopsHype, etc., which mention and popularize fancier statistics, contributed to the growth of analytics in basketball.
Not sure where or how is the best way to include that in the paper but thought it was at least worth writing down somewhere.
\subsection{Hot hand?}
\cite{hothand93online}
\bibliographystyle{plain}
\section{Tags}
Data Tags: \\
\#spatial \#tracking \#college \#nba \#intl \#boxscore \#pbp (play-by-play) \#longitudinal \#timeseries \\
Goal Tags: \\
\#playereval \#defense \#lineup \#team \#projection \#behavioral \#strategy \#rest \#health \#injury \#winprob \#prediction \\
Miscellaneous: \\
\#clustering \#coaching \#management \#refs \#gametheory \#intro \#background \\
\section{Summaries}
\subsection{Introduction}
Kubatko et al. "A starting point for analyzing basketball statistics." \cite{kubatko}
\newline
Tags: \#intro \#background \#nba \#boxscore
\begin{enumerate}
\item Basics of the analysis of basketball. Provide a common starting point for future research in basketball
\item Define a general formulation for how to estimate the number of possessions.
\item Provide a common basis for future possession estimation
\item Also discuss other concepts and methods: per-minute statistics, pace, four factors, etc.
\item Contain other breakdowns such as rebound rate, plays, etc
\end{enumerate}
\subsection{Networks/Player performance}
Evaluating Basketball Player Performance via Statistical Network Modeling
Piette, Pham, Anand
\cite{piette2011evaluating}
Tags: \#playereval \#lineup \#nba \#team
\begin{enumerate}
\item Players are nodes, edges are if they played together in the same five-man unit
\item Adapting a network-based algorithm to estimate centrality scores
\end{enumerate}
Quantifying shot quality in the NBA
Chang, Yu-Han and Maheswaran, Rajiv and Su, Jeff and Kwok, Sheldon and Levy, Tal and Wexler, Adam and Squire, Kevin
Tags: \#playereval \#team \#shooting \#spatial \#nba
\cite{chang2014quantifying}
\begin{enumerate}
\item Separately characterize the difficulty of shots and the ability to make them
\item ESQ (Effective Shot Quality) and EFG+ (EFG - ESQ)
\item EFG+ is shooting ability above expectations
\item Addresses problem of confounding two separate attributes that EFG encounters
\begin{itemize}
\item quality of a shot and the ability to make that shot
\end{itemize}
\end{enumerate}
\bibliographystyle{plain}
\section{INTRODUCTION}
Please begin the main text of your article here.
\section{FIRST-LEVEL HEADING}
This is dummy text.
\subsection{Second-Level Heading}
This is dummy text. This is dummy text. This is dummy text. This is dummy text.
\subsubsection{Third-Level Heading}
This is dummy text. This is dummy text. This is dummy text. This is dummy text.
\paragraph{Fourth-Level Heading} Fourth-level headings are placed as part of the paragraph.
\section{ELEMENTS\ OF\ THE\ MANUSCRIPT}
\subsection{Figures}Figures should be cited in the main text in chronological order. This is dummy text with a citation to the first figure (\textbf{Figure \ref{fig1}}). Citations to \textbf{Figure \ref{fig1}} (and other figures) will be bold.
\begin{figure}[h]
\includegraphics[width=3in]{SampleFigure}
\caption{Figure caption with descriptions of parts a and b}
\label{fig1}
\end{figure}
\subsection{Tables} Tables should also be cited in the main text in chronological order (\textbf {Table \ref{tab1}}).
\begin{table}[h]
\tabcolsep7.5pt
\caption{Table caption}
\label{tab1}
\begin{center}
\begin{tabular}{@{}l|c|c|c|c@{}}
\hline
Head 1 &&&&Head 5\\
{(}units)$^{\rm a}$ &Head 2 &Head 3 &Head 4 &{(}units)\\
\hline
Column 1 &Column 2 &Column3$^{\rm b}$ &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
\hline
\end{tabular}
\end{center}
\begin{tabnote}
$^{\rm a}$Table footnote; $^{\rm b}$second table footnote.
\end{tabnote}
\end{table}
\subsection{Lists and Extracts} Here is an example of a numbered list:
\begin{enumerate}
\item List entry number 1,
\item List entry number 2,
\item List entry number 3,\item List entry number 4, and
\item List entry number 5.
\end{enumerate}
Here is an example of a extract.
\begin{extract}
This is an example text of quote or extract.
This is an example text of quote or extract.
\end{extract}
\subsection{Sidebars and Margin Notes}
\begin{marginnote}[]
\entry{Term A}{definition}
\entry{Term B}{definition}
\entry{Term C}{definition}
\end{marginnote}
\begin{textbox}[h]\section{SIDEBARS}
Sidebar text goes here.
\subsection{Sidebar Second-Level Heading}
More text goes here.\subsubsection{Sidebar third-level heading}
Text goes here.\end{textbox}
\subsection{Equations}
\begin{equation}
a = b \ {\rm ((Single\ Equation\ Numbered))}
\end{equation}
Equations can also be multiple lines as shown in Equations 2 and 3.
\begin{eqnarray}
c = 0 \ {\rm ((Multiple\ Lines, \ Numbered))}\\
ac = 0 \ {\rm ((Multiple \ Lines, \ Numbered))}
\end{eqnarray}
\begin{summary}[SUMMARY POINTS]
\begin{enumerate}
\item Summary point 1. These should be full sentences.
\item Summary point 2. These should be full sentences.
\item Summary point 3. These should be full sentences.
\item Summary point 4. These should be full sentences.
\end{enumerate}
\end{summary}
\begin{issues}[FUTURE ISSUES]
\begin{enumerate}
\item Future issue 1. These should be full sentences.
\item Future issue 2. These should be full sentences.
\item Future issue 3. These should be full sentences.
\item Future issue 4. These should be full sentences.
\end{enumerate}
\end{issues}
\section*{DISCLOSURE STATEMENT}
If the authors have nothing to disclose, the following statement will be used: The authors are not aware of any affiliations, memberships, funding, or financial holdings that
might be perceived as affecting the objectivity of this review.
\section*{ACKNOWLEDGMENTS}
Acknowledgements, general annotations, funding.
\section*{LITERATURE\ CITED}
\noindent
Please see the Style Guide document for instructions on preparing your Literature Cited.
The citations should be listed in alphabetical order, with titles. For example:
\section{Introduction}
Online algorithms are traditionally evaluated through competitive analysis \cite{borodin2005online}. To grasp this concept, consider the standard problem of ski rental, where a person goes skiing for an unknown number~$n$ of days (depending on the weather, say). Each day, she has two options: either renting the skis (at a daily price $c$) or buying them, once and for all, at price $B$. The optimal offline algorithm $\operatorname{OPT}$ would obviously buy the skis if and only if $cn>B$, so its score is $C_{\operatorname{OPT}}(n)=\min\{cn,B\}$, whereas the price paid by an online algorithm $\operatorname{ALG}$, which takes sequential decisions and is unaware of the input $n$, is $C_{\operatorname{ALG}}(n)=cN_{\operatorname{ALG}}+B\mathds{1}\{N_{\operatorname{ALG}}<n\}$, where $N_{\operatorname{ALG}}$ denotes the number of days $\operatorname{ALG}$ rents before buying (capped at $n$ if it never buys during the season). Notice that $C_{\operatorname{ALG}}$ can be random, if the buying decision is not deterministic.
Those notions can be formally defined in any online problem described by its input $\eta$: $C_{\operatorname{OPT}}(\eta)$ still denotes the score of the optimal offline algorithm, while $C_{\operatorname{ALG}}(\eta)$ is the score of an online algorithm $\operatorname{ALG}$ over that same input. The performance of $\operatorname{ALG}$ over some class of input distributions $\mathbf{D}$ is then given by the competitive ratio (CR):
\[
\operatorname{CR}(\operatorname{ALG})=\sup_{\mathcal{D} \in \mathbf{D}}\frac{\EE_{\eta \sim \mathcal{D}}[C_{\operatorname{ALG}}(\eta)]}{\EE_{\eta \sim \mathcal{D}}[C_{\operatorname{OPT}}(\eta)]},
\]
where the expectation is with respect to the (possible) randomness of the algorithm and the input.
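
As a concrete illustration of these definitions, the snippet below evaluates the classical deterministic break-even rule for ski rental, which rents as long as the accumulated rent stays below $B$ and then buys, and estimates the ratio of expected costs under one arbitrary distribution over the number of skiing days.
\begin{verbatim}
import numpy as np

def cost_opt(n, c=1.0, B=10.0):
    return min(c * n, B)

def cost_break_even(n, c=1.0, B=10.0):
    """Rent while the accumulated rent stays below B, then buy."""
    N = int(np.ceil(B / c)) - 1     # number of rented days before buying
    if n <= N:                      # the season ends before we ever buy
        return c * n
    return c * N + B

rng = np.random.default_rng(4)
seasons = rng.geometric(p=0.08, size=100000)   # an arbitrary law for n
alg = np.mean([cost_break_even(n) for n in seasons])
opt = np.mean([cost_opt(n) for n in seasons])
print("ratio of expected costs on this distribution:", round(alg / opt, 3))
\end{verbatim}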
The historical approach to worse case competitive analysis, widely covered \cite{borodin2005online}, is to assume the class of input distributions $\mathbf{D}$ encompasses all possible distributions, and that the algorithm has no a-priori knowledge of the data. This postulate has been questioned by a recent line of work considering algorithms using external predictions. Indeed, in many practical applications, some a-priori knowledge about the data can be leveraged to improve performances. For instance, when looking up a word in a dictionary, it is known that "alligator" appears towards the beginning whereas "wheel" will be towards the end. This knowledge can be used to design algorithms more efficient than binary search. This idea has been investigated in many classical online problems such as web page caching~\cite{lykouris2018competitive}, ski rental~\cite{wei2020optimal, purohitImprovingOnlineAlgorithms2018}, set cover~\cite{bamas2020primal}, matching~\cite{dinitz2021faster}, clustering~\cite{dong2019learning} or scheduling~\cite{imNonClairvoyantSchedulingPredictions2021, lindermayrNonClairvoyantSchedulingPredictions2022, lattanzi2020online}.
Typically, given predictions $\hat{\eta}$ of the true input, most of these works aim at balancing consistency vs.\ robustness, i.e., designing algorithms whose performances are close to that of optimal offline algorithms if predictions are accurate, but that do not degrade too much if they are terrible.
This approach assumes that predictions are given by some oracle, and cannot be trusted fully, hence the need for robustness to bad/incorrect predictions. The trade-off robustness vs.\ consistency in the prediction is oftentimes characterised by some parameter affecting the performance guarantees.
Instead of relying on an external prediction, we argue that algorithms can learn while they run. Self-learned predictions are by essence more and more accurate as the number of samples increases and therefore can be more trusted as the algorithm runs, in striking contrast with the previous approaches. For the sake of illustration, we shall focus our analysis on static job scheduling on a single machine.
In this problem, a set of jobs indexed by $i \in [N]$ must be processed on a machine. Each job $i$ has a size $P_i$, which is the amount of time the machine needs to complete it. An algorithm is a policy assigning jobs to the machine, and the completion time $e_i$ of job $i$ is the time at which this job finishes. The cost of an algorithm $\operatorname{ALG}$, also called \emph{flow time}, is the sum of all completion times:
\[\EE[C_{{\operatorname{ALG}}}] = \EE\left[\sum_{i \in [N]}e_i\right].\]
The objective is to minimize it under the constraint that only one job is processed at a time. Preemption is allowed, i.e., the job being processed can be changed at any time without additional cost (which implies, for instance, that two jobs can effectively be run in parallel, each taking twice as long to complete).
For some learning to eventually occur, jobs are assumed to have known ``types''. More precisely, there are $K$ different types and $n$ jobs per type, so that $N=nK$. Jobs of type $k\in[K]$ have sizes $(P_i^{k})_{i \in [n]}$ drawn i.i.d. from the same exponential distribution $D_k$, parameterized by $\lambda_k$. Even though the family of exponential distributions is quite classical to describe job sizes~\cite{kampke1989optimal, hamada1993bayesian, cunningham1973scheduling, cai2000asymmetric, cai2005single, pinedo1985scheduling, glazebrook1979scheduling}, the purpose of those assumptions is to better illustrate the principles and concepts without dwelling too much into technical details.
\paragraph{Contributions:}
We first explain why we focus on exponential job sizes in the static scheduling problem. Indeed, we show that any algorithm unaware of the parameters of the exponential distributions $(D_k)_{k=1}^K$ and that ignores job types has the same asymptotic CR (i.e., $2$) as in the adversarial case. This generalizes the lower bound in~\cite{motwani1994nonclairvoyant}. On the other side of the spectrum stands the algorithm that has access to perfect predictions of the parameters of $(D_k)_{k=1}^K$ and schedules the jobs with the smallest expected size first. This algorithm, called Follow-The-Perfect-Prediction ($\operatorname{FTPP}$), is the optimal realization-agnostic algorithm (see~\cite{caiOptimalStochasticScheduling2014}, Corollary 2.1). We study the variations of the CR of $\operatorname{FTPP}$ with respect to the problem's parameters and show that $\operatorname{FTPP}$ always has a CR smaller than 2. Those two points advocate for learning while running.
Then, we consider algorithms learning the unknown parameters of $(D_k)_{k=1}^K$ by using known job types. We show that the algorithm Explore-Then-Commit with Uniform exploration ($\operatorname{ETC-U}$) asymptotically achieves the same performance as $\operatorname{FTPP}$, with an expected difference in CR of order $\mathcal{O}(\sqrt{\log(n)/n})$. This algorithm performs successive elimination, maintaining a set of types that are candidates to have the smallest mean. Those candidates are explored uniformly in ``series'', meaning that a job of each candidate type is selected successively and run until completion.
Lastly, Explore-Then-Commit with Round-Robin based exploration ($\operatorname{ETC-RR}$) is introduced. $\operatorname{ETC-RR}$ differs from $\operatorname{ETC-U}$ in that it runs a job of each candidate type in ``parallel''. This strategy is reminiscent of Round Robin (RR), the optimal online algorithm for adversarial inputs. The CR of $\operatorname{ETC-RR}$ has the same asymptotic behavior in $n$; however, its second-order term is smaller, with a weaker dependence on the means of the largest job types than that of $\operatorname{ETC-U}$. This demonstrates that structure can be leveraged to further improve learning algorithms.
Those findings are empirically evaluated on synthetic data.
\section{Related work}
\paragraph{Scheduling problems} The scheduling literature and problem zoology is large. We focus on static scheduling on a single machine where the objective is to minimize the flow time. Possible generalisations include dynamic scheduling where jobs arrive at different times~\cite{becchettiNonclairvoyantSchedulingMinimize2004}, weighted flow time~\cite{bansalMinimizingWeightedFlow2007} where different jobs have different weights, forbidden preemption~\cite{jeffay1991non, hamada1993bayesian, marban2011learning} where jobs are not allowed to be stopped once they are started, multiple machines~\cite{lawler1978preemptive}, and many more~\cite{durr2020adversarial, tsung1997optimal}.
The performance of algorithms depends on the available information.
\emph{Clairvoyant and non-clairvoyant scheduling}: In clairvoyant scheduling, job sizes are assumed to be known. In this setting, scheduling the shortest jobs first gives the lowest flow time~\cite{schrage1968proof}. In non-clairvoyant scheduling, job sizes are arbitrary and unknown. The Round Robin (RR) algorithm, which gives the same amount of computing time to all jobs, is the best deterministic algorithm, with a competitive ratio of $2 - \frac{2}{N+1} = 2 - o(1)$~\cite{motwani1994nonclairvoyant}. The best randomized algorithm has a competitive ratio of $2 - \frac{4}{N+3} = 2 - o(1)$~\cite{motwani1994nonclairvoyant}.
\emph{Stochastic scheduling}: Stochastic scheduling covers a middle ground where job sizes are known random variables. The field of \emph{optimal stochastic scheduling} is about designing optimal algorithms for stochastic scheduling (see~\cite{caiOptimalStochasticScheduling2014} for a review). When distributions have non-decreasing hazard rate, scheduling the shortest mean first is optimal (see~\cite{caiOptimalStochasticScheduling2014} Corollary 2.1).
We consider exponential job sizes (which have a non-decreasing hazard rate) but in contrast to most of the literature on stochastic scheduling, the means are unknown to the algorithm and are learned as the algorithm runs.
Exponential random variables for job sizes are very common in scheduling~\cite{kampke1989optimal, hamada1993bayesian, cunningham1973scheduling, cai2000asymmetric, cai2005single, pinedo1985scheduling, glazebrook1979scheduling} and similarly for the presence of different types of jobs~\cite{mitzenmacher2020scheduling, hamada1993bayesian, marban2011learning}.
\paragraph{Learning-augmented scheduling algorithms}
In static scheduling, the most natural quantity to predict is the job sizes. Bounds depending on the $\ell_1$ distance between the actual and predicted sizes are established in~\cite{purohitImprovingOnlineAlgorithms2018}. Measuring the error with the $\ell_1$ distance is problematic as it does not reflect the performance of the algorithm scheduling jobs in increasing order of their predicted sizes~\cite{imNonClairvoyantSchedulingPredictions2021,lindermayrNonClairvoyantSchedulingPredictions2022}. Instead, in~\cite{lindermayrNonClairvoyantSchedulingPredictions2022}, authors propose to predict instead the relative order of jobs and they obtain tighter error-dependent bounds on the competitive ratio. They show that learning an order across jobs can in principle be done through empirical risk minimization providing a way to obtain the needed predictions. However, using empirical risk minimization supposes the existence of multiple static scheduling tasks with roughly the same job order.
In contrast, we assume that jobs are typed based on their expected size and learn the order among types as the algorithm runs. In some sense, we deal with \emph{cold-start} augmented learning as no predictions are available before the algorithm starts.
\section{Setting and Notations}
We recall that the considered problem is formalized as follows.
There are $N$ jobs that can be of $K$ different types to be scheduled on a single machine. We assume that $N=nK$, i.e., there are $n$ jobs of each type. The sizes (also called processing times) of the jobs of type $k$ are denoted $(P^{k}_i)_{i \in [n]}$ and are independent samples from an exponential random variable of parameter $\lambda_k$, where we adopt the following convention:
\[P_i^k \sim \Ecal(\lambda_k)\implies \EE[P_i^k]= \lambda_k.\]
By extension, we call $\lambda_k$ the mean size of type $k$.
We assume without loss of generality that the mean sizes of the $K$ types are ordered: $\lambda_1\leq \lambda_2 \leq \ldots \leq \lambda_{K}$ and we denote $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_K)$. From now on, $\operatorname{OPT}$ denotes the optimal offline algorithm, which is aware of the realized size of each job and processes jobs in increasing order of size.
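
As a minimal illustration of this setting, with arbitrary parameter values, the snippet below draws an instance with $K$ types and $n$ jobs per type, computes the flow time of $\operatorname{OPT}$, which processes realized sizes in increasing order, and of the realization-agnostic rule that processes types in increasing order of their means (the $\operatorname{FTPP}$ policy studied in the next section), and estimates the ratio of their expected costs by Monte Carlo.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

def flow_time(sizes_in_processing_order):
    """Sum of completion times when jobs are run one after the other."""
    return np.sum(np.cumsum(sizes_in_processing_order))

def draw_instance(lambdas, n):
    # P[k, i] = size of job i of type k, with mean lambdas[k]
    scales = np.asarray(lambdas)[:, None]
    return rng.exponential(scales, size=(len(lambdas), n))

def cost_opt(P):
    return flow_time(np.sort(P.ravel()))      # shortest realized job first

def cost_ftpp(P, lambdas):
    order = np.argsort(lambdas)               # smallest-mean type first
    return flow_time(P[order].ravel())

lambdas, n, sims = [1.0, 4.0], 200, 300
ftpp_costs, opt_costs = [], []
for _ in range(sims):
    P = draw_instance(lambdas, n)
    ftpp_costs.append(cost_ftpp(P, lambdas))
    opt_costs.append(cost_opt(P))
print("estimated CR of FTPP:", np.mean(ftpp_costs) / np.mean(opt_costs))
\end{verbatim}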
\section{Scheduling jobs with exponential sizes without learning}
In this section, we study the two extreme cases. We start with the setting of full knowledge of all expected job sizes (Section~\ref{subsec:ftpp}) and
then investigate the setting without prior information on expected job sizes (Section~\ref{subsec:lowerbound}).
\subsection{FTPP: Follow The Perfect Predictions - Structure can be leveraged}
\label{subsec:ftpp}
In this section, we assume that the mean sizes of all jobs are known, or in other words that the algorithm has access, prior to seeing any jobs, to a perfect prediction. Follow-The-Perfect-Prediction (FTPP) is the algorithm that completes jobs by increasing expected size. It is the optimal strategy for an algorithm unaware of the realisations (see~\cite{caiOptimalStochasticScheduling2014}, Corollary 2.1). The study of FTPP illustrates the fact that the cases $K=1$ and $K > 1$ are structurally and fundamentally different.
The next proposition yields an upper bound on the CR of FTPP:
\begin{prop}
\label{prop:CR-FTPP-lt2}
Assume that $\lambda_1 \leq \dots \leq \lambda_K$ with at least one strict inequality. Then the CR of $\operatorname{FTPP}$ is strictly lower than 2.
\end{prop}
\begin{proof}[Proof sketch (full proof in Appendix~\ref{app:CR-FTPP-lt2})]
Consider an algorithm $A$ that schedules jobs in an order $\sigma$ such that, for a job $i$ of mean size $\lambda_i$ and a job $j$ of mean size $\lambda_j$, $\lambda_i < \lambda_j \implies P(\sigma(i) < \sigma(j) \mid i < j) > \frac{\lambda_{j}}{\lambda_{i} + \lambda_{j}}$. We show that $A$ has a CR smaller than 2, which entails that $\operatorname{FTPP}$ has a CR smaller than 2 as well, since $\operatorname{FTPP}$ is the optimal algorithm in this setting.
\end{proof}
Proposition~\ref{prop:CR-FTPP-lt2} shows that, asymptotically, $\operatorname{FTPP}$ has a lower CR than $\operatorname{RR}$, the best non-clairvoyant algorithm. In order to quantify this statement, let us be more precise. First, consider the case of $K=2$ types of jobs with $n$ jobs per type with $\lambda_1 = 1$ and $\lambda_2 = \lambda > 1$.
In this case the asymptotic CR, as a function of $\lambda$, is given by:
\[
\lim_{ n \to \infty} \frac{\EE[C_{\operatorname{FTPP}}]}{\EE[C_{\operatorname{OPT}}]} = \frac{2(1 + \lambda)^2 + 4 (1 + \lambda)}{(1 + \lambda)^2 + 4 \lambda}=2-4\frac{\lambda-1}{{(1 + \lambda)^2 + 4 \lambda}}.
\]
We refer the reader to Appendix~\ref{app:CR-FTPP-2types} for the detailed derivation.
This function equals $2$ at $\lambda=1$, decreases to reach its minimum value $1+\sqrt{2}/2 \approx 1.71$ at $\lambda^* = 1 + 2 \sqrt{2}$, and then increases back towards $2$ as $\lambda \to \infty$.
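To verify the stated minimum, note that minimizing this ratio amounts to maximizing $(\lambda-1)/\big((1+\lambda)^2+4\lambda\big)$; the first-order condition reads $\lambda^{2}-2\lambda-7=0$, whose positive root is $\lambda^{*}=1+2\sqrt{2}$, and substituting back gives
\[
2-\frac{4(\lambda^*-1)}{(1+\lambda^*)^2+4\lambda^*}=2-\frac{8\sqrt{2}}{16+16\sqrt{2}}=1+\frac{\sqrt{2}}{2}\approx 1.71.
\]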
Then, we consider the general case:
\begin{prop}
\label{prop:CR-FTPP-convex}
The asymptotic competitive ratio of FTPP for $K$ types of job with mean sizes $\boldsymbol{\lambda} = (\lambda_{k})_{k=1}^K$ with $\lambda_1 \leq \dots \leq \lambda_K$ is given by:
\begin{align*}
\lim_{ n \to \infty} \frac{\EE[C_{\operatorname{FTPP}}]}{\EE[C_{\operatorname{OPT}}]} &= \frac{\sum_{k=1}^K (\frac{1}{2}+K-k)\lambda_{k}}{\sum_{k=1}^K \left(\frac{1}{4} \lambda_{k} +\sum_{\ell=k+1}^K \frac{\lambda_{k} \lambda_\ell}{\lambda_{k}+\lambda_\ell} \right)} \\
&=2- \frac{\sum_{k=1}^K \lambda_k\sum_{\ell=k+1}^K\frac{\lambda_\ell-\lambda_k}{\lambda_k+\lambda_\ell}}{\sum_{k=1}^K \left(\frac{1}{4} \lambda_{k} +\sum_{\ell=k+1}^K \frac{\lambda_{k} \lambda_\ell}{\lambda_{k}+\lambda_\ell}\right)} =: \operatorname{CR}_{\operatorname{FTPP}}(\boldsymbol{\lambda}, K).
\end{align*}
In addition, $\operatorname{CR}^*_{\operatorname{FTPP}}(K)$, the minimum over $\boldsymbol{\lambda}$ of the asymptotic competitive ratio of FTPP, is decreasing in $K$ and can be computed by solving a constrained optimization problem with a strictly quasi-convex objective and compact convex constraints.
\end{prop}
See Appendix~\ref{app:CR-FTPP-convex} for a proof.
We can upper-bound the limit as $K$ increases (details are postponed to Appendix~\ref{app:cr_approx}) and obtain
\begin{equation}
\lim_{K\rightarrow \infty} \operatorname{CR}_{\operatorname{FTPP}}^*(K) \leq \frac{4}{\pi} \approx 1.273, \label{eq:cr_approx}
\end{equation}
therefore the asymptotic CR of $\operatorname{FTPP}$ can reach much lower values than the asymptotic CR of $\operatorname{RR}$.
Even in the average analysis where we suppose that the $\boldsymbol{\lambda}$ themselves are distributed according to some law, we show that the CR can be strictly lower than the CR of RR. We shall illustrate this by assuming that the $\lambda_i$ are sampled from a uniform distribution on $[0,1]$.
Then, $\overline{\operatorname{CR}}_{\operatorname{FTPP}}(K,n)=\frac{\EE_{\boldsymbol{\lambda}}[C_{\operatorname{FTPP}}]}{\EE_{\boldsymbol{\lambda}}[C_{\operatorname{OPT}}]}$ is decreasing in $K$ for $n > 8$, increasing in $n$, and
\begin{equation}
\label{eq:expected_cr}
\lim_{\substack{K \rightarrow \infty \\ n \rightarrow \infty}} \overline{\operatorname{CR}}_{\operatorname{FTPP}}(K,n)=10/(11-8\log(2)) \approx 1.629
\end{equation}
Detailed derivations are available in Appendix~\ref{app:expected_cr}.
Together Proposition~\ref{prop:CR-FTPP-convex}, Equation~\eqref{eq:cr_approx} and Equation~\eqref{eq:expected_cr} demonstrate that the gap between the best non-clairvoyant algorithm and the best algorithm aware of the distribution of the jobs is large. This motivates the design of efficient learning algorithms aiming at reaching the performance of FTPP.
\subsection{Lower bound - Learning structure is crucial}
\label{subsec:lowerbound}
A ``non-clairvoyant'' algorithm has no a-priori knowledge of the job sizes, not even the existence of types.
There exists a lower bound of $2 - o(1)$ on the CR of any (deterministic or randomized) non-clairvoyant algorithm in the adversarial case~\cite{motwani1994nonclairvoyant}. This lower bound is attained when the input is $n$ jobs with exponential sizes, assuming they all have the same mean, which corresponds to $K=1$ in the above notations. In the next proposition, we show that the lower bound remains of the same order even when $K\geq 2$, although such inputs have inherently more structure, as exposed in the previous section.
\begin{prop}
\label{prop:lowerbound}
In the
limit of large $n$, any (deterministic or randomized) non-clairvoyant algorithm has competitive ratio at least $2 - o(1)$ on input $(P_i^{k})_{i\in [n],k \in [K]}$.
\end{prop}
\begin{proof}[Proof sketch (full proof in Appendix~\ref{app:lowerbound})]
From~\cite{motwani1994nonclairvoyant}, the total expected flow time of any deterministic algorithm $A$ is given by the sum of the time spent computing all jobs and the time lost waiting as jobs delay each other. We first show that, asymptotically, only the second matters.
Then, we consider $T_{ij}^A$, the amount of time that job $i$ and job $j$ delay each other. As the algorithm is unaware of the expected job size order, its run is independent of whether the expected size of job $i$ is smaller or greater than that of $j$. This holds because a non-clairvoyant algorithm has no information on job expected sizes and cannot learn them as it ignores that jobs of the same type have same distribution. As an adversary, we can therefore choose the order so that the algorithm incurs the largest flow time.
A careful analysis then provides $\EE[T_{ij}^A] \geq 2 \EE[T_{ij}^{\operatorname{OPT}}]$, where $\operatorname{OPT}$ is the optimal realisation-aware algorithm. A similar reasoning applies to randomized algorithms.
\end{proof}
Proposition~\ref{prop:lowerbound} states that, asymptotically and compared to the adversarial case, assuming that jobs have exponential sizes does not make the problem intrinsically easier. This proposition is easily extended to the more general setting where jobs do not have types (i.e., when $K=N$). Note that this extension does not reduce to the adversarial setting studied in~\cite{motwani1994nonclairvoyant} as job sizes are sampled from an exponential distribution. We refer the reader to Appendix~\ref{app:lowerbound} for more details.
\section{Learning Algorithms}
We introduce in this section two different learning algorithms. The first one, $\operatorname{ETC-U}$, naively and uniformly explores to estimate each type's expected size. The exploration strategy of the second one, $\operatorname{ETC-RR}$, is inspired by the worst-case optimal algorithm, $\operatorname{RR}$. We prove that the CR of both asymptotically matches that of $\operatorname{FTPP}$. However, the second-order term in the upper bound of $\operatorname{ETC-RR}$ is smaller than that of $\operatorname{ETC-U}$.
Those findings are confirmed by empirical evidence on synthetic data (see Section~\ref{sec:experiments}).
\subsection{Explore then commit with uniform exploration: ETC-U}%
Explore-Then-Commit with Uniform exploration ($\operatorname{ETC-U}$) is a successive elimination algorithm, where the relative ranking of each type is learned through uniform exploration.
Had the relative ranks of the type means been known, the optimal online algorithm would process jobs by increasing mean size, type by type. However, that knowledge is not available, and it is thus learned.
While $\operatorname{ETC-U}$ runs, it maintains a set $\mathcal{A}$ of types that are candidates for the lowest mean size, among the set $\mathcal{U}$ of types with at least one remaining job.
At each iteration, $\operatorname{ETC-U}$ chooses a job of type $k$, where $k$ is the type in $\mathcal{A}$ with the lowest number of finished jobs.
This job is executed until it finishes.
Then $\mathcal{U}$ and $\mathcal{A}$ are updated and the procedure repeats until no more jobs are available.
At a given iteration, note $m_k$ and $m_\ell$ the number of jobs of type $k$ and $\ell$ that have been computed up to that iteration. Define:
\[
\hat{r}_{k, \ell}^{\min(m_{k}, m_{\ell})} = \frac{\sum_{i=1}^{\min(m_{k}, m_{\ell})} \mathds{1}\{P^k_i < P^\ell_i\}}{\min(m_{k}, m_{\ell})} \quad \text{and}\quad
\delta_{k, \ell}^{\min(m_{k}, m_{\ell})} = \sqrt{\frac{\log(2n^2)}{2 \min(m_{k}, m_{\ell})}}.
\]
A type $\ell$ is excluded from $\mathcal{A}$ if $\ell \notin \mathcal{U}$ or (Line~\ref{algo:etcu:elimination}) if there exists a type $k$ such that
\[\hat{r}_{k, \ell}^{\min(m_{k}, m_{\ell})}-\delta_{k, \ell}^{\min(m_{k}, m_{\ell})}>0.5,
\]
which we show in Equation~\eqref{eq:concentration_good_event_ETCU} (in appendix) implies $\lambda_{k}<\lambda_\ell$ w.h.p. We say that type $k$ eliminates type $\ell$.
This means job type $\ell$ is no longer a candidate for the remaining job type with smallest expectation.
Note that since $\hat{r}_{\ell, k} = 1 - \hat{r}_{k, \ell}$ and $\delta_{k, \ell} = \delta_{\ell, k}$, if $ \hat{r}_{k, \ell}^{\min(m_{k}, m_{\ell})}+\delta_{k, \ell}^{\min(m_{k}, m_{\ell})}<0.5$, then $0.5 < \hat{r}_{\ell, k}^{\min(m_{k}, m_{\ell})}-\delta_{\ell, k}^{\min(m_{k}, m_{\ell})}$ and $k$ is eliminated by $\ell$.
Whenever $\mathcal{A}$ contains only one type, all jobs of this type are run until they are all finished. Whenever all jobs from type $p$ are finished, type $p$ is removed from $\mathcal{U}$ and therefore from $\mathcal{A}$. This means that types that were eliminated by $p$ can be candidates again.
\begin{algorithm}[H]
\caption{$\operatorname{ETC-U}$}
\label{algo:etcu}
\begin{algorithmic}[1]
\STATE {\bfseries Input :} $n \geq 1$ (number of jobs of each type), $K \geq 2$ (number of types)
\STATE For all pairs of different types $k, \ell$ initialize $\delta_{k, \ell} = 0$, $\hat{r}_{k, \ell} = 0$ and $h_{k, \ell} = 0$
\STATE For all types $k$, set $m_{k} = 0$
\REPEAT
\STATE $\mathcal{U}$ is the set of types with at least one remaining job
\STATE $\mathcal{A} = \{ \ell \in \mathcal{U}, \forall k \neq \ell , \enspace \hat{r}_{k, \ell} - \delta_{k, \ell} \leq 0.5 \} \label{algo:etcu:elimination}$
\STATE Select the type $\ell$ with the lowest number of finished jobs $\ell = \argmin_{k \in \mathcal{A}} m_{k}$ and run one job of type $\ell$ yielding a size $P_{m_{\ell} + 1}^{\ell}$.
\STATE $m_{\ell} = m_{\ell} + 1$
\FOR {$k, \ell$ in $\mathcal{U}$, $k \neq \ell$}
\STATE $h_{k, \ell} = \sum_{i=1}^{\min(m_{k}, m_{\ell})} \mathds{1}\{P^k_i < P^{\ell}_i\}$
\STATE $\delta_{k,\ell} = \sqrt{\frac{\log(2n^2)}{2 \min(m_{k}, m_{\ell})}}$
\STATE $\hat{r}_{k, \ell} = \frac{h_{k, \ell}}{\min(m_{k}, m_{\ell})}$
\ENDFOR
\UNTIL{$\mathcal{U}$ is empty}
\end{algorithmic}
\end{algorithm}
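To make the elimination rule concrete, the following minimal Python sketch simulates $\operatorname{ETC-U}$ under the assumptions of our model: $n$ jobs per type, sizes drawn i.i.d.\ from exponential distributions with unknown type means, and a size revealed only once its job completes. Function and variable names are ours and the instance ($\lambda=(0.2,1)$, $n=200$) is purely illustrative; in line with the remark that types eliminated by an exhausted type become candidates again, only types still in $\mathcal{U}$ are allowed to eliminate here. This is a sketch of the logic of Algorithm~\ref{algo:etcu}, not the code used in Section~\ref{sec:experiments}.
\begin{verbatim}
import numpy as np
from math import log, sqrt

def etc_u(lambdas, n, rng):
    K = len(lambdas)
    sizes = [list(rng.exponential(lam, n)) for lam in lambdas]  # remaining jobs
    done = [[] for _ in range(K)]            # observed sizes of finished jobs
    t, flow_time = 0.0, 0.0
    while any(sizes):
        U = [k for k in range(K) if sizes[k]]
        A = []                               # candidate types (not eliminated)
        for l in U:
            eliminated = False
            for k in U:
                if k == l or not done[k] or not done[l]:
                    continue
                m = min(len(done[k]), len(done[l]))
                r = sum(done[k][i] < done[l][i] for i in range(m)) / m
                delta = sqrt(log(2 * n ** 2) / (2 * m))
                if r - delta > 0.5:          # type k eliminates type l
                    eliminated = True
            if not eliminated:
                A.append(l)
        l = min(A or U, key=lambda k: len(done[k]))  # fewest finished jobs
        p = sizes[l].pop()                   # run one job of type l to completion
        t += p
        flow_time += t                       # this job completes at time t
        done[l].append(p)
    return flow_time

rng = np.random.default_rng(0)
print(etc_u([0.2, 1.0], n=200, rng=rng))     # total flow time of one run
\end{verbatim}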
\begin{prop}\label{prop:bound_ETCUk}
The expected cost of $\operatorname{ETC-U}$ is upper bounded by:
\begin{align*}
\EE[C_{\operatorname{ETC-U}}]\leq& \EE[C_{\operatorname{FTPP}}] +\sum_{k\in [K]}\left[\frac{1}{2}(k-1)(2K-k)+(K-k)^2\right]\lambda_k n \sqrt{8n \log(2 n^2)}\\&+\frac{K^3}{n}\EE[C_{\operatorname{OPT}}]
\end{align*}
\end{prop}
\begin{proof}[Proof sketch (full proof in Appendix~\ref{app:bound_ETCUk})]
The proof starts by showing that, w.h.p., the empirical quantities $\hat{r}$ concentrate around their means, Equation~\eqref{eq:concentration_good_event_ETCU} (in appendix). This implies that, w.h.p., the algorithm never takes the wrong decision of eliminating the remaining job type of smallest mean. We then bound the worst possible cost for the algorithm in the case where it takes wrong decisions and show it is always smaller than $2K\EE[C_{OPT}]$, Equation~\eqref{eq:boundworsecostETCU} (in appendix). Finally, we show that when the empirical quantities do concentrate around their expectations, the number of exploration steps before a sub-optimal job type is eliminated is bounded, Equation~\eqref{eq:boundm_ETCUk} (in appendix). This bounds the total cost of exploration.
\end{proof}
\subsection{Explore then commit with Round-Robin based exploration: $\operatorname{ETC-RR}$}
\label{subsec:RR}
Like $\operatorname{ETC-U}$, $\operatorname{ETC-RR}$ is a successive elimination algorithm, except that the exploration is performed through a Round-Robin type procedure that runs many jobs ``in parallel'', whereas $\operatorname{ETC-U}$ runs jobs ``in series''.
As previously, $\operatorname{ETC-RR}$ maintains a set $\mathcal{A}$ of types that are candidates to have the smallest mean size among types in $\mathcal{U}$. During the exploration phase, $\operatorname{ETC-RR}$ chooses one job for every type in $\mathcal{A}$ and runs them in parallel. When a job finishes, if the job's type is still active, it is replaced by a new job of the same type. Intuitively, this should diminish the cost of exploration when the chosen jobs have very different sizes: when $|\mathcal{A}|$ jobs are processed in parallel, the smallest job (with size $p_{\min}$) finishes first after an elapsed time $|\mathcal{A}| p_{\min}$. Thus, while the largest job (with size $p_{\max}$) runs, at least $\lfloor p_{\max}/p_{\min}\rfloor$ smaller jobs may finish.
The statistics needed to construct $\mathcal{A}$ differ from those used in $\operatorname{ETC-U}$.
At a given time, $h_{k, \ell}$ is the number of times a job of type $k$ has finished while $\ell$ and $k$ were run in parallel.
Define:
\[
\hat{r}_{k, \ell} = \frac{h_{k, \ell}}{h_{k, \ell}+h_{\ell, k}} \quad \text{and}\quad
\delta_{k, \ell} = \sqrt{\frac{\log(2n^2)}{2(h_{k, \ell}+h_{\ell, k})}}.
\]
The rest of the algorithm is the same as $\operatorname{ETC-U}$ except that the elimination rule in Line~\ref{algo:etcrr:elim} is now justified by Equation~\eqref{eq:ETCRRboundbadevent} (in appendix).
\begin{prop}\label{prop:ETC-RR-k-types}
The expected cost of $\operatorname{ETC-RR}$ is upper bounded by:
\begin{align*}
\EE[C_{\operatorname{ETC-RR}}]\leq &\EE[C_{\operatorname{FTPP}}] + \sum_{\ell=1}^{K-1}(K-\ell)^2 2n \lambda_\ell\sqrt{4n\log(2n^2)} \\ &+ 2(K-1)^2n\sum_{\ell=1}^{K-1}\lambda_\ell+\frac{2K^3}{n}\EE[C_{\operatorname{OPT}}]
\end{align*}
\end{prop}
\begin{proof}[Proof sketch (full proof in Appendix~\ref{app:ETC-RR-k-types})]
The technical arguments differ, but the general structure of the proof is the same as that of Proposition \ref{prop:bound_ETCUk}.
\end{proof}
\begin{algorithm}[htb]
\caption{ETC-RR}
\label{algo:etc-rr-k}
\begin{algorithmic}[1]
\STATE {\bfseries Input :} $n \geq 1$ (number of jobs of each type), $K \geq 2$ (number of types),
\STATE For all pairs of different types $k, \ell$ initialize $\delta_{k, \ell} = 0$, $\hat{r}_{k, \ell} = 0$ and $h_{k, \ell} = 0$
\STATE For all types $k$, set $c_{k} = 0$
\REPEAT
\STATE $\mathcal{U}$ is the set of types with at least one remaining job
\STATE $\mathcal{A} = \{ \ell \in \mathcal{U}, \forall k \neq \ell , \enspace \hat{r}_{k, \ell} - \delta_{k, \ell} \leq 0.5 \} \label{algo:etcrr:elim}$
\STATE Run jobs $ (P^{k}_{c_k+1})_{k \in \mathcal{A}}$ in parallel until a job finishes and denote $\ell$ the type of this job
\STATE $c_\ell = c_\ell + 1$
\FOR {$k \in \mathcal{A}, k \neq \ell$}
\STATE $h_{ \ell, k} = h_{\ell, k} + 1$
\STATE $\delta_{ \ell, k} = \sqrt{\frac{\log(2 n^2)}{2 (h_{\ell, k} + h_{k,\ell })}}$
\STATE $\hat{r}_{\ell,k} = \frac{h_{ \ell,k}}{h_{k, \ell} + h_{ \ell, k }}$
\STATE $\hat{r}_{k,\ell} = \frac{h_{ k,\ell}}{h_{k, \ell} + h_{ \ell, k }}$
\ENDFOR
\UNTIL{$\mathcal{U}$ is empty}
\end{algorithmic}
\end{algorithm}
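The exploration phase of $\operatorname{ETC-RR}$ can be sketched in the same spirit. In the idealised Round-Robin sketch below, the jobs of the active types share the machine equally, so the job with the smallest remaining size finishes first after an elapsed time of $|\mathcal{A}|$ times that size, and the counters $h_{k,\ell}$ are updated as in Algorithm~\ref{algo:etc-rr-k}. As before, names and the instance are ours, jobs of eliminated types are simply paused, and this is an illustration of the mechanism rather than the code of Section~\ref{sec:experiments}.
\begin{verbatim}
import numpy as np
from math import log, sqrt

def etc_rr(lambdas, n, rng):
    K = len(lambdas)
    waiting = [list(rng.exponential(lam, n)) for lam in lambdas]  # unstarted jobs
    h = np.zeros((K, K))           # h[k, l]: k finished while run against l
    running = {}                   # type -> remaining size of its started job
    t, flow_time = 0.0, 0.0
    while True:
        U = [k for k in range(K) if waiting[k] or k in running]
        if not U:
            return flow_time
        def eliminated(l):
            for k in U:
                tot = h[k, l] + h[l, k]
                if k != l and tot > 0:
                    delta = sqrt(log(2 * n ** 2) / (2 * tot))
                    if h[k, l] / tot - delta > 0.5:
                        return True
            return False
        A = [l for l in U if not eliminated(l)] or U
        for k in A:                            # start a job for idle active types
            if k not in running and waiting[k]:
                running[k] = waiting[k].pop()
        active = {k: running[k] for k in A if k in running}
        winner = min(active, key=active.get)   # smallest remaining size finishes first
        quantum = active[winner]
        t += len(active) * quantum             # the machine is shared equally
        flow_time += t                         # the finished job completes at time t
        for k in active:
            running[k] -= quantum
            if k != winner:
                h[winner, k] += 1              # the winner beat k in this round
        del running[winner]

rng = np.random.default_rng(0)
print(etc_rr([0.2, 1.0], n=200, rng=rng))
\end{verbatim}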
\subsection{Comparison of the two algorithms }
\label{subsec:comparison}
The competitive ratio of both algorithms is asymptotically that of $\operatorname{FTPP}$. Indeed, since it always holds that $\EE[C_{\operatorname{OPT}}]\geq \lambda_1 \frac{n^2}{2}$, Propositions \ref{prop:bound_ETCUk} and \ref{prop:ETC-RR-k-types} give:
\[
\text{CR}_{\operatorname{ETC-U}} = \text{CR}_{\operatorname{FTPP}}+ \mathcal{O}\left(\sqrt{\frac{\log(n)}{n}}\right) \text{ and } \text{CR}_{\operatorname{ETC-RR}} = \text{CR}_{\operatorname{FTPP}}+ \mathcal{O}\left(\sqrt{\frac{\log(n)}{n}}\right).
\]
On the one hand, the leading term in the cost is the same for both algorithms. On the other hand, the second-order term can be much smaller for $\operatorname{ETC-RR}$.
To illustrate this claim, consider the case where there are two types of jobs. The jobs of type $1$ have expected size $\lambda_1$ and the jobs of type $2$ have expected size $\lambda_2$. Instantiating the bounds of Propositions \ref{prop:bound_ETCUk} and \ref{prop:ETC-RR-k-types} in this setting, we get:
\begin{align}\label{eq:bound2typesETCU}
\EE[C_{\operatorname{ETC-U}}] \leq \EE[C_{\operatorname{FTPP}}] + n (\lambda_1 + \lambda_2) \sqrt{8n \log(2 n^2)} + \frac{8}{n}\EE[C_{\operatorname{OPT}}],
\end{align}
and
\begin{align}\label{eq:bound2typesETCRR}
\EE[C_{\operatorname{ETC-RR}}]\leq \EE[C_{\operatorname{FTPP}}] + 2n \lambda_1(\sqrt{4n \log(2 n^2)}+1) +\frac{16}{n}\EE[C_{\operatorname{OPT}}].
\end{align}
If $\lambda_2 \gg \lambda_1$ the bound in Equation \eqref{eq:bound2typesETCU} is much larger than the bound in Equation \eqref{eq:bound2typesETCRR}.
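As a quick numerical illustration, with the values used in the first experiment of Section~\ref{sec:experiments} ($\lambda_1=0.02$, $\lambda_2=1$, $n=2000$), the exploration terms of Equations \eqref{eq:bound2typesETCU} and \eqref{eq:bound2typesETCRR} can be evaluated directly; the figures below are only meant to convey orders of magnitude.
\begin{verbatim}
from math import log, sqrt

n, lam1, lam2 = 2000, 0.02, 1.0
etc_u_term  = n * (lam1 + lam2) * sqrt(8 * n * log(2 * n ** 2))
etc_rr_term = 2 * n * lam1 * (sqrt(4 * n * log(2 * n ** 2)) + 1)
print(round(etc_u_term), round(etc_rr_term), round(etc_u_term / etc_rr_term, 1))
# the ETC-U exploration term is roughly 36 times larger on this instance
\end{verbatim}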
\section{Experiments}
\label{sec:experiments}
\begin{figure}
\centering
\includegraphics[width=.65\textwidth]{./figures/etc-vs-opt-2-jobs}
\caption{\textbf{CR on jobs with 2 different types}. $k=2$, $n=2000$, $\lambda_2=1$ and $\lambda_1$ takes a grid of values in $[0.02,1]$. We report the median CR using $100$ different seeds. Error bars represent the first and last decile. The black dotted line represents the theoretical best possible performance of $\operatorname{FTPP}$ as $n$ goes to infinity.}
\label{fig:etc-vs-opt-2jobs}
\end{figure}
In this section, we design synthetic experiments to compare $\operatorname{ETC-U}$, $\operatorname{ETC-RR}$, $\operatorname{RR}$ and $\operatorname{FTPP}$. All code is written in Python. We use matplotlib~\cite{hunter2007matplotlib} for plotting, and numpy~\cite{harris2020array} for array manipulations. The above libraries use open-source licenses. Computations are run on a laptop in less than 5 minutes.
In a first experiment, reported in Figure~\ref{fig:etc-vs-opt-2jobs}, algorithms are run with only $K=2$ types and a number of jobs per type fixed to $n=2000$. Jobs of type 1 have mean size $\lambda_1$, which varies between $0.02$ and $1$, and jobs of type 2 have $\lambda_2=1$.
As can be seen from the figure, the CR of $\operatorname{ETC-U}$ and $\operatorname{ETC-RR}$ is only slightly above the one of $\operatorname{FTPP}$.
On the other hand, both $\operatorname{ETC-U}$ and $\operatorname{ETC-RR}$ outperform $\operatorname{RR}$.
The CRs of both learning algorithms vary similarly with $\lambda_1$, which is consistent with the theoretical result that they have the same asymptotic performance.
In particular, it can be seen that the theoretical value for the minimum of FTPP (plotted as a black dotted line) corresponds to what we observe in practice.
As highlighted in Section~\ref{subsec:comparison}, the cost of exploration in $\operatorname{ETC-RR}$ does not depend on $\lambda_2$, the largest mean size. Therefore, it is much lower when $\lambda_1$ is low. This can be seen in the figure, where for $\lambda_1 \in [0.02, 0.2]$ the CR of $\operatorname{ETC-RR}$ is almost indistinguishable from the CR of $\operatorname{FTPP}$, whereas the CR of $\operatorname{ETC-U}$ is much higher. This difference diminishes as $\lambda_1$ grows.
In a second experiment, reported in Figure~\ref{fig:etc-vs-opt-nsamples} (left graphic), the number of types $K$ varies while the number of jobs is fixed to $n=2000$. The mean sizes $(\lambda_k)_{k\in[K]}$ are i.i.d. samples from a uniform distribution in $[0, 1]$.
As seen in Section~\ref{subsec:ftpp}, the expected CR of $\operatorname{FTPP}$ decreases with $K$. This behaviour is also observed in this experiment. As can be seen in the figure, for all values of $K$, $\operatorname{ETC-RR}$ outperforms $\operatorname{ETC-U}$ thanks to its efficient exploration. Both $\operatorname{ETC-U}$ and $\operatorname{ETC-RR}$ outperform $\operatorname{RR}$.
In a third experiment, reported in Figure~\ref{fig:etc-vs-opt-nsamples} (right graphic), the number of jobs per type $n$ varies while the number of types is fixed to $K=3$. As in the previous experiment,
the mean sizes $(\lambda_k)_{k\in[K]}$ are i.i.d. samples from a uniform distribution in $[0, 1]$.
For small $n$, neither $\operatorname{ETC-U}$ nor $\operatorname{ETC-RR}$ has enough samples to commit. Therefore $\operatorname{ETC-U}$ finishes one job of each type alternately, which is not as efficient as $\operatorname{RR}$. In contrast, $\operatorname{ETC-RR}$ follows a strategy similar to that of $\operatorname{RR}$, yielding a similar competitive ratio even when no learning occurs. When the number of samples increases, the gap between $\operatorname{ETC-U}$, $\operatorname{ETC-RR}$ and $\operatorname{FTPP}$ decreases, as the amount of time spent exploring shrinks compared to the time spent exploiting.
\begin{figure}[H]
\begin{minipage}[b]{0.48\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{./figures/etc-vs-opt-k-jobs-uniform}
\end{minipage}
\begin{minipage}[b]{0.48\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{./figures/etc-vs-opt-vary-nsamples}
\end{minipage}
\caption{\textbf{CR on jobs with $K$ different types}.
On the left graphic, $n=2000$ and $K$ varies. On the right graphic, $K=3$ and $n$ takes a grid of values.
We take $\lambda_i \sim \mathcal{U}(0, 1)$. We report the median CR using $100$ different seeds. Error bars represent the first and last decile.}
\label{fig:etc-vs-opt-nsamples}
\end{figure}
\section*{Conclusion}
We proved that online learning and online algorithms can be efficiently combined to produce algorithms that learn while they run. We developed this idea in the context of static scheduling with exponential processing times, a hypothesis that is shown not to make the problem easier. Two algorithms, $\operatorname{ETC-U}$ and $\operatorname{ETC-RR}$, are introduced. Both yield the same asymptotic performance as if job mean sizes were known, but $\operatorname{ETC-RR}$, which takes inspiration from the best non-clairvoyant algorithm, has a much faster rate.
These results are validated experimentally on synthetic datasets.
Future work might focus on whether it is possible to combine our cold-start approach with available predictions from external sources to achieve even more efficient exploration. Another possible extension would be to consider job types with more structure (for instance, the expected size could be an unknown linear mapping of some features representing the job).
This theoretical work does not raise any obvious ethical issues.
\section*{Acknowledgment and funding disclosure}
This work was supported by the French government under management of Agence Nationale
de la Recherche as part of the "Investissements d'avenir" program, references ANR20-CE23-0007-01 FAIRPLAY project and LabEx Ecodec/ANR-11-LABX-0047 as well as from the grant number ANR-19-CE23-0026.
\newpage
\bibliographystyle{plain}
\section{Introduction}
Competitive team sports, including, for example, American football, soccer,
basketball and hockey, serve as model systems for social competition, a
connection that continues to foster intense popular interest. This passion
stems, in part, from the apparently paradoxical nature of these sports. On
one hand, events within each game are unpredictable, suggesting that chance
plays an important role. On the other hand, the athletes are highly skilled
and trained, suggesting that differences in ability are fundamental. This
tension between luck and skill is part of what makes these games exciting for
spectators and it also contributes to sports being an exemplar for
quantitative modeling, prediction and human
decision-making~\cite{mosteller1997lessons,palacios2003professionals,ayton:fischer:2004,albert2005anthology},
and for understanding broad aspects of social competition and
cooperation~\cite{barney:1986,michael:chen:2005,romer:2006,johnson:2006,berger:2011,radicchi:2012}.
In a competitive team sport, the two teams vie to produce events (``goals'')
that increase their score, and the team with the higher score at the end of
the game is the winner. (This structure is different from individual sports
like running, swimming and golf, or judged sports, like figure skating,
diving, and dressage.)~ We denote by $X(t)$ the instantaneous difference in
the team scores. By viewing game scoring dynamics as a time series, many
properties of these competitions may be quantitatively
studied~\cite{reed:hughes:2006,galla:farmer:2013}. Past work has
investigated, for example, the timing of scoring
events~\cite{thomas:2007,everson2008composite,heuer:etal:2010,buttrey2011estimating,yaari:eisenman:2011,gabel:redner:2012,merritt:clauset:2014},
long-range correlations in scoring~\cite{ribeiro2012anomalous}, the role of
timeouts~\cite{saavedra:etal:2012}, streaks and ``momentum'' in
scoring~\cite{gilovich1985hot,vergin:2000,sire:redner:2009,arkes2011finally,yaari:eisenman:2011,yaari:david:2012},
and the impact of spatial positioning and playing field
design~\cite{bourbousson:seve:mcgarry:2012,merritt:clauset:2013}.
In this paper, we theoretically and empirically investigate a simple yet
decisive characteristic of individual games: the times in a game when the
lead changes. A lead change occurs whenever the score difference $X(t)$
returns to 0. Part of the reason for focusing on lead changes is that these
are the points in a game that are often the most exciting. Although we are
interested in lead-change dynamics for all sports, we first develop our
mathematical results and compare them to data drawn from professional
basketball, where the agreement between theory and data is the most
compelling. We then examine data for three other major competitive American
team sports:\ college and professional football, and professional hockey, and
we provide some commentary as to their differences and similarities.
Across these sports, we find that many of their statistical properties are
explained by modeling the evolution of the lead $X$ as a simple random walk.
More strikingly, seemingly unrelated properties of lead statistics,
specifically, the distribution of the times $t$: (i) for which one team is
leading $\mathcal{O}(t)$, (ii) for the last lead change $\mathcal{L}(t)$, and
(iii) when the maximal lead occurs $\mathcal{M}(t)$, are all described by the
same celebrated arcsine law~\cite{levy:1939,levy:1948,morters:peres:2010}:
\begin{equation}
\label{arcsine}
\mathcal{O}(t)=\mathcal{L}(t)=\mathcal{M}(t)= \frac{1}{\pi} \frac{1}{\sqrt{t(T-t)}} \enspace ,
\end{equation}
for a game that lasts a time $T$. These three results are, respectively, the
\emph{first}, \emph{second}, and \emph{third} arcsine laws.
Our analysis is based on a comprehensive data set of all points scored in
league games over multiple consecutive seasons each in the National
Basketball Association (abbreviated NBA henceforth), all divisions of NCAA
college football (CFB), the National Football League (NFL), and the National
Hockey League (NHL)~\footnote{Data provided by STATS LLC, copyright
2015.}. These data cover 40,747 individual games and comprise 1,306,515
individual scoring events, making it one of the largest sports data sets
studied.
Each scoring event is annotated with the game clock time $t$ of the event, its point
value, and the team scoring the event. For simplicity, we ignore events
outside of regulation time (i.e., overtime). We also combine the point
values of events with the same clock time (such as a successful foul shot
immediately after a regular score in basketball). Table~\ref{tab:data}
summarizes these data and related facts for each sport.
\begin{table*}[t!]
\begin{center}
\begin{tabular}{ll|ccc|ccccc}
& & Num.\ & Num.\ scoring & Duration & Mean events & Mean pts.\ & Persistence & Mean num.\ & Frac.\ with no\\
Sport~~ & Seasons~~~~~ & games & events & $T$ (sec) & per game $N$ & per event $s$ & $p$ & lead changes $\mathcal{N}$ & lead changes \\ \hline
NBA & 2002--2010 & 11,744 & 1,098,747 & 2880 & 93.56 & 2.07 & 0.360 & 9.37 & 0.063\\
CFB & 2000--2009 & 14,586 & \hspace{0.7em}123,448 & 3600 & \hspace{0.5em}8.46 & 5.98 & 0.507 & 1.23 & 0.428\\
NFL & 2000--2009 & \hspace{0.5em}2,654 & \hspace{1.2em}20,561 & 3600 & \hspace{0.5em}7.75 & 5.40 & 0.457 & 1.43 & 0.348\\
NHL & 2000--2009 & 11,763 & \hspace{1.2em}63,759 & 3600 & \hspace{0.5em}5.42 & 1.00 & --- & 1.02 & 0.361
\end{tabular}
\end{center}
\caption{Summary of the empirical game data for the team sports considered in this study, based
on regular-season games and scoring events within regulation time. }
\label{tab:data}
\end{table*}
\subsection*{Basketball as a model competitive system}
To help understand scoring dynamics in team sports and to set the stage for
our theoretical approach, we outline basic observations
about NBA regular-season
games.
In an average game, the
two teams combine to score an average of 93.6 baskets (Table~\ref{tab:data}), with an
average value of 2.07 points per basket (the point value greater than 2
arises because of foul shots and 3-point baskets). The
average scores of the winning and losing teams are 102.1 and 91.7 points,
respectively, so that the total average score is 193.8 points in a 48-minute
game ($T\!=\!2880$ seconds). The rms score difference between the winning
and losing teams is 13.15 points. The high scoring rate in basketball
provides a useful laboratory to test our random-walk description of
scoring (Fig.~\ref{example}).
\begin{figure}[H]
\centerline{\includegraphics[width=0.4\textwidth]{figures/example}}
\caption{Evolution of the score difference in a typical NBA game: the
Denver Nuggets vs.\ the Chicago Bulls on 26 November 2010. Dots indicate
the four lead changes in the game. The Nuggets led for 2601 out of 2880
total seconds and won the game by a score of 98--97.}
\label{example}
\end{figure}
Scoring in professional basketball has several
additional important features~\cite{gabel:redner:2012,merritt:clauset:2014}:
\begin{enumerate}
\itemsep -0.25ex
\item Nearly constant scoring rate throughout the game, except for small
reductions at the start of the game and the second half, and a substantial
enhancement in the last 2.5 minutes.
\item Essentially no temporal correlations between successive scoring events.
\item Intrinsically different team strengths. This feature may be modeled by
a bias in the underlying random walk that describes scoring.
\item Scoring antipersistence. Since the team that scores cedes ball
  possession, the team that just scored scores next with probability
  $p<\frac{1}{2}$.
\item Linear restoring bias. On average, the losing team scores at a
slightly higher rate than the winning team, with the rate disparity
proportional to the score difference.
\end{enumerate}
A major factor for the scoring rate is the 24-second ``shot clock,'' in which
a team must either attempt a shot that hits the rim of the basket within 24
seconds of gaining ball possession or lose its possession. The average time
interval between scoring events is $\Delta t=2880/93.6=30.8$ seconds, consistent with
the 24-second shot clock. In a random walk picture of scoring, the average
number of scoring events in a game, $N\!=\!93.6$, together with $s\!=\!2.07$ points
for an average event, would lead to an rms displacement of
$x_{\rm rms} = \sqrt{Ns^2}$. However, this estimate does not account for the
antipersistence of basketball scoring. Because a team that scores
immediately cedes ball possession, the same team scores next with
probability $p\approx 0.36$. This antipersistence
reduces the diffusion coefficient of a random walk by a factor
$p/(1-p)\approx 0.562$~\cite{gabel:redner:2012,garcia:2007}. Using this, we
infer that the rms score difference in an average basketball game should be
$\Delta S_{\rm rms}\approx \sqrt{pNs^2/(1-p)}\approx 15.01$ points. Given
the crudeness of this estimate, the agreement with the empirical value of
13.15 points is satisfying.
A natural question is whether this final score difference is determined by
random-walk fluctuations or by disparities in team strengths. As we now
show, for a typical game, these two effects have comparable influence. The
relative importance of fluctuations to systematics in a stochastic process is
quantified by the P\'eclet number $\mathcal{P}\!e \!\equiv\! vL/2D$~\cite{P94},
where $v$ is the bias velocity, $L=vT$ is a characteristic final score
difference, and $D$ is the diffusion coefficient. Let us now estimate the
P\'eclet number for NBA basketball. Using $\Delta S_{\rm rms}= 13.15$
points, we infer a bias velocity $v=13.15/2880\approx 0.00457$ points/sec
under the assumption that this score difference is driven only by the
differing strengths of the two competing teams. We also estimate the
diffusion coefficient of basketball as
$D = \frac{p}{1-p} (s^2/2\Delta t)\approx 0.0391$ (points)$^2$/sec. With
these values, the P\'eclet number of basketball is
\begin{equation}
\label{Pe}
\mathcal{P}\!e= \frac{vL}{2D}\approx 0.77 \enspace .
\end{equation}
Since the P\'eclet number is of the order of 1, systematic effects do not
predominate, which accords with common experience---a team with a weak
win/loss record on a good day can beat a team with a strong record on a bad
day. Consequently, our presentation on scoring statistics is mostly based on
the assumption of equal-strength teams. However, we also discuss the case of
unequal team strengths for logical completeness.
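These estimates are simple to reproduce numerically. The short Python sketch below recomputes the diffusion coefficient, the rms score difference, and the P\'eclet number from the empirical inputs quoted above ($s$, $N$, $T$, $p$ and the 13.15-point rms winning margin); it is a numerical check of the back-of-the-envelope figures, not a fit to the game data.
\begin{verbatim}
from math import sqrt

s, N, T, p = 2.07, 93.56, 2880.0, 0.36   # points/event, events/game, seconds, p
dt = T / N                               # ~30.8 s between scoring events
D = (p / (1 - p)) * s ** 2 / (2 * dt)    # diffusion coefficient, ~0.039 (points)^2/s
dS_rms = sqrt(p * N * s ** 2 / (1 - p))  # rms score difference, ~15 points
v = 13.15 / T                            # bias velocity if the rms margin were pure drift
Pe = v ** 2 * T / (2 * D)                # Peclet number, ~0.77
print(round(D, 3), round(dS_rms, 1), round(Pe, 2))
\end{verbatim}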
As we will present below, the statistical properties of lead changes and lead
magnitudes, and the probability that a lead is ``safe,'' i.e., will not be
erased before the game is over, are well described by an unbiased random-walk
model. The agreement between the model predictions and data is closest for
basketball. For the other professional sports, some discrepancies with the
random-walk model arise that may help identify alternative mechanisms for
scoring dynamics.
\section{Number of Lead Changes and Fraction of Time Leading}
Two simple characterizations of leads are:\ (i) the average number
of lead changes $\mathcal{N}$ in a game, and (ii) the fraction of game time
that a randomly selected team holds the lead. We define a lead change as an
event where the score difference returns to zero (i.e., a tie score), but do
not count the initial score of 0--0 as lead change. We estimate the number
of lead changes by modeling the evolution of the score difference as an
unbiased random walk.
Using $N=93.6$ scoring events per game, together with the well-known
probability that an $N$-step random walk is at the origin, the random-walk
model predicts $\sqrt{2N/\pi}\approx 8$ for a typical number of lead changes.
Because of the antipersistence of basketball scoring, the above is an
underestimate. More properly, we must account for the reduction of the
diffusion coefficient of basketball by a factor of $p/(1-p)\approx 0.562$
compared to an uncorrelated random walk. This change increases the number of
lead changes by a factor $1/\sqrt{0.562}\approx 1.33$, leading to roughly
10.2 lead changes. This crude estimate is close to the observed 9.4 lead
changes in NBA games (Table~\ref{tab:data}).
\begin{figure}[ht]
\centerline{\includegraphics[width=0.45\textwidth]{figures/NBA_numberOfLeadChanges}}
\caption{Distribution of the average number of lead changes per game in
professional basketball.}
\label{NL}
\end{figure}
For the distribution of the number of lead changes, we make use of the
well-known result that the probability $G(m,N)$ that a discrete $N$-step random
walk makes $m$ returns to the origin asymptotically has the Gaussian form
$G(m,N)\sim e^{-m^2/2N}$~\cite{weiss:rubin:1983,weiss:1994,redner:2001}.
However, the antipersistence of basketball scoring leads to $N$ being
replaced by $N\frac{1-p}{p}$, so that the probability of making $m$ returns to
the origin is given by
\begin{equation}
\label{G}
G(m,N) \simeq \sqrt{\frac{2p}{\pi N(1-p)}}\,\, e^{-m^2p/[2N(1-p)]} \enspace .
\end{equation}
Thus $G(m,N)$ is broadened compared to the uncorrelated random-walk
prediction because lead changes now occur more frequently. The comparison
between the empirical NBA data for $G(m,N)$ and a simulation in which scoring
events occur by an antipersistent Poisson process (with average scoring rate
of one event every 30.8 seconds), and Eq.~\eqref{G} is given in
Fig.~\ref{NL}.
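The antipersistent picture behind Eq.~\eqref{G} is also easy to simulate directly. In the minimal sketch below, each synthetic game consists of $N=94$ unit-score events, the first scoring team is chosen by a fair coin, and the team that just scored scores again with probability $p=0.36$; every return of the score difference to zero counts as a lead change. The number of simulated games (20{,}000) is arbitrary, and the resulting mean is close to the nine-to-ten lead changes per game discussed above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, p = 94, 0.36                    # events per game, antipersistence parameter

def lead_changes():
    last = rng.choice([-1, 1])     # first basket: fair coin
    x, changes = last, 0
    for _ in range(N - 1):
        step = last if rng.random() < p else -last
        x += step
        last = step
        if x == 0:                 # score difference returns to zero
            changes += 1
    return changes

counts = [lead_changes() for _ in range(20000)]
print(np.mean(counts))             # roughly nine to ten lead changes per game
\end{verbatim}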
For completeness, we now analyze the statistics of lead changes for unequally
matched teams. Clearly, a bias in the underlying random walk for scoring
events decreases the number of lead changes. We use a suitably adapted
continuum approach to estimate the number of lead changes in a simple way.
We start with the probability that biased diffusion lies in a small range
$\Delta x$ about $x=0$:
\begin{equation*}
\frac{\Delta x}{\sqrt{4\pi Dt}} \,\, e^{-v^2t/4D} \enspace .
\end{equation*}
Thus the local time that this process spends within $\Delta x$ about the
origin up to time $t$ is
\begin{align}
\mathcal{T}(t) &=\Delta x \int_0^t \frac{dt}{\sqrt{4\pi Dt}} \,\, e^{-v^2t/4D} \nonumber \\
&=\frac{\Delta x}{v}\,\,\mathrm{erf}\big(\sqrt{{v^2t}/{4D}}\big) \enspace ,
\end{align}
where we used $w=\sqrt{v^2t/4D}$ to transform the first line into the
standard form for the error function. To convert this local time to number
of events, $\mathcal{N}(t)$, that the walk remains within $\Delta x$, we
divide by the typical time $\Delta t$ for a single scoring event. Using
this, as well as the asymptotics of the error function, we obtain the
limiting behaviors:
\begin{equation}
\label{Nt}
\mathcal{N}(t)=\frac{\mathcal{T}(t)}{\Delta t} \sim
\begin{cases} \sqrt{{v_0^2t}/{\pi D}} & \qquad {v^2t}/{4D}\ll 1\\
{v_0}/{v} & \qquad {v^2t}/{4D}\gg 1 \enspace ,
\end{cases}
\end{equation}
with $v_0={\Delta x}/{\Delta t}$ and $\Delta x$ the average value of a single
score (2.07 points). Notice that $v^2T/4D=\mathcal{P}\!e/2$, which, from
Eq.~\eqref{Pe}, is roughly 0.38. Thus, for the NBA, the first line of
Eq.~\eqref{Nt} is the realistic case. This accords with what we have already
seen in Fig.~\ref{NL}, where the distribution in the number of lead changes
is accurately accounted for by an unbiased, but antipersistent random walk.
\begin{figure}[t!]
\centerline{\includegraphics[width=0.45\textwidth]{figures/NBAprobTeamInLeadForX}}
\caption{The distribution of the time that a given team holds the lead,
$\mathcal{O}(t)$. }
\label{fig:first:arcsin}
\end{figure}
Another basic characteristic of lead changes is the amount of game time that
one team spends in the lead, $\mathcal{O}(t)$, a quantity that has been
previously studied for basketball~\cite{gabel:redner:2012}. Strikingly, the
probability distribution for this quantity is bimodal, in which
$\mathcal{O}(t)$ sharply increases as the time approaches either 0 or $T$,
and has a minimum when the time is close to ${T}/{2}$. If the scoring
dynamics is described by an unbiased random walk, then the probability that
one team leads for a time $t$ in a game of length $T$ is given by the first
arcsine law of Eq.~\eqref{arcsine}~\cite{feller:1968,redner:2001}.
Figure~\ref{fig:first:arcsin} compares this theoretical result with
basketball data. Also shown are two types of synthetically generated data.
For the ``homogeneous Poisson process'', we use the game-averaged scoring
rate to generate synthetic basketball-game time series of scoring events.
For the ``inhomogeneous Poisson process'', we use the empirical instantaneous scoring
rate for each second of the game to generate the synthetic data
(Fig.~\ref{Lt}). As we will justify in the next section, we do not
incorporate the antipersistence of basketball scoring in these Poisson
processes because this additional feature minimally influences the
distributions that follow the arcsine law ($\mathcal{O}$, $\mathcal{L}$ and
$\mathcal{M}$). The empirically observed increased scoring rate at the end
of each quarter~\cite{gabel:redner:2012,merritt:clauset:2014} leads to
anomalies in the data for $\mathcal{O}(t)$ that are accurately captured by
the inhomogeneous Poisson process.
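The bimodal shape of $\mathcal{O}(t)$ is straightforward to reproduce with the homogeneous Poisson surrogate just described. In the sketch below, scoring times are uniform over the game, each event is a unit score won by either team with equal probability, and we accumulate the clock time during which one fixed team is strictly ahead (tied stretches are credited to neither team in this simplified picture). Histogramming the lead-time fraction over many synthetic games gives the U shape of the first arcsine law.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
T, rate = 2880.0, 93.6 / 2880.0             # game length (s), scoring events per second

def lead_time_one_game():
    n = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, n)) # homogeneous Poisson scoring times
    signs = rng.choice([-1, 1], n)          # unit scores, fair coin for the scoring team
    x, prev, lead = 0, 0.0, 0.0
    for t, s in zip(times, signs):
        if x > 0:                           # team A has led since the previous event
            lead += t - prev
        x, prev = x + s, t
    return lead + (T - prev if x > 0 else 0.0)

frac = np.array([lead_time_one_game() / T for _ in range(20000)])
hist, _ = np.histogram(frac, bins=10, range=(0, 1), density=True)
print(np.round(hist, 2))                    # largest in the first and last bins
\end{verbatim}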
\section{Time of the Last Lead Change}
We now determine \emph{when} the last lead change occurs. For the discrete
random walk, the probability that the last lead change occurs at a given step
of an $N$-step walk can be computed by exploiting the reflection
principle~\cite{feller:1968}. Here we
solve for the corresponding distribution in continuum diffusion because this
formulation is simpler and we can readily generalize to unequal-strength
teams. While the distribution of times for the last lead change is well
known~\cite{levy:1939,levy:1948}, our derivation is intuitive and elementary.
\begin{figure}[h!]
\centerline{\includegraphics[width=0.325\textwidth]{figures/last-time}}
\caption{Schematic score evolution in a game of time $T$. The subsequent
trajectory after the last lead change must always be positive (solid) or
always negative (dashed).}
\label{last-time}
\end{figure}
For the last lead change to occur at time $t$, the score difference, which
started at zero at $t\!=\!0$, must again equal zero at time $t$
(Fig.~\ref{last-time}). For equal-strength teams, the probability for this
event is simply the Gaussian probability distribution of diffusion evaluated
at $x\!=\!0$:
\begin{equation}
\label{P0}
P(0,t)= \frac{1}{\sqrt{4\pi Dt}} \enspace .
\end{equation}
To guarantee that it is the \emph{last} lead change that occurs at time $t$,
the subsequent evolution of the score difference cannot cross the origin
between times $t$ and $T$ (Fig.~\ref{last-time}). To enforce this
constraint, the remaining trajectory between $t$ and $T$ must therefore be a
\emph{time-reversed} first-passage path from an arbitrary final point $(X,T)$
to $(0,t)$. The probability for this event is the first-passage
probability~\cite{redner:2001}
\begin{equation}
\label{fpp}
F(X,T\!-\!t)= \frac{X}{\sqrt{4\pi D(T\!-\!t)^3}}\,\,e^{-X^2/[4D(T-t)]} \enspace .
\end{equation}
\begin{figure}[t!]
\centerline{\includegraphics[width=0.45\textwidth]{figures/NBAprobLastLeadChangeWScoringRate}}
\caption{(upper) Empirical probability that a scoring event occurs at time
$t$, with the game-average scoring rate shown as a horizontal line. The
data is aggregated in bins of 10 seconds each; the same binning is used in Fig.~\ref{fig:last:lead:change}. (lower)
Distribution of times $\mathcal{L}(t)$ for the last lead change.}
\label{Lt}
\end{figure}
With these two factors, the probability that the last lead change occurs at
time $t$ is given by
\begin{align}
\label{Pi}
\mathcal{L}(t)&=2\!\int_0^\infty \!dX\, P(0,t)\, F(X,T\!-\!t) \nonumber \\
& = 2\!\int_0^\infty \!\!\frac{dX}{\sqrt{4\pi Dt}} \frac{X}{\sqrt{4\pi D(T\!-\!t)^3}}\,\,
e^{-X^2/[4D(T\!-\!t)]} \,.
\end{align}
The leading factor 2 appears because the subsequent trajectory after time $t$
can equally likely be always positive or always negative. The integration is
elementary and the result is the classic second arcsine
law~\cite{levy:1939,levy:1948} given in Eq.~\eqref{arcsine}. The salient
feature of this distribution is that the last lead change in a game between
evenly matched teams is most likely to occur either near the start or the end
of a game, while a lead change in the middle of a game is less likely.
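For completeness, the elementary integration leading from Eq.~\eqref{Pi} to this result can be spelled out: using $\int_0^\infty X\, e^{-X^2/b}\, dX = b/2$ with $b=4D(T\!-\!t)$,
\begin{align*}
\mathcal{L}(t) = \frac{2}{\sqrt{4\pi Dt}}\,\frac{2D(T\!-\!t)}{\sqrt{4\pi D(T\!-\!t)^3}}
= \frac{1}{\pi\sqrt{t(T\!-\!t)}}\enspace ,
\end{align*}
which is Eq.~\eqref{arcsine} and is independent of the diffusion coefficient $D$.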
As done previously for the distribution of time $\mathcal{O}(t)$ that one
team is leading, we again generate a synthetic time series that is based on a
homogeneous and an inhomogeneous Poisson process for individual scoring
events without antipersistence. From these synthetic histories, we extract
the time for the last lead and its distribution. The synthetic inhomogeneous
Poisson process data accounts for the end-of-quarter anomalies in the
empirical data with remarkable accuracy (Fig.~\ref{Lt}).
\begin{figure}[t!]
\centerline{\includegraphics[width=0.45\textwidth]{figures/last-94}}
\caption{The distribution of time for the last lead change,
$\mathcal{L}(f)$, as a function of the fraction of steps $f$ for a
94-step random walk with persistence parameter $p\!=\!0.36$ as in the NBA
  ($\circ$) and $p\!=\!0.25$ corresponding to stronger antipersistence
($\bigtriangleup$). The smooth curve is the arcsine law for $p\!=\!0.5$
(no antipersistence). }
\label{LLA}
\end{figure}
Let us now investigate the role of scoring antipersistence on the
distribution $\mathcal{L}(t)$. While the antipersistence substantially
affects the number of lead changes and its distribution, antipersistence has
a barely perceptible effect on $\mathcal{L}(t)$. Figure~\ref{LLA} shows the
probability $\mathcal{L}(f)$ that the last lead change occurs when a fraction
$f$ of the steps in an $N$-step antipersistent random walk have occurred,
with $N\!=\!94$, the closest even integer to the observed value $N\!=\!93.56$
of NBA basketball. For the empirical persistence parameter of basketball,
$p=0.36$, there is little difference between $\mathcal{L}(f)$ as given by the
arcsine law and that of the data, except at the first two and last two steps
of the walk. Similar behavior arises for the more extreme case of
persistence parameter $p=0.25$. Thus basketball scoring antipersistence
plays little role in determining the time at which the last lead change
occurs.
We may also determine the role of a constant bias on $\mathcal{L}(t)$,
following the same approach as that used for unbiased diffusion. Now the
analogues of Eqs.~\eqref{P0} and~\eqref{fpp} are~\cite{redner:2001},
\begin{align}
\begin{split}
\label{P0-bias}
P(0,v,t)&= \frac{1}{\sqrt{4\pi Dt}}\,\, e^{-v^2t/4D} \\
F(X,v,t)&= \frac{X}{\sqrt{4\pi Dt^3}}\,\, e^{-(X+vt)^2/4Dt}\enspace .
\end{split}
\end{align}
Similarly, the analogue of Eq.~\eqref{Pi} is
\begin{align}
\label{Pi-bias}
\mathcal{L}(t)=\int_0^\infty \!\!\!dX\, P(0,t)\,\big[ F(X,v,T\!-\!t)\!+\! F(X,-v,T\!-\!t)\big] \,.
\end{align}
In Eq.~\eqref{Pi-bias} we must separately consider the situations where the
trajectory for times beyond $t$ is strictly positive (stronger team
ultimately wins) or strictly negative (weaker team wins). In the former
case, the time-reversed first-passage path from $(X,T)$ to $(0,t)$ is
accomplished in the presence of a positive bias $+v$, while in the latter
case, this time-reversed first passage occurs in the presence of a negative
bias $-v$.
Explicitly, Eq.~\eqref{Pi-bias} is
\begin{align}
\mathcal{L}(t)&= \frac{e^{-v^2t/4D}}{\sqrt{4\pi Dt}}
\int_0^\infty \!\!\!dX \frac{X}{\sqrt{4\pi D(T\!-\!t)^3}}\,\,\nonumber \\
&\hskip 0.7in \times \left\{e^{-(X+a)^2/b} + e^{-(X-a)^2/b}\right\}\,,
\end{align}
with $a=v(T-t)$ and $b=4D(T-t)$. Straightforward calculation gives
\begin{align}
\label{Lt-v}
\mathcal{L}(t)
&= \frac{e^{-v^2t/4D}}{\pi\sqrt{t(T\!-\!t)}}
\left\{\sqrt{\frac{\pi v^2(T\!-\!t)}{4D}} \,\,
\mathrm{erf}\bigg(\sqrt{\frac{v^2(T\!-\!t)}{4D}}\,\bigg)\right.\nonumber \\
&\hskip 1.3in \left. +e^{-v^2(T\!-\!t)/4D}\right\}\,.
\end{align}
This form for $\mathcal{L}(t)$ is again bimodal (Fig.~\ref{Lv}), as in the
arcsine law, but the last lead change is now more likely to occur near the
beginning of the game. This asymmetry arises because once a lead is
established, which is probable because of the bias, the weaker team is
unlikely to achieve another tie score.
\begin{figure}[t!]
\centerline{\includegraphics[width=0.45\textwidth]{figures/Lv2}}
\caption{The distribution $\mathcal{L}(t)$ for non-zero bias
(Eq.~\eqref{Lt-v}). The diffusion coefficient is the empirical value
$D=0.0391$, and bias values are: $v=0.002$, $v=0.004$, and $v=0.008$
(increasingly asymmetric curves). The central value of $v$ roughly
corresponds to average NBA game-scoring bias if diffusion is neglected.}
\label{Lv}
\end{figure}
More germane to basketball, we should average $\mathcal{L}(t)$ over the
distribution of biases in all NBA games. For this averaging, we use the
observation that many statistical features of basketball are accurately
captured by employing a Gaussian distribution of team strengths with mean
value 1 (since the absolute strength is immaterial), and standard deviation
of approximately 0.09~\cite{gabel:redner:2012}. This parameter value was
inferred by using the Bradley-Terry competition
model~\cite{bradley:terry:1952}, in which teams of strengths $S_1$ and $S_2$
have scoring rates $S_1/(S_1+S_2)$ and $S_2/(S_1+S_2)$, respectively, to
generate synthetic basketball scoring time series. The standard deviation
0.09 provided the best match between statistical properties that were
computed from the synthetic time series and the empirical game
data~\cite{gabel:redner:2012}. From the distribution of team strengths, we
then infer a distribution of biases for each game and finally average over
this bias distribution to obtain the bias-averaged form of $\mathcal{L}(t)$.
The skewness of the resulting distribution is minor and it closely matches the bias-free
form of $\mathcal{L}(t)$ given in Fig.~\ref{Lt}. Thus, the bias of
individual games appears to again play a negligible role in statistical properties
of scoring, such as the distribution of times for the last lead change.
\section{Time of the Maximal Lead}
We now ask when the \emph{maximal} lead occurs in a
game~\cite{majumdar:etal:2008}. If the score difference evolves by unbiased
diffusion, then the standard deviation of the score difference grows as
$\sqrt{t}$. Naively, this behavior might suggest that the maximal lead
occurs near the end of a game. In fact, however, the probability
$\mathcal{M}(t)$ that the maximal lead occurs at time $t$ also obeys the
arcsine law Eq.~\eqref{arcsine}. Moreover, the arcsine laws for the last
lead time and for the maximal lead time are
equivalent~\cite{levy:1939,levy:1948,morters:peres:2010}, so that the largest
lead in a game between two equally-matched teams is most likely to occur
either near the start or near the end of a game.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.35\textwidth]{figures/max-time}}
\caption{ The maximal lead (which could be positive or negative) occurs at
time $t$.}
\label{max-time}
\end{figure}
For completeness, we sketch a derivation for the distribution
$\mathcal{M}(t)$ by following the same approach used to find
$\mathcal{L}(t)$. Referring to Fig.~\ref{max-time}, suppose that the maximal
lead $M$ occurs at time $t$. For $M$ to be a maximum, the initial trajectory
from $(0,0)$ to $(M,t)$ must be a first-passage path, so that $M$ is never
exceeded prior to time $t$. Similarly, the trajectory from $(M,t)$ to the
final state $(X,T)$ must also be a time-reversed first-passage path from
$(X,T)$ to $(M,t)$, but with $X<M$, so that $M$ is never exceeded for all
times between $t$ and $T$.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.45\textwidth]{figures/NBAprobMaxLead}}
\caption{Distribution of times $\mathcal{M}(t)$ for the maximal lead. }
\label{Mt}
\end{figure}
Based on this picture, we may write $\mathcal{M}(t)$ as
\begin{align}
\label{eq:Mt}
\mathcal{M}(t)&=A
\int_0^\infty \! dM\, F(M,t)\int_{-\infty}^M \! dX\, F(X\!-\!M,T\!-\!t)\nonumber \\
&=A \int_0^\infty \! dM \frac{M}{\sqrt{4\pi Dt^3}}\,\, e^{-M^2/4Dt}\nonumber \\
&\hskip 0.25in \times
\int_{-\infty}^M \! dX \frac{(M\!-\!X)}{\sqrt{4\pi D(T\!-\!t)^3}}\,\, e^{-(M\!-\!X)^2/4D(T\!-\!t)} \,.
\end{align}
The constant $A$ is determined by the normalization condition
$\int_0^T \mathcal{M}(t)dt \!=\!1$. Performing the above two elementary
integrations yields again the arcsine law of Eq.~\eqref{arcsine}.
Figure~\ref{Mt} compares the arcsine law prediction with empirical data from
the NBA.
\section{Probability that a Lead is Safe}
Finally, we turn to the question of how safe is a lead of a given size at any
point in a game (Fig.~\ref{score-diff}), i.e., the probability that the team
leading at time $t$ will ultimately win the game. The probability that a
lead of size $L$ is safe when a time $\tau$ remains in the game is, in
general,
\begin{align}
\label{Q}
Q(L,\tau)&=1-\int_0^\tau \!\!F(L,t)\, dt\,,
\end{align}
where $F(L,t)$ again is the first-passage probability [Eqs.~\eqref{fpp} and
\eqref{P0-bias}] for a diffusing particle, which starts at $L$, to first
reach the origin at time $t$. Thus the right-hand side is the probability
that the lead has not disappeared up to time $\tau$.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.35\textwidth]{figures/score-diff}}
\caption{One team leads by $L$ points when a time $\tau$ is left in the
game.}
\label{score-diff}
\end{figure}
First consider evenly-matched teams, i.e., bias velocity $v=0$. We
substitute $u=L/\sqrt{4Dt}$ in Eq.~\eqref{Q} to obtain
\begin{equation}
\label{Q-final}
Q(L,\tau)=1-\frac{2}{\sqrt{\pi}}\int_z^\infty e^{-u^2}\, du
= \mathrm{erf}(z)\enspace .
\end{equation}
Here $z\equiv L/\sqrt{4D\tau}$ is the dimensionless lead size. When
$z\!\ll\! 1$, either the lead is sufficiently small or sufficient game time
remains that a lead of scaled magnitude $z$ is likely to be erased before the
game ends. The opposite limit of $z\!\gg \!1$ corresponds to either a
sufficiently large lead or so little time remaining that this lead likely
persists until the end of the game. We illustrate Eq.~\eqref{Q-final} with a
simple numerical example from basketball. From this equation, a lead of
scaled size $z\approx 1.163$ is 90\% safe. Thus a lead of 10 points is 90\%
safe when 7.87 minutes remain in a game, while an 18-point lead at the end of
the first half is also 90\% safe~\footnote{For convenience, we note that all 90\%-safe leads in professional basketball are solutions to $L=0.4602\sqrt{\tau}$, for $\tau$ seconds remaining.}.
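These example figures follow directly from Eq.~\eqref{Q-final} together with the NBA diffusion coefficient estimated earlier. The short sketch below reproduces them; the value $D=0.0391$ (points)$^2$/s is the estimate from above, and times are in seconds.
\begin{verbatim}
from math import erf, sqrt

D = 0.0391                                 # NBA diffusion coefficient, (points)^2/s
def safe_prob(lead, tau):                  # Eq. (Q-final); tau = seconds remaining
    return erf(lead / sqrt(4 * D * tau))

print(round(safe_prob(10, 7.87 * 60), 2))  # 10-point lead, 7.87 minutes left -> ~0.90
print(round(safe_prob(18, 24 * 60), 2))    # 18-point lead at halftime -> ~0.91
\end{verbatim}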
\begin{figure}[ht]
\includegraphics[scale=0.40]{figures/prob_LeadIsSafe_NBA}
\caption{Probability that a lead is safe versus the dimensionless lead size
$z=L/\sqrt{4D\tau}$ for NBA games, showing the prediction from
Eq.~\eqref{Q-final}, the empirical data, and the mean prediction for Bill
James' well-known ``safe lead'' heuristic.}
\label{nba:safe}
\end{figure}
Figure~\ref{nba:safe} compares the prediction of Eq.~\eqref{Q-final} and the
empirical basketball data. We also show the prediction of the heuristic
developed by basketball analyst and historian Bill James~\cite{james:2008}.
This rule is mathematically given by:\
$Q(L,\tau) = \min\left\{1,\frac{1}{\tau}(L\!-\!3\!+\!\delta/2)^2\right\}$,
where $\delta=+1$ if the leading team has ball possession and $\delta=-1$
otherwise. The figure shows the predicted probability for
$\delta=\{-1,0,+1\}$ (solid curve for central value, dashed otherwise)
applied to all of the empirically observed $(L,\tau)$ pairs, because ball
possession is not recorded in our data. Compared to the random walk model,
the heuristic is quite conservative (assigning large safe lead
probabilities only for dimensionless leads $z>2$) and has the wrong
qualitative dependence on $z$. In contrast, the random walk model gives a
maximal overestimate of 6.2\% for the safe lead probability over all $z$, and
has the same qualitative $z$ dependence as the empirical data.
For completeness, we extend the derivation for the safe lead probability to
unequal-strength teams by including the effect of a bias velocity $v$ in
Eq.~\eqref{Q}:
\begin{align}
Q(L,\tau)&=1-\int_0^\tau\!\!\frac{L}{\sqrt{4\pi Dt^3}}\,\, e^{-(L+vt)^2/4Dt}\,dt \nonumber \\
&=1-e^{-vL/2D} \!\int_0^\tau\!\!\frac{L}{\sqrt{4\pi Dt^3}}\,\, e^{-L^2/4Dt-v^2t/4D}\, dt\,,
\end{align}
where the integrand in the first line is the first-passage probability for
non-zero bias. Substituting $u=L/\sqrt{4Dt}$ and using again the P\'eclet
number $\mathcal{P}\!e= vL/2D$, the result is
\begin{align}
Q(L,\tau)&=1-\frac{2}{\sqrt{\pi}}\,\,e^{-\mathcal{P}\!e}\int_0^z
e^{-u^2-\mathcal{P}\!e^2/4u^2}\, du \nonumber\\
&=1-\tfrac{1}{2}\!\left[e^{-2\mathcal{P}\!e}\mathrm{erfc}\Big(z\!-\!\frac{\mathcal{P}\!e}{2z}\Big)
\!+\!\mathrm{erfc}\Big(z\!+\!\frac{\mathcal{P}\!e}{2z}\Big)\right] \enspace .
\end{align}
\begin{figure}[ht]
\subfigure[]{\includegraphics[width=0.42\textwidth]{figures/bias+}}\qquad\subfigure[]{\includegraphics[width=0.42\textwidth]{figures/bias-}}
\caption{Probability that a lead is safe versus $z=L/\sqrt{4D\tau}$ for: (a)
the stronger team is leading for $\mathcal{P}\!e=\frac{1}{5}$, $\frac{1}{2}$
and 1, and (b) the weaker team is leading for $\mathcal{P}\!e=-\frac{2}{5}$,
$-\frac{4}{5}$ and $-\frac{6}{5}$. The case $\mathcal{P}\!e=0$ is also shown
for comparison.}
\label{bias}
\end{figure}
When the stronger team is leading ($\mathcal{P}\!e>0$), essentially any lead is
safe for $\mathcal{P}\!e\agt 1$, while for $\mathcal{P}\!e<1$, the safety of a lead
depends more sensitively on $z$ (Fig.~\ref{bias}(a)). Conversely, if the
weaker team happens to be leading ($\mathcal{P}\!e<0$), then the lead has to be
substantial or the time remaining quite short for the lead to be safe
(Fig.~\ref{bias}(b)). In this regime, the asymptotics of the error function
gives $Q(L,\tau)\sim e^{-\mathcal{P}\!e^2/4z^2}$ for
$z<|\mathcal{P}\!e|/2$, which is vanishingly small. For values of $z$
in this range, the lead is essentially never safe.
\section{Lead changes in other sports}
We now consider whether our predictions for lead change statistics in
basketball extend to other sports, such as college American football (CFB),
professional American football (NFL), and professional hockey
(NHL)~\footnote{Although the NHL data span 10 years, they include only 9
seasons because the 2004 season was canceled as a result of a labor
dispute.}. These sports have the following commonalities with
basketball~\cite{merritt:clauset:2014}:
\begin{enumerate}
\itemsep -0.25ex
\item Two teams compete for a fixed time $T$, in which points are scored by
moving a ball or puck into a special zone in the field.
\item Each team accumulates points during the game and the team with the
largest final score is the winner (with sport-specific tiebreaking rules).
\item A roughly constant scoring rate throughout the game, except for small
deviations at the start and end of each scoring period.
\item Negligible temporal correlations between successive scoring events.
\item Intrinsically different team strengths.
\item Scoring antipersistence, except for hockey.
\end{enumerate}
These similarities suggest that a random-walk model should also apply to lead
change dynamics in these sports.
However, there are also points of departure, the most important of which is
that the scoring rate in these sports is 10--25 times smaller than in
basketball. Because of this much lower overall scoring rate, the diminished
rate at the start of games is much more apparent than in basketball
(Fig.~\ref{fig:last:lead:change}). This longer low-activity initial period
and other non-random-walk mechanisms cause the distributions $\mathcal{L}(t)$
and $\mathcal{M}(t)$ to visibly deviate from the arcsine laws
(Figs.~\ref{fig:last:lead:change} and \ref{fig:max:lead}). A particularly
striking feature is that $\mathcal{L}(t)$ and $\mathcal{M}(t)$ approach zero
for $t\to 0$. In contrast, because the initial reduced scoring rate occurs
only for the first 30 seconds in NBA games, there is a real, but barely
discernible deviation of the data for $\mathcal{L}(t)$ from the arcsine law
(Fig.~\ref{Lt}).
\begin{figure*}[t!]
\begin{tabular}{ccc}
\includegraphics[scale=0.303]{figures/CFBnumLeadChanges} &
\includegraphics[scale=0.303]{figures/NFLnumLeadChanges} &
\includegraphics[scale=0.303]{figures/NHLnumLeadChanges}
\end{tabular}
\caption{Distribution of the average number of lead changes per game, for
CFB, NFL, and NHL, showing the simple prediction of Eq.~\eqref{G}, the
empirical data, and the results of a simulation in which scoring events
occur by a Poisson process with the game-specific scoring rate.}
\label{fig:lead:changes:per:game}
\end{figure*}
\begin{figure*}[t!]
\begin{tabular}{ccc}
\includegraphics[scale=0.284]{figures/CFBprobLastLeadChangeWScoringRate} &
\includegraphics[scale=0.284]{figures/NFLprobLastLeadChangeWScoringRate} &
\includegraphics[scale=0.284]{figures/NHLprobLastLeadChangeWScoringRate}
\end{tabular}
\caption{(upper) Empirical probability that a scoring event occurs at time $t$, with the game-average scoring rate shown as a horizontal line, for games of CFB, NFL, and NHL. (lower) Distribution of times $\mathcal{L}(t)$ for the last lead change. }
\label{fig:last:lead:change}
\end{figure*}
Finally, the safe lead probability given in Eq.~\eqref{Q-final} qualitatively
matches the empirical data for football and hockey
(Fig.~\ref{fig:prob:safe}), with the hockey data being closest to the
theory~\footnote{For convenience, the 90\%-safe leads for CFB, NFL, and NHL
are solutions to $L=\alpha\sqrt{\tau}$, where
$\alpha=\{0.4629,0.3781,0.0638\}$, respectively.}. For both basketball and
hockey, the expression for the safe lead probability given in
Eq.~\eqref{Q-final} is quantitatively accurate. For football, a prominent
feature is that small leads are much safer than what is predicted by our
theory. This trend is particularly noticeable in the CFB. One possible
explanation of this behavior is that in college football, there is a
relatively wide disparity in team strengths, even in the most competitive
college leagues. Thus a small lead size can be quite safe if the
two teams happen to be significantly mismatched.
\begin{figure*}[t!]
\begin{tabular}{ccc}
\includegraphics[scale=0.290]{figures/CFBprobMaxLead} &
\includegraphics[scale=0.290]{figures/NFLprobMaxLead} &
\includegraphics[scale=0.290]{figures/NHLprobMaxLead}
\end{tabular}
\caption{Distribution of times $\mathcal{M}(t)$ for the maximal lead, for
games of CFB, NFL, and NHL. }
\label{fig:max:lead}
\end{figure*}
\begin{figure*}[t!]
\begin{tabular}{ccc}
\includegraphics[scale=0.303]{figures/CFBprobSafeLeadZ} &
\includegraphics[scale=0.303]{figures/NFLprobSafeLeadZ} &
\includegraphics[scale=0.303]{figures/NHLprobSafeLeadZ}
\end{tabular}
\caption{Probability that a lead is safe, for CFB, NFL, and NHL, versus the
dimensionless lead $z=L/\sqrt{4D\tau}$. Each figure shows the prediction
from Eq.~\eqref{Q-final} and the corresponding empirical pattern. }
\label{fig:prob:safe}
\end{figure*}
For American football and hockey, it would be useful to understand how the
particular structure of these sports would modify a random walk model. For
instance, in American football, the two most common point values for scoring
plays are 7 (touchdown plus extra point) and 3 (field goal). The random-walk
model averages these events, which will underestimate the likelihood that a
few high-value events could eliminate what otherwise seems like a safe lead.
Moreover, in football the ball is moved incrementally down the field through
a series of plays. The team with ball possession has four attempts to move
the ball a specific minimum distance (10 yards) or else lose possession; if
it succeeds, that team retains possession and repeats this effort to further
move the ball. As a result, the spatial location of the ball on the field
likely plays an important role in determining both the probability of scoring
and the value of this event (field goal versus touchdown). In hockey, players
are frequently rotated on and off the ice so that a high intensity of play is
maintained throughout the game. Thus the pattern of these substitutions---
between potential all-star players and less skilled ``grinders''---can change
the relative strength of the two teams every few minutes.
\section{Conclusions}
A model based on random walks provides a remarkably good description for the
dynamics of scoring in competitive team sports. From this starting point, we
found that the celebrated arcsine law of Eq.~\eqref{arcsine} closely
describes the distribution of times for: (i) one team is leading
$\mathcal{O}(t)$ (first arcsine law), (ii) the last lead change in a game
$\mathcal{L}(t)$ (second arcsine law), and (iii) when the maximal lead in the
game occurs $\mathcal{M}(t)$ (third arcsine law). Strikingly, these arcsine
distributions are bimodal, with peaks for extremal values of the underlying
variable. Thus both the time of the last lead and the time of the maximal
lead are most likely to occur at the start or the end of a game.
These predictions are in accord with the empirically observed scoring
patterns within more than 40,000 games of professional basketball, American
football (college or professional), and professional hockey. For basketball,
in particular, the agreement between the data and the theory is quite close.
All the sports also exhibit scoring anomalies at the end of each scoring
period, which arise from a much higher scoring rate around these
times (Figs.~\ref{Lt} and~\ref{fig:last:lead:change}). For football and
hockey, there is also a substantial initial time range of reduced scoring
that is reflected in $\mathcal{L}(t)$ and $\mathcal{M}(t)$ both approaching
zero as $t\to 0$. Football and hockey also exhibit other small but
systematic deviations from the second and third arcsine laws that remain
unexplained.
The implication for basketball, in particular, is that a typical game can be
effectively viewed as repeated coin-tossings, with each toss subject to the
features of antipersistence, an overall bias, and an effective restoring
force that tends to shrink leads over time (which reduces the likelihood of a
blowout). These features represent inconsequential departures from a pure
random-walk model. Cynically, our results suggest that one should watch only
the first few and last few minutes of a professional basketball game; the rest of the
game is as predictable as watching repeated coin tossings.
On the other hand, the high degree of unpredictability of events in the
middle of a game may be precisely what makes these games so exciting for
sports fans.
The random-walk model also quantitatively predicts the probability that a
specified lead of size $L$ with $t$ seconds left in a game is ``safe,'' i.e.,
will not be reversed before the game ends. Our predictions are
quantitatively accurate for basketball and hockey. For basketball, our
approach significantly outperforms a popular heuristic for determining when a
lead is safe. For football, our prediction is marginally less accurate, and
we postulated a possible explanation for why this inaccuracy could arise in
college football, where the discrepancy between the random-walk model and the
data is the largest.
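As a concrete illustration, the pure-diffusion form of this prediction can be
evaluated in a few lines of Python; the sketch below uses
$Q(z)=\mathrm{erf}(z)$ with $z=L/\sqrt{4D\tau}$, omitting any bias corrections
that enter Eq.~\eqref{Q-final}, and the diffusion coefficient used is
illustrative only.
\begin{verbatim}
# Sketch: probability that a lead is safe under pure diffusion,
# Q(z) = erf(z) with z = L / sqrt(4 * D * tau).  The diffusion
# coefficient D below is illustrative only and must be fitted to
# league-specific scoring data.
from math import erf, sqrt

def prob_lead_safe(lead, seconds_left, D):
    """Probability that `lead` points survive `seconds_left` seconds."""
    z = lead / sqrt(4.0 * D * seconds_left)
    return erf(z)

# Example: a 10-point lead with 6 minutes (360 s) remaining, using an
# illustrative diffusion coefficient D = 0.03 points^2 per second.
print(round(prob_lead_safe(10, 360, 0.03), 3))
\end{verbatim}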
Traditional analyses of sports have primarily focused on the composition of
teams and the individual skill levels of the players. Scoring events and
game outcomes are generally interpreted as evidence of skill differences
between opposing teams. The random walk view that we formalize and test here
is not at odds with the more traditional skill-based view. Our perspective
is that team competitions involve highly skilled and motivated players who
employ well-conceived strategies. The overarching result of such keen
competition is to largely negate systematic advantages so that all that
remains is the residual stochastic element of the game. The appearance of
the arcsine law, a celebrated result from the theory of random walks, in the
time that one team leads, the time of the last lead change, and the time at
which the maximal lead occurs, illustrates the power of the random-walk view
of competition. Moreover, the random-walk model makes surprisingly accurate
predictions of whether a current lead is effectively safe, i.e., will not be
overturned before the game ends, a result that may be of practical interest
to sports enthusiasts.
The general agreement between the random-walk model for lead-change dynamics
across four different competitive team sports suggests that this paradigm has
much to offer for the general issue of understanding human competitive
dynamics. Moreover, the discrepancies between the empirical data and our
predictions in sports other than basketball may help identify alternative
mechanisms for scoring dynamics that do not involve random walks. Although
our treatment focused on team-level statistics, another interesting direction
for future work would be to focus on understanding how individual behaviors
within such social competitions aggregate up to produce a system that behaves
effectively like a simple random walk. Exploring these and other hypotheses,
and developing more accurate models for scoring dynamics, are potentially
fruitful directions for further work.
\begin{acknowledgements}
The authors thank Sharad Goel and Sears Merritt for helpful
conversations. Financial support for this research was provided in part by
Grant No.\ DMR-1205797 (SR) and Grant No.\ AGS-1331490 (MK) from the NSF,
and a grant from the James S.\ McDonnell Foundation (AC). By mutual agreement,
author order was determined randomly, according to when the maximum lead
size occurred during the Denver Nuggets--Milwaukee Bucks NBA game on 3 March 2015.
\end{acknowledgements}
\section{Introduction}
\citet{Ferrari2003} introduced {\it velocit\`a ascensionale media} (VAM), mean ascent velocity, which is a measurement of the rate of ascent and quantifies a cyclist's climbing ability.
Commonly, VAM is built-in into a GPS bike computer~\citep[e.g.,][]{PolarM460}, as an instantaneous information, and is available {\it a posteriori} for many Strava segments~\citep{Strava2018}.
It is used in training manuals~\citep[e.g.,][]{Rotondi2012} and touring books~\citep[e.g.,][]{Barrett2016}.
An application to racing strategies is described by~\citet{Gifford2011}, for Stage~16 of the Tour de France in 2000.
Ferrari advised the USPS team not to chase Marco Pantani, in spite of a large gap, based on VAM measurements, which he considered unsustainable.
Ferrari's prediction was correct: Pantani was overtaken by the peloton.
Recently, due to the popularity of Everesting, VAM has become a point of conversation among cyclists eager to attempt this challenge.
In spite of its presence within the cycling community, VAM is rarely considered as a research topic.
At the time of this article, an advanced search on Google Scholar for articles containing the exact phrase ``mean ascent velocity'' yields fewer than one hundred results, while restricting the search to include ``cycling'' yields fewer than ten.
Therein, the sole consideration of VAM is by~\citet{CintiaEtAl2013}, for data mining using Strava.
VAM is a value devoid of information concerning the conditions under which it is obtained.
For instance, the same VAM value can be obtained by slowly climbing a steep hill or by riding quickly up a moderate incline.
In this article, we use mathematical physics to provide a novel approach to enhance our understanding of VAM and to examine conditions for its maximization.
We begin this article by presenting a model that accounts for the power to maintain a given ground speed.
Then, we prove that the maximization of VAM requires a constant slope, as a consequence of solving the brachistochrone problem.
We proceed by using the model to express VAM in terms of ground speed and slope, and prove, for a fixed power output, that VAM increases monotonically with the slope.
To gain an insight into these results, we discuss a numerical example, wherein we vary the values of cadence and gear ratio.
We conclude by suggesting possible consequences of our results for VAM-maximization strategies.
\section{Method}
In this article, we use methods rooted in mathematical physics and logical proof to determine the conditions required to maximize VAM, which we demonstrate in two steps.
The first step is to obtain the curve that minimizes the time between two points along an incline.
In other words, we consider the brachistochrone problem to determine the optimal hill conditions that minimize the ascent time.
The solution to the problem, which is steeped in the history of physics, laid the groundwork for the mid-eighteenth-century development of the calculus of variations by Leonhard Euler and Joseph-Louis Lagrange~\citep[e.g.,][pp.~19--20]{Coopersmith2017}.
In our solution, we avail of these developments and use the Euler-Lagrange equation to determine that the brachistochrone for the ascent is a straight line, which, given the nature of typical solutions to brachistochrone problems, is an unexpected result.
The second step requires a mathematical formulation to model a cyclist climbing a straight-line incline.
To that end, we consider a model that accounts for the power required to overcome the forces, $F_{\!\leftarrow}$\,, that oppose the translational motion of a bicycle-cyclist system along a hill of constant slope with ground speed~$V$~\citep[e.g.,][]{DanekEtAl2021}.
The model is
\begin{align}
\label{eq:PV}
P
&=
F_{\!\leftarrow}\,V
\\
\nonumber
&=
\quad\frac{
\overbrace{\vphantom{\left(V\right)^2}\,m\,g\sin\theta\,}^\text{change in elevation}
+
\!\!\!\overbrace{\vphantom{\left(V\right)^2}\quad m\,a\quad}^\text{change in speed}
+
\overbrace{\vphantom{\left(V\right)^2}
{\rm C_{rr}}\!\!\!\underbrace{\,m\,g\cos\theta}_\text{normal force}
}^\text{rolling resistance}
+
\overbrace{\vphantom{\left(V\right)^2}
\,\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,
(
\!\!\underbrace{
V+w_\leftarrow
}_\text{air-flow speed}\!\!
)^{2}\,
}^\text{air resistance}
}{
\underbrace{\quad1-\lambda\quad}_\text{drivetrain efficiency}
}\,V\,,
\end{align}
where $m$~is the mass of the bicycle-cyclist system, $g$~is the acceleration due to gravity, $\theta$ is the slope of the incline, $a$~is the change of speed, $\rm C_{rr}$~is the rolling-resistance coefficient, $\rm C_{d}A$~is the air-resistance coefficient, $\rho$~is the air density, $w_{\leftarrow}$~is the wind component opposing the motion and $\lambda$ is the drivetrain-resistance coefficient.
Considering an idealized setup\,---\,that is, a steady ride,~$a=0\,{\rm m/s^2}$\,, in windless conditions,~$w_\leftarrow=0\,{\rm m/s}$\,---\,we write model~\eqref{eq:PV} as
\begin{equation}
\label{eq:P}
P=\dfrac{mg\left({\rm C_{rr}}\cos\theta+\sin\theta\right)+\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,V^2}{1-\lambda}V\,.
\end{equation}
Finally, to present the theorems associated with the first and second steps, we define
\begin{equation}
\label{eq:VAM}
{\rm VAM} := 3600\,V(\theta)\,\sin\theta\,,
\end{equation}
which is stated in metres per hour.
Thus, for a given power, $P = P_0$\,, and using expression~\eqref{eq:P}, we write
\begin{equation}
\label{eq:PowerConstraint}
V^{3}+A(\theta)\,V + B = 0\,,
\end{equation}
where
\begin{equation}
\label{eq:a}
A(\theta) = \frac{mg\left({\rm C_{rr}}\cos\theta+\sin\theta\right)}{\tfrac{1}{2}{\rm C_{d}A}\,\rho}
\end{equation}
and
\begin{equation}
\label{eq:b}
B = -\frac{\left(1-\lambda\right)P_{0}}{\tfrac{1}{2}{\rm C_{d}A}\,\rho}\,.
\end{equation}
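For a given slope, expressions~\eqref{eq:PowerConstraint}--\eqref{eq:b} can be
solved numerically in a few lines; the following Python sketch, using the
parameter values adopted later in the numerical example, returns the
corresponding VAM.
\begin{verbatim}
# Sketch: solve V^3 + A(theta)*V + B = 0 for the ground speed V at a
# given slope theta and evaluate VAM = 3600 V sin(theta).  Parameter
# values follow those adopted in the numerical example below.
import numpy as np

m, g = 111.0, 9.81            # kg, m/s^2
CdA, Crr = 0.2702, 0.01298    # m^2, dimensionless
lam, rho, P0 = 0.02979, 1.168, 375.0

def vam(theta):
    half = 0.5 * CdA * rho
    A = m * g * (Crr * np.cos(theta) + np.sin(theta)) / half
    B = -(1.0 - lam) * P0 / half
    roots = np.roots([1.0, 0.0, A, B])
    V = max(r.real for r in roots if abs(r.imag) < 1e-9)  # unique real root
    return 3600.0 * V * np.sin(theta)

print(round(vam(np.arctan(0.08)), 1))   # VAM on an 8 % grade
\end{verbatim}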
\section{Results}
\subsection{Brachistochrone problem}
\label{sec:brachistochrone}
Let us consider the brachistochrone problem, which, herein, consists of finding the curve along which the ascent between two points\,---\,under the assumption of fixed power\,---\,takes the least amount of time.
To solve the problem, we prove the following theorem.\\
\begin{theorem}
\label{thm:Brachistochrone}
The path of quickest ascent from $(0,0)$ to $(R,H)$\,, which is an elevation gain of~$H$ over a horizontal distance of~$R$\,, is the straight line, $y = (H/R)\,x$\,, where $0\leqslant x\leqslant R$ and $y(R) = H$\,.
\end{theorem}
\begin{proof}
The relation between speed, slope and power is stated by expressions~\eqref{eq:PowerConstraint},~\eqref{eq:a} and~\eqref{eq:b}, which determines the ground speed $V=V(\theta)$\,, where $\theta$ is the slope, namely,
\begin{equation*}
\tan\theta = \frac{{\rm d}y}{{\rm d}x}\,,\quad
\theta = \arctan\left(\frac{{\rm d}y}{{\rm d}x}\right)\,.
\end{equation*}
The traveltime over an infinitesimal distance, ${\rm d}s$\,, is ${\rm d}t = {\rm d}s/V$\,.
Since ${\rm d}s^2={\rm d}x^2+{\rm d}y^2$\,, we write
\begin{equation*}
{\rm d}t = \frac{\sqrt{1+\left(\dfrac{{\rm d}y}{{\rm d}x}\right)^{2}}\,{\rm d}x}{V(\theta)}
= \frac{\sqrt{1+\left(y'\right)^{2}}}{V(\arctan(y'))}\,{\rm d}x\,,
\end{equation*}
where $y':={\rm d}y/{\rm d}x$\,.
Hence, the ascent time is
\begin{equation*}
T = \int\limits_0^R\frac{\sqrt{1+\left(y'\right)^{2}}}{V(\arctan(y'))}\,{\rm d}x
=:\int\limits_0^R\underbrace{\frac{\sqrt{1+p^{2}}}{V(\arctan(p))}}_F\,{\rm d}x\,,
\end{equation*}
where $p:=y'(x)$\,.
Thus, we write the integrand as a function of three independent variables,
\begin{equation*}
T=: \int\limits_0^RF(x,y,p)\,{\rm d}x\,.
\end{equation*}
To minimize the traveltime, we invoke the Euler-Lagrange equation,
\begin{equation*}
\frac{\partial}{\partial y}F(x,y,p) = \frac{\rm d}{{\rm d}x}\frac{\partial}{\partial p}F(x,y,p)\,.
\end{equation*}
Since $F$ is not an explicit function of $y$\,, the left-hand side is zero,
\begin{equation}
\label{eq:ConsQuant}
0 = \frac{\rm d}{{\rm d}x}\left\{\frac{\partial}{\partial p}\frac{\sqrt{1+p^{2}}}{V(\arctan(p))}\right\}\,,
\end{equation}
which means that the term in braces is a constant.
To proceed, we use the fact that
\begin{equation*}
p = \tan\theta \implies \frac{\partial}{\partial p} = \frac{{\rm d}\theta}{{\rm d}p}\frac{\partial}{\partial\theta} = \cos^{2}\theta\,\frac{\partial}{\partial\theta}
\end{equation*}
and
\begin{align*}
\frac{\partial}{\partial p}\frac{\sqrt{1+p^{2}}}{V(\arctan(p))}
&= \cos^{2}\theta\,\frac{\partial}{\partial\theta}\frac{\sqrt{1+p^{2}}}{V(\arctan(p))}
= \cos^{2}\theta\,\frac{\partial}{\partial\theta}\frac{\sqrt{1+\tan^{2}\theta}}{V(\theta)}
= \cos^{2}\theta\,\frac{\partial}{\partial\theta}\frac{\sec\theta}{V(\theta)} \\
&= \cos^{2}\theta\,\frac{\rm d}{{\rm d}\theta}\frac{1}{\cos\theta\,V(\theta)}\,,
\end{align*}
for $0\leqslant\theta\leqslant\pi/2$\,.
Thus, expression~(\ref{eq:ConsQuant}) is
\begin{equation*}
\frac{\rm d}{{\rm d}x}\left(\cos^{2}\theta\,\frac{\rm d}{{\rm d}\theta}\frac{1}{\cos\theta\,V(\theta)}\right)\equiv0\,,
\end{equation*}
which, using the fact that
\begin{equation*}
\theta = \arctan(y'(x)) \implies \frac{{\rm d}\theta}{{\rm d}x} = \frac{1}{1+(y'(x))^{2}}\,y''(x)
\quad{\rm and}\quad
\frac{\rm d}{{\rm d}x} = \frac{{\rm d}\theta}{{\rm d}x}\frac{\rm d}{{\rm d}\theta}\,,
\end{equation*}
we write as
\begin{equation*}
\frac{y''(x)}{1+(y'(x))^{2}}\frac{\rm d}{{\rm d}\theta}\left(\cos^{2}\theta\,\frac{\rm d}{{\rm d}\theta}\frac{1}{\cos\theta\,V(\theta)}\right)\equiv0\,.
\end{equation*}
Hence, there are two possibilities.
Either
\begin{equation*}
\underbrace{y''(x)\equiv0}_{\rm(a)}
\qquad{\rm or}\qquad
\underbrace{\quad\frac{\rm d}{{\rm d}\theta}\left(\cos^{2}\theta\,\frac{\rm d}{{\rm d}\theta}\frac{1}{\cos\theta\,V(\theta)}\right)\equiv0}_{\rm(b)}\,.
\end{equation*}
We claim that (b) is impossible.
Indeed, (b) holds if and only if
\begin{equation*}
\cos^{2}\theta\,\frac{\rm d}{{\rm d}\theta}\frac{1}{\cos\theta\,V(\theta)}\equiv a\,,
\end{equation*}
where $a$ is a constant.
It follows that
\begin{equation*}
\frac{\rm d}{{\rm d}\theta}\frac{1}{\cos\theta\,V(\theta)}= a\sec^{2}\theta
\iff\frac{1}{\cos\theta\,V(\theta)}= a\tan\theta + b\,,
\end{equation*}
where $a$ and $b$ are constants, and
\begin{equation*}
\cos\theta\,V(\theta)=\frac{1}{a\tan\theta + b}
\iff V(\theta)=\frac{1}{a\sin\theta + b\cos\theta}\,.
\end{equation*}
This cannot be the case, since it implies that there exists a value of $\theta^{\ast}$ for which $a\sin\theta^{\ast} + b\cos\theta^{\ast} = 0$ and $V(\theta^{\ast})\to\infty$\,.
But $V^{3}+A(\theta)\,V+B=0$\,, so any root of this cubic equation is such that
\begin{equation*}
|V|\leqslant\max\{1,|A(\theta)| + |B|\}\,.
\end{equation*}
Thus, it follows that (a), namely, $y''(x)\equiv0$\,, must hold, which implies that $y(x)$ is a straight line.
For the ascent from $(0,0)$ to $(R,H)$\,, the line is $y(x) = (H/R)\,x$\,, as required.
\end{proof}
In the context of VAM, there is a corollary of Theorem~\ref{thm:Brachistochrone}.\\
\begin{corollary}
The path of the quickest gain of altitude between two points, $(0,0)$ and $(R,H)$\,, is the straight line, $y = (H/R)\,x$\,, where $0\leqslant x\leqslant R$ and $y(R) = H$\,.
\end{corollary}
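As a numerical illustration of Theorem~\ref{thm:Brachistochrone} and its
corollary, the following sketch compares the ascent time along the straight
line with the ascent time along a two-segment path between the same endpoints,
at the same fixed power; the climb dimensions are illustrative and the
parameter values are those adopted later in the text.
\begin{verbatim}
# Sketch: ascent time along the straight line versus a two-segment path
# between (0, 0) and (R, H), at fixed power; a numerical illustration of
# the theorem above, not part of its proof.
import numpy as np

m, g = 111.0, 9.81
CdA, Crr, lam, rho, P0 = 0.2702, 0.01298, 0.02979, 1.168, 375.0

def speed(theta):
    half = 0.5 * CdA * rho
    A = m * g * (Crr * np.cos(theta) + np.sin(theta)) / half
    B = -(1.0 - lam) * P0 / half
    return max(r.real for r in np.roots([1.0, 0.0, A, B])
               if abs(r.imag) < 1e-9)

def ascent_time(waypoints):
    t = 0.0
    for (x0, y0), (x1, y1) in zip(waypoints[:-1], waypoints[1:]):
        dx, dy = x1 - x0, y1 - y0
        t += np.hypot(dx, dy) / speed(np.arctan2(dy, dx))
    return t

R, H = 8000.0, 640.0                       # an 8 km climb gaining 640 m
straight = [(0.0, 0.0), (R, H)]
kinked = [(0.0, 0.0), (R / 2, 0.75 * H), (R, H)]     # steep, then gentle
print(ascent_time(straight) < ascent_time(kinked))   # expected: True
\end{verbatim}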
\subsection{Monotonic increase of VAM}
\label{sec:VAMmaximization}
Since the path of quickest ascent is a straight line, let us consider model~\eqref{eq:P} to determine the behaviour of VAM\,---\,under the assumption of fixed power\,---\,for slopes of interest.
To determine the behaviour, we prove the following theorem.\\
\begin{theorem}
\label{thm:Theorem}
For a given power, $P = P_0$\,, $V(\theta)\sin\theta$ increases monotonically for $0\leqslant\theta\leqslant\pi/2$\,.
\end{theorem}
\begin{proof}
Expressions~\eqref{eq:PowerConstraint},~\eqref{eq:a} and~\eqref{eq:b} determine $V=V(\theta)$\,.
Then,
\begin{equation}
\label{eq:d_Vsintheta_dtheta_temp1}
\frac{\rm d}{{\rm d}\theta}\left(V(\theta)\sin\theta\right) = V'(\theta)\sin\theta + V(\theta)\cos\theta\,,
\end{equation}
where ${}'$ is a derivative with respect to $\theta$\,.
The derivative of expression~\eqref{eq:PowerConstraint} is
\begin{equation*}
3\,V^{2}(\theta)V'(\theta) + A'(\theta)\,V(\theta) + A(\theta)V'(\theta)=0\,,
\end{equation*}
from which it follows that
\begin{equation}
\label{eq:V'(theta)}
V'(\theta) = -\frac{A'(\theta)V(\theta)}{3\,V^{2}(\theta)+A(\theta)}\,.
\end{equation}
Using expression~\eqref{eq:V'(theta)} in expression~\eqref{eq:d_Vsintheta_dtheta_temp1}, and simplifying, we obtain
\begin{equation}
\label{eq:d_Vsintheta_dtheta_temp2}
\frac{\rm d}{{\rm d}\theta}\left(V(\theta)\sin\theta\right) = V(\theta)\,\frac{3\,V^{2}(\theta)\cos\theta-A'(\theta)\sin\theta+A(\theta)\cos\theta}{3\,V^{2}(\theta)+A(\theta)}\,.
\end{equation}
Upon using expression~\eqref{eq:a}, the latter two terms in the numerator of expression~\eqref{eq:d_Vsintheta_dtheta_temp2} simplify to
\begin{align*}
-A'&(\theta)\sin\theta+A(\theta)\cos\theta \\
&= \frac{mg}{\tfrac{1}{2}{\rm C_{d}A}\,\rho}\Big(-\left(\cos\theta-{\rm C_{rr}}\sin\theta\right)\sin\theta+\left(\sin\theta+{\rm C_{rr}}\cos\theta\right)\cos\theta\Big)
\\
&= \frac{mg\,{\rm C_{rr}}}{\tfrac{1}{2}{\rm C_{d}A}\,\rho}\,.
\end{align*}
Hence,
\begin{equation}
\label{eq:d_Vsintheta_dtheta}
\frac{\rm d}{{\rm d}\theta}\left(V(\theta)\sin\theta\right)
=
V(\theta)\,\frac{3\,V^{2}(\theta)\cos\theta+\dfrac{mg\,{\rm C_{rr}}}{\tfrac{1}{2}{\rm C_{d}A}\,\rho}}{3\,V^{2}(\theta)+A(\theta)}>0\,,
\quad{\rm for}\quad
0\leqslant\theta\leqslant\frac{\pi}{2}\,,
\end{equation}
as required.
Since expression~\eqref{eq:d_Vsintheta_dtheta} is positive, VAM is\,---\,within the slope range of interest\,---\,a monotonically increasing function of $\theta$\,.
\end{proof}
From this theorem, it follows that the steeper the incline that a rider can climb with a given power, the greater the value of the corresponding VAM, as illustrated in Figure~\ref{fig:FigThm1and2}.
\subsection{VAM as a function of cadence and gear ratio}
In view of Theorem~\ref{thm:Theorem}, we conclude that, for a fixed power, no value of $\theta$ in the interior of $[\,0,\pi/2\,]$ maximizes expression~\eqref{eq:VAM}; hence, VAM has no local maximum.
In other words\,---\,based on expression~\eqref{eq:P}, alone\,---\,we cannot specify the optimal slope to maximize VAM.
However, this is not the case if we include other considerations, such as
\begin{equation}
\label{eq:fv}
P = f\,v\,,
\end{equation}
where $f$ is the force applied to the pedals and $v$ is the circumferential speed of the pedals.
These are measurable quantities; they need not be modelled in terms of the surrounding conditions, such as the mass of the rider, strength and direction of the wind or the steepness of an uphill.
Herein, the circumferential speed is
\begin{equation}
\label{eq:v}
v=2\pi\,\ell_c\,c\,,
\end{equation}
where $\ell_c$ is the crank length, $g_r$ is the gear ratio, $r_w$ is the wheel radius and $c$ is the cadence, and the corresponding ground speed is
\begin{equation}
\label{eq:V}
V=2\pi\,r_{w}\,c\,g_r\,;
\end{equation}
these expressions allow us to examine VAM as a function of the cadence and gear ratio.
To examine VAM as a function of cadence and gear ratio, we require typical values for the parameters that constitute the power model.
Thus, we consider~\citet[Section~3]{DanekEtAl2021}, which details a numerical optimization process for the estimation of $\rm C_dA$\,, $\rm C_{rr}$ and $\lambda$ using model~\eqref{eq:PV} along an 8-kilometre climb of nearly constant inclination in Northwest Italy.
The pertinent values are as follows\footnote{%
For the convenience of the reader, all numerical values resulting from operations on model parameters are rounded to four significant figures.
}.
For the bicycle-cyclist system, $m=111\,{\rm kg}$, $\rm C_{d}A=0.2702\,{\rm m}{}^2$, $\rm C_{rr}=0.01298$\,, $\lambda=0.02979$\,.
For the bicycle, $\ell_c = 0.175\,{\rm m}$, $g_{r}=1.5$ and $r_w = 0.335\,{\rm m}$.
For the external conditions, $g=9.81\,{\rm m/s^2}$ and $\rho=1.168\,{\rm kg/m^3}$, which corresponds to an altitude of 400\,m.
Let us suppose that the cyclist maintains a power of $P_0=375\,{\rm W}$ during the climb and that the cadence is $c = 1\,{\rm rev/s}$\,, which is 60\,rpm, and\,---\,in accordance with expression~\eqref{eq:v}\,---\,results in $v = 1.100\,{\rm m/s}$.
The force that the rider must apply to the pedals is $f = 341.0\,{\rm N}$.
In accordance with expression~\eqref{eq:V}, the ground speed is $V=3.157\,{\rm m/s}$ and, inserting $V$ into expression~\eqref{eq:PowerConstraint}, we obtain
\begin{equation*}
46.00\cos\theta + 3544\sin\theta - 369.9 = 0\,,
\end{equation*}
whose solution is $\theta = 0.09160$\,rad\,$=5.248^\circ$\,, which results in ${\rm VAM} = 1040$\,m/h; herein, $\theta$ corresponds to a grade of 9.184\,\%\,.
Now, let us consider the effect on this value of VAM due to varying the cadence and gear ratio.
To do so, we use expression~(\ref{eq:V}) in expressions~\eqref{eq:VAM} and \eqref{eq:PowerConstraint} to obtain
\begin{equation}
\label{eq:VAM(c,g_r,theta)}
{\rm VAM}=7578\,c\,g_r\sin\theta
\end{equation}
and
\begin{equation}
\label{eq:F(c,g_r,theta)}
1.517\,(c\,g_r)^3 + 30.66\,c\,g_{r}\cos\theta + 2362\,c\,g_{r}\sin\theta - 375.0 = 0\,,
\end{equation}
respectively.
In both expressions~(\ref{eq:VAM(c,g_r,theta)}) and (\ref{eq:F(c,g_r,theta)}), $c$ and $g_r$ appear as a product.
For each $c\,g_r$ product, we solve equation~\eqref{eq:F(c,g_r,theta)} to obtain the corresponding value of~$\theta$\,, which we use in expression~\eqref{eq:VAM(c,g_r,theta)} to calculate the VAM.
We consider $c\,g_r\in[0.5\,,3.75]$\,; the lower limit represents $c=0.5$\,, which is 30\,rpm, and $g_r = 1$\,; the upper limit represents $c=1.5$\,, which is 90\,rpm, and $g_r = 2.5$\,.
Following expression~(\ref{eq:v}), the corresponding circumferential speeds are $v=0.5498$\,m/s and $v=1.649$\,m/s\,.
Since $P_0=375$\,W\,, in accordance with expression~(\ref{eq:fv}), the forces applied to the pedals are $f=682.1$\,N and $f=227.4$\,N\,, respectively.
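The values plotted in Figure~\ref{fig:VAM} can be reproduced with a short
script; the sketch below works directly with the physical parameters listed
above rather than with the rounded coefficients of
expressions~\eqref{eq:VAM(c,g_r,theta)} and~\eqref{eq:F(c,g_r,theta)}, so
small rounding differences may appear.
\begin{verbatim}
# Sketch: VAM as a function of the cadence--gear-ratio product c*g_r at
# fixed power, using the raw parameters listed above; small differences
# with respect to the rounded coefficients in the text may appear.
import numpy as np
from scipy.optimize import brentq

m, g, CdA, Crr = 111.0, 9.81, 0.2702, 0.01298
lam, rho, P0, r_w = 0.02979, 1.168, 375.0, 0.335

def vam_of_cgr(cgr):
    V = 2.0 * np.pi * r_w * cgr        # ground speed V = 2*pi*r_w*(c*g_r)
    drag = 0.5 * CdA * rho * V**3
    def balance(theta):                # power balance, solved for theta
        return m * g * (Crr * np.cos(theta) + np.sin(theta)) * V \
               - ((1.0 - lam) * P0 - drag)
    theta = brentq(balance, 0.0, np.pi / 2.0)
    return 3600.0 * V * np.sin(theta)

for cgr in (0.5, 1.5, 2.25, 3.0, 3.75):
    print(cgr, round(vam_of_cgr(cgr), 1))
\end{verbatim}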
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{FigVAMvsCadenceGearRatio.pdf}
\caption{\small%
VAM as function of $c\,g_r$\,, with $P_0=375$\,W}
\label{fig:VAM}
\end{figure}
Examining Figure~\ref{fig:VAM}, we conclude that lowering $c\,g_r$ increases the slope of the climbable incline and, hence\,---\,for a given value of power\,---\,increases the VAM, as expected in view of Theorem~\ref{thm:Theorem}.
The VAM is determined not only by the power sustainable during a climb, but also the steepness of that climb.
As stated in Theorem~\ref{thm:Theorem}, the steeper the slope that can be climbed with a given power, the greater the VAM.
Hence, as illustrated in Figure~\ref{fig:VAM}, the highest value corresponds to the lowest cadence-gear product.
A maximization of VAM requires an optimization of the force applied to the pedals and their circumferential speed, along a slope, in order to maintain a high power.
\section{Discussion}
\begin{wrapfigure}{r}{0.525\textwidth}
\centering
\includegraphics[width=0.475\textwidth]{FigThm1and2}
\caption{\small%
Illustration of Theorems~\ref{thm:Brachistochrone} and \ref{thm:Theorem}}
\label{fig:FigThm1and2}
\end{wrapfigure}
Let us illustrate Theorems~\ref{thm:Brachistochrone} and~\ref{thm:Theorem} in the context of climbs chosen for Everesting, which consists of riding an uphill repeatedly in a single activity until reaching the total ascent of 8,848~metres.
In June of 2020, Lachlan Morton \citep{MortonRecord} established an Everesting record.
To do so\,---\,in a manner consistent with Theorem~\ref{thm:Theorem}\,---\,he chose a steep hill, whose average gradient is~$11.1\%$\,; in a manner consistent with Theorem~\ref{thm:Brachistochrone} and its corollary, the gradient is also nearly constant, with a maximum of $13.2\%$\,.
A more recent record\,---\,by Sean Gardner~\citep{GardnerRecord} in October of 2020\,---\,was established on a hill whose average and maximum gradients are $15.5\%$ and $22.6\%$\,, respectively.
According to Theorem~\ref{thm:Brachistochrone} and its corollary, which are illustrated in Figure~\ref{fig:FigThm1and2}, a more steady uphill would have been preferable.
Let us emphasize that Theorem~\ref{thm:Brachistochrone} and its corollary are statements of mathematical physics, whose implications might be adjusted for an optimal strategy.
For instance, it might be preferable\,---\,from a physiological viewpoint\,---\,to vary the slope to allow periods of respite.
Also, let us revisit expression~\eqref{eq:VAM(c,g_r,theta)}, whose general form, in terms of $c\,g_r$\,, is
\begin{equation*}
8\,\pi^{3}r_{w}^{3}(c\,g_r)^{3} + \frac{2\,\pi\,r_{w}\,m\,g\,(\sin\theta+{\rm C_{rr}}\,\cos\theta)}{\tfrac{1}{2}{\rm C_{d}A}\,\rho}\,c\,g_{r} - \frac{P_{0}(1-\lambda)}{\tfrac{1}{2}{\rm C_{d}A}\,\rho} = 0
\,.
\end{equation*}
Using common values for most quantities, namely, \mbox{${\rm C_{d}A} = 0.27\,{\rm m}^2$}, \mbox{${\rm C_{rr}} = 0.02$}, \mbox{$\lambda = 0.03$}, \mbox{$\rho = 1.2\,{\rm kg/m}^3$} and \mbox{$r_{w} = 0.335\,{\rm m}$}, and invoking, for the sake of a concise expression below, the small-angle approximation, which results\,---\,in radians\,---\,in $\sin\theta\approx\theta$ and $\cos\theta\approx1-\theta^{2}/2$\,, we obtain
\begin{equation*}
\theta = 50-15.5610\sqrt{10.3326+\frac{0.0302\,c\,g_r-0.0194\,P_0}{m\,c\,g_r}}\,,
\end{equation*}
where $m$ is the mass of the bicycle-cyclist system, and the product, $c\,g_r$\,, might be chosen to correspond to the maximum sustainable power,~$P_0$\,.
This product is related to power by expression~(\ref{eq:fv}), since $c$ is proportional to $v$\,, and $g_r$ to $f$\,.
For any value of $c\,g_r$\,, there is a unique value of $P_0$\,, which depends only on the power of a rider.
As expected, and as illustrated in Figure~\ref{fig:FigTheta}, the slope decreases with~$m$\,, {\it ceteris paribus}.
Also, as expected, and as illustrated in Figure~\ref{fig:FigVAM}, so does VAM.
For both figures, the abscissa, expressed in kilograms, can be viewed as the power-to-weight ratio from $6.25$ to $3.41\,{\rm W/kg}$\,, whose limits represent the values sustainable\,---\,for about an hour\,---\,by an elite and a moderate rider, respectively.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.4]{Figthetavsm.pdf}
\caption{\small Slope}
\label{fig:FigTheta}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.4]{FigVAMvsm.pdf}
\caption{\small VAM}
\label{fig:FigVAM}
\end{subfigure}
\caption{\small
Slope and VAM as functions of $m$\,, with $P_0=375$\,W and $c\,g_r=2.25$}
\label{fig:CronoMaps}
\end{figure}
\section{Conclusions}
The presented results constitute a mathematical-physics background upon which various strategies for VAM maximization can be examined.
Following fundamental properties of the calculus of variations and the rigor of mathematical proof, we prove that\,---\,for a fixed power\,---\,the quickest ascent between two points is a straight line and that VAM increases monotonically with slope.
The latter conclusion yields that there is no specific slope value that maximizes VAM.
Rather, we demonstrate that its maximization is determined by the maximum power sustainable by the cyclist during a climb.
It follows that the lower the product of cadence and gear ratio, $c\,g_r$, the greater the slope of the climbable incline for a fixed power and, in turn, the greater the VAM.
In the context of physiological considerations, one could examine the maximum sustainable power as a function of both the force applied to the pedals and their circumferential speed.
For instance, depending on the physical ability of the cyclist, one might choose a less steep slope to allow a higher $c\,g_r$ product, whose value allows the cyclist to generate and sustain a higher power.
One might also consider a very short and an exceptionally steep slope, while keeping in mind the issue of measurement errors due to the shortness of the segment itself.
In all cases, however, the slope needs to be constant.
\section*{Acknowledgements}
We wish to acknowledge Elena Patarini, for her graphic support, Roberto Lauciello, for his artistic contribution, and Favero Electronics for inspiring this study by their technological advances and for supporting this work by providing us with their latest model of Assioma Duo power meters.
\bibliographystyle{apa}
\section{Introduction}
\label{sec:intro}
Most data used in sports comes from either sensors or manual annotations.
Using sensors to obtain data points in sports has several drawbacks, with the most severe being the negative impact on player performance. Furthermore, many sensors are not authorized to be used in competitive games. For most use cases, acquiring data requires non-intrusive technologies - for example, manual annotations. However, manual annotations are very expensive to produce and prone to errors. Manually acquired data also reduces the possibility of systematic analysis in many problems and is impossible for others such as biomechanical analysis.
We believe Computer Vision can have an important role in improving the quality and quantity of data available in sports, both from competitive games and training drills.
One of the main goals of Computer Vision is to extract information from images. The recent link between Computer Vision and Deep Learning has led to remarkably high efficiency in some of the hardest to execute tasks: classification (i.e., what is the object in the image), detection (i.e., what is the object and where is it located in the image), and detection of specific patterns (e.g., pose/gait estimation).
The extraction of information from images is essential for many use cases. Autonomous vehicles, which need incredibly high accuracy for the detection of road elements, can save thousands of lives on the road and allow users to be productive during commuting time. In agriculture, increased automation will decrease the cost of goods, allowing us to feed a growing population. In healthcare, automated, more efficient, and convenient disease detection will save thousands of patients from the problems related to late diagnosis. There are many resources being invested into Computer Vision, and with good reason.
In this article, we present a detailed but accessible review of the state-of-the-art methods for Object Detection and Pose Estimation from video, two tasks with the potential to acquire a tremendous amount of valuable data automatically. Data collected will serve as a basis for advanced analytic systems, which today seem out of reach. We present single-view methods that only rely on capturing video from one angle, which has similar performance to multi-view methods while being much simpler to use.
Current commercial data collection solutions fall into three categories: (1) collection of sensor data (e.g., GPS and biometrics), (2) collection of event data (e.g., shots with information regarding shooting coordinates, where it landed, which foot was used, among others), and (3) vision-based collection systems (e.g., tracking data which contains the x,y coordinates of soccer players over time). Vision-based collection systems provide automated and, if correctly implemented, high-precision data. Most data collection methodologies in sports are manual, using very inefficient processes. Since processes are not standardized, much of the data obtained by the clubs cannot be shared to create more robust analytic systems, and data is only relevant for a small range of applications.
Computer Vision can already automate some data acquisition procedures in sports; for example, obtaining tracking data from video. Tracking data is one example where a new category of data in sports leads to fast growth in research, serving as input for physics-based systems \cite{spearman_physics-based_2017}, Machine Learning systems \cite{tuyls_game_2021}, fan engagement tools, or analytic frameworks.
Sports literature has substantial indications that using pose data is the next big step for sports data \cite{tuyls_game_2021}. There are several use cases, such as detecting player orientation \cite{arbues-sanguesa_using_2020}, biomechanical analysis, augmented/virtual reality analytics \cite{rematas_soccer_2018}, and injury prevention, to name a few.
Therefore, it is vital to understand which Computer Vision tools are available and how they can create value in sports. Automating processes will allow us to extract data quickly and accurately, which will enable many applications that use such data. Furthermore, many data types were previously hard to collect (e.g., gait), which means embracing new ways to collect data in sports can open many future possibilities.
To further illustrate the use of data acquired through Computer Vision, we present a use case where we aim to predict the speed of a soccer shot based only on pose data. With simple, plug-and-play Machine Learning tools, we reach a 67\% correlation with shot power. This performance indicates a high potential for exploring the insights that Machine Learning models provide, serving as a basis for potentially building recommendation systems or innovative analytics tools.
In this article, we make the following contributions:
\begin{itemize}
\item Section \ref{sec:litrev} presents a review of the literature in Computer Vision. First, it presents the basic principles of Machine Learning and computer vision. Then it describes advanced topics such as Image Recognition, Object Detection, 2D and 3D Pose Estimation.
\item Section \ref{sec:expsetup} describes a use case of biomechanics analysis, predicting shot speed using only pose data. Then, Section \ref{sec:resultsdiscussion} presents and discusses the results and what they mean in the context of sports sciences.
\end{itemize}
\section{Literature Review} \label{sec:litrev}
Computer Vision is one of the main beneficiaries of the recent Deep Learning boom \cite{lecun_deep_2015}. Many traditional Computer Vision techniques required sophisticated techniques for image pre-processing \cite{aggarwal_human_2011}. Deep Learning removed the emphasis on image pre-processing to leverage the high computational power available nowadays, leading to enormous performance gains.
\subsection{Machine Learning and Deep Learning}
To better understand the technologies presented, it is essential to understand the goal of Machine Learning. Machine Learning aims to find a function that maps input data (for example, track condition, caloric intake, and weather) into an output (how an athlete will perform in a 100 meters race). Instead of this function being designed by a programmer, the Machine Learning method is responsible for learning this function by itself, using data.
Different Machine Learning algorithms have different approaches to finding the mapping function. For example, Linear Regression builds the mapping from linear relationships, while Decision Trees build it from conditional statements.
The most flexible learning algorithm is Deep Learning (also known as Deep Artificial Neural Networks or simply Neural Networks). Deep Learning finds highly non-linear relationships in data, which is one of the reasons why this algorithm performs much better than any other algorithm in Computer Vision. The other reason is that Deep Learning scales much better than other algorithms when using large amounts of data.
The simplest Deep Learning algorithm is the multi-layer Perceptron. Perceptrons are linear classifiers that can distinguish between two linearly separated classes (see Figure \ref{fig:001_neural_network_example}). They are similar to linear regression, multiplying each input by the respective weight. But, in the Perceptron, the output passes through an activation function. The activation function is responsible for adding non-linearities to the prediction.
The activation function alone only models a restricted number of non-linearities. To increase the range of functions that our Perceptron can approximate, we need to combine multiple Perceptrons in a network, organized in interconnected layers. For each link between two interconnected layers, we have a specific weight. The procedure to find the network's weights, usually referred to as training, is an optimization problem and uses algorithms such as Gradient Descent. The optimization of weights is a computationally expensive task, but recent hardware and algorithmic advances enabled training networks with millions of nodes that can perform the Computer Vision tasks we will describe in the following sections.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{001_neural_network_example.pdf}
\caption{Comparing the Perceptron with a Multi-layer Perceptron. We observe that the Perceptron separates linearly-separable classes, but we require multiple interconnected Perceptrons to separate non-linearly-separable classes.}
\label{fig:001_neural_network_example}
\end{figure}
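To make the distinction in Figure~\ref{fig:001_neural_network_example}
concrete, the following Python sketch hard-codes the weights of a two-layer
Perceptron that separates the classic XOR problem, which no single Perceptron
can solve; the weights are set by hand purely for illustration, whereas in
practice they are learned with Gradient Descent.
\begin{verbatim}
# Sketch: a two-layer Perceptron (one hidden layer, ReLU activation)
# separating the XOR problem, which a single Perceptron cannot solve.
# Weights are chosen by hand for illustration; in practice they are
# learned with Gradient Descent.
import numpy as np

W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])        # input -> hidden weights
b1 = np.array([0.0, -1.0])         # hidden biases
w2 = np.array([1.0, -2.0])         # hidden -> output weights

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU activation
    return h @ w2                      # output > 0.5 means class 1

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, forward(np.array(x, dtype=float)))
\end{verbatim}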
One Deep Learning architecture is fundamental regarding Computer Vision tasks: Convolutional Neural Networks (CNNs). In Computer Vision, the input given to our models is the raw pixels in an image. Each pixel is a combination of three numbers ranging from 0 to 255. In essence, we are giving our Deep Learning model a matrix whose elements have relationships among themselves. For example, to detect the frontier separating which pixels represent a person and which do not, we need to combine the information from multiple pixels and their respective relative position in an image. This information can substantially simplify the learning procedure.
CNNs solve this problem by applying filters across an image. For example, in Figure \ref{fig:002_convolution_example}, we try to find a feature representing the bottom of the number 1 across an image of a 1. Where the image contains this feature, the Convolutional Layer of the Neural Network produces a positive output. Otherwise, it gives a negative output. Then, the rest of the Neural Network can use information about which features are detected and where they are located to indicate which number is present in the image. As with the weights of the Deep Learning model, we can also learn the values of each filter.
\begin{figure}
\centering
\includegraphics[width=0.35\columnwidth]{002_convolution_example.pdf}
\caption{How a Convolutional filter works. We slide the Convolutional filter (in orange) across the image. If the filter matches a feature of the element we want to find, the filter produces a positive output (in green). Otherwise, it produces a negative output (in red). By combining multiple features, we can learn to recognize all hand-written digits.}
\label{fig:002_convolution_example}
\end{figure}
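The filtering operation of Figure~\ref{fig:002_convolution_example} can be
written explicitly; the sketch below slides a small hand-crafted filter over a
binary image of the digit 1 and reports where the response is largest.
(Strictly speaking, CNNs compute a cross-correlation, that is, the filter is
not flipped, which is also what we do here.)
\begin{verbatim}
# Sketch: sliding a hand-crafted 3x3 filter over a binary image of a
# "1" digit.  The response is largest where the image matches the
# feature encoded in the filter: a vertical stroke ending on a
# horizontal bar, i.e., the base of the digit.
import numpy as np

image = np.array([[0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0]])

kernel = np.array([[-1, 1, -1],
                   [-1, 1, -1],
                   [ 1, 1,  1]])      # crude "base of a 1" detector

H, W = image.shape
kH, kW = kernel.shape
response = np.zeros((H - kH + 1, W - kW + 1))
for i in range(response.shape[0]):
    for j in range(response.shape[1]):
        response[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)

print(response)
print("strongest response at",
      np.unravel_index(response.argmax(), response.shape))
\end{verbatim}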
For a deeper explanation of any Deep Learning concepts, see \cite{lecun_deep_2015, aggarwal_neural_2018}.
\subsection{Computer Vision Basics}
\subsubsection{Datasets} \label{sec:datasets}
As we described before, Computer Vision is nowadays intrinsically linked with Deep Learning. Therefore, there is a high correlation between the efficacy of Computer Vision methods and the amount and quality of data available to train these models.
Deep Learning learns to map input spaces into the desired outputs. The larger the input space, the larger the training dataset we require. A 1080p image contains more than 6M numbers. Therefore, to build accurate Computer Vision models, we require massive datasets. Table \ref{tab:datasets} lists the most relevant datasets used to build the state-of-the-art models presented in this work.
\begin{table}[h]
\caption{Most relevant publicly available datasets for Computer Vision research.}{
\resizebox{\columnwidth}{!}{
\begin{tabular}{lccccc} \hline
\textbf{Dataset} & \textbf{Task} & \textbf{Year} & \textbf{No. Images} & \textbf{No. Classes} & \textbf{Homepage} \\ \hline
OpenImages \cite{kuznetsova_open_2020} & Object, Location & 2017 & 9M & 19 957 & storage.googleapis.com/openimages/web/index.html \\
COCO \cite{fleet_microsoft_2014} & Object, Location, Pose & 2015 & 200k & 80 & cocodataset.org \\
ImageNet \cite{russakovsky_imagenet_2015} & Object, Location & 2014 & 1.4M & 1 000 & image-net.org \\
SUN \cite{xiao_sun_2010} & Object & 2010 & $\sim$131k & 899 & vision.princeton.edu/projects/2010/SUN \\
PASCAL VOC \cite{everingham_pascal_2010} & Object & 2007 & $\sim$11.5k & 20 & host.robots.ox.ac.uk/pascal/VOC \\
MPII HumanPose \cite{andriluka_2d_2014} & Pose & 2014 & 25k & 410 & human-pose.mpi-inf.mpg.de \\
Leeds Sports Poses \cite{johnson_clustered_2010} & Pose & 2010 & 10k & - & dbcollection.readthedocs.io/en/latest/datasets/leeds\_sports\_pose\_extended.html \\
3DPW \cite{ferrari_recovering_2018} & Pose (3D) & 2018 & 51k & - & virtualhumans.mpi-inf.mpg.de/3DPW/ \\
Human3.6M \cite{ionescu_human36m_2014} & Pose (3D) & 2014 & 3.6M & 17 & vision.imar.ro/human3.6m/description.php \\ \hline
\end{tabular}
}}
\label{tab:datasets}
\end{table}
Although Google's OpenImages is the most extensive dataset, it uses machine-labeled data, which is less reliable. Microsoft's COCO is currently the best and most used public dataset for Object Detection and 2D Pose Estimation. Leeds Sports Poses contains data related to sports, making it a useful dataset for fine-tuning models for sports use cases.
Human3.6M and 3DPW are the go-to datasets for 3D Pose Estimation. While Human3.6M provides more training data with diverse poses and actions, the dataset is captured in a controlled environment (laboratory). Meanwhile, 3DPW provides an in-the-wild dataset, which is very useful for evaluating the models' performance in real-world scenarios.
The PapersWithCode\footnotemark[1] platform presents multiple benchmark rankings for each of the datasets described in Table \ref{tab:datasets}, where users can find the best-performing algorithms and the related literature. While we provide up-to-date information, note that this field is rapidly advancing. Therefore, please refer to this platform to find the best models currently available.
\footnotetext[1]{paperswithcode.com}
\subsubsection{Metrics}
Another relevant aspect of training and evaluating models is the choice of evaluation metrics. The datasets presented in Section \ref{sec:datasets} are imbalanced: they provide very few instances of some objects while providing many instances of persons. Learning on imbalanced datasets requires using adequate metrics. For example, we can obtain high accuracy while the models ignore objects with low representation. If the dataset contains 95\% persons and 5\% objects, a model that is 100\% accurate for detecting persons and 0\% accurate for detecting objects would achieve an accuracy of 95\% without actually detecting a single object.
Since Object Detection works by predicting the bounding box around an object (see Figure \ref{fig:003_IOU_example}), we need to first define how we measure a successful prediction. To measure how well a model found the bounding box, we calculate the Intersection over Union (IoU), Equation \ref{eq:iou}.
Imagine we have two bounding boxes (hereafter simply boxes), one being the ground truth and the other being the prediction. The Area of Intersection is where both boxes overlap, and the Area of Union is the area covered by at least one of the boxes. If the IoU is above a predefined threshold, we consider the prediction successful.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{003_IOU_example.pdf}
\caption{From left to right: (1) The box of the person on the image, (2) a situation where the IoU is 0, since the boxes do not intercept each other, and (3) a situation where IoU is 0.5. Image obtained from the COCO dataset\cite{fleet_microsoft_2014}.}
\label{fig:003_IOU_example}
\end{figure}
\begin{equation} \label{eq:iou}
Intersection\ over\ Union = \frac{Area\ of\ Intersection}{Area\ of\ Union}
\end{equation}
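Equation~\eqref{eq:iou} translates directly into code; the sketch below
assumes boxes given as corner coordinates $[x_1, y_1, x_2, y_2]$.
\begin{verbatim}
# Sketch: Intersection over Union (IoU) for two boxes given as
# [x1, y1, x2, y2], with (x1, y1) the top-left and (x2, y2) the
# bottom-right corner.
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 10, 10], [5, 0, 15, 10]))   # 0.333...: one third overlap
\end{verbatim}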
The metric used to avoid the imbalanced data problem is the Average Precision (AP) \cite{su_relationship_2015}, presented in Equation \ref{eq:avgprecision}, where $p(r)$ is the precision as a function of recall. AP corresponds to the area below the Precision-Recall curve (see Figure \ref{fig:004_average_precision_example}), averaged across all classes, computed at different confidence thresholds.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{004_average_precision_example.pdf}
\caption{Visualization of the AP metric, estimated by the area below the Precision-Recall curve at different confidence thresholds.}
\label{fig:004_average_precision_example}
\end{figure}
\begin{equation} \label{eq:avgprecision}
AP = \int_{0}^{1} p(r) dr
\end{equation}
Since we can arbitrarily define the IoU threshold for considering a successful prediction, the consensus is to calculate the mean of the AP across multiple IoU values, known as the Mean Average Precision (mAP). For example, the COCO dataset \cite{fleet_microsoft_2014} suggests averaging the AP metric for all IoU thresholds between 0.5 and 0.95 with step 0.05. Older datasets, such as PASCAL VOC \cite{everingham_pascal_2010}, suggested using only AP with IoU=0.5.
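For a single class, the AP computation reduces to sorting the predictions by
confidence, accumulating true and false positives, and integrating the
resulting Precision-Recall curve; the sketch below uses a plain trapezoidal
integration, whereas COCO additionally interpolates the curve at 101 recall
points.
\begin{verbatim}
# Sketch: Average Precision for one class.  `matched` holds 1 when the
# prediction was matched to a ground-truth box at the chosen IoU
# threshold, 0 otherwise; predictions are assumed sorted by descending
# confidence.  COCO-style evaluation additionally interpolates the
# Precision-Recall curve at 101 recall points.
import numpy as np

def average_precision(matched, n_ground_truth):
    matched = np.asarray(matched, dtype=float)
    tp = np.cumsum(matched)
    fp = np.cumsum(1.0 - matched)
    recall = tp / n_ground_truth
    precision = tp / (tp + fp)
    return np.trapz(precision, recall)

# Five predictions (already sorted by confidence) against 3 GT boxes.
print(round(average_precision([1, 0, 1, 1, 0], n_ground_truth=3), 3))
\end{verbatim}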
For 2D Pose Estimation, we can still use mAP over the body parts boxes. However, our goal in 2D Pose Estimation is not to find the individual boxes that make a person. Therefore, there are better metrics that allow us to verify if we are improving in the right problem, predicting the keypoint location.
The most used metric is the Percentage of Correct Keypoints (PCK). Like in Object Detection, we need to define how we evaluate if a keypoint is correct. For this, we consider a keypoint correct if it meets a predefined error tolerance. For example, the most used error tolerance is half of the person's head size, referred to as PCKh@0.5 metric.
In 3D Pose Estimation, using mAP is not possible: both boxes are now 3D, and manually labeling 3D boxes would be a challenging task. PCK metrics are still usable. However, since in 3D Pose Estimation, we must predict every keypoint (and not just the visible), we have access to more informative metrics, like Mean Per Joint Position Error (MPJPE) and Mean Per Joint Angular Error (MPJAE).
MPJPE is the average distance between each predicted point and actual keypoint across all keypoints, as seen in Equation \ref{eq:mpjpe} \cite{ionescu_human36m_2014}, where N is the number of keypoints predicted, $k_p$ is the predicted keypoint and $k_{gt}$ is the ground truth keypoint. MPJAE follows the same formula but uses the distance between the predicted angle and the actual keypoint angle.
\begin{equation} \label{eq:mpjpe}
MPJPE = \frac{1}{N} \sum_{1}^{N}\left \| k_p - k_{gt} \right \|
\end{equation}
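Both pose metrics are straightforward to compute once predicted and
ground-truth keypoints are available, as the following sketch illustrates for
MPJPE and PCKh.
\begin{verbatim}
# Sketch: MPJPE and PCKh for a single pose.  `pred` and `gt` are arrays
# of shape (N, 3) for 3D keypoints (or (N, 2) in the 2D case).
import numpy as np

def mpjpe(pred, gt):
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def pckh(pred, gt, head_size, alpha=0.5):
    dists = np.linalg.norm(pred - gt, axis=-1)
    return np.mean(dists <= alpha * head_size)    # PCKh@alpha

gt   = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 0.0]])
pred = gt + np.array([[0.05, 0.0, 0.0], [0.0, 0.2, 0.0], [0.4, 0.0, 0.0]])
print(mpjpe(pred, gt), pckh(pred, gt, head_size=0.5))
\end{verbatim}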
With the metrics defined, we can now better understand the models' limitations and consider the measuring error in our analysis. Similar to errors in sensors, models have measurement errors, for which we need to account.
\subsection{Image Recognition}
In Image Recognition, our goal is to detect which object is in the image, for example, whether an image contains a person or not. In this approach, we do not require any localization of the detected object. Thus, this is a relatively limited approach, but it is relevant since many Object Detection frameworks leverage the technology behind Image Recognition.
\subsubsection{Non-Deep Learning Approaches}
CNNs provide a crucial building block for Computer Vision models. With CNNs, we can extract features from the image, which helps the models distinguish between different images. However, CNNs have only become widespread since 2012. Before this, most Computer Vision techniques focused on correctly defining the features from the images so that Machine Learning models could distinguish the different classes, which formed the backbone of Computer Vision models. This section will present the three most widely used methods for feature extraction from images that do not rely on Deep Learning.
David Lowe \cite{lowe_object_1999, lowe_distinctive_2004} proposed a feature generation method called Scale-Invariant Feature Transform (SIFT). SIFT detects interest points summarizing the local structures in the image and builds a database with histograms that describe the gradients near the interest points (see Figure \ref{fig:005_non_deep_example}).
This procedure is performed across different image scales, which increases the robustness of image scaling. Also, rotation/translation mechanisms for the gradients introduce robustness to changes of those two components. The method is also robust to illumination changes since we ignore the magnitude of the gradients, using only their orientation.
Viola and Jones \cite{viola_rapid_2001, viola_robust_2004} proposed a facial recognition framework based on generating Haar-like features that aim to capture patterns that are common across a specific type of object. Haar features are rectangular filters used across an image to detect regions where pixel intensities match the layout of the Haar feature. The method is efficient and is still used in facial recognition today but has limited use cases for other Image Recognition tasks. It does not generalize properly to other tasks because it suffers from the influence of rotation of the object and improper lighting.
Dalal and Triggs \cite{dalal_histograms_2005} applied Histogram of Oriented Gradients (HOG) to detect persons in images. HOG creates a grid of points and then calculates the gradient of the pixels in the neighborhood. Then, using only the orientation of the gradients, HOG finds the histogram of orientations that summarizes the whole image. The significant difference to David Lowe's technique \cite{lowe_object_1999, lowe_distinctive_2004} is that HOG tries to summarize the whole image rather than finding specific features in an image. Furthermore, computing HOG features is substantially more efficient than computing SIFT features.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{005_non_deep_example.pdf}
\caption{From left to right: (1) Interest points detected by the SIFT method, (2) examples of Haar features (in blue) that can characterize specific features in the binary image, and (3) orientation of the gradients in a image, which are then grouped in one histogram according to the HOG method.}
\label{fig:005_non_deep_example}
\end{figure}
The methods presented are limited in terms of applications nowadays. However, it is interesting to note that they all have the same goal: transform an RGB image into a data structure easily read by a Machine Learning algorithm, which is a procedure that CNNs automate.
\subsubsection{Deep Learning Approaches}
The beginning of CNNs is attributed to Fukushima \cite{fukushima_neocognitron_1980}, who proposed a mechanism allowing Neural Networks to learn features "unaffected by a shift in position", called Neocognitron. The Neocognitron was the first to use Convolutional Layers to construct a feature map by applying filters to the input space.
Later, LeCun et al. \cite{lecun_backpropagation_1989} provided a significant innovation: by applying the backpropagation algorithm, LeCun enabled the Neural Network to automatically learn each filter's weights in the Convolutional layer. Instead of requiring the user to define the filters that create the feature map, the process of defining the filters is also automated.
It was only much later that CNNs achieved state-of-the-art performance in image classification on the ImageNet dataset with AlexNet \cite{krizhevsky_imagenet_2017}. This model provided a leap in performance over previous state-of-the-art techniques that were dominant at the time (non-Deep Learning approaches), with 42\% error reduction.
AlexNet used Convolutional Layers to extract features and then classified the image using three fully connected layers. The most relevant advances of AlexNet were not the network architecture but rather the computational tricks that allowed training such a large model. First, it used ReLU as an activation function, which enables much faster training than other options. Second, the architecture enabled training the model in parallel using two graphics processing units instead of one, enabling faster training times and larger models.
In the following years, the models became increasingly complex but faced one problem: performance degraded when the number of layers increased past a certain threshold. This degradation went against the consensus that a network with N1+N2 layers should match the performance of a network with N1 layers simply by learning the identity function in the remaining N2 layers. However, as networks grew deeper, learning even the identity function became very difficult.
He et al. \cite{he_deep_2016} solve this problem by introducing skip connections that give an alternative path, guaranteeing that a deeper network can consistently perform at least as well as a smaller network. The resulting ResNets serve as a backbone for many methods in both Object Detection and Pose Estimation. Even by themselves, ResNets produced state-of-the-art performance in Image Recognition at the time of their publishing.
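A residual block takes only a few lines in a modern framework; the PyTorch sketch below shows the basic identity-shortcut variant, assuming the input and output share the same number of channels and spatial resolution.
\begin{verbatim}
# Sketch: a basic residual block with an identity shortcut (PyTorch).
# Assumes input and output share the same channel count and resolution;
# ResNets also use a strided, projection-shortcut variant when the
# shape changes between blocks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)        # the skip connection

x = torch.randn(1, 64, 56, 56)        # batch, channels, height, width
print(ResidualBlock(64)(x).shape)     # torch.Size([1, 64, 56, 56])
\end{verbatim}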
The approaches that we described in this section until now fail to address the localization aspect. They focus on Image Recognition rather than Object Detection. The state-of-the-art techniques that we will present next all address both the classification and localization problem.
\subsection{Object Detection}
Object Detection adds a step to Image Recognition: locating the coordinates of an object in the image. Szegedy et al. \cite{szegedy_deep_2013} framed Object Detection as the problem of predicting object's boxes by regression but achieved limited success with this approach. The most successful approach is to find candidate regions/boxes for objects and then treat the region as a single image.
Girshick et al. \cite{girshick_rich_2014} introduced this latter approach with a method called Region-based CNNs (R-CNN). The method achieved an AlexNet-like leap in performance, improving the mean average precision on PASCAL VOC by more than 30\% relative to the previous best approaches. R-CNN follows three steps: (1) recommending candidate boxes/regions, using Selective Search \cite{uijlings_selective_2013}, (2) computing features using CNNs, and (3) classifying each region using Support Vector Machines (SVM). Selective Search generates many candidate regions and then greedily combines similar regions into larger ones.
Iteratively, the R-CNN architecture received many improvements. Some improvements are related to improving the speed, like Fast R-CNN \cite{girshick_fast_2015} which computes the feature map for the whole image instead of computing for each candidate region, accelerating training and testing. Then, the Faster R-CNN \cite{ren_faster_2017} introduced a Region Proposal Network that accelerates the process of providing region proposals. Instead of using Selective Search, Ren et al. \cite{ren_faster_2017} uses a Deep Learning model that predicts the region location.
Other proposed solutions focused on improving performance, like Retina-Net \cite{lin_focal_2017} and Libra R-CNN \cite{pang_libra_2019} which change training to focus on hard-to-learn objects instead of focusing uniformly across objects, which addresses the imbalance issues in the data set.
The major drawback of R-CNN-based models is that they require different frameworks to detect the regions and classify them. Redmon et al. \cite{redmon_you_2016} proposed using a single model to detect and classify regions. The proposed method, You Only Look Once (YOLO), draws a grid of cells covering the whole image without intersecting each other (see Figure \ref{fig:006_object_concepts_example}). Each grid cell generates multiple candidate boxes, together with a prediction of whether each box contains an object. The final output combines the generated candidate boxes with the respective cell's prediction of which object is in the cell.
YOLO has a large development community, with numerous improvements made on the same proposed architecture. YOLOv2 to YOLOv5 \cite{redmon_yolo9000_2017,redmon_yolov3_2018,bochkovskiy_yolov4_2020}, YOLOR \cite{wang_you_2021}, PP-YOLO v1 and v2 \cite{long_pp-yolo_2020,huang_pp-yolov2_2021} provided improvements to the model accuracy and prediction speed. The Single Shot Detector (SSD) \cite{leibe_ssd_2016} follows a similar approach to YOLO. The most significant distinction between SSD and YOLO is that SSD uses predefined starting boxes, which the model adapts using offsets and scale factors during the prediction process.
Zhang et al. \cite{zhang_single-shot_2018} proposed combining the best of R-CNN multi-phased approaches (which lead to better accuracy) and single-network proposals (which are faster to train and predict). For this, they proposed a box-based single-shot network (similar to SSD), RefineDet, which contains a box refinement module. This box refinement module assumes default box styles (see Figure \ref{fig:006_object_concepts_example}). The approach removes irrelevant boxes and adjusts the location of the remaining ones.
While the single-network approaches are much quicker in prediction, in some use cases, we require even faster prediction times. For this reason, there are multiple proposed mobile models based on single-network approaches, which have the goal of scaling down networks to make faster predictions with limited resources.
EfficientNet \cite{tan_efficientnet_2019} defines a compound scaling method that jointly scales network size and input resolution in an efficient way. EfficientDet \cite{tan_efficientdet_2020} builds on EfficientNet with a mechanism for reusing features across different scales. The YOLO-related literature also contains several proposals for efficient models, commonly referred to as TinyYOLO.
Zhou et al. \cite{zhou_objects_2019} proposed CenterNet, which models each object as the center point of its box. CenterNet predicts the box center and then regresses additional properties from it, such as the box extent and object keypoints, for example joint locations on human bodies (see Figure \ref{fig:006_object_concepts_example}). One of the main advantages of this method is that it yields object keypoints alongside boxes, which broadens its range of use cases.
To summarize, there are two families of solutions: (1) R-CNN architectures, which are highly accurate but slow, and (2) single-network architectures, which are fast but have lower predictive power.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{006_object_concepts_example.pdf}
\caption{From left to right: (1) Grid-style boxes used by the original YOLO algorithm. (2) An example of the default box styles used by RefineDet. (3) Example of the keypoints used by the CenterNet framework.}
\label{fig:006_object_concepts_example}
\end{figure}
The most recent state of the art has shifted from CNNs towards Transformers, which are Attention-based Deep Learning architectures. While CNNs excel at discovering local features, Transformers excel at capturing global ones. They do so through Attention, a technique that amplifies the parts of the image that matter for the prediction while down-weighting the rest.
While the Transformer architecture was proposed by Vaswani et al. \cite{vaswani_attention_2017} in 2017, it was initially difficult to apply in Computer Vision. The reason is that these techniques compute the importance of every pair of input elements, so applying them directly to images with millions of pixels would be prohibitively expensive. To make matters worse, the number of operations grows not linearly but quadratically with the number of pixels. Dosovitskiy et al. \cite{dosovitskiy_image_2021} proposed splitting images into fixed-size patches, which made Transformers efficient, requiring fewer computational resources to train than comparable CNNs.
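The effect of patching on the cost of Attention can be illustrated with a quick back-of-the-envelope calculation, using the standard $224 \times 224$ input resolution and $16 \times 16$ patches as assumed values:
\begin{verbatim}
# Attention cost grows quadratically with the number of tokens.
H, W, P = 224, 224, 16              # image size and (assumed) patch size

pixel_tokens = H * W                # 50176 tokens if every pixel were a token
patch_tokens = (H // P) * (W // P)  # 196 tokens after 16x16 patching

print(pixel_tokens, patch_tokens)
print(pixel_tokens**2 // patch_tokens**2)  # ~65536x fewer pairwise interactions
\end{verbatim}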
Transformer-based models perform better when trained on very large datasets because they keep improving past the point where CNNs plateau. On the other hand, CNNs are more data-efficient, reaching better performance levels when working with limited (but still large) datasets \cite{dosovitskiy_image_2021}.
Many proposals aim to solve these problems with Transformer-based architectures. Liu et al. \cite{liu_swin_2021} proposed using shifted windows and patch merging to increase the effective resolution of the technique, which was previously limited by the patch size (see Figure \ref{fig:007_attention_concepts_example}). This change allows finer-grained predictions, which Object Detection requires. Computing Attention only within local windows also reduces the computational complexity from quadratic to linear.
Efficient patch usage is one of the most active research directions. Yang et al. \cite{yang_focal_2021} propose using dynamic patch sizes that depend on the distance to the patch for which Attention is being computed. This way, neighboring patches are smaller than distant ones, decreasing the number of patches to process (see Figure \ref{fig:007_attention_concepts_example}) and accelerating the procedure while maintaining performance.
Transformers have also allowed simplified architectures to perform very well in Object Detection. Xu et al. \cite{xu_end--end_2021} treat Object Detection as a direct prediction problem by introducing a ``no object'' class, which allows the network to predict a fixed number of objects for each image.
While some studies discard CNNs entirely in favor of Transformers, other works use Attention to enhance existing CNN architectures. For example, Dai et al. \cite{dai_dynamic_2021} explore introducing Attention in the later stages of the models (the prediction phase), allowing improvements on top of existing models' backbones (i.e., the part of a Deep Learning model responsible for generating features).
Attention-based methods have seen significant advances in the last year, and their performance already surpasses methods using only CNNs. The most effective methods combine both Attention and CNNs to produce the best results in terms of accuracy.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{007_attention_concepts_example.pdf}
\caption{From left to right: (1) The original patch size allowed only very low resolutions, lacking precision when detecting objects. For example, the box corresponding to the tennis ball would be the dark box. (2) Using a shifting window increases the resolution of the method by shifting the patches over iterations. By combining two patches, we can increase the resolution of the method substantially. (3) To reduce the cost of using Attention, we can use small patches in the close neighborhood and larger patches for distant points in the image.}
\label{fig:007_attention_concepts_example}
\end{figure}
One last notable work is from Liang et al. \cite{liang_cbnetv2_2021}, who propose a framework for combining multiple model backbones. This ensembling approach increases accuracy over using a single isolated model, even when only two backbones are combined.
Pretrained versions of most models presented in this section are available in libraries such as TensorFlow Hub\footnotemark[2] and DarkNet\footnotemark[3], or are published by their authors in public repositories. These are plug-and-play versions of the models that are easy to use across multiple applications. Furthermore, benchmarks of most of the approaches are available on the Papers with Code platform.
\footnotetext[2]{tfhub.dev/tensorflow/collections/object\_detection}
\footnotetext[3]{pjreddie.com/darknet}
\subsection{Pose Estimation}
Pose (or Gait) Estimation (or Extraction) methods aim to detect the key body parts/joints that define the human body in an image. As a result, these methods give us the coordinates of specific human body parts in an image. The methods in this category require training images with labeled keypoints, which are significantly harder to annotate than Object Detection data. We can see an example of the keypoints in the COCO dataset in Figure \ref{fig:008_keypoint_example}.
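For reference, the COCO keypoint set referred to above consists of 17 named body joints, which most 2D pose estimators output as $(x, y, \text{confidence})$ triplets:
\begin{verbatim}
# The 17 body keypoints annotated in the COCO dataset.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]
assert len(COCO_KEYPOINTS) == 17
\end{verbatim}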
In this section, we cover Pose Estimation methods using Deep Learning. Since the field is evolving towards multi-person pose estimators (as opposed to single-person pose estimators), we do not use this distinction to categorize the approaches. Instead, we categorize them according to the two main strategies: (1) top-down, where we detect the box of each person and then detect the keypoints within the box, and (2) bottom-up, where we detect body parts in an image and then aggregate them to form a person.
Both approaches leverage similar methods to the ones described in the Object Detection section to detect the box of a person or body parts. Furthermore, CNNs are still the basis of the network's layers for feature generation, with some improvements in specific use cases \cite{tompson_joint_2014, wei_convolutional_2016,leibe_stacked_2016,sun_deep_2019}.
Top-down approaches allow for the straightforward reuse of single-person pose estimators together with Object Detection methods that detect the boxes of persons in an image. This approach has a clear drawback: when two people overlap in the image, it struggles to determine which body parts belong to which person.
Regarding this approach, we highlight the following contributions. Papandreou et al. \cite{papandreou_towards_2017} presented an approach that uses Faster R-CNN for box detection and then a ResNet backbone to build the keypoint estimation model inside the box. Chen et al. \cite{chen_cascaded_2018} combined two networks: one that predicts keypoints with high confidence, and another that specializes in harder keypoints, such as occluded and out-of-frame ones. Carreira et al. \cite{carreira_human_2016} introduced a model that, instead of predicting the keypoints directly, outputs a suggested correction for the current keypoints. This approach iteratively refines an initial standard pose towards the actual pose using the model's feedback, with the number of iterations being user-defined.
Bottom-up approaches focus on correctly identifying body parts in an image and then grouping them. Identifying a body part in an image uses the same strategies as in Object Detection. The main challenge of bottom-up approaches is grouping the body parts correctly, which varies in difficulty: grouping the parts of a single body is easy, but with multiple people the problem becomes substantially harder, especially if they overlap in the image.
DeepCut \cite{pishchulin_deepcut_2016} and DeeperCut \cite{leibe_deepercut_2016} use pairwise terms to determine which body parts are likely to belong to the same person. OpenPose \cite{cao_realtime_2017} uses part affinity fields to merge different keypoints; part affinity fields are vector fields that indicate the orientation of a body part, which allows for a more accurate joining procedure. OpenPose \cite{cao_openpose_2021} also extends the number of keypoints detected: while other approaches only detect the most prominent joints of the human body, it can additionally detect hand and facial keypoints.
The approaches presented above use similar keypoint structures, usually following the labels in the COCO dataset. DensePose \cite{guler_densepose_2018}, however, proposed a different take on Pose Estimation: it uses an annotation pipeline that allows images to be annotated at the surface level. This enables algorithms to estimate poses in a different, enriched format (see Figure \ref{fig:008_keypoint_example}).
Overall, bottom-up approaches are more successful in 2D Pose Estimation. When both Object Detection and Pose Estimation are required, some methods provide both outputs in a single model, leading to quicker prediction times than running each method individually, such as the CenterNet approach described in the previous section.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{008_keypoint_example.pdf}
\caption{From left to right: (1) Example of the keypoints in the COCO dataset, as obtained by OpenPose. (2) Example of the output of a DensePose model. (3) An example of a 3D mesh as described by SMPL.}
\label{fig:008_keypoint_example}
\end{figure}
\subsection{3D Pose Estimation}
Although 2D Pose Estimation is helpful for many use cases, it is limiting in others. For example, 2D Pose Estimation may be used in gyms to track posture in specific movements, but with the restriction of analyzing only one point of view. We can use it to measure the depth of a squat, or whether the quadriceps or the glutes activate first, by checking the Pose Estimation of a side view, but we will miss potential lateral imbalances that would be visible from the front or back views. Furthermore, if we try to film from an angle that aims to capture both, we will lose accuracy compared to an adequate filming angle.
3D Pose Estimation can fix these issues by obtaining 3D data from a single view. With 3D Pose Estimation, we can reduce the loss in accuracy caused by choosing a filming angle that tries to capture all details of the movement at once. Adding this third dimension introduces much more information into the data obtained, resulting in better insights from the analysis.
Currently, 3D Pose Estimation methods focus on estimating the 3D meshes of the people present in the image. Bogo et al. \cite{leibe_keep_2016} introduced a methodology that uses SMPL \cite{loper_smpl_2015}, a model learned from data that represents human body shape with pose dependencies. Bogo et al. inferred 3D coordinates from 2D coordinates (extracted using DeepCut) by fitting the SMPL model to the 2D data. The authors also argue that 2D coordinates contain much of the information required to extract 3D poses, which might indicate that, for some applications, acquiring 2D data is a sufficient and less complex task. We can see an example of an SMPL mesh in Figure \ref{fig:008_keypoint_example}.
To the best of our knowledge, nearly every relevant method published in the literature relies on mesh prediction, specifically, the meshes described in SMPL \cite{loper_smpl_2015}. SMPL meshes contain information about 3D pose keypoints and body shape, allowing us to detect differences in height and body shape between two persons.
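As a rough orientation to the quantities involved, SMPL is commonly parameterized by 24 joint rotations (72 pose parameters in axis-angle form) and 10 shape coefficients, from which the model generates a mesh of 6890 vertices; the sketch below only illustrates these dimensionalities:
\begin{verbatim}
import numpy as np

# Dimensionalities of the SMPL body model as commonly used.
NUM_JOINTS, NUM_SHAPE, NUM_VERTICES = 24, 10, 6890

pose  = np.zeros(NUM_JOINTS * 3)   # 72 pose parameters (3D rotation per joint)
shape = np.zeros(NUM_SHAPE)        # 10 shape parameters (height, build, ...)

# A 3D estimator such as ROMP regresses (pose, shape) per person; the SMPL
# function then maps them to a mesh with 6890 vertices and 24 3D joints.
print(pose.shape, shape.shape, NUM_VERTICES)
\end{verbatim}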
Sun et al. \cite{sun_monocular_2021} predict, for every pixel in the image, whether it corresponds to a person's center, together with the mesh of a person centered at that pixel. The method then searches for clusters in these predictions and, if a cluster is found, its center provides the predicted mesh.
One significant known source of error is the bias introduced into the models by the training data. Guan et al. \cite{guan_out--domain_2021} address some of these issues, namely those related to camera parameters, backgrounds, and occlusions, and achieve top performance on the 3DPW benchmark, which indicates how important addressing them is.
While state-of-the-art 2D Pose Estimation did not use Transformers, some 3D pose estimators already take advantage of them. Kocabas et al. \cite{kocabas_pare_2021} increase robustness to partial occlusion using Attention: for each occlusion, the network determines which detected body parts it should use to infer the occluded part. For example, if the elbow is occluded, it checks the orientation of the shoulder to estimate where the elbow is.
Hu et al. \cite{hu_conditional_2021} is one of the few approaches that focus on predicting only the keypoints, in the form of a directed graph. This specialization allowed the algorithm to perform very well, making it the best performer on the Human3.6M single-view benchmark at the time of writing.
There are several proposals for 3D Pose Estimation from video. By using multiple video frames as input, we can give temporal information about the pose to the models, thus enhancing the performance \cite{pfister_flowing_2015,girdhar_detect-and-track_2018,kocabas_vibe_2020,li_exploiting_2021}. As a trade-off, these algorithms demand more resources since their inputs are larger.
Another possibility for enhancing performance is to use multi-view models. This approach adds computational and hardware complexity since it requires at least two points of view. Iskakov et al. \cite{iskakov_learnable_2019} combine 2D keypoints from multiple views into a single 3D representation using algebraic triangulation. Reddy et al. \cite{reddy_tessetrack_2021} combine the temporal and multi-view approaches, while He et al. \cite{he_epipolar_2020} combine multi-view inputs with Transformers, which can perform the Attention operation across multiple views.
\subsection{Other Computer Vision Methods}
Finally, we describe additional methods that can complement those presented above.
The first is transfer learning \cite{zhuang_comprehensive_2021}, a methodology for refining models, like the ones described in previous sections, to perform a different task. For example, many of the proposed algorithms are trained on in-the-wild images; if the goal is to use these methods in sports, one can further train them on a sports-specific dataset, enhancing their performance for that task.
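As an illustration, a sketch of this idea using the torchvision detection API is shown below; the three-class sports dataset (background, player, ball) is a hypothetical example, and older torchvision versions use \texttt{pretrained=True} instead of the \texttt{weights} argument.
\begin{verbatim}
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a COCO-pretrained Faster R-CNN and replace its classification
# head so it can be fine-tuned on a (hypothetical) sports dataset.
num_classes = 3   # background, player, ball
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Fine-tune with a small learning rate so the pretrained backbone features
# are adapted to the sports domain rather than overwritten.
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.005, momentum=0.9)
\end{verbatim}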
The second method is image super-resolution \cite{wang_deep_2021}, which aims to increase the resolution of an image by running it through a model that upscales it. This method can be useful to correct deficiencies in the capture procedure, such as having to film from a large distance or sacrificing image quality for a higher frame rate.
\section{Experimental Setup} \label{sec:expsetup}
The main goal of our experiments is to present a methodology that can predict the speed of a shot using only the pose variation of a player. If we obtain a high correlation between the predicted and the actual shot speed, we can create a system that analyzes shots from multiple players and learns which technique best produces high ball speed, depending on a player's physical and technical traits. This methodology can be extended to many other use cases.
With this system, we can give players feedback on their technical movement, offering recommendations that take into account an extensive history of that movement. Recommendations can concern, for example, certain timings in the movement, the positioning of body parts not directly involved in the kick, or simply the best run-up strategy for the player.
In this work, we collect 145 soccer shots from a single person. The shots are performed 6 meters from the goal and captured with a 90 frames-per-second camera. To extract data from the video, we use two algorithms: (1) CenterNet HourGlass104 1024x1024 with keypoints\footnotemark[4] and (2) ROMP\footnotemark[5]. Both models provide state-of-the-art performance for their tasks and are publicly available. Processing each shot takes, on average, 650 seconds with (1) and 43 seconds with (2). Note that (2) runs on a GPU while (1) is limited to the CPU, which explains the difference in prediction time.
Code and processed data are available at GitHub\footnotemark[6].
\footnotetext[4]{tfhub.dev/tensorflow/centernet/hourglass\_1024x1024\_kpts/1}
\footnotetext[5]{github.com/Arthur151/ROMP}
\footnotetext[6]{github.com/nvsclub/ShotSpeedEstimation}
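For reference, the CenterNet keypoint model above can be queried through TensorFlow Hub roughly as sketched below; the frame filename is a hypothetical placeholder, and the output dictionary keys follow the TF2 Object Detection API convention, which may vary slightly across model versions.
\begin{verbatim}
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load(
    "https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024_kpts/1")

image = tf.io.decode_jpeg(tf.io.read_file("frame_0001.jpg"))  # placeholder frame
outputs = detector(image[tf.newaxis, ...])   # batch of one uint8 image

boxes     = outputs["detection_boxes"]       # normalized [y1, x1, y2, x2] per person
scores    = outputs["detection_scores"]      # detection confidences
keypoints = outputs["detection_keypoints"]   # COCO-style 2D keypoints per detection
\end{verbatim}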
\subsection{Calculating the Ground Truth}
To calculate the ground truth (the ball speed), we make two assumptions: (1) the ball hits the center of the left/right half of the goal (see Figure \ref{fig:010_experimental_setup}), and (2) the ball's travel time is measured from the player's first contact until just before the ball is occluded by the goalpost.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{010_experimental_setup.pdf}
\caption{A snapshot of the captured video and the capture procedure.}
\label{fig:010_experimental_setup}
\end{figure}
Under the first assumption, we can calculate the maximum ground-truth error as the relative difference between the longest admissible trajectory (which we assume ends at a corner of the goal) and the assumed average trajectory (ending at the center of a goal half). The goal measures 3 meters in width and 2 meters in height. This assumption introduces an error of roughly $\pm$6\% in our measurements. More importantly, the error is spread approximately evenly around zero, so the average error is close to 0\%. This error component could be reduced by increasing the distance to the goal.
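One plausible reconstruction of this bound, assuming the shot is taken 6 m in front of the goal's center at ground level, with the reference trajectory ending at the center of a goal half ($0.75$ m lateral offset, $1$ m high) and the longest trajectory ending at a top corner ($1.5$ m, $2$ m), is
\[
\frac{d_{\text{corner}}}{d_{\text{half-center}}} - 1
= \frac{\sqrt{6^2 + 1.5^2 + 2^2}}{\sqrt{6^2 + 0.75^2 + 1^2}} - 1
\approx \frac{6.50}{6.13} - 1 \approx 6\%.
\]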
We made the second assumption to automate the calculation of the time between the moment the player hits the ball and the moment the ball reaches the goal. We use the fact that the ball is occluded when passing between the posts to determine our endpoint. Furthermore, we can detect partial occlusions of the ball, ensuring that we detect the point when it reaches the goal accurately.
To calculate the time between the shot and the goal, we count the frames up to the point where the ball is no longer detected (occluded by the goalposts) or its area is substantially reduced (partially occluded by the goalposts) and divide the count by the frame rate (90 fps). For the average speed of the ball, we divide the average distance by the calculated time. Reducing the error caused by this component would require increasing the frame rate of the camera. The results of this procedure are presented in Figure \ref{fig:011_ball_trajectories}.
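A minimal sketch of this computation is shown below; the default trajectory length of $6.13$ m is an assumed value (roughly the distance from the shooting spot to the center of a goal half).
\begin{verbatim}
FPS = 90.0

def shot_speed(n_frames: int, distance_m: float = 6.13) -> float:
    """Average ball speed in m/s, given the frame count between first
    contact and the ball disappearing behind the goalpost."""
    travel_time_s = n_frames / FPS
    return distance_m / travel_time_s

print(shot_speed(25))   # a 25-frame shot corresponds to ~22 m/s (~79 km/h)
\end{verbatim}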
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{011_ball_trajectories.pdf}
\caption{Visualizing a sample of the ball's trajectories as captured by the cameras, along with the calculated speed of the ball.}
\label{fig:011_ball_trajectories}
\end{figure}
\subsection{Processing the Pose Data}
The ROMP model provides many data points that are useful for multiple applications. In our case, we use the SMPL 24-joint data, which provides the keypoint information we require.
Our first step is to transform the absolute joint positions into angular speeds. According to most of the literature, angular motion tends to be a better predictor of good biomechanical performance. Since absolute positions are noisy, we calculate the angular speed from the moving average of the absolute position with a window of 5 frames, which smooths our data points.
However, the translation from absolute positions to angular velocities introduces quite a lot of noise. Therefore, we smooth out the angular time series again using the moving average with a window of 5. Figure \ref{fig:012_data_processing} illustrates the whole process.
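A sketch of this kind of processing is shown below, assuming \texttt{joints} holds the SMPL 24-joint 3D positions of one shot with shape \texttt{(T, 24, 3)} at 90 fps; the hip/knee/ankle indices are illustrative rather than the exact SMPL joint ordering.
\begin{verbatim}
import numpy as np

FPS, WINDOW = 90.0, 5

def moving_average(x, w=WINDOW):
    """Smooth a (T, ...) time series along the first axis."""
    kernel = np.ones(w) / w
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="valid"), 0, x)

def joint_angle(a, b, c):
    """Per-frame angle at joint b formed by the segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.sum(u * v, -1) / (np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def knee_angular_speed(joints):
    smoothed = moving_average(joints)                    # smooth absolute positions
    angle = joint_angle(smoothed[:, 2], smoothed[:, 5], smoothed[:, 8])
    speed = np.diff(angle) * FPS                         # rad/s between frames
    return moving_average(speed[:, None])[:, 0]          # smooth again, window 5
\end{verbatim}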
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{012_data_processing.pdf}
\caption{Visualizing the data obtained from the 3D Pose Estimation. In this case, it was obtained by tracking the right knee during a shot at 90 fps. At the top, the absolute position and speed coordinates, shown light in the raw format and darker after smoothing the time series. At the bottom, the angular position and speed calculated from the absolute position and speed time series.}
\label{fig:012_data_processing}
\end{figure}
There is one last problem with our dataset: Machine Learning algorithms require inputs of a fixed size, and while we could use the time series as input, time series from different shots have different lengths. Our solution is to extract features from the time series. There are many approaches to extracting features from time series; to keep things simple, we use an automated approach. Specifically, we use TSFresh \cite{christ_time_2018}, which transforms a time series into hundreds of features. Using the filter function from the same library, which keeps only the features most relevant for our problem, we obtain 858 features for each shot.
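A sketch of this step with the tsfresh API is shown below; the tiny synthetic \texttt{long\_df} and \texttt{y} are stand-ins for the real 145-shot angular-speed series and measured ball speeds.
\begin{verbatim}
import numpy as np
import pandas as pd
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# Synthetic stand-in: one row per (shot, frame) in tsfresh's long format.
rng = np.random.default_rng(0)
long_df = pd.DataFrame({
    "shot_id": np.repeat(np.arange(20), 60),
    "frame":   np.tile(np.arange(60), 20),
    "knee_angular_speed": rng.normal(size=20 * 60),
})
y = pd.Series(rng.normal(size=20), index=np.arange(20))  # measured ball speeds

X = extract_features(long_df, column_id="shot_id", column_sort="frame")
impute(X)                            # clean NaNs/infs from some feature calculators
X_relevant = select_features(X, y)   # keep features statistically related to y
\end{verbatim}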
To compare the results obtained with 2D and 3D pose data, we run the same processing pipeline on the 2D Pose Estimation data obtained from the CenterNet model, which yields only 273 relevant features.
\subsection{Predicting the Speed of the Ball}
To create the prediction model for ball speed, we test 4 algorithms: (1) Linear Regression, (2) K-Nearest Neighbors, (3) Gradient Boosting, and (4) Random Forest. We chose these algorithms because they provide state-of-the-art results for many other problems, are easy to use, and require little parameter tuning to produce good results. We opted not to use Deep Learning due to the low ratio of instances to features (roughly 1 instance per 5 features), which leads to poor performance.
To guarantee that the models do not simply memorize the data, we use K-Fold Cross-Validation, which divides the data into train and test sets. By training the model on the train set and holding out the test set, we ensure that the models predict instances they have not seen before. The same procedure is followed for both the 2D and the 3D pose data.
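A sketch of this evaluation protocol with scikit-learn is shown below; the number of folds (5) is an assumption, and \texttt{X\_relevant} and \texttt{y} are the tsfresh features and measured shot speeds from the previous step.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

models = {
    "LinearRegression": LinearRegression(),
    "KNN": KNeighborsRegressor(),
    "GradientBoosting": GradientBoostingRegressor(),
    "RandomForest": RandomForestRegressor(),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    pred = cross_val_predict(model, X_relevant, y, cv=cv)
    corr = np.corrcoef(pred, y)[0, 1]   # correlation of predicted vs. true speed
    print(f"{name}: r = {corr:.2f}")
\end{verbatim}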
\section{Results and Discussion} \label{sec:resultsdiscussion}
Figure \ref{fig:013_results} presents the performance of the models built across multiple dimensions.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{013_results.pdf}
\caption{Performance of the models using 3D and 2D pose data, with the Random Forest algorithm achieving 67\% correlation in 3D and just 53\% in 2D. At the bottom, we present the body parts the Random Forest algorithm focused on to obtain the best performance in the 3D problem.}
\label{fig:013_results}
\end{figure}
Figure \ref{fig:013_results} shows that the Random Forest model has the best performance among the algorithms tested. Although the r-squared is relatively modest (0.44), it achieves a correlation of 67\% with our target variable.
Obtaining a 67\% correlation is very good, especially considering the low number of shots available, the fact that we did not perform any hyperparameter tuning, and that we rely on automatic feature engineering. We believe that addressing these issues will substantially improve the quality of the prediction.
The main goal of this experiment is to show that there are many possibilities in combining Computer Vision to acquire data with Machine Learning to model it. As these fields mature, it is very important to start working on problems that were previously very hard to solve but are now becoming accessible.
A high-correlation model has tremendous applications, such as building a player recommendation system tailored to a specific technical movement. Furthermore, for elite players, the ``save point'' created when gathering this data can help them recover performance in specific technical movements during injury recovery or loss of form. Injury prevention can also benefit from these methods, since they can detect slight changes in a player's technique that indicate a higher probability of injury.
With only 145 shots, we achieved a 67\% correlation, which is impressive considering that the methodology can be scaled to capture thousands of shots from multiple individuals. This tremendous amount of data will broaden the range of insights we can gather from these kinds of experiments.
Even though we believe this research path is ready for the next step, we acknowledge some limitations. For example, the real-time performance of these methods is not yet up to the required standards, with errors that make real-time applications inadvisable. Another limitation is that we still require a relatively controlled setup. Since many sports, particularly team sports, require in-the-wild detection, we need to ensure that we acquire data of the highest possible quality, with correct camera angles and positions to avoid occlusions. Having multiple people in the camera range also makes it harder to isolate the data of the correct athlete, since most methods detect all people in an image.
Another point to note is that models using 3D data far outperform those using 2D data. Even though 3D estimates are less precise than 2D ones, the extra dimension improves the results substantially. Therefore, if the goal is to obtain the best possible results, 3D data should be preferred.
Advances in using this kind of data in biomechanics research can also incentivize the creation of a sports-related dataset. A sports-related 3D pose dataset would enable fine-tuning of the proposed models, further increasing the accuracy of the Pose Estimation process.
We have shown a path that can address multiple problems in biomechanics, which can now start taking advantage of big data. Single-athlete sports, like the high jump or discus throwing, can use these methods to automatically analyze both biomechanics (jumping movement or rotation speed) and actual performance information (such as the jump or release angle). In other sports, like soccer or boxing, we can use these techniques to improve set-piece taking or to measure training performance in specific drills.
\section{Conclusion}
We reviewed the literature on Computer Vision methods in a simplified manner to introduce the concepts used in the field and to encourage their use in sports-related research. Specifically, for biomechanical analysis, we think that the existing techniques can revolutionize the way we conduct research.
This article's primary conclusion is that these methods can already produce gigabyte-scale datasets, which are orders of magnitude larger than current ones. Moreover, with increasing data volume, the associations found in the data will have increased relevance, leading to more and better insights into how athletes perform and how to improve performance.
With applications spanning from sports to ergonomics, or even the metaverse, these methods will play a prominent role in the near future.
\subsection{Future Work}
These solutions open many research paths. The next step is to improve the models developed in this article to achieve a higher correlation. Furthermore, we want to introduce interpretable methods and understand what insights and recommendations we can derive from this kind of data.
\section*{Acknowledgement(s)}
This work is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project UIDB/50014/2020.
The second author would like to thank Futebol Clube Porto – Futebol SAD for the funding.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the cited entity.
\bibliographystyle{elsarticle-num}
\section{EMQA Dataset: Questions Grounded in Scene Tours}
\label{sec:dataset-questions}
We now describe the dataset for the task. Recall that the task involves taking the assistant on an exploratory tour of an indoor environment, and then asking multiple, post-hoc questions about the guided tour. The EMQA model must localize answers to the questions in the scene.
\xhdr{Guided Exploration Tours}
\label{subsec:tours}
We instantiate our task in the Habitat \cite{habitat19iccv} simulator using Matterport3D \cite{Matterport3D} (MP3D) scans (reconstructed 3D meshes of 90 indoor environments). For any given indoor scene, we use the manually recorded exploration paths from \cite{cartillier2020}. These multi-room navigation trajectories were optimized for coverage, comprise egocentric RGB-D maps and ground-truth pose, and are, on average, 2500 steps in length.
\xhdr{Questions Grounded in Tours}
\label{subsec:dataset-questions}
For a given exploration path through an indoor scene, we now describe our process of generating grounded questions about objects witnessed by the egocentric assistant. Following \cite{cartillier2020}, we restrict ourselves to $12$ commonly occurring object categories such as sofa, bed etc. (see Suppl. for the full list).
We start by generating the ground-truth top-down maps labelled with object instances for each scene via an orthographic projection of the semantically annotated MP3D mesh. These generated maps contain ground-truth information about the top-down layout of all objects in all parts of a scene. Each cell in these maps is of a fixed spatial resolution of $2$cm $\times 2$cm and therefore, the spatial dimensions of the maps depend on the size of the indoor scenes. Although the exploration tours have been optimized for coverage, they do not cover all observable parts of all scenes (leaving out some hard-to-reach, niche areas of environments during the manually guided exploration process). Therefore, to ensure the relevance of questions in our dataset, next, we compute the subset of ``observed'' locations from within the ground-truth semantic map of the entire scene.
\begin{figure*}
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, scale=0.20]{figures/dataset.png}
\caption{A schematic overview of our dataset generation process. The ground-truth top-down maps are created from the orthographic projection of the Matterport \cite{Matterport3D} mesh with semantic labels, via egocentric RGB + Depth observations and pose information, and filtered out by the per-step observed masks.}
\label{fig:dataset}
\end{figure*}
In order to do that, we project the depth maps from each step of the tour onto the top-down scene map, giving us a per-time-step mask over locally observed locations (Fig. \ref{fig:dataset}, ``Inputs''). Summing across all time steps and overlaying the resulting mask on the ground-truth semantic map gives us the subset of objects and their locations observed during the tour. This serves as our source for generating questions.
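A sketch of the per-step projection, under standard pinhole-camera assumptions (the intrinsics, the camera-to-world pose, and the choice of ground-plane axes below are placeholders rather than the exact Habitat conventions), is shown here.
\begin{verbatim}
import numpy as np

CELL = 0.02   # 2cm x 2cm map cells, as in the ground-truth maps

def observed_mask(depth, fx, fy, cx, cy, T_world_cam, map_shape):
    """Mark the top-down cells observed in one (H, W) depth map (meters)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], -1).reshape(-1, 4)
    pts_world = (T_world_cam @ pts_cam.T).T[:, :3]

    cols = (pts_world[:, 0] / CELL).astype(int)   # assumed ground-plane axes
    rows = (pts_world[:, 2] / CELL).astype(int)
    mask = np.zeros(map_shape, dtype=bool)
    ok = (rows >= 0) & (rows < map_shape[0]) & (cols >= 0) & (cols < map_shape[1])
    mask[rows[ok], cols[ok]] = True
    return mask
\end{verbatim}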
From each such ``observed'' top-down semantic map (representing the tour), we generate templated questions that broadly belong to the following two categories: (1) \emph{spatial} localization questions and (2) \emph{spatio-temporal} localization questions. For the former, we generate a question of the form, ``where did you see the $<$X$>$?'', where $<$X$>$ is an object category (from the pre-selected vocabulary of 12 objects) with at most 5 instances witnessed during the tour. Along with every question, we also record (a) the top-down map pixels corresponding to all instances of the object category in question, which serve as the ground-truth answer, and (b) the tour time-steps when each answer instance was observed during the tour. This is done by computing the intersection of the per-step mask of observed locations (described above) with the object instance on the top-down map. If, at any time step, the observed mask covers more than a heuristically determined fraction (10\%) of the object instance in question, then we consider that instance to be ``seen'' at that particular time step of the tour (Fig. \ref{fig:dataset}, ``Dataset Sample'').
Similarly, for the spatio-temporal subset, we generate questions of the form ``where did you first/last see the $<$X$>$?'' for each object category $<$X$>$ with at least 2 instances sighted during the tour. To select the first (or last) viewed instance of the object, we query the metadata (described above) comprising the time steps at which each instance of the object was viewed during the tour. The instance with the earliest (latest) time step among the first (last) observed time steps of all instances becomes the first (last) viewed instance of the object. Note that, in certain situations, the first and last seen instance of an object might coincide (e.g., when the tour contains a ``loop''), presenting a challenging scenario for learning.
\xhdr{Generating ``short" tours for training}
As stated above, the exploration tours in the EMQA dataset are, on average, 2500 steps in length. To make the memory requirements and training speed tractable, we follow the protocol laid down in \cite{cartillier2020} and consider 20-step ``short'' tour segments randomly sampled from the originally curated ``full'' tours. The top-down maps encompassing the area covered by these 20-step tour subsets have fixed spatial dimensions of $250 \times 250$ cells across all ``short'' tours (as compared to the varying, environment-size-dependent dimensions of the ``full'' tours). We use the same question generation engine (described in the preceding sub-section) to generate questions corresponding to the ``short'' tour data splits. In addition to easing the speed and memory requirements during training, this also greatly increases the number of training samples to learn from. We would like to emphasize that our original task (and all results that follow) are defined on the full-scale tours.
Fig. \ref{fig:dataset} (``Dataset Statistics'') shows the distribution of the total number of scenes and questions across train, val and test splits for both ``short'' and ``full'' tours. We use mutually exclusive scenes for the train, val and test splits, evaluating generalization on never-before-seen environments. We also show a distribution of object categories across all questions in our dataset. For more qualitative examples
and statistics (analysis of the sizes and spatial distribution of objects in scenes), please refer to the Suppl. document.
\section{Baselines}
\label{sec:baselines}
In this section, we present details of a range of competitive baselines that we compare our approach against.
\xhdr{Language-only (\textcolor{blue}{LangOnly})}.
We evaluate baselines which answer questions from the language input alone for EMQA. Such baselines have been shown to demonstrate competitive performance for embodied question answering tasks \cite{das2017embodied, wijmans2019embodied}. Specifically, we drop the episodic scene memory features (while keeping the temporal features) from our inputs and train the question-answering model to predict the answer to the input question. Performance of this baseline is an indication of the spatial biases present in the dataset (are beds almost always present in the same corner of the map?).
\xhdr{Egocentric semantic segmentation (\textcolor{blue}{EgoSemSeg})}. This baseline serves as a naive, ``off-the-shelf'' solution for the EMQA task. We perform semantic segmentation on each of the egocentric RGB frames that comprise the scene tour (using the same pre-trained RedNet model as used in our approach).
Then, we extract the subset of the model's predictions corresponding to the object in question and that serves as the final prediction for this baseline.
\xhdr{Decoding top-down semantic labels (\textcolor{blue}{SMNetDecoder})}. In this baseline, we use the decoding section of the network that was used to pre-train our scene memory features and directly predict the top-down semantic segmentation of the floorplan as seen during the tour. We follow that with the same step as above: extracting the subset of top-down prediction pixels corresponding to the object in question to get the answer prediction for this baseline. Note that both the EgoSemSeg and SMNetDecoder baselines do not have access to the temporal features.
\begin{figure*}
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=1.00\textwidth]{figures/qual_rgbd_results_rgb.png}
\vspace{0.05in}
\caption{(a) Qualitative examples of the model's outputs in the top-down map output space from the test split of the Matterport \cite{Matterport3D} dataset. (b) The improvement in performance due to temporal features. (c) Qualitative results of zero-shot generalization of our model to real-world RGB-D dataset \cite{sturm2012benchmark}.}
\label{fig:qual-examples-temporal-feats-compare}
\end{figure*}
\xhdr{Buffer of egocentric features as scene memory}. For this family of baselines, we store a buffer of visual features extracted from the egocentric RGB-D frames of the tour. These features are extracted via the same pre-trained RedNet model used in our approach as well as the EgoSemSeg and SMNetDecoder baselines. We then condense the per-step features in the buffer using the following different techniques to give rise to specific instantiations of baselines from prior work: (a) averaging \cite{eslami2018neural} (\textbf{\textcolor{blue}{EgoBuffer-Avg}}), (b) GRU \cite{bakker2001reinforcement} (\textbf{\textcolor{blue}{EgoBuffer-GRU}}) and (c) question-conditioned, scaled, dot-product attention \cite{fang2019scene} (\textbf{\textcolor{blue}{EgoBuffer-Attn}}).
\begin{comment}
\begin{table*}[h]
\rowcolors{2}{gray!10}{white}
\small
\centering
\begin{tabular}{l c c c c c c c c}
\toprule
&& \multicolumn{3}{c}{\textbf{{\footnotesize MP3D test (``short'' tours)}}} && \multicolumn{3}{c}{\textbf{{\footnotesize MP3D test (``full'' tours)}}} \\
\cmidrule{3-5} \cmidrule{7-9}
\textbf{{\footnotesize Method}} && {\footnotesize IoU} & {\footnotesize Recall} & {\footnotesize Precision} && {\footnotesize IoU} & {\footnotesize Recall} & {\footnotesize Precision} \\
\cmidrule{1-9}
LangOnly && 15.09 {\tiny $\pm$ 0.22} & 22.36 {\tiny $\pm$ 0.26} & 29.56 {\tiny $\pm$ 0.41} && 4.75 {\tiny $\pm$ 0.14} & 14.41 {\tiny $\pm$ 0.62} & 6.98 {\tiny $\pm$ 0.18} \\
EgoSemSeg && 27.51 {\tiny $\pm$ 0.32} & 39.24 {\tiny $\pm$ 0.48} & \textbf{55.53} {\tiny $\pm$ 0.58} && 22.89 {\tiny $\pm$ 0.69} & 35.85 {\tiny $\pm$ 1.06} & 38.45 {\tiny $\pm$ 1.07} \\
SMNetDecoder && 30.76 {\tiny $\pm$ 0.37} & 45.06 {\tiny $\pm$ 0.46} & 54.51 {\tiny $\pm$ 0.37} && 26.92 {\tiny $\pm$ 1.12} & 43.86 {\tiny $\pm$ 1.25} & \textbf{40.95} {\tiny $\pm$ 1.26} \\
\hline
EgoBuffer-Avg \cite{eslami2018neural} && 2.46 {\tiny $\pm$ 0.09} & 4.53 {\tiny $\pm$ 0.17} & 8.10 {\tiny $\pm$ 0.26} && 0.07 {\tiny $\pm$ 0.01} & 0.37 {\tiny $\pm$ 0.06} & 0.24 {\tiny $\pm$ 0.03} \\
EgoBuffer-GRU \cite{bakker2001reinforcement} && 1.39 {\tiny $\pm$ 0.06} & 2.02 {\tiny $\pm$ 0.11} & 9.09 {\tiny $\pm$ 0.22} && 0.01 {\tiny $\pm$ 0.00} & 0.01 {\tiny $\pm$ 0.00} & 0.02 {\tiny $\pm$ 0.00} \\
EgoBuffer-Attn \cite{fang2019scene} && 2.18 {\tiny $\pm$ 0.07} & 3.67 {\tiny $\pm$ 0.12} & 10.08 {\tiny $\pm$ 0.24} && 0.12 {\tiny $\pm$ 0.02} & 0.16 {\tiny $\pm$ 0.02} & 0.85 {\tiny $\pm$ 0.15} \\
\hline
\textbf{Ours} && 36.09 {\tiny $\pm$ 0.53} & 48.03 {\tiny $\pm$ 0.62} & 53.04 {\tiny $\pm$ 0.51} && 27.42 {\tiny $\pm$ 0.64} & 60.81 {\tiny $\pm$ 1.28} & 31.94 {\tiny $\pm$ 0.60} \\
\textbf{Ours} (+temporal) && \textbf{37.72} {\tiny $\pm$ 0.45} & \textbf{49.88} {\tiny $\pm$ 0.55} & 53.48 {\tiny $\pm$ 0.64} && \textbf{29.11} {\tiny $\pm$ 0.44} & \textbf{62.27} {\tiny $\pm$ 1.13} & 33.39 {\tiny $\pm$ 0.51} \\
\bottomrule
\end{tabular}
\\ [8pt]
\caption{EMQA results for our proposed model and baselines in the ``top-down map'' output space.}
\label{tab:results-topdown}
\end{table*}
\end{comment}
Having generated the 1-D scene embedding vector using either of the above approaches, we use a network of de-convolutional layers to generate a top-down, 2D ``heatmap'' prediction of the agent's answer. For more details about the architecture of these baselines, please refer to the Supplementary material. Note that these baselines implicitly convert the scene embedding derived from egocentric observations into an allocentric top-down answer map (via the de-convolutional layers). On the contrary, our model has this transformation explicitly baked-in via geometrically consistent projections of egocentric features.
\section{Introduction}
\label{sec:intro}
Imagine wearing a pair of AI-powered, augmented reality (AR) glasses and walking around your house. Such smart-glasses will possess the ability to ``see'' and passively capture egocentric visual data from the same perspective as its wearer, organize the surrounding visual information into its memory, and use this encoded information to communicate with humans by answering questions such as, ``where did you last see my keys?''. In other words, such devices can act as our own personal egocentric AI assistants.
\begin{figure}
\centering
\includegraphics[width=0.475\textwidth]{figures/teaser_shrunk.png}
\caption{
(a) An egocentric AI assistant, assumed to be running on a pair of augmented reality glasses, is taken on a guided exploratory tour by virtue of its human wearer moving about inside an environment (example scene from \cite{straub2019replica}). (b) The agent passively records an egocentric stream of RGB-D maps, (c) builds an internal, episodic memory representation of the scene and (d) exploits this spatio-temporal memory representation to answer (multiple) post-hoc questions about the tour.
}
\label{fig:teaser}
\end{figure}
There has been a rich history of prior work in training navigational agents to answer questions grounded in indoor environments -- a task referred to as Embodied Question Answering (EQA) in literature \cite{das2017embodied,yu2019multi,gordon2018iqa,wijmans2019embodied}. However, egocentric AI assistants differ from EQA agents in several important ways. First, such systems passively observe a sequence of egocentric visual frames as a result of the human wearer's navigation, as opposed to taking actions in an environment. Second, AI systems for egocentric assistants would be required to build scene-specific memory representations that persist across different questions. This is in direct contrast with EQA where contemporary approaches have treated every question as a clean-slate navigation episode. EQA agents start navigating with no prior information about the scene (even if the current question is about a scene that they have witnessed before). And third, EQA agents respond to questions by uttering language token(s). Responding to a question such as, ``where did you last see my keys?'' with the answer — ``hallway'' isn't a very helpful response if there are multiple hallways in the house. In contrast, our setup presents a scenario wherein an egocentric assistant can potentially localize answers by grounding them within the environment tour.
Therefore, as a step towards realizing the goal of such egocentric AI assistants, we present a novel task wherein the AI assistant is taken on a guided tour of an indoor environment and then asked to localize its answers to post-hoc questions grounded in the environment tour (Fig. \ref{fig:teaser}). This pre-exploratory tour presents an opportunity to build an internal, episodic memory of the scene. Once constructed, the AI assistant can utilize this scene memory to answer multiple, follow-up questions about the tour. We call this task -- Episodic Memory Question Answering (EMQA).
\begin{comment}
\begin{figure*}
\centering
\includegraphics[clip, trim=0cm 1.5cm 0cm 4cm, width=1.00\textwidth]{figures/teaser_with_headings.pdf}
\caption{
(a) An egocentric AI assistant, assumed to be running on a pair of augmented reality glasses, is taken on a guided exploratory tour by virtue of its human wearer moving about inside the environment. (b) The agent passively records an egocentric stream of RGB-D maps along with associated GT pose. (c) It builds an internal, episodic memory representation of the scene (what objects were seen, where were they observed and when). (d) The AI assistant later exploits this memory representation to answer (multiple) questions about the scene.
}
\label{fig:teaser}
\end{figure*}
\end{comment}
More concretely, in the proposed EMQA task, the system receives a pre-recorded sequence of RGB-D images with the corresponding oracle pose information (the guided agent tour) as an input. It uses the input tour to construct a memory representation of the indoor scene. Then, it exploits the scene memory to ground answers to multiple text questions. The localization of answers can happen either within the tour's egocentric frame sequence or onto a top-down metric map (such as the house floorplan). The two output modalities are equivalent, given the agent pose.
The paper makes several important contributions. \textbf{First}, we introduce the task of Episodic Memory Question Answering. We generate a dataset of questions grounded in pre-recorded agent tours that are designed to probe the system's spatial understanding of the scene (``where did you see the cushion?'') as well as temporal reasoning abilities (``where did you first/last see the cushion?''). \textbf{Second}, we propose a model for the EMQA task that builds allocentric top-down semantic scene representations (the episodic scene memory) during the tour and leverages the same for answering follow-up questions. In order to build the episodic scene memory, our model combines the semantic features extracted from egocentric observations from the tour in a geometrically consistent manner into a single top-down feature map of the scene as witnessed during the tour \cite{cartillier2020}. \textbf{Third}, we extend existing scene memories that model spatial relationships among objects \cite{cartillier2020} (``what'' objects were observed and ``where'') by augmenting them with temporal information (``when'' were these objects observed), thereby making the memory amenable for reasoning about temporal localization questions.
\textbf{Fourth}, we compare our choice of scene representation against a host of baselines and show that our proposed model outperforms language-only baselines by $\sim150\%$, naive, ``off-the-shelf'' solutions that rely on frame-by-frame localization predictions by 37\%, as well as memory representations from prior work that aggregate (via averaging \cite{eslami2018neural}, GRU \cite{cangea2019videonavqa}, or context-conditioned attention \cite{fang2019scene}) buffers of observation features across the tour.
\textbf{Finally}, in addition to photorealistic indoor environments \cite{Matterport3D}, we also test the robustness of our approach under settings with high fidelity to the real world. We show qualitative results of a zero-shot transfer of our approach to a real-world RGB-D dataset \cite{sturm2012benchmark} that presents significantly challenging conditions of imperfect depth, pose, and camera jitter, which are typical deployment conditions for egocentric AR assistants. In addition, we break away from the unrealistic assumption of the availability of oracle pose in indoor settings \cite{datta2020integrating, zhao2021surprising} and perform a systematic study of the impact of noise (of varying types and intensities) in the agent's pose. We show that our model is more resilient than baselines to such noisy perturbations.
\section{Related Work}
\label{sec:related}
\xhdr{Question-Answering in Embodied Environments}.
Training embodied agents to answer questions grounded in indoor environments has been the subject of several prior works. Specifically, \cite{das2017embodied, yu2019multi} present agents that can answer questions about a single and multiple target objects in simulated scenes, respectively. \cite{wijmans2019embodied} presents an instantiation of this line of work to photorealistic scenes from the Matterport3D simulator and 3D point cloud based inputs, whereas \cite{gordon2018iqa} presents an extension of the task that requires the agent to interact with its environment. In all of the above, the agents are set up to carry out each question-answering episode with a ``clean slate,'' i.e., with no avenues for the agent to potentially re-use scene information gathered during prior traversals of the same scene. Although the agent in \cite{gordon2018iqa} has a semantic spatial memory storing information about object semantics and free space, it is not persistent across different question-episodes from the same scene. Moreover, all prior work involves generating a ranked list of language answer tokens as predictions. Our task formulation allows for the sharing of a semantic scene memory that is persistent across the different questions from a scene and localizes answers to questions -- a much more high-fidelity setting for egocentric AI assistants.
\xhdr{Video Question Answering.}
Our work is also reminiscent of video question-answering (VideoQA) tasks. VideoQA has witnessed a rich history of prior work with the introduction of datasets sampled from ``open-world'' domains (movies \cite{tapaswi2016movieqa}, TV shows \cite{lei2018tvqa}, cooking recipes \cite{DaXuDoCVPR2013,zhou2017procnets}) and tasks involving detecting/localizing actions and answering questions about events that transpire in the videos. Our EMQA dataset, instead, comprises egocentric videos generated from navigation trajectories in indoor environments. In addition to localization within the input video sequence, this setting enables additional output modalities, such as grounding over scene floor-plans, that are incompatible with existing VideoQA domains. Moreover, existing VideoQA datasets are accompanied by rich, per-frame annotations such as subtitles, plot scripts and other sub-event metadata. In contrast, EMQA assumes no such additional annotations. \cite{grauman2021ego4d} is concurrent with our work and proposes a large-scale dataset of egocentric videos paired with 3D meshes of environments in which the recorded activities took place.
\xhdr{Localization via Dialog}. \cite{hahn2020you, de2018talk} present a task wherein an entity having access to the top-down floorplan of an environment needs to localize an agent (on the top-down map) navigating within a scene through first-person vision alone. Both assume that the top-down floorplan of the scene is provided to the agent as input whereas our model constructs the map representation from scratch from the guided tours.
\xhdr{Scene Memory Representations.}
Building representations of scenes for assisting agents in embodied tasks has a rich history of prior work with early examples adopting the hidden state of an LSTM navigation model as a compact representation of the scene \cite{wijmans2020ddppo, habitat19iccv}. To overcome the limited expressive power of a single state vector representing complex, 3D scenes, more recent approaches have modelled scene memories as either a buffer of observed egocentric features \cite{fang2019scene,eslami2018neural}, 2D metric grids \cite{cartillier2020,blukis2018mapping,Anderson2019ChasingGI,chaplot2020object}, topological maps \cite{savinov2018semi,wu2019bayesian} or full-scale 3D semantic maps \cite{tung2019learning,cheng2018geometry,prabhudesai2019embodied}. Storing observed egocentric features in a buffer does not explicitly model the geometric and spatial relationships among objects in the scene. Topological graphs are not optimal for precise metric localization (a desideratum for our task). The memory constraints involved with constructing voxel-based 3D scene representations restricts the feature maps proposed in \cite{tung2019learning,cheng2018geometry} to simple scenes (handful of objects placed on a table-top). On the contrary, our task deals with indoor environments of significantly higher visual complexity. Our scene memory (allocentric, top-down semantic feature map) is most similar to \cite{cartillier2020}. We present a novel extension to the scene features from \cite{cartillier2020} by incorporating temporal information with the semantic features (``when'' was ``what'' observed and ``where'').
\section{Experimental Results}
\label{sec:results}
\textbf{Metrics}.
Our models generate a binary segmentation map (``answer'' vs. ``background'' pixels) as their output. Therefore, we report standard segmentation metrics to evaluate the output answer localization. Specifically, for each data point (tour + question + answer), we compute the precision, recall, and intersection-over-union (IoU) between the predicted and ground-truth binary answer maps for the question.
We report the aforementioned metrics, averaged across the test splits of our dataset of tours.
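For a single question, these quantities reduce to simple counts over the binary maps, as the sketch below illustrates.
\begin{verbatim}
import numpy as np

def localization_metrics(pred, gt, eps=1e-8):
    """IoU, precision and recall between boolean answer maps of equal shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / (union + eps)
    precision = inter / (pred.sum() + eps)
    recall = inter / (gt.sum() + eps)
    return iou, precision, recall
\end{verbatim}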
\xhdr{Quantitative Results}. We report results for our model's predictions in both the top-down map and the egocentric tour output modalities in Tab. \ref{tab:results-topdown}. As stated in Sec. \ref{sec:intro}, grounding answers in the top-down floorplan is equivalent to localizing them within the egocentric pixels of the agent tour: we simply back-project top-down map pixel predictions onto the agent's egocentric frame of reference. Therefore, for simplicity, we discuss trends in the top-down map space in the subsequent text.
We outperform the SMNetDecoder \cite{cartillier2020} baseline with 8.2\% and 42\% gains in IoU and recall, respectively. This is due to a combination of two factors: the SMNetDecoder baseline does not encode knowledge about the temporal information of tours, and our proposed model offers a better mechanism for grounding the semantics of questions into the top-down map features via the more expressive LingUNet-based question-answering model. To isolate the gains due to the availability of temporal features, we also train a variant of our model without them. We see that, even in the absence of temporal features, we are able to outperform SMNetDecoder (recall of $60.81$ vs. $43.86$, IoU of $27.42$ vs. $26.92$). Moreover, the EgoSemSeg baseline performs worse than the SMNetDecoder baseline (and, by association, our model). This empirically demonstrates the superiority of our proposed approach over naive, ``off-the-shelf'' solutions for the task and is consistent with observations made by prior work \cite{cartillier2020}.
All the baselines that rely on a buffer of egocentric RGB-D frame features as scene memories fail spectacularly at the task. This further verifies the hypothesis that compressed, 1-D representations of scenes are woefully inadequate for tasks such as ours. Encoding spatial knowledge of how objects are laid out in scenes, together with temporal information about when they were observed during a tour, into such representations, and then decoding precise answer localizations from them, is an extremely challenging problem. Our findings rule out such memory representations for our task.
In Fig. \ref{fig:qual-examples-temporal-feats-compare}, we also show qualitatively that our agent learns to distinguish all, first- and last-seen instances of tables within a given scene (refer to Suppl. for more qualitative examples).
\begin{figure*}
\centering
\includegraphics[clip,trim=0cm 0cm 0cm 0cm,width=1.0\textwidth]{figures/noise_results_fig_updated.png}
\vspace{0.05in}
\caption{Samples of noisy tours obtained by adding (a) noise sampled from LoCoBot \cite{murali2019pyrobot} and (b) using a visual odometry model \cite{zhao2021surprising}. (c) Quantitative evaluation of our models under these noisy pose settings (d) Qualitative examples of noisy semantic map predictions from the Matterport \cite{Matterport3D} dataset.}
\label{fig:noise}
\end{figure*}
\xhdr{Temporal features help}. As stated in Sec. \ref{sec:models}, having knowledge of when objects were observed during the tour is critical for answering temporal localization questions. To further elucidate this claim, we break down the performance of both variants of our model (Ours v/s Ours(+temporal) in Tab. \ref{tab:results-topdown}) by question types (spatial and spatio-temporal). As shown in Fig. \ref{fig:qual-examples-temporal-feats-compare} (b), we see a $24\%$ relative improvement in IoU for spatio-temporal localization questions upon the addition of temporal features. There is no significant impact on metrics for spatial questions.
\begin{comment}
\begin{table*}[t]
\rowcolors{2}{gray!10}{white}
\small
\centering
\begin{tabular}{l c c c c c c c c}
\toprule
&& \multicolumn{3}{c}{\textbf{{\footnotesize MP3D test (``short'' tours)}}} && \multicolumn{3}{c}{\textbf{{\footnotesize MP3D test (``full'' tours)}}} \\
\cmidrule{3-5} \cmidrule{7-9}
\textbf{{\footnotesize Method}} && {\footnotesize IoU} & {\footnotesize Recall} & {\footnotesize Precision} && {\footnotesize IoU} & {\footnotesize Recall} & {\footnotesize Precision} \\
\cmidrule{1-9}
LangOnly && 15.23 {\tiny $\pm$ 0.23} & 23.19 {\tiny $\pm$ 0.27} & 29.61 {\tiny $\pm$ 0.43} && 5.26 {\tiny $\pm$ 0.18} & 14.57 {\tiny $\pm$ 0.62} & 7.86 {\tiny $\pm$ 0.19} \\
EgoSemSeg && 29.05 {\tiny $\pm$ 0.36} & 39.45 {\tiny $\pm$ 0.50} & \textbf{59.57} {\tiny $\pm$ 0.55} && 23.42 {\tiny $\pm$ 0.73} & 36.06 {\tiny $\pm$ 1.06} & 40.13 {\tiny $\pm$ 1.15} \\
SMNetDecoder && 31.36 {\tiny $\pm$ 0.31} & 45.86 {\tiny $\pm$ 0.23} & 55.52 {\tiny $\pm$ 0.54} && 27.13 {\tiny $\pm$ 1.12} & 43.39 {\tiny $\pm$ 1.58} & \textbf{41.96} {\tiny $\pm$ 1.22} \\
\hline
EgoBuffer-Avg \cite{eslami2018neural} && 2.18 {\tiny $\pm$ 0.09} & 4.29 {\tiny $\pm$ 0.18} & 7.98 {\tiny $\pm$ 0.28} && 0.11 {\tiny $\pm$ 0.01} & 0.40 {\tiny $\pm$ 0.06} & 0.34 {\tiny $\pm$ 0.03} \\
EgoBuffer-GRU \cite{bakker2001reinforcement} && 1.34 {\tiny $\pm$ 0.07} & 2.08 {\tiny $\pm$ 0.12} & 9.36 {\tiny $\pm$ 0.23} && 0.01 {\tiny $\pm$ 0.00} & 0.01 {\tiny $\pm$ 0.00} & 0.04 {\tiny $\pm$ 0.00} \\
EgoBuffer-Attn \cite{fang2019scene} && 2.01 {\tiny $\pm$ 0.07} & 3.66 {\tiny $\pm$ 0.12} & 10.15 {\tiny $\pm$ 0.26} && 0.14 {\tiny $\pm$ 0.03} & 0.17 {\tiny $\pm$ 0.03} & 1.01 {\tiny $\pm$ 0.17} \\
\hline
\textbf{Ours} && 36.66 {\tiny $\pm$ 0.27} & 49.64 {\tiny $\pm$ 0.41} & 53.12 {\tiny $\pm$ 0.32} && 28.04 {\tiny $\pm$ 0.94} & 60.96 {\tiny $\pm$ 1.41} & 32.83 {\tiny $\pm$ 0.96} \\
\textbf{Ours} (+temporal) && \textbf{38.29} {\tiny $\pm$ 0.44} & \textbf{51.47} {\tiny $\pm$ 0.58} & 53.48 {\tiny $\pm$ 0.59} && \textbf{29.78} {\tiny $\pm$ 0.59} & \textbf{62.68} {\tiny $\pm$ 1.08} & 34.36 {\tiny $\pm$ 0.73} \\
\bottomrule
\end{tabular}
\\ [8pt]
\caption{EMQA results for our proposed model and baselines in the ``egocentric pixel'' output space.}
\label{tab:results-ego}
\end{table*}
\end{comment}
\xhdr{Sim2Real Robustness}.
Moving beyond simulation, we analyze the robustness of our models to typical sources of noise that might arise from the real-world deployment of such systems. First, we test our EMQA model trained in simulation \cite{Matterport3D} on raw video sequences captured in the real world. We use the RGB-D observations + camera poses from the RGB-D SLAM benchmark of \cite{sturm2012benchmark}, which presents a significantly challenging test-bed with realistic conditions for egocentric AR applications: noisy depth and pose, and head-worn camera jitter. Despite these challenges, our approach provides promising results with zero-shot generalization (Fig. \ref{fig:qual-examples-temporal-feats-compare} (c)). Without any fine-tuning, the agent is able to ground answers to questions and reasonably distinguish between first- and last-seen instances of objects.
Second, following prior work \cite{datta2020integrating,zhao2021surprising}, we remove the assumption of oracle indoor localization and investigate our models under conditions of noisy pose. Specifically, we perturb the ground-truth pose sequence from our dataset in two ways. First, we add noise (independently, at every step) from a distribution estimated with samples collected by a LoCoBot \cite{murali2019pyrobot} (Fig. \ref{fig:noise} (a)). Second, we predict the relative pose change between two successive steps via a state-of-the-art visual odometry model \cite{zhao2021surprising} and integrate these estimates over the trajectory to maintain a noisy estimate of the current pose (Fig. \ref{fig:noise} (b)). The latter is more realistic as it takes into account drift in the agent's pose estimates due to cascading errors along the trajectory.
As expected, when EMQA models trained with oracle localization inputs are evaluated with noisy pose, the quality of the scene representations (Fig. \ref{fig:noise} (d)) and the task metrics (Fig. \ref{fig:noise} (c)) drop in proportion to the severity and nature (independent v/s cumulative) of the added noise. We find that the IoU of our proposed model drops by 29\%, as compared to a 36\% drop for our best-performing baseline, indicating that our model is more resilient to the added noise. Finally, we also show that re-training our models (both the SMNet scene encoder and the LingUNet question-answering module) in the noisy settings allows us to regain some of this lost performance (increase in IoU and precision across all three noise models) in Fig. \ref{fig:noise} (c).
Refer to Suppl. for more details.
\xhdr{Limitations}.
Our approach involves building static scene maps, which restricts the setup to questions about objects (such as furniture items) whose positions in the scene remain largely fixed. One way to overcome this is to update the scene maps (by re-sampling the agent tours) at a sufficient frequency so that the constructed scene maps more closely approximate the current environment state.
\section{Conclusion}
We study the task of question-answering in 3D environments with the goal of enabling egocentric personal AI assistants. Towards that end, we propose a model that builds semantic feature representations of scenes as its episodic memory. We show that exploiting such bottleneck scene representations enables the agent to effectively answer questions about the scene, and we demonstrate its superiority over strong baselines. Our investigations into the robustness of such a system to different forms of noise in its inputs present promising evidence for future research towards deploying such agents in egocentric AR devices in the real world.
\vspace{0.5em}
\xhdr{Acknowledgements}: The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
\section{Models}
\label{sec:models}
\begin{figure*}
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, scale=0.50]{figures/model_figure.png}
\vspace{0.1in}
\caption{A schematic diagram of our proposed EMQA agent. Our agent first constructs an episodic memory representation of the tour and then grounds answers to questions on the top-down scene floorplan using a LingUNet-based answering model. A sample tour from the Matterport \cite{Matterport3D} is shown in the figure.}
\label{fig:model}
\end{figure*}
Any model for the EMQA task must, broadly speaking, comprise modules for the following two sub-tasks: (1) scene memory representation and (2) question-answering. The former takes the scene tour (the RGB-D video frame sequence and associated ground-truth pose) as input and generates a compact representation of the scene that ideally encodes information about objects, their relative spatial arrangements and when they were viewed during the tour. The latter takes the question as input, operates on this episodic scene memory and generates its predicted answer as the output. In this section, we describe our choices regarding the specific instantiations of the two modules.
\xhdr{Scene Memory Representation.}
\label{subsec:scene-memory}
As our preferred choice of scene representation, we use allocentric, 2D, top-down, semantic features \cite{cartillier2020}. These representations are computed via projecting egocentric visual features onto an allocentric, top-down floor-plan of the environment using the knowledge of camera pose and depth.
\begin{table*}[h]
\rowcolors{2}{gray!10}{white}
\small
\centering
\begin{tabular}{l c c c c c c c c}
\toprule
&& \multicolumn{3}{c}{\textbf{{\footnotesize Top-down map output space}}} && \multicolumn{3}{c}{\textbf{{\footnotesize Egocentric pixel output space}}} \\
\cmidrule{3-5} \cmidrule{7-9}
\textbf{{\footnotesize Method}} && {\footnotesize IoU} & {\footnotesize Recall} & {\footnotesize Precision} && {\footnotesize IoU} & {\footnotesize Recall} & {\footnotesize Precision} \\
\cmidrule{1-9}
LangOnly && 4.75 {\tiny $\pm$ 0.14} & 14.41 {\tiny $\pm$ 0.62} & 6.98 {\tiny $\pm$ 0.18} && 5.26 {\tiny $\pm$ 0.18} & 14.57 {\tiny $\pm$ 0.62} & 7.86 {\tiny $\pm$ 0.19} \\
EgoSemSeg && 22.89 {\tiny $\pm$ 0.69} & 35.85 {\tiny $\pm$ 1.06} & 38.45 {\tiny $\pm$ 1.07} &&23.42 {\tiny $\pm$ 0.73} & 36.06 {\tiny $\pm$ 1.06} & 40.13 {\tiny $\pm$ 1.15} \\
SMNetDecoder && 26.92 {\tiny $\pm$ 1.12} & 43.86 {\tiny $\pm$ 1.25} & \textbf{40.95} {\tiny $\pm$ 1.26} &&27.13 {\tiny $\pm$ 1.12} & 43.39 {\tiny $\pm$ 1.58} & \textbf{41.96} {\tiny $\pm$ 1.22} \\
\hline
EgoBuffer-Avg \cite{eslami2018neural} && 0.07 {\tiny $\pm$ 0.01} & 0.37 {\tiny $\pm$ 0.06} & 0.24 {\tiny $\pm$ 0.03} && 0.11 {\tiny $\pm$ 0.01} & 0.40 {\tiny $\pm$ 0.06} & 0.34 {\tiny $\pm$ 0.03} \\
EgoBuffer-GRU \cite{bakker2001reinforcement} && 0.01 {\tiny $\pm$ 0.00} & 0.01 {\tiny $\pm$ 0.00} & 0.02 {\tiny $\pm$ 0.00} && 0.01 {\tiny $\pm$ 0.00} & 0.01 {\tiny $\pm$ 0.00} & 0.04 {\tiny $\pm$ 0.00}\\
EgoBuffer-Attn \cite{fang2019scene} && 0.12 {\tiny $\pm$ 0.02} & 0.16 {\tiny $\pm$ 0.02} & 0.85 {\tiny $\pm$ 0.15} && 0.14 {\tiny $\pm$ 0.03} & 0.17 {\tiny $\pm$ 0.03} & 1.01 {\tiny $\pm$ 0.17} \\
\hline
\textbf{Ours} && 27.42 {\tiny $\pm$ 0.64} & 60.81 {\tiny $\pm$ 1.28} & 31.94 {\tiny $\pm$ 0.60} && 28.04 {\tiny $\pm$ 0.94} & 60.96 {\tiny $\pm$ 1.41} & 32.83 {\tiny $\pm$ 0.96}\\
\textbf{Ours} (+temporal) && \textbf{29.11} {\tiny $\pm$ 0.44} & \textbf{62.27} {\tiny $\pm$ 1.13} & 33.39 {\tiny $\pm$ 0.51} && \textbf{29.78} {\tiny $\pm$ 0.59} & \textbf{62.68} {\tiny $\pm$ 1.08} & 34.36 {\tiny $\pm$ 0.73} \\
\bottomrule
\end{tabular}
\\ [8pt]
\caption{EMQA results for our proposed model and baselines in the ``top-down map'' and ``egocentric pixel'' output space.}
\label{tab:results-topdown}
\end{table*}
More specifically, as shown in Fig. \ref{fig:model} (SMNet), at each step of the tour, we first extract convolutional features from the input RGB-D video frame via a RedNet \cite{jiang2018rednet} model that has been trained for egocentric semantic segmentation on indoor scenes from the SUN-RGBD \cite{sunrgb} dataset and then fine-tuned on egocentric frames from Matterport3D.
Next, these per-step egocentric semantic features are projected onto the scene's top-down floor-plan. The resulting feature map is of a fixed resolution -- each ``cell'' in the map corresponds to a fixed, $2$cm $\times 2$cm metric space in the real world and encodes semantic information about the objects present in that space (as viewed top-down). These local, per-step, projected features from each time step are accumulated using a GRU into a consolidated spatial memory tensor that serves as an ``episodic memory representation'' of the tour. The GRU is pre-trained to decode the top-down semantic segmentation from the scene memory tensor \cite{cartillier2020} with the ground truth top-down semantic map for the tour being computed from the annotated Matterport3D semantic mesh, as described in Sec. \ref{sec:dataset-questions}.
On account of how these 2D scene features are derived, they have greater expressive capacity, model inter-object spatial relationships better (due to geometrically consistent pin-hole feature projections) and do not suffer from memory constraints of voxel-based representations.
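To make the projection step concrete, the following Python sketch scatters per-pixel egocentric features onto a top-down metric grid using depth, camera intrinsics and pose. The map origin at the grid centre, the y-up axis convention and max-pooling as the per-cell reduction are simplifying assumptions for illustration; the learned GRU accumulation described above is not reproduced here.
\begin{verbatim}
import numpy as np

def project_to_topdown(feat, depth, K, T_world_cam, grid=250, cell=0.02):
    # feat: (C, H, W) egocentric features, depth: (H, W) in metres,
    # K: (3, 3) intrinsics, T_world_cam: (4, 4) camera-to-world pose.
    C, H, W = feat.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.reshape(-1)
    rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
    pts_cam = rays * z                                # 3D points, camera frame
    pts_world = T_world_cam @ np.vstack([pts_cam, np.ones(H * W)])
    ix = np.floor(pts_world[0] / cell).astype(int) + grid // 2
    iz = np.floor(pts_world[2] / cell).astype(int) + grid // 2
    valid = (z > 0) & (ix >= 0) & (ix < grid) & (iz >= 0) & (iz < grid)
    topdown = np.zeros((C, grid, grid))
    f = feat.reshape(C, -1)
    for i, j, k in zip(iz[valid], ix[valid], np.where(valid)[0]):
        topdown[:, i, j] = np.maximum(topdown[:, i, j], f[:, k])   # max-pool
    return topdown
\end{verbatim}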
\xhdr{Spatiotemporal memory.}
While the above is a sensible choice for representing objects in scenes, it does not encode temporal information about the tour (when were the objects observed?). Therefore, in this work, we present a novel extension to the scene representations from \cite{cartillier2020} by augmenting them with information about when each metric cell in the representation was observed during the tour.
As shown in Fig. \ref{fig:model} (Spatiotemporal memory), this is done by a channel-wise stacking of the per-step masks over ``observed'' locations (from Sec. \ref{sec:dataset-questions}). Refer to the Supp. for more details.
\xhdr{Question-Answering.}
\label{subsec:question-answering}
We adopt the LingUNet \cite{blukis2018mapping} architecture to ground answers to input questions by exploiting the constructed scene memory (Fig. \ref{fig:model} (LingUNet)). LingUNet is an encoder-decoder architecture with language-conditioned skip connections. Prior work \cite{blukis2018mapping, hahn2020you, Anderson2019ChasingGI} has demonstrated that it is a highly performant architecture for tasks such as grounding of language-specified goal locations onto top-down maps of scenes for agent navigation.
The input questions are encoded using a 64-dim, single-layer LSTM. Our 3-layer LingUNet-based question-answering model takes the constructed scene memory features and the LSTM embedding of the question as inputs and generates a spatial feature map over the 2D floorplan (refer to Suppl. for the layer-wise architectural details of the LingUNet model). The spatial feature map is further processed by a convolutional block to generate the spatial distribution of answer predictions scores. This predicted ``heatmap'' represents the agent's belief with respect to the localization of the target object in question over the top-down scene floorplan. Both the question-encoder LSTM and the LingUNet models are trained end-to-end with gradients from the question-answering loss.
\xhdr{Training Details}
The visual encoder (RedNet) is first trained to perform egocentric semantic segmentation via a pixel-wise CE loss on egocentric frames. Features from this pre-trained and frozen RedNet are used by the scene memory encoder to generate our episodic memory (via a pixel-wise CE loss on top-down semantic maps). Finally, the episodic memories from the pre-trained and frozen scene encoder are used to train the question-answering model. To do that, we use the ground truth answers (as described in Sec. \ref{sec:dataset-questions}) and train using a per-pixel binary cross entropy loss that encourages the model to correctly classify each ``cell'' in the top-down map as belonging to the answer category or not. Since the optimization process in our case deals with a severe class imbalance (only a few hundred pixels belong to the answer class among several tens of thousands of ``background'' pixels), we leverage the dynamic weighting properties of the Focal loss \cite{focal}, which adjusts the contribution of the easily classified ``background'' pixels to the overall loss in order to stabilize training for our model. In our experiments, we also found that setting the bias of the last layer to the ratio of the number of positive to negative samples and using the normalization trick from \cite{focal} helps.
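A minimal PyTorch sketch of this per-pixel focal loss and of a prior-based initialization of the output bias is shown below; dropping the $\alpha$-weighting and interpreting the bias trick as a log-prior initialization are simplifying assumptions.
\begin{verbatim}
import math
import torch.nn as nn
import torch.nn.functional as F

def pixelwise_focal_loss(scores, target, gamma=2.0):
    # Binary focal loss on per-pixel sigmoid scores (gamma = 2, no alpha).
    bce = F.binary_cross_entropy(scores, target, reduction="none")
    p_t = scores * target + (1.0 - scores) * (1.0 - target)  # prob. of true class
    return ((1.0 - p_t) ** gamma * bce).mean()

def init_output_bias(last_conv, num_pos, num_neg):
    # Initialise the final layer's bias from the positive/negative pixel
    # counts so that the initial predictions reflect the class imbalance.
    nn.init.constant_(last_conv.bias, math.log(num_pos / num_neg))
\end{verbatim}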
\section{Additional Dataset Details}
The Matterport3D \cite{Matterport3D} meshes are annotated with 40 object categories. Following prior work \cite{cartillier2020}, we work with the $12$ most commonly occurring object types, leaving out objects such as walls and curtains, which are rendered as thin arrays of pixels on the top-down map. The list of the 12 object categories used in our work is as follows: shelving, fireplace, bed, table, plant, drawers, counter, cabinet, cushion, sink, sofa and chair.
Fig. \ref{fig:dataset-qual-examples-suppl} shows additional qualitative examples of questions and associated ground-truth, top-down answer maps from our dataset.
\begin{figure*}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, scale=0.30]{figures/dataset_qual_examples_rgb.pdf}
\vspace{0.15in}
\caption{This figure illustrates more examples of questions grounded in Matterport \cite{Matterport3D} scenes from our proposed EMQA dataset.}
\label{fig:dataset-qual-examples-suppl}
\end{figure*}
We also show additional analysis of the distribution of locations and sizes of objects from our questions (Fig. \ref{fig:dataset-stats-obj-dist} and \ref{fig:dataset-stats-overall-dist}). Fig. \ref{fig:dataset-stats-obj-dist} visualizes the spatial distribution of 6 object types (drawers, sink, plant, fireplace, cushion and shelving) over the $250 \times 250$ spatial maps across the ``short tours'' data splits (recall that this is the dataset that we train our models on). Each object instance is represented by a circle, with the center of the circle corresponding to the location of the object instance and the radius representing its size (on the top-down map). The sizes of object instances are measured in terms of the number of pixels. As mentioned in Sec. 3 (main paper), all top-down maps are of a fixed $2$cm $\times 2$cm resolution; therefore, the number of pixels covered by an object on the top-down map has a direct correlation with its size in the real world.
\begin{figure*}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, scale=0.40]{figures/dataset_stats_obj.pdf}
\caption{Distribution of spatial arrangement and size of object instances across $6$ different object categories. Each instance is represented by a circle whose center and radius correspond to the location and size of the object, as viewed on the top-down map.}
\label{fig:dataset-stats-obj-dist}
\end{figure*}
As evident from Fig. \ref{fig:dataset-stats-obj-dist}, the objects demonstrate a good distribution over spatial locations in the map, thereby alleviating dataset biases such as ``are sinks almost always found in specific corners of the map?''. Recall also that beating the \textcolor{blue}{LangOnly} baseline by almost $150$\% indicates that our models are free of such biases.
Additionally, Fig. \ref{fig:dataset-stats-overall-dist} (left) shows the mean object locations for all $12$ categories across the ``short tours'' data splits. Notice that, consistent with Fig. \ref{fig:dataset-stats-obj-dist}, the mean locations for nearly all object categories are close to the center of the $250 \times 250$ map. This further confirms a comprehensive distribution of objects across all spatial locations in the map. Fig. \ref{fig:dataset-stats-overall-dist} (right) shows a distribution of average object sizes per category. Here, we can see that our dataset involves asking questions about objects of varying sizes -- from typically small objects such as cushions to bigger ones such as beds.
\begin{figure*}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, scale=0.40]{figures/dataset_stats_overall.pdf}
\caption{[Left] Scatter-plot showing the mean locations of all instances across a given object category, for all object categories. [Right] A distribution of the mean object size on the top-down map across all categories demonstrating the spectrum of object sizes in our dataset.}
\label{fig:dataset-stats-overall-dist}
\end{figure*}
\section{Additional Model+Training Details}
\xhdr{Details of the temporal features}.
In this section, we provide more details regarding the construction of the temporal-features component of our scene memories. At a high level, we save temporal features at every $(i, j)$ grid cell on the top-down map. To do that, we first represent each video tour (of varying length) as a sequence of 20 (determined heuristically) equal-length segments. For every tour segment, we then compute the subset of top-down grid cells that were observed by the agent during that part of the tour. This induces a binary top-down map, per tour segment, indicating whether a given $(i, j)$ was observed during that part of the tour or not. Aggregated over all segments via channel-wise stacking, we get a 20-dim multi-hot vector at each $(i, j)$. Intuitively, these 20 bits for any top-down location $(i, j)$ indicate the time instances (in terms of segments) during the tour when that metric location was observed.
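A minimal sketch of this construction is given below; how the rounding is handled when splitting a tour into 20 segments is an implementation assumption.
\begin{verbatim}
import numpy as np

def temporal_features(observed_masks, num_segments=20):
    # observed_masks: (T, H, W) boolean array; entry (t, i, j) is True if
    # grid cell (i, j) was observed at tour step t.
    T, H, W = observed_masks.shape
    bounds = np.linspace(0, T, num_segments + 1).astype(int)
    feats = np.zeros((num_segments, H, W), dtype=np.float32)
    for k in range(num_segments):
        seg = observed_masks[bounds[k]:bounds[k + 1]]
        if len(seg):
            feats[k] = seg.any(axis=0)   # cell seen at least once in segment k
    return feats                         # (num_segments, H, W) multi-hot stack
\end{verbatim}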
\xhdr{Details of the LingUNet architecture}.
Fig. 3 in the main paper shows a schematic diagram of our EMQA agent. We show the same in Fig. \ref{fig:model} for the readers' convenience and provide more details about our LingUNet-based question-answering module in this section. Recall (from Sec. 4 in the main paper) that our LingUNet-based question-answering model takes the constructed scene memory features and an embedding of the question as inputs and generates a top-down heatmap of the agent's answer prediction as its output. We use a $3$-layer LingUNet architecture, as shown in Fig. \ref{fig:model}. The module receives a tensor of dimensions $256 \times 250 \times 250$ (the spatio-temporal memory tensor) and a $64$-dim question LSTM embedding as inputs. The scene memory tensor is encoded through a series of three $3 \times 3$ Conv. layers, and the question embedding is L2-normalized and used to generate three convolutional filters via FC layers. These question-conditioned convolutional filters operate on the intermediate feature outputs of the $3$ encoder Conv. layers to generate the language-conditioned skip connections. Finally, the encoded feature volumes are passed through a series of $3$ de-convolutional layers (see Fig. \ref{fig:model}), with added skip connections, to generate the final output map: a tensor containing a $128$-dim feature vector for each of the $250 \times 250$ spatial locations of the input map.
\begin{figure*}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, scale=0.60]{figures/model_figure.pdf}
\vspace{0.1in}
\caption{A schematic diagram of our proposed EMQA agent. Our agent first constructs an episodic memory representation of the tour and then grounds answers to questions on the top-down scene floorplan using a LingUNet-based answering model. A sample tour from the Matterport \cite{Matterport3D} is shown in the figure.}
\label{fig:model}
\end{figure*}
The output feature map from the LingUNet model is post-processed by a $1 \times 1$ Conv. layer, followed by a Sigmoid non-linearity, to generate the agent's prediction score for each of the $250 \times 250$ spatial map cells.
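For illustration, a minimal PyTorch sketch of such a language-conditioned encoder--decoder is given below. The stride-$1$ convolutions, the uniform hidden width and the ReLU non-linearities are simplifying assumptions; the exact layer specifications are the ones described above.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class LingUNetQA(nn.Module):
    # A 3-layer LingUNet-style answering head (sketch).
    def __init__(self, in_ch=256, hid=128, q_dim=64):
        super().__init__()
        self.enc = nn.ModuleList([
            nn.Conv2d(in_ch, hid, 3, padding=1),
            nn.Conv2d(hid, hid, 3, padding=1),
            nn.Conv2d(hid, hid, 3, padding=1)])
        # question embedding -> weights of a 1x1 conv at each encoder level
        self.txt2filter = nn.ModuleList(
            [nn.Linear(q_dim, hid * hid) for _ in range(3)])
        self.dec = nn.ModuleList([
            nn.Conv2d(hid, hid, 3, padding=1),
            nn.Conv2d(2 * hid, hid, 3, padding=1),
            nn.Conv2d(2 * hid, hid, 3, padding=1)])
        self.out = nn.Conv2d(hid, 1, 1)          # 1x1 conv -> per-cell score

    def forward(self, mem, q):                   # mem: (B,C,H,W), q: (B,64)
        q = F.normalize(q, dim=-1)               # L2-normalised embedding
        f, gates = mem, []
        for enc, fc in zip(self.enc, self.txt2filter):
            f = F.relu(enc(f))
            w = fc(q).view(f.shape[0], f.shape[1], f.shape[1], 1, 1)
            # language-conditioned 1x1 convolution (per-example filters)
            g = torch.cat([F.conv2d(f[b:b + 1], w[b]) for b in range(len(f))])
            gates.append(g)
        h = F.relu(self.dec[0](gates[2]))
        h = F.relu(self.dec[1](torch.cat([h, gates[1]], 1)))
        h = F.relu(self.dec[2](torch.cat([h, gates[0]], 1)))
        return torch.sigmoid(self.out(h))        # (B,1,H,W) answer heatmap

# e.g. LingUNetQA()(torch.randn(1, 256, 64, 64), torch.randn(1, 64))
\end{verbatim}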
\xhdr{Other training hyperparameters}.
As stated in Sec. 4 (main paper), we use a dynamically-weighted variant of the binary cross entropy loss (Focal loss \cite{focal}), applied per ``pixel'' on the top-down map output to train our model to correctly predict whether each metric ``pixel'' on the map belongs to the answer category or not. We set the gamma parameter in our Focal loss implementation to $2.0$ and use the Adam optimizer with a learning rate of $2$e$-4$ and weight-decay of $4$e$-4$. To circumvent memory issues during training, our data loader returns singleton batches of questions (and associated scene memory features). However, we accumulate gradients and update the model weights every $B = 4$ iterations to simulate an effective batch-size of $4$.
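The accumulation scheme amounts to the following loop (a sketch in which \texttt{model}, \texttt{loader} and \texttt{loss\_fn} are placeholders for our question-answering model, the singleton-batch data loader and the focal loss, respectively).
\begin{verbatim}
import torch

def train_with_grad_accumulation(model, loader, loss_fn, B=4):
    opt = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=4e-4)
    opt.zero_grad()
    for step, (memory, question, gt_map) in enumerate(loader):  # batch size 1
        loss = loss_fn(model(memory, question), gt_map) / B     # scale by B
        loss.backward()               # gradients accumulate across iterations
        if (step + 1) % B == 0:       # simulate an effective batch size of 4
            opt.step()
            opt.zero_grad()
\end{verbatim}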
\xhdr{Details about EgoBuffer baselines}.
In this section, we provide more architectural details about the \textcolor{blue}{EgoBuffer-*} baselines (Sec. 5 in the main paper). As mentioned in Sec. 5, as a first step, we compute egocentric feature volumes for every step of the agent tour via a pre-trained RedNet model (the same pre-trained RedNet model weights are used across all our models and baselines).
For the \textcolor{blue}{EgoBuffer-Avg} baseline, we simply average these per-step, convolutional feature volumes and pass them through a $4$-layer convolutional network (followed by a linear layer) to get a flattened, $1$-D vector representation of the agent tour.
For the \textcolor{blue}{EgoBuffer-GRU} baseline, we instead compute the encoded, flattened, $1$-D feature representation at every step and then pass them through a GRU. The hidden state from the final time step is the representation of the agent's tour.
And finally, for the \textcolor{blue}{EgoBuffer-Attn.} baseline, instead of sending the per-step, flattened egocentric features as inputs to a GRU, we perform a scaled, dot-product attention, conditioned on the question-embedding to obtain the tour representation.
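A sketch of this question-conditioned pooling step is shown below; a shared feature dimension between the question embedding and the per-step features is assumed for simplicity.
\begin{verbatim}
import torch.nn.functional as F

def question_conditioned_pooling(step_feats, q_emb):
    # step_feats: (T, D) flattened per-step features, q_emb: (D,) question.
    scores = step_feats @ q_emb / (q_emb.shape[0] ** 0.5)  # scaled dot product
    weights = F.softmax(scores, dim=0)                     # attention over steps
    return (weights[:, None] * step_feats).sum(dim=0)      # (D,) tour feature
\end{verbatim}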
Once we obtain the tour representation using the methods described above, we append the features with the question embedding and then decode the multi-modal (question+scene) features into a $250 \times 250$ feature map via a $5$-layer de-convolutional network. Finally, we use a $1 \times 1$ Conv + Sigmoid layer to get the answer prediction scores.
Also note that, similar to our proposed model, the \textcolor{blue}{EgoBuffer-*} baselines are trained on fixed-size, $20$-step ``short'' tours. During evaluation on ``full'' tours, we split them into consecutive chunks of $20$ steps each, generate a $250 \times 250$ output for each such ``short tour'' segment and then combine these outputs into the full-scale environment map (using pose information) to get the final output.
\begin{table*}[h]
\rowcolors{2}{gray!10}{white}
\small
\centering
\begin{tabular}{l c c c c c c c c}
\toprule
&& \multicolumn{3}{c}{\textbf{{\footnotesize Top-down map output space}}} && \multicolumn{3}{c}{\textbf{{\footnotesize Egocentric pixel output space}}} \\
\cmidrule{3-5} \cmidrule{7-9}
\textbf{{\footnotesize Method}} && {\footnotesize IoU} & {\footnotesize Recall} & {\footnotesize Precision} && {\footnotesize IoU} & {\footnotesize Recall} & {\footnotesize Precision} \\
\cmidrule{1-9}
LangOnly && 15.09 {\tiny $\pm$ 0.22} & 22.36 {\tiny $\pm$ 0.26} & 29.56 {\tiny $\pm$ 0.41} && 15.23 {\tiny $\pm$ 0.23} & 23.19 {\tiny $\pm$ 0.27} & 29.61 {\tiny $\pm$ 0.43} \\
EgoSemSeg && 27.51 {\tiny $\pm$ 0.32} & 39.24 {\tiny $\pm$ 0.48} & \textbf{55.53} {\tiny $\pm$ 0.58} && 29.05 {\tiny $\pm$ 0.36} & 39.45 {\tiny $\pm$ 0.50} & \textbf{59.57} {\tiny $\pm$ 0.55} \\
SMNetDecoder && 30.76 {\tiny $\pm$ 0.37} & 45.06 {\tiny $\pm$ 0.46} & 54.51 {\tiny $\pm$ 0.37} && 31.36 {\tiny $\pm$ 0.31} & 45.86 {\tiny $\pm$ 0.23} & 55.52 {\tiny $\pm$ 0.54} \\
\hline
EgoBuffer-Avg \cite{eslami2018neural} && 2.46 {\tiny $\pm$ 0.09} & 4.53 {\tiny $\pm$ 0.17} & 8.10 {\tiny $\pm$ 0.26} && 2.18 {\tiny $\pm$ 0.09} & 4.29 {\tiny $\pm$ 0.18} & 7.98 {\tiny $\pm$ 0.28} \\
EgoBuffer-GRU \cite{bakker2001reinforcement} && 1.39 {\tiny $\pm$ 0.06} & 2.02 {\tiny $\pm$ 0.11} & 9.09 {\tiny $\pm$ 0.22} && 1.34 {\tiny $\pm$ 0.07} & 2.08 {\tiny $\pm$ 0.12} & 9.36 {\tiny $\pm$ 0.23}\\
EgoBuffer-Attn \cite{fang2019scene} && 2.18 {\tiny $\pm$ 0.07} & 3.67 {\tiny $\pm$ 0.12} & 10.08 {\tiny $\pm$ 0.24} && 2.01 {\tiny $\pm$ 0.07} & 3.66 {\tiny $\pm$ 0.12} & 10.15 {\tiny $\pm$ 0.26} \\
\hline
\textbf{Ours} && 36.09 {\tiny $\pm$ 0.53} & 48.03 {\tiny $\pm$ 0.62} & 53.04 {\tiny $\pm$ 0.51} && 36.66 {\tiny $\pm$ 0.27} & 49.64 {\tiny $\pm$ 0.41} & 53.12 {\tiny $\pm$ 0.32}\\
\textbf{Ours} (+temporal) && \textbf{37.72} {\tiny $\pm$ 0.45} & \textbf{49.88} {\tiny $\pm$ 0.55} & 53.48 {\tiny $\pm$ 0.64} && \textbf{38.29} {\tiny $\pm$ 0.44} & \textbf{51.47} {\tiny $\pm$ 0.58} & 53.48 {\tiny $\pm$ 0.59} \\
\bottomrule
\end{tabular}
\\ [8pt]
\caption{EMQA results on ``short tours'' for our proposed model and baselines in the ``top-down map'' and ``egocentric pixel'' output space.}
\label{tab:results-short-tours}
\end{table*}
\section{Additional Quantitative EMQA Results}
As a means to circumvent memory constraints during training and as a data augmentation strategy, we subsample 20-step ``short'' tours from the full-scale exploration tours in our dataset (see Sec. 3, sub-heading ``Generating short tours for training'' in the main paper for more details). During training, our model learns to build top-down maps (of smaller spatial dimensions, $250 \times 250$, than those of the full scene) and localize answers to questions on the map from these 20-step tours. During evaluation, we test its generalization to building full-scale maps from the entire exploration trajectory (and not just 20-step tours). We presented results (both quantitative and qualitative) for this full-scale generalization in the main paper. In this section, for completeness, we also provide results of all our models and baselines on the 20-step ``short tours'' in Tab. \ref{tab:results-short-tours}. We also show qualitative examples of predictions made by our agent for these 20-step short tours in Fig. \ref{fig:qual-examples-suppl-short}. All trends observed and discussed in the main paper (Section 6) hold true for Tab. \ref{tab:results-short-tours} as well.
We would like to reiterate (from Section 3 in the main paper) that generating and using these ``short tours'' happens only during training. The focus of our task (as well as the subject of all our evaluations/analysis in the main paper) is on the full-scale exploration tours and maps for entire scenes.
\section{Additional Qualitative EMQA Results}
We show additional qualitative results on both ``short'' (Fig. \ref{fig:qual-examples-suppl-short}) and ``full'' (Fig. \ref{fig:qual-examples-suppl-full}) tours for our proposed EMQA model.
\begin{figure*}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=1.00\textwidth]{figures/qual_results_suppl_short_rgb.pdf}
\vspace{0.1in}
\caption{We provide qualitative results of our proposed EMQA agents grounding answers to questions onto the top-down environment floorplan for ``short'' tours in Matterport \cite{Matterport3D}.}
\label{fig:qual-examples-suppl-short}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=1.00\textwidth]{figures/qual_results_suppl_full_rgb.pdf}
\vspace{0.1in}
\caption{We provide additional qualitative results of our proposed EMQA agents grounding answers to questions onto the top-down environment floorplan for ``full'' tours in Matterport \cite{Matterport3D}.}
\label{fig:qual-examples-suppl-full}
\end{figure*}
\section{Sim2Real: Real-world RGBD Results}
Recall that Fig. 4 (c) in the main paper had qualitative results of evaluating our model on a more challenging, real-world RGB-D dataset \cite{sturm2012benchmark} with imperfect depth+pose and camera jitter. Here, we provide some additional details about the same.
We perform zero-shot evaluation tests on this dataset -- we construct scene memories using our pre-trained SMNet \cite{cartillier2020} model, hand-craft questions relevant to the scene and generate prediction output using our pre-trained LingUNet-based question-answering module (no component of our model was fine-tuned on \cite{sturm2012benchmark}).
We pre-processed the input video by subsampling the original frame sequence (selecting every $10$th frame). We discarded visually blurry images by looking at the variance of the Laplacian of the RGB image; any image with a variance below a threshold of $100$ is discarded. Data inputs are grouped into tuples of (RGB, Depth, Pose) using timestamps, allowing a maximum timestamp difference of 0.02\,s between modalities. The depth frame is further pre-processed through a binary erosion step with a circular element of radius 20 pixels.
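The per-frame filtering and depth pre-processing can be sketched as follows (OpenCV is used for illustration; eroding the valid-depth mask is our interpretation of the binary erosion step).
\begin{verbatim}
import cv2
import numpy as np

def preprocess_frame(rgb, depth, blur_threshold=100.0, erosion_radius=20):
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_threshold:
        return None                        # discard visually blurry frames
    valid = (depth > 0).astype(np.uint8)   # valid-depth mask
    d = 2 * erosion_radius + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (d, d))
    valid = cv2.erode(valid, kernel)       # binary erosion, circular element
    return rgb, depth * valid
\end{verbatim}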
In the answer to the question ``where did you first see the chair?'' (Fig. 4 (c) in the main paper), we see the first two chairs (at the bottom of the map) being detected. The third chair on the other side of the desk is also detected. Although it is not, strictly speaking, the first instance of a chair seen in the video, that chair appears at the beginning of the video.
In the answer to the question ``where did you last see the chair?'', that third chair at the top of the map is not detected this time. The bottom two chairs are detected again because they are also seen last in the video.
In the answer to the question ``where did you see the chair?'', three chairs are detected. The fourth one, with the stuffed animal on it, has been missed.
\section{Sim2Real: Noisy Pose Experiments}
In this section, we discuss the impact of localization noise on the construction of scene memory representations and downstream question answering performance.
\subsection{Noise Models}
As discussed in Sec. 6 of the main paper (subsec: Sim2Real Robustness), we investigate the impact of two qualitatively different types of noise -- (1) noise sampled from a real-world robot \cite{murali2019pyrobot}, \textbf{independently} added to the pose at each step along the ground truth trajectory, and (2) \textbf{cumulatively} integrating per-step noisy pose change estimates derived from a visual odometry-based egomotion estimation module.
In this section, we discuss how we implement and integrate these two noise models into the pose at each step of our guided exploration tours.
\xhdr{Independent Noise}. Here, we leverage the actuation noise models derived from the real-world benchmarking \cite{kadian2020sim2real} of a physical LoCoBot \cite{murali2019pyrobot}. Specifically, this comprises the linear and rotational actuation noise models. The linear noise distribution is a bi-variate Gaussian (with a diagonal covariance matrix) that models localization inaccuracies along the X (orthogonal to the heading on the ground plane) and Z axis (direction of heading on the ground plane), whereas the rotational noise distribution is a uni-variate Gaussian for the agent's heading. We integrate these noise models in the guided tours by independently adding noise to each ground truth pose of the assistant along the tour.
Additionally, we implement this setup with two levels of noise multipliers -- 0.5x and 1x to simulate varying intensities of noise being added. An example of a trajectory with this noise model has been visualized in Fig. 5(a) in the main paper.
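The following sketch illustrates the independent perturbation of a single pose; the standard deviations shown are placeholders for illustration only, not the LoCoBot-benchmarked parameters from \cite{kadian2020sim2real}.
\begin{verbatim}
import numpy as np

def add_independent_noise(pose, multiplier=1.0, rng=np.random.default_rng(0)):
    # pose: (x, z, heading); noise is sampled independently at every step.
    sigma_x, sigma_z = 0.01, 0.02           # metres (placeholder values)
    sigma_theta = np.deg2rad(1.0)           # radians (placeholder value)
    x, z, theta = pose
    x += multiplier * rng.normal(0.0, sigma_x)
    z += multiplier * rng.normal(0.0, sigma_z)
    theta += multiplier * rng.normal(0.0, sigma_theta)
    return np.array([x, z, theta])          # multiplier = 0.5 or 1.0
\end{verbatim}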
\xhdr{Cumulative Noise}. Beyond adding noise to the existing ground truth trajectory pose, we additionally investigate a setting where our model is given an initial pose and, from thereon, estimates per-step changes in its pose solely from observations (visual odometry) and integrates these predictions along the trajectory, thereby maintaining an up-to-date (though noisy) estimate of its current pose.
In contrast to the independent addition of noise to the ground truth pose at each tour step, this is a more challenging setting where the assistant maintains and works with a pose estimate completely on its own. As a result, noise picked up at each step (e.g. during collisions) cascades and accumulates throughout the rest of the trajectory. We present an example of a noisy egomotion trajectory in Fig. 5(b). We now describe the visual odometry model that we adapt from \cite{zhao2021surprising} and use to estimate per-step pose changes.
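A sketch of the integration step is shown below; expressing the per-step estimates in the agent's local frame and composing them on the ground plane is an assumption about conventions.
\begin{verbatim}
import numpy as np

def integrate_egomotion(initial_pose, deltas):
    # deltas: sequence of per-step VO estimates (dx, dz, dtheta) in the
    # agent's local frame; errors accumulate along the trajectory.
    x, z, theta = initial_pose
    trajectory = [np.array([x, z, theta])]
    for dx, dz, dtheta in deltas:
        x += dx * np.cos(theta) - dz * np.sin(theta)  # rotate into world frame
        z += dx * np.sin(theta) + dz * np.cos(theta)
        theta += dtheta
        trajectory.append(np.array([x, z, theta]))
    return np.stack(trajectory)
\end{verbatim}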
\subsection{Visual Odometry Models}
\xhdr{Model Architecture}.
The visual odometry (VO) model takes as input a pair of consecutive RGB-D observations ($o_t$, $o_{t+1}$) and estimates three relative pose transformation components -- the translation along X and Z axis, and the rotation angle ($\theta$).
For our experiments, we employ a stripped-down (bare-bones) version of the architecture proposed in \cite{zhao2021surprising}. The architecture includes a Resnet-18 \cite{he2015resnet} feature extractor backbone followed by two fully-connected (FC) layers with Dropout. We do away with the soft top-down projection and discretized depth inputs, and only use RGB+Depth observation pairs from successive steps. Additionally, as suggested by the authors \cite{zhao2021surprising}, we train action-specific models to deal with the differences in the action distributions.
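A bare-bones PyTorch sketch of such a model is given below; the $8$-channel input (two stacked RGB-D frames), the hidden width and the dropout rate are assumptions about the exact configuration.
\begin{verbatim}
import torch.nn as nn
import torchvision

class BareBonesVO(nn.Module):
    def __init__(self, hidden=512, p_drop=0.5):
        super().__init__()
        backbone = torchvision.models.resnet18()      # randomly initialised
        backbone.conv1 = nn.Conv2d(8, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()                   # keep 512-dim features
        self.backbone = backbone
        self.head = nn.Sequential(                    # two FC layers + Dropout
            nn.Linear(512, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 3))                     # (dx, dz, dtheta)

    def forward(self, obs_pair):                      # (B, 8, H, W)
        return self.head(self.backbone(obs_pair))
\end{verbatim}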
\xhdr{Dataset Preparation}.
The pre-trained VO models provided by the authors of \cite{zhao2021surprising} were trained on the Gibson dataset of 3D scans \cite{xia2018gibson}. In contrast, we work with scenes from the Matterport3D \cite{Matterport3D} dataset. More importantly, these models were trained in a setup that used agent actuation specifications (forward step: 25cm, rotation angle: 30\textdegree) that are significantly different from those used in our exploration tours (forward step: 10cm, rotation angle: 9\textdegree). Therefore, a zero-shot transfer of the models pre-trained by the authors of \cite{zhao2021surprising} to our setup is unlikely to work well. We qualitatively verify this in Fig. \ref{fig:retrained-vs-pretrained-vo}. Note that the trajectory formed by integrating predictions from a pre-trained VO model applied directly to our setup does not match the original trajectory at all.
\begin{figure}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=0.496\textwidth]{figures/retrained-vs-pretrained-vo.pdf}
\caption{Difference in egomotion predictions between the pre-trained VO model provided by the authors of \cite{zhao2021surprising} and the model we retrained on a custom dataset of 100k data points from Matterport3D \cite{Matterport3D} scenes. Apart from the difference in the scene datasets, the provided pre-trained models were trained under significantly dissimilar actuation specifications. This discrepancy called for a re-training of the VO model under the guided tour setup we use \cite{cartillier2020}.}
\label{fig:retrained-vs-pretrained-vo}
\end{figure}
To remedy this, we retrain the VO models from \cite{zhao2021surprising} on scenes from the Matterport3D dataset and under the actuation specifications of EMQA. To do that, we first create a separate VO dataset in Matterport3D \cite{Matterport3D}. For generating the dataset, we follow the protocol laid out in \cite{zhao2021surprising}. We sample (uniformly, at random) two navigable points in the scene and compute the shortest navigable path between the two points. Then, we sample (again, uniformly, at random) consecutive observation pairs along these shortest-path trajectories. Using this method, we create datasets of 100k training and 20k validation data points.
\xhdr{Training and Evaluation}.
Following the training regime proposed by \cite{zhao2021surprising}, we train our model by jointly minimizing the regression and geometric invariance losses from \cite{zhao2021surprising}. Fig. \ref{fig:retrained-vs-pretrained-vo} \textcolor{blue}{(blue)} shows the trajectory generated via integration of predictions from the retrained VO model. We note a significant qualitative improvement, as the trajectory much more closely resembles the ground truth trajectory. In addition, in Tab. \ref{tab:noisy-trajectory-deviation}, we quantitatively analyze the improvements gained through retraining by comparing the average deviation of the trajectories (as measured by the average over per-step pose RMSEs) predicted via the pre-trained and the re-trained VO models. We note that with a retrained VO model, we significantly improve the quality of our trajectories, obtaining $\sim6$x less deviation from the ground truth trajectory.
For completeness, Tab. \ref{tab:noisy-trajectory-deviation} also contains metrics for trajectories generated by independently adding noise to the ground-truth pose. It is evident that both (a) increasing the intensity of independently added noise and (b) moving from independent to VO-based cumulative noise lead to larger deviations of the predicted trajectories from the ground truth ($\sim$1.3x and $\sim$15x increases, respectively).
\begin{table}[h]
\rowcolors{2}{gray!10}{white}
\small
\centering
\begin{tabular}{l c c c}
\toprule
\textbf{Method} & {RMSE (X axis)} & {RMSE (Z axis)} \\
\cmidrule{1-3}
Noise 0.5x (independent) & 0.068 & 0.066 \\
Noise 1x (independent) & 0.09 & 0.088 \\
Egomotion (re-trained) & 1.448 & 1.346 \\
Egomotion (pre-trained \cite{zhao2021surprising}) & 11.074 & 10.386 \\
\bottomrule
\end{tabular}
\\ [8pt]
\caption{Average absolute and relative differences between the ground truth poses and their corresponding noisy estimates at each step. Here, we highlight the improvement in egomotion estimation by retraining the VO model for our setup. We also note how, due to its cumulative nature (accumulating noise along the trajectory), the error metrics are much higher for the egomotion predictions compared to the independent noise models (where noise is independently added to the pose at each step).}
\label{tab:noisy-trajectory-deviation}
\end{table}
\begin{figure*}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=1\textwidth]{figures/noise_retraining.pdf}
\\ [8pt]
\caption{Recovering performance loss under noisy settings in semantic map prediction (a) and question answering (b,c) by retraining SMNet and LingUNet on noisy pose data. In each sub-figure, we observe a dip in the evaluation metrics of the ground truth (GT) pose-trained model upon adding noise to the input pose. This drop is partially recovered from, by retraining the model(s) on noisy data.}
\label{fig:noise-retraining}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=0.75\textwidth]{figures/qual_retrain.pdf}
\\ [8pt]
\caption{We show qualitative examples of semantic map predictions for the three experiments described in Sec. \ref{sec:emqa_noise_eval} -- (1) our model trained and evaluated using ground truth pose (left), (2) our model trained using ground-truth pose, but evaluated in the noisy pose setting (middle) and (3) our model retrained using noisy pose (right). We see sharper boundaries and less label splatter upon retraining with noise.}
\label{fig:qual-retraining}
\end{figure*}
\subsection{Evaluating EMQA Models on Noisy Pose Data}
\label{sec:emqa_noise_eval}
In the discussions so far, we talked about adding noise (of various types and intensities) to the pose information in our dataset. In this section, we first investigate the performance of our models under the noisy settings and then make a case for re-training our EMQA models so that they learn to adapt to noisy pose inputs.
Towards that end, we plot the improvement in performance upon retraining across three setups in Fig. \ref{fig:noise-retraining}. In each of the sub-figures, we first plot the original model's performance (trained and evaluated using ground truth pose i.e. train=GT, eval=GT). This is followed by taking the same model (trained using ground truth pose) and evaluating it under the noisy pose setting (i.e. train=GT, eval=Noise1x). And finally, we re-train our model so that it adapts to the noise and then evaluate this re-trained model in the noisy setting (train=Noise1x, eval=Noise1x).
We now compare the above settings across three experiments.
\xhdr{1. Semantic map prediction performance of SMNet (Fig. \ref{fig:noise-retraining} (a))}: Our method utilizes scene memory representations learnt by SMNet for downstream question answering. As a result, comparing the semantic map predictions serves as a proxy for the quality of scene representations. Here, we observe a 43\% drop in IoU on adding noise and evaluating using the original model. Upon retraining SMNet with noisy data, we see a 6\% gain on the same metric.
\xhdr{2. Question answering (QA) performance of SMNet Decoder baseline \cite{cartillier2020} (Fig. \ref{fig:noise-retraining} (b))}:
Here, we plot the downstream question-answering performance for the SMNetDecoder baseline (our best performing baseline from Sec. 5 in the main paper). We observe a 41\% drop in IoU on adding noise and evaluating using the original model. However, we see a 19\% increase on the same metric when we retrain SMNet on noisy data.
\xhdr{3. Question answering (QA) performance of our method (Fig. \ref{fig:noise-retraining} (c))}: Finally, we plot the downstream question-answering performance for our proposed method. Initially, on integrating noise and evaluating using the original model, we observe a 39\% drop in IoU performance. However, upon retraining SMNet and LingUNet, we see a 10\% increase on the same metric.
Across all three experiments, we observe that (a) when models trained using privileged, oracle pose information are evaluated with noise in the pose, their performance (understandably) drops, and (b) we are able to recover the drop by re-training so that the model learns to adapt to its noisy inputs. We also qualitatively demonstrate the improvement in semantic map predictions obtained through re-training our models to adapt to noisy pose in Fig. \ref{fig:qual-retraining} (note the sharper boundaries and reduced label splatter).
"attr-fineweb-edu": 2.660156,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
\section{Introduction}
In this article, we formulate a mathematical model to examine the power expended by a cyclist on a velodrome.
Our assumptions limit the model to such races as the individual pursuit, the kilometre time trial and the hour record; we do not include drafting effects, which arise in team pursuits.
Furthermore, our model does not account for the initial acceleration; we assume the cyclist to be already in a launched effort.
The only opposing forces we consider are dissipative forces, namely, air resistance, rolling resistance, lateral friction and drivetrain resistance.
We do not consider changes in mechanical energy, which\,---\,on a velodrome\,---\,result from the change of height of the centre of mass due to leaning along the curves.
Nevertheless, our model is empirically adequate \citep{Fraassen}; it accounts for measurements with a satisfactory accuracy.
This article is a continuation of research presented by \citet{DSSbici1}, \citet{DSSbici2}, \citet{BSSbici6} and, in particular, by \citet{SSSbici3} and \citet{BSSSbici4}.
Several details\,---\,for conciseness omitted herein\,---\,are referred to therein.
Geometrically, we consider a velodrome with its straights, circular arcs, and connecting transition curves, whose inclusion\,---\,while presenting a certain challenge, and neglected in previous studies \citep[e.g.,][]{SSSbici3}\,---\,increases the empirical adequacy of the model, as discussed by, among others, \citet{Solarczyk}.
We begin this article by expressing mathematically the geometry of both the black line%
\footnote{The circumference along the inner edge of this five-centimetre-wide line\,---\,also known as the measurement line and the datum line\,---\,corresponds to the official length of the track.}
and the inclination of the track.
Our expressions are accurate representations of the common geometry of modern $250$\,-metre velodromes~(Mehdi Kordi, {\it pers.~comm.}, 2020).
We proceed to formulate an expression for power expended against dissipative forces, which we examine for both the constant-cadence and constant-power cases.
We examine their empirical adequacy, and conclude by discussing the results.
\section{Track}
\label{sec:Formulation}
\subsection{Black-line parameterization}
\label{sub:Track}
To model the required power of a cyclist who follows the black line, in a constant aerodynamic position, as illustrated in Figure~\ref{fig:FigBlackLine}, we define this line by three parameters.
\begin{figure}[h]
\centering
\includegraphics[scale=0.35]{FigBlackLine}
\caption{\small A constant aerodynamic position along the black line}
\label{fig:FigBlackLine}
\end{figure}
\begin{itemize}
\item[--] $L_s$\,: the half-length of the straight
\item[--] $L_t$\,: the length of the transition curve between the straight and the circular arc
\item[--] $L_a$\,: the half-length of the circular arc
\end{itemize}
The length of the track is $S=4(L_s+L_t+L_a)$\,.
In Figure~\ref{fig:FigTrack}, we show a quarter of a black line for $L_s=19$\,m\,, $L_t=13.5$\,m and $L_a=30$\,m\,, which results in $S=250$\,m\,.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigTrack.pdf}
\caption{\small A quarter of the black line for a $250$\,-metre track}
\label{fig:FigTrack}
\end{figure}
This curve has continuous derivatives up to order two; it is a $C^2$ curve, whose curvature is continuous.
To formulate, in Cartesian coordinates, the curve shown in Figure~\ref{fig:FigTrack}, we consider the following.
\begin{itemize}
\item[--] The straight,
\begin{equation*}
y_1=0\,,\qquad0\leqslant x\leqslant a\,,
\end{equation*}
shown in gray, where $a:=L_s$\,.
\item[--] The transition, shown in black\,---\,following a standard design practice\,---\,we take to be an Euler spiral, which can be parameterized by Fresnel integrals,
\begin{equation*}
x_2(\varsigma)=a+\sqrt{\frac{2}{A}}\int\limits_0^{\varsigma\sqrt{\!\frac{A}{2}}}\!\!\!\!\cos\!\left(x^2\right)\,{\rm d}x
\qquad{\rm and}\qquad
y_2(\varsigma)=\sqrt{\frac{2}{A}}\int\limits_0^{\varsigma\sqrt{\!\frac{A}{2}}}\!\!\!\!\sin\!\left(x^2\right)\,{\rm d}x\,,
\end{equation*}
with $A>0$ to be determined; herein, $\varsigma$ is the curve parameter.
Since the arclength differential,~${\rm d}s$\,, is such that
\begin{equation*}
{\rm d}s=\sqrt{x_2'(\varsigma)^2+y_2'(\varsigma)^2}\,{\rm d}\varsigma
=\sqrt{\cos^2\left(\dfrac{A\varsigma^2}{2}\right)+\sin^2\left(\dfrac{A\varsigma^2}{2}\right)}\,{\rm d}\varsigma
={\rm d}\varsigma\,,
\end{equation*}
we write the transition curve as
\begin{equation*}
(x_2(s),y_2(s)), \quad 0\leqslant s\leqslant b:=L_t\,.
\end{equation*}
\item[--] The circular arc, shown in gray, whose centre is $(c_1,c_2)$ and whose radius is $R$\,, with $c_1$\,, $c_2$ and $R$ to be determined.
Since its arclength is specified to be $c:=L_a,$ we may parameterize the quarter circle by
\begin{equation}
\label{eq:x3}
x_3(\theta)=c_1+R\cos(\theta)
\end{equation}
and
\begin{equation}
\label{eq:y3}
y_3(\theta)=c_2+R\sin(\theta)\,,
\end{equation}
where $-\theta_0\leqslant\theta\leqslant 0$\,, for $\theta_0:=c/R$\,.
The centre of the circle is shown as a black dot in Figure~\ref{fig:FigTrack}.
\end{itemize}
We wish to connect these three curve segments so that the resulting global curve is continuous along with its first and second derivatives.
This ensures that the curvature of the track is also continuous.
To do so, let us consider the connection between the straight and the Euler spiral.
Herein, $x_2(0)=a$ and $y_2(0)=0$\,, so the spiral connects continuously to the end of the straight at $(a,0)$\,.
Also, at $(a,0)$\,,
\begin{equation*}
\frac{{\rm d}y}{{\rm d}x}=\frac{y_2'(0)}{x_2'(0)}=\frac{0}{1}=0\,,
\end{equation*}
which matches the derivative of the straight line.
Furthermore, the second derivatives match, since
\begin{equation*}
\frac{{\rm d}^2y}{{\rm d}x^2}=\frac{y''_2(0)x_2'(0)-y'_2(0)x_2''(0)}{(x_2'(0))^2}=0\,,
\end{equation*}
which follows, for any $A>0$\,, from
\begin{equation}
\label{eq:FirstDer}
x_2'(\varsigma)=\cos^2\left(\dfrac{A\,\varsigma^2}{2}\right)\,, \quad y_2'(\varsigma)=\sin^2\left(\dfrac{A\,\varsigma^2}{2}\right)
\end{equation}
and
\begin{equation*}
x_2''(\varsigma)=-A\,\varsigma\sin\left(\dfrac{A\,\varsigma^2}{2}\right)\,, \quad y_2''(\varsigma)=A\,\varsigma\cos\left(\dfrac{A\,\varsigma^2}{2}\right)\,.
\end{equation*}
Let us consider the connection between the Euler spiral and the arc of the circle.
In order that these connect continuously,
\begin{equation*}
\big(x_2(b),y_2(b)\big)=\big(x_3(-\theta_0),y_3(-\theta_0)\big)\,,
\end{equation*}
we require
\begin{equation}
\label{eq:Cont1}
x_2(b)=c_1+R\cos(\theta_0)\,\,\iff\,\,c_1=x_2(b)-R\cos\!\left(\dfrac{c}{R}\right)
\end{equation}
and
\begin{equation}
\label{eq:Cont2}
y_2(b)=c_2-R\sin(\theta_0)\,\,\iff\,\, c_2=y_2(b)+R\sin\!\left(\dfrac{c}{R}\right)\,.
\end{equation}
For the tangents to connect continuously, we invoke expression~(\ref{eq:FirstDer}) to write
\begin{equation*}
(x_2'(b),y_2'(b))=\left(\cos\left(\dfrac{A\,b^2}{2}\right),\,\sin\left(\dfrac{A\,b^2}{2}\right)\right)\,.
\end{equation*}
Following expressions~(\ref{eq:x3}) and (\ref{eq:y3}), we obtain
\begin{equation*}
\big(x_3'(-\theta_0),y_3'(-\theta_0)\big)=\big(R\sin(\theta_0),R\cos(\theta_0)\big)\,,
\end{equation*}
respectively.
Matching the unit tangent vectors results in
\begin{equation}
\label{eq:tangents}
\cos\left(\dfrac{A\,b^2}{2}\right)=\sin\!\left(\dfrac{c}{R}\right)\,,\quad \sin\left(\dfrac{A\,b^2}{2}\right)=\cos\!\left(\dfrac{c}{R}\right)\,.
\end{equation}
For the second derivative, it is equivalent\,---\,and easier\,---\,to match the curvature.
For the Euler spiral,
\begin{equation*}
\kappa_2(s)=\frac{x_2'(s)y_2''(s)-y_2'(s)x_2''(s)}
{\Big(\big(x_2'(s)\big)^2+\big(y_2'(s)\big)^2\Big)^{\frac{3}{2}}}
=A\,s\cos^2\left(\dfrac{A\,s^2}{2}\right)+A\,s\sin^2\left(\dfrac{A\,s^2}{2}\right)
=A\,s\,,
\end{equation*}
which is indeed the defining characteristic of an Euler spiral: the curvature grows linearly in the arclength.
Hence, to match the curvature of the circle at the connection, we require
\begin{equation*}
A\,b=\frac{1}{R} \,\,\iff\,\,A=\frac{1}{b\,R}\,.
\end{equation*}
Substituting this value of $A$ in equations~(\ref{eq:tangents}), we obtain
\begin{align*}
\cos\!\left(\dfrac{b}{2R}\right)&=\sin\!\left(\dfrac{c}{R}\right)\,,\quad \sin\!\left(\dfrac{b}{2R}\right)=\cos\!\left(\dfrac{c}{R}\right)\\
&\iff\dfrac{b}{2R}=\dfrac{\pi}{2}-\dfrac{c}{R}\\
&\iff R=\frac{b+2c}{\pi}.
\end{align*}
It follows that
\begin{equation*}
A=\frac{1}{b\,R}=\frac{\pi}{b\,(b+2c)}\,;
\end{equation*}
hence, the continuity condition stated in expressions~(\ref{eq:Cont1}) and (\ref{eq:Cont2}) determines the centre of the circle,~$(c_1,c_2)$\,.
For the case shown in Figure~\ref{fig:FigTrack}, the numerical values are~$A=3.1661\times10^{-3}$\,m${}^{-2}$, $R=23.3958$\,m\,, $c_1=25.7313$\,m and $c_2=23.7194$\,m\,.
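These values follow directly from the formulas above; for instance, the short Python sketch below reproduces them using the Fresnel integrals available in SciPy (whose convention, $\int_0^z\sin(\pi t^2/2)\,{\rm d}t$ and $\int_0^z\cos(\pi t^2/2)\,{\rm d}t$\,, requires the rescaling shown in the comments).
\begin{verbatim}
import numpy as np
from scipy.special import fresnel

a, b, c = 19.0, 13.5, 30.0           # L_s, L_t, L_a for the 250-metre track

R = (b + 2.0 * c) / np.pi            # radius of the circular arc
A = 1.0 / (b * R)                    # curvature growth rate of the Euler spiral

def spiral(s):
    # int_0^u cos(x^2) dx = sqrt(pi/2) * C(u * sqrt(2/pi)), and similarly for
    # sin, so the integrals above rescale to scipy's convention as follows:
    S, C = fresnel(s * np.sqrt(A / np.pi))
    return a + np.sqrt(np.pi / A) * C, np.sqrt(np.pi / A) * S

x2b, y2b = spiral(b)                 # end of the transition curve
c1 = x2b - R * np.cos(c / R)         # centre of the circular arc
c2 = y2b + R * np.sin(c / R)
print(A, R, c1, c2)                  # 3.1661e-3, 23.3958, 25.7313, 23.7194
\end{verbatim}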
The complete track\,---\,with its centre at the origin\,,~$(0,0)$\,---\,is shown in Figure~\ref{fig:FigComplete}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigCompleteTrack.pdf}
\caption{\small Black line of $250$\,-metre track}
\label{fig:FigComplete}
\end{figure}
The corresponding curvature is shown in Figure~\ref{fig:FigCurvature}.
Note that the curvature transitions linearly from the constant value of straight,~$\kappa=0$\,, to the constant value of the circular arc,~$\kappa=1/R$\,.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigTrackCurvature.pdf}
\caption{\small Curvature of the black line,~$\kappa$\,, as a function of distance,~$s$\,, with a linear transition between the zero curvature of the straight and the $1/R$ curvature of the circular arc}
\label{fig:FigCurvature}
\end{figure}
\subsection{Track-inclination angle}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigInclinationAngle.pdf}
\caption{\small Track inclination,~$\theta$\,, as a function of the black-line distance,~$s$}
\label{fig:FigAngle}
\end{figure}
There are many possibilities to model the track inclination angle.
We choose a trigonometric formula in terms of arclength, which is a good analogue of an actual $250$\,-metre velodrome.
The minimum inclination of $13^\circ$ corresponds to the midpoint of the straight, and the maximum of $44^\circ$ to the apex of the circular arc.
For a track of length $S$\,,
\begin{equation}
\label{eq:theta}
\theta(s)=28.5-15.5\cos\!\left(\frac{4\pi}{S}s\right)\,;
\end{equation}
$s=0$ refers to the midpoint of the lower straight, in Figure~\ref{fig:FigComplete}, and the track is oriented in the counterclockwise direction.
Figure \ref{fig:FigAngle} shows this inclination for $S=250$\,m\,.
\begin{remark}
It is not uncommon for tracks to be slightly asymmetric with respect to the inclination angle.
In such a case, they exhibit a rotational symmetry by $\pi$\,, but not a reflection symmetry about the vertical or horizontal axis.
This asymmetry can be modeled by including $s_0$ in the argument of the cosine in expression~(\ref{eq:theta}).
\begin{equation*}
\theta(s)=28.5-15.5\cos\!\left(\frac{4\pi}{S}(s-s_0)\right)\,;
\end{equation*}
$s_0\approx 5$\,m provides a good model for several existing velodromes.
Referring to discussions about the London Velodrome of the 2012 Olympics, \citet{Solarczyk} writes that
\begin{quote}
the slope of the track going into and out of the turns is not the same.
This is simply because you always cycle the same way around the track, and you go shallower into the turn and steeper out of it.
\end{quote}
\end{remark}
\section{Instantaneous power}
\label{sec:InstPower}
A mathematical model to account for the power required to propel a bicycle is based on \citep[e.g.,][]{DSSbici1}
\begin{equation}
\label{eq:BikePower}
P=F\,V\,,
\end{equation}
where $F$ stands for the magnitude of forces opposing the motion and $V$ for speed.
Herein, we model the rider as undergoing instantaneous circular motion, in rotational equilibrium about the line of contact of the tires with the ground.
Following \citet[Section~2]{SSSbici3} and in view of Figure~\ref{fig:FigCentFric}, along the black line, in windless conditions,
\begin{subequations}
\label{eq:power}
\begin{align}
\nonumber P&=\\
&\dfrac{1}{1-\lambda}\,\,\Bigg\{\label{eq:modelO}\\
&\Bigg({\rm C_{rr}}\underbrace{\overbrace{\,m\,g\,}^{F_g}(\sin\theta\tan\vartheta+\cos\theta)}_N\cos\theta
+{\rm C_{sr}}\Bigg|\underbrace{\overbrace{\,m\,g\,}^{F_g}\frac{\sin(\theta-\vartheta)}{\cos\vartheta}}_{F_f}\Bigg|\sin\theta\Bigg)\,v\label{eq:modelB}\\
&+\,\,\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,V^3\Bigg\}\label{eq:modelC}\,,
\end{align}
\end{subequations}
where $m$ is the mass of the cyclist and the bicycle, $g$ is the acceleration due to gravity, $\theta$ is the track-inclination angle, $\vartheta$ is the bicycle-cyclist lean angle, $\rm C_{rr}$ is the rolling-resistance coefficient, $\rm C_{sr}$ is the coefficient of the lateral friction, $\rm C_{d}A$ is the air-resistance coefficient, $\rho$ is the air density and $\lambda$ is the drivetrain-resistance coefficient.
Herein, $v$ is the speed at which the contact point of the rotating wheels moves along the track \citep[Appendix~B]{DSSbici1}, which we assume to coincide with the black-line speed.
$V$ is the centre-of-mass speed.
Since lateral friction is a dissipative force, it does negative work, and the work done against it\,---\,as well as the power\,---\,are positive.
For this reason, in expression~(\ref{eq:modelB}), we consider the magnitude,~$\big|{\,\,}\big|$\,.
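A minimal Python sketch of expression~(\ref{eq:power}) is given below; it is an illustration only, with default parameter values taken from Section~\ref{sub:ModPar}, below.
\begin{verbatim}
# Instantaneous power along the black line, expression (eq:power).
# theta and lean (vartheta) are in radians; v and V are in m/s.
import math

def power(v, V, theta, lean,
          m=84.0, g=9.81, rho=1.225,
          CdA=0.2, Crr=0.002, Csr=0.003, lam=0.02):
    N  = m * g * (math.sin(theta) * math.tan(lean) + math.cos(theta))  # N
    Ff = m * g * math.sin(theta - lean) / math.cos(lean)               # F_f
    rolling  = Crr * N * math.cos(theta) * v
    friction = Csr * abs(Ff) * math.sin(theta) * v
    air      = 0.5 * CdA * rho * V ** 3
    return (rolling + friction + air) / (1.0 - lam)   # watts
\end{verbatim}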
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{FigNonIner.pdf}
\caption{\small Force diagram}
\label{fig:FigCentFric}
\end{figure}
To accommodate the assumption of {\it instantaneous} power in the context of measurements within a fixed-wheel drivetrain, as discussed by \citet[Appendix~B.2]{DSSbici2},%
\footnote{For a fixed-wheel drivetrain, the momentum of a bicycle-cyclist system results in rotation of pedals even without any force applied by a cyclist.
This leads to inaccuracy of measurements referring to the instantaneous power generated by a cyclist, which increases with the variability of effort.}
we assume the steadiness of effort, which\,---\,following the initial acceleration\,---\,is consistent with a steady pace of an individual pursuit, as presented in Section~\ref{sec:Adequacy}, below.
Formally, this assumption corresponds to setting the acceleration,~$a$\,, to zero in \citet[expression~(1)]{SSSbici3}, which entails expression~(\ref{eq:power}).
For the constant-cadence case\,---\,in accordance with expression~(\ref{eq:vV}), below\,---\,change of speed refers only to the change of the centre-of-mass speed; the black-line speed is constant.
For the constant-power case, neither speed is constant, but both are nearly so.
Let us return to expression~(\ref{eq:power}).
Therein, $\theta$ is given by expression~(\ref{eq:theta}).
The lean angle is \citep[Appendix~A]{SSSbici3}
\begin{equation}
\label{eq:LeanAngle}
\vartheta=\arctan\dfrac{V^2}{g\,r_{\rm\scriptscriptstyle CoM}}\,,
\end{equation}
where $r_{\rm\scriptscriptstyle CoM}$ is the centre-of-mass radius, and\,---\,along the curves, at any instant\,---\,the centre-of-mass speed is
\begin{equation}
\label{eq:vV}
V=v\,\dfrac{\overbrace{(R-h\sin\vartheta)}^{\displaystyle r_{\rm\scriptscriptstyle CoM}}}{R}
=v\,\left(1-\dfrac{h\,\sin\vartheta}{R}\right)\,,
\end{equation}
where $R$ is the radius discussed in Section~\ref{sub:Track} and $h$ is the centre-of-mass height of the bicycle-cyclist system at $\vartheta=0$\,.
Along the straights, the black-line speed is equal to the centre-of-mass speed, $v=V$\,.
As expected, $V=v$ if $h=0$\,, $\vartheta=0$ or $R=\infty$\,.
As illustrated in Figure~\ref{fig:FigCentFric}, expressions~(\ref{eq:LeanAngle}) and (\ref{eq:vV}) assume instantaneous circular motion of the centre of mass to occur in a horizontal plane.
Therefore, using these expressions implies neglecting the vertical motion of the centre of mass.
Accounting for the vertical motion of the centre of mass would mean allowing for a nonhorizontal centripetal force, and including the work done in raising the centre of mass.
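Along the circular arc, where the radius of the black line is the constant $R$\,, expressions~(\ref{eq:LeanAngle}) and (\ref{eq:vV}) can be solved jointly by a simple fixed-point iteration, as sketched below for a given black-line speed; the printed values are approximate and serve only as an illustration.
\begin{verbatim}
# Joint solution of expressions (eq:LeanAngle) and (eq:vV) along the arc.
import math

def lean_and_com_speed(v, R=23.3958, h=1.2, g=9.81, tol=1e-12, it=200):
    lean = 0.0
    for _ in range(it):
        r_com = R - h * math.sin(lean)            # centre-of-mass radius
        V = v * r_com / R                         # expression (eq:vV)
        new = math.atan(V ** 2 / (g * r_com))     # expression (eq:LeanAngle)
        if abs(new - lean) < tol:
            break
        lean = new
    return lean, V

lean, V = lean_and_com_speed(16.7)
print(math.degrees(lean), V)   # approx. 49 deg and 16.0 m/s at the arc apex
\end{verbatim}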
\section{Numerical examples}
\label{sec:NumEx}
\subsection{Model-parameter values}
\label{sub:ModPar}
For expressions~(\ref{eq:power}), (\ref{eq:LeanAngle}) and (\ref{eq:vV}), we consider a velodrome discussed in Section~\ref{sec:Formulation}, and let ${R=23.3958}$\,m\,.
For the bicycle-cyclist system, we assume $h=1.2$\,m\,, $m=84$\,kg\,, ${\rm C_{d}A}=0.2$\,m${}^2$\,, ${\rm C_{rr}}=0.002$\,, ${\rm C_{sr}}=0.003$ and $\lambda=0.02$\,.
For the external conditions, $g=9.81$\,m/s${}^2$ and $\rho=1.225$\,kg/m${}^3$\,.
\subsection{Constant cadence}
\label{sub:ConstCad}
Let the black-line speed be constant,~$v=16.7$\,m/s\,, which is tantamount to the constancy of cadence.
The lean angle and the centre-of-mass speed, as functions of distance\,---\,obtained by numerically and simultaneously solving equations~(\ref{eq:LeanAngle}) and (\ref{eq:vV}), at each point of a discretized model of the track\,---\,are shown in Figures~\ref{fig:FigLeanAngle} and \ref{fig:FigCoMSpeed}, respectively.
The average centre-of-mass speed, per lap, is~$\overline V=16.3329$\,m/s\,.
Changes in $V$\,, shown in Figure~\ref{fig:FigCoMSpeed}, result from changes in the lean angle.
Along the straights, $\vartheta=0\implies V=v$\,.
Along the curves, since $\vartheta\neq0$\,, the centre-of-mass travels along a shorter path; hence, $V<v$\,.
Thus, assuming a constant black-line speed implies a variable centre-of-mass speed and, hence, an acceleration and deceleration, even though ${\rm d}V/{\rm d}t$\,, where $t$ stands for time, is not included explicitly in expression~(\ref{eq:power}).
Examining Figure~\ref{fig:FigCoMSpeed}, we conclude that ${\rm d}V/{\rm d}t\neq0$ along the transition curves only.
The power\,---\,obtained by evaluating expression~(\ref{eq:power}), at each point along the track\,---\,is shown in Figure~\ref{fig:FigPower}.
The average power, per lap, is $\overline P=580.5941$\,W\,.
Since the black-line speed is constant, this is both the arclength average and the temporal average.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigLeanAngle.pdf}
\caption{\small Lean angle,~$\vartheta$\,, as a function of the black-line distance,~$s$\,, for constant cadence}
\label{fig:FigLeanAngle}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigCoMSpeed.pdf}
\caption{\small Centre-of-mass speed,~$V$\,, as a function of the black-line distance,~$s$\,, for constant cadence}
\label{fig:FigCoMSpeed}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigPower.pdf}
\caption{\small Power,~$P$\,, as a function of the black-line distance,~$s$\,, for constant cadence}
\label{fig:FigPower}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigAngleDiff.pdf}
\caption{\small $\theta-\vartheta$\,, as a function of the black-line distance,~$s$\,, for constant cadence}
\label{fig:FigAngleDiff}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigPowerSummands.pdf}
\caption{\small Power to overcome air resistance, rolling resistance and lateral friction}
\label{fig:FigPowerSummands}
\end{figure}
Examining Figure~\ref{fig:FigPower}, we see the decrease of power along the curve to maintain the same black-line speed.
This is due to both the decrease of the centre-of-mass speed, which results in a smaller value of term~(\ref{eq:modelC}), and the decrease of a difference between the track-inclination angle and the lean angle, shown in Figure~\ref{fig:FigAngleDiff}, which results in a smaller value of the second summand of term~(\ref{eq:modelB}).
Examining Figure~\ref{fig:FigPowerSummands}, where\,---\,in accordance with expression~(\ref{eq:power})\,---\,we distinguish among the power used to overcome the air resistance, the rolling resistance and the lateral friction, we can quantify their effects.
The first has the most effect; the last has the least effect, and is zero at points for which $\theta=\vartheta$\,, which corresponds to the zero crossings in Figure~\ref{fig:FigAngleDiff}.
Let us comment on potential simplifications of the model.
If we assume a straight flat course\,---\,which is tantamount to neglecting the lean and inclination angles\,---\,we obtain $\overline P\approx 610~\rm W$\,.
If we consider an oval track but ignore the transitions and assume that the straights are flat and the semicircular segments, whose radius is $23$\,m\,, have a constant inclination of $43^\circ$, we obtain \citep[expression~(13)]{SSSbici3} $\overline P\approx 563~\rm W$\,.
In both cases, there is a significant discrepancy with the power obtained from the model discussed herein,~$\overline P=580.5941~\rm W$\,.
To conclude this section, let us calculate the work per lap corresponding to the model discussed herein.
The work performed during a time interval, $t_2-t_1$\,, is
\begin{equation*}
W=\int\limits_{t_1}^{t_2}\!P\,{\rm d}t
=\dfrac{1}{v}\int\limits_{s_1}^{s_2}\!P\!\underbrace{\,v\,{\rm d}t}_{{\rm d}s\,}\,,
\end{equation*}
where the black-line speed,~$v$\,, is constant and, hence, ${\rm d}s$ is an arclength distance along the black line.
Considering the average power per lap, we write
\begin{equation}
\label{eq:WorkConstCad}
W=\underbrace{\,\dfrac{S}{v}\,}_{t_\circlearrowleft}\,\underbrace{\dfrac{\int\limits_0^S\!P\,{\rm d}s}{S}}_{\overline P}=\overline P\,t_\circlearrowleft\,.
\end{equation}
Given $\overline P=580.5941$\,W and $t_\circlearrowleft=14.9701$\,s\,, we obtain $W=8691.5284$\,J\,.
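These values can be verified with a short computation; small differences in the last digits are due to rounding of the quoted averages.
\begin{verbatim}
# Laptime and work per lap for the constant-cadence case.
S, v  = 250.0, 16.7    # lap length (m) and black-line speed (m/s)
P_bar = 580.5941       # average power (W)

t_lap = S / v          # approximately 14.9701 s
W     = P_bar * t_lap  # approximately 8691.5 J
print(t_lap, W)
\end{verbatim}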
\subsection{Constant power}
\label{sub:ConstPower}
Let us solve numerically the system of nonlinear equations given by expressions~(\ref{eq:power}), (\ref{eq:LeanAngle}) and (\ref{eq:vV}), to find the lean angle as well as both speeds, $v$ and $V$\,, at each point of a discretized model of the track\,, under the assumption of constant power.
As in Section~\ref{sub:ConstCad}, we let $R=23.3958$\,m\,, $h=1.2$\,m\,, $m=84$\,kg\,, ${\rm C_{d}A}=0.2$\,m${}^2$\,, ${\rm C_{rr}}=0.002$\,, ${\rm C_{sr}}=0.003$\,, $\lambda=0.02$\,, $g=9.81$\,m/s${}^2$ and $\rho=1.225$\,kg/m${}^3$\,.
However, in contrast to Section~\ref{sub:ConstCad}, we allow the black-line speed to vary but set the power to be the average obtained in that section, $P=580.5941$\,W\,.
Stating expression~(\ref{eq:vV}), as
\begin{equation}
\label{eq:Vv}
v=V\dfrac{R}{R-h\sin\vartheta}\,,
\end{equation}
we write expression~(\ref{eq:power}) as
\begin{align}
\label{eq:PConst}
P&=\\
\nonumber&\dfrac{V}{1-\lambda}\,\,\Bigg\{\\
\nonumber&\Bigg({\rm C_{rr}}\,m\,g\,(\sin\theta\tan\vartheta+\cos\theta)\cos\theta
+{\rm C_{sr}}\Bigg|\,m\,g\,\frac{\sin(\theta-\vartheta)}{\cos\vartheta}\Bigg|\sin\theta\Bigg)\,\dfrac{R}{R-h\sin\vartheta}\\
\nonumber&+\,\,\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,V^2\Bigg\}\,,
\end{align}
and expression~(\ref{eq:LeanAngle}) as
\begin{equation}
\label{eq:Vvar}
\vartheta=\arctan\dfrac{V^2}{g\,(R-h\sin\vartheta)}\,,
\end{equation}
which\,---\,given $g$\,, $R$ and $h$\,---\,can be solved for $V$ as a function of~$\vartheta$\,.
Inserting that solution in expression~(\ref{eq:PConst}), we obtain an equation whose only unknown is~$\vartheta$\,.
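A sketch of this procedure at a single point of the track is given below; \texttt{scipy.optimize.brentq} is used as the root finder, the parameter values are those of Section~\ref{sub:ModPar}, and the bracketing interval is an assumption chosen to contain the root.
\begin{verbatim}
# Constant-power case at one point of the track: solve expression (eq:PConst)
# for the lean angle, with V eliminated through expression (eq:Vvar).
import math
from scipy.optimize import brentq

R, h, m, g, rho = 23.3958, 1.2, 84.0, 9.81, 1.225
CdA, Crr, Csr, lam = 0.2, 0.002, 0.003, 0.02

def com_speed(lean):                  # expression (eq:Vvar) solved for V
    return math.sqrt(g * (R - h * math.sin(lean)) * math.tan(lean))

def power_at(lean, theta):            # expression (eq:PConst)
    V  = com_speed(lean)
    N  = m * g * (math.sin(theta) * math.tan(lean) + math.cos(theta))
    Ff = m * g * math.sin(theta - lean) / math.cos(lean)
    rr = Crr * N * math.cos(theta) + Csr * abs(Ff) * math.sin(theta)
    return V / (1 - lam) * (rr * R / (R - h * math.sin(lean))
                            + 0.5 * CdA * rho * V ** 2)

P_target = 580.5941                    # watts
theta = math.radians(44.0)             # inclination at the apex of the arc
lean = brentq(lambda x: power_at(x, theta) - P_target, 0.1, 1.4)
print(math.degrees(lean), com_speed(lean))
\end{verbatim}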
The difference of the lean angle\,---\,between the case of a constant cadence and a constant power\,---\,is so small that there is no need to plot it; Figure~\ref{fig:FigLeanAngle} illustrates it accurately.
The same is true for the difference between the track-inclination angle and the lean angle, illustrated in Figure~\ref{fig:FigAngleDiff}, as well as for the dominant effect of the air resistance, illustrated in Figure~\ref{fig:FigPowerSummands}.
The resulting values of $V$ are shown in Figure~\ref{fig:FigCoMSpeed2}.
As expected, in view of the dominant effect of air resistance, a constancy of $P$ entails only small variations in~$V$\,.
In comparison to the case discussed in Section~\ref{sub:ConstCad}, the case in question entails lesser changes of the centre-of-mass speed\,---\,note the difference of vertical scale between Figures~\ref{fig:FigCoMSpeed} and \ref{fig:FigCoMSpeed2}\,---\,but the changes of speed are not limited to the transition curves.
Even though such changes are not included explicitly in expression~(\ref{eq:power}), a portion of the given power is due to the $m\,V\,{\rm d}V/{\rm d}t$ term, which is associated with acceleration and deceleration.
The amount of this portion can be estimated {\it a posteriori}.
Since
\begin{equation}
\label{eq:dK}
m\,V\,\dfrac{{\rm d}V}{{\rm d}t}=\dfrac{{\rm d}}{{\rm d}t}\left(\dfrac{1}{2}\,m\,V^2\right)\,,
\end{equation}
the time integral of the power used for acceleration of the centre of mass is the change of its kinetic energy.
Therefore, to include the effect of accelerations, per lap, we need to add the increases in kinetic energy.
This is an estimate of the error committed by neglecting accelerations in expression~(\ref{eq:power}), which could be quantified, for the constant-cadence and constant-power cases, following \citet[Appendix~A]{BSSSbici4}.
Also, in the same appendix, \citeauthor{BSSSbici4} discuss errors due to neglecting raising the centre of mass upon exiting the curve, which is an increase of potential energy.
As mentioned in Section~\ref{sec:InstPower}, herein, expressions~(\ref{eq:LeanAngle}) and (\ref{eq:vV}) assume instantaneous circular motion of the centre of mass to occur in a horizontal plane.
For inverse problems, the power used to increase the kinetic and potential energy is implicitly included on the left-hand side of expression~(\ref{eq:power}).
Hence, the power required to increase mechanical energy is incorporated in the power to account for the drivetrain, tire and air frictions stated in expressions~(\ref{eq:modelO}), (\ref{eq:modelB}) and (\ref{eq:modelC}), respectively.
Since the model does not explicitly incorporate changes in mechanical energy, this necessarily leads to an overestimate of the values of $\lambda$\,, $\rm C_{rr}$\,, $\rm C_{sr}$ and $\rm C_{d}A$ to achieve the agreement between the model and measurements.
A model that explicitly takes into account the acceleration and vertical motion of the centre of mass is to be considered in future work.
The values of $v$\,, in accordance with expression~(\ref{eq:vV}), are shown in Figure~\ref{fig:FigBLspeed}, where\,---\,as expected for a constant power\,---\,leaning into the turn entails an increase of the black-line speed; note the difference of vertical scale between Figures~\ref{fig:FigCoMSpeed2} and \ref{fig:FigBLspeed}.
The averages are $\overline V=16.3316$\,m/s and $\overline v=16.7071$\,m/s\,.
These averages are similar to those of the constant black-line-speed case.
Hence, maintaining a constant cadence or a constant power results in nearly the same laptime, namely, $14.9701$\,s and $14.9670$\,s\,, respectively.
To conclude this section, let us calculate the corresponding work per lap.
The work performed during a time interval, $t_2-t_1$\,, is
\begin{equation}
\label{eq:WorkConstPow}
W=\int\limits_{t_1}^{t_2}\!P\,{\rm d}t=P\!\int\limits_{t_1}^{t_2}\!{\rm d}t=P\,\underbrace{(t_2-t_1)}_{t_\circlearrowleft}=P\,t_\circlearrowleft\,,
\end{equation}
where, for the second equality sign, we use the constancy of~$P$\,; also, we let the time interval be a laptime.
Thus, given $\overline P=580.5941$\,W and $t_\circlearrowleft=14.9670$\,s\,, we obtain $W=8689.7680$\,J\,.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigCoMSpeed2.pdf}
\caption{\small Centre-of-mass speed,~$V$\,, as a function of the black-line distance,~$s$\,, for constant power}
\label{fig:FigCoMSpeed2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigBLSpeed.pdf}
\caption{\small Black-line speed,~$v$\,, as a function of the black-line distance,~$s$\,, for constant power}
\label{fig:FigBLspeed}
\end{figure}
The empirical adequacy of the assumption of a constant power can be corroborated\,---\,apart from the measured power itself\,---\,by comparing experimental data to measurable quantities entailed by theoretical formulations.
The black-line speed,~$v$\,, shown in Figure~\ref{fig:FigBLspeed}, which we take to be tantamount to the wheel speed, appears to be the most reliable quantity.
Notably, an increase of $v$ by a few percent along the turns is commonly observed in measurements.
Other quantities\,---\,not measurable directly, such as the centre-of-mass speed and power expended to increase potential energy\,---\,are related to $v$ by equations~(\ref{eq:PConst}) and (\ref{eq:Vvar}).
\section{Empirical adequacy}
\label{sec:Adequacy}
To gain an insight into empirical adequacy of the model, let us examine Section~\ref{sec:InstPower} in the context of measurements~(Mehdi Kordi, {\it pers.~comm.}, 2020).
To do so, we use two measured quantities: cadence and force applied to the pedals, both of which are measured by sensors attached to the bicycle.
They allow us to calculate power, which is the product of the circumferential pedal speed\,---\,obtained from cadence, given a crank length\,---\,and the force applied to pedals.
\begin{figure}[h]
\centering
\includegraphics[scale=0.35]{FigIPPower}
\caption{\small Measured power,~$P$\,, as a function of the pursuit time,~$t$}
\label{fig:FigIPPower}
\end{figure}
The measurements of power, shown in Figure~\ref{fig:FigIPPower}, oscillate about a nearly constant value, except for the initial part, which corresponds to acceleration, and the final part, where the cyclist begins to decelerate.
These oscillations are due to the repetition of straights and curves along a lap.
In particular, Figure~\ref{fig:FigIPCadence}, below, exhibits a regularity corresponding to thirty-two curves along which the cadence, and\,---\,equivalently\,---\,the wheel speed, reaches a maximum.
There are also fluctuations due to measurement errors.
A comparison of Figures~\ref{fig:FigIPPower} and \ref{fig:FigIPCadence} illustrates that power is necessarily more error sensitive than cadence, since the cadence itself is used in calculations to obtain power.
This extra sensitivity is due to intrinsic difficulties of the measurement of applied force and, herein, to the fact that values are stated at one-second intervals only, which makes them correspond to different points along the pedal rotation \citep[see also][Appendix~A]{DSSbici1}.
To diminish this effect, it is common to use a moving average, with a period of several seconds, to obtain the values of power.
To use the model to relate power and cadence, let us consider a $4000$\,-metre individual pursuit.
The model parameters are $h=1.1~\rm{m}$\,, $m=85.6~\rm{kg}$\,, ${\rm C_{d}A}=0.17~\rm{m^2}$\,, ${\rm C_{rr}}=0.0017$\,, ${\rm C_{sr}}=0.0025$\,, $\lambda=0.02$\,, $g=9.81~\rm{m/s^2}$\,, $\rho=1.17~\rm{kg/m^3}$\,.
If we use, as input, $P=488.81~\rm{W}$\,---\,which is the average of values measured over the entire pursuit\,---\,the retrodiction provided by the model results in $\overline{v}=16.86~\rm{m/s}$\,.
Let us compare this retrodiction to measurements using the fact that\,---\,for a fixed-wheel drivetrain\,---\,cadence allows us to calculate the bicycle wheel speed.
The average of the measured cadence, shown in Figure~\ref{fig:FigIPCadence}, is $k=106.56~\rm rpm$\,, which\,---\,given the gear of $9.00~\rm m$\,, over the pursuit time of $256~\rm s$\,---\,results in a distance of $4092~\rm m$\,.
Hence, the average wheel speed is~$15.98~\rm{m/s}$\,.%
\footnote{The average wheel speed is distinct from the average black-line speed,~$\overline{v}=15.63~\rm{m/s}$\,.}
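The arithmetic of this retrodiction check can be reproduced directly.
\begin{verbatim}
# Distance and average wheel speed from cadence, gear and pursuit time.
cadence = 106.56   # rpm
gear    = 9.00     # metres travelled per pedal revolution
t       = 256.0    # pursuit time, in seconds

distance    = cadence / 60.0 * gear * t   # approximately 4092 m
wheel_speed = distance / t                # approximately 15.98 m/s
print(distance, wheel_speed)
\end{verbatim}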
\begin{figure}[h]
\centering
\includegraphics[scale=0.35]{FigIPCadence.pdf}
\caption{\small Measured cadence,~$k$\,, in revolutions per minute, [rpm], as a function of the pursuit time,~$t$}
\label{fig:FigIPCadence}
\end{figure}
The average values of the retrodicted and measured speeds appear to be sufficiently close to each other to support the empirical adequacy of our model, for the case in which its assumptions, illustrated in Figure~\ref{fig:FigBlackLine}\,---\,namely, a constant aerodynamic position and the trajectory along the black line\,---\,are, broadly speaking, satisfied.
Specifically, they are not satisfied on the first lap, during the acceleration.
Nor can we expect them to be fully satisfied along the remainder of the pursuit, as illustrated by the empirical distance of $4092\,{\rm m}>4000\,{\rm m}$\,, which indicates the deviation from the black-line trajectory.
\begin{figure}[h]
\centering
\includegraphics[scale=0.35]{FigIPShift.pdf}
\caption{\small Scaled values of power (black) and cadence (grey) as functions of the pursuit time,~$t$}
\label{fig:FigIPShift}
\end{figure}
Furthermore, Figure~\ref{fig:FigIPShift}, which is a superposition of scaled values from Figures~\ref{fig:FigIPPower} and \ref{fig:FigIPCadence}, shows the oscillations of power and cadence to be half a cycle out of phase.
However, in Figure~\ref{fig:FigModelShift} these quantities are in phase.
Therein, as input, we use simulated values of power along a lap\,---\,in a manner consistent with the measured power\,---\,as opposed to a single value of an average.
Thus, according to the model, the power and the black-line speed\,---\,whose pattern within the model for a fixed-wheel drivetrain is the same as for cadence\,---\,do not exhibit the phase shift seen in the data.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigModelShift.pdf}
\caption{\small Scaled values of power (black) and black-line speed (grey) as functions of the black-line distance,~$s$}
\label{fig:FigModelShift}
\end{figure}
The phase shift observed in Figure~\ref{fig:FigIPShift} is a result of the difference between the wheel speed, which corresponds to cadence, and the centre-of-mass speed, which has the dominant effect on the required power.
In other words, as indicated by the regularity corresponding to the thirty-two pairs, it is a result of the pattern of straights and curves on the velodrome.
It suggests that the model being considered is adequate for modelling average power, since the values of power and cadence in Figures~\ref{fig:FigIPPower} and \ref{fig:FigIPCadence} oscillate about means that are nearly constant, but less so for describing instantaneous effects.
Also, measurements of the effect caused by the difference between the wheel and the centre-of-mass speeds are affected by a fixed-wheel drivetrain.
The value of power, at each instant, is obtained from the product of measurements of~$f_{\circlearrowright}$\,, which is the force applied to pedals, and~$v_{\circlearrowright}$\,, which is the circumferential speed of the pedals \citep[e.g.,][expression~(1)]{DSSbici1},
\begin{equation}
\label{eq:PowerMeter}
P=f_{\circlearrowright}\,v_{\circlearrowright}\,.
\end{equation}
For a fixed-wheel drivetrain, there is a one-to-one relation between $v_{\circlearrowright}$ and the wheel speed.
Hence\,---\,in contrast to a free-wheel drivetrain, for which $f_{\circlearrowright}\to0\implies v_{\circlearrowright}\to0$\,---\,the momentum of a launched bicycle-cyclist system might contribute to the value of~$v_{\circlearrowright}$\,, which is tantamount to contributing to the value of cadence.
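For illustration, expression~(\ref{eq:PowerMeter}) can be evaluated from the pedal force and cadence as sketched below; the crank length is an assumed, typical value, not one reported herein.
\begin{verbatim}
# Power from pedal force and cadence, expression (eq:PowerMeter).
import math

def pedal_power(force, cadence_rpm, crank_length=0.17):
    # force in newtons; cadence in rpm; crank length in metres (assumed)
    v_circ = 2 * math.pi * crank_length * cadence_rpm / 60.0
    return force * v_circ
\end{verbatim}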
Nevertheless, even with the above caveats, the agreement between the average values of the retrodiction and measurements appears to be satisfactory.
Notably, excluding the first and last laps would increase this agreement.
For instance, if we consider, say, $33\,{\rm s} < t < 233\,{\rm s}$\,---\,with the start and end locations not at the same point, which has a negligible effect over many laps\,---\,the average power and cadence are $455.02\,{\rm W}$ and $108.78\,{\rm rpm}$\,, respectively.
Hence, the retrodicted and measured speeds are $16.45\,\rm{m/s}$ and $16.32\,\rm{m/s}$\,, respectively.
\begin{figure}[h]
\centering
\includegraphics[scale=0.35]{FigKiloPower}
\caption{\small Measured power,~$P$\,, as a function of the `kilo' time,~$t$}
\label{fig:FigKiloPower}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.35]{FigKiloCadence}
\caption{\small Measured cadence,~$k$\,, in revolutions per minute, [rpm], as a function of the `kilo' time,~$t$}
\label{fig:FigKiloCadence}
\end{figure}
To illustrate limitations of the model, Figures~\ref{fig:FigKiloPower} and \ref{fig:FigKiloCadence} represent measurements for which it is not empirically adequate.
As shown in these figures, in this $1000$\,-metre time trial, commonly referred to as a `kilo', the cyclist reaches a steady cadence\,---\,and speed\,---\,with an initial output of power, in a manner similar to the one shown in Figure~\ref{fig:FigIPPower}.
Subsequently, in a manner similar to the one shown in Figure~\ref{fig:FigIPCadence}, the cadence remains almost unchanged, for the remainder of the time trial.
However, in contrast to Figure~\ref{fig:FigIPPower}, the power decreases.
Herein, as discussed by \citet[Appendix~B.2]{DSSbici2}, the cadence, as a function of time, is a consequence of both the power generated by a cyclist\,---\,at each instant\,---\,and the momentum of the moving bicycle-cyclist system, gained during the initial acceleration, which propels the pedals.
In other words, the cadence at a given instant is not solely due to the power at that instant but also depends on power expended earlier.
This situation is in contrast to the case of a steady effort in a $4000$\,-metre individual pursuit.
Given a one-to-one relationship between cadence and power, Figures~\ref{fig:FigIPPower} and \ref{fig:FigIPCadence} would remain similar for a free-wheel drivetrain, provided a cyclist keeps on pedalling in a continuous and steady manner.
Figures~\ref{fig:FigKiloPower} and \ref{fig:FigKiloCadence} would not.
In particular, Figure~\ref{fig:FigKiloCadence} would show a decrease of cadence with time, even though the bicycle speed might not decrease significantly.
\section{Discussion and conclusions}
\label{sec:DisCon}
The mathematical model presented in this article offers the basis for a quantitative study of individual time trials on a velodrome.
The model can be used to predict or retrodict the laptimes, from the measurements of power, or to estimate the power from the recorded times.
Comparisons of such predictions or retrodictions with the measurements of time, speed, cadence and power along the track offer an insight into the empirical adequacy of a model.
Given a satisfactory adequacy and appropriate measurements, the model lends itself to estimating the rolling-resistance, lateral-friction, air-resistance and drivetrain-resistance coefficients.
One can examine the effects of power on speed and {\it vice versa}, as well as of other parameters, say, the effects of air resistance on speed.
One can also estimate the power needed for a given rider to achieve a particular result.
Furthermore, presented results allow us to comment on aspects of the velodrome design.
As illustrated in Figures~\ref{fig:FigLeanAngle}--\ref{fig:FigPower}, \ref{fig:FigCoMSpeed2}, \ref{fig:FigBLspeed}, \ref{fig:FigModelShift}, the transitions\,---\,between the straights and the circular arcs\,---\,do not result in smooth functions for the lean angles, speeds and powers.
It might suggest that a commonly used Euler spiral, illustrated in Figure~\ref{fig:FigCurvature}, is not the optimal transition curve.
Perhaps, the choice of a transition curve should consider such phenomena as the jolt, which is the temporal rate of change of acceleration.
It might also suggest the necessity for the lengthening of the transition curve.
An optimal velodrome design would strive to minimize the separation between the zero line and the curve in Figure~\ref{fig:FigAngleDiff}, which is tantamount to optimizing the track inclination to accommodate the lean angle of a rider.
The smaller the separation, the smaller the second summand in term~(\ref{eq:modelB}).
As the separation tends to zero, so does the summand.
These considerations are to be examined in future work.
Also, the inclusion, within the model, of a change of mechanical energy, discussed by \citet[Appendix~A]{BSSSbici4}, is an issue to be addressed.
Another consideration to be examined is the discrepancy between the model and measurements with respect to the phase shift between power and cadence, illustrated in Figures~\ref{fig:FigIPShift} and \ref{fig:FigModelShift}.
A possible avenue for such a study is introduced by \citet[Appendix~B]{BSSSbici4}.
In conclusion, let us emphasize that our model is phenomenological.
It is consistent with\,---\,but not derived from\,---\,fundamental concepts.
Its purpose is to provide quantitative relations between the model parameters and observables.
Its key justification is the agreement between measurements and predictions or retrodictions, as discussed in Section~\ref{sec:Adequacy}.
\section*{Acknowledgements}
We wish to acknowledge Mehdi Kordi, for information on the track geometry and for measurements, used in Sections~\ref{sec:Formulation} and \ref{sec:Adequacy}, respectively;
Tomasz Danek, for statistical insights into these measurements;
Elena Patarini, for her graphic support;
Roberto Lauciello, for his artistic contribution;
and Favero Electronics, for inspiring this study by their technological advances in power meters.
\bibliographystyle{apa}
\section{Introduction}\label{intro}
The number of foreign travelers has rapidly increased during the last several years.
Therefore, the economy in Japan should be considerably affected by their travel dynamics.
For marketers in the tourism industry, for example, a global view of their travel patterns is expected to be highly informative.
Human mobility data are distributed by several organizations, and an analysis of mobility patterns has been an important topic in this regard. During the past decade, in particular, datasets based on records of mobile devices \cite{Hossmann2011,Isaacman2012,Ganti2013} and geo-located tags in social networks \cite{Hawelka2013,Zagheni2014,Spyratos2019} have been actively studied.
The target scale ranges from an intramural or urban scale to an international scale (e.g., migration).
In this paper, we describe a network analysis of travel pathways by foreign travelers in Japan.
\subsection{Dataset}
The dataset analyzed is a set of travel records of foreign travelers in Japan that is distributed by the Ministry of Land, Infrastructure, and Transport,
called the \textit{FF-data} (Flow of Foreigners-Data) \cite{FFdata}.
In this dataset, the traveler pathways, i.e., the locations the travelers visited in chronological order, are recorded based on survey responses of the travelers themselves.
FF-data are separated into yearly datasets; currently, datasets from 2014 through 2017 are available.
Although these datasets contain various attributes of foreign visitors, we focus on identifying the macroscopic features of the travel pathways.
As an interesting characteristic of such travel pathways, because Japan is surrounded by water and because the FF-data contain only the records of foreign travelers, all pathways start and end at airports or seaports.
\subsection{Travel data as a network and as a memory network}\label{MemoryNetworkMain}
It is common to characterize travel data as a network \cite{Barbosa2018}.
A naive way to characterize travel data as a network is to represent the locations of a departure and destination as nodes and a transition of a traveler between two nodes as a directed edge.
The result is typically a multigraph (a graph with multiple edges between a pair of nodes) because numerous travelers travel between the same pairs of locations.
However, such a network representation has a crucial limitation as it can only encode pairwise information among the nodes.
Although some travelers visit more than one destination during their visit, the pathways are broken down into independent transitions between each pair of locations and are encoded as edges.
For example, although the pathway from Tokyo to Osaka, followed by Osaka to Kyoto, is quite popular, presumably for tourists, the transition from Osaka to Kyoto is recorded independent of the transition from Tokyo to Osaka.
\begin{figure}[t!]
\begin{center}
\includegraphics[width= \columnwidth, bb = 0 0 253 253]{./figures/ModuleMapFigure.pdf}
\end{center}
\caption{
Representative transitions of foreign travelers in identified modules.
The details of this figure are described in Sec.~\ref{sec:SecondOrderMarkov}.
}
\label{fig:ModuleFeatures}
\end{figure}
We instead conducted an analysis using a higher-order network representation called a \textit{memory network} \cite{Rosvall2014,edler2017mapping,lambiotte2019networks} that incorporates the information of multiple transitions of each traveler.
We identify densely connected components (sets of nodes) in a memory network and denote them as \textit{modules}. A structure that consists of densely connected components is termed an assortative structure, and a method to identify an assortative structure is termed community detection\footnote{In this paper, a module is loosely defined in a narrow sense. As we briefly mention in Appendix \ref{sec:MapEquation}, the community detection algorithm that we use for memory networks is not directly designed to identify densely connected components. Thus, an outcome of community detection may indicate a more general module structure. Nevertheless, we define modules as densely connected components (as a structure that we focus on). Moreover, strictly speaking, we would need to specify what we mean by ``densely connected'' exactly. When the problem is treated as an inference problem (instead of an optimization problem), we would also need to assess whether the identified modules are statistically significant. However, we treat a module as an object that is not strictly defined. As we will also mention in Sec.~\ref{sec:Stationarity}, we only use community detection to \textit{propose} a set of components that may be interpreted as modules in the sense of densely connected components. Therefore, although we refer to the identified components as modules for simplicity, it is more precise to describe them as candidates of modules. In other words, we do not define modules as an outcome of an algorithm.}.
Figure ~\ref{fig:ModuleFeatures} shows a brief overview of the classified elements of traveler pathways.
Note that the pathways are basically classified into modules of different regions.
For example, however, because Tokyo is a hub for many different destinations, transitions containing Tokyo belong to many different modules.
This is therefore an overlapping module structure.
Herein, we provide a brief introduction to a memory network; a formal definition of the memory network can be found in ~\cite{Rosvall2014,edler2017mapping} and in Appendix \ref{MemeryNetworkFormulation}.
An $m$th-order Markov (memory) network consists of \textit{state nodes} and the directed edges between them.
A state node represents the destination as well as its recent $m-1$ transitions.
For example, in a second-order Markov network, when a traveler moves from Osaka to Kyoto, we have a state node that places Kyoto as the destination with the memory of Osaka as the previous location.
If the traveler left Tokyo before arriving at Osaka, we also have a state node that has Osaka as the destination with the memory of Tokyo, and these two state nodes are connected by a directed edge.
Because many travelers travel along similar pathways, a memory network is typically a multigraph.
From the viewpoint of a memory network, a simple network representation is categorized as a first-order Markov network or a memoryless network.
The actual locations are referred to as \textit{physical nodes} in a memory network.
The destination of a state node is regarded as a physical node to which a state node belongs.
As a community detection algorithm for memory and memoryless networks, we employ the Infomap \cite{MapEquationURL} that optimizes an objective function called the map equation \cite{Rosvall2008,Rosvall2011} (see Appendix~\ref{MemeryNetworkFormulation} for a brief introduction of the map equation).
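To make the construction of a second-order Markov network concrete, the following Python sketch assembles the weighted edge list between state nodes from raw pathways; the itineraries shown are toy examples, not records from the FF-data. The resulting edge list can then be passed to the Infomap, for instance through its state-network input format.
\begin{verbatim}
# Second-order memory network from pathways: a state node is a
# (previous location, destination) pair; a directed edge connects
# consecutive transitions of the same traveler.
from collections import Counter

pathways = [                                   # toy itineraries (assumed)
    ["Narita", "Tokyo", "Osaka", "Kyoto"],
    ["Narita", "Tokyo", "Osaka", "Tokyo"],
]

edges = Counter()
for path in pathways:
    transitions = list(zip(path, path[1:]))    # state nodes (i -> j)
    for prev_state, next_state in zip(transitions, transitions[1:]):
        edges[(prev_state, next_state)] += 1

for (u, v), w in edges.items():
    print(u, "->", v, "weight", w)
\end{verbatim}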
\begin{figure*}[t!]
\begin{center}
\includegraphics[width= \columnwidth, bb = 0 0 691 329]{./figures/pathway_and_memorynetwork.pdf}
\end{center}
\caption{Pathway examples (top) and the corresponding network elements in a second-order Markov network (bottom).
}
\label{fig:pathway_and_memorynetwork}
\end{figure*}
To better understand the community detection on memory networks, let us consider some specific pathways in second-order Markov networks.
We denote a state node corresponding to a transition from location $i$ to location $j$ as $\overrightarrow{ij}$.
If a traveler moves from $i$ to $j$ and he/she moves back to $i$ again (i.e., a backtracking pathway) as shown in Fig.~\ref{fig:pathway_and_memorynetwork}{\bf a} (top), we have a directed edge from $\overrightarrow{ij}$ to $\overrightarrow{ji}$ (Fig.~\ref{fig:pathway_and_memorynetwork}{\bf a} (bottom)).
In contrast, when we have asymmetric pathways, as shown in Fig.~\ref{fig:pathway_and_memorynetwork}{\bf b} (top), the state nodes of opposite directions are not connected (Fig.~\ref{fig:pathway_and_memorynetwork}{\bf b} (bottom)).
Note, however, that even when the backtracking pathway is absent and pathways are asymmetric, $\overrightarrow{ij}$ and $\overrightarrow{ji}$ can be (indirectly) connected; the pathways and the corresponding network element shown in Fig.~\ref{fig:pathway_and_memorynetwork}{\bf c} is such an example.
When the pathways in Figs.~\ref{fig:pathway_and_memorynetwork}{\bf a} and {\bf c} are popular, $\overrightarrow{ij}$ and $\overrightarrow{ji}$ are likely to belong to the same module.
It should be noted that, although representing a dataset using a memoryless network can incur a significant loss of information, it is occasionally sufficient, depending on the dataset and purpose of the analysis.
Furthermore, even when a record of multiple transitions is not explicitly encoded in a network, it may be possible to infer similarities among nodes in some sense from such a simple network. For example, when every pair of nodes is connected by a considerably large number of edges within a certain node set, we can trivially conclude that those nodes are similar to each other.
Therefore, before moving on to an analysis of a second-order Markov network, we first investigate the basic statistics and a first-order Markov network of the FF-data to determine whether a higher-order network representation is necessary for an extraction of a module structure.
\section{Network analysis of travel pathways}\label{sec:NetworkAnalysis}
\subsection{Basic statistics}
Let us first look at the basic statistics of the FF-data.
In Table \ref{tab:basicstatistics}, we summarized the numbers of destinations (including the airport and seaport first visited), travelers, unique pathways (i.e., itineraries), and unique edges (first and second order, respectively), as well as the median, lower and upper quartiles of the pathway lengths (number of steps of transitions), and weights.
The distributions of the pathway lengths and weights are shown in Fig.~\ref{fig:distribution}.
We define the weight of a pathway as the number of travelers who traveled along exactly the same pathway. (This should not be confused with the weight of a memory network, which is defined as the number of transitions occurring between two state nodes.)
It is confirmed that both distributions are heavily tailed and a considerable number of travelers travel to more than two destinations.
This observation implies that there is no evident cutoff in the length or weight of the pathways, nor a characteristic scale of length and weight, that we should consider.
In other words, pathways of multiple lengths apparently contribute significantly to the pathway structures.
Therefore, the first-order Markov network may be inadequate for a correct understanding of travel pathways because it only takes into account the one-step transitions.
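For completeness, the quantities of Table~\ref{tab:basicstatistics} can be computed along the following lines; the records shown are placeholders, since the FF-data themselves cannot be reproduced here.
\begin{verbatim}
# Quartiles of pathway lengths (number of transitions) and weights.
import numpy as np

# (pathway, number of travelers with exactly this pathway) -- toy records
records = [(["Narita", "Tokyo", "Osaka"], 12), (["Kansai", "Kyoto"], 3)]

lengths = np.array([len(p) - 1 for p, _ in records])
weights = np.array([w for _, w in records])

print(np.percentile(lengths, [25, 50, 75]))
print(np.percentile(weights, [25, 50, 75]))
\end{verbatim}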
\begin{table*}[t!]
\begin{center}
\caption{Basic statistics of travel pathway in the FF-data for each year.
Each triple in the pathway lengths and weights represent the 25\% quantile, median, and 75\% quantile, respectively.}
\begin{tabular}{@{}lcccc}
\toprule
Year & 2014 & 2015 & 2016 & 2017 \\ \midrule
Num. of destinations & 80 & 82& 83& 84\\
Num. of travelers & 46,635 & 60,640& 63,116 & 65,191\\
Num. of pathways &10,887 & 12,093& 11,995 & 11,818 \\
Num. of edges (1st order) &1,995 & 2,130& 2,121 & 2,105 \\
Num. of edges (2nd order) & 9,676 &10,679 & 10,937 & 10,760 \\
Pathway lengths & (2.0, 3.0, 5.0) & (2.0, 3.0, 5.0)& (2.0, 3.0, 5.0)& (3.0, 4.0, 6.0) \\
Pathway weights & (1.0, 1.0, 1.0) & (1.0, 1.0, 2.0) & (1.0, 1.0, 2.0) & (1.0, 1.0, 2.0) \\ \bottomrule
\end{tabular}
\label{tab:basicstatistics}
\end{center}
\end{table*}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7\columnwidth, bb = 0 0 720 360]{figures/plot_distribution.pdf}
\end{center}
\caption{Histograms of {\bf (a)} the pathway lengths (normal scale) and {\bf (b)} pathway weights (log-log scale) of the FF-data in 2017.
In panel (b), the frequencies of the pathway weights are counted based on logarithmic binning.}
\label{fig:distribution}
\end{figure}
\subsection{First-order Markov network}\label{sec:FirstOrderMarkov}
\begin{figure}[t!]
\begin{center}
\includegraphics[bb = 0 0 827 767, width=\columnwidth]{figures/1st_order_adjacency.pdf}
\end{center}
\caption{Results of community detection on a memoryless network using the Infomap: {\bf (Main part)} Classification on the map figure (the upper-left part represents Okinawa islands) and {\bf (Lower-right part)} adjacency matrix of the network sorted according to the module indices. In the map figure, the pin icons indicate the locations of airports and seaports in the dataset.}
\label{fig:partition_1st}
\end{figure}
We then consider the community detection of the first-order Markov network.
Here, we focus on the dataset of 2017.
Figure \ref{fig:partition_1st} shows the partitioning of 47 prefectures, 33 airports, and 4 seaports in Japan.
The corresponding adjacency matrix is shown at the bottom right; each element is a node pair, and the color depth indicates the weight of an edge.
Note that the FF-data includes transitions within a prefecture, indicating self-loops in the first-order Markov network.
The order of the node labels is sorted based on the modules identified using the Infomap and each module is specified by a blue box.
It is confirmed that the Infomap identifies an assortative module structure on the first-order Markov network.
The modules apparently correspond to the geographical regions in Japan: Hokkaido, Tohoku, Chyuetsu, Kanto, Chubu, Kansai, Chugoku, Shikoku, Northern Kyushu, and Southern Kyushu with Okinawa regions.
Although the present results may sound reasonable, there is an important concern.
Notice that all airports and seaports belong to the module of the region where the airport or seaport is located.
This observation implies that the present result is simply dominated by the transition from--to airports and seaports.
If this is the case, then the result may not reflect any informative travel patterns.
This is further confirmed because the result of community detection is considerably varied by removing the airport and seaport nodes.
Moreover, we have also confirmed that the module boundaries of the first-order Markov network can be reproduced with high accuracy by a simple label aggregation based on the module labels of the airport nodes.
These results are shown in Appendix \ref{FirstOrderAnalysisDetail}.
Based on this, we move to an analysis of the second-order Markov networks.
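For reference, a minimal sketch of community detection on the first-order network with the Infomap Python package is shown below; the edge list is a toy example, and the method names follow the 1.x Python API, which should be checked against the current documentation.
\begin{verbatim}
# Community detection on a first-order (memoryless) directed network.
import infomap

im = infomap.Infomap("--directed --two-level")

# toy weighted edge list: (origin id, destination id, number of travelers)
edges = [(0, 1, 120), (1, 0, 95), (1, 2, 40), (2, 1, 35)]
for u, v, w in edges:
    im.add_link(u, v, w)

im.run()
print(im.get_modules())   # {node_id: module_id}
\end{verbatim}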
\subsection{Second-order Markov network}\label{sec:SecondOrderMarkov}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\columnwidth, bb = 0 0 1273 1166]{figures/2nd_order_adjacency.pdf}
\end{center}
\caption{Adjacency matrix of the second-order Markov network.
Each element represents a state node, and the color depth indicates the weight of an edge in the network.
The gray elements represent the forbidden edges.
The order of node labels is sorted based on the module assignments, and the identified modules are specified by blue boxes.}
\label{fig:SecondOrderAdjacencyMatrix}
\end{figure}
Here, we classify the state nodes of the second-order Markov networks into modules using the Infomap, which can be directly applied to higher-order Markov networks.
Figure \ref{fig:SecondOrderAdjacencyMatrix} shows the adjacency matrix of a subgraph in the second-order Markov network corresponding to the dataset in 2017.
The state nodes in the subgraph are extracted such that each state node belongs to one of the eight largest modules and has a total edge weight to other state nodes (i.e., degree) larger than 300.
The matrix elements are sorted based on the module assignments.
Note that there are forbidden edges in the second-order Markov networks.
For example, when an edge is present from state node $\alpha$ to state node $\beta$, an edge in the opposite direction is prohibited because such an edge does not constitute a pathway, e.g., for $\alpha = \mathrm{Tokyo}\rightarrow\mathrm{Osaka}$ and $\beta = \mathrm{Osaka}\rightarrow\mathrm{Kyoto}$, $\beta$ to $\alpha$ is not possible because a traveler cannot be at Kyoto and Tokyo at the same time.
Analogously, the edges of both directions are prohibited when neither of them constitutes a pathway.
The elements corresponding to the forbidden edges are indicated in gray in Fig.~\ref{fig:SecondOrderAdjacencyMatrix}.
It is confirmed that the Infomap identifies an assortative structure on top of the constraint of the forbidden edges.
Larger-sized modules exhibit assortative structures, even though the edges to different modules are not strictly prohibited.
In contrast, the modules of a relatively smaller size are identified owing to a severe constraint of the forbidden edges.
Let us then explore the identified module structure in more detail.
Figure \ref{fig:ModuleFeatures} shows popular transitions that appear in the four largest modules.
The arrows of the same color represent transitions within the same module.
The color depth of an arrow distinguishes the prior (lighter color) and destination (deeper color) physical nodes.
The thickness of each arrow reflects the weight of a transition measured based on the \texttt{flow} (PageRank) in the outcome of the Infomap.
We show the Sankey diagrams of the four largest modules in Fig.~\ref{fig:SankeyDiagrams}.
The locations on the left-hand side of each diagram represent the physical nodes visited before arriving at the (physical) destination nodes; we refer to them as prior nodes.
Conversely, the locations on the right-hand side of each diagram represent the destination nodes.
It is confirmed that, for example, the transition from Kyoto to Tokyo appears in Module 1, while the transition from Tokyo to Kyoto appears in Module 2 (see Fig.~\ref{fig:pathway_and_memorynetwork} and the corresponding part in Sec.~\ref{MemoryNetworkMain} for the implication).
It is also confirmed that the prior nodes and the destination nodes are strongly asymmetric in Module 3.
In contrast, the prior nodes and destination nodes in Modules 2 and 4 are relatively symmetric.
These observations imply that the visits of locations in Module 3 are direction-sensitive, while those in Modules 2 and 4 are less direction-sensitive.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width= \columnwidth, bb = 0 0 740 529]{./figures/Sankey_4_modules.pdf}
\end{center}
\caption{
Sankey diagrams of the four largest modules in the second-order Markov network.
The size of a strip of each location corresponds to the relative amount of flow from--to the location in the memory network.
}
\label{fig:SankeyDiagrams}
\end{figure*}
\section{Stationarity of the modules}\label{sec:Stationarity}
The readers may wonder if the identified modules in the second-order Markov network are ``significant.''
It should be noted here that in this study, we did not perform community detection as a task of statistical inference.
That is, we only used the Infomap as a tool to propose several informative node sets that exhibit an assortative structure.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\columnwidth, bb =0 0 842 595]{figures/fig_alluvial_diagram_2nd.pdf}
\end{center}
\caption{Sankey diagram comparing the modules identified in the second-order Markov networks of successive years.}
\label{fig:alluvial_diagram2}
\end{figure}
However, we still need to assess which of the identified modules are informative.
For example, a number of small modules are identified as a result of the Infomap (although we only show large modules in this paper), and it is doubtful whether they are worth further consideration.
These small modules might have appeared because of the algorithmic infeasibility of the optimization algorithm, or they might have emerged because of the random nature of the network.
In fact, by comparing against the randomness assumed in the map equation \cite{smiljanic2019mapping}, a statistical assessment of significance should be possible.
However, we should also note that, because the memory network of the FF-data must be strongly affected by the geographic constraint and the use of airports, the plausibility of the statistical model is also a considerable issue.
Therefore, instead of assessing the statistical significance, we assess the significance of the identified modules based on the consistency \cite{Kawamoto2018}.
By using the Sankey diagram\footnote{We used Alluvial Generator \cite{MapEquationURL} to draw Fig.~\ref{fig:alluvial_diagram2}. Alluvial Generator generates an alluvial diagram \cite{Rosvall2010}, which is equivalent to the Sankey diagram when there are no degrees of freedom of color depth.}, we assess whether the modules are stationary in the FF-data across different years.
Because it is unlikely that the module structures differ largely across years in reality, if that happens, we should conclude that the modules are simply identified out of noise.
Figure \ref{fig:alluvial_diagram2} indicates that, for larger modules among the ones focused upon in Sec.~\ref{sec:SecondOrderMarkov}, nearly the same modules are indeed identified in all datasets.
From these observations, we conclude that the large modules considered in Sec.~\ref{sec:SecondOrderMarkov} are informative\footnote{Of course, there is a possibility that modules are stationary even when the algorithm only captures noise. This assessment is still very loose. However, the result here indicates that we cannot positively conclude that the identified modules are only due to noise.}.
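As a complementary and purely illustrative check, distinct from the Sankey-diagram comparison used above, one could also quantify the agreement of module assignments across years, for instance with the adjusted Rand index restricted to the state nodes present in both years; a sketch, assuming the assignments are available as dictionaries, is given below.
\begin{verbatim}
# Agreement of module assignments between two years (state node -> module).
from sklearn.metrics import adjusted_rand_score

def consistency(assign_a, assign_b):
    shared = sorted(set(assign_a) & set(assign_b))
    return adjusted_rand_score([assign_a[s] for s in shared],
                               [assign_b[s] for s in shared])
\end{verbatim}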
\section{Conclusion}
Based on the observation that the FF-data apparently have structures that cannot be explained by a simple network representation, we revealed the module structure of the travel pathways of foreign visitors using the framework of the memory network and the map equation (and the Infomap as the specific algorithm).
The results herein offer a better understanding of the travel patterns of foreign visitors. When travel patterns are utilized for deciding government policies or for marketing research in the tourism industry, it is important to recognize whether travel to/from a particular location is considerably conditioned on the location visited previously/later. It would be difficult to identify such a feature by skimming through the raw FF-data. Moreover, for pathways that are highly correlated, the results of memoryless network approaches are not very reliable. The present analysis based on the second-order Markov networks successfully revealed dependencies on the previously/later visited locations.
We expect that the results described herein will be even more useful when the identified modules are related to regional factors. For example, it is interesting to investigate how the modules are correlated with the amount of consumption by tourists and the degree of development of public transportation in each prefecture.
It is difficult to forecast whether the structures we found will exist in the future, particularly during an era in which the way people live and travel is drastically changing.
In any case, however, we expect that our results will serve as a reference for future data analysis of traveling patterns.
\begin{acknowledgements}
This work was supported by the New Energy and Industrial Technology Development Organization (NEDO).
\end{acknowledgements}
\section{Additional Details}
\subsection{Explanation of Game Objectives}
\label{game_objectives}
\textbf{Skiing.} The player controls the direction and speed of a skier and must avoid obstacles, such as trees and moguls. The goal is to reach the end of the course as rapidly as possible, but the skier must pass through a series of gates (indicated by a pair of closely spaced flags). Missed gates count as a penalty against the player's time.
\textbf{Tennis.} The player plays the game of tennis. When serving and returning shots, the tennis players automatically swing forehand or backhand as the situation demands, and all shots automatically clear the net and land in bounds.
\textbf{Ms. Pac-Man.}
The player earns points by eating pellets and avoiding ghosts, which contacting them results in losing a life. Eating a ``power pellet'' causes the ghosts to turn blue, allowing them to be eaten for extra points. Bonus fruits can be eaten for increasing point values, twice per round.
\textbf{Breakout} Using a single ball, the player must use the paddle to knock down as many bricks as possible. If the player's paddle misses the ball's rebound, the player will lose a life.
\subsection{Outputs of QA Extraction Module}
\begin{table}[h]
{\centering
\begin{tabular}{l|m{20em} @{\hskip 0.3in} m{20em}}
Game & \multicolumn{1}{c}{Wikipedia} & \multicolumn{1}{c}{Official Manual}\\
\hline
& \multicolumn{2}{c}{What is the objective of the game?}\\
\cline{2-3}
\multirow{8}{3em}{Skiing} & To reach the bottom of the ski course as rapidly as possible. & To reach the bottom of the hill in the fastes time Have fun and god bless!".\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to succeed in the game?}\\
\cline{2-3}
& N/A & Learning to control the tips of your skis and anticipating and avoiding trouble. \\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to score at the game?}\\
\cline{2-3}
& N/A & Elapsed time at the end of the run.\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{Who are your enemies?}\\
\cline{2-3}
& N/A & N/A\\
\hline
& \multicolumn{2}{c}{What is the objective of the game?}\\
\cline{2-3}
\multirow{8}{3em}{Pacman} & The player earns points by eating pellets and avoiding ghosts. & To score as many points as you can practice clearing the maze of dots before trying to gobble up the ghosts.\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to succeed in the game?}\\
\cline{2-3}
& The player earns points by eating pellets and avoiding ghosts. & Score as many points as you can. \\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to score at the game?}\\
\cline{2-3}
& The player earns points by eating pellets and avoiding ghosts. & N/A\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{Who are your enemies?}\\
\cline{2-3}
& N/A & Ghosts. stay close to an energy pill before eating it, and tease the ghosts.\\
\hline
& \multicolumn{2}{c}{What is the objective of the game?}\\
\cline{2-3}
\multirow{8}{3em}{Tennis} & The first player to win one six-game set is declared the winner of the match. & N/A\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to succeed in the game?}\\
\cline{2-3}
& The first player to win one six-game set is declared the winner of the match. & precisely aim your shots and hit them out of reach of your opponent. \\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to score at the game?}\\
\cline{2-3}
& N/A & The first player to win at least 6 games and be ahead by two games wins the set.\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{Who are your enemies?}\\
\cline{2-3}
& N/A & N/A\\
\hline
& \multicolumn{2}{c}{What is the objective of the game?}\\
\cline{2-3}
\multirow{8}{3em}{Breakout} & The player must knock down as many bricks as possible. & Destroy the two walls using five balls To destroy the wall in as little time as possible The first player or team to completely destroy both walls or score the most points To smash their way through the wall and score points\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to succeed in the game?}\\
\cline{2-3}
& the player must knock down as many bricks as possible by using the walls and/or the paddle below to hit the ball against the bricks and eliminate them & The first team to destroy a wall or score the most points after playing five balls wins the game To destroy the walls in as little time as possible \\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to score at the game?}\\
\cline{2-3}
& N/A & Scores are determined by the bricks hit during a game A player scores points by hitting one of the wall's bricks Smash their way through the wall and score points.\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{Who are your enemies?}\\
\cline{2-3}
& N/A & N/A\\
\hline
\end{tabular}
}
\caption{\label{table:wiki_vs_official_FULL} Table showing the outputs of the QA Extraction module on Wikipedia instructions vs the official Atari manual. The Wikipedia manual is significantly shorter, and contains less information, causing the extractive QA model to repeat answers. However, the overall information, across the 4 questions, captured by the Extraction module is in good agreement across Wiki and Official manuals.}
\end{table}
\section{Background}\label{sec2}
\subsection{Reducing sample complexity in RL}
Prior works have attempted to improve sample efficiency with new exploration techniques \cite{schulman2017proximal,haarnoja2018soft,badia2020agent57}, self-supervised objectives \cite{pathak2017curiosity,muzero}, and static demonstration data \cite{kumar2019stabilizing,kumar2020conservative}. \citet{fan2022generalized} cast RL training as a training-data distribution optimization problem and propose a policy that is mathematically controlled to traverse high-value and non-trivial states. However, such a solution requires manual reward shaping. Currently, the SOTA RL agent \cite{badia2020agent57} still requires billions of frames to complete the Skiing game, which humans can master within minutes.
\subsection{Grounding objects for control problems}
\label{background:instruction}
Most prior works study grounding -- associating perceived objects with appropriate text -- in the setting of step-by-step instruction following for object manipulation tasks \cite{wang2016learning,bahdanau2018learning} or indoor navigation tasks \cite{chaplot2018gated,janner2018representation,chen2019touchdown,shridhar2020alfred}.
The common approach has been to condition the agent's policy on the embedding of both the instruction and observation \cite{mei2016listen,hermann2017grounded,janner2018representation,misra2017mapping,chen2019touchdown,pashevich2021episodic}. Recently, modular solutions have been shown to be promising \cite{min2021film}. However, the majority of these works use synthetic language. These often take the form of templates, e.g., ``what colour is $<$object$>$ in $<$room$>$'' \cite{chevalier2018babyai}.
On the other hand, recent large-scale image-text backbones, e.g., CLIP \cite{radford2021learning}, have shown robustness to different styles of visual inputs, even sketches. Therefore, CLIP may be the current best option for zero-shot grounding.
\subsection{Reinforcement learning informed by natural language}
\label{background:RL_text}
In the language-assisted setting, step-by-step instructions have been used to generate auxiliary rewards, when environment rewards are sparse. \citet{goyal2019using,wang2019reinforced} use auxiliary reward-learning modules trained offline to predict whether trajectory segments correspond to natural language annotations of expert trajectories.
In a more general setting, text may contain both information about optimal policy and environment dynamics.
\citet{branavan2012learning} improve Monte-Carlo tree search planning by accessing a natural language manual. They apply their approach to the first few moves of the game Civilization II, a turn-based strategy game. However, many of the features they use are handcrafted to fit the game of Civilization II, which limits the generalization of their approach.
\citet{narasimhan2018grounding,wang2021grounding} investigate planning in a 2D game environment with a fixed number of entities that are annotated by natural language (e.g., the `spider' and `scorpion' entities might be annotated with the descriptions ``randomly moving enemy'' and ``an enemy who chases you'', respectively). The agent learns to generalize to new environment settings by learning the correlation between the annotations and the environmental goals and mechanics. The proposed agent achieves better performance than the baselines, which do not use natural language. However, the design of the 2D environment uses 6-line descriptions created from templates. Such a design oversimplifies the observations into a 2D integer matrix and removes the need for visual understanding and generalization to new objects and new formats of instruction manual, e.g., human-written manuals.
\subsection{RL Models that Read Natural Language Instructions}
\citet{zhong2019rtfm,zhong2021silg} make use of special architectures with Bidirectional Feature-wise Linear Modulation Layers that support multi-hop reasoning on 6 $\times$ 6 grid worlds with template-generated instruction manuals. However, the model requires 200 million training samples from templates identical to the test environments. Such a training requirement results in performance loss even on 10 $\times$ 10 grid worlds with identical mechanics, thus limiting the generalization of the model.
\citet{wang2021grounding} mix embeddings of entities with multi-modal attention to obtain entity representations, which are then fed to a CNN actor. The attention model and CNN actor are trained jointly using RL. However, the whole framework is designed primarily for a grid world with a template-generated instruction manual consisting of one line per entity.
In our experiments, we find that extracting entity embeddings from human-written Atari instruction manuals following \citet{wang2021grounding} does not improve the performance of our vision-based agent due to the complexity and diversity of the language.
\section{Conclusions}
In this work, we propose Read and Reward, a method that assists and speeds up RL algorithms on the Atari challenges by reading downloaded game manuals released by the Atari game developers. Our method consists of a QA extraction module that extracts and summarizes relevant information from the manual and a reasoning module that assigns auxiliary rewards to detected in-game events by reasoning with information from the manual. The auxiliary reward is then provided to a standard A2C RL agent.
To our knowledge, this work is the first successful attempt at using instruction manuals in a fully automated and generalizable framework for solving the widely used Atari RL benchmark \cite{badia2020agent57}. When assisted by our design, A2C improves on 4 Atari games without immediate reward and requires 1000x fewer training frames compared to the previous SOTA Agent 57 on the hardest Skiing game.
\section{Limitations and Future Works}
\label{sec:limitations}
One of the main implementation challenges and limitations is the requirement of object localization and detection as mentioned in Section~\ref{sec:grounding}.
However, these limitations are not present in environments providing object ground-truth, such as modern game environments \cite{fan2022minedojo}, and virtual reality worlds \cite{ai2thor}. Other environments, such as indoor navigation and autonomous driving, include relatively reliable backbone object detection models \cite{he2017mask}.
In addition, with recent progress on video-language models, we believe that there will soon be more reliable solutions as drop-in replacements for our current algorithm for localization and detection.
Another simplification we made in Section~\ref{limitation_distance} is to consider only interactions where objects get sufficiently \textbf{close} to each other. While such simplification appears to be sufficient in Atari, where observations are 2D, one can imagine this to fail in a 3D environment like Minecraft \cite{fan2022minedojo} or an environment where multiple types of interaction could happen between same objects \cite{ai2thor}. Future works would benefit by grounding and tracking more types of events and interactions following \citet{wang2022language}.
\section{Introduction}\label{sec1}
Reinforcement Learning (RL) has achieved impressive performance in a variety of tasks, such as Atari \cite{mnih2015human,badia2020agent57}, Go \cite{silver2017mastering}, or autonomous driving \cite{GranTurismo,survey_driving}, and is hypothesized to be an important step toward artificial intelligence \cite{SILVER2021103535}. However, RL still faces a great challenge when applied to complicated, real-world-like scenarios \cite{shridhar2020alfworld,szot2021habitat} due to its low sample efficiency.
The Skiing game in Atari, for example, requires the skier (agent) to ski down a snowy hill and hit the objective gates in the shortest time, while avoiding obstacles. Such an intuitive game with simple controls (left, right) still required 80 billion frames to solve with existing RL algorithms \cite{badia2020agent57}, roughly equivalent to 100 years of non-stop playing.
Observing the gap between RL and human performance, we identify an important cause: the lack of knowledge and understanding about the game. One of the most available sources of information, that is often used by humans to solve real-world problems, is unstructured text, e.g., books, instruction manuals, and wiki pages. In addition, unstructured language has been experimentally verified as a powerful and flexible source of knowledge for complex tasks \cite{tessler2021learning}. Therefore, we hypothesize that RL agents could benefit from reading and understanding publicly available instruction manuals and wiki pages.
In this work, we demonstrate the possibility for RL agents to benefit from human-written instruction manuals.
We download per-game instruction manuals released by the original game developers\footnote{\href{https://atariage.com/system_items.php?SystemID=2600&itemTypeID=MANUAL}{atariage.com}} or Wikipedia and experiment with 4 Atari games where the object location ground-truths are available.
Two challenges arise with the use of human-written instruction manuals:
1) \emph{Length}: Manuals contain a lot of redundant information and are too long for current language models. Only a few paragraphs out of the 2-3 page manuals contain information relevant to any specific aspect of concern, e.g., the game's objective or a specific interaction.
2) \emph{Reasoning}: References may be implicit, and attribution may require reasoning across paragraphs. For example, in Skiing, the agent needs to understand that hitting an obstacle is bad using both ``you lose time if the skier hits an obstacle'' and ``the objective is to finish in the shortest time'', which are from different parts of the manual.
In addition to these challenges, manuals are unstructured and contain significant syntactic \emph{variation}. For example, the phrase ``don't hit an obstacle" is also referred to as ``don't get wrapped around a tree" or ``avoid obstacles". Finally, the grounding of in-game features to references in the text remains an open problem \cite{luketina2019survey}.
Our proposed Read and Reward framework (Figure~\ref{fig:archi}) addresses the challenges with a two-step process. First, a zero-shot extractive QA module is used to extract and summarize relevant information from the manual, and thus tackles the challenge of \emph{Length}. Second, a zero-shot reasoning module, powered by a large language model, \emph{reasons} about contexts provided by the extractive QA module and assigns auxiliary rewards to in-game interactions. After object detection and localization, our system registers an interaction based on the distance between objects. The auxiliary reward from the interaction can be consumed by any RL algorithm.
Even though we only consider ``hit" interactions (detected by tracking distance between objects) in our experiments, Read and Reward still achieves 60\% performance improvement on a baseline using 1000x fewer frames than the SOTA for the game of Skiing. In addition, since Read and Reward does not require explicit training on synthetic text \cite{hermann2017grounded,chaplot2018gated,janner2018representation,narasimhan2018grounding,zhong2019rtfm,zhong2021silg,wang2021grounding}, we observe consistent performance on manuals from two \emph{different} sources: Wikipedia and Official manuals.
To our knowledge, our work is the first to demonstrate the capability to improve RL performance in an end-to-end setting using real-world manuals designed for human readers.
\section{Read and Reward}\label{sec:read_and_reward}
We identify two key challenges to understanding and using the instruction manual with an RL agent.
The first challenge is \textbf{Length}.
Given the sheer length of the raw text ($2\sim 3$ pages), current pre-trained language models cannot handle the raw manual due to input length constraints. Additionally, most works in the area of long-document reading and comprehension \cite{beltagy2020longformer,ainslie2020etc,zemlyanskiy2021readtwice} require a fine-tuning signal from the task, which is impractical for RL problems since the RL loss already suffers from high variance. Furthermore, an instruction manual may contain a lot of task-irrelevant information. For example, the official manual for MsPacman begins with: ``MS. PAC-MAN and characters are trademark of Bally Midway Mfg. Co. sublicensed to Atari, Inc. by Namco-America, Inc.''
\begin{figure*}[t]
\vspace{-3mm}
\centering
\centerline{\includegraphics[width=0.9\textwidth]{imgs/extract.png}}
\vspace{-6mm}
\caption{Illustration of the QA Extraction Module on the game PacMan. We obtain generic information about the game by running extractive QA on 4 generic questions (3 shown since one question did not have an answer). We then obtain object-specific information using a question template. We concatenate generic and object-specific information to obtain the $<$context$>$ string (used in Figure~\ref{fig:reasoning}).}
\label{fig:QA_extraction}
\vspace{-3mm}
\end{figure*}
It is therefore intuitive in our first step to have a QA Extraction Module that produces a summary directed toward game features and objectives, to remove distractions and simplify the problem of reasoning.
The second challenge is \textbf{Reasoning}. Information in the manual may be implicit. For example, the Skiing manual states that the goal is to arrive at the end in the fastest time, and that the player will lose time when they hit a tree, but it never states directly that hitting a tree reduces the final reward. Most prior works \cite{eisenstein2009reading,narasimhan2018grounding,wang2021grounding,zhong2019rtfm,zhong2021silg} either lack the capability of multi-hop reasoning or require extensive training to form reasoning for specific manual formats, limiting the generalization to real-world manuals with a lot of variations like the Atari manuals.
On the other hand, large language models have achieved success without task-specific training in many fields, including reading comprehension, reasoning, and planning \cite{brown2020language,kojima2022large,ahn2022can}. Therefore, in the Reasoning Module, we compose natural language queries about specific object interactions in the games to borrow the in-context reasoning power of large language models.
The rest of the section describes how the two modules within our Read and Reward framework (a QA extraction module and a reasoning module) are implemented.
\paragraph{QA Extraction Module (Read)}\label{method:qa_extraction}
In order to produce a summary of the manual directed towards game features and objectives, we follow the extractive QA framework first proposed by \citet{bert}, which extracts a text sub-sequence as the answer to a question about a passage of text.
The extractive QA framework takes a raw text sequence $S_{\text{manual}}=\{w^0, \dots, w^L\}$ and a question string $S_{\text{question}}$ as input. Then, for each token (word) $w^i$ in $S_{\text{manual}}$, the model predicts the probability that the current token is the start token, $p^i_{\text{start}}$, or the end token, $p^i_{\text{end}}$, with a linear layer on top of word-piece embeddings. Finally, the output $S_{\text{answer}}=\{w^{i^*},\dots,w^{j^*}\}$ is selected as the sub-sequence of $S_{\text{manual}}$ that maximizes the joint probability of start and end: $(i^*, j^*) = \arg\max_{i \le j} \, p^i_{\text{start}} \, p^j_{\text{end}}$.
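The span-selection step can be sketched as follows; this is a simplified stand-in for the actual model, and it assumes the per-token start and end probabilities have already been computed:
\begin{verbatim}
def select_span(tokens, p_start, p_end, max_len=30):
    # pick the answer span (i, j), i <= j, maximizing p_start[i] * p_end[j]
    best, best_score = (0, 0), -1.0
    for i in range(len(tokens)):
        for j in range(i, min(i + max_len, len(tokens))):
            score = p_start[i] * p_end[j]
            if score > best_score:
                best, best_score = (i, j), score
    i, j = best
    return " ".join(tokens[i:j + 1])
\end{verbatim}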
Implementation-wise, we use a RoBERTa-large model \cite{liu2019roberta} for Extractive QA fine-tuned on the SQUAD dataset \cite{rajpurkar2016squad}. Model weights are available from the AllenNLP API\footnote{\href{https://demo.allennlp.org/reading-comprehension/transformer-qa}{AllenNLP Transformer-QA}}.
To handle instruction manual inputs longer than the maximum length accepted by RoBERTa, we split the manual into chunks according to the max-token size of the model, and concatenate extractive QA outputs on each chunk.
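A minimal sketch of this chunking step is given below; \texttt{answer\_fn} is an assumed wrapper around the extractive QA model that returns an empty string when no answer is found in a chunk:
\begin{verbatim}
def answer_over_chunks(manual_tokens, question, answer_fn, max_tokens=384):
    # split the manual into model-sized chunks, concatenate per-chunk answers
    answers = []
    for start in range(0, len(manual_tokens), max_tokens):
        chunk = " ".join(manual_tokens[start:start + max_tokens])
        answer = answer_fn(question=question, context=chunk)
        if answer:
            answers.append(answer)
    return " ".join(answers)
\end{verbatim}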
As shown in Figure~\ref{fig:QA_extraction}, we compose a set of generic prompts about the objective of the game, for example, ``What is the objective of the game?''. In addition, to capture information on agent-object interactions, we first identify the top 10 important objects using TF-IDF. Then, for each object, we obtain an answer to the generated query ``What happens when the player hits a $<$object$>$?''.
Finally, we concatenate all non-empty question-answer pairs per interaction into a $<$context$>$ string.
Note that the format of context strings closely resembles that of a QA dialogue.
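The construction of the $<$context$>$ string can be sketched as follows; the exact TF-IDF configuration and question set are illustrative assumptions, and \texttt{answer\_fn} again stands in for the extractive QA model:
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer

def build_context(manual_text, reference_manuals, answer_fn, n_objects=10):
    # rank words of this manual by TF-IDF against a corpus of other manuals
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(reference_manuals + [manual_text]).toarray()
    vocab = vec.get_feature_names_out()
    objects = [vocab[i] for i in tfidf[-1].argsort()[::-1][:n_objects]]

    questions = ["What is the objective of the game?",
                 "How to score at the game?"]
    questions += [f"What happens when the player hits a {o}?"
                  for o in objects]
    pairs = []
    for q in questions:                      # keep only non-empty answers
        a = answer_fn(question=q, context=manual_text)
        if a:
            pairs.append(f"Question: {q} Answer: {a}")
    return " ".join(pairs)
\end{verbatim}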
\begin{figure}[h]
\centering
\centerline{\includegraphics[width=0.45\textwidth]{imgs/reasoning.png}}
\vspace{-3mm}
\caption{Illustration of the Reasoning Module. The $<$context$>$ (from Figure~\ref{fig:QA_extraction}) related to the object \textbf{ghost} from the QA Extraction module is concatenated with a template-generated question to form the zero-shot in-context reasoning prompt for a Large Language Model. The Yes/No answer from the LLM is then turned into an auxiliary reward for the agent.}
\label{fig:reasoning}
\vspace{-3mm}
\end{figure}
\paragraph{Zero-shot reasoning with pre-trained QA model (Reward)}\label{method:read_and_reward:reasoning}
The summary $<$context$>$ string from the QA Extraction module could directly be used as prompts for reasoning through large language models \cite{brown2020language,kojima2022large,ahn2022can}.
Motivated by the dialogue structure of the summary and by limited computational resources, we choose Macaw \cite{macaw}, a general-purpose zero-shot QA model with performance comparable to GPT-3 \cite{brown2020language}. Compared to the RoBERTa extractive QA model of the QA Extraction module, Macaw is significantly larger and better suited for reasoning. Although we find that GPT-3 generally provides more flexibility and better explanations for its answers on our task, Macaw is faster and open-source.
As shown in Figure~\ref{fig:reasoning}, for each interaction we compose the query prompt as ``$<$context$>$ Question: Should you hit a $<$object of interaction$>$ if you want to win? Answer: '', and calculate the LLM score on the choices \{Yes, No\}. The Macaw model directly supports scoring the options in a multiple-choice setting. We note that for other generative LLMs, we could use the probability of ``Yes'' vs. ``No'' as scores, similar to \citep{ahn2022can}.
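A sketch of this scoring step is shown below; \texttt{score\_fn} is an assumed interface that returns the language model's score for a given answer option (e.g., Macaw's multiple-choice score or a generative model's log-probability of the option):
\begin{verbatim}
def interaction_rating(context, obj, score_fn):
    # return +1 if the model prefers "Yes" (hitting is good), otherwise -1
    prompt = (f"{context} Question: Should you hit a {obj} "
              f"if you want to win? Answer: ")
    yes_score = score_fn(prompt, "Yes")
    no_score = score_fn(prompt, "No")
    return 1 if yes_score > no_score else -1
\end{verbatim}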
Finally, during game-play, a rule-based algorithm detects `hit' events by checking the distance between bounding boxes. For detected interactions, an auxiliary reward is provided to the RL algorithm according to \{Yes, No\} rating from the reasoning module (see Section~\ref{limitation_distance} for details).
\section{Experiments}
\subsection{Atari Environment}
The OpenAI Gym Atari environment contains diverse Atari games designed to pose a challenge for human players. The observation consists of a single game screen (frame): a 2D array of 7-bit pixels, 160 pixels wide by 210 pixels high. The action space consists of the 18 discrete actions defined by the joystick controller. Instruction manuals released by the original game developers have been scanned and parsed into publicly available HTML format\footnote{\href{https://atariage.com/system_items.php?SystemID=2600&itemTypeID=MANUAL}{atariage.com}}. For completeness, we also attach an explanation of the objectives for each game in Appendix~\ref{game_objectives}.
\subsection{Atari Baselines}
\paragraph{A2C} A2C \cite{mnih2016asynchronous} and PPO \cite{schulman2017proximal} are among the earliest actor-critic algorithms that brought success to deep RL on Atari games. A2C learns a policy $\pi$ and a value function $V$ to reduce the variance of the REINFORCE \cite{sutton1999policy} algorithm. The algorithm optimizes the advantage function $A(s,a)=Q(s,a)-V(s)$ instead of the action value function $Q(s,a)$ to further stabilize training.
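As a reminder of the quantities involved, a toy NumPy-only computation of the A2C loss terms on a single rollout could look as follows (a sketch for illustration; the actual agent uses the cited implementation):
\begin{verbatim}
import numpy as np

def a2c_losses(log_probs, values, rewards, gamma=0.99):
    # discounted returns R_t and advantages A(s, a) = R_t - V(s_t)
    returns, running = np.zeros_like(rewards, dtype=float), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    advantages = returns - values
    policy_loss = -(log_probs * advantages).mean()
    value_loss = ((returns - values) ** 2).mean()
    return policy_loss, value_loss
\end{verbatim}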
\paragraph{MuZero}
The MuZero algorithm \cite{muzero} combines a tree-based search with a learned model of the environment. The model of MuZero iteratively predicts the reward, the action-selection policy, and the value function. When applied to the Atari benchmarks, MuZero achieves super-human performance in many games.
\paragraph{R2D2}
R2D2 \cite{kapturowski2018recurrent} proposes an improved training strategy for RNN-based RL agents with distributed prioritized experience replay. The algorithm is the first to exceed human-level performance in 52 of the 57 Atari games.
\paragraph{Agent 57}
\citet{badia2020agent57} train a neural network that parameterizes a family of policies with different degrees of exploration and propose a meta-controller to choose which policy to prioritize during training. Agent 57 exceeds human-level performance on all 57 Atari games.
\subsection{Delayed Reward Schedule}\label{method:delayed_reward}
Most current Atari environments include a dense reward structure, i.e., the reward is obtained quite often from the environment. Indeed, most RL algorithms for the Atari environments perform well due to their dense reward structure.
However, in many real-world scenarios \cite{ai2thor,wang2022learning} or open-world environments \cite{fan2022minedojo}, it is expensive to provide a dense reward. The reward is usually limited and ``delayed" to a single positive or negative reward obtained at the very end of an episode (e.g., once the robot succeeds or fails to pour the coffee).
Notably, Atari games with the above ``delayed" reward structure, such as Skiing, pose great challenges for current RL algorithms, and are among some of the hardest games to solve in Atari \cite{badia2020agent57}.
Therefore, to better align with real-world environments, we use a \emph{delayed reward} version of the Tennis, Breakout, and Ms. Pac-Man games by providing the reward (the ``final game score'') only at the end of the game. This delayed reward structure does not change the optimization problem, but it is more realistic and imposes additional challenges on RL algorithms.
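One simple way to realize such a schedule is a thin environment wrapper; the sketch below assumes the classic four-tuple \texttt{gym} step API and is not necessarily the implementation used here. It withholds the per-step reward and releases the accumulated score only when the episode terminates:
\begin{verbatim}
import gym

class DelayedReward(gym.Wrapper):
    def reset(self, **kwargs):
        self._score = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._score += reward                 # accumulate the game score
        return obs, self._score if done else 0.0, done, info
\end{verbatim}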
\subsection{Grounding Objects in Atari}
\label{sec:grounding}
We find grounding (detecting and relating visual objects to keywords from TF-IDF) a significant challenge to applying our framework. In some extensively studied planning/control problems, such as indoor navigation, pre-trained models such as Mask R-CNN reasonably solve the task of grounding \cite{shridhar2020alfred}. However, the objects in the Atari environments are too different from the training distribution of visual backbones like Mask R-CNN. Therefore, the problem becomes very challenging for the domain of Atari. Since it is not the main focus of our proposed work, we attempt to provide an unsupervised end-to-end solution only for the game of Skiing.
\paragraph{Full end-to-end pipeline on Skiing}
\label{skiing_pipeline}
To demonstrate the full potential of our work, we offer a proof-of-concept agent directly operating on raw visual game observations and the instruction manual for the game of Skiing. For unsupervised object localization, we use SPACE, a method that uses spatial attention and decomposition to detect visual objects \cite{lin2020space}. SPACE is trained on 50000 random observations from the environment and generates object bounding boxes. These bounding boxes are fed to CLIP, a zero-shot model that can pair images with natural language descriptions \cite{radford2021learning}, to classify each bounding box into categories defined by keywords from TF-IDF. The SPACE/CLIP models produce mostly reasonable results; however, in practice we find that CLIP lacks reliability over time, and SPACE cannot distinguish objects that cover each other. Therefore, to improve classification reliability, we use a Multiple Object Tracker \cite{sort} and set the class of each bounding box to the most dominant class over time.
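The CLIP classification step can be sketched as follows; the prompt template and model variant are illustrative choices, and the temporal smoothing with the tracker is omitted:
\begin{verbatim}
import torch, clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def classify_boxes(frame, boxes, keywords):
    # frame: PIL image; boxes: list of (x0, y0, x1, y1) from SPACE
    text = clip.tokenize([f"a picture of a {k}" for k in keywords])
    labels = []
    with torch.no_grad():
        t = model.encode_text(text.to(device))
        t = t / t.norm(dim=-1, keepdim=True)
        for box in boxes:
            crop = preprocess(frame.crop(box)).unsqueeze(0).to(device)
            v = model.encode_image(crop)
            v = v / v.norm(dim=-1, keepdim=True)
            labels.append(keywords[int((v @ t.T).argmax())])
    return labels
\end{verbatim}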
\begin{figure}[ht]
\vspace{-3mm}
\centering
\centerline{\includegraphics[width=0.5\textwidth]{imgs/detection_failure.png}}
\vspace{-2mm}
\caption{\label{fig:detection_failure}Examples of SPACE \cite{lin2020space} and CLIP \cite{radford2021learning} in the full end-to-end pipeline (Section~\ref{skiing_pipeline}). The top row shows bounding boxes for objects and the bottom row shows corresponding object masks as detected by SPACE. Most of the bounding boxes generated are correct. \textbf{Left}: SPACE confuses bounding boxes of agent and tree into one and the box gets classified as ``tree" (blue), and the auxiliary penalty is not properly triggered. \textbf{Right}: The flag next to the agent (in red circle) is not detected, and therefore the auxiliary reward is not provided.}
\vspace{-4mm}
\end{figure}
\paragraph{Ground-truth object localization/grounding experiments}
\label{ram_pipeline}
For other games (Tennis, Ms. Pac-Man, and Breakout), we follow \citet{anand2019unsupervised} and directly calculate the coordinates of all game objects from the simulator RAM state. Specifically, we first manually label around 15 frames with object names and X,Y coordinates and then train a linear regression model mapping from the simulator RAM state to the labeled object coordinates. Note that this labeling strategy does not work for games in which multiple objects of the same kind appear in a frame, e.g., the multiple trees in one frame of Skiing.
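A sketch of this regression step is given below; the array shapes are assumptions for illustration (the Atari 2600 exposes 128 RAM bytes per frame):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_ram_to_xy(ram_states, labelled_xy):
    # ram_states: (n_labelled_frames, 128); labelled_xy: (n_labelled_frames, 2)
    model = LinearRegression()
    model.fit(np.asarray(ram_states, float), np.asarray(labelled_xy, float))
    return model          # model.predict(ram) -> (x, y) estimate per frame
\end{verbatim}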
\subsection{Simplifying Interaction Detection to Distance Tracking}
\label{limitation_distance}
We make the assumption that all object interactions in Atari can be characterized by the distance between objects, and we ignore all interactions that do not contain the agent. We note that since the Atari environment is in 2D, it suffices to consider only interactions where objects get close to the agent so that we can query the manual for ``hit" interaction.
We, therefore, monitor the environment frames and register an interaction whenever an object's bounding box intersects with the agent's. For each detected interaction, we assign rewards from $\{-r_n, r_p\}$ accordingly, as detailed in Section~\ref{sec:read_and_reward}. Experimentally, we find that it suffices to set $r_n=r_p=5$.
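The interaction detection and reward assignment can be sketched as follows; \texttt{ratings} holds the $\pm 1$ output of the reasoning module per object class, and the box format is an assumption:
\begin{verbatim}
def boxes_intersect(a, b):
    # axis-aligned boxes given as (x_min, y_min, x_max, y_max)
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def auxiliary_reward(agent_box, object_boxes, ratings, r_p=5.0, r_n=5.0):
    total = 0.0
    for name, box in object_boxes.items():
        if boxes_intersect(agent_box, box):
            total += r_p if ratings.get(name, 0) > 0 else -r_n
    return total
\end{verbatim}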
\subsection{RL Agent}
Since Read and Reward only provides an auxiliary reward, it is compatible with any RL algorithm, such as PPO, SAC, TD3, and even Agent 57 \cite{schulman2017proximal,haarnoja2018soft,fujimoto2018addressing,badia2020agent57}. For simplicity, we use a popular open-source implementation \cite{stooke2019rlpyt} of A2C \cite{mnih2016asynchronous}.
\begin{figure}[t]
\vspace{-3mm}
\centering
\centerline{\includegraphics[width=0.5\textwidth]{figures/scatter.pdf}}
\vspace{-4mm}
\caption{\label{fig:skiing}Performance (in Game Score) vs Efficiency (in Frames Seen) of fully end-to-end Read and Reward (green) compared to an A2C baseline and MuZero \cite{muzero} on Skiing. Benefiting from the auxiliary rewards from the instruction manual, Read and Reward outperforms the A2C baseline and achieves a 60\% improvement over random performance while using far fewer training frames compared to the SOTA mixed-policy Agent 57~\cite{badia2020agent57} and GDI-H3~\cite{fan2022generalized}.}
\vspace{-3mm}
\end{figure}
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=0.55\textwidth]{figures/Breakout.pdf}}
\centerline{\includegraphics[width=0.55\textwidth]{figures/Tennis.pdf}}
\centerline{\includegraphics[width=0.55\textwidth]{figures/MsPacman.pdf}}
\caption{Comparison of the training curves of Read and Reward vs. the A2C baseline in terms of game score, alongside Auxiliary Reward vs. Frames Seen, under our delayed reward setting for 3 different Atari games. Read and Reward (red) consistently outperforms the A2C baseline (brown) in all games. In addition, the auxiliary reward from Read and Reward demonstrates a strong positive correlation with the game score, suggesting that the model benefits from optimizing the auxiliary rewards at training time.}
\label{fig:learning_curves}
\end{figure}
\begin{table*}[t]
{\centering
\begin{tabular}{l|m{20em} @{\hskip 0.3in} m{20em}}
Game & \multicolumn{1}{c}{Wikipedia} & \multicolumn{1}{c}{Official Manual}\\
\hline
& \multicolumn{2}{c}{What is the objective of the game?}\\
\cline{2-3}
\multirow{8}{3em}{Pacman} & The player earns points by eating pellets and avoiding ghosts. & To score as many points as you can practice clearing the maze of dots before trying to gobble up the ghosts.\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to succeed in the game?}\\
\cline{2-3}
& The player earns points by eating pellets and avoiding ghosts. & Score as many points as you can. \\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{How to score at the game?}\\
\cline{2-3}
& The player earns points by eating pellets and avoiding ghosts. & N/A\\
\cline{2-3}\cline{2-3}
& \multicolumn{2}{c}{Who are your enemies?}\\
\cline{2-3}
& N/A & Ghosts. stay close to an energy pill before eating it, and tease the ghosts.\\
\hline
\end{tabular}
}
\caption{\label{table:wiki_vs_official} Table showing the outputs of the QA Extraction module on Wikipedia instructions vs the official Atari manual. The Wikipedia manual is significantly shorter, and contains less information, causing the extractive QA model to use the same answer for all questions. Full table for all 4 games is shown in Table~\ref{table:wiki_vs_official_FULL} in the Appendix.}
\vspace{-3mm}
\end{table*}
\subsection{Full Pipeline Results on Skiing}
We compare the performance of our Read and Reward agent using the full pipeline to baselines in Figure~\ref{fig:skiing}; our agent is best on the efficiency axis and close to best on the performance axis. A2C equipped with Read and Reward outperforms the A2C baseline and more complex methods like MuZero \cite{muzero}. We note a 50\% performance gap between our agent and the SOTA Agent 57, but our solution does not require policy mixing and needs more than 1000 times fewer frames to converge to a performance 60\% better than prior works.
When experimenting with the full end-to-end pipeline as described in Section~\ref{skiing_pipeline}, we notice instability in the detection and grounding models. As shown in Figure~\ref{fig:detection_failure}, missing bounding boxes cause frequent mislabeling when objects are close to each other. Such errors may lead to the auxiliary reward being incorrectly assigned. We observe that performance could be further improved with more reliable object detection and grounding.
Although Skiing is arguably one of the easiest games in Atari for object detection and grounding, with easily distinguishable objects and white backgrounds, we still observe that current unsupervised object detection and grounding techniques lack reliability.
\begin{table}[h]
{\centering
\begin{tabular}{lccc}
& R\& R Wiki & R\& R Official & A2C\\
\hline
Tennis & \textbf{-5} & \textbf{-5} & -23 \\
Pacman & \textbf{580} & \textbf{580} & 452 \\
Breakout & \textbf{14} & \textbf{14} & 2
\end{tabular}
}
\vspace{-2mm}
\caption{\label{table:wiki_results} Game scores of algorithms trained under the \emph{delayed reward} schedule. Read and Reward consistently outperforms the baseline A2C agent, which does not use the manual, when using either Wikipedia (\textbf{Wiki}) instructions or the official Atari manual (\textbf{Official}).}
\vspace{-3mm}
\end{table}
\subsection{Results on Games with Delayed Reward Structure}
Since the problem of object detection and grounding is not the main focus of this paper, we obtain ground-truth labeling for object locations and classes as mentioned in Section~\ref{ram_pipeline}. In addition, for a setting more generalizable to other environments~\cite{ai2thor,fan2022minedojo}, we implement \emph{delayed reward} schedule (Section~\ref{method:delayed_reward}) for Tennis, MsPacman, and Breakout.
We plot the Game Score and the auxiliary rewards of the Read and Reward agent vs. the A2C baseline in Figure~\ref{fig:learning_curves}. The A2C baseline fails to learn under the sparse rewards, while the performance of the Read and Reward agent continues to increase with more frames seen. In addition, we observe that the auxiliary reward from the Read and Reward framework has a strong positive correlation with the game score (game reward) of the agent, suggesting that the A2C agent benefits from optimizing the auxiliary rewards at training time.
\subsection{R\&R Behavior on Manuals from Different Source}
To illustrate the generalization of our proposed framework to a different source of information, we download the ``Gameplay'' section from Wikipedia\footnote{\href{https://en.wikipedia.org/}{wikipedia.org}} for each game and feed the section as the manual to our model. In Table~\ref{table:wiki_vs_official}, we show a comparison of the answers to the set of 4 generic questions on Pacman by our QA Extraction Module. Note that the QA Extraction module successfully extracts all important information about avoiding ghosts and eating pellets to get points. In addition, since the official manual contains more information than the Gameplay section of Wikipedia (2 pages vs. less than 4 lines), we were able to extract much more information from the official manual.
Due to the high similarity in the output from the QA Extraction Module, the Reasoning module demonstrates good agreement for both the Wiki manual and the official manual. Therefore, we observe consistent agent performance between the official manual and the Wikipedia manual in terms of game score (Table~\ref{table:wiki_results}).
"attr-fineweb-edu": 2.152344,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
\section{Introduction}
The application of data analysis is becoming increasingly popular for improving sports performance \cite{paul,DeFroda,LAUDNER2019330,9592793}. Objective data, such as pose data, are expected to be a critical part of these analyses that result in innovations. In the past decade, human pose estimation has become one of the most active research fields in computer vision. The solutions range from camera-based \cite{8765346,Moon_2018_CVPR_V2V-PoseNet,mmpose2020} and inertial-sensor-based methods \cite{6402423,marcard} to hybrids of these methods. Inertial-sensor-based methods require the attachment of small sensors to the body; postural data can then be obtained by analyzing the sensors' output. This wearable solution can be applied to a target that moves over a wide area. However, because the sensors are attached to the body, they may hinder the subject, particularly in sports involving acrobatic movements or body contact. Another disadvantage is their influence on the subject's performance: wearing a sensor restricts the athletes' routine, impacting their performance.
Conversely, camera-based methods can estimate poses without the subject's assistance. Recently, deep learning-based methods have realized automatic estimation of human poses only from camera images.
Learning-based methods, such as deep learning-based approaches, are based on training data. Thus, they cannot estimate untrained aspects. Current human pose estimators are trained using public data sources, such as MS COCO \cite{coco} and MPII \cite{andriluka14cvpr}. Some studies have focused on estimating poses without having similar situations included in the training data \cite{Kitamura_2022_WACV,Zwolfer2021Improved2K}, such as upside-down poses in pole vaulting. They used additional training data suited to their situation and improved the pose-estimation performance.
This study focused on pose estimation of subjects wearing loose-fitting clothes as a situation not included in existing datasets. Some types of winter sports, such as freestyle skiing or snowboarding, necessitate the wearing of loose-fitting clothes. Therefore, we considered the need for human pose estimation under loose-fitting clothes, which is significant for this community.
Considering previous approaches, this problem may appear to be easily resolved by including situations with loose-fitting clothes in the training data. However, obtaining the joint positions with loose-fitting clothes requires considerable effort. Even for a human annotator, determining the joint position is difficult because of the shaking of the clothes.
Matsumoto et al. estimated the pose of a human target wearing Japanese kimonos as an example of loose-fitting clothing \cite{TakuyaMATSUMOTO20202019MVP0007}. The method requires the subjects to perform the same movement twice, wearing tight- and loose-fitting clothes. They subsequently estimated the joint position occluded in loose-fitting clothes by matching these two motion sequences. According to the study, motion similarity between the two movements is essential, and accuracy deteriorates when the motion similarity decreases. Thus, this approach is unsuitable for sports applications.
The primary goal of this study was to enable 2D human pose estimation of a subject wearing loose-fitting clothes in sports scenarios. To this end, this study developed a simple but novel method to obtain the ground truth value of joints in loose-fitting clothes. The proposed method uses fast-flashing light-emitting diode (LED) lights placed on the subject's joints. The participants wore loose-fitting clothes over the lights. Filmy clothes made of linen or polyester, which adequately allow light to pass through, were selected as the loose-fitting clothes. The video was captured at 240 fps using a high-frame-rate camera. By extracting the light-on and light-off frames, we rendered two video sequences of 30 fps each: one provides the ground-truth joint positions from the lit LEDs, and the other is used as a regular video sequence.
In the experiment, we obtained data from multiple sequences of persons and motions. We obtained the ground truth data by manually clicking on a light-on video sequence rendered using the proposed method. Subsequently, a state-of-the-art (SOTA) human pose estimation method, MMPose \cite{mmpose2020}, was applied to light-off video sequences. Additionally, two operators manually labeled the joint positions in the light-off video sequences. The estimates and ground truths were compared. We verified that the pose estimates from MMPose and human operators differed from the ground truth data. Therefore, our data can help improve the human pose estimator. Moreover, the ground truth data feedback can improve the human operators' accuracy.
\section{Proposed method}
This study entailed the development of a method for obtaining ground truth data of human joint positions in 2D while the subject wears loose-fitting clothes. In the proposed method, fast-flashing LEDs are attached to the joints, and the video is captured using a high-frame-rate camera; this virtually enables the temporal division of the information source. The subsequent sections present the details of video capturing and post-processing.
\subsection{Capturing method}
The proposed method uses fast-flashing LEDs. The detailed setting had light-on and -off times of 10 and 23.33 ms, respectively. The device was easily implemented using an Arduino Nano and a relay. The participants, two males in their 30s and 20s, wore loose-fitting clothes. Thin polyester clothes were selected.
The video sequences were captured at 240 fps by using a high-frame-rate camera. The light-on and -off cycle was 33.33 ms, corresponding to eight frames on 240 fps video, as shown in Fig. \ref{fig:fig1}. Figure \ref{fig:fig2} shows sample snapshots, with Fig. \ref{fig:fig2}(a) illustrating the LED-on image and Fig. \ref{fig:fig2}(b) showing the LED-off image. As shown in Fig. \ref{fig:fig2}, considering human motion speed, the temporal difference of 2 [frames]/240 [fps] = 8.3 [ms] between (a) and (b) can be ignored.
\begin{figure}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.48\textwidth]{fig1.pdf}
\end{center}
\vspace{-1cm}
\caption{Concept of the proposed approach. The topmost figure shows the brightness of the LED. It is captured at 240 fps. The middle part of the figure shows the brightness in the 240 fps video sequence. From the HFR video, LED-on and LED-off video sequences are extracted.}
\label{fig:fig1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.26\textwidth]{fig2.pdf}
\end{center}
\caption{Example of the LED-off and LED-on images. The capture timing of the two images differs by 8.33 ms. However, the two images are almost the same to the extent relevant for capturing human motion. Thus, the temporal difference can be ignored.}
\label{fig:fig2}
\end{figure}
\subsection{Post-processing}
Following the capturing of the 240-fps video, two 30 fps sequences were rendered: an LED-on video sequence and an LED-off video sequence, as shown in Fig. \ref{fig:fig1}. Each cycle of LED on and off consisted of eight frames. One LED-on frame was extracted from eight successive frames; one LED-off frame was also selected similarly. In this study, we manually extracted LED-on and -off frames. The LED position was then manually annotated by clicking on the LED-on video. Hereafter, we refer to the manually annotated data as the ground truth data.
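Although the frame selection in this study was performed manually, an automatic alternative is easy to sketch: estimate which phase of the eight-frame cycle corresponds to the lit LED from the image brightness, and take the LED-off frame two frames (8.33 ms) later. The brightness heuristic and the use of already-loaded frame arrays are assumptions for illustration:
\begin{verbatim}
import numpy as np

def split_on_off(frames, cycle=8, off_offset=2):
    # frames: list of 240 fps frames as numpy arrays (e.g., read via OpenCV)
    brightness = np.array([f.mean() for f in frames[:cycle]])
    on_phase = int(brightness.argmax())          # brightest phase = LED on
    off_phase = (on_phase + off_offset) % cycle  # 2 frames later = LED off
    return frames[on_phase::cycle], frames[off_phase::cycle]  # two 30 fps seqs
\end{verbatim}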
\section{Experiment}
The effectiveness of the proposed method was experimentally verified. Two types of target movements were captured––a fundamental movement and a skiing motion on a ski simulator––from two subjects. For the experimental verification, we attached LEDs on both sides of the hip, knee, and ankle joints, as shown in Fig. \ref{fig:fig3}.
The following two comparisons were conducted. First, we compared the estimates of the joint positions of the two human operators with the ground truth. Human operators were asked to annotate the positions of the six joints on the LED-off video sequence. Second, we compared the estimates from the learning-based pose estimator with the ground truth. MMPose was used as the learning-based pose estimator and was applied to the LED-off video sequence, in the same manner as for the human operators.
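One straightforward way to quantify such a comparison is the mean Euclidean distance per joint between the estimates and the ground truth; the array shapes below are assumptions for illustration:
\begin{verbatim}
import numpy as np

def mean_joint_error(estimates, ground_truth):
    # both arrays: (n_frames, n_joints, 2) pixel coordinates
    dist = np.linalg.norm(np.asarray(estimates, float)
                          - np.asarray(ground_truth, float), axis=-1)
    return dist.mean(axis=0)   # mean error per joint
\end{verbatim}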
\begin{figure}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.3\textwidth]{fig3.pdf}
\end{center}
\caption{Position of LEDs. We used the six LEDs in the experiment.}
\label{fig:fig3}
\end{figure}
\subsection{Motion sequences}
We used a sideways jump over a ball for the fundamental movement, as shown in Fig.\ref{fig:fig4}. Additionally, we captured the action in the ski simulator, as shown in Fig.\ref{fig:fig5}. Both movements had vertical and horizontal translations. Two subjects were asked to perform the two types of the aforementioned movements.
\begin{figure}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.48\textwidth]{fig4.pdf}
\end{center}
\caption{Snapshots of the sideways jump. Consecutive images are ten frames apart in time. The motion continues for about 160 frames.}
\label{fig:fig4}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.48\textwidth]{fig5.pdf}
\end{center}
\caption{Snapshots of ski simulator motion.}
\label{fig:fig5}
\end{figure}
\subsection{Comparisons between existing methods and ground truths}
Figures \ref{fig:fig6} and \ref{fig:fig7} show the results of the left knee; Fig. \ref{fig:fig6} shows the results for the first subject, while Fig. \ref{fig:fig7} depicts the results for the second subject. The left side of each figure depicts the result of sideways jumping, and the right side depicts the ski motion. As shown in Figs. \ref{fig:fig6} and \ref{fig:fig7}, the graphs exhibit similar trajectories.
\begin{figure}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.48\textwidth]{fig6.pdf}
\end{center}
\caption{The resultant position of the left knee for subject A; the top images show the horizontal position, and the bottom images show the vertical position. The left images show the result for the sideways jump, and the right images show that of the ski simulator.}
\label{fig:fig6}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.48\textwidth]{fig7.pdf}
\end{center}
\caption{The resultant position of the left knee for subject B.}
\label{fig:fig7}
\end{figure}
However, detailed observations reveal certain differences. Figure \ref{fig:fig8} shows an example of an inaccurate estimation during the sideways jump. In Fig. \ref{fig:fig8}, green and blue dots represent the positions annotated by the human operators, whereas orange and red dots denote the positions annotated by MMPose and the ground truth, respectively. Figure \ref{fig:fig8} shows that the human operators are prone to a similar mistake: the bulge of the clothes around the knee creates the illusion of indicating the knee position, which differs from the ground-truth knee position.
Figure \ref{fig:fig9} shows an example of this aspect during the ski motion. As shown in Fig. \ref{fig:fig9}, the human operators and MMPose tend to assume that the knee lies at the center of the clothes. However, in this instance, the subject changed the direction of motion (from left-to-right to right-to-left), while the loose-fitting clothes kept moving from left to right due to inertia.
\begin{figure}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.32\textwidth]{fig8.pdf}
\end{center}
\caption{Example of erroneous result: bulge is considered as knee position.}
\label{fig:fig8}
\vspace{-0.5cm}
\begin{center}
\includegraphics[pagebox=cropbox,width=0.32\textwidth]{fig9.pdf}
\end{center}
\caption{Example of erroneous result: center of clothes is considered as knee position.}
\label{fig:fig9}
\end{figure}
Here, our emphasis was on highlighting the difficulty that the human operators faced while working with loose-fitting clothes. Thus, we confirmed that obtaining training data from human operators for machine learning is difficult.
\section{Conclusion and Future work}
This paper proposed a novel method for obtaining ground truth joint position data under loose-fitting clothes. We also verified the accuracy of existing methods, which strongly supports the necessity of our work.
Unlike most existing pose estimation frameworks, the proposed method places significance on temporal information. Therefore, in future research, we intend to use a network architecture that can handle time-intensive data. We will prepare various sequences involving loose-fitting clothes as ground truth data and use them to train a deep learning framework.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Recently, professional association football has seen a surge in the availability of positional data of the players and the ball, typically collected by GPS, radar or camera systems~\cite{pino2021comparison}. The growing availability of such data has opened up an exciting new avenue for performance analysis~\cite{memmert2018data}. Positional data enables the development of sophisticated performance indicators that accurately measure the technical, tactical and athletic performance of teams and players beyond the possibilities of qualitative observation and simple event-based statistics like ball possession and passing accuracy. High-quality measures of performance are invaluable for effective performance analysis, training, opposition scouting, and player recruitment.
The modelling of human motion is a foundational component of many performance metrics based on positional data. For example, algorithms that compute space control~\cite{rico2020identification} or simulate passes~\cite{spearman2017physics} implicitly or explicitly make assumptions about human kinematics. These kinematic assumptions have never been verified so far, which calls the validity of these assumptions and the resulting models into question.
In sports with many degrees of freedom like football, predicting player motion is generally a very hard task because player positioning and motion are the result of an intractable, individual decision-making process. Luckily, many applications do not require predictions of the actual position and kinematic state of players but merely of their possibly reachable positions. This requirement essentially shifts the purpose of a motion model from predicting expected positions towards estimating the most remote reachable positions. Estimating such extreme values from real-world data can be difficult, because extreme values are typically rare and particularly likely to include a component of measurement error. Since the distribution of measurement error within player trajectories can vary significantly between data sets due to the use of different measurement systems and post-processing methods, a validation procedure of player motion models needs to flexibly account for various, possibly unknown distributions of measurement error.
The contributions of this paper are twofold: First, we formally propose a validation routine for the quality of player motion models which measures their ability to predict all reachable positions depending on the player's current position and kinematic state. The procedure favors those models that predict the smallest reachable areas which still contain a certain, manually defined proportion of actually observed positions. Second, we use this routine to evaluate and optimise the parameters of four models of motion, assuming (a) motion with constant speed, (b) motion with constant acceleration, (c) motion with constant acceleration with a speed limit, and (d) motion along two segments of constant speed. This evaluation sheds light on the predictive quality of these models and suggests sensible parameter values for them.
The rest of this paper is structured as follows: Section~\ref{state_of_the_art} provides some background on motion models in football and their validation. Section~\ref{validation_procedure} formally presents our validation routine. Section~\ref{experiment_results} describes our exemplary model validation and optimisation based on a real data set and discusses its results. Section~\ref{conclusion} summarises the contributions of this paper and points out possible directions of further research.
\section{Motion models in football: state of the art} \label{state_of_the_art}
Assumptions about the trajectorial motion of players are inherent to many performance indicators within the analysis of sports games. One example is the commonly used concept of space control which assigns control or influence over different areas on the pitch to players. It is used, for example, as a context variable to rate football actions~\cite{fernandez2018wide,rein2017pass} and for time series analyses~\cite{fonseca2012spatial}. Controlled space is often defined as the area that a player is able to reach before any other player, given a specific model of motion for each player. Commonly used for this purpose are motion models assuming constant and equal speed, which results in a Voronoi partition of the pitch~\cite{efthimiou2021voronoi}, or accelerated movement, possibly including a friction term~\cite{fujimura2005} or velocity-dependent acceleration~\cite{taki2000visualization} to limit the achievable speed. Spearman et al.~\cite{spearman2017physics} assume accelerated player motion with a limit on both acceleration and velocity in the context of modeling ground passes.
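To make the link between a motion model and space control concrete, the following sketch (not taken from any of the cited works) evaluates the constant-and-equal-speed case, i.e. a Voronoi partition, on a grid over the pitch; the pitch size and grid resolution are illustrative assumptions:
\begin{verbatim}
import numpy as np

def space_control(player_xy, pitch=(105.0, 68.0), cells=(105, 68)):
    # share of the pitch each player reaches first under equal constant speed
    p = np.asarray(player_xy, float)                    # (n_players, 2)
    gx, gy = np.meshgrid(np.linspace(0, pitch[0], cells[0]),
                         np.linspace(0, pitch[1], cells[1]))
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)   # (n_cells, 2)
    d = np.linalg.norm(grid[:, None, :] - p[None, :, :], axis=2)
    owner = d.argmin(axis=1)                            # nearest player wins
    return np.bincount(owner, minlength=len(p)) / len(grid)
\end{verbatim}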
Motion models have also been estimated directly from positional data by fitting a probability distribution over appropriately normalized future positions~\cite{brefeld2019probabilistic,caetano2021football}. However, such empirical models can be computationally expensive, prone to outliers and their current versions lend themselves less naturally to extreme value estimation than theoretically derived models.
Attempts to validate trajectorial player motion models are rare. Notably, Caetano et al.~\cite{caetano2021football} performed a validation of their space control model, and thus indirectly also the underlying motion model, by checking how many future positions of players fall within their associated controlled area for a number of time horizons.
\section{Player motion model and validation procedure} \label{validation_procedure}
\subsection{Objectives \& Requirements} \label{objectives}
The essential requirement for a player motion model with regards to our validation routine is that it defines a non-zero, finite area that corresponds to the set of positions that the player is able to reach according to the model.
The validation procedure can be represented as a function rating a suitable player motion model on how well it fits some real positional data. Usually, positional data in football consists of individual frames, with the position of each player defined for each frame. Consecutive frames are normally separated by a constant amount of time (seconds per frame).
In order to abstract our validation procedure from the underlying positional data, we introduce the concept of a \textit{trail}. A trail represents a slice of a player's trajectory over some duration $\Delta t$. Formally, a trail is defined as the quadruple:
\begin{equation} \label{eq_trail_def}
(\vec{x}_0, \vec{v}_0, \vec{x}_t, \Delta t)
\end{equation}
\begin{itemize}
\item $\vec{x}_0$: (2D) position of a given player at some arbitrary time $t_0$
\item $\vec{v}_0$: (2D) velocity of the player at time $t_0$
\item $\vec{x}_t$: (2D) position of the player at time $t = t_0 + \Delta t$
\item $\Delta t$: time horizon (predefined)
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=260pt]{img/trail.png}
\caption{A trail visualized. The black dots represent the positions of a player for a number of consecutive frames. In this case, the time horizon $\Delta t$ is four times the duration of a frame. The player's current velocity $\vec{v}_0$ can be calculated using numerical differentiation if it is not present in the data.}
\label{fig:trail}
\end{figure}
Figure \ref{fig:trail} visualizes how a trail can be extracted from the positional data of a player.
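To make the construction of trails concrete, the following Python sketch (our own illustration, not code from the original implementation) extracts trails from a single player's frame-wise positions; the frame rate, variable names, and the central-difference velocity estimate are assumptions made for the example.
\begin{lstlisting}[language=Python]
import numpy as np

def extract_trails(positions, fps=25, dt_horizon=1.0):
    """Extract (x0, v0, xt, dt) trails from one player's frame-wise 2D positions.

    positions  : array of shape (n_frames, 2), one row per frame
    fps        : frames per second of the tracking data (assumed)
    dt_horizon : time horizon Delta t in seconds
    """
    frame_dt = 1.0 / fps
    horizon_frames = int(round(dt_horizon / frame_dt))
    trails = []
    # start at 1 and stop early so the central difference and target frame exist
    for i in range(1, len(positions) - max(1, horizon_frames)):
        x0 = positions[i]
        v0 = (positions[i + 1] - positions[i - 1]) / (2.0 * frame_dt)  # central difference
        xt = positions[i + horizon_frames]
        trails.append((x0, v0, xt, dt_horizon))
    return trails
\end{lstlisting}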
The validation function takes a motion model $m$ and a set of trails $T$ as parameters and returns a numerical validation score. Formally, the function has the signature: $(m, T) \rightarrow \mathbb{R}$.
Since every reached position is trivially contained in a large enough area, the validation function should take not only correctness but also precision of the model into account. The correctness of a motion model measures its ability to make true predictions, i.e. to predict reachable areas that contain the true target position $\vec{x}_t$. Precision refers to how tightly a model narrows down the predicted areas. There is a trade-off relationship between correctness and precision.
One non-functional requirement worth mentioning is performance. Since a player motion model usually has to be invoked and evaluated separately for every trail, the computational complexity is generally $\Theta(n)$ with $n$ being the number of trails, unless some algorithmic optimisation or approximation is applied. Given that a high number of trails is desirable in order to get a representative sample of player displacements including a sufficient number of extrema, the validation of a trail should be as efficient as possible.
\subsection{Conceptual approach} \label{conceptual_approach}
\subsubsection{Measuring Correctness.} For the conceptual understanding of the correctness measure, it helps to consider only a single trail \eqref{eq_trail_def} of a player.
Using $\vec{x}_0$, $\vec{v}_0$ and $\Delta t$, a motion model makes a prediction for the reachable area. The reachable area predicted by a motion model $m$ is defined by the set of reachable positions $R_m$. If $\vec{x}_t$ is contained in $R_m$, the model $m$ has made a correct prediction. Following this logic, a motion model achieves the highest possible correctness if and only if for every trail, the model predicts a reachable area in which $\vec{x}_t$ is contained. Since considerable outliers may appear in practice due to measurement errors, we decided against weighting incorrect predictions according to their distance from the predicted reachable area.
The ratio between the number of correct predictions $n_{correct}$ and the number of total predictions $n_{total}$ of a model $m$ for a sample of trails $T$ will be called $hit\_ratio$.
\begin{equation} \label{eq_hit_ratio}
hit\_ratio(m, T) = \cfrac{n_{correct}}{n_{total}} = \cfrac{n_{correct}}{length(T)}
\end{equation}
The total number of predictions is always equal to the length of the sample of trails as the motion model makes exactly one prediction for each trail.
We can use the $hit\_ratio$ of a model as an indicator for its correctness. A high $hit\_ratio$ corresponds to a high correctness and vice-versa.
\subsubsection{Measuring Precision.}
In the context of this paper, the precision of a motion model represents how much it narrows down the reachable area of a player. Smaller reachable areas imply a higher precision of the model and are generally preferable, given an equal $hit\_ratio$.
To determine the precision of a model across multiple evaluated trails, we use the inverse of the mean surface area of all correctly predicted reachable areas. Incorrect predictions, where the target position $\vec{x}_t$ is not contained in the predicted reachable area, are excluded from this average, since the precision of a model would otherwise be inflated by very narrow, incorrect predictions. The precision of model $m$ across a sample of trails $T$ is given by:
\begin{equation} \label{eq_precision}
precision(m, T) = \cfrac{1}{\cfrac{1}{n_{correct}} \cdot \sum areas_{correct}} = \cfrac{n_{correct}}{\sum areas_{correct}}
\end{equation}
where $\sum areas_{correct}$ is the sum of all correctly predicted reachable areas.
\subsubsection{Defining an overall Validation Score.} Since we aim for a single numerical value as a score for player motion models, correctness and precision have to be balanced in some way. Due to the fact that some measurement-related extreme outliers can usually be expected in positional data from football games, a model with a $hit\_ratio$ of 100\% might not necessarily be desirable. Therefore, we introduce a minimum level of correctness $hit\_ratio_{min}$, which represents a minimal required ratio between correct and total predictions of a model. We propose that if a motion model $m$ satisfies the condition
\[hit\_ratio(m,T) \geq hit\_ratio_{min}\]
for a trail sample $T$, the exact value of $hit\_ratio(m,T)$ should not influence the overall validation score of $m$. This way, extreme outliers in the positional data caused by measurement-related errors have no influence on the validation score, as long as $hit\_ratio_{min}$ is chosen adequately.
Consequently, for a motion model $m$ that exceeds $hit\_ratio_{min}$, the validation score is only determined by the precision of the model \eqref{eq_precision}. We define the $score$ of a motion model $m$ with the sample of trails $T$ as:
\begin{equation} \label{eq_validation_score}
score(m,T) =
\begin{cases}
0 & \text{if $hit\_ratio(m,T) < hit\_ratio_{min}$}\\
precision(m,T) & \text{else}
\end{cases}
\end{equation}
The $score$ measures how well a motion model fits a sample of positional data. $hit\_ratio_{min}$ can be considered a free parameter of this validation procedure. Although the primary rationale behind introducing this parameter is to prevent outliers from influencing the validation score, it can also be tuned to some extent for adjusting the behaviour of the validation procedure. Using a relatively high $hit\_ratio_{min}$ favors models with a high correctness, whereas a lower one favors models with a high precision. It is important to note that due to this trade-off relationship, the ``right'' balance between correctness and precision depends on the specific use-cases of the model. Thus, the exact value for $hit\_ratio_{min}$ has to be chosen based on both the expected quality of the positional data in terms of outliers and the desired properties of the model with regard to correctness vs. precision.
In any case, $hit\_ratio_{min}$ has to be chosen such that
\begin{equation} \label{eq_condition_hit_ratio_min}
hit\_ratio_{min} \leq 1 - \cfrac{n_{outlier}}{length(T)}
\end{equation}
where $n_{outlier}$ is the (expected) number of outliers in the sample T. If \eqref{eq_condition_hit_ratio_min} were violated, the validation score would be influenced by the outliers in the sample, which invalidates the primary purpose of using $hit\_ratio_{min}$ as a threshold for the validation score. The identification of outliers or estimation of their prevalence within a sample is naturally non-trivial, since the appropriate statistical method for achieving an acceptable estimate depends on the specific properties of the sample. However, this paper will not go into further detail here.
\subsection{Implementation}
For the actual computation of the validation score of a given motion model with reasonable efficiency, we need to overcome a few challenges. This section outlines how these challenges can be managed when implementing the validation procedure.
\subsubsection{Implementing a motion model interface.}
First of all, a player motion model $m$ predicts a set of reachable positions $R_m$ based on the input parameters. This set $R_m$, however, is hardly useful for implementing our validation procedure because it is an infinite set. In practice, $R_m$ typically corresponds to a bounded, simply-connected shape, so $R_m$ is also defined through its boundary $B_m$. We can use a simple polygon to approximate $B_m$ and thus $R_m$. With an increasing number of polygon vertices, this approximation can become arbitrarily accurate.
Using a polygon to represent the reachable area predicted by a model not only allows a straightforward computation of the validation score but is also practical for defining a common interface for motion models. Therefore, we implement the abstract concept of a motion model as an interface with the parameters $(\vec{x}_0, \vec{v}_0, \Delta t)$ and an array of vertices representing a polygon as return type.
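As an illustration of such an interface (a sketch with hypothetical names, not the authors' implementation), a motion model can be represented as an object exposing a single method that returns polygon vertices; the constant-speed model (a) then reduces to a discretised circle.
\begin{lstlisting}[language=Python]
import numpy as np
from typing import Protocol

class MotionModel(Protocol):
    def reachable_polygon(self, x0: np.ndarray, v0: np.ndarray, dt: float) -> np.ndarray:
        """Return an (n_vertices, 2) array approximating the boundary B."""
        ...

class ConstantSpeedModel:
    """Model (a): every position within radius v_max * dt is reachable."""
    def __init__(self, v_max: float, n_vertices: int = 200):
        self.v_max = v_max
        self.n_vertices = n_vertices

    def reachable_polygon(self, x0, v0, dt):
        phi = np.linspace(0.0, 2.0 * np.pi, self.n_vertices, endpoint=False)
        circle = np.stack([np.cos(phi), np.sin(phi)], axis=1)
        return x0 + self.v_max * dt * circle
\end{lstlisting}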
\subsubsection{Implementing the validation function.} Translating the conceptual approach for calculating the validation score into an implementation is fairly straightforward. Both the $hit\_ratio$ \eqref{eq_hit_ratio} and the $precision$ \eqref{eq_precision} have to be calculated.
It has to be determined for each trail in $T$ whether $\vec{x}_t$ is inside the polygon returned by the motion model for the parameters $(\vec{x}_0, \vec{v}_0, \Delta t)$ of the current trail. For all trails where this is the case, it is also necessary to calculate the surface area of that polygon in order to later compute the precision \eqref{eq_precision}.
However, looping over the entire sample of trails is not always necessary. Following the definition of the validation score \eqref{eq_validation_score}, the score is always equal to 0 if the condition $hit\_ratio(m,T) < hit\_ratio_{min}$ is true. We can reformulate the hit ratio \eqref{eq_hit_ratio} as:
\[
hit\_ratio(m, T) = \cfrac{n_{correct}}{length(T)} = \cfrac{length(T)-n_{incorrect}}{length(T)}
\]
with $n_{incorrect}$ being the number of incorrect predictions made by $m$ for $T$. We can put this reformulated definition of $hit\_ratio(m,T)$ into our condition from the definition of the validation score \eqref{eq_validation_score}:
\[
\cfrac{length(T)-n_{incorrect}}{length(T)} < hit\_ratio_{min}
\]
Solved for $n_{incorrect}$:
\begin{equation} \label{eq_n_incorrect}
n_{incorrect} > (1-hit\_ratio_{min}) \cdot length(T)
\end{equation}
Therefore, $score(m, T)$ is equal to 0 if and only if this condition \eqref{eq_n_incorrect} is satisfied.
As $length(T)$ and $hit\_ratio_{min}$ are known beforehand, the execution of the validation procedure can be stopped prematurely if the number of incorrect predictions exceeds the threshold $(1-hit\_ratio_{min}) \cdot length(T)$. This small optimisation to the original procedure drastically accelerates the validation of motion models with a low correctness, i.e. a low $hit\_ratio$.
Algorithm \ref{alg:validation} describes the implementation for the computation of the validation score including the mentioned optimisation.
\noindent \begin{algorithm}
\SetKwInOut{Input}{inputs}
\SetKwInOut{Output}{output}
\SetKwProg{validate}{validate}{}{}
\Input{motion model $m$, sample of trails $T$, free parameter $hit\_ratio_{min}$}
\Output{The validation score as defined in \eqref{eq_validation_score}}
\BlankLine
$n_{incorrect} \gets 0$\;
Let $A$ be an empty array\;
Let $length(T)$ be the number of trails in $T$\;
\ForEach{$trail \in T$}{
\If{$n_{incorrect} > (1-hit\_ratio_{min})length(T)$}{
\KwRet{0}
}
Let $(\vec{x}_0, \vec{v}_0, \vec{x}_t, \Delta t)$ represent the current $trail$\;
Let $P$ be the reachable area polygon predicted by $m$ based on $(\vec{x}_0,\vec{v}_0,\Delta t)$\;
\eIf{$\vec{x}_t$ is contained in $P$}{
Let $a$ be the surface area of $P$\;
Append $a$ to $A$\;
}{
$n_{incorrect} \gets n_{incorrect} + 1$\;
}
}
$n_{correct} \gets length(T) - n_{incorrect}$\;
\KwRet{$n_{correct}/sum(A)$}\;
\BlankLine
\caption{Validation routine, optimized}\label{alg:validation}
\end{algorithm}
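A direct translation of Algorithm~\ref{alg:validation} into Python might look as follows. This is a sketch that assumes the \texttt{shapely} library for point-in-polygon tests and polygon areas, and a motion model exposing the vertex-returning interface described above; it is not the implementation used for the experiments.
\begin{lstlisting}[language=Python]
from shapely.geometry import Point, Polygon

def validate(model, trails, hit_ratio_min):
    """Validation score: precision if the minimum hit ratio is reached, else 0.

    model  : object with reachable_polygon(x0, v0, dt) -> (n, 2) vertex array
    trails : list of (x0, v0, xt, dt) quadruples
    """
    max_incorrect = (1.0 - hit_ratio_min) * len(trails)
    n_incorrect = 0
    areas = []
    for x0, v0, xt, dt in trails:
        if n_incorrect > max_incorrect:        # early stopping: score is 0 anyway
            return 0.0
        poly = Polygon(model.reachable_polygon(x0, v0, dt))
        if poly.contains(Point(xt)):
            areas.append(poly.area)            # correct prediction: record its area
        else:
            n_incorrect += 1
    if n_incorrect > max_incorrect:            # final check after the last trail
        return 0.0
    n_correct = len(trails) - n_incorrect
    return n_correct / sum(areas)              # precision = n_correct / sum of areas
\end{lstlisting}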
\section{Experiment \& Evaluation of results} \label{experiment_results}
To illustrate validation and parameter optimisation using the procedure defined in section~\ref{validation_procedure}, we evaluate four different models of motion, assuming (a) motion with constant speed, (b) motion with constant acceleration, (c) motion with constant acceleration until a speed limit is reached, and (d) motion along two segments with constant speed.
\subsection{Data set}
For the evaluation, we use the public sample data set provided by Metrica Sports which consists of three anonymised games of football~\cite{metrica}. The positional data has been collected using a video-based system and is provided at a frequency of 25 Hz. Since the data contains no velocity, we compute a player's velocity $\vec{v}_i$ for each frame $i$ as $\vec{v}_i = \frac{\vec{x}_{i+1} - \vec{x}_{i-1}}{2 \cdot 0.04s}$.
For performing our validation routine we convert the positional data into a list of trails. As outlined in \eqref{eq_trail_def}, each trail is defined as a quadruple $(\vec{x}_0, \vec{v}_0, \vec{x}_t, \Delta t)$. For this experiment, we use a constant time horizon of $\Delta t = 1s$ for all trails. After visual inspection of the data, the minimal required hit ratio is set to $hit\_ratio_{min}=99.975\%$. We evaluate the models on a random sample of $5 \cdot 10^5$ trails across all three games and all participating players.
\subsection{Preparation of motion models} \label{evaluated_models}
We phrase the motion models such that they define a set of reachable positions $R$. This set has to form a non-zero, finite area. To approximate the reachable area as a polygon, $R$ also has to be simply connected and have a computationally approximable boundary $B$. The sets $R$ and $B$ depend on the player's initial position $\vec{x}_0$ and velocity $\vec{v}_0$, the selected time horizon $\Delta t$, and the parameters of the respective model. For brevity of notation, we set $t_0 = 0$ such that $\Delta t = t$. The reachable areas defined by the models (a) - (d) are visualized for an example kinematic state in Figure~\ref{fig:reachable_area_per_model}.
\subsubsection{(a) Constant speed.}
Given motion with some constant speed $v \in [0, v_{max}]$ in direction $\phi$ from a starting location $\vec{x}_0$, the set of reachable target locations $R$ after time $t$ forms a disk with the center $\vec{x}_0$ and radius $v_{max} t$.
\[ R = \{\vec{x} \mid \exists \phi \in [0, 2\pi], \exists v \in [0, v_{max}], \vec{x} = \vec{x}_0 + v \colvec{\cos{\phi}}{\sin{\phi}} t\} \]
\begin{align}
B = \{\vec{x} \mid \exists \phi \in [0, 2\pi], \vec{x} = \vec{x}_0 + v_{max} \colvec{\cos{\phi}}{\sin{\phi}} t\}
\label{eq:B_a}
\end{align}
\subsubsection{(b) Constant acceleration.}
Given motion with some constant acceleration $a \in [0, a_{max}]$ in direction $\phi$ from a starting location $\vec{x}_0$ with starting velocity $\vec{v}_0$, the set of reachable positions $R$ after time $t$ forms a disk with the center $\vec{x}_0 + \vec{v}_0t$ and radius $\frac{1}{2}a_{max}t^2$.
\[ R = \{\vec{x} \mid \exists \phi \in [0, 2\pi], \exists a \in [0, a_{max}], \vec{x} = \vec{x}_0 + \frac{1}{2}a \colvec{\cos{\phi}}{\sin{\phi}} t^2 + \vec{v}_0 t\} \]
\begin{align}
B = \{\vec{x} \mid \exists \phi \in [0, 2\pi], \vec{x} = \vec{x}_0 + \frac{1}{2}a_{max} \colvec{\cos{\phi}}{\sin{\phi}} t^2 + \vec{v}_0 t\}
\label{eq:B_b1}
\end{align}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{img/reachable-area-per-model.pdf}
\caption{Exemplary boundaries $B$ of the reachable area defined by different motion models when the player starts at $\vec{x}_0 = \vec{0}$ with velocity $\vec{v}_0=\colvec{5\frac{m}{s}}{0}$. The time horizon is $\Delta t = 1s$.}
\label{fig:reachable_area_per_model}
\end{figure}
\subsubsection{(c) Constant acceleration with speed limit.}
We want to restrict model (b) such that the player's speed never exceeds a fixed maximum $v_{max}$. For that purpose, the simulated trajectory is divided into two segments: A first segment of motion with constant acceleration $a$ until $v_{max}$ is reached, and a second segment of motion with constant speed $v_{max}$. A non-negative solution for the time $t_{acc}$ until $v_{max}$ is reached is guaranteed to exist if the player's initial speed $|\vec{v}_0|$ does not exceed $v_{max}$. For that reason, $\vec{v}_0$ is clipped to $\vec{v}_0^* = \vec{v}_0 \frac{\min(|\vec{v}_0|, v_{max})}{|\vec{v}_0|}$. With $t_{acc}^* = \min(t_{acc}, t)$, the reachable area $R$ corresponds to the following simply connected shape, which can be non-circular as $t_{acc}^*$ depends on $\phi$.
\[ R = \{\vec{x} \mid \exists \phi \in [0, 2\pi], \exists a \in [0, a_{max}], \vec{x} = \vec{x}_0 + \vec{v}_0^* t + a t_{acc}^* (t-\frac{1}{2}t_{acc}^*) \colvec{\cos{\phi}}{\sin{\phi}}\} \]
\begin{align}
B = \{\vec{x} \mid \exists \phi \in [0, 2\pi],\vec{x} = \vec{x}_0 + \vec{v}_0^* t + a_{max} t_{acc}^* (t-\frac{1}{2}t_{acc}^*) \colvec{\cos{\phi}}{\sin{\phi}}\}
\label{eq:B_b2}
\end{align}
\subsubsection{(d) Two-segment constant speed.}
We propose the following approximate model to respect the current kinematic state of a player: The player's motion is divided into two segments of constant-speed motion. First, with some speed $v_{inert}$ for a predetermined amount of time $t_{inert}$ in the direction of $\vec{v}_0$ and, subsequently, with speed $v_{final}$ in some arbitrary direction $\phi$. Using $t_{inert}^* = \min(t_{inert}, t)$, the set of reachable positions $R$ forms a disk with center $\vec{x}_0 + v_{inert} \frac{\vec{v}_0}{|\vec{v}_0|} t_{inert}$ and radius $v_{final}(t-t_{inert})$ if $t_{inert} \leq t$, and is otherwise reduced to the point $\vec{x}_0 + v_{inert} \frac{\vec{v}_0}{|\vec{v}_0|} t$.
\[ R = \{\vec{x} \mid \exists \phi \in [0, 2\pi], \exists v \in [0, v_{final}], \vec{x} = \vec{x}_0 + v_{inert} \frac{\vec{v}_0}{|\vec{v}_0|} t_{inert}^* + v_{final} \colvec{\cos{\phi}}{\sin{\phi}}(t-t_{inert}^*) \} \]
\begin{align}
B = \{\vec{x} \mid \exists \phi \in [0, 2\pi], \vec{x} = \vec{x}_0 + v_{inert} \frac{\vec{v}_0}{|\vec{v}_0|} t_{inert}^* + v_{final} \colvec{\cos{\phi}}{\sin{\phi}}(t-t_{inert}^*)\}
\label{eq:B_c}
\end{align}
The values of $v_{inert}$ and $v_{final}$ can in principle be set in various ways. In our parameterisation, we determine the value of $v_{inert}$ according to a boolean parameter \texttt{keep\_initial}, such that the speed of the player is either preserved or set to a fixed value $v_{const}$.
\[v_{inert} = \begin{cases}
|\vec{v}_0| & \text{if } \texttt{keep\_initial} \\
v_{const} & \, \text{else}
\end{cases}\]
The value of $v_{final}$ is either computed based on two parameters $a_{max}$ and $v_{max}$, which corresponds to assuming that the player accelerates during the first segment, or set to the fixed value $v_{const}$.
\[v_{final} = \begin{cases}
\min(v_{inert} + a_{max}t_{inert}, v_{max}) & \text{if } a_{max} \text{ and } v_{max} \text{ are set}\\
v_{const} & \, \text{else}
\end{cases}\]
\subsubsection{Computation of reachable area}
It follows from equations~\eqref{eq:B_a} - \eqref{eq:B_c} that for each presented model, the reachable area can be approximated as a polygon by computing the vertices of that polygon as points from the boundary $B$ along a discrete set of angles $\phi$. For our evaluation, we use $200$ evenly spaced angles.
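As a concrete example, the boundary of the two-segment constant-speed model (d) can be discretised as follows; this is an illustrative helper of our own, with parameter names matching the notation above.
\begin{lstlisting}[language=Python]
import numpy as np

def two_segment_boundary(x0, v0, dt, v_inert, v_final, t_inert, n_vertices=200):
    """Polygon vertices approximating the boundary B of model (d)."""
    t_inert_star = min(t_inert, dt)
    speed0 = np.linalg.norm(v0)
    direction = v0 / speed0 if speed0 > 0 else np.zeros(2)
    center = x0 + v_inert * direction * t_inert_star   # end of the inertial segment
    radius = v_final * (dt - t_inert_star)             # zero if t_inert >= dt
    phi = np.linspace(0.0, 2.0 * np.pi, n_vertices, endpoint=False)
    return center + radius * np.stack([np.cos(phi), np.sin(phi)], axis=1)
\end{lstlisting}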
\subsection{Optimisation of model parameters} \label{optimisation}
Based on the validation procedure outlined in \cref{validation_procedure}, we aim to find the motion model $m$ out of the set of considered models $M$ which maximises the $score$ \eqref{eq_validation_score} for a given sample of trails $T$. We define the optimal model as:
\begin{equation} \label{arg_max_compact}
\underset{m \in M}{\operatorname{arg \: max}} \ (score(m, T))
\end{equation}
However, for each of the four player motion models defined in \cref{evaluated_models}, there are free parameters which specify its behaviours. For this reason, in order to determine the best model, the optimal combination of parameters has to be found for each model. We thus extend \eqref{arg_max_compact} by introducing the tuple $P_m$ that represents the values for the free parameters of a model $m(P_m)$:
\begin{equation}
\underset{m \in M}{\operatorname{arg \: max}}\ (
\underset{P_m}{\operatorname{max}} \ (score(m(P_m), T))
)
\end{equation}
The parameters of the models (a) - (d) are, as outlined in \cref{evaluated_models}:
\begin{itemize}
\item (a) Constant speed: $P_{(a)} = (v_{max})$
\item (b) Constant acceleration: $P_{(b)} = (a_{max})$
\item (c) Constant acceleration with speed limit: $P_{(c)} = (a_{max}, v_{max})$
\item (d) Two-segment constant speed: $P_{(d)} = (t_{inert}, \texttt{keep\_initial}, v_{const}, a_{max}, v_{max})$
\end{itemize}
The main challenge here is to determine the optimal $P_m$ for a model with regard to its $score$ \eqref{eq_validation_score}. For this experiment, the best parameter configuration $P_m$ is determined using Bayesian optimisation. However, Bayesian optimisation generally does not handle discrete parameter values such as the boolean variable $\texttt{keep\_initial}$ in model (d). There are proposals for enabling discrete variables, most notably by Luong et al. \cite{luong_bayesian_2019}, but this approach works only if all values in $P_m$ are discrete, which does not apply to our models.
Luckily, our models have very few discrete parameters, so we perform one Bayesian optimisation for each combination of discrete parameter values and use the best score across those results.
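One way to realise this is sketched below, assuming the \texttt{scikit-optimize} package; the search bounds, the number of evaluations, and the \texttt{TwoSegmentModel} wrapper are illustrative assumptions and not the settings used in our experiment.
\begin{lstlisting}[language=Python]
from skopt import gp_minimize
from skopt.space import Real

def optimise_model_d(trails, hit_ratio_min):
    """One Bayesian optimisation per value of the discrete parameter keep_initial."""
    best = None
    for keep_initial in (True, False):
        def objective(params):
            t_inert, v_const, a_max, v_max = params
            # TwoSegmentModel is a hypothetical wrapper around two_segment_boundary();
            # validate() is the earlier sketch of Algorithm 1
            model = TwoSegmentModel(t_inert, keep_initial, v_const, a_max, v_max)
            return -validate(model, trails, hit_ratio_min)   # minimise negative score
        space = [Real(0.0, 1.0, name="t_inert"),
                 Real(0.0, 12.0, name="v_const"),
                 Real(0.0, 30.0, name="a_max"),
                 Real(4.0, 12.0, name="v_max")]
        res = gp_minimize(objective, space, n_calls=50, random_state=0)
        if best is None or -res.fun > best[0]:
            best = (-res.fun, keep_initial, res.x)
    return best   # (best score, keep_initial, optimised continuous parameters)
\end{lstlisting}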
\subsection{Evaluation of results}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{img/results.pdf}
\caption{Comparison of the performance of the four models (a) - (d) with their optimised parameter values.}
\label{fig:results}
\end{figure}
The performance of the optimised models (a) - (d) and their parameter values are shown in Figure~\ref{fig:results}.
The constant-speed model (a) unsurprisingly shows a weaker performance ($score^{-1}=218m^2$) than the more sophisticated models (c) and (d), since it does not factor in the initial kinematic state of the player. If the player moves at a high speed, the model unrealistically assumes that the player can instantly run at maximal speed in the opposite direction. Thus, for high speeds, the model overestimates the amount of reachable space in directions which sufficiently deviate from the direction which the player is initially moving towards.
The naive constant acceleration model (b) ($score^{-1}=344m^2$) performs even worse than model (a), likely because it makes the unrealistic assumption that the possible magnitude of acceleration is independent of the magnitude and direction of a player's current velocity. This implies in particular that for high speeds, the amount of reachable space in the direction that a player is moving towards will be heavily overestimated since the model assumes that the player's speed can increase unboundedly.
The model assuming constant acceleration with a speed limit (c) ($score^{-1}=144m^2$) outperforms models (a) and (b). However, the optimised value of the maximally possible acceleration of a player $a_{max}$ is physically unrealistic. A value of $a_{max}=19.42 \frac{m}{s^2}$ assumes that a player can accelerate from zero to the top speed $v_{max}=8.91\frac{m}{s} (=32.08\frac{km}{h})$ within about half a second, which is implausibly fast. Therefore, the model still overestimates the reachable area. One approach to improve the constant acceleration model could be to view the maximally possible acceleration as being dependent on the direction and magnitude of the current velocity.
The two-segment constant speed model (d) ($score^{-1}=71.7m^2$) is able to account for all reachable positions by predicting only about half the area of model (c). It successfully narrows down the area that a player can reach within one second to a circle with an average radius of $4.8$ meters which is highly accurate. Model (d) not only achieves the best score in our evaluation, but is also mathematically simpler than model (c). For that reason, it is also computationally more efficient across the various tasks that motion models are used for, like the computation of reachable areas or the shortest time to arrive at a specific location. A drawback of model (d) is that in its presented form, it only makes meaningful predictions for time horizons $\Delta t > t_{inert}$ (where the optimised value is $t_{inert} = 0.22s$). Below this duration, the model is not applicable unless it is appropriately extended.
In summary, our newly presented model (d) is both highly accurate and computationally efficient. The models (a) - (c) have obvious weaknesses and are not adequate to accurately identify reachable locations.
\section{Conclusion}\label{conclusion}
We presented a novel approach to the validation and optimisation of models of trajectorial player motion in football and similar sports. We also presented an empirical comparison of the accuracy of various such models. In line with our expectations, more sophisticated and accurate assumptions made by a motion model generally tend to be reflected in a better predictive performance. Yet, by far the best-performing model is our proposed approximate model which assumes motion along two segments with constant speed. Using this model allows researchers to compute complex performance indicators more efficiently and accurately over large data sets. In contrast, player motion should not be assumed to take place with constant speed or constant acceleration with unlimited speed. These assumptions are inappropriate to accurately distinguish reachable from unreachable locations.
The validation and optimisation approach described in this paper can be applied to data with arbitrary distributions of measurement error. However, this is also a disadvantage, since the threshold for the amount of outliers that are attributed to measurement error has to be determined manually. This threshold also has to be set for each distinguished population, depending on the frequency of extrema and the distribution of measurement error in the population. If, for example, one wants to evaluate motion models for each player individually, it would be misleading to assign the same threshold to each player because players who sprint regularly during the game produce more positional extrema (and thus outliers) than goalkeepers, for example, who are rarely forced to run with maximal effort. A solution to the problem of having to specify a threshold is to perform validation and optimisation with different thresholds and analyse how this choice affects the result. Also, the approach presented here can be extended to automatically determine an optimal threshold for known error distributions.
In the future, we plan to search for motion models that further exceed the presented ones in accuracy and computational efficiency. A key towards this goal is to estimate motion models from positional data. Many problems addressed in this paper are mirrored in empirical model fitting, for example the need to exclude outliers and the lack of generalisability across populations~\cite{brefeld2019probabilistic}. In the context of validation, empirical models can serve as a highly informative benchmark to reveal how well theoretical models are able to approximate actual human motion.
\section{Introduction}
\label{Sec:Intro}
The rapid increase in data collection technology in sports over the previous decade has led to an evolution of player monitoring with regard to player performance, assessment, and injury detection and prevention \citep{Bourdon2017, Akenhead2016,De2018}.
Within both individual and team sports, objective and subjective player monitoring data are commonly being used to assess the acute and chronic effects of training and recovery in order to maximize the current and future performance of athletes \citep{Mujika2017, Saw2016, Thorpe2015, Thorpe2017, Buchheit2013, Meeusen2013}.
Customary metrics of an athlete's training response include objective measures of performance, physiology, or biochemistry, such as heart rate, heart rate variability, blood pressure, or oxygen consumption \citep{Borresen2009}.
Recently, there has been strong emphasis on also including subjective, or perceptual, measures of well-being in athlete assessment \citep{Akenhead2016, Tavares2018}.
Subjective measures, which are often self-reported by the athletes using self-assessment surveys, include wellness profiles to quantify physical, mental, and emotion state, sleep quality, energy levels and fatigue, and measures of perceived effort during training.
Subjective measures have been reported to be more sensitive and consistent than objective measures in capturing acute and chronic training loads \citep{Saw2016, Tavares2018}.
In particular, \cite{Saw2016} found subjective well-being to negatively respond to acute increases in training load as well as to chronic training, whereas acute decreases in training load led to an increases in subjective well-being.
Subjective rate of perceived effort has also been reported to provide a reasonable assessment of training load compared to objective heart-rate based methods \citep{Borresen2008}; however, accuracy varied as a function of the amount of high- and low-intensity exercise.
\cite{Tavares2018} investigated the effects of training and recovery on rugby athletes based on an objective measure of fatigue using countermovement jump tests as well as perceptual muscle-group specific measures of soreness for six days prior to a rugby match and two days following.
They found that perceptual muscle soreness tended to diminish after a recovery day while lower body muscle soreness remained above the baseline.
In addition, even though running load, as measured by total distance and high metabolic load distance, did not differ between the match and training sessions, the change in perceptual measures of muscle soreness indicated greater physical demands on the athletes during rugby matches.
The highest soreness scores for all muscle groups were reported the morning after the match, and these scores remained high the subsequent morning.
Interestingly, no significant difference in countermovement jump scores was detected across days.
Since no one metric is best at quantifying the effects of training on fitness and fatigue and predicting performance \citep{Bourdon2017}, subjective and objective measures are often used in conjunction to guide training programs to improve athlete performance.
Self-reported wellness measures are comprised of responses to a set of survey questions regarding the athlete's mental, emotional, and physical well-being.
Most commonly, these survey responses are recorded on ordinal (or Likert) scales.
Collectively, these ordinal response variables inform on the athlete's overall well-being.
Previous studies looked at the average of a set of ordinal wellness response variables and treated the average as a continuous response variable in a multiple linear regression model \citep{Gallo2017}.
Whereas this approach can be used as an exploratory tool to identify the effects of important training variables (work load, duration of matches/games, recovery) on individual wellness, it has three major shortcomings.
First and foremost, it throws away important information by reducing the multiple wellness metrics into one value.
Second, it assumes that each ordinal response variable is equally important (i.e., assigns weight $1/J$ for each $j=1, \dots, J$ variable).
Since there is likely variation in the sensitivity of some wellness variables in an athlete's response to training and recovery, failing to differentiate between these variables could mask indicators of potential poor performance or negative health outcomes.
Lastly, by modeling the average of the wellness metrics, the model is unable to identify important variable-specific relationships between training and wellness.
With the increase in collection of self-reported wellness measures and their identified significance in monitoring player performance, we need more advanced statistical methods and models that leverage the information across all wellness metrics in order to obtain a better understanding of the subjective measures of training and wellness. These models could then be used to synthesize these data in order to guide training programs and player evaluations.
There are many statistical challenges in modeling subjective wellness data.
First, the data are multivariate, with each variable representing a particular aspect of wellness (e.g., energy levels, mental state, etc.).
Second, the data are ordinal, requiring more advanced generalized linear models, which are not often included in customary statistical programming packages \citep[although see the R package \texttt{mvord}][]{mvord}.
Lastly, the subjective wellness data are individual-specific.
The day-to-day variation in wellness scores reported by an individual will differ greatly not only between individuals but also across wellness variables within an individual.
In addition, training and recovery can have varying short- and long-term effects on athlete wellness, and these effects are also known to vary across individuals.
As such, statistical modeling of individual wellness needs to be able to account for the variation across individuals in response to the cumulative effects of training and recovery.
Generalized latent variable models, including multilevel models and structural equation models, have been proposed as a comprehensive approach for modeling multivariate ordinal data and for capturing complex dependencies both between variables and within variables across time and space \citep{Skrondal2004}.
To capture flexible, nonlinear relationships, \cite{Deyoreo2018} developed a Bayesian nonparametric approach for multivariate ordinal regression based on mixture modeling for the joint distribution of latent responses and covariates.
\cite{Schliep2013} developed a multi-level spatially-dependent latent factor model to assess the biotic condition of wetlands across a river basin using five ordinal response metrics.
A modified approach by \cite{Cagnone2018} used a latent Markov model to model temporal dynamics in the latent factor for multivariate longitudinal ordinal data.
\cite{Cagnone2009} propose a latent variable model with both a common latent factor and auto-regressive random effect to capture dependencies between the variables dynamically through time.
Within the class of generalized linear multivariate mixed models,
\cite{Chaubert2008} proposed a dynamic multivariate ordinal probit model by placing an auto-regressive structure on the coefficients and threshold parameters of the probit regression model.
Multivariate ordinal data are also common in the educational testing literature where mixed effects models (or item response theory models) are used to compare ordinal responses across individual \citep{Lord2012}.
\cite{Liu2006} developed a multi-level item response theory regression model for longitudinal multivariate ordinal data to study the substance use behaviors over time in inner-city youth.
Drawing on this literature, we propose using the latent factor model approach to capture marginal dependence between the multivariate ordinal wellness variables.
We extend the current approaches by modeling the latent factors using distributed lag models to allow for functional effects of training and recovery.
Distributed lag models (also known as dynamic regression models) stem from the time series literature \citep[see][Chapter 9]{Hyndman2018} and offer an approach for identifying the dynamics relating two time series \citep{Haugh1977}.
Distributed lag models can be written as regression models in which a series of lagged explanatory variables accounts for the temporal variability in the response process.
The coefficients of these lagged variables can be used to infer short- and long-term effects of important explanatory variables on the response.
Due to the dependent relationship between the lagged values of the process, constraints are often imposed on the coefficients to induce shrinkage but maintain interpretability.
Often the constraints impose a smooth functional relationship between the explanatory variables and response.
The smoothed coefficients can then be interpreted as a time series of effect sizes and provide insights to the significance of the explanatory variables at various lags.
In epidemiological research, distributed lag models have been used to capture the lag time between exposure and response. For example, \cite{Schwartz2000} studied air pollution exposure on adverse health outcomes in humans and found up to a 5 day lag in exposure effects.
\cite{Gasparrini2010} developed the family of distributed lag non-linear models to study non-linear relationships between exposure and response.
The added flexibility of their model enables the shape and temporal lag of the relationship to be captured simultaneously.
In the ecological context, \cite{Ogle2015} and \cite{Itter2019} presented a special case of distributed lag models to capture so-called `ecological memory'.
Like distributed lag models, ecological memory models assign a non-negative measure, or weight, to multiple previous time points to capture the possible short- and long-term effects of environmental variables (e.g., climate variables, such as precipitation or temperature) on various environmental processes (e.g., species occupancy).
These models are able to identify not only significant past events on the environmental process of interest but also the length of ``memory" these environmental process have with respect to the environmental variables.
An important distinction between ecological memory models and distributed lag models is that ecological memory models assume the short- and long-term effects to be consistent.
That is, with non-negative weights, the relationship between the explanatory variable and the response has the same direction at all lags, governed by the sign of the coefficient.
In modeling the short- and long-term effects of training and recovery on athlete wellness, we note that this limitation may be overly restrictive.
We propose a joint multivariate ordinal response latent factor distributed lag model to tackle the challenges outlined above.
The joint model specification enables the borrowing of strength across athletes with regard to capturing the general trends in training response.
However, athlete-specific model components allow for individualized effects and relationships between wellness measures, training, and recovery.
The latent factors capture the temporal variability in athlete wellness as a function of training and recovery.
In addition, the multivariate model allows for the latent factors to be informed by each of the self-reported wellness metrics, which alleviates the pre-processing of computing the average and treating it as a univariate response.
We model the latent factors using distributed lag models in order to capture the cumulative effects of training load and recovery on athlete wellness.
The remainder of the paper is outlined as follows.
In Section \ref{Sec:Data} we describe the ordinal athlete wellness data, including the metrics of workload and recovery.
Summaries of the data as well as exploratory data analysis is included.
The multivariate ordinal response latent factor distributed lag model is developed in Section \ref{Sec:Model}.
Details with regard to the model specification, model inference, and important identifiability constraints are included.
The model is applied to the athlete data in Section \ref{Sec:Results} and important results and inference are discussed.
We conclude with a summary and discussion of future work in Section \ref{Sec:Disc}.
\section{Athlete wellness, training, and recovery data}
\label{Sec:Data}
Daily wellness, training, and recovery data were obtained for 20 professional referees during the 2015 and 2016 seasons of Major League Soccer (MLS), which spanned from approximately February 1$^{st}$ to October 30$^{th}$ of each year.
Each referee followed a training program established by the Professional Referees Organization and attended bi-weekly training camps.
Over the span of these two seasons, the number of matches officiated by each referee ranged from 3 to 44, with an average of 28 matches.
Upon waking up each morning, the referee (hereafter, ``athlete'') was prompted on his smart phone to complete a wellness survey. The survey questions entailed assigning a value to each wellness variable (metric) on a scale from 1 to 10 where 1 is low/worst and 10 is high/best.
These wellness variables include \emph{energy}, \emph{tiredness}, \emph{motivation}, \emph{stress}, \emph{mood}, and \emph{appetite}.
Prior to the start of data collection, the referees were trained on the ordinal scoring for these variables such that a high value of all variables corresponds to being energized, fully rested, feeling highly motivated, with low stress, a positive attitude, and being hungry.
Due to the limited number of observations in each of the 10 categories for most individuals and wellness variables, we transformed the raw ordinal data to a 5 category scale.
The transformation we elected to use was individual specific, but uniform across metrics as we assumed each individual's ability to differentiate between ordinal values was consistent across variables.
For each individual, we combined the observed ordinal data and conducted $k$-means clustering using 5 clusters.
The $k$-means clustering technique minimizes within group variability and identifies a center value for each cluster.
The midpoints between the ordered cluster centers
were used as cut-points to assign ordinal values from 1 to 5.
Each of the six ordinal wellness variables for the individual was transformed to the 5 category ordinal scale using the same set of cut-points.
We investigated various transformations (e.g., basic combining of classes 1-2, 3-4, etc., and re-scaling the data to (0,1) and then applying a threshold based on percentiles 0.2, 0.4, etc.) but found model inference in general to be robust to these choices.
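For concreteness, the following sketch (with scikit-learn and variable names of our own choosing) carries out this transformation for one individual by pooling all of that individual's raw scores, clustering them, and converting the ordered cluster centers into cut-points.
\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.cluster import KMeans

def five_category_cutpoints(raw_scores, seed=0):
    """Individual-specific cut-points from k-means with 5 clusters.

    raw_scores : 1D array pooling an individual's raw ordinal responses (1-10)
    Returns the 4 cut-points used to map scores to categories 1-5.
    """
    km = KMeans(n_clusters=5, n_init=10, random_state=seed)
    km.fit(np.asarray(raw_scores, dtype=float).reshape(-1, 1))
    centers = np.sort(km.cluster_centers_.ravel())
    return (centers[:-1] + centers[1:]) / 2.0   # midpoints between ordered centers

def to_five_categories(raw_scores, cutpoints):
    return np.digitize(raw_scores, cutpoints) + 1   # values in {1, ..., 5}
\end{lstlisting}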
Distributions of the 5-category ordinal data for the two metrics, \emph{tiredness} and \emph{stress}, are shown in Figure \ref{Fig:WM1} for four athletes denoted Athlete A, B, C, and D.
The distributions of wellness scores vary quite drastically both across metrics for a given athlete as well as across athletes for a given metric.
The distributions of the other wellness variables for these four athletes are included in Figure \ref{Fig: WM2} of the Supplementary Material.
In addition to the wellness variables, each athlete also reported information on training and recovery.
These data consisted of the duration (hours) and rate of perceived effort (RPE; scored 0-10) of the previous day's workout as well as the quantity (hours) and quality (scored 1-10) of the previous night's sleep.
Distributions of RPE and workout duration are shown in Figure \ref{Fig:RPEDur} for the four athletes.
RPE has been found to be a valid method for quantifying training across a variety of types of exercise \citep{Foster2001, Haddad2017}.
The distribution of RPE varies across each individual both in terms of center and spread as well as with regard to the frequency of ``0" RPE days (i.e., rest days).
For example, over 40\% of the days are reported as rest days for Athlete B.
For non-rest days (those with RPE $>0$), Athlete D has a much higher average RPE compared to the other three athletes.
The distribution of workout duration tends to be multi-modal for each athlete.
In particular, we see spikes around 90 minutes, which is consistent with the length of MLS games.
The distribution of shorter workouts for Athlete C tends to be more uniform between 10 and 70 minutes with fewer short workouts than the other athletes.
Similar summaries of the number of hours slept and the quality of sleep are shown in Figure \ref{Fig:Sleep}.
In general, the average number of hours of sleep for an individual ranges between 6-8 hours, however the frequency of nights with 5 or less hours and 10 or more hours varies significantly across individuals.
Sleep quality is most concentrated on values between 6 and 8 for each individual, with the distributions being skewed towards lower values.
We define two important measures of training and recovery to use in our modeling; namely, \emph{workload} and \emph{recovery}.
Workload, which is also commonly referred to as training load, is quantified as the product of the rate of perceived effort and the duration for the training period \citep{Foster1996, Foster2001, Brink2010}.
Therefore, a training session of moderate to high intensity and average to long duration will result in a large workload value.
Recovery is defined using the reported quality and quantity of sleep.
We applied principal component analysis on the two sleep metrics for each individual and found that one loading vector captured between 64\% and 94\% of the variation.
As a result, recovery for each athlete was defined using the first principal component, where large values correspond to overall good recovery (high quality and long duration sleep).
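The two derived covariates can be computed, for example, as in the sketch below (a hedged illustration; the input names are our own), with recovery taken as the first principal component of the standardised sleep variables, computed separately for each athlete.
\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.decomposition import PCA

def workload(rpe, duration_hours):
    """Session workload: rate of perceived effort times training duration."""
    return np.asarray(rpe, dtype=float) * np.asarray(duration_hours, dtype=float)

def recovery(sleep_hours, sleep_quality):
    """First principal component of standardised sleep quantity and quality."""
    X = np.column_stack([sleep_hours, sleep_quality]).astype(float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    pca = PCA(n_components=1)
    scores = pca.fit_transform(X).ravel()
    if pca.components_[0].sum() < 0:   # orient so large values mean good recovery
        scores = -scores
    return scores
\end{lstlisting}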
\section{Joint multivariate ordinal response latent factor distributed lag model}
\label{Sec:Model}
We model the ordinal wellness data using a multivariate ordinal response latent factor distributed lag model.
We first describe the multivariate ordinal response model in Section \ref{Ord} and then offer two latent factor model specifications using distributed lag models in Section \ref{LFM}.
Identifiability constraints are discussed in Section \ref{sec:Ident}, and full prior specifications for Bayesian inference are given in Section \ref{sec:Inf1}.
Section \ref{sec:Inf2} introduces important inference measures for addressing questions regarding athlete wellness, workload, and recovery.
\subsection{Multivariate ordinal response model}
\label{Ord}
Let $i = 1, \dots, n$ denote individual, $j=1, \dots, J$ denote wellness variable (which we refer to as \emph{metric}), and $t =1, \dots, T_i$ denote time (day).
Then, define $Z_{ijt} \in \{1, 2, \dots, K_{ij}\}$ to be the ordinal value for individual $i$ and wellness metric $j$ on day $t$.
Without loss of generality, let $K_{ij}=5$ for each $i$ and $j$ such that each wellness metric is ordinal taking integer values $1, \dots, 5$ for each individual.
We model the ordinal response variables using a cumulative probit regression model.
We utilize the efficient parameterization of \cite{Albert1993}, and define the latent metric parameter $\widetilde{Z}_{ijt}$ such that
\begin{equation}
Z_{ijt} = \begin{cases}
1 ~~~& -\infty < \widetilde{Z}_{ijt} \leq \theta^{(1)}_{ij}\\
2 ~~~& \theta^{(1)}_{ij} < \widetilde{Z}_{ijt} \leq \theta^{(2)}_{ij}\\
3 ~~~& \theta^{(2)}_{ij} < \widetilde{Z}_{ijt} \leq \theta^{(3)}_{ij}\\
4 ~~~& \theta^{(3)}_{ij} < \widetilde{Z}_{ijt} \leq \theta^{(4)}_{ij}\\
5 ~~~& \theta^{(4)}_{ij} < \widetilde{Z}_{ijt} < \infty. \\
\end{cases}
\end{equation}
Here, $\theta^{(k-1)}_{ij}$ and $\theta^{(k)}_{ij}$ denote the lower and upper thresholds of ordinal value $k$, for individual $i$ and wellness metric $j$, where $\theta^{(k-1)}_{ij} < \theta^{(k)}_{ij}$.
Under the general probit regression specification,
\begin{equation}
\widetilde{Z}_{ijt} = \mu_{ijt} + \epsilon_{ijt}
\label{eq:Ztilde}
\end{equation}
where $\epsilon_{ijt} \sim N(0,\sigma^2_{ij})$.
In the Bayesian framework with inference obtained using Markov chain Monte Carlo, this parameterization enables efficient Gibbs updates of the model parameters, $\widetilde{Z}_{ijt}$, $\mu_{ijt}$, and $\sigma^2_{ij}$, for all $i$, $j$, and $t$ \citep{Albert1993}.
Posterior samples of the threshold parameters, $\theta^{(k)}_{ij}$, require a Metropolis step.
More details with regard to the sampling algorithm are given in Section \ref{sec:Ident}.
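To fix ideas, the following sketch (arbitrary illustrative values, not estimates from the data) simulates ordinal responses from the cumulative probit specification for a single athlete and metric, given a latent mean series and a threshold vector.
\begin{lstlisting}[language=Python]
import numpy as np

def simulate_ordinal(mu, sigma, thresholds, seed=0):
    """Draw ordinal responses Z in {1,...,K} from the cumulative probit model.

    mu         : array of latent means mu_{ijt} over time
    sigma      : residual standard deviation sigma_{ij}
    thresholds : increasing cut-points (theta^(1), ..., theta^(K-1)), theta^(1) = 0
    """
    rng = np.random.default_rng(seed)
    z_tilde = mu + sigma * rng.standard_normal(len(mu))   # latent continuous metric
    return np.digitize(z_tilde, thresholds) + 1           # ordinal category 1..K

# example with 5 categories, arbitrary latent means and thresholds
z = simulate_ordinal(mu=np.array([-0.5, 0.2, 1.4, 2.8]), sigma=1.0,
                     thresholds=np.array([0.0, 1.0, 2.0, 3.0]))
\end{lstlisting}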
\subsection{Latent factor models}
\label{LFM}
In modeling $\mu_{ijt}$, we propose both a univariate and multivariate latent factor model specification to generate important, distinct inferential measures.
We begin with the univariate latent factor model for $\widetilde{Z}_{ijt}$.
Let $Y_{it}$ denote the latent factor at time $t$ for individual $i$.
The assumption of this model is that $Y_{it}$ is driving the multivariate response for each individual at each time point.
That is, for each $i$, $j$, and $t$, we define
\begin{equation}
\mu_{ijt} = \beta_{0ij} + \beta_{1ij} Y_{it}
\label{eq:LFM}
\end{equation}
where $\beta_{0ij}$ is a metric-specific intercept term and $\beta_{1ij}$ is a metric-specific coefficient of the latent factor for individual $i$.
We can extend (\ref{eq:LFM}) to an $M$-variate latent factor model where we now assume that the multivariate response might be a function of multiple latent factors.
Let $Y_{1it}, \dots, Y_{Mit}$ denote the latent factors at time $t$ for individual $i$.
Then, we define $\mu_{ijt}$ as
\begin{equation}
\mu_{ijt} = \beta_{0ij} + \sum_{m=1}^M \beta_{mij} Y_{mit}
\end{equation}
where $\beta_{mij}$ captures the metric-specific effect of each latent factor.
We investigate both the univariate and multivariate latent factor models in modeling the multivariate ordinal wellness data.
The two important covariates of interest identified above that are assumed to be driving athlete wellness include \emph{workload} and \emph{recovery}.
Therefore, we model the latent factors as functions of these variables.
Specifically, we model the latent factors using distributed lag models such that we are able to capture the cumulative effects of workload and recovery on athlete wellness.
Let $X_{1it}$ and $X_{2it}$ denote the workload and recovery variables for individual $i$ and time $t$, respectively.
Starting with the univariate latent factor model, we model $Y_{it}$ as a linear combination of these lagged covariates.
We write the distributed lag model for $Y_{it}$ as
\begin{equation}
Y_{it} = \sum_{l=0}^L \left(X_{1i,t-l}\alpha_{1il} + X_{2i,t-l}\alpha_{2il}\right) + \eta_{it}
\label{eq:Udl}
\end{equation}
where $\alpha_{1il}$ and $\alpha_{2il}$ are coefficients for the lagged $l$ covariates $X_{1i,t-l}$ and $X_{2i,t-l}$, and $\eta_{it}$ is an error term.
Here, we assume $\eta_{it} \sim N(0, \tau_i^2)$.
The distributed lag model is able to capture the covariate-specific cumulative effects at lags ranging from $l=0, \dots, L$.
The benefit of the univariate approach is that the latent factor $Y_{it}$ offers a univariate summary for individual $i$ on day $t$ as a function of both \emph{workload} and \emph{recovery}.
We can easily compare these univariate latent factors across days in order to identify anomalies in wellness across time for an individual.
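Constructing the lagged covariate terms amounts to building a design matrix of current and past values; a minimal numpy sketch (our illustration) is given below, where the fitted lag coefficients then multiply the corresponding columns.
\begin{lstlisting}[language=Python]
import numpy as np

def lagged_design(x, max_lag):
    """Design matrix with columns (x_t, x_{t-1}, ..., x_{t-L}) for t = L, ..., T-1.

    x       : 1D array of one covariate (e.g., daily workload), length T
    max_lag : maximum lag L
    Returns an array of shape (T - L, L + 1).
    """
    T = len(x)
    return np.column_stack([x[max_lag - l : T - l] for l in range(max_lag + 1)])

# e.g., the latent factor mean is
# lagged_design(workload, L) @ alpha_1 + lagged_design(recovery, L) @ alpha_2
\end{lstlisting}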
The distributed lag model specification can also be utilized in the multivariate latent factor model.
Having two important covariates of interest, we specify a bivariate latent factor model with factors $Y_{1it}$ and $Y_{2it}$.
Here, $Y_{1it}$ is modeled using a distributed lag model with covariate $X_{1it}$, and $Y_{2it}$ is modeled using a distributed lag model with covariate $X_{2it}$.
The benefit of this approach is that we can infer the separate metric-specific relationships with each of the lagged covariates for each individual.
That is, we can compare the relationships across metrics within an individual as well as within metric across individuals.
For $m =1, 2$, let
\begin{equation}
Y_{mit} = \sum_{l=0}^L X_{mi,t-l}\alpha_{mil} + \eta_{mit}
\label{eq:Mdl}
\end{equation}
where $X_{mi,t-l}$ and $\alpha_{mil}$ are analogous to above, and $\eta_{mit}$ is the error term for factor $m$.
Again, we assume $\eta_{mit} \sim N(0,\tau^2_{mi})$.
It is worth mentioning that under certain parameter constraints, distributed lag models are equivalent to the ecological memory models proposed by \cite{Ogle2015}.
That is, ecological memory models are a special case of distributed lag models where the lagged coefficients are assigned non-negative weights that sum to 1.
For example, with $\text{E}(Y_{mit}) = \sum_{l=0}^L X_{mi,t-l}\alpha_{mil}$, the ecological memory model is such that $\alpha_{mil}>0$ and $\sum_{l=0}^L\alpha_{mil}=1$ for all $m$ and $i$.
Under this approach, $\boldsymbol{\alpha}_{mi} = (\alpha_{mi0}, \dots, \alpha_{miL})$ is modeled using a Dirichlet distribution.
The drawback of this approach is that this forces the relationship between the latent wellness metric $\widetilde{Z}_{ijt}$ and each element of the vector $(X_{mi0}, \dots, X_{miL})$ to be the same (e.g., all positive or all negative according to the sign of $\beta_{mij}$).
In our application, we desire the flexibility of having both positive and negative short- and long-term effects of training and recovery on athlete wellness.
For example, we might expect high-intensity training sessions to have immediate negative effects on wellness, but they could have positive impacts on wellness at longer time scales given proper recovery.
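The contrast between the two specifications can be sketched briefly in Python; the values below are illustrative only and assume $L=10$ lags.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L = 10

# ecological-memory-style weights: non-negative and summing to one
w = rng.dirichlet(np.ones(L + 1))
assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)

# unconstrained distributed-lag coefficients may change sign across lags,
# e.g. an acute negative effect followed by a delayed positive effect
alpha = np.concatenate([-0.3 * np.ones(2), 0.1 * np.ones(L - 1)])
print(w.round(3), alpha)
\end{verbatim}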
Under either the univariate or multivariate latent factor model, we can borrow strength across individuals by incorporating shared effects.
Here, we include shared distributed lag coefficients.
Recall that in (\ref{eq:Udl}) and (\ref{eq:Mdl}), $\alpha_{mil}$ denotes the lagged coefficient for variable $m$, individual $i$, and lag $l$.
We model $\alpha_{mil} \sim N(\alpha_{ml}, \psi_{ml})$ where $\alpha_{ml}$ is the global mean coefficient of covariate $m$ at lag $l$ and $\psi_{ml}$ represents the variability across individuals for this effect.
We can obtain inference with respect to these global parameters to provide insight into the general effects of the covariates at various lags as well as the measures of variability across individuals.
\subsection{Identifiability constraints}
\label{sec:Ident}
We begin with a general depiction of the important identifiability constraints of the model parameters assuming one athlete (i.e., $n=1$). As such, we drop the dependence on $i$ in the following.
Additionally, we note that there is more than just one set of parameter constraints that will result in an identifiable model, and will therefore justify our choices with regard to desired inference when necessary.
First, as is customary in probit regression models, the first threshold parameter is fixed at $\theta^{(1)}_{j} = 0$ for each $j$ \citep{Chib1998}.
This enables the identification of the intercept terms, $\beta_{0j}$.
Then, to identify the lag coefficients, $\alpha_{ml}$, for $m = 1, 2$, and $l = 0, \dots, L$, without loss of generality, we set the $\beta_{mj}$ factor coefficients for the first metric equal to 1 \citep{Cagnone2009}.
In the univariate latent factor model, this results in $\beta_{11} = 1$ and in the bivariate latent factor model, this results in $\beta_{11} = \beta_{21}=1$.
With the number of metrics $J>1$, we also must specify a common $\theta^{(2)}_1 = \cdots = \theta^{(2)}_J = \theta^{(2)}$ in order to identify the metric-specific latent factor coefficients, $\beta_{mj}$.
Another common identifiability constraint for probit regression models imposes a fixed variance for the latent continuous metrics, $\widetilde{Z}_{jt}$ \citep{Chib1998}.
This is the variance of $\epsilon_{jt}$ from (\ref{eq:Ztilde}) which is denoted $\sigma_j^2$.
Because we retain metric-specific threshold parameters $\theta^{(k)}_{j}$, we drop the dependence on $j$ in this variance, such that $\sigma_1^2 = \cdots = \sigma_J^2 = \sigma^2$.
One option is to fix $\sigma^2 = 1$ and model the variance parameters of the latent factors, $\tau^2$, in the univariate latent factor model, and $\tau^2_1$ and $\tau^2_2$ in the bivariate latent factor model \citep{Cagnone2009, Cagnone2018}.
However, since we are modeling the latent factors using distributed lag models, this approach can mask some of the effects of the lagged covariates as well as the relationships between the latent factors and the ordinal wellness metrics.
Therefore, we opt to work with the marginal variance of $\widetilde{Z}_{jt}$, which is equal to $$\text{Var}(\widetilde{Z}_{jt}) = \sigma^2 + \beta_{1j}^2 \tau^2$$ in the univariate factor model and
$$\text{Var}(\widetilde{Z}_{jt}) = \sigma^2 + \beta_{1j}^2 \tau_1^2 + \beta_{2j}^2 \tau_2^2$$
in the bivariate factor model.
For $j=1$, this reduces to $\sigma^2 + \tau^2$ and $\sigma^2 + \tau_1^2 + \tau_2^2$.
We set $\sigma^2 + \tau^2 = 1$ and $\sigma^2 + \tau_1^2 + \tau_2^2 =1$ and use a Dirichlet prior with two and three categories, respectively.
Details regarding this prior are given below.
In extending to modeling multiple athletes, we add subscript $i$ to each of the parameters and latent factors.
That is, we have individual specific threshold parameters, intercepts, factor coefficients, latent factors, and variances.
In addition, we introduce the global mean coefficients, $\alpha_{ml}$, and variances, $\psi_{ml}$.
By imposing the same set of constraints above for each individual, the model parameters are identifiable.
Fitting this model to the referee ordinal wellness data discussed in Section \ref{Sec:Data} requires one additional modification.
In looking at the ordinal response distributions (Figures 1 and 11), notice that for some individuals and some metrics, some ordinal values have few, if any, counts (e.g., Athlete A: Stress).
In such a case, there is no information in the data to inform about the cut points for these individual and metric combinations.
Therefore, we drop the individual specific threshold parameters to leverage information across athletes for each metric.
With this modification, we can relax the constraint on $\theta^{(2)}$ to allow for metric specific thresholds, $\theta^{(2)}_1, \dots, \theta^{(2)}_J$.
The shared metric-specific threshold approach across individuals is preferred over having individual threshold parameters that are shared across metrics for two reasons.
First, some referees have very small counts for some ordinal values, even when aggregated across metrics, resulting in challenges in estimating these parameters. When aggregating across referees for a given metric, the distribution of observations across ordinal values is much more uniform.
Second, by retaining the metric-specific threshold parameters, we can more easily compare the metric-specific relationships with the latent factor(s).
That is, we can directly compute correlations between the latent factor(s) and the latent continuous wellness metrics as discussed below.
Due to ordinal data not having an identifiable scale, specifying individual threshold parameters that are shared across metrics requires computing more complex functions of the model parameters in order to obtain this important inference.
\subsection{Model inference and priors}
\label{sec:Inf1}
Model inference was obtained in a Bayesian framework.
Prior distributions are assigned to each model parameter and non-informative and conjugate priors were chosen when available.
Each global mean lagged coefficient parameter is assigned an independent, conjugate hyperprior where $\alpha_{ml} \sim N(0,10)$ for all $m=1, 2$ and $l = 0, \dots, L$.
The variance parameters are assigned independent Inverse-Gamma(0.01, 0.01) priors.
The latent factor coefficients are assigned independent normal priors, where $\beta_{mij} \sim N(0,10)$ for $m=0, 1$ in the univariate factor model and $m=0, 1, 2$ in the bivariate factor model.
Given the identifiability constraints above for the variance parameters, we specify a two category Dirichlet prior for $(\sigma^2, \tau^2)$ in the univariate latent factor model and a three category Dirichlet prior for $(\sigma^2, \tau_1^2, \tau_2^2)$ in the bivariate model.
Both Dirichlet priors are defined with concentration parameter 10 for each category.
The threshold parameters were modeled on a transformed scale due to their order restriction where $\theta_{ij}^{(k-1)} \leq \theta_{ij}^{(k)}$.
To ensure these inequalities hold true, with $\theta_{ij}^{(1)} = 0$ for all $i$ and $j$, we define $\widetilde{\theta}_{ij}^{(k)} = \text{log} (\theta_{ij}^{(k)} - \theta_{ij}^{(k-1)})$ for $k=2, 3, 4$ and model $\widetilde{\theta}_{ij}^{(k)} \stackrel{iid}{\sim} N(0, 1)$.
This transformation improves mixing and convergence when using MCMC for model inference \citep{Higgs2010}.
Sampling the threshold parameters requires a Metropolis step within the MCMC algorithm.
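A minimal sketch of this parameterization is shown below; the function maps the unconstrained values $\widetilde{\theta}^{(k)}$ back to ordered cut points via cumulative sums of exponentials, which is what guarantees the order restriction. The function and variable names are ours and are not taken from the authors' implementation.
\begin{verbatim}
import numpy as np

def thresholds_from_unconstrained(theta_tilde):
    """Map unconstrained (theta_tilde^(2), theta_tilde^(3), theta_tilde^(4))
    to ordered cut points with theta^(1) = 0."""
    increments = np.exp(theta_tilde)   # positive gaps between cut points
    return np.concatenate([[0.0], np.cumsum(increments)])

# example: draws from the N(0, 1) prior on the transformed scale
rng = np.random.default_rng(2)
theta_tilde = rng.normal(0.0, 1.0, size=3)
print(thresholds_from_unconstrained(theta_tilde))  # non-decreasing by construction
\end{verbatim}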
\subsection{Posterior inference}
\label{sec:Inf2}
Important posterior inference includes estimates of the model parameters as well as correlation and relative importance measures for each metric.
Dropping the dependence on $i$ for ease of notation, let $C_j$ define the correlation between latent ordinal response metric $\widetilde{\mathbf{Z}}_j = (\widetilde{Z}_{j1}, \dots, \widetilde{Z}_{jT})'$ and the univariate latent factor $\mathbf{Y} = (Y_1, \dots, Y_T)'$, computed as
\begin{equation}
C_j = \text{corr}(\widetilde{\mathbf{Z}}_j, \mathbf{Y}).
\label{eq:CorrU}
\end{equation}
For the multivariate latent factor model, we can define analogous correlations for each factor $m$ where
\begin{equation}
C_{jm} = \text{corr}(\widetilde{\mathbf{Z}}_j, \mathbf{Y}_m).
\label{eq:CorrM}
\end{equation}
These correlations provide a measure for which to compare the importance of each wellness metric in capturing the variation in the latent factor.
To compare across metrics, we compute the relative importance of each metric for the univariate latent factor model as
\begin{equation}
R_j = \frac{|C_j|}{\sum_{j'=1}^J |C_{j'}|}
\label{eq:RIU}
\end{equation}
and as
\begin{equation}
R_{jm} = \frac{|C_{jm}|}{\sum_{j'=1}^J |C_{j'm}|}
\label{eq:RIM}
\end{equation}
for the multivariate factor model.
Metrics with higher relative importance indicate that they are more important in capturing the variation in the latent factor.
For the univariate latent factor model, these relative importance scores could be used as weights in computing an overall wellness score for each individual.
Then, these latent factors could be monitored through time to identify possible changes in each athlete's wellness in response to training and recovery throughout the season.
For example, we can investigate the variation in the latent factors for each athlete by comparing match days to the days leading up to and following the match.
Similarly, for the multivariate metric, these weights could identify which metrics are more or less important in explaining the variation in each particular latent factor, and again, can be monitored throughout the season to identify potential spikes in wellness as a response to training and recovery.
We can obtain full posterior distributions, including estimates of uncertainty, of all correlations and relative importance metrics post model fitting.
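As an illustration of how (\ref{eq:CorrU}) and (\ref{eq:RIU}) are computed, the sketch below evaluates the correlations and relative importance for one athlete from a single set of values; in practice these quantities are computed for each MCMC draw to obtain their full posterior distributions. All data in the example are simulated placeholders.
\begin{verbatim}
import numpy as np

def relative_importance(Z_tilde, Y):
    """Z_tilde: (J, T) latent continuous metrics; Y: (T,) latent factor.
    Returns the correlations C_j and relative importance R_j."""
    C = np.array([np.corrcoef(Z_tilde[j], Y)[0, 1]
                  for j in range(Z_tilde.shape[0])])
    R = np.abs(C) / np.abs(C).sum()
    return C, R

# hypothetical example with J = 6 wellness metrics over T = 300 days
rng = np.random.default_rng(3)
Y = rng.normal(size=300)
Z = 0.8 * Y + rng.normal(scale=1.0, size=(6, 300))
C, R = relative_importance(Z, Y)
print(C.round(2), R.round(2))
\end{verbatim}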
\section{Application: modeling athlete wellness}
\label{Sec:Results}
We apply our model to the subjective ordinal wellness data and workload and recovery data for 20 MLS referees collected during the 2015 and 2016 seasons.
We investigated the effects of the workload and recovery variables on wellness for lags of up to 10 days.
Therefore, we limited the analysis to daily data for which at least 10 prior days of workload and recovery data were available.
The number of observations for each individual ranged from 170 to 467 days.
The univariate and multivariate latent factor models were each fitted to the data.
Model inference was obtained using Markov chain Monte Carlo and a hybrid Metropolis-within-Gibbs sampling algorithm.
The chain was run for 100,000 iterations, and the first 20,000 were discarded as burn-in.
Traceplots of the chain for each parameter were inspected for convergence and no issues were detected.
Boxplots of the posterior distributions of the global lagged coefficients of the univariate latent factor model are shown in Figure \ref{Fig:AlphaGlobU} for the workload and recovery covariates.
Also shown are the upper and lower limits of the central 95\% credible intervals.
In general, workload is negatively related to athlete wellness, and the effect of the previous day's workload (lag equal to 1) is the most significant.
This implies that, in general, workload has an acute effect on player wellness, such that a heavy workload on the previous day tends to lead to a decrease in wellness on the following day.
The lagged coefficients of the recovery variable show a positive relationship between recovery and wellness, where a large recovery value corresponds to high sleep quality and quantity.
The lagged coefficients for this variable are significant for lags 1 through 5 as indicated by the 95\% credible intervals not including 0.
These significant lagged coefficients suggest that sleep quality and quantity may have a longer lasting effect on athlete wellness.
Posterior distributions of the individual-specific lagged coefficients are shown for Athlete A, B, C, and D in Figures \ref{Fig:AlphaIndUW} and \ref{Fig:AlphaIndUR} for the workload and recovery variables, respectively.
In general, there is a lot of variation between individuals with regard to the lagged effects of the two variables.
For example, Athlete A and B experience significant negative effects of workload at both lags 1 and 2, whereas Athlete C and D do not experience such negative effects.
In fact, a heavy workload on the previous day has a positive relationship with wellness for Athlete D.
For all four athletes, we see positive effects of workload at longer lags (e.g., lag 9 for Athlete A, lags 7-9 for Athlete B).
The individual-specific lagged coefficients of the recovery variable show that the previous night's sleep quantity and quality have a highly significant positive relationship with wellness for each athlete (Figure \ref{Fig:AlphaIndUR}).
However, relative to the global averages shown in Figure \ref{Fig:AlphaGlobU}, we detect a shorter-term effect of recovery for Athlete A and B (2 days) than for Athlete C and D (3+ days).
The latent factors, $Y_{it}$, provide a univariate measure of wellness for each individual on each day.
Figure \ref{Fig:LatY} shows boxplots of the posterior mean estimates of $Y_{it}$ for the four athletes for match days, denoted ``M," compared to the 3 days leading up to and following the match.
Due to possible variation throughout the seasons, the estimates are centered within each match week by subtracting the 7-day average.
Estimates of the latent factors vary throughout the 7-day period for each athlete.
In general, the wellness of Athlete A is highest on match day relative to the days leading up to and following the match.
The wellness estimates for Athlete B and C show less variation across days, although wellness for Athlete B is lower, on average, the day following the match relative to the match day.
Wellness for Athlete D appears similar across all days except for the day following the match, in which wellness is higher.
We computed the correlation between the vectors of the latent continuous response metric, $\widetilde{\mathbf{Z}}_{ij}$ and the univariate latent factor, $\mathbf{Y}_{i}$, for each athlete and metric.
Boxplots of the posterior distributions for these correlations for the four athletes are shown in Figure \ref{Fig:CorrU}, indicating variation both within metric across individuals and across metrics within individual.
The majority of the significant correlations between the ordinal wellness metric and latent factor are positive, although the correlation was negative for Athlete B for the appetite metric.
To compare the significance of the different metrics within an individual in relation to the latent factor, we compute the relative importance statistics defined in (\ref{eq:RIU}).
The relative importance statistics measure the ability of each wellness metric to capture the variation in the latent factor.
The posterior mean estimates of $R_j$ for each of the four athletes across the six metrics are shown in Figure \ref{Fig:RIU}.
The relative importance of each ordinal wellness metric varies across the athletes.
Note that a value of 1/6 for each metric would correspond to an equal weighting.
The most notable similarity between the four athletes is the high relative importance of \emph{energy}, with each greater than 1/6.
The relative importance of \emph{motivation} and \emph{mood} varies considerably across athletes.
The estimates for Athlete C and D closely resemble an equal weighting scheme across the six metrics, whereas A and B each have unequal relative importance estimates, with emphasis on mood, energy, and tiredness for Athlete A, and motivation, energy, and tiredness for Athlete B. These results clearly illustrate the advantage of the multivariate model, which leverages the individual wellness measures, over simply averaging across all metrics.
Plots of the correlation and relative importance metrics for all 20 athletes are included in Figures \ref{Fig:CorrsUALL} and \ref{Fig:RIUALL} of the Supplementary Material.
The multivariate latent factor model resulted in similar global and individual lagged coefficient estimates as the univariate model. (See Figures \ref{Fig:M1} - \ref{Fig:M3} of the Supplementary Material).
In addition, the variation in the estimates of the two latent factors across days leading up to and following each match also appeared similar to the univariate model (Figures \ref{Fig:LatY1} and \ref{Fig:LatY2}).
Important inference from the multivariate model consists of the factor-specific correlations and relative importance estimates for each wellness metric.
Posterior distributions of the correlation estimates are shown in Figure \ref{Fig:CorrM} for both workload and recovery variables for the same four athletes, Athlete A, B, C, and D. (A similar figure with all athletes is given in Figure \ref{Fig:CorrMALL} of the Supplementary Material).
This figure shows some important similarities and differences for each of the wellness metrics in terms of the correlations with the two latent factors.
Both energy and tiredness show stronger correlations with recovery than workload for Athlete B, C, and D.
The motivation metric is significantly more correlated with workload than recovery for Athlete A and B, whereas it is similar for Athlete C and D.
Stress and mood both appear more strongly correlated with workload, whereas appetite appears more strongly correlated with recovery for each athlete.
Posterior mean estimates of relative importance for each latent factor for the four athletes are shown in Figure \ref{Fig:RIM}.
Some wellness metrics that appeared insignificant in the univariate latent factor model now show significance when the workload and recovery latent factors are considered separately.
For example, motivation appears significant for both workload and recovery for Athlete A whereas it was the least important metric in the univariate latent model.
The relative importance of mood on the workload latent factor is greater than 1/6 for each athlete, and is the highest relative importance for Athlete A.
Energy and tiredness appear to capture the majority of the variation in the recovery latent factor for Athlete B.
Interestingly, Athlete C retains a fairly equal weighting scheme across the six metrics for both workload and recovery.
The relative importance of stress is high for workload and low for recovery for Athlete D, whereas tiredness is low for workload and high for recovery. Additional comparisons between all athletes with respect to the relative importance for each metric and latent factor can be made looking at Figure \ref{Fig:RIMALL} of the Supplementary Material. Interestingly, none of the metrics appear to be uniformly insignificant across the 20 athletes, providing justification in each component of the self-assessment survey.
\section{Discussion}
\label{Sec:Disc}
We develop a joint multivariate latent factor model to study the relationships between athlete wellness, training, and recovery using subjective and objective measures.
Importantly, the multivariate response model incorporates the information from each of the subjective ordinal wellness variables for each individual.
Additionally, the univariate and bivariate latent factors are modeled using distributed lag models to identify the short- and long-term effects of training and recovery.
Individual-specific parameters enable individual-level inference with respect to these effects.
The relative importance indices provide individual-specific estimates of the sensitivity of each ordinal wellness metric to the variation in training and recovery.
The joint modeling approach enables the sharing of information across individuals to strengthen the results.
We applied our model to daily wellness, training, and recovery data collected across two MLS seasons. While the results show important similarities and differences across athletes with regard to the training and recovery effects on wellness and the importance of each of the wellness variables, the model could provide new and insightful information when applied to competing athletes. When referencing physical performance, our findings align with important known differences in training programs for referees and players. Training programs aim at maximizing the physical performance of players on match days, whereas referee training does not place the same significance on these days. This suggests interesting comparisons could be made using the results of this type of analysis between athletes who compete in different sports with differing levels of intensity and periods of recovery. For example, football has regular weekly game schedules at the college and professional levels, whereas soccer matches are scheduled typically twice per week, and hockey leagues often play two and three-game series with games on back-to-back nights. Maximizing player performance under these different competition schedules and levels of intensity using subjective wellness data is an open area for future work.
Distributed lag models, like those applied in this work, make the assumption that the lagged coefficients are constant in time.
That is, the effect of variable $X_{1,t-l}$ at lag $l$ on the response at time $t$ is captured by $\alpha_{1l}$ and is the same for all $t$.
In terms of training and recovery for athletes, one could argue that these effects might vary throughout a season as a function of fitness and fatigue.
For example, if an athlete's fitness level is low during the early part of the season, a hard training session might have a longer lasting effect on wellness than it would mid-season when the athlete is at peak fitness. Alternatively, as the season wears on, an athlete might require a longer recovery time in order to return to their maximum athletic performance potential. As future work, we plan to incorporate time-varying parameters into the distributed lag models. This will require strategic model development in order to minimize the number of additional parameters and retain computational efficiency in model fitting.
The scope of this future work spans beyond sports, as the lagged effects of environmental processes could also have important time-varying features.
\bibliographystyle{apalike}
\section{Introduction}
Ice hockey is a popular sport played by millions of people \cite{iihf}. Since ice hockey is a team sport, knowing the location of players on the ice rink is essential for analyzing game strategy and player performance. The locations of the players on the rink during the game are used by coaches, scouts, and statisticians for analyzing the play. Although player location data can be obtained manually, the process of labelling data by hand on a per-game basis is extremely tedious and time consuming. Therefore, an automated computer vision-based player tracking and identification system is of high utility.
In this paper, we introduce an automated system to track and identify players in broadcast National Hockey League (NHL) videos. Referees, being a part of the game, are also tracked and identified separately from players. The input to the system is broadcast NHL clips from the main camera view (i.e., camera located in the stands above the centre ice line) and the output is a set of player trajectories along with their identities. Since there are no publicly available datasets for ice hockey player tracking, team identification, and player identification, we annotate our own datasets for each of these problems. Previous work on ice hockey player tracking \cite{cai_hockey, okuma} makes use of handcrafted features for detection and re-identification. Therefore, we perform experiments with five state of the art tracking algorithms \cite{Braso_2020_CVPR,tracktor_2019_ICCV,Wojke2017simple,Bewley2016_sort, zhang2020fair} on our hockey player tracking dataset and evaluate their performance. The output of the player tracking algorithm is a temporal sequence of player bounding boxes, called player tracklets.
Posing team identification as a classification problem with each team treated as a separate class would be impractical since (1) it would result in a large number of classes, and (2) the same NHL team wears two different colors based on whether it is the home or away team (Fig. \ref{fig:mtl_jerseys}). Therefore, instead of treating each team as a separate class, we treat the away (light) jerseys of all teams as a single class and cluster home jerseys based on their jersey color. Since referees are easily distinguishable from players, they are treated as a separate class. Based on this simple formation of the training data, hockey players can be classified into home and away teams. The team identification network obtains an accuracy of $96.6\%$ on the test set and does not require additional fine-tuning on new games.
\begin{figure*}[!th]
\begin{center}
\includegraphics[width=\linewidth]{Figures/overall_pipeline_results.pdf}
\end{center}
\caption{ Overview of the player tracking and identification system. The tracking model takes a hockey broadcast video clip as input and outputs player tracks. The team identification model takes the player track bounding boxes as input and identifies the team of each player along with identifying the referees. The player identification model utilizes the player tracks, team data and game roster data to output player tracks with jersey number identities. }
\label{fig:pipeline}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.55]{Figures/Montreal_canadiens_unif_new.png}
\caption{Home (dark) and away (white) jerseys worn by the Montreal Canadiens of the National Hockey League \cite{martello_2020}.}
\label{fig:mtl_jerseys}
\end{figure}
Unlike soccer and basketball \cite{Senocak_2018_CVPR_Workshops}, where player facial features and skin color are visible, a big challenge in player identification in hockey is that players of the same team appear nearly identical, since they wear the same uniform and have a similar physical size. Therefore, we use the jersey number for identifying players since it is the most prominent feature present on all player jerseys. Instead of classifying jersey numbers from static images \cite{gerke,li,liu}, we identify a player's jersey number from player tracklets. Player tracklets allow a model to process temporal context to identify a jersey number, since the number is likely to be visible in multiple frames of the tracklet. We introduce a temporal 1-dimensional convolutional neural network (1D CNN)-based network for identifying players from their tracklets. The network achieves an accuracy $9.9\%$ higher than the previous work by Chan \textit{et al.} \cite{CHAN2021113891} without requiring any additional probability score aggregation model for inference.
The tracking, team identification, and player identification models are combined to form a holistic offline system to track and identify players and referees in the broadcast videos. Player tracking helps team identification by removing team identification errors in player tracklets through a simple majority voting. Additionally, based on the team identification output, we use the game roster data to further improve the identification performance of the automated system by an additional $5\%$. The overall system is depicted in Fig. \ref{fig:pipeline}. The system is able to identify players from video with an accuracy of $82.8\%$ with a Multi-Object Tracking Accuracy (MOTA) score of $94.5\%$ and an Identification $F_1$ (IDF1) score of $62.9\%$.
We make five computer vision contributions applied to the game of ice hockey:
\begin{enumerate}
\item New ice hockey datasets are introduced for player tracking, team identification, and player identification from tracklets.
\item We compare and contrast several state-of-the-art tracking algorithms and analyze their performance and failure modes.
\item A simple but efficient team identification algorithm for ice hockey is implemented.
\item A temporal 1D CNN based player identification model is implemented that outperforms the current state of the art \cite{CHAN2021113891} by $9.9\%$.
\item A holistic system that combines tracking, team identification, and player identification models, along with making use of the team roster data, to track and identify players in broadcast ice hockey videos is established.
\end{enumerate}
\section{Background}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/jn_arch.png}
\end{center}
\caption{ Network architecture for the player identification model. The networks accepts a player tracklet as input. Each tracklet image is passed through a ResNet18 to obtain time ordered features $F$. The features $F$ are input into three 1D convolutional blocks, each consisting of a 1D convolutional layer, batch normalization, and ReLU activation. In this figure, $k$ and $s$ are the kernel size and stride of convolution operation. The activations obtained from the convolutions blocks are mean-pooled and passed through a fully connected layer and a softmax layer to output the probability distribution of jersey number $p \in \mathbb{R}^86$. }
\label{fig:jn_arch}
\end{figure*}
\subsection{Tracking}
The objective of multi-object tracking (MOT) is to detect objects of interest in video frames and associate the detections with appropriate trajectories. Player tracking is an important problem in computer vision-based sports analytics, since player tracking combined with an automatic homography estimation system \cite{jiang2020optimizing} is used to obtain absolute player locations on the sports rink. Also, various computer vision-based tasks, such as sports event detection \cite{Vats_2020_CVPR_Workshops,Sanford_2020_CVPR_Workshops,Vats_2021_CVPR}, can be improved with player tracking data.
Tracking by detection (TBD) is a widely used approach for multi-object tracking. Tracking by detection consists of two steps: (1) detecting objects of interest (hockey players in our case) frame-by-frame in the video, then (2) linking player detections to produce tracks using a tracking algorithm. Detection is usually done with the help of a deep detector, such as Faster R-CNN \cite{NIPS2015_14bfa6bb} or YOLO \cite{yolo}. For associating detections with trajectories, techniques such as Kalman filtering with Hungarian algorithm \cite{Bewley2016_sort, Wojke2017simple, zhang2020fair} and graphical inference \cite{Braso_2020_CVPR,tanglifted} are used. In recent literature, re-identification in tracking is commonly carried out with the help of deep CNNs using appearance \cite{Wojke2017simple, Braso_2020_CVPR,zhang2020fair} and pose features \cite{tanglifted}.
For sports player tracking, Sanguesa \textit{et al.} \cite{Sangesa2019SingleCameraBT} demonstrated that deep features perform better than classical handcrafted features for basketball player tracking. Lu \textit{et al.} \cite{lu} perform player tracking in basketball using a Kalman filter. Theagarajan \textit{et al.} \cite{Theagarajan_2021} track players in soccer videos using the DeepSORT algorithm \cite{Wojke2017simple}. Hurault \textit{et al.} \cite{hurault} introduce a self-supervised detection algorithm to detect small soccer players and track players in non-broadcast settings using a triplet-loss-trained re-identification mechanism, with embeddings obtained from the detector itself.
In ice hockey, Okuma \textit{et al.} \cite{okuma} track hockey players by introducing a particle filter combined with mixture particle filter (MPF) framework \cite{mixture_tracking}, along with an Adaboost \cite{adaboost} player detector. The MPF framework \cite{mixture_tracking} allows the particle filter framework to handle multi-modality by modelling the posterior state distributions of $M$ objects as an $M$ component mixture. A disadvantage of the MPF framework is that the particles merge and split in the process and leads to loss of identities. Moreover, the algorithm did not have any mechanism to prevent identity switches and lost identities of players after occlusions. Cai \textit{et al.} \cite{cai_hockey} improved upon \cite{okuma} by using a bipartite matching for associating observations with targets instead of using the mixture particle framework. However, the algorithm is not trained or tested on broadcast videos, but performs tracking in the rink coordinate system after a manual homography calculation.
In ice hockey, prior published research \cite{okuma,cai_hockey} perform player tracking with the help of handcrafted features for player detection and re-identification. In this paper we track and identify hockey players in broadcast NHL videos and analyze performance of several state-of-the-art deep tracking models on the ice hockey dataset.
\subsection{Player Identification}
Identifying players and referees is one of the most important problems in computer vision-based sports analytics. Analyzing individual player actions and player performance from broadcast video is not feasible without detecting and identifying the player. Before the advent of deep learning methods, player identification was performed with the help of handcrafted features \cite{Saric2008PlayerNL}. Although techniques for identifying players from body appearance exist \cite{Senocak_2018_CVPR_Workshops}, jersey number is the primary and most widely used feature for player identification, since it is observable and consistent throughout a game. Most deep learning based player identification approaches in the literature focus on identifying the player jersey number from single frames using a CNN \cite{gerke, li, liu}. Gerke \textit{et al.} \cite{gerke} were one of the first to use CNNs for soccer jersey number identification and found that deep learning approach outperforms handcrafted features. Li \textit{et al.} \cite{li} employed a semi-supervised spatial transformer network to help the CNN localize the jersey number in the player image. Liu \textit{et al.} \cite{liu} use a pose-guided R-CNN for jersey digit localization and classification by introducing a human keypoint prediction branch to the network and a pose-guided regressor to generate digit proposals. Gerke \textit{et al.} \cite{GERKE2017105} also combined their single-frame based jersey classifier with soccer field constellation features to identify players. Vats \textit{et al.} \cite{Vats2021MultitaskLF} use a multi-task learning loss based approach to identify jersey numbers from static images.
Zhang \textit{et al.} \cite{ZHANG2020107260} track and identify players in a multi-camera setting using a distinguishable deep representation of player identity within a coarse-to-fine framework. Lu \textit{et al.} \cite{lu} use a variant of the Expectation-Maximization (EM) algorithm to learn a Conditional Random Field (CRF) model composed of player identity and feature nodes. Chan \textit{et al.} \cite{CHAN2021113891} use a combination of a CNN and a long short term memory (LSTM) network \cite{lstm}, similar to the long term recurrent convolutional network (LRCN) by Donahue \textit{et al.} \cite{lrcn2014}, for identifying players from player sequences. The final inference in Chan \textit{et al.} \cite{CHAN2021113891} is performed using another CNN applied over the probability scores obtained from the CNN-LSTM network.
In this paper, we identify players using player tracklets with the help of a temporal 1D CNN. Our proposed inference scheme does not require the use of an additional network.
\subsection{Team Identification}
Beyond knowing the identity of a player, the player must also be assigned to a team. Many sports analytics, such as ``shot attempts'' and ``team formations'', require knowing the team to which each individual belongs. In sports leagues, teams differentiate themselves based on the colour and design of the jerseys worn by the players. In ice hockey, formulating team identification as a classification problem with each team treated as a separate class is not feasible since teams use different colored jerseys depending on whether they are the home or away team. Teams wear light- and dark-coloured jerseys depending on whether they are playing at their home venue or away venue (Fig. \ref{fig:mtl_jerseys}). Furthermore, each game in which new teams play would require fine-tuning \cite{Koshkina_2021_CVPR}.
Early work used colour histograms or colour features with a clustering approach to differentiate teams \cite{shitrit2011, ivankovic2014,dorazio2009, lu2013, liu2014, bialkowski2014, tong2011, guo2020, mazzeo2008, ajmeri2018}. This approach, while being lightweight, does not address occlusions, changes in illumination, and teams wearing similar jersey colours \cite{shitrit2011, Koshkina_2021_CVPR}. Deep learning approaches have increased performance and generalizablitity of player classification models \cite{Istasse_2019_CVPR_Workshops}.
Istasse \textit{et al.} \cite{Istasse_2019_CVPR_Workshops} simultaneously segment and classify players in indoor basketball games. Players are segmented and classified in a system where no prior is known about the visual appearance of each team with associative embedding. A trained CNN outputs a player segmentation mask and, for each pixel, a feature vector that is similar for players belonging to the same team. Theagarajan and Bhanu \cite{Theagarajan_2021} classify soccer players by team as part of a pipeline for generating tactical performance statistics by using triplet CNNs.
In ice hockey, Guo \textit{et al.} \cite{guo} perform team identification using the color features of hockey player uniforms. For this purpose, the uniform region (central region) of the player's bounding box is cropped. From this region, hue, saturation, and lightness (HSL) pixel values are extracted, and histograms of pixels in five essential color channels (i.e., green, yellow, blue, red, and white) are constructed. Finally, the player's team identification is determined by the channel that contains the maximum proportions of pixels. Koshkina \textit{et al.} \cite{Koshkina_2021_CVPR} use contrastive learning to classify player bounding boxes in hockey games. This self-supervised learning approach uses a CNN trained with triplet loss to learn a feature space that best separates players into two teams. Over a sequence of initial frames, they first learn two k-means cluster centres, then associate players to teams.
\section{Technical Approach}
\subsection{Player Tracking}
\subsubsection{Dataset}
The player tracking dataset consists of a total of 84 broadcast NHL game clips with a frame rate of 30 frames per second (fps) and a resolution of $1280 \times 720$ pixels. The average clip duration is $36$ seconds. The 84 video clips in the dataset are extracted from 25 NHL games. The distribution of clip durations is shown in Fig. \ref{fig:clip_lengths}. Each frame in a clip is annotated with player and referee bounding boxes and player identity consisting of player name and jersey number. The annotation is carried out with the help of the open-source CVAT tool\footnote{Found online at: \url{https://github.com/openvinotoolkit/cvat}}. The dataset is split such that 58 clips are used for training, 13 clips for validation, and 13 clips for testing. To prevent any game-level bias affecting the results, the split is made at the game level, such that the training clips are obtained from 17 games, validation clips from 4 games, and test clips from 4 games, respectively.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth, height = 3.6cm]{Figures/vid_len.png}
\end{center}
\caption{Duration of videos in the player tracking dataset. The average clip duration is $36$ seconds. }
\label{fig:clip_lengths}
\end{figure}
\subsubsection{Methodology}
We experimented with five state-of-the-art tracking algorithms on the hockey player tracking dataset. The algorithms include four online tracking algorithms \cite{Bewley2016_sort, Wojke2017simple,zhang2020fair,tracktor_2019_ICCV} and one offline tracking algorithm \cite{Braso_2020_CVPR}. The best tracking performance (see Section \ref{section:results}) is achieved using the MOT Neural Solver tracking model \cite{Braso_2020_CVPR} re-trained on the hockey dataset. MOT Neural Solver uses the popular tracking-by-detection paradigm.
In tracking by detection, the input is a set of object detections $O = \{o_1, \dots, o_n\}$, where $n$ denotes the total number of detections in all video frames. A detection $o_i$ is represented by $\{x_i,y_i,w_i,h_i, I_i,t_i\}$, where $x_i,y_i,w_i,h_i$ denote the coordinates, width, and height of the detection bounding box. $I_i$ and $t_i$ represent the image pixels and timestamp corresponding to the detection. The goal is to find a set of trajectories $T = \{T_1, T_2, \dots, T_m\}$ that best explains $O$, where each $T_i$ is a time-ordered set of observations. The MOT Neural Solver models the tracking problem as an undirected graph $G=(V,E)$, where $V=\{1, 2, \dots, n\}$ is the set of $n$ nodes corresponding to the $n$ player detections across all video frames. In the edge set $E$, every pair of detections is connected so that trajectories with missed detections can be recovered. The problem of tracking is now posed as splitting the graph into disconnected components, where each component is a trajectory $T_i$. After computing each node (detection) embedding and edge embedding using a CNN, the model solves a graph message passing problem. The message passing algorithm classifies whether an edge between two nodes in the graph connects detections belonging to the same player trajectory.
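A simplified sketch of this graph construction is given below; it only builds the nodes and candidate edges and is not the MOT Neural Solver itself, whose learned embeddings and message passing steps are described in \cite{Braso_2020_CVPR}. The detection values and the frame-gap limit are illustrative assumptions.
\begin{verbatim}
from itertools import combinations

# each detection: (frame index t, x, y, w, h)
detections = [
    (0, 100, 200, 40, 90), (0, 400, 210, 42, 88),
    (1, 104, 202, 40, 90), (1, 396, 214, 42, 88),
    (2, 108, 205, 40, 90),
]

# nodes are detection indices; connect pairs of detections from different
# frames (a temporal window can be used to limit the graph size)
nodes = list(range(len(detections)))
max_frame_gap = 25
edges = [
    (i, j) for i, j in combinations(nodes, 2)
    if detections[i][0] != detections[j][0]
    and abs(detections[i][0] - detections[j][0]) <= max_frame_gap
]
# a learned message-passing network then classifies each edge as active
# (same trajectory) or inactive, and trajectories are read off as the
# connected components of the active edges
print(len(nodes), len(edges))
\end{verbatim}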
\subsection{Team Identification}
\subsubsection{Dataset}
The team identification dataset is obtained from the same games and clips used in the player tracking dataset. The train/validation/test splits are also identical to the player tracking data. We take advantage of the fact that the away team in NHL games usually wears a predominantly white colored jersey with color stripes and patches, and the home team wears a dark colored jersey. For example, the Toronto Maple Leafs and the Tampa Bay Lightning both have dark blue home jerseys and therefore can be put into a single `Blue' class. We therefore build a dataset with six classes (blue, red, yellow, white, red-blue, and referee), with each color class composed of images sharing the same dominant color. The data-class distribution is shown in Fig. \ref{fig:team_data_dist}. Fig. \ref{fig:team_examples} shows an example of the blue class from the dataset. The training set consists of $32419$ images. The validation and testing sets contain $6292$ and $7898$ images, respectively.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/teamidexamples.png}
\end{center}
\caption{Examples of `blue' class in the team identification dataset. Home jersey of teams such as (a) Vancouver Canucks (b) Toronto Maple Leafs and (c) Tampa Bay Lightning are blue in appearance and hence are put in the same class.}
\label{fig:team_examples}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/teamex.png}
\end{center}
\caption{Classes in team identification and their distribution. The `ref' class denotes referees.}
\label{fig:team_data_dist}
\end{figure}
\subsubsection{Methodology}
For team identification, we use a ResNet18 \cite{resnet} pretrained on the ImageNet dataset \cite{deng2009imagenet}, and train the network on the team identification dataset by replacing the final fully connected layer to output six classes. The images are scaled to a resolution of $224 \times 224$ pixels for training. During inference, the network classifies whether a bounding box belongs to the away team (white color), the home team (dark color), or the referee class. For inferring the team for a player tracklet, the team identification model is applied to each image of the tracklet and a simple majority vote is used to assign a team to the tracklet. In this way, the tracking output helps team identification by resolving frame-level classification errors.
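A minimal PyTorch sketch of this procedure is shown below. It covers the six-class ResNet18 head and the per-tracklet majority vote; the training loop, the ImageNet-pretrained weights, and the mapping from the predicted colour class to home/away are omitted, and the tensor shapes are assumptions for illustration.
\begin{verbatim}
import torch
import torchvision
from collections import Counter

NUM_CLASSES = 6  # blue, red, yellow, white, red-blue, referee

# ResNet18 with the final fully connected layer replaced for six classes
model = torchvision.models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
model.eval()

def tracklet_team(crops):
    """crops: (N, 3, 224, 224) tensor holding the N bounding-box images of
    one tracklet; returns the majority-vote class index for the tracklet."""
    with torch.no_grad():
        preds = model(crops).argmax(dim=1).tolist()
    return Counter(preds).most_common(1)[0][0]

# usage with a dummy tracklet of 8 frames
print(tracklet_team(torch.rand(8, 3, 224, 224)))
\end{verbatim}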
\subsubsection{Training Details}
We use the Adam optimizer with an initial learning rate of .001 and a weight decay of .001 for optimization. The learning rate is reduced by a factor of $\frac{1}{3}$ at regular intervals during the training process. We do not perform data augmentation since performing color augmentation on white away jerseys makes it resemble colored home jerseys.
\begin{table}[!t]
\centering
\caption{Network architecture for the player identification model. k, s, d and p denote kernel dimension, stride, dilation size and padding respectively. $Ch_{i}$, $Ch_{o}$ and $b$ denote the number of channels going into and out of a block, and batch size, respectively.}
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{c}
\hline \textbf{Input: Player tracklet} $b\times 30 \times 3 \times 300\times300$ \\\hline\hline
\textbf{ResNet18 backbone} \\ \hline
\textbf{Layer 1}:
Conv1D \\
$Ch_{i} = 512, Ch_{o} = 512$ \\
(k = $3$,
s = $3$,
p = $0$,
d = $1$) \\
Batch Norm 1D \\
ReLU \\ \hline
\textbf{Layer 2}:
Conv1D \\
$Ch_{i} = 512, Ch_{o} = 512$ \\
(k = $3$,
s = $3$,
p = $1$,
d = $1$) \\
Batch Norm 1D \\
ReLU \\
\hline
\textbf{Layer 3}:
Conv1D \\
$Ch_{i} = 512, Ch_{o} = 128$ \\
(k = $3$,
s = $1$,
p = $0$,
d = $1$) \\
Batch Norm 1D \\
ReLU
\\ \hline
\textbf{Layer 4}:
Fully connected\\
$Ch_{i} = 128, Ch_{o} = 86$ \\ \hline
\textbf{Output} $b\times86$ \\ \hline
\end{tabular}
\label{table:player_branch_network}
\end{table}
\subsection{Player Identification}
\subsubsection{Image Dataset}
\label{subsubsection:image_dataset}
The player identification image dataset \cite{Vats2021MultitaskLF} consists of $54,251$ player bounding boxes obtained from 25 NHL games. The NHL game video frames are of resolution $1280 \times 720$ pixels. The dataset contains $81$ jersey number classes, including an additional $null$ class for no jersey number visible. The player head and bottom of the images are cropped such that only the jersey number (player torso) is visible. Images from $17$ games are used for training, four games for validation and four games for testing. The dataset is highly imbalanced such that the ratio between the most frequent and least frequent class is $92$. The dataset covers a range of real-game scenarios such as occlusions, motion blur and self-occlusions.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/tracklet-data.png}
\end{center}
\caption{Examples of two tracklets in the player identification dataset. \textbf{Top row:} Tracklet represents a case when the jersey number 12 is visible in only a subset of frames. \textbf{Bottom row:} Example when the jersey number is never visible over the whole tracklet. }
\label{fig:tracklet_data}
\end{figure*}
\subsubsection{Tracklet Dataset}
The player identification tracklet dataset consists of $3510$ player tracklets. The tracklet bounding boxes and identities are annotated manually. The manually annotated tracklets simulate the output of a tracking algorithm. The tracklet length distribution is shown in Fig. \ref{fig:tracklet_dist}. The average length of a player tracklet is $191$ frames (6.37 seconds in a 30 frame per second video). It is important to note that the player jersey number is visible in only a subset of tracklet frames. Fig. \ref{fig:tracklet_data} illustrates two tracklet examples from the dataset. The dataset is divided into 86 jersey number classes with one $null$ class representing no jersey number visible. The class distribution is shown in Fig. \ref{fig:class_jersey}. The dataset is heavily imbalanced with the $null$ class consisting of $50.4\%$ of tracklet examples. The training set contains $2829$ tracklets, $176$ validation tracklets and $505$ test tracklets. The game-wise training/testing data split is identical in all the four datasets discussed.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth, height = 3.5cm]{Figures/trackletlength.png}
\end{center}
\caption{Distribution of tracklet lengths in frames of the player identification dataset. The distribution is positively skewed with the average length of a player tracklet as $191$ frames.}
\label{fig:tracklet_dist}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/trackletfreq.png}
\end{center}
\caption{Class distribution in the player tracklet identification dataset. The dataset is heavily imbalanced with the $null$ class (denoted by class $100$) consisting of $50.4\%$ of tracklet examples.}
\label{fig:class_jersey}
\end{figure*}
\subsubsection{Network Architecture}
\label{subsection:network_architecture}
Let $T = \{o_1, o_2, \dots, o_n\}$ denote a player tracklet, where each $o_i$ represents a player bounding box. The player head and bottom in the bounding box $o_i$ are cropped such that only the jersey number is visible. Each resized image $I_{i} \in \mathbb{R}^{300\times 300\times 3}$ corresponding to the bounding box $o_i$ is input into a backbone 2D CNN, which outputs a set of time-ordered features $F = \{f_1, f_2, \dots, f_n\}$, where $f_i \in \mathbb{R}^{512}$. The features $F$ are input into a 1D temporal convolutional network that outputs the probability $p \in \mathbb{R}^{86}$ of the tracklet belonging to each jersey number class. The architecture of the 1D CNN is shown in Fig. \ref{fig:jn_arch}.\par
The network consists of a ResNet18 \cite{resnet} based 2D CNN backbone pretrained on the player identification image dataset (Section \ref{subsubsection:image_dataset}). The weights of the ResNet18 backbone network are kept frozen while training. The 2D CNN backbone is followed by three 1D convolutional blocks each consisting of a 1D convolutional layer, batch normalization, and ReLU activation. Each block has a kernel size of three and dilation of one. The first two blocks have a larger stride of three, so that the initial layers have a larger receptive field to take advantage of a large temporal context. Residual skip connections are added to aid learning. The exact architecture is shown in Table \ref{table:player_branch_network}. Finally, the activations obtained are pooled using mean pooling and passed through a fully connected layer with 128 units. The logits obtained are softmaxed to obtain jersey number probabilities. Note that the model accepts fixed length training sequences of length $n=30$ frames as input, but the training tracklets are hundreds of frames in length (Fig. \ref{fig:tracklet_dist}). Therefore, $n=30$ tracklet frames are sampled with a random starting frame from the training tracklet. This serves as a form of data augmentation since at every training iteration, the network processes a randomly sampled set of frames from an input tracklet.
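A minimal PyTorch sketch of the temporal head is given below, following the layer hyper-parameters in Table \ref{table:player_branch_network}; the frozen ResNet18 feature extractor and the residual skip connections are omitted for brevity, so this is an illustrative reconstruction rather than the exact training code.
\begin{verbatim}
import torch
import torch.nn as nn

class TemporalJerseyHead(nn.Module):
    """1D CNN over per-frame ResNet18 features (512-d) of a 30-frame window."""
    def __init__(self, num_classes=86):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv1d(512, 512, kernel_size=3, stride=3),
            nn.BatchNorm1d(512), nn.ReLU())
        self.block2 = nn.Sequential(
            nn.Conv1d(512, 512, kernel_size=3, stride=3, padding=1),
            nn.BatchNorm1d(512), nn.ReLU())
        self.block3 = nn.Sequential(
            nn.Conv1d(512, 128, kernel_size=3, stride=1),
            nn.BatchNorm1d(128), nn.ReLU())
        self.fc = nn.Linear(128, num_classes)

    def forward(self, feats):        # feats: (batch, 512, 30) time-ordered features
        x = self.block3(self.block2(self.block1(feats)))
        x = x.mean(dim=2)            # temporal mean pooling
        return self.fc(x)            # logits over the 86 jersey number classes

logits = TemporalJerseyHead()(torch.rand(4, 512, 30))
print(logits.shape)                  # torch.Size([4, 86])
\end{verbatim}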
\subsubsection{Training Details}
To address the severe class imbalance present in the tracklet dataset, the tracklets are sampled intelligently such that the $null$ class is sampled with a probability $p_0=0.1$. The network is trained with the help of cross entropy loss. We use Adam optimizer for training with a initial learning rate of $.001$ with a batch size of $15$. The learning rate is reduced by a factor of $\frac{1}{5}$ after iteration numbers 2500, 5000, and 7500. Several data augmentation techniques such as random cropping, color jittering, and random rotation are also used. All experiments are performed on two Nvidia P-100 GPUs.
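The class-balanced sampling and random 30-frame windowing described above can be sketched as follows; the label encoding of the $null$ class and the data structures are assumptions for illustration only.
\begin{verbatim}
import numpy as np

def sample_training_example(tracklets, labels, null_label=100,
                            p_null=0.1, window=30, rng=None):
    """Draw one training window: the over-represented null class is picked
    with probability p_null; a random 30-frame window is then cut from the
    chosen tracklet (tracklets[i] is an array of frames for tracklet i)."""
    rng = rng or np.random.default_rng()
    idx_null = [i for i, y in enumerate(labels) if y == null_label]
    idx_rest = [i for i, y in enumerate(labels) if y != null_label]
    pool = idx_null if (idx_null and rng.random() < p_null) else idx_rest
    i = int(rng.choice(pool))
    frames = tracklets[i]
    start = rng.integers(0, max(1, len(frames) - window + 1))
    return frames[start:start + window], labels[i]
\end{verbatim}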
\subsubsection{Inference}
\label{subsubsection:inference}
During inference, we need to assign a single jersey number label to a test tracklet of $k$ bounding boxes $T_{test} = \{o_1, o_2, \dots, o_k\}$. Here $k$ can be much greater than $n = 30$. Therefore, a sliding window technique is used where the network is applied to the whole test tracklet $T_{test}$ with a stride of one frame to obtain window probabilities $P = \{p_1, p_2, \dots, p_k\}$, with each $p_i \in \mathbb{R}^{86}$. The probabilities $P$ are aggregated to assign a single jersey number class to a tracklet. To aggregate the probabilities $P$, we first determine whether a jersey number is visible in the tracklet. To do this, we train a ResNet18 classifier $C^{im}$ (the same as the backbone discussed in Section \ref{subsection:network_architecture}) on the player identification image dataset. The classifier $C^{im}$ is run on every image of the tracklet. A jersey number is assumed to be absent from a tracklet if the probability of the absence of a jersey number, $C^{im}_{null}$, is greater than a threshold $\theta$ for each image in the tracklet. The threshold $\theta$ is determined using the player identification validation set. For tracklets in which the jersey number is visible, the probabilities are averaged to obtain a single probability vector $p_{jn}$, which represents the probability distribution of the jersey number in the test tracklet $T_{test}$. For post-processing, only those probability vectors $p_i$ are averaged for which $argmax(p_i) \ne null$.
The rationale behind the visibility filtering and post-processing steps is that a large tracklet with hundreds of frames may have the number visible in only a few frames; therefore, a simple averaging of the probabilities $P$ would often output $null$. The proposed inference technique allows the network to ignore the window probabilities corresponding to the $null$ class if a number is visible in the tracklet. The whole algorithm is illustrated in Algorithm \ref{algorithm:inference}.
\begin{algorithm}[]
\setstretch{1}
\SetAlgoLined
\textbf{Input}: Tracklet $T_{test}= \{ o_1,o_2....o_k\}$, image-wise jersey number classifier $C^{im}$, Tracklet id model $\mathcal{P}$, Jersey number visibility threshold $\theta$ \\
\textbf{Output}: Identity $Id$, $p_{jn}$ \\
\textbf{Initialize}: $vis = false$ \\
$P = \mathcal{P}(T_{test})$ \tcp*[h]{using sliding window}
\For{$o_i \quad in \quad T_{test} $}{
\If{$ C^{im}_{null}(o_i) < \theta $}{
$vis = true$\\
$break$
}
}
\If{$vis == true$}{
$P^\prime = \{p_i \in P : argmax(p_i) \ne null\} $ \tcp*[h]{post-processing} \\
$p_{jn} = mean(P^\prime)$\\
$Id = argmax(p_{jn})$
}
\Else{
$Id = null$
}
\caption{Algorithm for inference on a tracklet.}
\label{algorithm:inference}
\end{algorithm}
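For reference, a compact Python version of the aggregation in Algorithm \ref{algorithm:inference} is sketched below; the index of the $null$ class and the array layouts are assumptions made for this sketch.
\begin{verbatim}
import numpy as np

NULL = 85  # assumed index of the no-jersey-number class

def identify_tracklet(window_probs, image_null_probs, theta):
    """window_probs: (num_windows, 86) sliding-window outputs of the 1D CNN;
    image_null_probs: per-image probability of 'no number visible' from C_im;
    theta: visibility threshold."""
    if not np.any(image_null_probs < theta):    # number never visible
        return NULL, None
    keep = window_probs[window_probs.argmax(axis=1) != NULL]  # post-processing
    if keep.size == 0:
        return NULL, None
    p_jn = keep.mean(axis=0)
    return int(p_jn.argmax()), p_jn
\end{verbatim}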
\begin{table*}[!t]
\centering
\caption{Tracking performance of MOT Neural Solver model for the 13 test videos ($\downarrow$ means lower is better, $\uparrow$ means higher is better). }
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{c|c|c|c|c|c}\hline
Video number & IDF1$\uparrow$ & MOTA $\uparrow$& ID-switches $\downarrow$ & False positives (FP)$\downarrow$ & False negatives (FN) $\downarrow$\\\hline
1 & $78.53$ & $94.95$ & $23$ & $100$ & $269$\\
2 & $61.49$ & $93.29$ & $26$ & $48$ & $519$\\
3 & $55.83$ & $95.85$ & $43$ & $197$ & $189$\\
4 & $67.22$ & $95.50$ & $31$ & $77$ & $501$\\
5 & $72.60$ & $91.42$ & $40$ & $222$ & $510$\\
6 & $66.66$ & $90.93$ & $38$ & $301$ & $419$\\
7 & $49.02$ & $94.89$ & $59$ & $125$ & $465$\\
8 & $50.06$ & $92.02$ & $31$ & $267$ & $220$\\
9 & $53.33$ & $96.67$ & $30$ & $48$ & $128$\\
10 & $55.91$ & $95.30$ & $26$ & $65$ & $193$\\
11 & $56.52$ & $96.03$ & $40$ & $31$ & $477$\\
12 & $87.41$ & $94.98$ & $14$ & $141$ & $252$\\
13 & $62.98$ & $94.77$ & $30$ & $31$ & $252$\\
\end{tabular}
\label{table:video_wise_track}
\end{table*}
\begin{table*}[!t]
\centering
\caption{Comparison of the overall tracking performance on the test videos of the hockey player tracking dataset ($\downarrow$ means lower is better, $\uparrow$ means higher is better).}
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{c|c|c|c|c|c}\hline
Method & IDF1$\uparrow$ & MOTA $\uparrow$& ID-switches $\downarrow$ & False positives (FP)$\downarrow$ & False negatives (FN) $\downarrow$\\\hline
SORT \cite{Bewley2016_sort} & $53.7$ & $92.4$ & $673$ & $2403$ & $5826$\\
Deep SORT \cite{Wojke2017simple} & $59.3$ & $94.2$ & $528$ & $1881$ & $4334$\\
Tracktor \cite{tracktor_2019_ICCV} & $56.5$ & $94.4$ & $687$ & $1706$ & $4216$\\
FairMOT \cite{zhang2020fair} & $61.5$ & $91.9$ & $768$ & $1179$ & $7568$\\
MOT Neural Solver \cite{Braso_2020_CVPR} & $\textbf{62.9}$ & $\textbf{94.5}$ & $\textbf{431}$ & $1653$ & $4394$\\
\end{tabular}
\label{table:tracking_results}
\end{table*}
\subsection{Overall System}
\label{subsection:pipeline}
The player tracking, team identification, and player identification methods discussed are combined to track and identify players and referees in broadcast video shots. Given a test video shot, we first run player detection and tracking to obtain a set of player tracklets $\tau = \{T_1, T_2, \dots, T_n\}$. For each tracklet $T_i$ obtained, we run the player identification model to obtain the player identity. We take advantage of the fact that the player roster is available for NHL games through play-by-play data; hence, we can focus only on players actually present on each team. To do this, we construct vectors $v_a$ and $v_h$ that contain information about which jersey numbers are present in the away and home teams, respectively. We refer to the vectors $v_h$ and $v_a$ as the \textit{roster vectors}. Assuming we know the home and away rosters, let $H$ be the set of jersey numbers present in the home team and $A$ be the set of jersey numbers present in the away team. Let $p_{jn} \in \mathbb{R}^{86}$ denote the jersey number probability distribution obtained for the tracklet. Let $null$ denote the no-jersey-number class and $j$ denote the index associated with jersey number $\eta_j$ in the $p_{jn}$ vector.
\begin{equation}
v_h[j] =
\begin{cases}
1, & \text{if } \eta_j \in H\cup\{null\} \\
0, & \text{otherwise,}
\end{cases}
\end{equation}
\noindent and similarly,
\begin{equation}
v_a[j] =
\begin{cases}
1, & \text{if } \eta_j \in A\cup\{null\} \\
0, & \text{otherwise.}
\end{cases}
\end{equation}
We multiply the probability scores $p_{jn} \in \mathbb{R}^{86}$ obtained from the player identification model by $v_h \in \mathbb{R}^{86}$ if the player belongs to the home team, or by $v_a \in \mathbb{R}^{86}$ if the player belongs to the away team. The player's team is determined by the trained team identification model. The player identity $Id$ is determined through \begin{equation}Id=argmax(p_{jn} \odot v_h) \end{equation} (where $\odot$ denotes element-wise multiplication) if the player belongs to the home team, and \begin{equation}Id=argmax(p_{jn} \odot v_a)\end{equation} if the player belongs to the away team. The overall algorithm is summarized in Algorithm \ref{algorithm:holistic}. Fig. \ref{fig:pipeline} depicts the overall system.
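A minimal Python-style sketch of this roster masking is given below; the \texttt{num\_to\_idx} mapping from jersey numbers to class indices and the referee fallback are assumptions made for illustration, not the exact implementation.
\begin{verbatim}
# Sketch of roster-vector masking for one tracklet.
import numpy as np

NUM_CLASSES = 86  # jersey-number classes plus the "null" class

def make_roster_vector(roster, null_idx, num_to_idx):
    """Binary mask that keeps only jersey numbers on the given roster."""
    v = np.zeros(NUM_CLASSES)
    v[null_idx] = 1.0                    # the null class is always allowed
    for num in roster:
        v[num_to_idx[num]] = 1.0
    return v

def identify(p_jn, team, v_h, v_a):
    """Mask tracklet probabilities with the appropriate roster vector."""
    if team == "home":
        return int(np.argmax(p_jn * v_h))
    if team == "away":
        return int(np.argmax(p_jn * v_a))
    return None                          # referee: no jersey identity
\end{verbatim}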
\begin{algorithm}[]
\setstretch{1}
\SetAlgoLined
\textbf{Input}: Input Video $V$, Tracking model $T_r$ , Team ID model $\mathcal{T}$, Player ID model $\mathcal{P}$, $v_h$ , $v_a$\\
\textbf{Output}: Identities $\mathcal{ID} = \{Id_1, Id_2.....Id_n\}$\\
\textbf{Initialize}: $\mathcal{ID} = \phi$\\
$\tau = \{ T_1, T_2,....T_n \} = T_r(V)$\\
\For{$T_i \quad in \quad \tau $}
{
$team = \mathcal{T}(T_i)$\\
$p_{jn} = \mathcal{P}(T_i)$\\
\uIf{$team == home$}{
$Id=argmax(p_{jn} \odot v_h)$
}
\uElseIf{$team == away$}{
$Id=argmax(p_{jn} \odot v_a)$
}
\Else{
$Id = ref$
}
$\mathcal{ID} = \mathcal{ID} \cup Id $
}
\caption{Holistic algorithm for player tracking and identification.}
\label{algorithm:holistic}
\end{algorithm}
\section{Results}
\label{section:results}
\subsection{Player Tracking}
The MOT Neural Solver algorithm is compared with four state-of-the-art tracking algorithms: Tracktor \cite{tracktor_2019_ICCV}, FairMOT \cite{zhang2020fair}, Deep SORT \cite{Wojke2017simple} and SORT \cite{Bewley2016_sort}. Player detection is performed using a Faster-RCNN network \cite{NIPS2015_14bfa6bb} with a ResNet50 based Feature Pyramid Network (FPN) backbone \cite{fpn} pre-trained on the COCO dataset \cite{coco} and fine-tuned on the hockey tracking dataset. The object detector obtains an average precision (AP) of $70.2$ on the test videos (Table \ref{table:player_det_results}). The accuracy metrics used for tracking are the CLEAR MOT metrics \cite{mot} and the Identification F1 score (IDF1) \cite{idf1}. An important metric is the number of identity switches (IDSW); an identity switch occurs when a ground truth ID $i$ is assigned a tracked ID $j$ although the last known assignment was ID $k \ne j$. A low number of identity switches is an indicator of accurate tracking performance. For sports player tracking, IDF1 is considered a better accuracy measure than Multi Object Tracking Accuracy (MOTA) since it measures how consistently the identity of a tracked object is preserved with respect to the ground truth identity. The overall results are shown in Table \ref{table:tracking_results}. The MOT Neural Solver model obtains the highest MOTA score of $94.5$ and IDF1 score of $62.9$ on the test videos.
\subsubsection{Analysis}
From Table \ref{table:tracking_results} it can be seen that the MOTA scores of all methods are above $90\%$. This is because MOTA is calculated as
\begin{equation}
\label{MOT accuracy equation}
MOTA = 1 - \frac{\Sigma_{t}(FN_{t} + FP_{t} + IDSW_{t})}{\Sigma_t GT_{t}}
\end{equation}
where $t$ is the frame index and $GT$ is the number of ground truth objects. The MOTA metric counts detection errors through the sum $FP+FN$ and association errors through $IDSW$. Since false positives (FP) and false negatives (FN) depend heavily on the performance of the player detector, so does the MOTA metric. For hockey player tracking, player detection accuracy is high because players appear sufficiently large in broadcast video and there is a reasonable number of players and referees (with a fixed upper limit) to detect in each frame. Therefore, the MOTA score for all methods is high.\par
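For concreteness, the MOTA formula can be evaluated directly from aggregated error counts, as in the short Python sketch below; the ground-truth total \texttt{num\_gt} is a hypothetical value chosen only so that the result lands near the MOTA reported for video 1 in Table \ref{table:video_wise_track}.
\begin{verbatim}
# Toy MOTA computation from aggregated per-video error counts.
# num_gt (total ground truth boxes) is a hypothetical value.
def mota(fp, fn, idsw, num_gt):
    return 1.0 - (fp + fn + idsw) / num_gt

print(mota(fp=100, fn=269, idsw=23, num_gt=7800))  # ~0.95
\end{verbatim}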
The MOT Neural Solver method achieves the highest IDF1 score of $62.9$ and significantly fewer identity switches than the other methods. This is because pedestrian trackers use a linear motion model assumption, which does not hold for the motion of hockey players. Sharp changes in player motion often lead to identity switches. The MOT Neural Solver model, in contrast, makes no such assumption since it poses tracking as a graph edge classification problem. \par
Table \ref{table:video_wise_track} shows the performance of the MOT Neural Solver for each of the 13 test videos. We perform a failure analysis to determine the cause of identity switches and the low IDF1 score in some videos. The major sources of identity switches are severe occlusions and players going out of the field of view (due to camera panning and/or player movement). We define a pan-identity switch as an identity switch resulting from a player leaving and re-entering the camera field of view due to panning. It is very difficult for the tracking model to maintain identity in these situations since players of the same team look identical, and a player going out of the camera field of view at a particular point in screen coordinates can re-enter at any other point. We estimate the proportion of pan-identity switches to determine the contribution of panning to the total number of identity switches. \par
To estimate the number of pan-identity switches, we assume that the ground truth annotations are accurate and that no annotations are missing. Under this assumption, a significant time gap between two consecutive annotated detections of a player occurs only when the player leaves the camera field of view and comes back again. Let $T_{gt} = \{o_1, o_2,..., o_n \}$ represent a ground truth tracklet, where $o_i = \{x_i,y_i,w_i,h_i,I_i, t_i\}$ represents a ground truth detection. A pan-identity switch is expected to occur during tracking when the difference between the timestamps (in frames) of two consecutive ground truth detections $o_j$ and $o_i$, with $o_i$ occurring after $o_j$, is greater than a sufficiently large threshold $\delta$. That is,
\begin{equation}
(t_i - t_j) > \delta .
\end{equation}
Therefore, the total number of pan-identity switches in a video is approximately calculated as
\begin{equation}
\sum_G \mathbb{1}( t_i - t_j > \delta )
\end{equation}
where the summation is carried out over all ground truth trajectories and $\mathbb{1}$ is an indicator function. Consider video number $9$, which has $30$ identity switches and an IDF1 of $53.33$. We plot the proportion of pan-identity switches (Fig. \ref{fig:vid_9_pidsw}), that is,
\begin{equation}
\frac{\sum_G \mathbb{1}( t_i - t_j > \delta )}{IDSW},
\end{equation}
against $\delta$, where $\delta$ varies between $40$ and $80$ frames. From Fig. \ref{fig:vid_9_pidsw} it can be seen that the majority of the identity switches ($\sim 90\%$ at a threshold of $\delta= 40$ frames) occur due to camera panning, which is the main cause of error. Visually inspecting the video confirmed this. Fig. \ref{fig:all_pidsw} shows the proportion of pan-identity switches for all videos at a threshold of $\delta = 40$ frames. On average, pan-identity switches account for $65\%$ of the identity switches in the videos. This shows that the tracking model is able to handle occlusions and missed detections, with the exception of extremely cluttered scenes.
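A minimal sketch of this gap-counting heuristic is given below; representing each ground truth tracklet as a list of annotated frame indices is an assumption made for the example.
\begin{verbatim}
# Estimate pan-identity switches from ground-truth tracklets.
def count_pan_idsw(gt_tracklets, delta=40):
    count = 0
    for frames in gt_tracklets:          # frames: annotated frame indices
        frames = sorted(frames)
        for prev, curr in zip(frames, frames[1:]):
            if curr - prev > delta:      # long gap: player left the view
                count += 1
    return count

# proportion of pan-identity switches for one video:
# count_pan_idsw(gt_tracklets, delta=40) / total_idsw
\end{verbatim}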
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/pan_idsw_all.png}
\end{center}
\caption{Proportion of pan-identity switches for all videos at a threshold of $\delta = 40$ frames. On average, pan-identity switches account for $65\%$ of identity switches. }
\label{fig:all_pidsw}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/pan_idsw_1v.png}
\end{center}
\caption{Proportion of pan-identity switches vs. $\delta$ for video number $9$. The majority of identity switches ($\sim 90\%$ at a threshold of $\delta= 40$ frames) occur due to camera panning, which is the main cause of error.}
\label{fig:vid_9_pidsw}
\end{figure}
\begin{table}[!t]
\centering
\caption{Player detection results on the test videos. $AP$ stands for Average Precision. $AP_{50}$ and $AP_{75}$ are the average precision at an Intersection over Union (IoU) of 0.5 and 0.75 respectively.}
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{c|c|c}\hline
$AP$ & $AP_{50}$ & $AP_{75}$ \\\hline
$70.2$ & $95.9$ & $87.5$
\end{tabular}
\label{table:player_det_results}
\end{table}
\subsection{ Team Identification}
The team identification model obtains an accuracy of $96.6\%$ on the team identification test set. Table \ref{table:team_id_results} shows the macro-averaged precision, recall and F1 score for the results. The model is also able to correctly classify teams in the test set that are not present in the training set. Fig. \ref{fig:team_results} shows some qualitative results where the network is able to generalize to videos absent from the training/testing data. We compare the model to a color histogram baseline. Each image in the dataset was cropped such that only the upper half of the jersey is visible. A color histogram was obtained from the RGB representation of each image, with $n_{bins}$ bins per image channel. Finally, a support vector machine (SVM) with a radial basis function (RBF) kernel was trained on the normalized histogram features. The optimal SVM hyperparameters and number of histogram bins were determined using grid search with five-fold cross-validation on the combined training and validation sets. The optimal hyperparameters obtained were $C=10$ , $\gamma = .01$ and $n_{bins}=12$. Compared to the SVM model, the deep network approach performs $14.6\%$ better on the test set, demonstrating that the deep network (CNN) based approach is superior to simple handcrafted color histogram features.
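The baseline can be sketched in Python with scikit-learn as follows; image loading and cropping are assumed to be done elsewhere, and the hyperparameter grid shown is illustrative rather than the exact grid searched.
\begin{verbatim}
# Colour-histogram + RBF-SVM baseline (sketch).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def rgb_histogram(image, n_bins=12):
    # image: H x W x 3 uint8 crop of the upper half of the jersey
    feats = [np.histogram(image[..., c], bins=n_bins,
                          range=(0, 255))[0] for c in range(3)]
    hist = np.concatenate(feats).astype(float)
    return hist / hist.sum()

# X: stacked histogram features, y: team labels
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1]}
svm = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
# svm.fit(X_trainval, y_trainval)
\end{verbatim}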
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/team_id.png}
\end{center}
\caption{Team identification results from four different games, none of which are present in the team identification dataset. The model performs well on these data, demonstrating its ability to generalize to out-of-sample data points. }
\label{fig:team_results}
\end{figure*}
\subsection{Player Identification}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/theta_presence.png}
\end{center}
\caption{Jersey number presence accuracy vs. $\theta$ (threshold for filtering out tracklets where jersey number is not visible) on the validation set. The values of $\theta$ tested are $\theta = \{0.0033, 0.01, 0.03, 0.09, 0.27, 0.81\}$. The highest accuracy is attained at $\theta=0.01$. }
\label{fig:theta_presence}
\end{figure}
\begin{table}[!t]
\centering
\caption{Team identification performance on the team identification test set.}
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{c|c|c|c|c}\hline
Method & Accuracy & Precision & Recall & F1 score \\\hline
Proposed & $\mathbf{96.6}$ & $\mathbf{97.0}$ & $\mathbf{96.5}$ & $\mathbf{96.7}$ \\
SVM with color histogram & $82.0$ & $81.7$ & $81.5$ & $81.5$ \\
\end{tabular}
\label{table:team_id_results}
\end{table}
\begin{table*}[!t]
\centering
\caption{Ablation study on different methods of aggregating probabilities for tracklet confidence scores.}
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{c|c|c|c|c}\hline
Method & Accuracy & F1 score & Visibility filtering & Postprocessing\\\hline
Majority voting & $80.59 \%$ & $80.40\%$ & \checkmark & \checkmark \\
Probability averaging & $75.64\%$ & $75.07\%$ & & \\
Proposed w/o postprocessing & $80.80\%$ & $79.12\%$ & \checkmark & \\
Proposed w/o visibility filtering & $50.10\%$ & $48.00\%$ & & \checkmark\\
Proposed & $\textbf{83.17\%}$ & $\textbf{83.19\%}$ & \checkmark & \checkmark\\
\end{tabular}
\label{table:logit_averaging}
\end{table*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth, height=3.5cm]{Figures/success_case.png}
\end{center}
\caption{Some frames from a tracklet where the model is able to identify the number 20 even though the 0 appears at a tilted angle in the majority of bounding boxes. }
\label{fig:success}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth, height=3.5cm]{Figures/26_28.png}
\end{center}
\caption{Some frames from a tracklet where a $6$ appears as an $8$ due to motion blur and folds in the player jersey, leading to a classification error. }
\label{fig:26_28}
\end{figure*}
\label{subsection:player_identification}
The proposed player identification network attains an accuracy of $83.17 \%$ on the test set. We compare the network with Chan \textit{et al.} \cite{CHAN2021113891}, who use a secondary CNN model for aggregating probabilities on top of a CNN+LSTM model. Our proposed inference scheme, on the contrary, does not require any additional network. Since the code and dataset for Chan \textit{et al.} \cite{CHAN2021113891} are not publicly available, we re-implemented the model from scratch and trained and evaluated it on our dataset. The proposed network performs $9.9\%$ better than Chan \textit{et al.} \cite{CHAN2021113891}. The network proposed by Chan \textit{et al.} \cite{CHAN2021113891} processes shorter sequences of length 16 during training and testing, and therefore exploits less temporal context than the proposed model with sequence length 30. Also, the secondary CNN used by Chan \textit{et al.} \cite{CHAN2021113891} for aggregating tracklet probability scores easily overfits on our dataset. Adding $L2$ regularization while training the secondary CNN proposed in Chan \textit{et al.} \cite{CHAN2021113891} on our dataset also did not improve the performance. This is because our dataset is half the size and more skewed than the one used in Chan \textit{et al.} \cite{CHAN2021113891}, with the $null$ class comprising half the examples in our case. The higher accuracy indicates that the proposed network, the training methodology involving intelligent sampling of the $null$ class, and the proposed inference scheme work better on our dataset. Additionally, temporal 1D CNNs have been reported to perform better than LSTMs in handling long-range dependencies \cite{BaiTCN2018}, which is consistent with our results.\par
The network is able to identify digits despite motion blur and unusual angles (Fig. \ref{fig:success}). Upon inspecting the error cases, we see that when a two-digit jersey number is misclassified, the predicted number and the ground truth often share one digit. This phenomenon is observed in $85\%$ of misclassified two-digit jersey numbers. For example, 55 is misclassified as 65, and 26 is misclassified as 28 since 6 often looks like 8 (Fig. \ref{fig:26_28}) because of occlusions and folds in player jerseys.
\begin{table*}[!t]
\centering
\caption{Ablation study on different kinds of data augmentations applied during training. Removing any one of the applied augmentation techniques decreases the overall accuracy and F1 score.}
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{c|c|c|c|c}\hline
Accuracy & F1 score & Color & Rotation & Random cropping \\\hline
$\textbf{83.17\%}$ & $\textbf{83.19\%}$ & \checkmark & \checkmark & \checkmark \\
$81.58\%$ & $82.00\%$ & \checkmark & \checkmark & \\
$81.58\%$ & $81.64\%$ & \checkmark & & \checkmark\\
$81.00\%$ & $81.87\%$ & & \checkmark & \checkmark\\
\end{tabular}
\label{table:augmentation}
\end{table*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/idsw.png}
\end{center}
\caption{Example of a tracklet where the same identity is assigned to two different players due to an identity switch. This kind of error in player tracking carries over to player identification, since a single jersey number cannot be associated with this tracklet.}
\label{fig:id_switch}
\end{figure*}
The value of $\theta$ (the threshold for filtering out tracklets where the jersey number is not visible) is determined using the validation set. In Fig. \ref{fig:theta_presence}, we plot the percentage of validation tracklets correctly classified for the presence of a jersey number versus the parameter $\theta$. The values of $\theta$ tested are $\theta = \{0.0033, 0.01, 0.03, 0.09, 0.27, 0.81\}$. The highest accuracy of $95.64\%$ is attained at $\theta = 0.01$. A higher value of $\theta$ results in more false positives for jersey number presence, while a $\theta$ lower than $0.01$ results in more false negatives. We therefore use the value of $\theta=0.01$ for inference on the test set. \par
\subsubsection{Ablation studies}
We perform ablation studies in order to study how data augmentation and inference techniques affect the player identification network performance.
\textbf{Data augmentation}: We apply several data augmentation techniques to boost player identification performance, such as color jittering, random cropping, and random rotation of each image in a tracklet by $\pm 10$ degrees. Note that since we are dealing with temporal data, these augmentation techniques are applied per tracklet instead of per tracklet-image. In this section, we investigate the contribution of each augmentation technique to the overall accuracy. Table \ref{table:augmentation} shows the accuracy and weighted macro F1 score after removing each of these augmentation techniques. It is observed that removing any one of the applied augmentation techniques decreases the overall accuracy and F1 score.
\textbf{Inference technique}:
We perform an ablation study to determine how our tracklet score aggregation scheme of averaging probabilities after filtering out tracklets based on jersey number presence compares with other techniques. Recall from Section \ref{subsubsection:inference} that for inference, we perform \textit{visibility filtering} of tracklets and evaluate the model only on tracklets where the jersey number is visible. We also include a \textit{post-processing} step where only those window probability vectors $p_i$ for which $argmax(p_i) \ne null$ are averaged. The other baselines tested are described below:
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{Figures/team_error.png}
\end{center}
\caption{Example of a tracklet where the team is misclassified. Here, the away team player (white) is occluded by the home team player (red), which causes the team identification model to output the incorrect result. Since the original tracklet contains hundreds of frames, only a subset of tracklet frames is shown. }
\label{fig:team_error}
\end{figure*}
\begin{enumerate}
\item Majority voting: after filtering tracklets based on jersey number presence, the argmax of each window probability vector $p_i \in P$ for a tracklet is taken to obtain window predictions, after which a simple majority vote is taken to obtain the final prediction. For post-processing, the majority vote is only taken over those window predictions that are not the $null$ class.
\item Only averaging probabilities: this is equivalent to our proposed approach without visibility filtering and post-processing.
\end{enumerate}
The results are shown in Table \ref{table:logit_averaging}. We observe that our proposed aggregation technique performs the best, with an accuracy of $83.17\%$ and a macro weighted F1 score of $83.19\%$. Majority voting shows lower performance, with an accuracy of $80.59\%$, even after visibility filtering and post-processing are applied. This is because majority voting does not take into account the overall window-level probabilities when obtaining the final prediction, since it applies the argmax operation to each probability vector $p_i$ separately. Simple probability averaging without visibility filtering and post-processing obtains a $7.53\%$ lower accuracy, demonstrating the advantage of the visibility filtering and post-processing steps. The proposed method without the post-processing step lowers the accuracy by $2.37\%$, indicating that the post-processing step is integral to the overall inference pipeline. The proposed inference technique without visibility filtering performs poorly when post-processing is added, with an accuracy of just $50.10\%$. This is because performing post-processing on every tracklet, irrespective of jersey number visibility, prevents the model from assigning the $null$ class to any tracklet, since the probabilities of the $null$ class are never included in the aggregation. Hence, tracklet filtering is an essential precursor to the post-processing step.
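The compared aggregation schemes can be sketched as follows, where \texttt{P} is the list of window probability vectors for one tracklet and \texttt{null\_idx} is the index of the no-jersey-number class; both names are assumptions made for the example.
\begin{verbatim}
# Sketch of the compared tracklet aggregation schemes.
import numpy as np

def aggregate(P, null_idx, scheme="proposed"):
    P = np.asarray(P)                       # windows x 86 probabilities
    non_null = P.argmax(axis=1) != null_idx
    if scheme == "majority":                # vote over non-null windows
        preds = P.argmax(axis=1)
        preds = preds[non_null] if non_null.any() else preds
        return np.bincount(preds).argmax()
    if scheme == "averaging":               # plain probability averaging
        return P.mean(axis=0).argmax()
    # proposed post-processing: average only non-null windows
    kept = P[non_null] if non_null.any() else P
    return kept.mean(axis=0).argmax()
\end{verbatim}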
\subsection{Overall system}
We now evaluate the holistic pipeline consisting of player tracking, team identification, and player identification. This evaluation differs from the evaluation in Section \ref{subsection:player_identification} since the player tracklets are now obtained from the player tracking algorithm (rather than being manually annotated). The accuracy metric is the percentage of tracklets correctly classified by the algorithm.
Table \ref{table:pipeline_table} shows the performance of the holistic pipeline. Taking advantage of the player roster improves the overall accuracy on the test videos by $4.9\%$. For video number $11$, the improvement in accuracy is $24.44 \%$. This is because the vectors $v_a$ and $v_h$ help the model focus only on the players present in the home and away rosters. There are three main sources of error: \begin{enumerate}
\item Tracking identity switches, where the same ID is assigned to two different player tracks. These are illustrated in Fig. \ref{fig:id_switch};
\item Misclassification of the player's team, as shown in Fig. \ref{fig:team_error}, which causes the player jersey number probabilities to get multiplied by the incorrect roster vector; and
\item Incorrect jersey number prediction by the network.
\end{enumerate}
\begin{table}[!t]
\centering
\caption{Overall player identification accuracy for the 13 test videos. The mean accuracy across the videos increases by $4.9 \%$ after including the player roster information. }
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{c|c|c}\hline
Video number & Without roster vectors & With roster vectors\\\hline
1 & $90.6 \%$ & $\textbf{95.34\%}$ \\
2 & $57.1\%$ & $\textbf{71.4\%}$ \\
3 & $84.2\%$ & $\textbf{85.9\%}$ \\
4 & $74.0\%$ & $\textbf{78.0\%}$ \\
5 & $79.6\%$ & $\textbf{81.4\%}$ \\
6 & $88.0\%$ & $88.0\%$ \\
7 & $68.6\%$ & $\textbf{74.6\%}$ \\
8 & $91.6\%$ & $\textbf{93.75\%}$ \\
9 & $88.6\%$ & $\textbf{90.9\%}$ \\
10 & $86.04\%$ & $\textbf{88.37\%}$ \\
11 & $44.44\%$ & $\textbf{68.88\%}$ \\
12 & $84.84\%$ & $84.84\%$ \\
13 & $75.0\%$ & $75.0\%$ \\ \hline
Mean & $77.9\%$ & $\textbf{82.8\%}$
\end{tabular}
\label{table:pipeline_table}
\end{table}
\section{Conclusion}
We have introduced and implemented an automated offline system for the challenging problem of player tracking and identification in ice hockey. The system takes as input broadcast hockey video clips from the main camera view and outputs player trajectories on screen along with their teams and identities. However, there is room for improvement. Tracking players when they leave the camera view and identifying players when their jersey number is not visible remain significant challenges.
In future work, identity switches resulting from camera panning could be reduced by tracking players directly in ice-rink coordinates using an automatic homography registration model \cite{jiang2020optimizing}. Additionally, player locations on the ice rink could be used as a feature for identifying players.
\IEEEpeerreviewmaketitle
\section*{Acknowledgment}
This work was supported by Stathletes through the Mitacs Accelerate Program and Natural Sciences
and Engineering Research Council of Canada (NSERC). We also acknowledge Compute
Canada for hardware support.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
{\small
\bibliographystyle{plain}
\section{Introduction}
\label{sec:sec1}
\subsection{Combined Competition Format}
\label{sec1pt1}
The 2020 Summer Olympics in Tokyo, Japan marked the first appearance of
sport climbing on the Olympic stage \citep{ioc2016}. This sport is broken down into
three distinct disciplines: speed climbing, bouldering, and lead
climbing. However, rather than granting a separate set of medals for
each discipline, the International Olympic Committee (IOC) only allowed
one set of medals each for men and women. As a result, rather than
choosing only one of the three concentrations, all three disciplines were
combined into a single event. Under this
``triathlon'' format, every climber must compete in all three
concentrations, and their individual score is determined as the product
of the ranks across the three disciplines, with the lowest rank product
declared the winner.
The first discipline, speed climbing, takes place on a standardized 15
meter-high wall where the racers get one chance to try to reach the top
of the wall as quickly as possible. At Tokyo 2020, speed climbing was
contested in a head-to-head format under a single-elimination
bracket tournament structure. Next, in bouldering, contestants have a
fixed amount of time to attempt to reach the top of a climbing problem
on a 4.5 meter-high wall in as few attempts as possible, without the use
of ropes. Ties are further broken by the number of ``zone holds,'' which
are holds approximately halfway through each course. Finally, in lead
climbing, each athlete is given six minutes and one attempt to climb as
high as they can on a wall of height 15 meters. A climber gets one point
for each hold they reach, and the participant that manages to reach
the highest point on the wall wins the lead discipline. If there is a
tie, the competitor with the fastest elapsed time wins.
The decision to combine the three climbing events and only award one set
of medals each for men and women in the Olympics has received a large
amount of criticism from climbing athletes all over the world. In a
series of interviews conducted by Climbing Magazine in 2016 \citep{blanchard2016},
a number of climbers shared their thoughts and concerns about the
new Olympic climbing format. Legendary climber Lynn Hill compared the
idea of combining speed climbing, bouldering, and lead climbing to
``asking a middle distance runner to compete in the sprint.'' She then
added, ``Speed climbing is a sport within our sport.'' Other climbers
also hold the same opinion as Hill regarding speed climbing, using words
and phrases like ``bogus,'' ``a bummer,'' ``less than ideal,'' ``not in
support,'' and ``cheesy and unfair'' to describe the new combined
format. Courtney Woods stated, ``Speed climbers will have
the biggest disadvantage because their realm isn't based on difficult
movements.'' Mike Doyle believed, ``Honestly, the people that will
suffer the most are the ones that focus only on speed climbing. Those
skills/abilities don't transfer as well to the other disciplines.'' The
climbers also expressed their hope for a change in the competition
format in future climbing tournaments.
\subsection{Rank-product Scoring}
\label{rank-product-scoring}
At the 2020 Summer Olympics, both the men's and women's sport climbing competitions begin with 20 climbers who previously qualified for the Olympics through qualifying events held in 2019 and 2020. All 20 athletes compete in each of the three disciplines in the qualification round, and their performances in each concentration are ranked from 1 to 20. A competitor's combined score is computed as the product of their ranks in each of the three events; specifically,
\begin{equation} \label{eq:1}
Score_i = R^S_i\times R^B_i\times R^L_i,
\end{equation}
where $R^S_i$, $R^B_i$, and $R^L_i$ are the ranks of the $i$-th competitor in speed climbing, bouldering, and lead climbing, respectively.
The 8 qualifiers with the lowest score in terms of product of ranks across the three disciplines advance to the final round, where they once again compete in all three events. Similar to qualification, the overall score for each contestant in the final stage is determined by multiplying the placements of speed, bouldering, and lead disciplines, and the athletes are ranked from 1 to 8. The climbers with the lowest, second lowest, and third lowest product of ranks in the final round win the gold, silver, and bronze medals, respectively. This type of rank aggregation method heavily rewards high finishes and relatively ignores poor finishing results. For instance, if climber A finished 1st, 20th, and 20th and climber B finished 10th, 10th, and 10th, climber B would have a score of 1000 whereas climber A would have a much better score of 400, despite finishing last in 2 out of 3 of the events.
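The example above can be verified directly; the short sketch below simply evaluates Equation (\ref{eq:1}) for the two hypothetical climbers.
\begin{verbatim}
# Rank-product score, Equation (1), for the two hypothetical climbers.
def rank_product(speed_rank, boulder_rank, lead_rank):
    return speed_rank * boulder_rank * lead_rank

print(rank_product(1, 20, 20))   # climber A: 400
print(rank_product(10, 10, 10))  # climber B: 1000
\end{verbatim}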
To the best of our knowledge, no sporting event, team or individual, uses the product of ranks to determine an overall ranking. There are examples of team sports that use rank-sum scoring to determine the winning team, such as cross country, where the squad with the lowest sum of ranks of its top five runners is awarded first place. \cite{hammond2007}, \cite{mixon2012}, \cite{Boudreau2014}, \cite{boudreau2018}, and \cite{Medcalfe2020} pointed out several problems with rank-sum scoring in cross country, most notably violations of social choice principles. In addition, some individual sports such as the decathlon and heptathlon rely on a sum of scores from the ten or seven events. However, these scores are not determined based on the ranks of the competitors. That is, a decathlete's score is entirely based upon their own times, distances, and heights, and their overall score will remain the same regardless of the performance of other individuals \citep{westera2006}. On the other hand, there are other individual competitions consisting of several combined events, such as CrossFit, that do base their scoring on ranks. In each event, points are earned based on the competitor's rank in the event according to a scoring table, and a contestant's final score is based on the sum of their scores across all the events \citep{crossfit2021}.
To date, several articles have proposed alternative rank aggregation methods for the sport climbing combined format. First, \cite{Parker2018} investigated a variety of approaches to rank and score climbing athletes, namely, the geometric mean, Borda count, linear programming, geometric median, top score method, ABS10, and what they refer to as the ``merged method.'' Ultimately, they recommended the merged method, a combination of the top score and ABS10 methods, as a reasonable scoring approach for climbing due to its predictive power and satisfaction of social choice principles. More recently, \cite{stin2022} compared the current rank-product scoring method with rank-sum scoring and observed a dramatic change to the outcome of the men's sport climbing final at the 2020 Olympics when the sum of discipline rankings was used instead of the product. Specifically, Tomoa Narasaki, who finished fourth overall at Tokyo 2020, would have won the gold medal if rank-sum had been used as the method of scoring instead of rank-product. Furthermore, they proposed an alternative ranking-based scoring scheme that computes the sum of the square roots of each climber's rankings as their overall score.
As a side note, there are applications of the rank-product statistic in other fields. For example, the rank-product statistic can be used to analyze replicated microarray experiments and identify differentially expressed genes. See \cite{Breitling2004} for more information on this statistical procedure.
In this paper, we perform statistical analyses to investigate the limitations of sport climbing's combined competition format and ranking system, and we evaluate whether the concerns of the professional climbers were valid. The manuscript is outlined as follows. We begin with a simulation study to examine the key properties of rank-product scoring in sport climbing in Section \ref{sec:sec2}. Our analyses of past climbing tournament data are then presented in Section \ref{sec:sec3}. Finally, in Section \ref{sec:sec4}, we provide a summary of our main findings as well as some discussion to close out the paper.
\section{Simulation Study}
\label{sec:sec2}
In this section, we perform a simulation study to examine the rankings
and scoring for climbers in both qualification and final rounds. There
are two crucial assumptions to our simulation approach:
\vspace{1mm}
\begin{itemize}
\item
\textbf{Uniform ranks}: The ranks within each discipline follow a
discrete uniform distribution with lower and upper bounds of
\([1, 20]\) and \([1, 8]\) for the qualification and final rounds,
respectively.
\vspace{1mm}
\item
\textbf{Correlation between events}: We want to introduce dependence
in the ranks between the disciplines. In particular, based on the
claim that ``Speed climbing is a sport within our sport'' mentioned in Section \ref{sec:sec1}, a correlation between bouldering and lead climbing is
assumed, whereas speed climbing is assumed completely independent of
the other two events.
\end{itemize}
\vspace{1mm}
In order to generate data that satisfy the assumptions listed above, we performed simulations using the method of copulas. By definition, a copula is a multivariate distribution function with standard uniform univariate margins \citep{copbook}. In this simulation study, bouldering and lead ranks were considered to have discrete uniform marginal distributions with a non-zero, positive correlation between the ranks. We chose Kendall's $\tau$ \citep{kendall1938} as our measurement of association between the bouldering and lead disciplines. Our examination considers five different levels of correlation between bouldering and lead climbing: \(0\) (no correlation), \(0.25\) (low), \(0.5\) (moderate), \(0.75\) (high), and \(1\) (perfect).
Using functions from the \texttt{R} package \texttt{copula} \citep{coppkg}, a copula with the desired correlation structure was first calibrated for each correlation value. A random sample of either 20 (for qualification) or 8 (for final) values from the copula was then drawn and the values were ranked within each discipline. Finally, this process was repeated \(10000\) times for each correlation
level. After the simulations were complete, the total scores for every
simulated round and the final standings for the climbing athletes were
calculated. The simulation results allow us to explore different
properties of the sport climbing rank-product scoring system, including
the distributions of total score for qualifying and final rounds, and
the probabilities of advancing to the final stage and winning a medal, given
certain conditions.
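A condensed Python analogue of one simulated round is sketched below (the analyses in the paper use the \texttt{R} package \texttt{copula}); it links bouldering and lead through a Gaussian copula, converting Kendall's $\tau$ to the corresponding correlation parameter, and obtains discipline ranks by ranking the simulated margins.
\begin{verbatim}
# Python sketch of one simulated round (the paper uses R's copula package).
import numpy as np
from scipy.stats import rankdata

def simulate_round(n_climbers, tau, rng):
    rho = np.sin(np.pi * tau / 2)        # Kendall's tau -> Gaussian copula rho
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n_climbers)
    boulder = rankdata(z[:, 0])          # ranks 1..n within each discipline
    lead = rankdata(z[:, 1])
    speed = rng.permutation(n_climbers) + 1   # independent of the other two
    return speed * boulder * lead        # rank-product scores

rng = np.random.default_rng(0)
# 10000 simulated finals at the moderate correlation level
final_scores = np.array([simulate_round(8, tau=0.5, rng=rng)
                         for _ in range(10000)])
\end{verbatim}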
After obtaining the simulation output, we summarized the advancement
probabilities for both qualifying and final rounds, looking for trends
over the correlation values. Figure \ref{fig:fig1} displays the
probability of finishing first in the qualification and final rounds for each of
the 5 correlation values we considered, color coded by whether a climber
wins the speed event or wins one of bouldering and lead concentrations.
It is easy to see that for both rounds, as the correlation magnitude
increases, the win probability given a first-place finish for lead
climbing or bouldering also goes up. In contrast, the probability of
winning each round given a speed victory decreases as the correlation
between bouldering and lead climbing increases. Moreover, if the two
disciplines lead and bouldering are perfectly correlated, the top-ranked
speed contestant in the final stage only has a 20.5\% chance to win gold,
compared to a 90.3\% chance for a finalist that ranks first in
bouldering and lead climbing. This suggests a potentially unfair
disadvantage for speed specialists when participating in a competition
structure that, as many believed, possesses little skill crossover
between speed and the other two climbing concentrations.
\begin{figure}
\centering
\includegraphics{corr-1.pdf}
\caption{\label{fig:fig1}The probabilities of winning qualification
(i.e.~finishing first in qualification) and final rounds (i.e.~winning a
gold medal), given that a climber finishes first in speed versus
finishes first in bouldering or lead. Results were obtained from
simulations for each Kendall correlation value.}
\end{figure}
To further examine the advancement probabilities and scores for Olympics
climbing, we implemented the same copula simulation procedure based on empirical results from past climbing competition data. In particular, we used the women's climbing event at Tokyo 2020 as a case study, and obtained Kendall rank correlation coefficients between bouldering and lead climbing of $0.526$ and $0.214$ for the women's qualification and final rounds, respectively. We are specifically interested in the following questions:
\vspace{1mm}
\begin{itemize}
\item
For a qualifier, what is the probability that they advance to the
final round (i.e.~finish in the top 8 of the qualification round), given that
they win any discipline?
\vspace{1mm}
\item
For a finalist, what is the probability that they win a medal
(i.e.~finish in the top 3 of the final round), given that they win any
discipline?
\end{itemize}
\vspace{1mm}
Our simulation results, as illustrated by Figure \ref{fig:fig2}, show
that a climber is almost guaranteed to finish in the top 8 of
qualification and advance to the final round if they win at least one of
the three climbing concentrations (99.5\% chance). Regarding the final round,
a climber is also very likely to claim a top 3 finish and bring home a
medal if they win any event (84.8\% chance). Moreover, we notice that if
a climber wins any discipline, they are also more likely to finish first
overall than any other positions in the eventual qualification and final
rankings. This shows how significant winning a discipline is to the
overall competition outcome for any given climber.
In addition, we are interested in examining the distribution of the
total score for both qualification and final rounds. Figure
\ref{fig:fig3} is a summary of the expected score for each qualification
and final placement. According to our simulations, on average, the
qualification score that a contestant should aim for in order to move on
to the final round is 453 (for 8th rank). Furthermore, we observe that
in order to obtain a climbing medal, the average scores that put a
finalist in position to stand on the tri-level podium at the end of the
competition are 9, 19, and 33 for gold, silver, and bronze medals,
respectively.
\begin{figure}
\centering
\includegraphics{probs-1.pdf}
\caption{\label{fig:fig2}The distributions for the probability of
finishing at each rank of both qualification and final rounds, given
that a climber wins any discipline, obtained from simulations. The
probability of finishing exactly at each given rank is coded blue,
whereas the probability of finishing at or better than each given rank
is coded orange. Results shown here are for Kendall rank correlation coefficients between bouldering and lead climbing of $0.526$ and $0.214$ for qualification and final rounds, with speed assumed independent of
both other events.}
\end{figure}
\begin{figure}
\centering
\includegraphics{scores-1.pdf}
\caption{\label{fig:fig3}The average scores (with 95\% confidence intervals) for the top 10 qualification ranks and all 8 final ranks, obtained from simulations. Results shown here are for Kendall rank correlation coefficients between bouldering and lead climbing of $0.526$ and $0.214$ for qualification and final rounds, with speed assumed independent of both other events.}
\end{figure}
\section{Data Analysis}
\label{sec:sec3}
\subsection{Correlations}
\label{combined-competition-format}
Throughout this section, we will be using data from the Women's Qualification at the 2020 Summer Olympics \citep{2021tokyo} as a case study for examining the relationship between the climber rankings in each individual discipline and the overall standings. The main attributes of this data are the name and nationality of the climbers; the finishing place of climbers in speed climbing, bouldering, and lead climbing; the total score (which equals the product of the discipline ranks); the overall placement; and the performance statistics associated with speed (race time), bouldering (tops-zones-attempts), and lead climbing (highest hold reached). We utilize this data to analyze the correlations between the event ranks and final table position, as well as to look at how often the final orderings change if one athlete is removed and the remaining climbers' ranks and total scores in each discipline are re-calculated.
Figure \ref{fig:fig4} is a multi-panel matrix of scatterplots of the
ranks of the individual events and the final women's qualification
standings. We use Kendall's $\tau$ as our measure of
ordinal association between the ranked variables. Table \ref{tab:tab1}
shows the Kendall rank correlation coefficients between the overall rank
and the ranks of the speed, bouldering, and lead disciplines, along with
corresponding \(95\%\) confidence intervals obtained from bootstrapping \citep{efron1986}.
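The following Python sketch shows one way to compute these quantities with SciPy; the percentile bootstrap and the number of resamples are illustrative assumptions, as the paper does not specify the exact bootstrap variant used.
\begin{verbatim}
# Kendall's tau with a percentile bootstrap confidence interval (sketch).
import numpy as np
from scipy.stats import kendalltau

def bootstrap_tau_ci(x, y, n_boot=10000, seed=0):
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    taus = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))   # resample climbers
        tau_b, _ = kendalltau(x[idx], y[idx])
        taus.append(tau_b)
    return np.percentile(taus, [2.5, 97.5])

# tau, p_value = kendalltau(overall_rank, speed_rank)
\end{verbatim}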
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{scatter-1.pdf}
\caption{\label{fig:fig4}Scatterplots with smoothed fitting curves and 95\% confidence intervals of overall rank and speed, bouldering, and lead ranks. Each data point represents a climber that competed in the Women's Qualification at Tokyo 2020.}
\end{figure}
\begin{table}[tbp]
\caption{Kendall's $\tau$ values, along with correlation
test statistics, p-values, and bootstrapped 95\% confidence intervals,
for the overall rank and the rank of speed, bouldering, and lead
disciplines of Women's Qualification at Tokyo 2020.}%
\label{tab:tab1}
\centering
\begin{tabular}[]{@{}lrrrr@{}}
\toprule
Discipline & Kendall's $\tau$ & Test Statistic & $p$-value & Bootstrapped
95\% CI \\
\midrule
Speed & $0.147$ & $109$ & $0.386$ & $(-0.191, 0.469)$ \\
Bouldering & $0.432$ & $136$ & $0.007$ & $(0.107, 0.691)$ \\
Lead & $0.463$ & $139$ & $0.004$ & $(0.112, 0.753)$ \\
\bottomrule
\end{tabular}
\end{table}
It is clear that there exists a fairly strong, positive
association between the final rank and the ranks of both bouldering
(\(\tau = 0.432\), \(p\)-value \(=0.007\), Bootstrapped \(95\%\) CI:
$(0.107, 0.691)$) and lead climbing (\(\tau = 0.463\), \(p\)-value
\(=0.004\), Bootstrapped \(95\%\) CI: $(0.112, 0.753)$). This implies
that climbers with high placements in both bouldering and lead also tend
to finish at a higher ranking spot overall.
In contrast, the correlation with the final rank is not as strong for
speed climbing as for the other two events (\(\tau = 0.147\)), and there is
insufficient evidence for an association between the rank of speed
climbing and the overall rank (\(p\)-value \(=0.386\), Bootstrapped
\(95\%\) CI: $(-0.191, 0.469)$). Thus, this offers evidence that speed
climbers are at a disadvantage under this three-discipline combined
format, compared to those with expertise in the other two
concentrations. Hence, it appears that the concerns of the climbers
mentioned in Section \ref{sec1pt1} may be valid.
In addition, we perform principal component analysis (PCA) to summarize
the correlations among a set of observed performance variables
associated with the three climbing disciplines. In particular, using the
data from the Women's Qualification at Tokyo 2020, we look at the
racing time (in seconds) for speed; the number of successfully completed
boulders (``tops''); and the number of holds reached for lead.
Figure \ref{fig:fig5} is a PCA biplot showing the PC scores of the
climbers and loadings of the skill variables. We notice that the lead
and bouldering performances strongly influence PC1, while speed time is
the only variable contributing substantially to PC2, separated from the
other two skills. Moreover, since the two vectors representing the
loadings for the bouldering and lead performances are close and form a
small angle, the two variables they represent, bouldering tops and lead
holds reached, are positively correlated. This implies that a climber
that does well in bouldering is also very likely to deliver a good
performance in lead climbing.
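A brief sketch of this analysis in Python is given below; standardizing the three variables before PCA is an assumption made here because they are measured on very different scales.
\begin{verbatim}
# PCA on the three performance variables (sketch).
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# X: rows = climbers, columns = [speed_time_sec, boulder_tops, lead_holds]
def pca_biplot_parts(X):
    Z = StandardScaler().fit_transform(X)
    pca = PCA(n_components=2)
    scores = pca.fit_transform(Z)        # climber coordinates (PC scores)
    loadings = pca.components_.T         # variable loadings for the biplot
    return scores, loadings, pca.explained_variance_ratio_
\end{verbatim}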
\begin{figure}
{\centering \includegraphics[width=0.8\linewidth]{pca-1.pdf}
}
\caption{\label{fig:fig5}PCA Biplot for Women's Qualification at Tokyo 2020.}\label{fig:pca}
\end{figure}
\subsection{Leave-one-climber-out
Analysis}
\label{leave-one-climber-out-analysis}
Another question that we are interested in investigating is ``What would happen to the rankings if a single climber is removed?" There is a connection between this situation and the idea of independence of irrelevant alternatives (IIA) in social choice theory. The IIA criterion is a property of a voting system which states that after a winner is determined, if one of the losing candidates drops out and the votes are recounted, there should not be a change in the winner. First mentioned by \cite{arrow1951}, the IIA condition is also known as Luce's choice axiom \citep{luce1959} in probability theory, and it has had a number of applications in the fields of decision theory, economics, and psychology over the years. We notice a link between the concept of IIA and the topic of ranking system in sports. As an illustration, suppose we have 3 players A, B, and C participating in a competition. If A finishes in the first place and C is later disqualified and removed, A should still win. If the original winner (A) loses the modified competition (with C removed), then the Independence of Irrelevant Alternatives has been violated. For our particular case, this type of analysis can be helpful in examining the overall outcome for a climbing contest, specifically how a disqualification can affect the standings of medalists in the final round.
To investigate whether the rank-product aggregation method of sport climbing violates social choice principles, we use data from the 2018 Youth Olympics women's climbing competition \citep{2018youth}. This event also implemented the combined format and rank-product scoring system, but consisted of 21 and 6 climbers competing in the qualification and final rounds, respectively, rather than 20 and 8 like the Tokyo 2020 Olympics. The analysis is performed as follows. After an athlete is dropped (by their original placement), the ranks for each discipline of the remaining players are re-calculated. The new total scores are then obtained as before, by multiplying the three event ranks, which then determines the new overall finishing positions. For each rank elimination case, the association between the original and modified standings is summarized, and a distribution of Kendall correlation between the two sets of rankings is obtained for each round. Figure \ref{fig:fig6} reveals that there is not always a perfect concordance between the original orderings and new overall placements of the remaining climbers after the person with a specified rank is removed. In particular, a non-perfect agreement between the two sets of rankings is observed in two-thirds and one-half of the rank exclusion instances for the qualification and final rounds, respectively.
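The leave-one-climber-out procedure can be sketched as follows; representing the discipline ranks as a climbers-by-disciplines array and breaking ties in the recomputed rank products with minimum ranks are simplifying assumptions for the example (the official tie-breaking rules are more involved).
\begin{verbatim}
# Leave-one-climber-out re-ranking (sketch).
import numpy as np
from scipy.stats import kendalltau, rankdata

def rerank_without(ranks, drop):
    kept = np.delete(ranks, drop, axis=0)
    new_ranks = np.column_stack([rankdata(kept[:, j])
                                 for j in range(kept.shape[1])])
    scores = new_ranks.prod(axis=1)
    return rankdata(scores, method="min")     # new overall placements

def agreement_with_original(ranks):
    ranks = np.asarray(ranks)
    original = rankdata(ranks.prod(axis=1), method="min")
    taus = []
    for drop in range(len(ranks)):
        new = rerank_without(ranks, drop)
        old = rankdata(np.delete(original, drop), method="min")
        taus.append(kendalltau(old, new)[0])  # 1.0 means order unchanged
    return taus
\end{verbatim}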
Furthermore, Figure \ref{fig:fig7} shows the modified versions of the rankings after
each ranked climber is excluded for the Women's Final at the 2018 Youth
Olympics. We observe from this plot that removing a single contestant
changes the rankings considerably, especially in terms of the order of
medalists. One particularly interesting case is when an athlete's
finishing position changes when someone who originally finished behind them drops
out. This situation is illustrated by panel 5 of the women's
competition, where the fifth-place climber, Krasovskaia, was excluded;
and Meul, whose actual placement was fourth, moved up to the second
spot and would have claimed the silver medal. Moreover, this clearly
demonstrates that the rank-product scoring method of Olympics sport
climbing violates the independence of irrelevant alternatives criterion.
To reiterate, Meul actually finished fourth in the 2018 Youth Olympics.
However, \textbf{with the exact same performance} in the final round,
she would have finished in second place, winning a medal no less, if Krasovskaia,
who finished behind Meul in fifth, had simply not competed.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{kendist-1.pdf}
\caption{\label{fig:fig6}This figure shows the distributions of Kendall's $\tau$ measuring the association between initial placements and new overall rankings after the removal of each rank for the women's qualification and final climbing rounds at the 2018 Youth Olympics. A perfect agreement of $\tau=1$ between the two sets of rankings is observed in 7 out of 21 rank elimination cases for qualification and 3 out of 6 for final.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{iia-1.pdf}
\caption{\label{fig:fig7}This figure illustrates the changes to the 2018
Youth Olympics Women's Final rankings when each climber is left out.
Each panel represents the rank of the drop-out athlete, with 0 being the
original final results. Each case with a change in rank orderings is
highlighted by a black panel border, and any player with a rank change
is represented by a red-filled bar.}
\end{figure}
\section{Conclusion and Discussion}
\label{sec:sec4}
In this paper, we examined the general features of the rank-product
scoring system of sport climbing, in particular, the advancement
probabilities and scores for the climbers. Most importantly, through
analyses of historical climbing data, we pointed out several problems of
the combined competition format and rank-product scoring system of sport
climbing. First, combining the three disciplines speed, bouldering, and
lead together into a ``triathlon'' format is putting speed
climbers in an unfavorable position. Second, the sport climbing rank-product
aggregation method violates the independence of irrelevant alternatives.
As such, there is a dependency on irrelevant parties in this scoring scheme, as the
orderings of medalists can be affected with a drop-out of a lower-ranked
climber.
While we did not attempt to propose a new rank aggregation method to replace
rank-product scoring in combined climbing contests, we instead have a
suggestion to modify the overall structure of the competition. In
particular, we suggest that speed climbing should have its own set of
medals in future tournaments, whereas bouldering and lead
climbing can be amalgamated into one event. In fact, it was confirmed that
in the next Summer Olympics held in Paris in 2024, there will be two
separate contests and two sets of climbing medals for each gender event:
combined lead and bouldering, and speed-only \citep{goh2020}. This is
consistent with what we have shown, as bouldering and lead climbing
performances are highly correlated with each other and with the overall
result, whereas speed climbing should be separated from the combined
competition format.
\vspace{-4mm}
\section*{Acknowledgements}
\label{acknowledgements}
The authors would like to acknowledge the organizers of the 2021 UConn
Sports Analytics Symposium and the 2021 Carnegie Mellon Sports Analytics
Conference (CMSAC 2021) for the opportunities to present this work at
its early stage and receive feedback. In addition, we would like to
express our special thanks to the anonymous reviewers of the
Reproducible Research Competition at CMSAC 2021 for the insightful
comments and suggestions that led to significant improvements in the
paper.
\section*{Supplementary Material}
\label{supplementary-material}
All data and code for reproducing the analyses presented in this
manuscript are publicly available on GitHub at
\url{https://github.com/qntkhvn/climbing}.
\bibliographystyle{jds}
\section{Introduction}
\label{chap:intro}
Predicting match results in sports games has always been a popular research topic in machine learning. Indeed, sports analytics has become an emerging field used by many professional sports to support decision making\footnote{http://www.forbes.com/sites/leighsteinberg/2015/08/18/changing-the-game-the-rise-of-sports-analytics/}. In recent years, electronic sports (a.k.a. eSports), a form of competition on multiplayer video games, have also been recognized as legitimate sports. For example, Dota 2, a multiplayer online battle arena game, is one of the most active games in the eSports industry, with over one million concurrent players on the Steam gaming platform. The prize pool of professional Dota 2 tournaments had already passed \$65 million by June 2016.\footnote{https://techvibes.com/2016/06/21/how-videogames-became-a-sport-and-why-theyre-here-to-stay-hint-money} Just as sports analytics has been used in decision making for professional sports, it is foreseeable that ``eSports analytics'' will also be useful for players in professional eSports competitions, live streaming media that cover these competitions, and eSports game developers.
In this paper, we try to predict the winning team of a match in the multiplayer eSports game Dota 2, in which a match consists of two teams named ``Radiant'' and ``Dire'', and each team consists of five players. Before a match begins, each player selects a unique ``hero'' character to be controlled in the match. As the match progresses, the hero can farm gold or gain experience to level up via combat against rival heroes. A team wins by destroying the opposing team's base.
Our approach to predicting the winning team in Dota 2 has two stages. In the first stage, we predict the result before a match begins. Because previous work \cite{Stanford2015, UCSD2015} used very limited aspects of features, we extract features from more aspects of the available game data. In addition, previous work only considers pre-match information, while the most useful information is typically generated during the game. This motivates our second stage, which further introduces real-time gameplay data to predict the result during a match. Therefore, the contributions of this paper are:
\begin{enumerate}
\item Consider more aspects of prior features (\textit{before a match begins}) from individual players' match history instead of only hero features to improve prediction accuracy.
\item Consider real-time gameplay features (\textit{during a match}) to predict the winning team as the match progresses (e.g. predicting at each 1-minute interval).
\end{enumerate}
In this paper, the information before a match begins is referred to as \textit{prior} information, and the information during a match is referred to as \textit{real-time} information. Figure \ref{fig:turnaround_match} illustrates an example Dota 2 match predicted by our system. The winning probabilities of both teams are estimated at each minute as the match progresses. At the 0th minute when only prior information is available, our system predicts that the Radiant team has an advantage. However, starting from the 5th minute when real-time information gains more influence, it predicts that the Dire team will reverse the match.
\begin{SCfigure}
\centering
\includegraphics[width=0.6\linewidth]{turnaround_match.pdf}
\caption{The winning probability of the Radiant team in an example Dota 2 match at each minute, as predicted by our system. At the beginning of the match, prior information (history data) predicts that the Radiant team has an advantage. However, starting from the 5th minute, real-time information (gameplay data) gains more influence and predicts that the Dire team will reverse the match. In this example, our system is able to achieve nearly 100\% confidence as early as the 7th minute, long before the match ends.}
\label{fig:turnaround_match}
\end{SCfigure}
\section{Related Work}
\label{chap:related_work}
There have been numerous efforts on predicting the winning team of a match in different kinds of sports. \cite{KDD2016} proposed a novel probabilistic framework using context information to predict tennis and StarCraft II match results. \cite{BayesianNet} used a Bayesian network with subjective variables on football games. \cite{ELO} used the ELO rating system to derive covariates for prediction models. \cite{CF} used collaborative filtering to predict cases without enough history data.
As for eSports matches, \cite{IEEE-GEM2014} used zone changes, distribution of team members, and time series clustering to investigate spatio-temporal behavior. \cite{UTAH2013} focused on classifying hero positional role and hero identifier based on play style and performance. \cite{FDG2014} worked on discovering patterns in combat tactics of Dota 2. \cite{DERBY2014} trained a neural network to find optimal jungling routes in Dota 2.
Some previous work has also tried to predict eSports match results. \cite{ARXIV2015} clustered player behavior and learned optimal team compositions, then used team-composition-based features to predict the outcome of League of Legends games. \cite{AASRI2014} predicted the outcome of Dota 2 matches with topological measures such as the area of the polygon described by the players' positions, inertia, diameter, and distance to the base. \cite{Stanford2013} used logistic regression and k-nearest neighbors to recommend hero selections that would maximize a team's winning probability against the opposing team. \cite{Stanford2015, UCSD2015} used logistic regression or random forests to predict the winning team of Dota 2 matches. However, the features used in previous work are only hero selection and/or hero winning rates, which cover very limited aspects of the available game data. In addition, information from real-time gameplay is entirely ignored. Therefore, an important part of this paper is to expand the feature set to address these weaknesses.
\section{Dataset}
\label{chap:dataset}
For prior information, we use the Dota 2 API\footnote{dota2api: wrapper and parser, https://github.com/joshuaduffy/dota2api} to crawl 78,362 matches played by 19,790 players with 'very high' skill level\footnote{Matches at very high skill level involve fewer random factors and more closely resemble professional matches.}. The mean and standard deviation of match duration are 37.75 minutes and 10.42 minutes respectively, which is consistent with typical Dota 2 matches. It is worth noting that the match data in previous work \cite{Stanford2015, Stanford2013, UCSD2015} had much shorter durations (mean between 23 and 30 minutes), which would lead to more biased prediction results.
Each match record contains the winning team, the players' account IDs and the corresponding hero IDs. In addition, we use other third-party APIs to collect statistics of heroes and players. Heroes' data can be obtained from the HeroStats API\footnote{HeroStats API, http://herostats.io/}. It contains numerous statistics associated with each hero's abilities, such as strength, agility, and intelligence.
A player's ability (i.e., match history) plays a major role in the prediction outcome. The OpenDota API\footnote{OpenDota API, http://docs.opendota.com/} provides a player's statistics, which can be treated as prior knowledge of the player's skill level before a match begins. For example, Dota 2 uses the Matchmaking Rating (MMR)\footnote{How Dota 2 MMR Works – A Detail Guide, http://www.dotainternational.com/how-dota-2-mmr-works} as the official ranking score, which estimates the skill level of a player. We can also access a player's match history; from past match records, we can estimate the skill level of a player when they choose a particular hero.
For real-time information, we use the OpenDota API to collect match replay data such as gold, experience and deaths for all players at each minute. Since not all matches have replays available online, only 20,631 out of the 78,362 matches in our dataset contain gameplay information.
\section{Features}
\label{chap:features}
\subsection{Prior Features}
\label{sec:prior_features}
Feature engineering plays a major role in this paper, and domain knowledge is leveraged to extract useful features from the data. Using the collected data, we categorize the extracted prior features into three types: hero, player, and hero-player combined features. For each match, we obtain features from the 10 players in the two opposing teams (Radiant and Dire) and combine them into a single feature vector with the Radiant team's feature values in the front half.
Sometimes a player's statistics cannot be accessed because their account profile is set to private. In this case, we replace missing feature values with the corresponding mean values. Furthermore, we filter matches to guarantee that there are at most two such missing players in each match.
\textbf{Hero Feature.} The hero feature $\textbf{x}_{h}$ contains three parts: hero selection, hero attributes, and hero winning rate. Hero selection is a binary one-hot encoding indicating which heroes are selected by each team in a match. At the time of writing, there are 113 heroes in Dota 2, so we define the hero selection part as a 113 heroes $\times$ 2 teams = 226-dimensional vector. Hero selection is the only feature implemented in two previous works \cite{Stanford2015, Stanford2013}.
Hero attributes are 26 manually chosen statistics associated with each hero's abilities, such as strength, agility, and intelligence. We use a 260-dimensional vector to represent the 26 attributes of the 10 selected heroes.
Moreover, we calculate the Radiant winning rate of each hero against each opposing hero over all historical matches, so for each match there are 5 Radiant heroes $\times$ 5 Dire heroes = 25 rival hero combinations.
Finally, we concatenate the three parts into a 226 + 260 + 25 = 511-dimensional hero feature vector $\textbf{x}_{h}$. Note that all parts of the hero feature are based on global statistics from all players, independent of the current players' match history.
\textbf{Player Feature.} The player feature $\textbf{x}_{p}$ contains skill and ranking information about the 10 players. We define $\textbf{x}_{p}$ as a 20-dimensional vector representing the current 10 players by their MMR scores and MMR percentiles\footnote{For a visualization of the MMR distribution and percentiles among more than one million players, see https://www.opendota.com/distributions}.
\textbf{Hero-player Combined Feature.} The hero-player combined feature $\textbf{x}_{hp}$ contains 8 statistics about each of the 10 players when they play their currently selected heroes. The statistics include the winning rate when using this hero, mean experience, mean gold gained per minute, and mean number of deaths per minute. We define the hero-player combined feature $\textbf{x}_{hp}$ as an 80-dimensional vector representing the 10 players' 8 statistics when choosing their respective heroes. Figure \ref{fig:winrate} shows the winning rates of two example players when choosing different heroes. We can see that player A and player B are good at different heroes, with a large variance in winning rates. Such a property cannot be captured by the hero feature alone, since it is averaged over players of all levels. Similarly, the player feature alone is not enough, since it does not distinguish a player's performance across different heroes.
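To make the construction of the prior feature vector concrete, the following minimal sketch (in Python/NumPy) assembles the hero, player, and hero-player features for a single match. The helper containers (\texttt{hero\_attrs}, \texttt{hero\_vs\_hero\_wr}, \texttt{player\_mmr}, \texttt{player\_hero\_stats}) are hypothetical lookup tables standing in for the crawled statistics described above, and hero IDs are assumed to be mapped to the range 0--112.
\begin{verbatim}
import numpy as np

N_HEROES = 113  # heroes at the time of writing

def prior_features(radiant, dire, hero_attrs, hero_vs_hero_wr,
                   player_mmr, player_hero_stats):
    """Assemble the 511 + 20 + 80 = 611-dimensional prior feature vector.

    radiant, dire: lists of five (player_id, hero_id) tuples.
    hero_attrs: hero_id -> length-26 attribute vector.
    hero_vs_hero_wr: (radiant_hero, dire_hero) -> Radiant winning rate.
    player_mmr: player_id -> (MMR score, MMR percentile).
    player_hero_stats: (player_id, hero_id) -> length-8 statistics.
    """
    # Hero selection: one-hot encoding, Radiant block then Dire block.
    selection = np.zeros(2 * N_HEROES)
    for _, h in radiant:
        selection[h] = 1.0
    for _, h in dire:
        selection[N_HEROES + h] = 1.0

    # Hero attributes: 10 heroes x 26 attributes = 260 dimensions.
    attrs = np.concatenate([hero_attrs[h] for _, h in radiant + dire])

    # Rival-hero winning rates: 5 x 5 = 25 combinations.
    rivals = np.array([hero_vs_hero_wr[(hr, hd)]
                       for _, hr in radiant for _, hd in dire])

    x_h = np.concatenate([selection, attrs, rivals])                  # 511 dims
    x_p = np.concatenate([player_mmr[p] for p, _ in radiant + dire])  # 20 dims
    x_hp = np.concatenate([player_hero_stats[(p, h)]
                           for p, h in radiant + dire])               # 80 dims
    return np.concatenate([x_h, x_p, x_hp])                           # 611 dims
\end{verbatim}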
\begin{figure}[htbp]
\centering
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\textwidth]{winrate.pdf}
\caption{Winning rates of two example players when choosing different heroes.}
\label{fig:winrate}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\textwidth]{realtime-data}
\caption{Gold and experience difference between the two teams (Radiant minus Dire) at each minute of one particular match.}
\label{fig:time_series}
\end{minipage}
\end{figure}
\subsection{Real-time Features}
\label{sec:realtime_features}
Each team's real-time information is represented by averaging the five players' gold, experience and deaths at each minute. We then take the difference between the two teams (Radiant minus Dire) as our feature. Figure \ref{fig:time_series} illustrates the gold and experience features of one particular match. If a match lasts $T$ minutes, the real-time feature for this match has $3 \times T$ dimensions.
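As a minimal sketch (Python/NumPy; the array layout is an assumption of this illustration), the per-minute real-time features can be obtained as the difference of the two teams' means:
\begin{verbatim}
import numpy as np

def realtime_features(radiant_stats, dire_stats):
    """Team difference of gold, experience and deaths at each minute.

    radiant_stats, dire_stats: arrays of shape (5 players, T minutes, 3),
    where the last axis indexes (gold, experience, deaths).
    Returns an array of shape (T, 3): Radiant mean minus Dire mean.
    """
    return radiant_stats.mean(axis=0) - dire_stats.mean(axis=0)
\end{verbatim}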
\section{Models}
The goal of this paper is to predict the winning team before a match begins (using prior information) and during a match (using real-time information). We refer to these two parts as prior modeling and real-time modeling. In this section, we first introduce how we separately model the prior part and the real-time part. Then we propose two methods to combine prior modeling and real-time modeling.
\subsection{Prior Modeling}
\label{sec:prior_modeling}
For predicting match results before a match begins, we explore the effectiveness of two classifiers: logistic regression (LR) and a neural network. The inputs to the classifiers are the three categories of prior features explained in Section \ref{sec:prior_features}: the hero feature, the player feature, and the hero-player combined feature.
\subsection{Real-time Modeling}
\label{sec:realtime_modeling}
As the game progresses, we are able to observe more data related to the performance of each team. In this section, we propose two methods to model the three time series (Section \ref{sec:realtime_features}) obtained during the match.
One intuitive method is to slice the time-series data with a sliding window and train an LR. For example, we can slice the time-series data into 5-minute windows. The LR is trained on the 5-minute windows drawn from all matches (sliding over the time domain). When making a prediction at time $t$, we feed the time-series data at times $t-5, t-4, \ldots, t-1$ as features into the LR.
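A minimal sketch of this sliding-window scheme is shown below (Python with scikit-learn rather than the Keras-based LR used in our experiments; the data layout is an assumption of the illustration):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_window_dataset(matches, window=5):
    """Slice every match's (T, 3) real-time feature matrix into
    5-minute windows; each window becomes one training example."""
    X, y = [], []
    for feats, radiant_won in matches:              # feats: shape (T, 3)
        for t in range(window, feats.shape[0]):
            X.append(feats[t - window:t].ravel())   # 15-dimensional window
            y.append(int(radiant_won))
    return np.array(X), np.array(y)

# Hypothetical usage: train_matches is a list of (features, label) pairs.
# X, y = build_window_dataset(train_matches)
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# A prediction at minute t of a new match uses feats[t - 5:t].ravel().
\end{verbatim}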
As the second method, we build a generative model called Attribute Sequence Model (ASM). This model aims to capture the trend of time-series data by explicitly modeling its transition probability in a discrete state space. See Figure \ref{fig:model} for its plate notation.
Here $Y_r$ denotes which team wins the game: $Y_r = 1$ if Radiant wins and $Y_r = 0$ otherwise. $X, U, V$ are the three time series, with $X_t, U_t, V_t$ representing respectively the deaths, gold and experience at time $t$. Note that we construct the values of $X_t, U_t, V_t$ as the difference between the two teams. For example, if at time $t$ the Radiant team has mean experience 2000 and the Dire team has mean experience 1000, then $V_t = 2000 - 1000$. Further, we discretize $X, U, V$ into bins and assign a discrete bin number to $X_t, U_t, V_t$ rather than the real value. For example, if an experience difference of 1000 falls into the $8^{\text{th}}$ bin, we set $V_t = 8$. In practice, we divide deaths, gold and experience into 24 bins, so $X_t, U_t, V_t$ take integer values 0--23.
The generative process of the ASM is as follows: first, $Y_r$ is sampled to determine which team is going to win; then at each time $t$, $X_t, U_t, V_t$ are sampled independently based on their previous values $X_{t-1}, U_{t-1}, V_{t-1}$ and $Y_r$. Training the ASM amounts to learning the transition probabilities $P(X_t | X_{t-1}, Y_r)$, $P(U_t | U_{t-1}, Y_r)$ and $P(V_t | V_{t-1}, Y_r)$, which are estimated by maximum likelihood.
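The transition probabilities can be estimated by simple normalized counts. The sketch below (Python/NumPy) illustrates this for a single discretized series; the add-one smoothing is an implementation detail assumed here to avoid zero probabilities and is not specified in the text above.
\begin{verbatim}
import numpy as np

N_BINS = 24

def learn_transitions(matches):
    """Estimate P(S_t | S_{t-1}, Y_r) for one discretized series S
    (e.g. the gold difference) by normalized transition counts.

    matches: iterable of (bin sequence, radiant_won) pairs.
    Returns an array indexed as [y, previous bin, current bin].
    """
    counts = np.ones((2, N_BINS, N_BINS))       # add-one smoothing (assumed)
    for series, radiant_won in matches:
        y = int(radiant_won)
        for prev, cur in zip(series[:-1], series[1:]):
            counts[y, prev, cur] += 1
    return counts / counts.sum(axis=2, keepdims=True)
\end{verbatim}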
\begin{figure}[htbp]
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{model.pdf}
\vspace{20mm}
\caption{Plate notation of the Attribute Sequence Model (ASM).}
\vspace{-13mm}
\label{fig:model}
\end{minipage}
\hfill
\begin{minipage}{0.35\linewidth}
\begin{minipage}{\linewidth}
\includegraphics[width=\textwidth]{transition2.pdf}
\label{fig:transition2}
\end{minipage}
\vfill
\begin{minipage}{\linewidth}
\includegraphics[width=\textwidth]{transition5.pdf}
\label{fig:transition5}
\end{minipage}
\caption{Learned transition probability for the gold time-series data.}
\label{fig:transition}
\end{minipage}
\end{figure}
Figure \ref{fig:transition} visualizes the learned transition probabilities for the gold time series. The x-axis and y-axis are bin numbers; the larger the bin number, the larger the value it represents, and lighter colors represent larger probabilities. The transition probabilities can be interpreted as the trend of the time series under different conditions. For instance, when $Y_r = 1$, meaning Radiant is going to win the game, $U$ (Radiant's gold minus Dire's gold) is more likely to transition from a smaller value to a larger value. When $Y_r = 0$, the opposite holds.
The prediction process of the ASM model is as follows: to make a prediction at time $t$, we use the previous 5 minutes' data $D = \{X_{t-5}, \ldots, X_{t-1}$, $U_{t-5}, \ldots, U_{t-1}$, $V_{t-5}, \ldots, V_{t-1}\}$ and calculate the posterior probability $P(Y_r = y \mid D)$, on which our prediction is based.
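Under the independence and Markov assumptions of the ASM, this posterior factorizes over the three series and their observed transitions. A minimal sketch (Python/NumPy, reusing the \texttt{learn\_transitions} output from the sketch above; a uniform prior over $Y_r$ is assumed for simplicity) is:
\begin{verbatim}
import numpy as np

def asm_posterior(series_list, transition_list, prior_radiant=0.5):
    """Posterior P(Y_r = 1 | D) from the previous 5 minutes of data.

    series_list: one bin sequence per time series (deaths, gold,
    experience), each covering the last 5 minutes.
    transition_list: the matching arrays returned by learn_transitions,
    indexed as [y, previous bin, current bin].
    """
    log_post = np.log(np.array([1.0 - prior_radiant, prior_radiant]))
    for series, trans in zip(series_list, transition_list):
        for prev, cur in zip(series[:-1], series[1:]):
            # Independence across the three series under the model.
            log_post += np.log(trans[:, prev, cur])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return post[1]          # estimated probability that Radiant wins
\end{verbatim}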
\subsection{Prior and Real-time Combined Modeling}
\label{sec:prior_realtime_combined_modeling}
To build an accurate prediction model, we need to take into account both prior and real-time information. In this section we propose two methods to combine prior modeling and real-time modeling.
In method 1, we concatenate the 5-minute-window time-series features with the prior features and train an LR. In method 2, we train a new LR on top of the output of the prior LR and the posterior output of the real-time ASM model.
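A brief sketch of method 2 follows (a stacking step in Python/scikit-learn; using the two models' predicted probabilities as inputs is an assumption of the illustration):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_combiner(prior_probs, asm_probs, labels):
    """Train the second-stage LR on the prior model's predicted
    probability and the ASM posterior for each training match."""
    X = np.column_stack([prior_probs, asm_probs])
    return LogisticRegression().fit(X, np.asarray(labels))
\end{verbatim}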
\section{Experiments}
\label{chap:experiments}
We split the matches into training and test sets with a ratio of 9:1. All feature values are normalized to zero mean and unit variance. When computing statistics from match history, we exclude matches in the test set. Hyper-parameters for the LR and the neural network are selected via 10-fold cross-validation on the training set. Table \ref{table:cross-validation} shows the cross-validation results for selecting the hidden size and activation of the two-layer neural network. In the end, we use an LR with an L2 regularization coefficient of $10^{-6}$ and a two-layer neural network with hidden size 64 and sigmoid activation. Both classifiers were implemented with the Keras\footnote{Keras: Deep Learning library for TensorFlow and Theano, https://github.com/fchollet/keras} library.
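For illustration, a minimal sketch of the two-layer network is given below using the \texttt{tf.keras} API (the original implementation used the standalone Keras library; the optimizer and training settings shown are assumptions, not necessarily the values used in our experiments).
\begin{verbatim}
import tensorflow as tf

def build_classifier(input_dim, hidden=64):
    """Two-layer network: one hidden layer with sigmoid activation,
    followed by a sigmoid output for binary win/loss prediction."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(hidden, activation="sigmoid"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_classifier(input_dim=611)
# model.fit(X_train, y_train, epochs=20, batch_size=128,
#           validation_split=0.1)
\end{verbatim}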
\begin{table}[htbp]
\centering
\caption{Cross-validation accuracy on two-layer neural network for hidden size and activation}
\begin{tabular}{|c|c|c|c|}
\hline & \multicolumn{3}{c|}{Hidden size} \\
\hline Activation & 32 & 64 & 128 \\
\hline Sigmoid & 70.48\% & \textbf{70.79\%} & 70.68\% \\
\hline Relu & 69.06\% & 69.08\% & 68.65\% \\
\hline Tanh & 69.39\% & 70.04\% & 69.39\% \\
\hline
\end{tabular}
\label{table:cross-validation}
\end{table}
\subsection{Prediction Using Only Prior Information}
\label{sec:exp_prior}
We compare the prediction accuracy of different prior feature combinations across several methods, including two previous works, our logistic regression, and our neural network. \cite{Stanford2013} trained logistic regression based on hero selection only. \cite{UCSD2015} further added hero-against-hero winning rates and trained a random forest. We train models with their feature sets using LR on our dataset.
As can be seen from Table \ref{table:features}, using all features (Hero + Player + Hero-player) achieves the highest prediction accuracy of $71.49\%$, which outperforms the two previous works by more than 10 percentage points because they used only subsets of our prior features. Although both previous works claimed to have achieved over 70\% prediction accuracy, their data had a much shorter average match duration (between 23 and 30 minutes) that does not reflect the real distribution of Dota 2 matches. Section \ref{sec:discuss_duration} further discusses the effect of match duration on prior prediction.
\begin{comment}
Player feature alone is the least useful feature in this dataset. However, based on how we collected matches described in Section \ref{chap:features}, the majority of the players in our dataset have very high MMR scores (median 4885) and MMR percentiles (median 0.97). These biased players have a major negative impact on player feature because their MMRs are much less discriminating.
\end{comment}
Furthermore, the hero-player combined feature is the single most informative feature, achieving $69.90\%$ prediction accuracy by itself. For example, it captures an individual player's performance across different heroes (Figure \ref{fig:winrate}), which proves to be a key factor in winning a match.
\begin{table}[htbp]
\centering
\caption{Prediction accuracy with different prior feature combinations by various methods.}
\begin{tabular}{|c|c|c|c|c|}
\hline & \textbf{Hero} & \textbf{Player} & \textbf{Hero-Player} & \textbf{Hero + Player + Hero-Player} \\
\hline Conley et al. \cite{Stanford2013} & 58.79\% & N/A & N/A & N/A \\
\hline Kinkade et al. \cite{UCSD2015} & 58.69\% & N/A & N/A & N/A \\
\hline Logistic Regression & 60.07\% & 55.77\% & 69.90\% & 71.49\% \\
\hline Neural Network & 59.53\% & 56.39\% & 69.71\% & 70.46\% \\
\hline
\end{tabular}
\label{table:features}
\end{table}
\begin{comment}
\subsubsection{Effect of Training Data Size}
\label{sec:exp_data_size}
According to Figure \ref{fig:trainsize.png}, as the number of training matches increases, training accuracy and test accuracy converge to around $71.49\%$. This means the model's generalization ability improves when fed with more training data. In this dataset, using $2^{14} \approx 16,000$ historical matches is enough to achieve $70\%$ test accuracy.
\begin{figure}[htbp]
\centering
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\textwidth]{trainsize.png}
\caption{Accuracy of Logistic Regression model trained with different data sizes.}
\label{fig:trainsize.png}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\textwidth]{trainsize.png}
\caption{Match duration figure is now moved to another section.}
\label{fig:xxx}
\end{minipage}
\end{figure}
\end{comment}
\subsection{Prediction Using Both Prior and Real-time Information}
In Figure \ref{fig:acc}, we compare all the proposed methods in terms of their accuracy when making predictions every 5 minutes. The green horizontal line is an LR trained using only prior features, which serves as the baseline for our real-time modeling. The red dashed line is an LR trained using only real-time features, and the red solid line is an LR trained using both prior and real-time features (method 1 of Section \ref{sec:prior_realtime_combined_modeling}). The blue dashed line is the real-time prediction made by the ASM model, and the blue solid line is the combination of the ASM model and LR (method 2 of Section \ref{sec:prior_realtime_combined_modeling}). Three observations can be drawn from these experiments.
\label{sec:exp_realtime}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{acc.pdf}
\caption{Prediction accuracy plot using different models and features. The horizontal axis is the specific minute when prediction is made. Real-time predictions use previous 5 minutes of data.}
\label{fig:acc}
\end{figure}
\begin{enumerate}
\item Sufficient real-time information greatly improves prediction accuracy over the baseline (green line). In our experiments, all four real-time models (blue and red lines) achieve better accuracy when fed with more than 15 minutes of real-time information. When predicting at the 40th minute, accuracy reaches as high as $93.73\%$, in stark contrast to the 69\% accuracy obtained when using only prior features to predict at the 40th minute (Figure \ref{fig:duration}).
\item Real-time features become increasingly more informative than prior features when making predictions at later stages of a match. Notice that the gap between the blue solid line and the blue dashed line diminishes, and the same happens for the red solid and red dashed lines. This suggests that at later stages of a match the teams' real-time performance determines the winning side, while prior features lose predictive power as the match lasts longer.
\item The ASM approach (blue solid) performs better than LR (red solid) when using fewer than 20 minutes of real-time features. This is probably because at the early stage of a match each team's performance is similar, so it is more important to model the trend of performance rather than its current value. The ASM model explicitly models the transition probabilities of the three time series, which encode the trend of each team's performance, and therefore it outperforms LR in this period.
\end{enumerate}
\section{Discussion and Analysis}
\label{chap:discussion}
\subsection{Effect of Match Duration on Prior Prediction}
\label{sec:discuss_duration}
We conduct an error analysis to examine the possible causes of wrong predictions when using only prior information (Section \ref{sec:exp_prior}). Figure \ref{fig:duration} shows the effect of match duration on prediction accuracy. In general, as matches last longer, accuracy drops and the matches become more unpredictable. For matches longer than 55 minutes, less than $65\%$ accuracy is achieved. This phenomenon is likely due to the fact that as the match progresses, prior information (such as hero selection) contributes less to the final match result. Instead, it is the real-time gameplay information (such as how quickly each hero gains experience) that gives more clues about the final match result. In fact, this hypothesis was also a motivation for introducing real-time gameplay features into our feature set.
\begin{figure}[htbp]
\centering
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\textwidth]{duration.png}
\caption{Prediction accuracy using only prior information for different match duration.}
\label{fig:duration}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\textwidth]{MLR.pdf}
\caption{Prediction accuracy plot for LR and Time-specific LR.}
\label{fig:multipleLR}
\end{minipage}
\end{figure}
\subsection{Time-specific Logistic Regression}
\label{sec:time_specific_lr}
In the previous section, we trained one LR with 5-minute-window time-series features. Within a match, the model uses multiple 5-minute windows starting at different minutes, and merging different windows together loses time-specific information. For example, gaining 1000 gold from minute 35 to minute 40 has a different effect than gaining it from minute 5 to minute 10. Therefore, one way to improve the previous LR is to train multiple LRs, each trained on time-series features from the same 5-minute window. Figure \ref{fig:multipleLR} shows that this time-specific model (blue solid line) outperforms the previous LR (red solid line) most of the time.
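A minimal sketch of the time-specific scheme (Python/scikit-learn; the window size, maximum minute and data layout are assumptions of the illustration) is:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_time_specific_lr(matches, window=5, max_minute=60):
    """Train one LR per prediction minute t, using only windows that
    cover minutes t-5 to t-1, so each model is time-specific."""
    models = {}
    for t in range(window, max_minute + 1):
        X, y = [], []
        for feats, radiant_won in matches:        # feats: shape (T, 3)
            if feats.shape[0] >= t:
                X.append(feats[t - window:t].ravel())
                y.append(int(radiant_won))
        if len(set(y)) == 2:                      # need both classes
            models[t] = LogisticRegression(max_iter=1000).fit(
                np.array(X), np.array(y))
    return models
\end{verbatim}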
\subsection{Assumptions of ASM Model}
\label{sec:assumption_asm}
Here we discuss certain assumptions made in building the ASM model that may not hold. The ASM model assumes independence of the three time series; however, we observe strong correlations between them. For example, gold and deaths are highly negatively correlated because, in Dota 2, deaths lead to a loss of gold. We also assume bigram-like (first-order) transitions. These two assumptions are made primarily to reduce the number of model parameters and make the model more generalizable. With more training data, these two assumptions could be relaxed.
Another important assumption is that, for every match, the winning side $Y_r$ remains unchanged throughout the match. In reality, however, it could be that the Radiant team is on course to win during the first half of the game but ultimately loses because of an unforeseeable mistake during the second half. The problem is that in the training data we can only observe the final winning side, not the intermediate winning tendency. One possible workaround is to approximate the intermediate winning tendency with features, but we do not explore this in this paper.
\section{Conclusion and Future Work}
\label{chap:conclusion}
In this paper, we predict the winning team of a match in the multiplayer eSports game Dota 2. To address the weaknesses of previous work, we construct a feature set that covers more aspects, including hero, player, and hero-player combined features as prior features (before a match begins), and the team differences in gold, experience, and deaths at each minute as real-time features (during a match). We explore the effectiveness of LR, the proposed Attribute Sequence Model, and their combinations on this prediction problem. Experimental results show that the predictive power of prior features drops as the match lasts longer, an issue that is successfully addressed by modeling the real-time time-series data. We also show that training time-specific LRs further improves prediction accuracy by taking time heterogeneity into account.
Future work includes modeling the intermediate winning tendency (Section \ref{sec:assumption_asm}), which is a more accurate indicator of game status than the final result. In addition, more replay data could be used in the real-time modeling, such as hero locations, equipment and major battle events. Lastly, it would be interesting to develop a system that takes a public match ID and automatically visualizes the predicted winning probability in real time.
\section{Introduction}
Each day, competitive contests between similar entities occur in the hope of one proving dominant over the other. These contests have been shown to be wide-ranging, with examples including animals combating in order to prove their strength~\cite{Chase2002, Ellis2017}, online content producers aiming to create a popular post~\cite{Weng2012, Gleeson2014, Lorenz-Spreen2019}, and the quantification of the scientific quality underlying a researcher's output~\cite{Lehmann2006, Radicchi2008, Radicchi2009, Sinatra2016}. In most of these scenarios it proves difficult to ultimately determine the stronger of two such competitors for a number of reasons, most evidently the lack of explicit quantitative data describing the outcome of each contest. One noticeable exception to this predicament is competitive sport where, on the contrary, there exists an abundance of data, collected over extended periods of time, describing the results of contests. This source of empirical data has resulted in an entire domain of study applying the theoretical concepts of complex systems to the field of sport~\cite{Passos2011, Davids2013, Wasche2017}.
The application of these tools has resulted in a greater understanding of the dynamics underlying a number of sporting contests including soccer~\cite{Grund2012, Buldu2019}, baseball~\cite{Petersen2008, Saavedra2010}, basketball~\cite{Gabel2012, Clauset2015, Ribeiro2016}, and more recently even virtual sporting contests based upon actual sports~\cite{Getty2018, OBrien2020}. An area that has received much focus and which is most relevant to the present work is the application of network science~\cite{Newman2010} in identifying important sporting competitors in both team and individual sports. This has led to analysis in a range of sports including team-based games such as soccer where the identification of important players within a team's structure has been considered~\cite{Onody2004, Duch2010} and cricket, where rankings of both teams and the most influential player in specialty roles including captains, bowlers, and batsmen have been considered~\cite{Mukherjee2012, Mukherjee2014}. Analysis has also been conducted into individual-based sports, again with the aim of providing a ranking of players within a given sport. For example, the competitors within both professional tennis~\cite{Radicchi2011} and boxers, at an individual weight level~\cite{TennantBoxing17} and a pound-for-pound level~\cite{Tennant20}, have been extensively studied. Lastly, rankings at a country level based upon their success across the spectrum of Olympic Games sports have also been considered~\cite{Calzada-Infante2016}.
In this article we focus on the application of network science to the sport of Snooker --- a cue-based game with its origins in the late $19^{\text{th}}$ century in the military bases of British officers stationed in India. The game is played on a cloth-covered rectangular table which has six pockets, one at each of the four corners and two along the middle of the longer sides. The players strike the cue ball (which is white) with their cue such that it strikes another of the 21 colored balls, which is then ideally pocketed, i.e., it falls into one of the pockets. The order in which the different colored balls must be pocketed is pre-determined, and a player is awarded different points depending on the color of ball pocketed. For each set of balls (known as a \textit{frame}) one player has the first shot and continues to play until they fail to pocket a ball, at which point their competitor has their own attempt. The number of points scored by a player in a single visit to the table is known as a \textit{break}. The player who has the most points after all balls have been pocketed (or after the other player concedes) wins the frame. A snooker match itself generally consists of an odd number of frames such that the players compete until it is impossible for the other to win, i.e., the winner reaches a majority of frames.
The popularization of Snooker came in conjunction with the advent of color television, for which the sport demonstrated the potential applicability of this new technology for entertainment purposes~\cite{Bury1986}. From the 1970s onwards, Snooker's popularity grew among residents of the United Kingdom and Ireland, culminating in the 1985 World Championship, the final of which obtained a viewership of 18.5 million, a record at the time for any broadcast shown after midnight in the United Kingdom. The sport continued to increase in popularity over the 1990s, with a significant increase in the number of professional players. It was, however, dealt a blow in 2005 with the banning of sponsorship from the tobacco companies who were major benefactors of the sport. There has been a revitalization of the sport over the past decade, however, with a reorganization of the governing body \textit{World Snooker}~\cite{worldsnooker} and a resulting increase in both the number of competitive tournaments and the corresponding prize money on offer for the players. One notable consequence of this change is the way in which the official rankings of the players are determined. Specifically, there has been a change from the points-based system, which was the method of choice from 1968 to 2013, towards a system based upon a player's total prize winnings in monetary terms. The question arises as to whether this approach accurately captures the actual ranking of a player's performances over the season, in terms of who is capable of beating whom, or whether a more accurate approach exists.
Motivated by this question, in this paper we consider a dataset of competitive Snooker matches played over a period of more than fifty years (1968--2020), with the aim of first constructing a networked representation of the contests between each pair of players. This is obtained by representing all the matches between two players as a weighted connection, and we show the resulting network to have features similar to those of apparently unrelated complex systems~\cite{Newman2010}. Using this conceptual network we proceed to make use of a ranking algorithm similar in spirit to the PageRank algorithm~\cite{Gleich2015}, from which we can quantify the quality of players over multiple temporal periods within our dataset. Importantly, this algorithm is based purely on the network topology itself and does not incorporate any exogenous factors such as points or prize winnings. The benefit offered by this approach is that, through the aforementioned dominance network, the quality of competition faced in each game is incorporated when determining a player's rank, rather than simply the final result itself. As such, the algorithm places higher importance on victories against other players who are perceived as successful; notably, similar approaches have proven effective when applied to other sports, resulting in numerous new insights into the competitive structure of said contests~\cite{Radicchi2011, Mukherjee2012, Mukherjee2014, TennantBoxing17, Tennant20}. Through this ranking system we proceed to highlight a number of interesting properties underlying the sport, including an increased level of competition among players over the previous thirty years. We also demonstrate that, while prior to its revitalization Snooker was failing to capture the dominance ranking of players through its points-based ranking scheme, the subsequent change of the ranking system to a prize-money basis is also inaccurate.
We investigate the quality of different ranking schemes in comparison to the PageRank approach using similarity metrics, and we also introduce to the sport of Snooker a graphical tool known as a rank-clock~\cite{Batty2006}, which allows one to interpret how a player's rank has changed over the course of their career. Finally, we conclude with a discussion of the work and how it offers the potential for a new form of ranking scheme within the sport of Snooker.
\section{Methods}
The data used in the following analysis are obtained from the \textit{cuetracker} website \cite{cuetracker}, an online database containing information on professional snooker tournaments from 1908 onwards, amounting to records for 18,324 players across a range of skill levels. In this article we focus only on those matches of a competitive professional nature which took place between the years 1968 and 2020, a period of over fifty years. More specifically, in terms of the quality of match considered, we focus on games falling under the categories of League, Invitational, and Ranking events, which are those that the majority of professional players compete in. With these considerations, the dataset used amounts to 657 tournaments featuring 1,221 unique players competing in 47,710 matches. Importantly, each season is split over two calendar years, such that a season which begins in one calendar year concludes during the following one. As such, we reference seasons by the year in which they began, i.e., the 2018-19 season is referred to as the 2018 season in the analysis below. Furthermore, for validation and comparative purposes in the forthcoming analysis, the official rankings of snooker players from World Snooker (the governing body of the sport) were also obtained~\cite{worldsnooker} for the period 1975-2020. The top two panels of Fig.~\ref{fig:summaries} demonstrate the temporal behavior of both the number of players and the number of tournaments in each season of our dataset. We see the increase in popularity of Snooker, and the corresponding financial sustainability for more players, arising during the period 1980-2000, prior to the subsequent decrease in professionals and tournaments in the decade to follow. In the concluding ten years of the dataset, however, we observe an increase in both the number of tournaments and the players who compete in them, a possible consequence of the professional game's restructuring.
\subsection{Network Generation}\label{subsec:network}
In order to create a networked representation of the competition between pairs of Snooker players we consider each match in our dataset as an edge of weight one and construct a dominance relationship between the two players appearing in said match. Thus each time that player~$i$ defeats player~$j$, an edge from~$j$ to~$i$ is drawn. The construction described above results in a directed, weighted network with entries~$w_{ji}$ indicating the number of times that player~$j$ has lost to player~$i$. This definition proves insightful for analysis as statistics governing the players may immediately be obtained. For example, the out-strength of node~$j$, $k_j^{\text{out}} = \sum_i w_{ji}$ describes the number of times that player~$j$ has lost and similarly their in-strength~$k_j^{\text{in}} = \sum_i w_{ij}$ gives their total number of wins, while the total number of matches in which they partook is simply the sum of these two metrics. The probability distributions of these quantities (alongside the corresponding complementary cumulative distribution function) are shown in the bottom panels of Fig.~\ref{fig:summaries} where clear heavy-tailed distributions are observed, which is a common feature among networked descriptions of empirical social systems~\cite{Newman2010}. This suggests that the majority of snooker players take part in very few games and a select minority dominate the sport.
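As a minimal sketch (in Python, assuming the match records are available as a list of (winner, loser) pairs), the weighted dominance network and the associated strengths can be constructed as follows:
\begin{verbatim}
from collections import defaultdict

def build_network(matches):
    """Weighted, directed dominance network.

    matches: iterable of (winner, loser) pairs.
    Returns w with w[j][i] = number of times player j lost to player i,
    i.e. the edge weight w_{ji} from j to i.
    """
    w = defaultdict(lambda: defaultdict(int))
    for winner, loser in matches:
        w[loser][winner] += 1
    return w

def strengths(w):
    """Out-strength (total losses) and in-strength (total wins)."""
    k_out, k_in = defaultdict(int), defaultdict(int)
    for j, row in w.items():
        for i, weight in row.items():
            k_out[j] += weight
            k_in[i] += weight
    return k_out, k_in
\end{verbatim}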
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{data_summaries.pdf}
\caption{\textbf{Summary of the Snooker result dataset.} \textbf{(a)}~The total number of tournaments taking place in each season, the increasing popularity of the sport over the period 1980-2000 can be seen with increased number of tournaments and similarly its subsequent decay until the second half of the last decade. \textbf{(b)} Similar to panel (a) but now showing the number of professional players who competed by year in our dataset. \textbf{(c)}~The probability distribution function~(PDF) of player results describing the fraction of players who have played~(blue), won~(red), and lost~(green) a certain number of games. Each of these quantities appear to follow a heavy-tailed distribution. \textbf{(d)}~The corresponding complementary cumulative distribution function~(CCDF) in each case of panel (c), i.e., the fraction of players that have more than a certain number of matches in each category.}
\label{fig:summaries}
\end{figure*}
\subsection{Ranking Procedure}
We now proceed to the main aim of this article, which is to provide a ranking scheme from which the relative skill of players may be identified. This is obtained through a complex-networks approach in which each node~$i$ is assigned a level of importance~$P_i$, based upon its record across the entire network, via the PageRank score originally used in the ranking of webpage search results~\cite{Page1998} and more recently applied within the domain of sports~\cite{Radicchi2011, Mukherjee2012, Mukherjee2014, TennantBoxing17, Tennant20}. This procedure is mathematically described by
\begin{equation}
P_i = (1 - q) \sum_j P_j \frac{w_{ji}}{k_j^{\text{out}}} + \frac{q}{N} + \frac{1-q}{N} \sum_j P_j \delta\left(k_j^{\text{out}}\right),
\label{eq:pageRank}
\end{equation}
which, importantly, depends on the level of importance associated with all other nodes in the network, and as such is a coupled set of equations. Indeed, the first term within these equations describes the transfer of importance to player~$i$ from every other player~$j$, in proportion to the number of games in which player~$i$ defeated player~$j$, $w_{ji}$, relative to the total number of games player~$j$ lost, i.e., their out-strength~$k_{j}^{\text{out}}$. The value of $q \in [0, 1]$ is a parameter (referred to as a damping factor in some of the literature, e.g.,~\cite{Radicchi2009}) which controls the level of emphasis placed upon each term in the algorithm; it has generally been set to 0.15 in the literature, which we follow here. The second term describes the uniform redistribution of importance to all players in proportion to the damping factor, allowing the system to award some importance to nodes independent of their results. Finally, the last term contains a Kronecker delta, where $\delta(\cdot)$ is equal to one when its argument is zero and is otherwise zero, in order to give a correction in the case where nodes with no out-degree exist that would otherwise act as sinks in the diffusion process considered here. From the perspective of the sporting example studied here such nodes would represent undefeated players, none of which actually occur within the dataset. Therefore, while we show the final term in Eq.~(\ref{eq:pageRank}) for consistency with the existing literature, it can be neglected in the analysis to follow.
The main disadvantage of this approach is that, in general, an analytical solution to this problem proves elusive and as such we revert to numerical solutions, as in~\cite{Radicchi2011, Mukherjee2014}, by initially assigning each node an importance reciprocal to the network's size and iterating until convergence to a certain level of precision. After the system of equations has reached its steady state in this diffusive-like process we proceed to rank the nodes by their corresponding importance scores.
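A minimal sketch of this iterative solution (Python, operating on the nested dictionary produced by the network-construction sketch above; the tolerance and iteration cap are assumptions) is:
\begin{verbatim}
def pagerank(w, q=0.15, tol=1e-12, max_iter=10000):
    """Iterate Eq. (1) to its fixed point.

    w: nested dict with w[j][i] = number of times j lost to i.
    Since no undefeated players occur in the dataset, the final
    (dangling-node) correction term of Eq. (1) is omitted.
    """
    players = set(w)
    for row in w.values():
        players.update(row)
    N = len(players)
    k_out = {j: sum(w.get(j, {}).values()) for j in players}

    P = {i: 1.0 / N for i in players}        # uniform initialization
    for _ in range(max_iter):
        P_new = {i: q / N for i in players}
        for j, row in w.items():
            for i, weight in row.items():
                P_new[i] += (1 - q) * P[j] * weight / k_out[j]
        if sum(abs(P_new[i] - P[i]) for i in players) < tol:
            return P_new
        P = P_new
    return P
\end{verbatim}
Ranking the players then amounts to sorting them by their converged scores, e.g. \texttt{sorted(P, key=P.get, reverse=True)}.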
\section{Results}
\subsection{All-time Rankings}
Having implemented the system of equations given by Eq.~\eqref{eq:pageRank} on the network described in Sec.~\ref{subsec:network}, we obtain an all-time ranking of Snooker players, the top 20 of which are shown in Table~\ref{tab:top20}. Immediately some interesting results appear. First, 18 of the players within this list are still competing in the sport, which is indicative of two things: first, snooker players have considerably longer careers (regularly spanning over 30 years) in comparison to other sports, and second, relatedly, the current period of Snooker can be viewed as a \textit{golden age} of sorts. It is important to note that these results may be considered contrary to general opinion if one instead based the ranking upon the number of \textit{World Championships} (Snooker's premier tournament) a competitor has won, in which case the top four ranked players are \textit{Stephen Hendry}~(7), \textit{Steve Davis}~(6), \textit{Ronnie O'Sullivan}~(6), and \textit{Ray Reardon}~(6). However, one must recognize an important factor which is vital to the algorithm used here, namely that in most of these cases the titles were won over a short period of time, indicating that the prime years of these players' careers did not feature as much competition among other highly ranked players. For example, Hendry won his titles over a period of 10 years, Davis and Reardon in nine years each, whereas O'Sullivan has taken 20 years to amass his collection, suggesting more competition between the players he competed with; as such he is correspondingly given the highest rank of the four by our algorithm.
\begin{table*}[h!]
\centering
\caption{\label{tab:top20}The top 20 players in Snooker's history.}
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} lcclcc}
\toprule
Rank & Player & PageRank Score & In-strength & Nationality & Start & End \\
\midrule
1 & John Higgins & 0.0204 & 899 & Scotland & 1992 & -\\
2 & Ronnie O'Sullivan & 0.0201 & 843 & England & 1992 & -\\
3 & Mark Williams & 0.0169 & 768 & Wales & 1992 & -\\
4 & Stephen Hendry & 0.0164 & 818 & Scotland & 1985 & 2011\\
5 & Mark Selby & 0.0149 & 643 & England & 1999 & -\\
6 & Judd Trump & 0.0136 & 579 & England & 2005 & -\\
7 & Neil Robertson & 0.0134 & 581 & Australia & 2000 & -\\
8 & Steve Davis & 0.0129 & 761 & England & 1978 & 2014\\
9 & Shaun Murphy & 0.0126 & 552 & England & 1998 & -\\
10 & Jimmy White & 0.0116 & 650 & England & 1980 & -\\
11 & Stephen Maguire & 0.0113 & 475 & Scotland & 1997 & -\\
12 & Ali Carter & 0.0111 & 487 & England & 1996 & -\\
13 & Peter Ebdon & 0.0110 & 520 & England & 1991 & -\\
14 & Ken Doherty & 0.0110 & 523 & Ireland & 1990 & -\\
15 & Barry Hawkins & 0.0105 & 475 & England & 2000 & -\\
16 & Marco Fu & 0.0104 & 427 & Hong Kong & 1997 & -\\
17 & Ding Junhui & 0.0103 & 436 & China & 2003 & -\\
18 & Stuart Bingham & 0.0101 & 477 & England & 1996 & -\\
19 & Mark Allen & 0.0100 & 444 & Northern Ireland & 2002 & -\\
20 & Ryan Day & 0.0098 & 458 & Wales & 1998 & -\\
\bottomrule
\end{tabular*}
\end{table*}
Through this approach we identify \textit{John Higgins} to be the greatest Snooker player of all time, which is an understandable statement when one considers his career to date. Having already commented upon the more competitive nature of the game since the late 1990s, we note that Higgins has appeared in the top 10 positions of the official rankings in 22 of the 25 years between 1995 and 2019 (in 20 of which he ranked in the top five positions), which is an impressive return in the circumstances. An interesting occurrence is also observed whereby the top three positions are filled by players who first competed professionally in 1992 (these three are in fact known as the \textit{Class of 92} within the Snooker community), and their frequent competition with one another over the past 28 years is itself helpful in understanding their co-appearance as the highest ranked players.
Two other interesting cases arise in the rankings shown here, namely the relatively low rankings of both \textit{Steve Davis} and \textit{Jimmy White} in spite of their large numbers of wins. This is again explained by both players experiencing their peak tournament-winning years in an era with fewer successful players (it is worth noting that White is in fact infamous for having never won the World Championship despite reaching six finals), and as such their wins receive less importance in the algorithm.
One may observe from Table \ref{tab:top20} that there appears to be a strong correlation between a player's PageRank and their corresponding number of wins (their in-strength). This is further demonstrated in Fig.~\ref{fig:all_time_rank} where the relationship between the two is visualized. Quantitative comparison between the two measures gives a very strong association, with a Spearman correlation of $\rho = 0.932$ and a Kendall tau correlation of $\tau = 0.793$. While this correlation is strong it does indicate some disparity, particularly in the case of the second measure, which demonstrates the subtle differences between the two approaches. Namely, the algorithm proposed in this article accounts for the quality of the opponents a player defeats, such that it captures more information than simply the result itself. This suggests that those players who appear below the red dashed line in Fig.~\ref{fig:all_time_rank} have had to obtain their career wins in more difficult contests, and vice versa for those above the line. This point further emphasizes the value of our proposed approach for identifying the top players. On the contrary, if the metric used were simply the number of wins, players would have an incentive to enter as many tournaments as possible, particularly those in which they had a better chance of less competitive games, in order to increase their number of wins.
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{all_time_ranks.pdf}
\caption{\textbf{Relationship between PageRank importance and number of wins.} The top 30 players over the full time period considered here (1968-2019) ranked by both the PageRank score and the number of wins obtained by the player, which interestingly offer an exact overlap of players. We see that the two measures are highly correlated with Spearman correlation $\rho = 0.932$, and Kendall $\tau = 0.793$.}
\label{fig:all_time_rank}
\end{figure*}
\subsection{Specific Seasons}
While the analysis thus far has focused upon the entire breadth of the dataset, we may also readily consider more specific time periods within which a ranking of players may be provided. Indeed, this is particularly beneficial when we consider sections of the data at a season level, i.e., the annual representation of the game of Snooker. Taking this as our starting point, we proceed to consider each of the 52 seasons in the dataset and calculate the importance of each player who features in said season using our proposed algorithm. Table~\ref{tab:number_ones} shows the highest ranked player in each of these seasons using three metrics --- PageRank, in-strength (number of wins), and the official rankings provided by World Snooker~\cite{worldsnooker}.
\makeatletter
\renewcommand\@makefnmark{\hbox{\@textsuperscript{\normalfont\color{black}\@thefnmark}}}
\makeatother
\begin{table*}[h!]
\centering
\caption{\label{tab:number_ones}The highest ranked player each year from 1968 based upon PageRank, In-strength, and the official rankings from World Snooker. Note year denotes the calendar year in which a season begun, i.e., 1975 represents the 1975-76 season.}
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} ccc}
\toprule
Year & PageRank & In-strength & World Snooker\\
\midrule
1968 & Ray Reardon & Ray Reardon & ---\\
1969 & John Spencer & John Spencer & ---\\
1970 & John Spencer & John Spencer & ---\\
1971 & John Spencer & John Spencer & ---\\
1972 & John Spencer & John Spencer & ---\\
1973 & John Spencer & Graham Miles & ---\\
1974 & John Spencer & John Spencer & ---\\
1975 & Ray Reardon & Ray Reardon & Ray Reardon\\
1976 & Doug Mountjoy & Doug Mountjoy & Ray Reardon\\
1977 & Ray Reardon & Ray Reardon & Ray Reardon\\
1978 & Ray Reardon & Ray Reardon & Ray Reardon\\
1979 & Alex Higgins & Alex Higgins & Ray Reardon\\
1980 & Cliff Thorburn & Cliff Thorburn & Cliff Thorburn\\
1981 & Steve Davis & Steve Davis & Ray Reardon\\
1982 & Steve Davis & Steve Davis & Steve Davis\\
1983 & Steve Davis & Steve Davis & Steve Davis\\
1984 & Steve Davis & Steve Davis & Steve Davis\\
1985 & Steve Davis & Steve Davis & Steve Davis\\
1986 & Steve Davis & Steve Davis & Steve Davis\\
1987 & Steve Davis & Steve Davis & Steve Davis\\
1988 & Steve Davis & Steve Davis & Steve Davis\\
1989 & Stephen Hendry & Stephen Hendry & Stephen Hendry\\
1990 & Stephen Hendry & Stephen Hendry & Stephen Hendry\\
1991 & Stephen Hendry & Stephen Hendry & Stephen Hendry\\
1992 & Steve Davis & Steve Davis & Stephen Hendry\\
1993 & Stephen Hendry & Stephen Hendry & Stephen Hendry\\
1994 & Stephen Hendry & Stephen Hendry & Stephen Hendry\\
1995 & Stephen Hendry & Stephen Hendry & Stephen Hendry\\
1996 & Stephen Hendry & Stephen Hendry & Stephen Hendry\\
1997 & John Higgins & John Higgins & John Higgins\\
1998 & Mark Williams & John Higgins & John Higgins\\
1999 & Mark Williams & Mark Williams & Mark Williams\\
2000 & Ronnie O'Sullivan & Ronnie O'Sullivan & Mark Williams\\
2001 & Mark Williams & John Higgins & Ronnie O'Sullivan\\
2002 & Mark Williams & Mark Williams & Mark Williams\\
2003 & Stephen Hendry & Ronnie O'Sullivan & Ronnie O'Sullivan\\
2004 & Ronnie O'Sullivan & Ronnie O'Sullivan & Ronnie O'Sullivan\\
2005 & John Higgins & Ding Junhui & Stephen Hendry\\
2006 & Ronnie O'Sullivan & Ronnie O'Sullivan & John Higgins\\
2007 & Shaun Murphy & Shaun Murphy & Ronnie O'Sullivan\\
2008 & John Higgins & John Higgins & Ronnie O'Sullivan\\
2009 & Neil Robertson & Neil Robertson & John Higgins\\
2010 & Shaun Murphy & Matthew Stevens & Mark Williams\\
2011 & Mark Selby & Mark Selby & Mark Selby\\
2012 & Stephen Maguire & Stephen Maguire & Mark Selby\\
\arrayrulecolor{red}
\midrule
\arrayrulecolor{black}
2013\footnote{Rankings changed to be based upon value of prize-winnings.} & Shaun Murphy & Neil Robertson & Mark Selby\\
2014 & Stuart Bingham & Stuart Bingham & Mark Selby\\
2015 & Mark Selby & Mark Selby & Mark Selby\\
2016 & Judd Trump & Judd Trump & Mark Selby\\
2017 & John Higgins & John Higgins & Mark Selby\\
2018 & Judd Trump & Neil Robertson & Ronnie O'Sullivan\\
2019 & Judd Trump & Judd Trump & Judd Trump\\
\bottomrule
\end{tabular*}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{prize_fund_distribution.pdf}
\caption{\textbf{Distribution of prize funds within Snooker.} The total prize fund of each tournament in the last ten years of our dataset is shown. We highlight how the distributions have become more skewed in recent times, with the large outlier in each case representing the premier tournament in Snooker --- the World Championship. Note that the same outlier is represented twice each season, i.e., by both the box and the scatter plot.}
\label{fig:prize_money}
\end{figure*}
This table offers an interesting comparison of how the best player in a given season is determined. For example, the early seasons highlight a major benefit of our approach: since no official rankings were calculated in those years, new inferences may be made, and we identify \textit{Ray Reardon} and \textit{John Spencer} as the dominant players of that era. It is worth noting that in these early years there was a very small number of tournaments from which to determine a ranking, as evidenced in panels (a) and (b) of Fig.~\ref{fig:summaries}. Considering the years in which there are three measures to compare, we first comment on the general agreement between the PageRank and in-strength rankings over the first half of our dataset (in all years but one up to 1997 the two metrics agree on the number-one ranked player), which in general also aligns strongly with the official rankings. This is in agreement with our earlier statements regarding the level of competition present in these years, such that the best player could be readily identified due to a lower level of competition. After this period, however, the level of agreement between the three metrics fluctuates considerably more, suggesting that the official rankings were not entirely capturing the true landscape of the game. This may have been one of the motivations for changing the official ranking procedure from the 2013-14 season onwards to be based upon total monetary prize winnings rather than the points-based system used previously. This change occurred, however, just as the monetary value of the largest competitions increased significantly, as demonstrated in Fig.~\ref{fig:prize_money}, which can result in a skewed level of emphasis upon the larger tournaments. Analysis of the seasons which have occurred since this change suggests that the problem has not been remedied by this alteration, as demonstrated by the three measures agreeing only twice in the seven subsequent seasons, whereas the two network-based procedures agree four times in the same period.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{2018_ranks.pdf}
\caption{\textbf{Ranking algorithms in the 2018-19 season.} \textbf{(a)} The top 30 players from the 2018-19 season ranked by both the PageRank score and the number of wins obtained by the player; we see a strong correlation between the two metrics~(Spearman correlation $\rho = 0.917$, and Kendall $\tau = 0.792$). Note that some players only appear in the top thirty ranks of one of the metrics. \textbf{(b)} The equivalent plot using the PageRank score and the official World Snooker rankings, where we now see a rather less correlated picture, particularly at larger ranks~($\rho = 0.635$, $\tau = 0.471$).}
\label{fig:2018_ranks}
\end{figure*}
Figure \ref{fig:2018_ranks} shows a direct comparison between the top thirty ranked players obtained by the PageRank approach and the two other metrics. Specifically, we see in panel (a) that the PageRank and in-strength rankings generally agree well, with strong correlation between the two ($\rho = 0.917$, $\tau = 0.792$). The equivalent features to those found in Fig.~\ref{fig:all_time_rank} are also relevant here, namely that the location of a player relative to the red dashed line is indicative of the types of matches in which they take part. For example, we see that \textit{Ronnie O'Sullivan}, a player widely acknowledged to be among the greats of the game who has in recent years, being in self-proclaimed semi-retirement, become selective about the tournaments in which he competes, is ranked better by the PageRank score than by his number of wins. This is in agreement with the idea that he generally focuses on the more prestigious tournaments (he took part in eleven in our dataset for this season, of which he won four), thus playing fewer matches, but those he does play tend to be against better players, which is more readily captured by the PageRank algorithm. On the other end of the spectrum we have \textit{Yan Bingtao}, a young professional in the game, whose in-strength rank is stronger than his corresponding PageRank rank, suggesting that he has more wins against less prestigious opponents (he took part in eighteen such tournaments, unfortunately with no tournament wins).
The PageRank rankings are compared with the official World Snooker rankings in Fig.~\ref{fig:2018_ranks}(b), where we observe that the two sets of ranks diverge considerably, particularly in the case of larger ranks. This has an effect on the corresponding correlation, which is notably lower than in the previously considered case ($\rho = 0.635$, $\tau = 0.471$). Both panels in Fig.~\ref{fig:2018_ranks} suggest that the PageRank is in some sense capturing the important features of the two alternative ranking metrics. In particular, it appears that the algorithm correctly incorporates the prestige of the tournament (shown by the good fit at higher ranks with the official rankings) while also more accurately capturing the performances of those players with lower ranks based upon their number of wins rather than their prize winnings, which can prove negligible when a player does not progress far in tournaments.
Lastly, unlike traditional ranking systems, the PageRank ranking scheme offers the advantage of being applicable across arbitrary time-spans. To demonstrate this potential we rank the top 10 players in each decade from the 1970s onwards, with the resulting players shown in Table \ref{tab:top10_era}. These results highlight some interesting behavior in each decade: as commented on earlier, Steve Davis and Stephen Hendry are the highest ranked in the 1980s and 1990s respectively, the periods in which they won all of their World Championships. The three highest ranked players shown in Table \ref{tab:top20}, the class of '92 (John Higgins, Ronnie O'Sullivan, and Mark Williams), all feature in the top 10 rankings of three separate decades, indicating their longevity in the sport and further justifying their positions in our all-time rankings.
\begin{table*}[b]
\caption{\label{tab:top10_era}The top 10 players by PageRank in different decades of Snooker's history.}
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} ccccc}
&&& Era &&\\
\cmidrule{2-6}
Rank & 1970-1979 & 1980-1989 & 1990-1999 & 2000-2009 & 2010-2019\\
\cmidrule{2-2}\cmidrule{3-3}\cmidrule{4-4}
\cmidrule{5-5} \cmidrule{6-6}
1 & Ray Reardon & Steve Davis & Stephen Hendry & Ronnie O'Sullivan & Judd Trump\\
2 & John Spencer & Jimmy White & Ronnie O'Sullivan & John Higgins & Mark Selby\\
3 & Alex Higgins & Terry Griffiths & John Higgins & Stephen Hendry & Neil Robertson\\
4 & Doug Mountjoy & Dennis Taylor & Steve Davis & Mark Williams & John Higgins\\
5 & Eddie Charlton & Stephen Hendry & Ken Doherty & Mark Selby & Shaun Murphy\\
6 & Graham Miles & Cliff Thorburn & John Parrott & Ali Carter & Mark Williams\\
7 & Cliff Thorburn & Willie Thorne & Jimmy White & Shaun Murphy & Barry Hawkins\\
8 & John Pulman & John Parrott & Alan McManus & Neil Robertson & Ronnie O'Sullivan\\
9 & Dennis Taylor & Tony Meo & Mark Williams & Ding Junhui & Stuart Bingham\\
10 & Rex Williams & Alex Higgins & Peter Ebdon & Stephen Maguire & Mark Allen\\
\bottomrule
\end{tabular*}
\end{table*}
\subsection{Similarity of ranking metrics}
With the aim of more rigorously quantifying the relative performance of the PageRank ranking scheme in comparison to both the official rankings and the in-strength of players, we consider the \textit{Jaccard similarity} of the approaches. This metric measures the level of similarity between two sets $A$ and $B$ and is given by
\begin{equation}
J(A,B) = \frac{|A \cap B|}{|A \cup B|},
\label{eq:jaccard}
\end{equation}
where $|A \cap B|$ is the number of players that appear in both sets $A$ and $B$, while $|A \cup B|$ gives the total number of unique players appearing in either set. The quantity is defined in the range $[0,1]$, with zero indicating no similarity between the two sets and one indicating that the two sets are identical.
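A minimal sketch of Eq.~\eqref{eq:jaccard} applied to the top-$k$ players of two ranking schemes is given below; the player orderings shown are purely illustrative.
\begin{verbatim}
# Jaccard similarity of the top-k players under two ranking schemes;
# the orderings below are illustrative only.
def jaccard(a, b):
    """|A intersection B| / |A union B| for two sets A and B."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

pagerank_order = ["Trump", "Selby", "Robertson", "Higgins", "Murphy"]
official_order = ["Trump", "Robertson", "Selby", "Williams", "Allen"]
for k in (3, 5):
    print(k, jaccard(pagerank_order[:k], official_order[:k]))
\end{verbatim}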
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Jaccard.pdf}
\caption{\textbf{Jaccard similarity between the ranking procedures.} \textbf{(a)} Similarity of the top five ranked players by PageRank to both the in-strength and official rankings, calculated via Eq.~\eqref{eq:jaccard}. The right-vertical axis describes the number of players overlapping in the two sets. Equivalent plots in the case of the top \textbf{(b)} 10, \textbf{(c)} 25, and \textbf{(d)} 50 ranked players are also shown.}
\label{fig:jaccard}
\end{figure}
Figure~\ref{fig:jaccard} shows this quantity for the top 5, 10, 25, and 50 ranks in each season. In each case the PageRank ranking is compared with both the in-strength and official rankings. Significantly, we again observe that the PageRank system offers strong, although not perfect, agreement with both alternative schemes in the case of the higher ranks (panels (a)-(c)). Furthermore, the difference in its behavior at larger ranks is again evident, as shown by the smaller similarity with the official rankings when the top 50 ranks are considered in panel (d). These results provide rigorous justification for the use of our PageRank scheme in ranking players, as it accurately captures the better ranks in both alternative metrics while also more fairly representing lower-ranked players through their total number of winning matches rather than their prize-money winnings.
\subsection{Rank-clocks}
To provide an insight into the temporal fluctuations of a player's ranking throughout their career we make use of a tool known as a rank-clock~\cite{Batty2006}. These visualizations are obtained by transforming the temporal ranks to polar coordinates, with the rank represented by the radial component and the corresponding year by the angular part. As such, the temporal variation of a player's rank over their career is shown by the clockwise trend in the rank-clock. Such graphical techniques have previously been used within the sport of boxing as a tool to consider future opponents for boxers \cite{TennantBoxing17}.
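A minimal matplotlib sketch of this polar construction, assuming a hypothetical per-season rank series, is given below; it follows the conventions of our figures (first season at the top, clockwise evolution, line broken outside the top 50).
\begin{verbatim}
# Rank-clock sketch: seasons map to angles (clockwise from the top) and the
# per-season rank is the radius; ranks outside the top 50 are masked so the
# curve shows a discontinuity, as described in the text.
import numpy as np
import matplotlib.pyplot as plt

def rank_clock(ax, ranks, max_rank=50):
    theta = np.linspace(0.0, 2 * np.pi, num=len(ranks), endpoint=False)
    r = np.asarray(ranks, dtype=float)
    r[r > max_rank] = np.nan          # break the line outside the top 50
    ax.set_theta_zero_location("N")   # first season at 12 o'clock
    ax.set_theta_direction(-1)        # clockwise over the career
    ax.plot(theta, r)
    ax.set_rmax(max_rank)

ranks = np.random.default_rng(1).integers(1, 80, size=25)  # hypothetical career
fig = plt.figure()
ax = fig.add_subplot(projection="polar")
rank_clock(ax, ranks)
plt.show()
\end{verbatim}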
Figure \ref{fig:rank_clock} shows the corresponding temporal fluctuations in the PageRank rankings of the top 12 players of all-time according to the preceding analysis. Each plot begins with the player's first competing season before developing clockwise over the course of their career, ending with either the 2019-20 season rankings or their final competitive year. Importantly, the outermost circle never goes beyond a rank of 50, and seasons in which a player's PageRank ranking is worse than 50 appear as a discontinuity in the line. The top four ranked players again demonstrate their longevity in competitive performances, with all four practically always ranked within the top 40 each season (the exceptions are the 2012-13 season, in which Ronnie O'Sullivan took an extended break and only competed in the World Championship --- which he won --- and the first season of Stephen Hendry's career).
These visualizations allow one to quickly compare the careers (and the direction in which they are going) for a selection of players. For example, a diverging curve in the later part of the clock indicates a drop in the competitive standard of a player, which is clearly evident for the two who have already retired --- Stephen Hendry and Steve Davis --- and also for Jimmy White, who now frequently appears on the Senior tour and enters tournaments only on an invitational basis due to his pedigree in the game's history. The other extreme, in which a player performs more strongly later in their career, is evident from a curve that converges over time; this is clearly the case for a number of the current crop of players, including Judd Trump and Neil Robertson.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{rank_clocks.pdf}
\caption{\textbf{PageRank ranks of the top 12 players in our dataset over the duration of their careers.} The temporal evolution of a player's PageRank ranking, calculated for each individual season, is shown with the radial coordinate representing the rank and the angular coordinate representing the time evolution of the player's career. Discontinuities in the curves represent seasons in which the player finished outside the top 50 ranks. Seasons in which the player obtained a number one ranking are indicated by the purple dots.}
\label{fig:rank_clock}
\end{figure}
\section{Discussion and Conclusions}
The human desire to obtain a definitive ranking of competitors within a sport is intrinsically frustrated by the many complexities inherent in the games themselves. Classical approaches, such as simply counting the number of contests a competitor wins, are unsatisfactory because they take no account of the quality of the opponents faced in those contests. In this article, with the aim of incorporating these considerations into a ranking procedure, we provide further evidence of the advantages offered by network science in providing a more rigorous framework within which one may determine a ranking of competitors. Specifically, we consider a network describing the matches played between professional snooker players during a period of over 50 years. We constructed this networked representation from the entire history of matches within the sport and demonstrated that this network has features consistent with previously studied social networks.
Having obtained this network structure, we can gain a much greater understanding of the competition that has taken place by utilizing the network topology rather than simply considering individual matches. With this in mind we made use of the PageRank algorithm, which has previously been shown to be effective in such scenarios for a number of sports~\cite{Radicchi2011, Mukherjee2012, Mukherjee2014, TennantBoxing17, Tennant20}, with the aim of obtaining a more effective ranking system. The advantage of this approach is that it considers not only the number of contests a competitor wins but also the quality of the opponents they defeat, and as such more accurately describes the performance of individual athletes. We show that this procedure is readily applicable to any temporal period for which one has data, thus allowing statements regarding the ranking at arbitrary points in time within the sport. Another important feature of our approach is that it requires no external considerations, such as the points-based and monetary-based systems historically used in the official rankings. Furthermore, we quantify the level of similarity between two different ranking schemes through the Jaccard similarity, which validates the benefits of our approach in capturing the ranks of players across skill levels. A visualization tool in the form of the rank-clock is also introduced, offering a novel approach with which policy-makers within the sport of Snooker may quantify the success of competitors over the course of their careers.
\section*{Competing Interests}
\noindent The authors have declared that no competing interests exist.
\section*{Data Availability}
\noindent All data and code used in this article are available at \url{https://github.com/obrienjoey/snooker_rankings}.
\begin{acknowledgements}
\noindent Helpful discussions with Edward Gunning are gratefully acknowledged. This work was supported by Science Foundation Ireland Grant No.~16/IA/4470, 16/RC/3918, 12/RC/2289P2, and 18/CRT/6049 (J.D.O'B. and J.P.G).
\end{acknowledgements}
\section{Introduction}
\label{intro}
The Traveling Salesman Problem (TSP) is one of the most studied problems in combinatorial optimization \cite{surv} \cite{surv2}. In its classic form, a salesman wants to visit each of a set of cities exactly once and return home while minimizing travel costs. Costs of traveling between cities are stored in a matrix where entry $c_{ij}$ indicates the cost of traveling from city $i$ to city $j$. Units may be distance, time, money, etc.
If the underlying graph for the TSP is sparse, a complete cost matrix can still be constructed by setting $c_{ij}$ equal to the cost of the shortest path between city $i$ and city $j$ for each pair of cities. However, this has the disadvantage of turning a sparse graph $G = (V,E)$, where the edge set $E$ could be of size $O(|V|)$, into a complete graph $G' = (V,E')$, where the edge set $E'$ is of size $O(|V|^2)$.
Ratliff and Rosenthal were the first to consider a case where the edge set is not expanded to a complete graph, but left sparse, \cite{RR}, while soon after, Fleischmann \cite{BF} and Cornu\'ejols, Fonlupt, and Naddef \cite{CFN} examined this in a more general case, the latter giving this its name: the Graphical Traveling Salesman Problem (GTSP). As a consequence, a city may be visited more than once, since there is no guarantee the underlying graph will be Hamiltonian. While the works of Fleischmann and Cornu\'ejols et al. focused on cutting planes and facet-defining inequalities, this paper will look at a new compact formulation that can improve on the integrality gap created when solving a linear programming relaxation of the problem.
\section{Basic Formulations}
\label{sec1}
This paper will investigate the symmetric GTSP, where the cost of traveling between two cities is the same, regardless of direction, which allows the following notation to be used:
\begin{displaymath}
\begin{array}{rl}
G = (V,E) & \mbox{: The graph $G$ with vertex set $V$ and edge set $E$.} \\
c_e & \mbox{: The cost of using edge $e$, replaces $c_{ij}$.} \\
x_e & \mbox{: The variable indicating the use of edge $e$, replaces $x_{ij}$ which } \\
& \mbox{~~~~~is used in most general TSP formulations.} \\
\delta(v) & \mbox{: The set of edges incident to vertex $v$.} \\
\delta(S) & \mbox{: The set of edges with exactly one endpoint in vertex set $S$.} \\
x(F) & \mbox{: The sum of variables $x_e$ for all $e\in F \subset E$.}
\end{array}
\end{displaymath}
If given a formulation on a complete graph $K_n$, a formulation for a sparse graph $G$ can be created by simply setting
$x_e = 0$ for any edge e in the graph $K_n$ but not in the graph $G$.
\subsection{Symmetric TSP}
\label{sec1a}
The standard formulation for the TSP, attributed to Dantzig, Fulkerson, and Johnson \cite{DFJ}, contains constraints that guarantee the degree of each node in a solution is exactly two (degree constraints) and constraints that prevent a solution from being a collection of disconnected subtours (subtour elimination constraints).
\begin{displaymath}
\begin{array}{llll}
\mbox{minimize}&\sum\limits_{e \in E} c_e x_e \\
\mbox{subject to}
&\sum\limits_{e\in \delta(v)} x_e = 2 & \forall v\in V \\
&\sum\limits_{e\in \delta(S)} x_e\geq 2 & \forall S\subset V,~S \neq\emptyset \\
&x_e\in \left\{{0,1}\right\} & \forall e \in E.
\end{array}
\end{displaymath}
When this integer program is relaxed, the integer constraints $x_e \in \left\{{0,1}\right\}$ are replaced by the boundary constraints $0\leq x_e\leq 1$.
It should also be noted that while the subtour elimination constraints are only needed for the cases where $3\leq |S|\leq \frac{|V|}{2}$, there are still exponentially many of these constraints. Using a similar technique to Martin \cite{RKM}, which was directly applied to the TSP by Carr and Lancia \cite{CL}, these constraints can be replaced by a polynomial number of flow constraints which ensure the solution is a 2 edge-connected graph.
\subsection{Symmetric GTSP}
\label{sec1b}
This formulation for the Graphical TSP comes from Cornu\'ejols, Fonlupt, and Naddef \cite{CFN}, and differs from the formulation above by allowing the degree of a node to be any even integer, and by removing any upper bound on the variables.
\begin{displaymath}
\begin{array}{llll}
\mbox{minimize}&\sum\limits_{e \in E} c_e x_e \\
\mbox{subject to}
&\sum\limits_{e\in \delta(v)} x_e \mbox{~is positive and even} & \forall v\in V \\
&\sum\limits_{e\in \delta(S)} x_e \geq 2 & \forall S\subset V,~S\neq\emptyset \\
&x_e\geq 0 & \forall e \in E. \\
&x_e \mbox{~is integer} & \forall e \in E.
\end{array}
\end{displaymath}
When this program is relaxed, the integer constraints at the end are removed, and the disjunctive constraints that require the degree of each node to be any positive and even integer are effectively replaced by a lower bound of two on the degree of each node.
The disjunctive constraints in the formulation above are unusual for two reasons. Firstly, in most mixed-integer programs, only variables are constrained to be integers, not sums of variables found in constraints. Secondly, the sum is required to be not just an integer, but an even integer. In terms of a mixed-integer formulation, the second peculiarity could be addressed with:
\begin{displaymath}
\begin{array}{llll}
&\sum\limits_{e\in \delta(v)} \frac{x_e}{2} \in \mathbb{Z} & \forall v\in V \\
\end{array}
\end{displaymath}
To our knowledge, no other integer programming formulation for a graph theory application uses constraints of this kind. (Even the
constraints for the common T-join problem are different from what we are proposing here, as will be discussed at the end of the paper.)
Addressing the first peculiarity, that integer and mixed-integer programs only allow integrality of variables, we could set these sums to new variables indexed on the vertices of the graph, $d_v$.
\begin{displaymath}
\begin{array}{lrlll}
&\sum\limits_{e\in \delta(v)} \frac{x_e}{2} &=& d_v & \forall v\in V \\
&d_v &\in& \mathbb{Z} & \forall v\in V \\
\end{array}
\end{displaymath}
While this approach works, it feels unsatisfying. The addition of these $d_v$ variables is purely cosmetic. When solving the relaxation, there is nothing preventing us from waiting until a solution is generated before defining the values of $d_v$ using the sums above. Thus the new variables do not facilitate the addition of any new constraints, and do nothing to strengthen the LP relaxation in any way.
When solving the integer program, we can bypass the $d_v$ variables by branching with constraints based on the degree sums. For example, if the solution from a relaxation creates a graph where node $i$ has odd degree $q$, we branch with constraints of the form:
\begin{displaymath}
\begin{array}{lrlll}
&\sum\limits_{e\in \delta(i)} x_e &\leq& q-1 & \\
\end{array}
\end{displaymath}
and
\begin{displaymath}
\begin{array}{lrlll}
&\sum\limits_{e\in \delta(i)} x_e &\geq& q+1 & \\
\end{array}
\end{displaymath}
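A small sketch of this branching rule is given below (the data structures are illustrative assumptions, not part of the formulation): it scans the degree sums of a relaxation solution and returns the pair of branching constraints for the first node whose degree is an odd integer.
\begin{verbatim}
# Degree-parity branching: given degree sums from a relaxation, return the
# two branches (<= q-1 and >= q+1) for a node whose degree is an odd integer.
def parity_branch(degree_sums, tol=1e-6):
    for node, deg in degree_sums.items():
        q = round(deg)
        if abs(deg - q) < tol and q % 2 == 1:   # integral but odd degree
            return [(node, "<=", q - 1), (node, ">=", q + 1)]
    return None                                  # all degrees already even

print(parity_branch({1: 2.0, 2: 3.0, 3: 4.0}))  # branches on node 2
\end{verbatim}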
At a conference, Denis Naddef proposed a challenge of finding a set of constraints for a mixed-integer formulation of GTSP, where integrality constraints are limited to only $x_e \in \mathbb{Z}$ \cite{Ndf}. We will address the state of this challenge in section~\ref{mip}.
\section{New Constraints}
\label{sec2}
Cornu\'ejols et al. proved that an upper bound of two on each $x_e$ is implied if all the edge costs are positive \cite{CFN} (and also note that without this additional bound, graphs with negative weight edges would not have finite optimal solutions). This fact allows us to dissect the variables $x_e$ into two components $y_e$ and $z_e$ such that, for each edge $e\in E$:
\begin{displaymath}
\begin{array}{rll}
y_e = 1 & \mbox{if edge $e$ is used exactly once,} & y_e = 0 \mbox{~otherwise} \\
z_e = 1 & \mbox{if edge $e$ is used exactly twice,~} & z_e = 0 \mbox{~otherwise}
\end{array}
\end{displaymath}
Note that
\begin{equation}
x_e = y_e + 2z_e
\label{xy2z}
\end{equation}
Additionally, we can add the constraint $y_e + z_e\leq 1$ for each edge $e\in E$, since using both would imply an edge being used three times in a solution. But more importantly, we now have a way to enforce even degree without using disjunctions, since only the $y_e$ variables matter in determining if the degree of a node is odd or even.
\subsection{Enforcing even degree without disjunctions}
\label{sec2a}
Since the upper bound on the $y_e$ variables is one, the following constraints will enforce even degree:
\begin{equation}
\sum\limits_{e\in F} (1-y_e) + \sum\limits_{e\in \delta(v)\setminus F} y_e \geq 1~~\forall v\in V \mbox{ and $F\subset\delta(v)$ with $|F|$ odd.}
\label{iseven}
\end{equation}
This type of constraint was used by Yannakakis et al. \cite{yan} and Lancia et al. \cite{lanc} when working with the parity polytope.
Note that for integer values of $y_e$, if the $y$-degree of a node $v$ is odd, then when $F$ is the set of edges incident to $v$ with $y_e = 1$, the expression on the left-hand side of the constraint above will be zero. If the $y$-degree of a node $v$ is even, then for any set $F$ with $|F|$ odd, the left-hand side must be at least one.
For sparse graphs, this adds at most $O(|V|2^{\Delta-1})$ constraints, where $\Delta$ is the maximum degree in $G$. Typical graphs from roadmaps usually have $4\leq\Delta\leq 6$, while graphs from highway maps might have $5\leq\Delta\leq 8$. Also note that Euler's formula for planar graphs guarantees that $|E| \leq 3|V| - 6$, and so the average degree of a node in a planar graph cannot be more than six.
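For illustration, the sketch below enumerates these constraints for a single vertex; each constraint is represented by a coefficient map over the incident edges together with the constant arising from $\sum_{e\in F}(1-y_e)$.
\begin{verbatim}
# Enumerate the even-degree constraints (iseven) at one vertex: one
# constraint per odd-cardinality subset F of its incident edges, read as
# constant + sum_e coeff[e] * y_e >= 1.
from itertools import combinations

def parity_constraints(incident_edges):
    edges = list(incident_edges)
    for size in range(1, len(edges) + 1, 2):            # odd |F| only
        for F in combinations(edges, size):
            coeffs = {e: (-1 if e in F else 1) for e in edges}
            yield coeffs, len(F)                        # constant = |F|

# a degree-4 vertex contributes 2^(4-1) = 8 such constraints
print(sum(1 for _ in parity_constraints(["e1", "e2", "e3", "e4"])))
\end{verbatim}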
Unfortunately, in the relaxation of the linear program with these constraints, odd degree nodes can still result from allowing a path of nodes where, for each edge $e$ in the path, $y_e = 0$ and $z_e = \frac{1}{2}$. See figure~\ref{fig1}.
\begin{figure}[h]
\begin{center}
\pgfplotsset{every axis legend/.append style={draw=none, at={(1.03,0.5)},anchor=west}}
\begin{tikzpicture}
\begin{axis}[hide axis, axis equal image, tiny, legend style={font=\normalfont}]
\addplot [style={solid}, black] coordinates {
(0,0) (1.5,1) (3,1) (4.5,0) (3,-1) (1.5,-1) (0,0)
};
\addlegendentry{$y_e = 1$ and $z_e = 0$}
\addplot [black, style={dashed}] coordinates {
(0,0) (4.5,0)
};
\addlegendentry{$y_e = 0$ and $z_e = \frac{1}{2}$}
\addplot [only marks, black] coordinates {
(0,0) (1.5,1) (3,1) (4.5,0) (3,-1) (1.5,-1) (1.5,0) (3,0)
};
\end{axis}
\end{tikzpicture}
\caption{A path where $y_e = 0$ and $z_e = \frac{1}{2}$}
\label{fig1}
\end{center}
\end{figure}
\subsection{Spanning Tree Constraints}
\label{sec2d}
One method to discourage this half-$z$ path is to require that the edges indicated by $\bm{y}$ and $\bm{z}$ contain a spanning tree. This is different from demanding that $\bm{x}$ contains a spanning tree, since each unit of $z_e$ contributes two units to $x_e$. For the spanning tree constraint, each $z_e$ contributes only one unit toward a spanning tree, which means that for any node whose $y$-degree is zero, the $z$-degree must be at least one; and in the case where two nodes with $y$-degree zero are connected using an edge in the spanning tree, the $z$-degree of one of those nodes must be at least two, which effectively prevents this half-$z$ path.
Place constraints on binary variables $\bm{t}$ such that the edges where $t_e = 1$ indicate a tree that spans all nodes of $G$ (the graph must be connected and contain no cycles). This is done by the well-known partition inequalities that will be discussed in section \ref{mip}. As with the subtour elimination constraints, Martin describes a compact set of constraints that ensure $\bm{t}$ indicates a tree (or a convex combination of trees) \cite{RKM}. Then, add the constraint:
\begin{equation}
t_e \leq y_e + z_e,~~ \forall e \in E
\label{yztree}
\end{equation}
Only the connectedness of the graph indicated by $t_e$ is important, since the constraint only requires that $\bm{y}+\bm{z}$ dominate a spanning tree, so the constraints that would prevent cycles are unnecessary.
This constraint is valid since any tour that visits every node has within it a spanning tree that touches each node.
\section{A New Mixed IP Formulation}
\label{mip}
\subsection{Proving the Formulation}
\label{sec4a}
The tree constraints are sufficient, when combined with constraint (\ref{iseven}) and integrality constraints $y_e\in\left\{0,1\right\}$, to find an optimal integer solution value, making the subtour elimination constraints unnecessary. The following new mixed integer programming formulation therefore does not include these optional constraints. Note that all GTSP tours will satisfy the constraints in this formulation. The notation
$\delta(V_1, ..., V_k)$ refers to the set of edges with endpoints in different vertex sets.
\begin{displaymath}
\begin{array}{lcll}
\mbox{minimize}&\sum\limits_{e \in E} c_e x_e \\
\mbox{subject to}
&x_e = y_e + 2z_e & \forall e\in E & (4.1) \\
&\sum\limits_{e\in F} (1-y_e) + \sum\limits_{e\in \delta(v)\setminus F} y_e \geq 1 & \forall v\in V \mbox{ and $F\subset\delta(v)$ with $|F|$ odd} & (4.2) \\
&\sum\limits_{e\in \delta(V_1, ..., V_k)} t_e \geq k-1 & \forall \mbox{~partitions~} V_1, ..., V_k \mbox{~of~} V & (4.3) \\
&t_e \leq y_e + z_e \leq 1 & \forall e\in E & (4.4) \\
&0\leq t_e \leq 1 & \forall e \in E & (4.5) \\
&0\leq z_e \leq 1 & \forall e \in E & (4.6) \\
&y_e\in \left\{0,1\right\}& \forall e \in E & (4.7) \\
\end{array}
\end{displaymath}
\begin{theorem}
Given a MIP solution $(\bm{y^*}, \bm{z^*})$ to the GTSP formulation above, then $\bm{x^*} = \bm{y^*} + 2\bm{z^*}$ will indicate an edge set that is an Euler tour, or a convex combination of Euler tours.
\end{theorem}
$Proof$. It should be noted that it is sufficient for the MIP solution to dominate an Euler tour (or a convex combination of them), since if there is some edge $e$ where $x_e^*$ is larger than necessary by $\epsilon$ units (for some $0 < \epsilon \leq x^*_e \leq 2$), one can add edge $e$ twice to a collection of Euler tours with total weight $\frac{\epsilon}{2}$.
Let $(\bm{y^*}, \bm{z^*})$ be a feasible solution to the GTSP formulation specified.
By constraints (4.3) and (4.4), we know that $\bm{y^*}+\bm{z^*}$ dominates a convex combination of spanning trees, thus we have
$\bm{y^*}+\bm{z^*} \geq \sum_i \lambda_i \bm{T^i}$, where
each $\bm{T^i}$ is an edge incidence vector of a spanning tree.
Define $\bm{R^i}$ by $R_e^i = 1$ if both $T_e^i = 1$ and $y_e^* = 0$, and $R_e^i = 0$ otherwise. So $\bm{R^i}$ becomes the remnant of tree
$\bm{T^i}$ when edges indicated by $\bm{y^*}$ are removed. Since $\bm{y^*}$ only contains integer values, the constraint
$y_e + z_e \leq 1$ guarantees that $z_e = 0$ whenever $y_e = 1$, and guarantees $y_e = 0$ whenever $z_e > 0$. This means that for all edges where
$y_e = 0$, we have $\bm{z^*} \geq \sum_i \lambda_i \bm{T^i} = \sum_i \lambda_i \bm{R^i}$. Since $\bm{R^i}$ is the result of removing edges from $\bm{T^i}$ where $y_e = 1$, we are guaranteed that $R_e^i = 0$ for every $i$ where $y_e = 1$, and thus
$\bm{z^*} \geq \sum_i \lambda_i \bm{R^i}$ over all edges.
Hence, $\bm{x^*} = \bm{y^*} + 2\bm{z^*} \geq \bm{y^*} + 2\sum_i \lambda_i \bm{R^i} = \sum_i \lambda_i (\bm{y^*} +2\bm{R^i})$, where for each $i$, $\bm{y^*} +2\bm{R^i}$ is an Euler tour, because constraint (4.2) ensures the graph indicated by $\bm{y^*} +2\bm{R^i}$ will have even degree at every node, and constraints (4.3) and (4.4) ensure the graph indicated by $\bm{y^*} +\bm{R^i}$, and thus $\bm{y^*} +2\bm{R^i}$, is connected. \qed
Constraints (4.3) are exponential in number, but these can also be reduced to a compact set of constraints using the techniques from Martin \cite{RKM}. The constraints we used are below, and require a model using directed edges to regulate flow variables $\bm{\phi}$.
Assume $V = \{1, 2, ..., n\}$ and designate city $n$ as the home city.
In Martin's formulation, we use directed flow variables $\bm{\phi^k}$ that carry one unit of flow from any node with index higher than $k$ to node $k$, with the values $\overrightarrow{t}_{ij}$ serving as edge capacities.
From any feasible integral solution (directed spanning tree), it is not hard to derive such a set of unit flows by directing the tree from the home node $n$.
For this flow, we can now set flow going into any node $j$ with $j>k$ to zero in flow problem $\bm{\phi^k}$, and flow balancing constraints among the
nodes numbered $k+1$ or higher are also unnecessary. Finally, we set $t_e = \overrightarrow{t}_{ij} + \overrightarrow{t}_{ji}$ to create variables for the undirected spanning tree.
\begin{displaymath}
\begin{array}{rlll}
\phi^k_{i,j} &=& 0 & \forall j\in V, k\in V\setminus\{n\} \mbox{~with~} j > k,\{i,j\}\in E \\[4pt]
\phi^k_{k,i} &=& 0 & \forall k\in V\setminus\{n\},\{i,k\}\in E \\[4pt]
\sum\limits_{i\in \delta(k)} \phi^k_{i,k} &=& 1 & \forall k\in V\setminus\{n\} \\
\sum\limits_{i\in \delta(j)} \phi^k_{i,j} - \sum\limits_{i\in \delta(j)} \phi^k_{j,i} &=& 0 & \forall j\in V, k\in V\setminus\{n\} \mbox{~with~} j < k \\
0\leq\phi^k_{i,j} &\leq& \overrightarrow{t}_{ij} & \forall k\in V\setminus\{n\}, \{i,j\} \in E \\[4pt]
t_e & = & \overrightarrow{t}_{ij} + \overrightarrow{t}_{ji} & \forall e=\{i,j\} \in E \\[4pt]
\sum\limits_{e\in E} t_e&\leq&n-1 \\
t_e&\leq& y_e + z_e & \forall e\in E
\end{array}
\end{displaymath}
Constraints (4.2) are exponential in $\Delta$, the maximum degree of the graph, which is not a concern if the graph is sparse, leading to a compact formulation.
If the graph is not sparse, identifying when a constraint from the set (4.2) is violated, a process called separation, can be done quickly and efficiently, even if the solution is from a relaxation and thus contains fractional values for some $y_e$ variables.
\begin{theorem}
Given a solution to the relaxation of the GTSP formulation above without constraints (4.2), if a constraint from (4.2) is violated, it can be found in $O(|V|^2)$ time.
\end{theorem}
$Proof$. For each node $v\in V$, minimize the left-hand side of constraint (4.2) over all possible sets $F\subset\delta(v)$ ($|F|$ even or odd), by placing edges with $y_e > \frac{1}{2}$ in $F$ and leaving edges with $y_e < \frac{1}{2}$ for $\delta(v)\setminus F$. Edges with $y_e = \frac{1}{2}$ could go in either set.
\begin{itemize}
\item If this minimum is not less than 1, no constraint from (4.2) will be violated for this node.
\item If the minimum is less than 1, and $|F|$ is odd, this is a violated constraint from (4.2).
\item If the minimum is less than 1, and $|F|$ is even, find the edge $e$ where $|y_e - \frac{1}{2}|$ is smallest. Then flip the status of the membership of edge $e$ in $F$. This will create the minimum left-hand side over all sets $F$ with $|F|$ odd.
\end{itemize}
For each node, this requires summing or searching items indexed by $\delta(v)$ a constant number of times, and since $|\delta(v)| < |V|$ this requires $O(|V|^2)$ time.\qed
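A sketch of this separation routine is given below, assuming the fractional values $y_e$ are stored in a dictionary keyed by edge and that \texttt{incident} maps each vertex to its incident edges (both data structures are assumptions for illustration).
\begin{verbatim}
# Separation of constraints (4.2): for each vertex, minimize the left-hand
# side over subsets F of its incident edges, then repair parity if the
# minimizing F has even cardinality, exactly as in the proof above.
def separate_parity(y, incident, eps=1e-9):
    violated = []
    for v, edges in incident.items():
        if not edges:
            continue
        F = [e for e in edges if y[e] > 0.5]                 # minimizes the LHS
        lhs = sum(1 - y[e] for e in F) + sum(y[e] for e in edges if e not in F)
        if len(F) % 2 == 0:                                  # need |F| odd
            flip = min(edges, key=lambda e: abs(y[e] - 0.5)) # cheapest flip
            if flip in F:
                F.remove(flip)
                lhs += 2 * y[flip] - 1                       # (1 - y) -> y
            else:
                F.append(flip)
                lhs += 1 - 2 * y[flip]                       # y -> (1 - y)
        if lhs < 1 - eps:
            violated.append((v, tuple(F)))                   # violated constraint
    return violated
\end{verbatim}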
\subsection{Addressing the Naddef Challenge}
\label{sec4b}
We would have preferred to simply require the values in $\bm{x}$ to be integer and allow $\bm{y}$ and $\bm{z}$ to hold fractional values, which addresses the challenge that Denis Naddef proposed \cite{Ndf}. He wished to know if one could find a simple formulation for the GTSP that finds optimal solutions by only requiring integrality of the decision variables $\bm{x^*}$, and nothing else. But this cannot be done for polynomially-sized or polynomially-separable classes of inequalities (unless $P=NP$), which can be seen by the following theorem.
\begin{theorem}
Let $G$ be a 3-regular graph, and let $G'$ be the result of adding one vertex to the middle of each edge in $G$. Consider a
solution $\bm{x^*}$, where $x^*_e = 1$ for each edge $e \in G'$. Then $\bm{x^*}$ is in the GTSP polytope iff $G$ is Hamiltonian.
\end{theorem}
\label{hammy}
In this proof, define $x^*(S) = \sum_{e \in S} x^*_e$.
$Proof$. If $G$ is Hamiltonian, let $P$ be the set of edges in a Hamilton cycle of $G$, and let $P'$ be the set of corresponding edges in the graph $G'$. Note that in $G'$ every degree-two node is adjacent to two degree-three nodes, and that the cycle $P'$ reaches every degree-three node in $G'$. One GTSP tour in $G'$ can be created by adding an edge of weight two on exactly one of the two edges adjacent to each degree-two node in $G'$ but not used in $P'$. The other GTSP tour can be created by adding an edge of weight two on the edges not chosen by the first tour. The convex combination of each of these tours with weight $\frac{1}{2}$ will create a solution where $x^*_e = 1$ for each edge $e \in G'$.
Now suppose we have a solution $\bm{x^*}$, where $x^*_e = 1$ for each edge $e \in G'$ and $\bm{x^*}$ is in the GTSP polytope. Express $\bm{x^*} = \sum_k \lambda_k \bm{\chi^k}$ as a convex combination of GTSP tours. Consider any degree-two vertex $v$ in $G'$. Since $v$ has degree two, $\bm{x^*}(\delta(v)) = 2$. Also $\bm{\chi^k}(\delta(v)) \geq 2$ must be true for any GTSP tour $\bm{\chi^k}$, and so, by the convex combination, it must be that $\bm{\chi^k}(\delta(v)) = 2$ for each $\bm{\chi^k}$. If the neighbors of $v$ are nodes $i$ and $j$, then $\bm{\chi^k}(\delta(v)) = 2$ implies either $\chi^k_{i,v}=1$ and $\chi^k_{j,v}=1$, or $\chi^k_{i,v}=2$ and $\chi^k_{j,v}=0$, or $\chi^k_{i,v}=0$ and $\chi^k_{j,v}=2$. The edges of weight one in $\bm{\chi^k}$ form disjoint cycles, so pick one such cycle $C$, and let $S_1$ be the set of vertices in $C$. (If there are no edges of weight one in $\bm{\chi^k}$, let $S_1$ be a set containing any single degree three vertex.) Let $S_2$ be the set of degree two vertices $v$ such that $\chi^k_{i,v}=2$ for some $i \in S_1$. Notice that $\bm{\chi^k}(\delta(S_1 \cup S_2)) = 0$, since the degree of any node, $v\in S_2$ is exactly two in $\bm{\chi^k}$, and the edge connecting $v$ to $S_1$ has weight two.
$\bm{\chi^k}$ is a tour and thus must be connected, which is only possible if $S_1 \cup S_2$ is the entire vertex set of $G'$, and therefore the cycle $C$ must visit every degree three node in $G'$. The corresponding cycle in the graph $G$ would therefore be a Hamilton cycle. \qed
If one could determine in polynomial time whether the integer solution $\bm{x^*}$ is in the GTSP polytope, then this theorem would imply a polynomial time algorithm to determine if a 3-regular graph is Hamiltonian, which is an $NP$-complete problem.
The challenge that Naddef proposed never specifically defined what makes a formulation simple. Certainly having all constraint sets be polynomially-sized or polynomially-separable (in terms of $n$, the number of nodes) would qualify as simple, but there may be other normal sets of constraints that could satisfy the spirit of Naddef's challenge. One such example is Naddef's conjecture that simply using the three classes of inequalities from his 1985 paper (path, wheelbarrow, and bicycle inequalities) \cite{CFN} with integrality constraints only on the variables $\bm{x}$, would be sufficient to formulate the problem. Since it is not known if these three classes of inequalities can be separated in polynomial time, the theorem above does not directly address this conjecture.
However, if we are given an arbitrary constraint of the form $\bm{ax} \geq b$, it can be recognized in polynomial time whether or not this constraint belongs to a particular class of inequality (path, wheelbarrow, or bicycle) and whether or not a potential solution $\bm{x^*}$ violates this constraint.
If that potential solution $\bm{x^*}$ were not in the GTSP polytope, then there would be a polynomially sized verification of the graph $G$ not being Hamiltonian, which would imply $NP$ = co-$NP$.
This would apply to any integer programming formulation with a finite number of inequality classes that contain inequalities that are normal. In this case, we define normal to mean that the membership of any individual constraint in a class can be verified in polynomial time.
This implies Naddef's challenge cannot be completed successfully using normal inequalities, unless $NP$ = co-$NP$. However, our formulation follows its spirit, as the integer constrained variables in our formulation $\bm{y}$ have a one-to-one correspondence to the integer constrained variables $\bm{x}$ in the challenge.
\subsection{Interesting Notes Concerning Degree Two Nodes in GTSP}
\label{sec4c}
It is difficult to solve Naddef's challenge because the integer programming formulation for the GTSP has $x_e \in \{0, 1, 2\}$, whereas most formulations for other graph theory applications simply require $x_e \in \{0, 1\}$. In the variant of GTSP where doubled edges are disallowed, but nodes may still be visited multiple times, the formulation from above would be a solution to the Naddef challenge, since in this case, $\bm{x} = \bm{y}$ and $\bm{z} = \bm{0}$. If there are degree 2 nodes present in the graph, then disallowing doubled edges forces the tour across a particular path, since the tour cannot visit a degree 2 node and return along the same edge.
Assume we have an integer programming formulation for the GTSP. Then any integer point $\bm{x^*}$ dominates a convex combination of GTSP tours, or it must violate at least one inequality from this formulation. Therefore, using the graphs $G$ and $G'$ illustrated in Theorem 3 (where $G$ is a 3-regular graph, and $G'$ is the same graph with every edge subdivided into two with a new degree 2 node), the GTSP formulation when applied to $G'$ would certify whether the graph $G$ is Hamiltonian or non-Hamiltonian. If the constraint classes of the formulation are normal, as defined at the end of the previous section, then this certificate can be constructed in polynomial time.
In the case where $G$ is not Hamiltonian, an integer programming formulation for the GTSP must have a violated constraint for any solution $\bm{x^*}$ where
$x^*(E') = |E'|$, where $E'$ is the edge set of $G'$. Writing $n = |V|$ and $n' = |V'|$ for the number of nodes of $G$ and $G'$ respectively, we can determine the size of $|E'|$ by noting that every edge in $E'$ connects a degree 3 node to a degree 2 node. Therefore, the set of degree 2
nodes can be represented by $W = V' \setminus V$ and $|E'|$ is equal to the number of degree 2 nodes in $G'$ times two, or $2(n'-n)$. The number of degree 2 nodes in $G'$ is the same as the number of edges in $G$, which is $\frac{3}{2}n$, so we get
$n'-n = \frac{3}{2}n$ or $n = \frac{2}{5}n'$, and $2(n'-n) = 2(n'- \frac{2}{5}n') = \frac{6}{5}n'$. Since $x(\delta(v')) \geq 2$
for any node in $v' \in G'$,
adding these constraints over all degree 2 nodes gives the constraint:
\begin{displaymath}
\begin{array}{c}
\sum\limits_{v'\in W} x(\delta(v')) = x(E') \geq \frac{6}{5}n'
\end{array}
\end{displaymath}
Since the graph is not Hamiltonian, this constraint cannot be satisfied at equality, leading to $x(E') > \frac{6}{5}n'$. Since the expression $\frac{6}{5}n'$ is equal to two times an integer quantity $(n'-n)$, $\frac{6}{5}n'$ must be an even integer. Furthermore, for any solution to the formulation, $x(\delta(v'))$ must be positive and even for every degree 2 node $v' \in G'$, and thus $x(E')$ must also be even, so the constraint can be stated as:
\begin{displaymath}
\begin{array}{c}
\sum\limits_{v'\in W} x(\delta(v')) = x(E') \geq \frac{6}{5}n' + 2
\end{array}
\end{displaymath}
But these inequalities do not make up a normal class, as defined in the previous section. This is because the validity of this inequality relies on the certainty of $G$ being non-Hamiltonian. We believe these inequalities can be lifted to a complete graph with no coefficient greater than 3 (we are quite sure we could do this with maximum coefficient 4).
Another interesting graph involving degree 2 nodes comes from subdividing an edge twice, creating a path of three edges with two intermediate degree 2 nodes. Given any 3-regular, 3-edge connected graph, Haddadan et al. \cite{Ravi} showed that the point $x^*$ where $x_e = 1$ for every edge in such a graph will be in the GTSP polytope, even though every node in the graph has odd degree. Now imagine choosing any individual degree 3 node, call it $v$, and subdividing each of the incident edges twice, creating six degree 2 nodes, two along each path. Now the solution $x^*$ where $x_e = 1$ for every edge cannot be in the GTSP polytope, since we can easily find a violated 3-tooth comb inequality by choosing $v$ and its immediate neighbors for the handle, and the pairs of adjacent degree 2 nodes as the teeth (see figure \ref{comb}).
\begin{figure}[h]
\begin{center}
\pgfplotsset{every axis legend/.append style={draw=none, at={(1.03,0.5)},anchor=west}}
\begin{tikzpicture}
\begin{axis}[hide axis, axis equal image, tiny, legend style={font=\normalfont}]
\addplot [style={solid}, black] coordinates {
(7,-3.25) (2,-2) (0,0) (2,2) (7,3.5)
};
\addplot [style={solid}, black] coordinates {
(0,0) (6,0)
};
\addplot [style={solid}, black] coordinates {
(6,3) (7,2.5)
};
\addplot [style={solid}, black] coordinates {
(6,-3) (7,-2.5)
};
\addplot [style={solid}, black] coordinates {
(7,0.5) (6,0) (7,-0.5)
};
\addplot [only marks, black] coordinates {
(0,0) (2,2) (4,2.5) (6,3) (2,0) (4,0) (6,0) (2,-2) (4,-2.5) (6,-3)
};
\addplot [domain=-pi:pi,samples=200,black,line width=0.7pt]({1.2+2*sin(deg(x))}, {3*cos(deg(x))});
\addplot [domain=-pi:pi,samples=200,black,line width=0.7pt]({3+2*sin(deg(x))}, {2.25+0.9*cos(deg(x))});
\addplot [domain=-pi:pi,samples=200,black,line width=0.7pt]({3+2*sin(deg(x))}, {0.9*cos(deg(x))});
\addplot [domain=-pi:pi,samples=200,black,line width=0.7pt]({3+2*sin(deg(x))}, {-2.25+0.9*cos(deg(x))});
\end{axis}
\end{tikzpicture}
\caption{A violated 3-tooth comb inequality}
\label{comb}
\end{center}
\end{figure}
Furthermore, consider a solution $\bm{x^*}$ that is in the GTSP polytope for a graph, where $x_e = 1$ along each edge of a path of at least three edges connecting two higher-degree nodes with only degree 2 nodes along the interior of the path (see figure \ref{path}).
Then every GTSP tour that makes up the convex combination of tours for the solution $\bm{x^*}$ must also have $x_e = 1$ along each edge of the path. To prove this for a path $P$ of three edges, notice that $x(P) \geq 3$ for any GTSP tour, and since $x^*(P) = 3$, we know $x(P) = 3$ for every GTSP tour in the convex combination indicated by $\bm{x^*}$. If $x_e = 2$ for some edge in the path, then since the path has three edges, $x(P) \geq 4$, and thus could not be in the convex combination of tours indicated by $\bm{x^*}$. Alternatively, consider a solution $\bm{x^*}$ that is in the GTSP polytope for a graph, where $x_e = 1$ along each edge of a path of only two edges connecting two higher-degree nodes with a degree 2 node in the middle of the path. Note that an edge with $x_e = 2$ may be in one of the GTSP tours in the convex combination indicated by $\bm{x^*}$, since one tour of weight $\frac{1}{2}$ could visit one edge of the path twice, and another tour of weight $\frac{1}{2}$ could visit the other edge twice.
\begin{figure}[h]
\begin{center}
\pgfplotsset{every axis legend/.append style={draw=none, at={(1.03,0.5)},anchor=west}}
\begin{tikzpicture}
\begin{axis}[hide axis, axis equal image, tiny, legend style={font=\normalfont}]
\addplot [style={solid}, black] coordinates {
(0,1) (1,0) (0,-1)
};
\addplot [style={solid}, black] coordinates {
(8,1) (7,0) (8,-1)
};
\addplot [style={solid}, black] coordinates {
(0,0) (8,0)
};
\addplot [only marks, black] coordinates {
(1,0) (3,0) (5,0) (7,0)
};
\end{axis}
\end{tikzpicture}
\caption{A path of 3 edges connecting 2 higher-degree nodes with interior nodes of degree 2}
\label{path}
\end{center}
\end{figure}
\section{Relaxations and Steiner Nodes}
\subsection{Symmetric GTSP with Steiner Nodes}
\label{sec1c}
Cornu\'ejols et al. also proposed a variant of the GTSP where only a subset of nodes are visited \cite{CFN}. As in most road networks, one may travel through many intersections that are not also destinations when traveling from one place to another. Cornu\'ejols et al. referred to these intersection nodes as Steiner nodes. This creates a formulation on a graph $G = (V_d\cup V_s, E)$ with $V_d\cap V_s = \emptyset$ where $V_d$ represents the set of destination nodes and $V_s$ represents the set of Steiner nodes.
\begin{displaymath}
\begin{array}{llll}
\mbox{minimize}&\sum\limits_{e \in E} c_e x_e \\
\mbox{subject to}
&\sum\limits_{e\in \delta(v)} x_e \mbox{~is positive and even} & \forall v\in V_d \\
&\sum\limits_{e\in \delta(v)} x_e \mbox{~is even} & \forall v\in V_s \\
&\sum\limits_{e\in \delta(S)} x_e \geq 2 & \forall S\subset V \mbox{~where~} S\cap V_d\neq \emptyset \mbox{~and~} S\cap V_d\neq V_d \\
&x_e\geq 0 & \forall e \in E. \\
&x_e \mbox{~is integer} & \forall e \in E.
\end{array}
\end{displaymath}
Note that only sets that include destination nodes need to have corresponding cut constraints, and these can be limited to sets whose intersection with $V_d$ has size $\frac{|V_d|}{2}$ or smaller. Again, these can be replaced by the flow constraints in the style proposed by Martin \cite{RKM}. The constraints used in our computational results are similar to those in the multi-commodity flow formulation found by Letchford, Nasiri, and Theis \cite{LNT}. We were able to reduce the number of variables used by Letchford et al. by a factor of 2, by setting many variables to 0,
as we did with the flow variables in the formulation of section \ref{sec4a}. Assume $V_d = \{1, 2, ..., d\}$ and $V_s = \{d+1, d+2, ... n\}$ and designate city $d$ as the home city. We include $d-1$ flow problems, where each problem requires that 2 units of flow pass from nodes $S = \{k+1, k+2, ..., d\}$ to node $k$ using the values of $\bm{x}$ as edge capacities.
Since nodes in $S$ are all sources, we can set flow into these nodes to zero, as well as setting the flow coming out of node $k$ to zero.
\begin{displaymath}
\begin{array}{rlll}
f^k_{i,j} &=& 0 & \forall j\in V_d, k\in V_d\setminus\{d\} \mbox{~with~} j > k,\{i,j\}\in E \\[4pt]
f^k_{k,i} &=& 0 & \forall k\in V_d\setminus\{d\}, \{i,k\}\in E\\[4pt]
\sum\limits_{i\in \delta(k)} f^k_{i,k} &=& 2 & \forall k\in V_d\setminus\{d\} \\
\sum\limits_{i\in \delta(j)} f^k_{i,j} - \sum\limits_{i\in \delta(j)} f^k_{j,i} &=& 0 & \forall j\in V, k\in V_d\setminus\{d\} \mbox{~with either~} j < k \mbox{~or~} j\in V_s \\
f^k_{i,j} + f^k_{j,i}&\leq& x_e & \forall k\in V_d\setminus\{d\}, \{i,j\}=e\in E
\end{array}
\end{displaymath}
\subsection{Preventing the Half-$z$ Path without Spanning Trees}
\label{sec2b}
While the spanning tree constraints (\ref{yztree}) of section \ref{sec2d} can prevent half-$z$ paths when integrality of $\bm{y}$ is enforced, for the computational results in the next section, better integrality gaps were obtained by using the subtour elimination
constraints plus the following, which prevents half-$z$ paths (with three or more edges) without requiring the
integrality of $\bm{y}$.
\begin{equation}
\sum\limits_{e'\in \delta(i)} x_{e'} + \sum\limits_{e'\in \delta(j)} x_{e'} - 2z_e\geq 4~~\forall e\in E
\label{bobslatest}
\end{equation}
where $i$ and $j$ are endpoints of edge $e$.
If $z_e = 1$, this constraint is the subtour elimination constraint for the set $\left\{ {i,j} \right\}$.
If $z_e = 0$, this is the sum of the degree constraints (lower bound) for nodes $i$ and $j$.
But in the middle of a path of length three or longer with edges that have $y_e = 0$ and $z_e = \frac{1}{2}$, the left side of this constraint sums to only three.
It should be noted that this constraint can only be used when both endpoints of $e$ are destination nodes, since Steiner nodes do not have a lower bound of degree 2, but could be degree zero.
It should also be noted that if the GTSP instance is composed only of three paths of length three between two specific nodes
(see figure \ref{fig1} from section \ref{sec2a}) constraints (\ref{iseven}) from section \ref{sec2a} (those that enforce even degree) and (\ref{bobslatest}) (defined above) will be enough to close the entire integrality gap using an LP relaxation. If the paths are all four or more edges long, this constraint will not eliminate the integrality gap, but will help. (See figure \ref{3p11})
\begin{figure}[h]
\begin{center}
\pgfplotsset{every axis legend/.append style={draw=none, at={(1.1,0.5)},anchor=west}}
\begin{tikzpicture}
\begin{axis}[hide axis, axis equal image, legend style={font=\normalfont}]
\addplot [style={solid}, black] coordinates {
(0,10) (3,11) (9,11) (12,10) (9,9) (3,9) (0,10)
};
\addlegendentry{$y_e = 1$ and $z_e = 0$}
\addplot [black, style={dashed}] coordinates {
(0,10) (12,10)
};
\addlegendentry{$y_e = 0$ and $z_e = \frac{1}{2}$}
\addplot [style={solid}, black, line width=2pt] coordinates {
(6,6) (9,6)
};
\addlegendentry{$y_e = 0$ and $z_e = 1$}
\addplot [style={solid}, black] coordinates {
(0,6) (3,7) (9,7) (12,6) (9,5) (3,5) (0,6)
};
\addplot [black, style={dashed}] coordinates {
(0,6) (12,6)
};
\addplot [style={solid}, black] coordinates {
(0,2) (3,3) (9,3) (12,2) (9,1) (3,1) (0,2)
};
\addplot [style={solid}, black, line width=2pt] coordinates {
(0,2) (9,2)
};
\addplot [only marks, black] coordinates {
(0,10) (3,11) (6,11) (9,11) (12,10) (9,9) (6,9) (3,9) (3,10) (6,10) (9,10)
(0,6) (3,7) (6,7) (9,7) (12,6) (9,5) (6,5) (3,5) (3,6) (6,6) (9,6)
(0,2) (3,3) (6,3) (9,3) (12,2) (9,1) (6,1) (3,1) (3,2) (6,2) (9,2)
};
\addplot[scatter, only marks, nodes near coords, nodes near coords style={font=\small}, point meta=explicit symbolic, mark size=0pt]
table [meta=lab] {
x y lab
6 7.8 {Objective value 12 using only constraint (\ref{iseven})}
6 3.8 {Objective value 13 using both constraints (\ref{iseven}) and (\ref{bobslatest})}
6 -0.2 {Objective value 14 for an integer solution}
};
\end{axis}
\end{tikzpicture}
\caption{Solutions from a 3-path configuration of four edges each}
\label{3p11}
\end{center}
\end{figure}
As the paths get longer, the integrality gap slowly grows. The spanning tree constraints will be useful once the paths reach a length of at least seven. (See figure \ref{3p20})
\begin{figure}[h]
\begin{center}
\pgfplotsset{every axis legend/.append style={draw=none, at={(1.03,0.5)},anchor=west}}
\begin{tikzpicture}
\begin{axis}[hide axis, axis equal image, legend style={font=\normalfont}]
\addplot [style={solid}, black] coordinates {
(0,10) (2,11) (12,11) (14,10) (12,9) (2,9) (0,10)
};
\addlegendentry{$y_e = 1$ and $z_e = 0$}
\addplot [black, style={dashed}] coordinates {
(0,10) (14,10)
};
\addlegendentry{$y_e = 0$ and $z_e = \frac{1}{2}$}
\addplot [style={solid}, black, line width=2pt] coordinates {
(4,6) (6,6)
};
\addlegendentry{$y_e = 0$ and $z_e = 1$}
\addplot [style={solid}, black, line width=2pt] coordinates {
(10,6) (12,6)
};
\addplot [style={solid}, black] coordinates {
(0,6) (2,7) (12,7) (14,6) (12,5) (2,5) (0,6)
};
\addplot [black, style={dashed}] coordinates {
(0,6) (14,6)
};
\addplot [style={solid}, black] coordinates {
(0,2) (2,3) (12,3) (14,2) (12,1) (2,1) (0,2)
};
\addplot [style={solid}, black, line width=2pt] coordinates {
(0,2) (12,2)
};
\addplot [only marks, black] coordinates {
(0,10) (2,11) (4,11) (6,11) (8,11) (10,11) (12,11) (14,10) (12,9) (10,9) (8,9) (6,9) (4,9) (2,9)
(2,10) (4,10) (6,10) (8,10) (10,10) (12,10)
(0,6) (2,7) (4,7) (6,7) (8,7) (10,7) (12,7) (14,6) (12,5) (10,5) (8,5) (6,5) (4,5) (2,5) (2,6) (4,6) (6,6) (8,6) (10,6) (12,6)
(0,2) (2,3) (4,3) (6,3) (8,3) (10,3) (12,3) (14,2) (12,1) (10,1) (8,1) (6,1) (4,1) (2,1) (2,2) (4,2) (6,2) (8,2) (10,2) (12,2)
};
\addplot[scatter, only marks, nodes near coords, nodes near coords style={font=\small}, point meta=explicit symbolic, mark size=0pt]
table [meta=lab] {
x y lab
7 7.8 {Objective value 21 using only constraint (\ref{iseven})}
7 3.8 {Objective value 23 using both constraints (\ref{iseven}) and (\ref{bobslatest})}
7 -0.2 {Objective value 26 for an integer solution}
};
\end{axis}
\end{tikzpicture}
\caption{Solutions from a 3-path configuration of seven edges each}
\label{3p20}
\end{center}
\end{figure}
Spanning tree constraints help close the integrality gap on these long-path graphs because any spanning tree must contain $n-1$ edges, which will enforce $\sum_{e\in E} y_e + z_e \geq n-1$, where $n$ is the number of nodes in the graph. For a relaxation on a graph that tries to save costs by employing frequent fractional $z$ variables, this constraint limits the amount that can be saved. An optimal solution for a relaxation including constraint (\ref{yztree}) for the 3-path configuration of seven-edge paths is shown in figure~\ref{3p20tree}.
\begin{figure}[h]
\begin{center}
\pgfplotsset{every axis legend/.append style={draw=none, at={(1.03,0.5)},anchor=west}}
\begin{tikzpicture}
\begin{axis}[hide axis, axis equal image, legend style={font=\normalfont}]
\addplot [style={solid}, black] coordinates {
(4,11) (6,11)
};
\addlegendentry{$y_e = \frac{2}{3}$ and $z_e = \frac{1}{3}$}
\addplot [black, style={dashed}] coordinates {
(4,11) (2,11) (0,10) (2,9) (4,9)
};
\addlegendentry{$y_e = \frac{2}{3}$ and $z_e = \frac{1}{6}$}
\addplot [style={solid}, black] coordinates {
(10,11) (12,11)
};
\addplot [style={solid}, black] coordinates {
(4,9) (6,9)
};
\addplot [style={solid}, black] coordinates {
(10,9) (12,9)
};
\addplot [style={solid}, black] coordinates {
(4,10) (14,10)
};
\addplot [black, style={dashed}] coordinates {
(0,10) (4,10)
};
\addplot [black, style={dashed}] coordinates {
(6,11) (10,11)
};
\addplot [black, style={dashed}] coordinates {
(6,9) (10,9)
};
\addplot [black, style={dashed}] coordinates {
(12,11) (14,10) (12,9)
};
\addplot [only marks, black] coordinates {
(0,10) (2,11) (4,11) (6,11) (8,11) (10,11) (12,11) (14,10) (12,9) (10,9) (8,9) (6,9) (4,9) (2,9)
(2,10) (4,10) (6,10) (8,10) (10,10) (12,10)
};
\end{axis}
\end{tikzpicture}
\caption{Objective value 24 using constraints (\ref{iseven}), (\ref{yztree}), and (\ref{bobslatest})}
\label{3p20tree}
\end{center}
\end{figure}
Spanning tree constraints (\ref{yztree}) did not contribute to smaller integrality gaps in our computational results of section \ref{seccomp} when added to the LP relaxation consisting of the subtour elimination constraints (or their compact equivalent), and the constraints in (\ref{bobslatest}) designed specifically to prevent the short half-$z$ path.
\subsection{Removing Steiner Nodes}
\label{sec2c}
Removing Steiner nodes increases the effectiveness of the constraints in (\ref{bobslatest}).
A graph with Steiner nodes $G = (V_d\cup V_s, E)$ can be transformed into a graph without Steiner nodes $G' = (V_d, E')$ by doing the following:
For each pair of nodes $i,j\in V_d$, if the shortest path from $i$ to $j$ in $G$ contains no other nodes in $V_d$, then add an edge connecting $i$ to $j$ to $E'$ with a cost equal to the cost of this shortest path; otherwise, do not add an edge from $i$ to $j$ to $E'$.
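A sketch of this reduction using the networkx package is shown below; this is an illustrative assumption, and the computations reported later may have been carried out differently.
\begin{verbatim}
# Build G' on the destination nodes only: connect two destinations when the
# shortest path between them in G passes through no other destination, and
# give the new edge the cost of that path.
from itertools import combinations
import networkx as nx

def remove_steiner_nodes(G, destinations):
    dest = set(destinations)
    Gp = nx.Graph()
    Gp.add_nodes_from(dest)
    for i, j in combinations(dest, 2):
        try:
            path = nx.shortest_path(G, i, j, weight="weight")
        except nx.NetworkXNoPath:
            continue
        if set(path[1:-1]) & dest:     # another destination lies on the path
            continue
        cost = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
        Gp.add_edge(i, j, weight=cost)
    return Gp
\end{verbatim}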
In all but one of our test problems (see section \ref{seccomp}), removing Steiner nodes resulted in fewer, not more, edges than in the original instance. That removing Steiner nodes often reduces the total number of edges in a graph was also observed by Corber\'an, Letchford, and Sanchis \cite{CLS}.
\section{Computational Results}
\label{seccomp}
\begin{figure}[h]
\includegraphics[width=0.9\textwidth]{map3.png}
\caption{Map of basic United States highway system}
\label{fig2}
\end{figure}
\begin{table}[b]
\caption{GTSP instances}
\label{tabInst}
\begin{tabular}{llcc}
\hline\noalign{\smallskip}
& & & Solution \\
name & description & destinations & (miles) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
dakota3path & 3-path configuration in northern plains & 11 & 2682 \\
NFLcities & Cities with National Football League teams & 29 & 11050 \\
NWcities & Cities in the Northwest region & 43 & 8119 \\
CAPcities & 48 state capitals plus Washington D.C. & 49 & 14878 \\
AtoJcities & Cities beginning with letters from A to J & 101 & 17931 \\
ESTcities & Cities east of the Mississippi River & 139 & 13251 \\
MSAcities & Centers of 145 metropolitan statistical areas & 145 & 22720 \\
deg3cities & Cities in original graph with degree~$\geq 3$ & 171 & 18737 \\
NScities & Cities that Neil Simonetti has visited & 174 & 22127 \\
CtoWcities & Cities beginning with letters from C to W & 182 & 24389 \\
ALLcities & Entire graph & 216 & 26410 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Our search for a reasonably sized data set based on the interstate highways of the United States led us to a text file uploaded by Sergiy Kolodyazhnyy on GitHub \cite{Serg}. After a few errors were corrected and additions made, we had a highway network with 216 nodes and 358 edges, with a maximum node degree of seven (Indianapolis). See figure~\ref{fig2}. Data for this graph, and the cities used to create the instances in this section, may be found at https://ns.faculty.brynathyn.edu/interstate/
Instances were created from this map by choosing a subset of cities as destination nodes, and adding any cities along a shortest path between destinations as Steiner nodes. Alternate versions of these instances were constructed by removing the Steiner nodes as indicated in section~\ref{sec2c}. Table~\ref{tabInst} gives the basic information for several instances we used. Table~\ref{tabCFN} shows the results from running the relaxation of the formulation from Cornu\'ejols et al. \cite{CFN}. It should be noted that the solutions found by this relaxation were the same whether Steiner nodes were removed or not.
\begin{table}[h]
\caption{Relaxations from Cornu\'ejols et al. formulation \cite{CFN}}
\label{tabCFN}
\begin{tabular}{lccc}
\hline\noalign{\smallskip}
& destinations & edges & integrality gap \\
name & (Steiner nodes) & (w/o Steiner) & in miles (\%) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
dakota3path & 11 (0) & 12 (12) & 139 (5.18\%) \\
NFLcities & 29 (152) & 304 (135) & 35 (0.32\%) \\
NWcities & 43 (4) & 63 (59) & 12 (0.15\%) \\
CAPcities & 49 (132) & 301 (199) & 34 (0.23\%) \\
AtoJcities & 101 (95) & 326 (289) & 261.5 (1.45\%) \\
ESTcities & 139 (2) & 243 (240) & 61.4 (0.46\%) \\
MSAcities & 145 (63) & 348 (317) & 143 (0.63\%) \\
deg3cities & 171 (16) & 321 (305) & 70 (0.37\%) \\
NScities & 174 (29) & 341 (324) & 93.5 (0.42\%) \\
CtoWcities & 182 (30) & 353 (358) & 151 (0.62\%) \\
ALLcities & 216 (0) & 358 (358) & 274.8 (1.04\%) \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{table} [h]
\caption{Relaxations from our additional constraints}
\label{tabUS}
\begin{tabular}{lccc}
\hline\noalign{\smallskip}
& integrality gap with & integrality gap w/o & best \% of gap closed \\
name & Steiner nodes (\%) & Steiner nodes (\%) & from formulation in \cite{CFN} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
dakota3path & - & 0 (0\%) & 100\% \\
NFLcities & 35 (0.32\%) & same as Steiner & 0\% \\
NWcities & 8 (0.10\%) & same as Steiner & 33.3\% \\
CAPcities & 34 (0.23\%) & same as Steiner & 0\% \\
AtoJcities & 228.5 (1.45\%) & same as Steiner & 12.6\% \\
ESTcities & 53.9 (0.46\%) & same as Steiner & 12.2\% \\
MSAcities & 114.5 (0.50\%) & 98.5 (0.43\%) & 31.1\% \\
deg3cities & 70 (0.37\%) & same as Steiner & 0\% \\
NScities & 48.5 (0.22\%) & same as Steiner & 48.1\% \\
CtoWcities & 103 (0.42\%) & 111.5 (0.46\%) & 31.8\% \\
ALLcities & - & 217.8 (0.82\%) & 20.7\% \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Running times on a 2.1GHz Xeon processor for all of the relaxations were under 10 seconds, while the running times to generate the integer solutions never exceeded five minutes. We wish to point out that the value of the new formulation is not a faster running time, but the reduced integrality gap.
In this paper, the integrality gap refers to the difference in objective values between the program where the integer constraints are enforced and the program where the integer constraints are relaxed. The percentage is the gap size expressed as a percentage of the integer solution value. This is different from the ratio definition of integrality gap used in some other contexts \cite{CV}.
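In symbols, if $z_{IP}$ denotes the optimal objective value with the integer constraints enforced and $z_{LP}$ the optimal value of the relaxation, then the quantities reported in the tables are
\begin{displaymath}
\mbox{gap} = z_{IP} - z_{LP}, \qquad \mbox{gap (\%)} = 100 \cdot \frac{z_{IP} - z_{LP}}{z_{IP}}.
\end{displaymath}
For example, for the dakota3path instance, $139/2682 \approx 5.18\%$.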
When our constraints were added, the spanning tree constraints (\ref{yztree}) were not useful when (\ref{iseven}) and (\ref{bobslatest}) were present. In most cases, removing Steiner nodes did not change the optimal values found by our relaxation. In one case, the relaxation was better when the Steiner nodes were removed, and in one case, the relaxation was worse when the Steiner nodes were removed. Table~\ref{tabUS} shows our results, where the last column indicates the percentage that our formulation closed of the gap left by the formulation of Cornu\'ejols et al. \cite{CFN}.
We noticed that in instances where the $z_e$ variables were rarely positive, our relaxation fared no better than that of Cornu\'ejols et al. But when the number of edges with $z_e > 0$ reached about 10\% of the total number of edges with $x_e > 0$, we were able to shave off anywhere from 10\% to almost 50\% of the gap left behind by Cornu\'ejols et al. (see figure~\ref{fig3}).
\begin{figure} [h]
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xlabel=Percent of C-F-N integrality gap closed,
ylabel=Percent of $x_e > 0$ edges also with $z_e > 0$]
\addplot[scatter, only marks, point meta=explicit symbolic, scatter/classes={
a={mark size=0.5pt},
b={mark size=0.8pt},
c={mark size=1pt},
d={mark size=1.5pt},
e={mark size=1.8pt},
f={mark size=2pt},
g={mark size=2.2pt}
}]
table [meta=lab] {
x y lab
0 0 b
0 3.65 c
33.33 20.83 c
100 18.18 a
12.6 8.41 d
12.21 12.73 e
48.1 10.99 f
0 3.15 f
20.7 13.01 g
31.12 12.58 e
26.16 9.95 f
};
\addplot[scatter, only marks, nodes near coords, nodes near coords style={font=\small}, point meta=explicit symbolic, mark size=0pt]
table [meta=lab] {
x y lab
13 -1.1 NFLcities
13 3.1 CAPcities
46.0 19.73 NWcities
84 17.08 dakota3path
25.6 7.2 AtoJcities
12.21 10.53 ESTcities
59.5 9.95 NScities
13 1.5 deg3cities
20.7 13.1 ALLcities
45.1 11.5 MSAcities
41.5 8.7 CtoWcities
70 0.0 {Size of dot is proportional}
70 -1.0 {to number of destinations}
70 4.0 {All instances have}
70 2.7 {Steiner nodes removed}
};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{Scatter plot of integrality gap closure and percent of variables with $z_e > 0$}
\label{fig3}
\end{figure}
\section{A Note Concerning T-joins}
\label{Tjoinsec}
In section \ref{sec1}, we noted that constraints requiring a sum of variables to be even are unusual and difficult to handle, which
was a reason Naddef proposed his challenge, described in section \ref{mip}. At first glance, these constraints may appear
to have the same type of structure as the T-join problem, but this is not the case, as seen in the book by Cook et al. \cite{Tjoin}.
Let $G = (V,E)$ be an undirected graph and let $T \subset V$ with $|T|$ even.
A T-join is a subgraph $H$ of $G$ where the set of all nodes of odd degree in $H$ is $T$.
The T-join polytope would therefore consist of all edge vectors $\bm{t}$ which indicate a T-join $H$.
The constraints for this polytope would require $t(\delta(v))$ to be odd for all vertices $v$ in $T$ and even for all vertices $v$ not in $T$:
\begin{displaymath}
\begin{array}{lrlll}
&\frac{t(\delta(v))-1}{2} &\in& \mathbb{Z} & \mbox{for all $v \in T$}\\
\\
&\frac{t(\delta(v))}{2} &\in& \mathbb{Z} & \mbox{for all $v \not\in T$} \\
\end{array}
\end{displaymath}
While this appears to have the same issue as our GTSP formulation of section \ref{sec1}, the T-join problem can be
described very differently. The common application of the T-join problem, used in solving the Chinese postman problem \cite{CPP},
assumes nonnegative costs within an IP formulation seeking a minimum-cost T-join, and therefore
we only need the dominant polytope, which can be described as:
\begin{displaymath}
\begin{array}{lrlll}
&t(\delta(S)) &\geq& 1 & \mbox{for any set $S$ where $|S \cap T|$ is odd} \\
\end{array}
\end{displaymath}
\section{Introduction and Motivation}\label{intro}
There is substantial public and private interest in the projection of future hitting performance in
baseball. Major league baseball teams award large monetary contracts to top free agent hitters under the assumption that they can reasonably expect that past success will continue into the future. Of course, there is an expectation that future performance will vary, but for the most part it appears that teams are often quite foolishly seduced by a fine performance over a single season. There are many questions: How should past consistency be balanced with advancing age when projecting future hitting performance? In young players, how many seasons of above-average performance need to be observed before we consider a player to be a truly exceptional hitter? What is the effect of a single sub-par year in an otherwise consistent career? We will attempt to answer these questions through the use of fully parametric statistical models for hitting performance.
Modeling and prediction of hitting performance is an area of very active research within the baseball community. Popular current methods include PECOTA \cite[]{Sil03} and MARCEL \cite[]{Tan04b}. PECOTA is an extremely sophisticated method for prediction of future performance that is based on matching a player's past career performance to the careers of a set of comparable major league ballplayers. For each player, their set of comparable players is found by a nearest neighbor analysis of past players (both minor and major league) with similar performance at the same age. Once a comparison set is found, the future performance prediction for the player is based on the historical performance of those past comparable players. Factors such as park effects, league effects and physical attributes of the player are also taken into account. MARCEL is a very simple system for prediction that uses the most recent three seasons of a players major league data, with the most recent seasons weighted more heavily. There is also a component of the MARCEL methodology that regresses predictions to the overall population mean. These methods serve as effective tools for prediction, but our focus is on a fully model-based approach that integrates multiple sources of information from publicly available data, such as the Lahman database \cite[]{Lah06}. One advantage of a model-based approach is the ability to move beyond point predictions to the incorporation of variability via predictive intervals.
In Section~\ref{allmodel}, we present a Bayesian hierarchical model for the evolution of hitting performance throughout the careers of individual players. Bayesian or Empirical Bayes approaches have recently been used to model individual hitting events based on various within-game covariates \cite[]{QuiMulRos08} and for prediction of within-season performance \cite[]{Bro08}. We are addressing a different question: how can we predict the course of a particular hitter's career based on the seasons of information we have observed thus far? Our model includes several covariates that are crucial for the accurate prediction of hitting for a particular player in a given year. A player's age and home ballpark certainly have an influence on their hitting; we will include this information among the covariates in our model. We will also include player position in our model, since we believe that position is an important proxy for hitting performance ({\it e.g.}, second basemen have a generally lower propensity for home runs than first basemen). Finally, our model will factor the past performance of each player into future predictions. In Section~\ref{results}, we test our predictions against a hold-out data set, and compare our performance with several competing methods. As mentioned above, current external methods focus largely on point prediction with little attention given to the variance of these predictions. We address this issue by
examining our results not only in terms of the accuracy of our point predictions, but also the quality of the prediction intervals produced by our model. We also investigate several other interesting aspects of our model in Section~\ref{results} and then conclude with a brief discussion in Section~\ref{discussion}.
\bigskip
\section{Model and Implementation}\label{allmodel}
Our data comes from the publicly-available Lahman Baseball Database \cite[]{Lah06}, which contains hitting totals for each major league baseball player from 1871 to the present day, though we will fit our model using only seasons from 1990 to 2005. In total, we have 10280 player-years of information from major league baseball between 1990 and 2005 that will be used for model estimation. Within each season $j$, we will use the following data for each player $i$:
\begin{enumerate}
\item Home Run Total : $Y_{ij}$
\item At Bat Total : $M_{ij}$
\item Age : $A_{ij}$
\item Home Ballpark : $B_{ij}$
\item Position : $R_{ij}$
\end{enumerate}
As an example, Barry Bonds in 2001 had $Y_{ij} = 73$ home-runs out of $M_{ij} = 664$ at bats. We excluded pitchers from our model, leaving us with nine positions: first basemen (1B), second basemen (2B), third basemen (3B), shortstop (SS), left fielder (LF), center fielder (CF), right fielder (RF), catcher (C), and the designated hitter (DH). There were 46 different home ballparks used in major league baseball between 1990 and 2005. Player ages ranged between 20 and 49, though the vast majority of player ages were between 23 and 44.
\bigskip
\subsection{Hierarchical Model for Hitting}\label{singleeventmodeling}
Our outcome of interest for a given player $i$ in a given year (season) $j$ is their home-run total $Y_{ij}$, which we model as a Binomial variable:
\begin{eqnarray}
Y_{ij} \sim {\rm Binomial} (M_{ij}, \theta_{ij}) \label{yequation}
\end{eqnarray}
where $\theta_{ij}$ is a player- and year-specific home run rate, and $M_{ij}$ is the number of opportunities (at bats) for player $i$ in year $j$. Note that by using at-bats as our number of opportunities, we are excluding outcomes such as walks, sacrifice flies and hit-by-pitches. We will assume that the number of opportunities $M_{ij}$ is fixed and known, so we focus our efforts on modeling each home run rate $\theta_{ij}$. The i.i.d. assumption underlying the binomial model has already been justified for hitting totals within a single season \cite[]{Bro08}, and so seems reasonable for hitting totals across an entire season.
We next model each unobserved player-year rate $\theta_{ij}$ as a function of home ballpark $b=B_{ij}$, position $k=R_{ij}$ and age $A_{ij}$ of player $i$ in year $j$.
\begin{eqnarray}
\log \left( \frac{\theta_{ij}}{1-\theta_{ij}} \right) = \alpha_{k} + \beta_{b} + f_k(A_{ij}) \label{thetaequation}
\end{eqnarray}
The parameter vector $\pmb{\alpha} = (\alpha_1,\ldots,\alpha_9)$ contains the position-specific intercepts for each of the nine player positions. The function $f_k(A_{ij})$ is a smooth trajectory over $A_{ij}$ that is different for each position $k$. We allow a flexible model for $f_k(\cdot)$ by using a cubic B-spline \cite[]{Boo78} with different spline coefficients $\pmb{\gamma}$ estimated for each position. The age trajectory component of this model involves the estimation of 36 parameters: four B-spline coefficients per position $\times$ nine different positions.
We call the parameter vector $\pmb{\beta}$ the ``team effects'' since these parameters are shared by all players with the same team and home ballpark. However, these coefficients $\pmb{\beta}$ cannot be interpreted as a true ``ballpark effect'' since they are confounded with the effect of the team playing in that ballpark. If a particular team contains many home run hitters, then that can influence the effect of their home ballpark. Separating the effect of team versus the effect of ballpark would require examining hitting data at the game level instead of the seasonal level we are using for our current model.
There are two additional aspects of hitting performance that are not captured by the model outlined in (\ref{yequation})-(\ref{thetaequation}). Firstly, conditional on the covariates age, position, and ballpark, our model treats the home run rate $\theta_{ij}$ as independent and identically-distributed across players $i$ and years $j$. However, we suspect that not all hitters are created equal: we posit that there exists a sub-group of elite home run hitters within each position that share a higher mean home run rate. We can represent this belief by placing a mixture model on the intercept term $\alpha_k$ dictated by a latent variable $E_{ij}$ in each player-year. In other words,
\[ \alpha_k = \left\{ \begin{array}
{r@{\qquad {\rm if} \quad}l@{ \quad}l}
\alpha_{k0} & {\rm E}_{ij} = 0 &\\
\alpha_{k1} & {\rm E}_{ij} = 1 & \label{mixture} \\
\end{array} \right. \]
where we force $\alpha_{k0} < \alpha_{k1}$ for each position $k$. We call the latent variable ${\rm E}_{ij}$ the elite status for player $i$ in year $j$. Players with elite status are modeled as having the same shape to their age trajectory, but with an extra additive term (on the log-odds scale) that increases their home run rate. However, we have a different elite indicator ${\rm E}_{ij}$ for each player-year, which means that a particular player $i$ can move in and out of elite status during the course of his career. Thus, the elite sub-group is maintained in the player population throughout time even though this sub-group will not contain the exact same players from year to year.
The second aspect of hitting performance that needs to be addressed is that the past performance of a particular player should contain information about his future performance. One option would be to use player-specific intercepts in the model to allow each player to have a different trajectory. However, this model choice would involve a large number of parameters, even if these player-specific intercepts were assumed to share a common prior distribution. In addition, many of these intercepts would be subject to over-fitting due to small number of observed years of data for many players. We instead favor an approach that involves fewer parameters (to prevent over-fitting) while still allowing different histories for individual players. We accomplish this goal by building the past performance of each player into our model through a hidden Markov model on the elite status indicators ${\rm E}_{ij}$ for each player $i$. Specifically, our probability model of the elite status indicator for player $i$ in year $j+1$ is allowed to depend on the elite status indicator for player $i$ in year $j$:
\begin{eqnarray}
p ({\rm E}_{i,j+1} = b | {\rm E}_{ij} = a, R_{ij} = k) = \nu_{abk} \qquad a,b \in \{0,1\} \label{Eequation}
\end{eqnarray}
where ${\rm E}_{ij}$ is the elite status indicator and $R_{ij}$ is the position of player $i$ in year $j$. This relationship is also graphically represented in Figure~\ref{markovfigure}. The Markovian assumption induces a dependence structure on the home run rates $\theta_{i,j}$ over time for each player $i$. Players that show elite performance up until year $j$ are more likely to be predicted as elite at year $j+1$. The transition parameters $\pmb{\nu}_k = (\nu_{00k},\nu_{01k},\nu_{10k},\nu_{11k})$ for each position $k = 1,\ldots,9$ are shared across players at their position, but can differ between positions, which allows for a different proportion of elite players in each position. We initialize each player's Markov chain by setting ${\rm E}_{i0} = 0$ for all $i$, meaning that each player starts their career in non-elite status. This initialization has the desired consequence that young players must show consistently elite performance in multiple years in order to have a high probability of moving to the elite group.
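To make the transition structure concrete, a small simulation sketch of one player's chain is shown below; the transition probabilities used here are arbitrary illustrative numbers, not fitted values.
\begin{verbatim}
# Sketch: simulate the elite-status chain E_{i,1},...,E_{i,n} for one player.
# Transition probabilities below are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
nu = {0: [0.85, 0.15],        # P(next state | current state = 0)
      1: [0.30, 0.70]}        # P(next state | current state = 1)

n_years = 10
E = [0]                       # E_{i,0} = 0: every career starts non-elite
for t in range(n_years):
    E.append(rng.choice(2, p=nu[E[-1]]))
print(E[1:])
\end{verbatim}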
\begin{figure}
\caption{Hidden Markov Model for Elite Status}\label{markovfigure}
\vspace{-0.5cm}
\begin{center}
\includegraphics[height=2in,width=3.5in]{fig.hmm.eps}
\end{center}
\vspace{-0.5cm}
\end{figure}
In order to take a fully Bayesian approach to this problem, we must specify prior distributions for all of our unknown parameters. The forty-eight different ballpark coefficients $\pmb{\beta}$ in our model all share a common Normal distribution,
\begin{eqnarray}
\beta_{l} & \sim & {\rm Normal} (0, \tau^2) \qquad \forall \quad l = 1, \ldots, 48 \label{priorbeta}
\end{eqnarray}
The spline coefficients $\pmb{\gamma}$ needed for the modeling of our age trajectories also share a common Normal distribution,
\begin{eqnarray}
\gamma_{kl} & \sim & {\rm Normal} (0, \tau^2) \qquad \forall \quad k = 1, \ldots, 9, \, l = 1,\ldots,L \label{priorgamma}
\end{eqnarray}
where $L$ is the number of spline coefficients needed in the modeling of age trajectories for $f(A_{ij},R_{ij})$ for each position. In our latent mixture model, we also have two intercept coefficients for each position, $\pmb{\alpha}_k = (\alpha_{k0},\alpha_{k1})$, which share a truncated Normal distribution,
\begin{eqnarray}
\pmb{\alpha}_{k} & \sim & {\rm MVNormal} (\pmb{0}, \tau^2 \bI_2) \cdot {\rm Ind} (\alpha_{k0} < \alpha_{k1}) \qquad \forall \quad k = 1, \ldots, 9 \label{prioralpha}
\end{eqnarray}
where $\pmb{0}$ is the $2 \times 1$ vector of zeros and $\bI_2$ is the $2 \times 2$ identity matrix. This bivariate distribution is truncated by the indicator function ${\rm Ind} (\cdot)$ to ensure that $\alpha_{k0} < \alpha_{k1}$ for each position $k$. We make each of the prior distributions (\ref{priorbeta})-(\ref{prioralpha}) non-informative by setting the variance hyperparameter $\tau^2$ to a very large value (10000 in this study). Finally, for the position-specific transition parameters of our elite status $\pmb{\nu}$, we use flat Dirichlet prior distributions,
\begin{eqnarray}
( \nu_{00k}, \nu_{01k} ) & \sim & {\rm Dirichlet} (\omega,\omega) \qquad \forall \quad k = 1, \ldots, 9 \nonumber \\
( \nu_{10k}, \nu_{11k} ) & \sim & {\rm Dirichlet} (\omega,\omega) \qquad \forall \quad k = 1, \ldots, 9
\end{eqnarray}
These prior distributions are made non-informative by setting $\omega$ to a small value ($\omega = 1$ in this study). We also examined other values for $\omega$ and found that using different values had no influence on our posterior inference, which is to be expected considering the dominance of the data in equation (\ref{gibbstransition}). Combining these prior distributions together with equations (\ref{yequation})-(\ref{Eequation}) gives us the full posterior distribution of our unknown parameters,
\begin{eqnarray}
p(\pmb{\alpha},\pmb{\beta},\pmb{\gamma},\pmb{\nu}, \bE | \bX) & \propto & \prod\limits_{i,j} p(Y_{ij} | M_{ij}, \theta_{ij} )
\cdot p(\theta_{ij} | R_{ij}, A_{ij}, B_{ij}, E_{ij}, \pmb{\alpha}, \pmb{\beta}, \pmb{\gamma}) \nonumber \\
& &\hspace{0.5cm} \cdot p(E_{ij} | E_{i,j-1}, \pmb{\nu} ) \cdot p(\pmb{\alpha},\pmb{\beta},\pmb{\gamma},\pmb{\nu}). \label{fullposterior}
\end{eqnarray}
where we use $\bX$ to denote our entire set of observed data $\bY$ and covariates $(\bA,\bB,\bM,\bR)$.
\bigskip
\subsection{MCMC Implementation}\label{gibbs}
We estimate our posterior distribution (\ref{fullposterior}) by a Gibbs sampling strategy \cite[]{GemGem84}. We iteratively sample from the following conditional distributions of each set of parameters given the current values of the other parameters:
\begin{enumerate}
\item $p(\pmb{\alpha} |\pmb{\beta},\pmb{\gamma},\pmb{\nu}, \bE, \bX) = p(\pmb{\alpha} | \pmb{\beta}, \pmb{\gamma}, \bE, \bX) $
\item $p(\pmb{\beta} |\pmb{\alpha},\pmb{\gamma},\pmb{\nu}, \bE, \bX) = p(\pmb{\beta} |\pmb{\alpha},\pmb{\gamma}, \bE, \bX) $
\item $p(\pmb{\gamma} |\pmb{\beta},\pmb{\alpha},\pmb{\nu}, \bE, \bX) = p(\pmb{\gamma} |\pmb{\beta},\pmb{\alpha}, \bE, \bX)$
\item $p(\pmb{\nu} |\pmb{\beta},\pmb{\gamma},\pmb{\alpha}, \bE, \bX) = p(\pmb{\nu} | \bE) $
\item $p(\bE | \pmb{\beta},\pmb{\gamma},\pmb{\nu}, \bE, \bX) $
\end{enumerate}
where again $\bX$ denotes our entire set of observed data $\bY$ and covariates $(\bA,\bB,\bM,\bR)$. Combined together, steps 1-3 of the Gibbs sampler represent the usual estimation of regression coefficients ($\pmb{\alpha},\pmb{\beta},\pmb{\gamma}$) in a Bayesian logistic regression model. The conditional posterior distributions for these coefficients are complicated and we employ the common strategy of using the Metropolis-Hastings algorithm to sample each coefficient (see, e.g. \cite{GelCarSte03}). The proposal distribution for a particular coefficient is a Normal distribution centered at the maximum likelihood estimate of that coefficient. The variance of this Normal proposal distribution is a tuning parameter that was adaptively adjusted to provide a reasonable rejection/acceptance ratio \cite[]{GelRobGil95}. Step 4 of the Gibbs sampler involves standard distributions for our transition parameters $\pmb{\nu}_k = (\nu_{00k},\nu_{01k},\nu_{10k},\nu_{11k})$ for each position $k = 1,\ldots,9$. The conditional posterior distributions for our transition parameters implied by (\ref{fullposterior}) are
\begin{eqnarray}
(\nu_{00k},\nu_{01k}) | \bE & \sim & {\rm Dirichlet} \left( {\rm N}_{00k} + \omega, {\rm N}_{01k} + \omega \right) \nonumber \\
(\nu_{11k},\nu_{10k}) | \bE & \sim & {\rm Dirichlet} \left( {\rm N}_{11k} + \omega, {\rm N}_{10k} + \omega \right) \label{gibbstransition}
\end{eqnarray}
where ${\rm N}_{abk} = \sum\limits_{i} \sum\limits_{t=1}^{n_i-1} {\rm I} ({\rm E}_{i,t} = a, {\rm E}_{i,t+1} = b)$ over all players $i$ in position $k$ and where $n_i$ represents the number of years of observed data for player $i$'s career. Finally, step 5 of our Gibbs sampler involves sampling the elite status $E_{ij}$ for each year $j$ of player $i$, which can be done using the ``Forward-summing Backward-sampling'' algorithm for hidden Markov models \cite[]{Chi96}. For a particular player $i$, this algorithm ``forward-sums'' by recursively calculating
\begin{eqnarray}
p({\rm E}_{it} | \bX_{i,t}, \pmb{\Theta}) & \propto & p(X_{i,t} | {\rm E}_{it}, \pmb{\Theta}) \times p ({\rm E}_{it} | \bX_{i,t-1}, \pmb{\Theta}) \nonumber \\
& \propto & p(X_{i,t} | {\rm E}_{it}, \pmb{\Theta}) \times \sum\limits_{e = 0}^1 p({\rm E}_{it} | {\rm E}_{i,t-1} = e, \pmb{\Theta}) \cdot p({\rm E}_{i,t-1} = e | \bX_{i,t-1}, \pmb{\Theta}) \hspace{1cm}
\end{eqnarray}
for all $t=1,\ldots,n_i$ where $\bX_{i,k}$ represents the observed data for player $i$ up until year $k$, $X_{i,k}$ represents only the observed data for player $i$ in year $k$, and $\pmb{\Theta}$ represents all other parameters. The algorithm then ``backward-samples'' by sampling the terminal elite state $E_{i,n_i}$ from the distribution $p({\rm E}_{i,n_i} | \bX_{i,n_i}, \pmb{\Theta})$ and then sampling ${\rm E}_{i,t-1} | {\rm E}_{i,t}$ for $t=n_i$ back to $t=1$. Repeating this algorithm for each player $i$ gives us a complete sample of our elite statuses $\bE$. We ran multiple chains from different starting values to evaluate convergence of our Gibbs sampler. Our results are based on several chains where the first 1000 iterations were discarded as burn-in. Our chains were also thinned, taking only every eighth iteration, in order to eliminate autocorrelation.
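A schematic version of this forward-summing / backward-sampling step for a single player is sketched below. It assumes the per-year emission terms $p(X_{i,t} | {\rm E}_{it} = e, \pmb{\Theta})$ have already been evaluated and stored in an array \texttt{lik} with one row per year and one column per state; the names are ours and the sketch is not the code used for the fits reported here.
\begin{verbatim}
# Sketch: forward-summing / backward-sampling of one player's elite states.
# lik[t, e]   likelihood of year t's data given state e (assumed precomputed)
# trans[a, b] P(E_{t+1} = b | E_t = a); init = trans[0] since E_0 = 0
import numpy as np

def ffbs(lik, trans, init, rng):
    n = lik.shape[0]
    filt = np.zeros((n, 2))            # p(E_t = e | data through year t)
    pred = init
    for t in range(n):                 # forward summing
        f = lik[t] * pred
        filt[t] = f / f.sum()
        pred = filt[t] @ trans         # one-step-ahead prediction
    E = np.zeros(n, dtype=int)
    E[-1] = rng.choice(2, p=filt[-1])  # sample the terminal state
    for t in range(n - 2, -1, -1):     # backward sampling
        w = filt[t] * trans[:, E[t + 1]]
        E[t] = rng.choice(2, p=w / w.sum())
    return E
\end{verbatim}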
\subsection{Model Extension: Player-Specific Transition Parameters}\label{pshmmmodeling}
In Section~\ref{singleeventmodeling}, we introduced a hidden Markov model that allows the past performance of each player to influence predictions for future performance. If we infer player $i$ to have been elite in year $t$ (${\rm E}_{i,t} = 1$), then this inference influences the elite status of that player in his next year, ${\rm E}_{i,t+1}$ through the transition parameters $\pmb{\nu}_k$. However, one potential limitation of these transition parameters $\pmb{\nu}_k$ is that they are shared globally across all players at that position: each player at position $k$ has the same probability of transitioning from elite to non-elite and vice versa. This model assumption allows us to pool information across players for the estimation of our transition parameters in (\ref{gibbstransition}), but may lead to loss of information if players are truly heterogeneous with respect to the probability of transitioning between elite and non-elite states. In order to address this possibility, we consider extending our model to allow player-specific transition parameters in our hidden Markov model.
Our proposed extension, which we call the PSHMM, has player-specific transition parameters $\pmb{\nu}^i = (\nu^i_{00},\nu^i_{01},\nu^i_{10},\nu^i_{11})$ for each player $i$, that share a common prior distribution,
\begin{eqnarray}
(\nu^i_{00},\nu^i_{01}) & \sim & {\rm Dirichlet} \left(\omega_{00k} \, , \, \omega_{01k} \right) \nonumber \\
(\nu^i_{11},\nu^i_{10}) & \sim & {\rm Dirichlet} \left(\omega_{11k} \, , \, \omega_{10k} \right) \label{newprior1}
\end{eqnarray}
where $k$ is the position of player $i$. Global parameters $\pmb{\omega}_k = (\omega_{00k}, \omega_{01k}, \omega_{11k}, \omega_{10k})$ are now allowed to vary with flat prior distributions. This new hierarchical structure allows for transition probabilities $\pmb{\nu}^i$ to vary between players, but still imposes some shrinkage towards a common distribution controlled by global parameters $\pmb{\omega}_k$ that are shared across players with position $k$. Under this model extension, the new conditional posterior distribution for each $\pmb{\nu}^i$ is
\begin{eqnarray}
(\nu^i_{00},\nu^i_{01}) | \bE & \sim & {\rm Dirichlet} \left( {\rm N}^i_{00} + \omega_{00k} \, , \, {\rm N}^i_{01} + \omega_{01k} \right) \nonumber \\
(\nu^i_{11},\nu^i_{10}) | \bE & \sim & {\rm Dirichlet} \left( {\rm N}^i_{11} + \omega_{11k} \, , \, {\rm N}^i_{10} + \omega_{10k} \right) \label{newposterior}
\end{eqnarray}
where ${\rm N}^i_{ab} = \sum\limits_{t=1}^{n_i-1} {\rm I} (E_{i,t} = a, E_{i,t+1} = b)$.
To implement this extended model, we must replace step 4 in our Gibbs sampler with a step where we draw $\pmb{\nu}^i$ from (\ref{newposterior}) for each player $i$. We must also insert a new step in our Gibbs sampler where we sample the global parameters $\pmb{\omega}_k$ given our sampled values of all the $\pmb{\nu}^{i}$ values for players at position $k$. This added step requires sampling $(\omega_{00k},\omega_{01k})$ from the following conditional distribution:
\begin{eqnarray}
p(\omega_{00k},\omega_{01k} | \pmb{\nu}) \, \propto \, \left[ \frac{\Gamma(\omega_{00k} + \omega_{01k})}{\Gamma(\omega_{00k}) \Gamma(\omega_{01k})} \right]^{n_k} \times \left[ \prod\limits_{i = 1}^{n_k} \nu^i_{00} \right]^{\omega_{00k}-1} \times \left[ \prod\limits_{i = 1}^{n_k} \nu^i_{01} \right]^{\omega_{01k}-1} \label{abdist}
\end{eqnarray}
where each product is only over players $i$ at position $k$ and $n_k$ is the number of players at position $k$. We accomplish this sampling by using a Metropolis-Hastings step with true distribution (\ref{abdist}) and Normal proposal distributions: $\omega_{00k}^{prop} \sim {\rm N} (\hat{\omega}_{00k}, \sigma^2)$ and $\omega_{01k}^{prop} \sim {\rm N} (\hat{\omega}_{01k}, \sigma^2)$. The means of these proposal distributions are:
\begin{eqnarray}
\hat{\omega}_{00k} = \overline{\nu}_{00k} \left( \frac{\overline{\nu}_{00k} (1 - \overline{\nu}_{00k})}{s_{0k}^2} - 1 \right) \quad {\rm and} \quad \hat{\omega}_{01k} = (1-\overline{\nu}_{00k}) \left( \frac{\overline{\nu}_{00k} (1 - \overline{\nu}_{00k})}{s_{0k}^2} - 1 \right)
\end{eqnarray}
with
\begin{eqnarray}
\overline{\nu}_{00k} = \sum\limits_{i=1}^{n_k} \nu^i_{00} \, / \, n_k \qquad {\rm and} \qquad s_{0k}^2 = \sum\limits_{i=1}^{n_k} (\nu^i_{00} -\overline{\nu}_{00k})^2 \, / \, n_k \nonumber
\end{eqnarray}
where each sum is over all players $i$ at position $k$ and $n_k$ is the number of players at position $k$.
These estimates $\hat{\omega}_{00k}$ and $\hat{\omega}_{01k}$ were calculated by equating the sample mean $\overline{\nu}_{00k}$ and sample variance $s_{0k}^2$ with the mean and variance of the Dirichlet distribution (\ref{abdist}). Similarly, we sample $(\omega_{11k},\omega_{10k})$ with the same procedure but with obvious substitutions.
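For concreteness, the moment-matching step for one position can be written in a few lines (a sketch with hypothetical variable names):
\begin{verbatim}
# Sketch: moment-matched proposal means for (omega_00k, omega_01k) from the
# currently sampled nu^i_00 values of the players at position k.
import numpy as np

def proposal_means(nu00):
    # nu00: numpy array of the nu^i_00 values at one position
    m = nu00.mean()                       # sample mean
    s2 = nu00.var()                       # sample variance (divides by n_k)
    scale = m * (1.0 - m) / s2 - 1.0
    return m * scale, (1.0 - m) * scale   # (omega00_hat, omega01_hat)
\end{verbatim}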
\section{Results and Model Comparison}\label{results}
Our primary interest is the prediction of future hitting events, $Y^\star_{t+1}$ for years $t+1,\ldots$ based on our model and observed data up to year $t$. We estimate the full posterior distribution (\ref{fullposterior}) and then use
this posterior distribution to predict home run totals $Y^\star_{i,2006}$ for each player $i$ in the 2006 season. The 2006 season serves as an external validation of our method, since this season is not included in our model fit. We use our predicted home run totals $\bY^\star_{2006}$ for the 2006 season to compare our performance to several previous methods (Section~\ref{predictions-external}) as well as evaluate several internal model choices (Section~\ref{predictions-internal}). In Section~\ref{trajectoryresults}, we present inference for other parameters of interest from our model, such as the position-specific age curves.
\subsection{Prediction of 2006 Home Run Totals: Internal Comparisons} \label{predictions-internal}
We can use our posterior distribution (\ref{fullposterior}) based on data from MLB seasons up to 2005 to calculate the predictive distribution of the 2006 hitting rate $\theta_{i,2006}$ for each player $i$.
\begin{eqnarray}
p (\theta_{i,2006} | \bX) & = & \int p (\theta_{i,2006} | R_{i,2006}, A_{i,2006}, B_{i,2006}, {\rm E}_{i,2006}, \pmb{\alpha},\pmb{\beta},\pmb{\gamma}) \nonumber \\
& & \hspace{0.5cm} \cdot p({\rm E}_{i,2006} | \bE_{i}, \pmb{\nu}) p(\pmb{\alpha},\pmb{\beta},\pmb{\gamma},\pmb{\nu}, \bE_i | \bX) \,
d \pmb{\alpha} \, d \pmb{\beta} \, d \pmb{\gamma} \, d\pmb{\nu} \, d\bE \label{predtheta}
\end{eqnarray}
where $\bX$ represents all observed data up to 2005. This integral is estimated using the sampled values from our posterior distribution $p(\pmb{\alpha},\pmb{\beta},\pmb{\gamma},\pmb{\nu}, \bE_i | \bX)$ that were generated via our Gibbs sampling strategy.
We can use the posterior predictive distribution (\ref{predtheta}) of each 2006 home run rate $\theta_{i,2006}$ to calculate the distribution of the home run total $Y^\star_{i,2006}$ for each player in the 2006 season.
\begin{eqnarray}
p(Y^\star_{i,2006} | \bX) & = & \int p(Y^\star_{i,2006} | M_{i,2006} , \theta_{i,2006}) \cdot p (\theta_{i,2006} | \bX) \,
d \theta_{i,2006} \label{predY}
\end{eqnarray}
However, the issue with prediction of home run totals is that we must also consider the number of opportunities $M_{i,2006}$. Since our overall focus has been on modeling home run rates $\theta_{i,2006}$, we will use the true value of $M_{i,2006}$ for the 2006 season in equation (\ref{predY}). Using the true value of each $M_{i,2006}$ gives a fair comparison of the rate predictions $\theta_{i,2006}$ for each model choice, since it is a constant scaling factor. This is not a particularly realistic scenario in a prediction setting since the actual number of opportunities will not be known ahead of time.
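A sketch of how posterior draws are turned into simulated 2006 totals for a single player is given below; \texttt{theta\_draws} stands in for the Gibbs output for $\theta_{i,2006}$ and the placeholder values are illustrative only.
\begin{verbatim}
# Sketch: posterior predictive 2006 home run total for one player.
import numpy as np

rng = np.random.default_rng(0)
theta_draws = rng.beta(2, 50, size=4000)      # placeholder for real posterior draws
M_2006 = 500                                  # true 2006 at bats for this player

hr_draws = rng.binomial(M_2006, theta_draws)  # draws from p(Y*_{i,2006} | X)
point_pred = hr_draws.mean()                  # predictive mean
lo, hi = np.percentile(hr_draws, [10, 90])    # 80% predictive interval
\end{verbatim}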
Based on the predictive distribution $p(Y^\star_{i,2006} | \bX)$, we can report either a predictive mean ${\rm E}(Y^\star_{i,2006} | \bX)$ or a predictive interval $C^\star_i$ such that $p(Y^\star_{i,2006} \in C^\star_i | \bX) \geq 0.80$. We can examine the accuracy of our model predictions by comparing to the observed HR totals $Y_{i,2006}$ for the 559 players in the
2006 season, which we did not include in our model fit. We use the following three comparison metrics (a short computational sketch is given after the list):
\begin{enumerate}
\item {\bf RMSE}: root mean square error of predictive means: $\sqrt{\sum\limits_i ({\rm E}(Y^\star_{i,2006} | \bX) - Y_{i,2006})^2 / n}$
\item {\bf Interval Coverage}: fraction of 80\% predictive intervals $C^\star_i$ covering observed $Y_{i,2006}$
\item {\bf Interval Width}: average width of 80\% predictive intervals $C^\star_i$
\end{enumerate}
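Given the predictive means, interval endpoints, and observed totals for all players, these three measures reduce to a few lines of code (a sketch with hypothetical array names):
\begin{verbatim}
# Sketch: the three validation measures, computed over all 559 players.
import numpy as np

def validation_measures(pred_mean, lo, hi, observed):
    rmse = np.sqrt(np.mean((pred_mean - observed) ** 2))
    coverage = np.mean((observed >= lo) & (observed <= hi))  # 80% coverage
    width = np.mean(hi - lo)                                  # mean interval width
    return rmse, coverage, width
\end{verbatim}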
In Table~\ref{internalcomp}, we evaluate our full model outlined in Section~\ref{singleeventmodeling} relative to a couple of simpler modeling choices. Specifically, we examine a simpler version of our model without positional information or the mixture model on the $\alpha$ coefficients. We see from Table~\ref{internalcomp} that our full model gives proper coverage and a substantially lower RMSE than the version of our model without positional information or the elite/non-elite mixture model. We also examine a truly simplistic strawman, which is to take last year's home run totals as the prediction for this year's home run totals (i.e., $Y^\star_{i,2006} = Y_{i,2005}$). Since this strawman is only a point estimate, that comparison is made based solely on the RMSE. As expected, the relative performance of this strawman model is terrible, with a substantially higher RMSE compared to our full model. Of course, this simple strawman alternative is rather naive, and in Section~\ref{predictions-external} we compare our performance to more sophisticated external prediction approaches.
\begin{table}[ht]
\caption{Internal Comparison of Different Model Choices. Measures are calculated over 559 players from the 2006 season.}\label{internalcomp}
\begin{center}
\begin{tabular}{|l|ccc|}
\hline
& & Coverage & Average \\
Model & RMSE & of 80\%& Interval \\
& & Intervals & Width \\
\hline
Full Model & 5.30 & 0.855 & 9.81 \\
No Position or Elite Indicators & 6.87 & 0.644 & 6.56 \\
Strawman: $Y^\star_{i,2006} = Y_{i,2005}$ & 8.24 & NA & NA \\
Player-Specific Transitions & 5.45 & 0.871 & 10.36 \\
\hline
\end{tabular}
\end{center}
\end{table}
We also considered an extended model in Section~\ref{pshmmmodeling} with player-specific transition parameters for the hidden Markov model on elite status, and the validation results from this model are also given in Table~\ref{internalcomp}. Our motivation for this extension was that allowing player-specific transition parameters might reduce the interval width for players that have displayed consistent past performance. However, we see that the overall prediction accuracy was not improved with this model extension, suggesting that there is not enough additional information in the personal history of most players to noticeably improve the model predictions. Somewhat surprisingly, we also see that the width of our 80\% predictive intervals is not actually reduced in this extended model. The reason is that, even for players with long careers of data, the player-specific transition parameters $\pmb{\nu}^{i}$ fit by this extended model are not extreme enough to force all sampled elite indicators $E_{i,2006}$ to be either 0 or 1, and so the predictive interval is still wide enough to include both possibilities.
\subsection{Prediction of 2006 Home Run Totals: External Comparisons} \label{predictions-external}
Similarly to Section~\ref{predictions-internal}, we use hold-out home run data for the 2006 season to evaluate our model predictions compared to the predictions from two external methods, PECOTA \cite[]{Sil03} and MARCEL \cite[]{Tan04b}, both described in Section~\ref{intro}. For a reasonable comparison set, we focus our external validation on hitters with an empirical home run rate of at least 1 HR every 40 AB in at least one season up to 2005 (minimum of 300 AB in that season). This restriction reduces our dataset for model fitting down to 118 top home run hitters who all have predictions from the competing methods PECOTA and MARCEL. As noted above, our predicted home-run totals for 2006 are based on the true number of at bats for 2006. In order to have a fair comparison to external methods such as PECOTA or MARCEL, we also scale the predictions from these methods by the true number of at bats in 2006.
Our approach has the advantage of producing the full predictive distribution of future observations (summarized by our predictive intervals). However, the external method MARCEL does not produce comparable intervals, so we only compare to other approaches in terms of prediction accuracy. We expand our set of accuracy measures to include not only the root mean square error (RMSE), but also the median absolute error (MAE). In addition to comparing the predictions from each method using overall error rates, we also calculated ``\% BEST'', which is, for each method, the percentage of players for which the predicted home run total $Y^\star_{i,2006}$ is the closest to the true home run total among all methods. Each of these comparison statistics is given in Table~\ref{externalcomp}. In addition to giving these validation measures for all 118 players, we also separate our comparison for young players (age $\leq$ 26 years) versus older players (age $>$ 26 years).
\begin{table}[ht]
\caption{Comparison of our model to two external methods on the 2006 predictions of 118 top home-run hitters. We also provide this comparison for only young players (age $\leq$ 26 years) versus only older players (age $>$ 26 years). }\label{externalcomp}
\begin{center}
{\small
\begin{tabular}{|l|ccc|ccc|ccc|}
\hline
Method & \multicolumn{3}{|c|}{{\bf All Players}} & \multicolumn{3}{|c|}{{\bf Young Players}} & \multicolumn{3}{|c|}{{\bf Older Players}} \\
& RMSE & MAE & \% BEST & RMSE & MAE & \% BEST & RMSE & MAE & \% BEST\\
\hline
Our Model & 7.33 & 4.40 & 41 \% & 2.62 & 1.93 & 62\% & 7.56 & 4.48 & 39\% \\
PECOTA & 7.11 & 4.68 & 28 \% & 4.62 & 3.44 & 0\% & 7.26 & 4.79 & 30\% \\
MARCEL & 7.82 & 4.41 & 31 \% & 4.15 & 2.17 & 38\% & 8.02 & 4.57 & 31\%\\
\hline
\end{tabular}
}
\end{center}
\end{table}
We see from Table~\ref{externalcomp} that our model is extremely competitive with the external methods PECOTA and MARCEL. When examining all 118 players, our method has the smallest median absolute error and the highest ``\% Best'' measure, suggesting that our predictions are superior on these absolute scales. Our superior performance is even more striking when we examine only the young (age $\leq$ 26 years) players in our dataset. We have the best prediction on 62\% of all young players, and for these young players, both the RMSE and MAE from our method are substantially lower than either PECOTA or MARCEL. We credit this superior performance to our sophisticated hierarchical approach that builds in information via position as well as past performance.
However, our method is not completely dominant: we have a larger root mean square error than PECOTA for older players (and overall), which suggests that our model might be making large errors on a small number of players. Further investigation shows that our model commits its largest errors for players in the designated hitter (DH) position. This is somewhat expected, since our model seems to perform best for young players and DH is a position almost always occupied by an older player. Beyond this, the model appears to be over-shrinking predictions for players in the DH role, perhaps because this player position is rather unique and does not fit our model assumptions as well as the other positions. However, the validation results are generally very encouraging for our approach compared to previous methods, especially among younger players where a principled balance of positional information with past performance is most advantageous.
We further investigate our model dynamics among young players by examining how many years of observed performance are needed to decide that a player is an elite home run hitter. This question was posited in Section~\ref{intro} and we now address the question using our elite status indicators ${\rm E}_{ij}$. Taking all 559 available players examined in Section~\ref{predictions-internal}, we focus our attention on the subset of players that were determined by our model to be in the elite group (${\rm P}(E_{ij} = 1) \geq 0.5$) for at least two years in their career. For each elite home run hitter, we tabulate the number of years of observed data that were needed before they were declared elite. The distribution of the number of years needed is given in Figure~\ref{elitedist}. We see that although some players are determined to be elite based on just one year of observed data, most players (74\%) need more than one year of observed performance to determine that they are elite home run hitters. In fact, almost half of players (46\%) need more than two years of observed performance to determine that they are elite home run hitters.
\begin{figure}[ht]
\caption{Distribution of number of years of observed data needed to infer elite status (${\rm P}(E_{ij} = 1) \geq 0.5$) among all players determined by our model to be elite during their career.} \label{elitedist}
\vspace{-0.5cm}
\begin{center}
\includegraphics[width=6in,height=4in]{fig.elitedist.eps}
\end{center}
\end{figure}
We also investigated our model dynamics among older players by examining the balancing of past consistency with advancing age, which was also posited as a question in Section~\ref{intro}. Specifically, for the older players (age $\geq 35$) in our dataset, we examined the differences between the 2006 HR rate predictions $\hat{\theta}_{i,2006} = {\rm E}(\theta_{i,2006} | \bX)$ from our model versus the naive prediction based entirely on the previous year $\tilde{\theta}_{i,2006} = Y_{i,2005}/M_{i,2005}$. Is our model contribution for a player (which we define as the difference between our model prediction $\hat{\theta}_{i,2006}$ and the naive prediction $\tilde{\theta}_{i,2006}$) more a function of advancing age or past consistency of that player? Both age and past consistency (measured as the standard deviation of their past home run rates) were found to be equally good predictors of our model contribution, which suggests that both sources of information are being evenly balanced in the predictions produced by our model.
\subsection{Age Trajectory Curves} \label{trajectoryresults}
In addition to validating our model in terms of prediction accuracy, we can also examine the age trajectory curves that are implied by our estimated posterior distribution (\ref{fullposterior}). We will examine these curves on the scale of the home run rate $\theta_{ij}$ which is a function of age $A_{ij}$, ball-park $b$, and elite status ${\rm E}_{ij}$ for player $i$ in year $j$ (with position $k$):
\begin{eqnarray}
\theta_{ij} = \frac{\exp \left[(1-{\rm E}_{ij})\cdot\alpha_{k0} + {\rm E}_{ij}\cdot\alpha_{k1} + \beta_{b} + f_k(A_{ij})\right]}{1 + \exp \left[(1-{\rm E}_{ij})\cdot\alpha_{k0} + {\rm E}_{ij}\cdot\alpha_{k1} + \beta_b + f_k(A_{ij})\right]} \label{thetaequation2}
\end{eqnarray}
The shape of these curves can differ by position $k$, ballpark $b$ and also can differ between elite and non-elite status as a consequence of having a different additive effect $\alpha_{k0}$ vs. $\alpha_{k1}$. In Figure~\ref{agecurves}, we compare the age trajectories for two positions, DH and SS, for both elite player-years ($E_{ij} = 1$) vs. non-elite player-years ($E_{ij} = 0$) for an arbitrary ballpark. Each graph contains multiple curves (100 in each graph), each of which is the curve implied by the sampled values $(\pmb{\alpha},\pmb{\gamma})$ from a single iteration of our converged and thinned Gibbs sampling output. Examining the curves from multiple samples gives us an indication of the variability in each curve.
\begin{figure}[ht]
\caption{ Age Trajectories $f_k(\cdot)$ for two positions and elite vs. non-elite status. X-axis is age and Y-axis is Rate = $\theta_{ij}$} \label{agecurves}
\begin{center}
\includegraphics[width=6.5in,height=5in]{fig.agecurves.eps}
\end{center}
\end{figure}
We see a tremendous difference between the two positions DH and SS in terms of the magnitude and shape of their age trajectory curves. This is not surprising, since home-run hitting ability is known to be quite different between designated hitters and shortstops. In fact, DH and SS were chosen specifically to illustrate the variability between positions with regard to home-run hitting. For the DH position, we also see that elite vs. non-elite status shows a substantial difference in the magnitude of the home-run rate, though the overall shape across age is restricted to be the same by the fact that players of both statuses share the same $f_k(A_{ij})$ in equation (\ref{thetaequation2}). There is less difference between elite and non-elite status for shortstops, in part due to the lower range of values for shortstops overall. Not surprisingly, the variability in the curves grows with the magnitude of the home run rate.
We also perform a comparison across all positions by examining the elite vs. non-elite intercepts $(\pmb{\alpha}_0,\pmb{\alpha}_1)$ that were allowed to vary by position. We present the posterior distribution of each elite and non-elite intercept in Figure~\ref{intercepts}. For easier interpretation, the values of each $\alpha_{k0}$ and $\alpha_{k1}$ have been transformed into the implied home run rate $\theta_{ij}$ for very young (age = 23) players in our dataset. We see in Figure~\ref{intercepts} that the variability is higher for the elite intercept in each position, and there is even more variability between positions. The ordering of the positions is not surprising: the corner outfielders and infielders have much higher home run rates than the middle infielder and centerfielder positions.
\begin{figure}[ht]
\caption{Distribution of the elite vs. non-elite intercepts $(\pmb{\alpha}_0,\pmb{\alpha}_1)$ for each position. The distributions of each $(\pmb{\alpha}_0,\pmb{\alpha}_1)$ are presented in terms of the home run rate $\theta_{ij}$ for very young (age = 23) players. The posterior mean is given as a black dot, and the 95\% posterior interval as a black line. } \label{intercepts}
\vspace{-1.25cm}
\begin{center}
\rotatebox{270}{\includegraphics[width=4.5in,height=6in]{fig.intercepts.eps}}
\end{center}
\end{figure}
For a player at a specific position, such as DH, our predictions of his home run rate for a future season is a weighted mixture of elite and non-elite DH curves given in Figure~\ref{agecurves}. The amount of weight given to elite vs. non-elite for a given player will be determined by the full posterior distribution (\ref{fullposterior}) as a function of that player's past performance. We illustrate this characteristic of our model in more detail in Figure~\ref{badyear} by examining six different hypothetical scenarios for players at the 2B position. Each plot in Figure~\ref{badyear} gives several seasons of past performance for a single player, as well as predictions for an additional season (age 30). Predictions are given both in terms of posterior draws of the home run rate as well as the posterior mean of the home run rate. The elite and non-elite age trajectories for the 2B position are also given in each plot. We focus first on the left column of plots, which shows hypothetical players with consistently high (top row), average (middle row), and poor (bottom row) past home run rates. We see in each of these left-hand plots that our posterior draws (gray dots) for the next season are a mixture of posterior samples from the elite and non-elite curves, though each case has a different proportion of elite vs. non-elite, as indicated by the posterior mean of those draws (black $\times$).
Now, what would happen if each of these players was not so consistent? In Section~\ref{intro}, we asked about the effect of a single sub-par year on our model predictions. The plots in the right column show the same three hypothetical players, but with their most recent past season replaced by a season with distinctly different (and relatively poor) HR hitting performance. We see from the resulting posterior means in each case that only the average player (middle row) has his predictions substantially affected by the one season of relatively poor performance. Despite the one year of poor performance, the player in the top row of Figure~\ref{badyear} is still considered to be elite in the vast majority of posterior draws. Similarly, the player in the bottom row of Figure~\ref{badyear} is going to be considered non-elite regardless of that one year of extra poor performance. The one season of poor performance has the most influence on the player in the middle row, since the model has the most uncertainty with regards to the elite vs. non-elite status of this average player.
\begin{figure}[ht]
\caption{Six different hypothetical scenarios for a player at the 2B position. Black curves indicate the elite and non-elite age trajectories for the 2B position. Black points represent several seasons of past performance for a single player. Predictions for an additional season are given as posterior draws (gray points) of the home run rate and the posterior mean of the home run rate (black $\times$). Left column of plots gives hypothetical players with consistently high (top row), average (middle row), and poor (bottom row) past home run rates. Right column of plots shows the same hypothetical players, but with their most recent past season replaced by a relatively poor HR hitting performance.} \label{badyear}
\begin{center}
\includegraphics[width=6.5in,height=7.5in]{fig.badyear.eps}
\end{center}
\end{figure}
\bigskip
\section{Discussion}\label{discussion}
We have presented a sophisticated Bayesian hierarchical model for home run hitting among major league baseball players. Our principled approach builds upon information about past performance, age, position, and home ballpark to estimate the underlying home run hitting ability of individual players, while sharing information across players. Our primary outcome of interest is the prediction of future home run hitting, which we evaluated on a held-out season of data (2006). When compared to the previous methods, PECOTA \cite[]{Sil03} and MARCEL \cite[]{Tan04b}, we perform well in terms of prediction accuracy, especially in our ``\% BEST'' measure, which tabulates the percentage of players for which our predictions are the closest to the truth. Our approach does especially well among young players, where a principled balance of positional information with past performance seems most helpful. It is worth noting that our method is fully automated without any post-hoc adjustments, which could possibly have been used to improve the performance of competing methods.
In addition, our method has the advantage of estimating the full posterior predictive distribution of each player, which provides additional information in the form of posterior intervals. Beyond our primary goal of prediction, our model-based approach also allows us to answer interesting supplemental questions such as the ones posed in Section~\ref{intro}.
We have illustrated our methodology using home runs as the hitting event since they are a familiar outcome that most readers can calibrate with their own anecdotal experience. However, our approach could easily be adapted to other hitting outcomes of interest, such as on-base percentage (rate of hits or walks) which has become a popular tool
for evaluating overall hitting quality. Also, although our procedure is presented in the context of predicting a single hitting event, we can also extend our methodology in order to model multiple hitting outcomes simultaneously. In this more general case,
there are several possible outcomes of an at-bat (out, single, double, etc.). Our units of observation for a given player $i$ in a given year $j$ are now a vector of outcome totals $\bY_{ij}$, which can be modeled as a multinomial outcome:
$\bY_{ij} \sim {\rm Multinomial} (M_{ij}, \pmb{\theta}_{ij})$ where $M_{ij}$ is the number of opportunities (at bats) for player $i$ in year $j$ and $\pmb{\theta}_{ij}$ is the vector of player- and year-specific rates for each outcome. Our underlying model for the rates $\theta_{ij}$ as a function of position, ball-park and past performance could be extended to a vector of rates $\pmb{\theta}_{ij}$. Our preliminary experience with this type of multinomial model indicates that single-event predictions (such as home runs) are not improved by considering multiple outcomes simultaneously, though one could argue that a more honest assessment of the variance in each event would result from acknowledging the possibility of multiple events from each at-bat.
An important element of our approach was the use of mixture modeling of the player population to further refine our estimated home run rates. Sophisticated statistical models have been used previously to model the careers of baseball hitters \cite[]{BerReeLar99}, but these approaches have not employed mixtures for the modeling of the player population. Our internal model comparisons suggest that this mixture model component is crucial for the accuracy of our model, dominating even information about player position. Using a mixture of elite and non-elite players limits the shrinkage of consistently elite home run hitters towards the population mean, leading to more accurate predictions. Our fully Bayesian approach also allows us to investigate the dynamics of our elite status indicators directly, as we do in Section~\ref{predictions-external}.
In addition to our primary goal of home run prediction, our model also estimates several secondary parameters of interest. We estimate career trajectories for both elite and non-elite players within each position. In addition to evaluating the dramatic differences between positions in terms of home run trajectories, our fully Bayesian model also has the advantage of estimating the variability in these trajectories, as can be seen in Figure~\ref{agecurves}. It is worth noting that our age trajectories do not really represent the typical major league baseball career, especially at the higher values of age. More accurately, our trajectories represent the typical career conditional on the player staying in baseball, which is one reason why we do not see dramatic dropoff in Figure~\ref{agecurves}. Since our primary goal is prediction, the fact that our trajectories are conditional is acceptable, since one would presumably only be interested in prediction for baseball players that are still in the major leagues. However, if one were more interested in estimating unconditional trajectories, then a more sophisticated modeling of the drop-out/censoring process would be needed.
Our focus in this paper has been the modeling of home run rates $\theta_{ij}$ and so we have made an assumption throughout our analysis that the number of plate appearances, or opportunities, for each player is a known quantity. This is a reasonable assumption when retrospectively estimating past performance, but when predicting future hitting performance the number of future opportunities is not known. In order to maintain a fair comparison between our method and previous approaches for prediction of future totals, we have used the future number of opportunities, which is not a reasonable strategy for real prediction. A focus of future research is to adapt our sophisticated hierarchical approach to the modeling and prediction of plate appearances $M_{ij}$ in addition to our current modeling of hitting rates $\theta_{ij}$.
\section{Acknowledgements}\label{acknowledgements}
We would like to thank Dylan Small and Larry Brown for helpful discussions.
\section{Introduction}
Cricket is the most popular sport in India. Different formats of this game have gained popularity at different times, and in recent times the Twenty20 (T20) format of the game has gained popularity. In the last eleven years, the Indian Premier League (IPL) has created a position of its own in the world cricket community. In this tournament, there are eight teams named after eight cities of India. Each team is owned by one or more franchises. A pool of players is created for an auction. Each player is allotted a base price, and the maximum amount each franchise can spend for its entire team is fixed. The auction determines the team for each franchise.
Naturally, the aim of every franchise is to buy the best players from the auction who can help them win the tournament. Each franchise, therefore, maintains a think-tank whose primary job is to determine the players whom they want to buy from the auction. This is not a trivial task because (i) it is not possible to always pick the best players since the budget for a team is fixed, and (ii) it often happens that some other franchise buys a player who was on the target list of a franchise. In such situations, the think-tank must determine the best alternate player for the team. The history of the IPL has repeatedly seen teams failing to perform in the tournament due to poor player selection.
In this paper, we have developed a recommendation system for player selection based on a heuristic ranking of players, and a greedy algorithm for team selection. The algorithm can help the think-tank to determine the best potential team, and an alternate player if their target player is not available. Previous studies \cite{dey, prakash, shah, sharma} concentrate either on the current form of the players \cite{Carl, Moore}, or on their long term performance history \cite{Harsha}. However, these two factors individually are not sufficient to decide whether a player is to be bought. Other factors, such as whether a batsman is an opener, a middle-order player or a finisher, and consideration of features pertinent to those roles, are of utmost necessity. For example, a middle-order batsman can afford a lower strike rate if he has a good average, but not a finisher. Furthermore, it is necessary to determine a balance between the recent form of a player and the past history of a good player whose recent form may not be up to the mark.
In this paper, we have considered a set of traditional and derived features and have quantified them. Not all of these features are equally important for every player in every position. Therefore, we have broadly classified a potential team into multiple positions, and for each position we have heuristically determined the appropriate weight for these features. For each player in the pool, we have obtained a score based on these weighted features, and have ranked the players accordingly. The ranking obtained by this technique is in accordance with the well known ranking of players in the IPL. Finally, a relative score on a scale of 1-10 is allotted to each player. Moreover, a fixed total credit is allocated for a team of 15 players, which emulates the fixed budget assigned to each team. We then use a greedy algorithm to select the best team within this budget using the aforementioned ranking scheme.
The rest of the paper is organized as follows - In Section 2, we define the traditional and derived features which have been considered for batsmen, and quantify them. Three clusters - openers, middle-order and finishers, have been defined in Section 3, each having a heuristic scoring formula which is a weighted sum of those features. The batsmen have been ranked into these clusters according to their points by these heuristics. The features for the bowlers are quantified in Section 4 and the bowlers are ranked accordingly. In Section 5, we further assign credit points to the players according to their ranks. We present two greedy algorithms for selecting the best IPL team from the previous ranking when the total credit point of the team is fixed. We conclude in Section 6.
\section{Analyzing features for batsmen}
We have created a database of all the players and their performance in the last eleven seasons of the IPL. Players who have already retired are removed from the database. The performance values for those players who have not played some of the early seasons are assigned 0 for those seasons. This comes in handy later on while determining the experience factor. For the analysis of current form, we have considered the values from the 2018 season of the IPL only. In Table~\ref{tab:traditional} we note the traditional features which are considered for the analysis of players. These are very standard features used to report the performance of players in every cricket match \cite{david}, and hence we do not discuss them. Apart from these features, some derived features are also quantified, which we shall discuss later in this section.
\begin{table}[htb]
\centering
\caption{Standard features for batsmen and bowlers}
\begin{tabular}{|c|c|}
\hline
Batsmen & Bowlers \\
\hline
Innings & Innings\\
Runs Scored & Wickets Taken\\
\# Balls Faced & \# Balls Bowled\\
Average & Average\\
Strike Rate & Strike Rate\\
\# 100s & \# Runs Conceded\\
\# 50s & Economy Rate\\
\# 4s Hit & \# 4 Wickets\\
\# 6s Hit & \# 5 Wickets\\
\hline
\end{tabular}
\label{tab:traditional}
\end{table}
We have grouped the batsmen into three position clusters - openers, middle order batsmen and finishers - since these three types of players have very different roles in the match. In the remaining part of this section, we have quantified the features considered in Table~\ref{tab:traditional} for grouping. However, these features apply to all batsmen, and hence are not sufficient for the clustering. Therefore, we also consider some derived features which take into account the specific roles of batsmen in different position clusters.
\begin{itemize}
\item \textbf{Batting Average}: This feature denotes the average runs scored by a batsman per dismissal.
\begin{center}
Avg = Runs/(\# Innings - \# Not Out)
\end{center}
\item \textbf{Strike Rate}: Strike rate is defined as the average runs scored by a batsman per 100 balls. The higher the strike rate, the more effective a batsman is at scoring runs quickly.
\begin{center}
SR = ($100 \times$ Runs)/(\# Balls)
\end{center}
\item \textbf{Running Between the Wicket}: Though this is a very frequently used term in cricket, there is no proper quantification of this feature. We have quantified it as the number of runs scored per ball in which fours or sixes were not hit.
\begin{center}
RunWicket = (Runs - \# Fours$\times 4$ - \# Sixes$\times 6$)\\ /(\# Balls - \# Fours - \# Sixes)
\end{center}
\item \textbf{Hard Hitting}: T20 is a game of runs, and to win it is necessary to score runs quickly. Therefore, apart from quick running, it is necessary to hit many fours and sixes. ``Hard Hitter" is a common term in T20 cricket, but it is not quantified. We have quantified this feature as the number of runs scored per ball by hitting four or six.
\begin{center}
HardHitting = (\# Fours$\times4$ + \# Sixes$\times6$)/\# Balls
\end{center}
\end{itemize}
In addition to these, we have defined a \texttt{cost} feature for each of the features. The set of \texttt{cost} features is used to obtain a relative score of an individual with respect to all the IPL players. Let $f(i)$ denote the value of a feature $f$ for the $i$-th player. If the total number of IPL players is $n$, then the cost feature for $f$ is defined as $f(i)$/($\max\limits_{1\le j \le n}$ \{$f(j)$\}). Using this formula, we have calculated the cost feature for each of the features discussed above.
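As an illustration of this normalisation, a minimal Python sketch of the derived batting features and the cost-feature computation is given below; the function and variable names are ours and not part of the original data, and the guard against division by zero (a batsman who has never been dismissed) is an added assumption.
\begin{verbatim}
# Sketch of the derived batting features and the cost-feature normalisation.
def batting_features(runs, balls, innings, not_out, fours, sixes):
    dismissals = innings - not_out
    avg = runs / dismissals if dismissals > 0 else runs
    sr = 100.0 * runs / balls
    run_wicket = (runs - 4 * fours - 6 * sixes) / (balls - fours - sixes)
    hard_hitting = (4 * fours + 6 * sixes) / balls
    return {"Avg": avg, "SR": sr,
            "RunWicket": run_wicket, "HardHitting": hard_hitting}

def cost_feature(values):
    """Divide each player's feature value by the maximum over the pool."""
    max_value = max(values)
    return [v / max_value for v in values]
\end{verbatim}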
Experience of a player is an important criterion which should be considered in addition to the above features. Therefore we have defined the experience factor ($x$fact) as
\begin{center}
$x$fact(i) = innings(i)/(\# innings in IPL so far)
\end{center}
where innings(i) implies the number of innings the $i$-th player has played. Define range$_x$fact as follows
\begin{center}
range$_x$fact = $\max\limits_{1\le j \le n}$ \{$x$fact(j)\} - $\min\limits_{1\le k \le n}$ \{$x$fact(k)\}
\end{center}
Then the relative experience of a player (cost$_x$fact) is defined as
\begin{center}
cost$_x$fact(i) = $x$fact(i)/range$_x$fact
\end{center}
The calculation of cost feature and cost$_x$fact is similar for bowlers also.
\section{Clustering and ranking of batsmen}
We have clustered the batsmen into three major categories - (i) opener, (ii) middle order and (iii) finisher. These three types of batsmen are required to play different roles in the match, and hence are expected to have different skills. A total weight of $100$ is divided among the features for each category of batsman. The division of the total weight among the features is heuristic, so that it models the skill requirements for batsmen in different clusters. Furthermore, the ranking of players obtained by such a weight distribution conforms with our known player ranking. In the following subsections we discuss the motivations for the weight division in each position cluster, and show the top five players according to our ranking scheme.
\subsection{Opening batsman}
The responsibility of setting up a good foundation for the team's score lies with the openers. The openers get to face the maximum number of balls, and are therefore expected to have a high average. Furthermore, they need to score quickly in the first powerplay, so a healthy strike rate is also a good indicator of the effectiveness of an opening batsman. Both these features are equally important and are, therefore, assigned the highest weight of 30 each. Furthermore, an opener is expected to stay at the crease for a long time and score big runs. Therefore, we have assigned a weight of 20 to the number of half-centuries (hc) scored by an opener per innings. Often an opener requires some time to settle in before starting to hit hard. During the time when an opener is still not hitting hard, he should rotate the strike quickly to keep the scoreboard moving. However, the necessity of hard hitting cannot be totally ignored during the powerplay. This motivates us to assign a weight of 10 to both running between the wickets and hard hitting.
Based on the choice of feature and weight division, the relative score of the $i$-th opener (opener(i)) is determined as
\begin{center}
opener(i) = cost\_SR(i)$\times 30$ + cost\_Avg(i)$\times 30$ + (hc(i)/innings(i))$\times 20$ + cost\_RunWicket(i)$\times 10$ + cost\_HardHitting(i)$\times 10$
\end{center}
We have used the notation $f(i)$ to denote the value of the feature $f$ for the $i$-th player considering all the seasons of IPL. Another notation $f[i]$ is used to denote the value of the same feature considering only the last season of IPL. The relative current score of the $i$-th opener(curr\_opener[i]) is determined as
\begin{center}
curr\_opener[i] = cost\_SR[i]$\times 30$ + cost\_Avg[i]$\times 30$ + (hc[i]/innings[i])$\times 20$ + cost\_RunWicket[i]$\times 10$ + cost\_HardHitting[i]$\times 10$
\end{center}
Considering the experience factor for each player, the final rank of the $i$-th opener is calculated as
\begin{center}
opener\_rank(i) = opener(i)$\times$ cost$_x$fact(i)$\times$ (curr\_opener[i]/mean\_opener) + curr\_opener[i]
\end{center}
where mean\_opener is the average score of all the openers.
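The opener scoring and ranking formulas above can be sketched in Python as follows; the same function applies to the middle-order and finisher formulas of the next subsections with their respective weights, and the dictionary keys and variable names are illustrative rather than taken from our implementation.
\begin{verbatim}
# Sketch of the weighted cluster score and the final rank score.
# `career` holds the quantities in the opener(i) formula computed over all
# seasons; `current` holds the same quantities for the last season only.
OPENER_WEIGHTS = {"cost_SR": 30, "cost_Avg": 30, "hc_per_innings": 20,
                  "cost_RunWicket": 10, "cost_HardHitting": 10}

def cluster_score(features, weights):
    return sum(weights[name] * features[name] for name in weights)

def rank_score(career, current, x_fact, mean_current, weights):
    career_score = cluster_score(career, weights)
    current_score = cluster_score(current, weights)
    return career_score * x_fact * (current_score / mean_current) + current_score
\end{verbatim}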
The top five opening batsman from IPL pool of players and their corresponding point derived according to our ranking scheme is shown in Table~\ref{tab:Open}.
\begin{table}[htb]
\centering
\caption{Top five opening batsmen according to our ranking scheme}
\begin{tabular}{|c|c|}
\hline
Batsman & Points \\
\hline
AB de Villiers & 173.5798\\
MS Dhoni & 159.0942\\
DA Warner & 150.113\\
V Kohli & 133.8061\\
CH Gayle & 132.4749\\
\hline
\end{tabular}
\label{tab:Open}
\end{table}
Four out of the five names are indeed the top openers or first down batsmen in IPL. The striking inclusion in this table is MS Dhoni who is almost always a finisher. However, we shall see in the subsequent subsections that the points obtained by Dhoni as a finisher is significantly higher than his points as an opener. That his name appeared in this table simply shows the effectiveness of Dhoni in a T20 match.
\subsection{Middle order batsman}
The batsmen in this genre need to provide stability and must also possess the ability to accelerate the scoreboard when chasing a big total. A middle order batsman must be a good runner between the wickets, since it becomes difficult to hit big shots during this phase of the match with the fielders spread out. Furthermore, often when one or both of the openers get out quickly, the middle order batsmen must take up the responsibility to score big runs. Therefore a decent average is necessary.
The weights for middle order batsmen have been distributed among the features taking the above requirements into consideration. The relative score of the $i$-th middle order batsman (middle(i)) is determined as
\begin{center}
middle(i) = cost\_SR(i)$\times 20$ + cost\_Avg(i)$\times 30$ + (hc(i)/innings(i))$\times 10$ + cost\_RunWicket(i)$\times 25$ + cost\_HardHitting(i)$\times 15$
\end{center}
In accordance with the calculation for openers, the relative current score of the $i$-th middle order batsman (curr\_middle[i]) is determined as
\begin{center}
curr\_middle[i] = cost\_SR[i]$\times 20$ + cost\_Avg[i]$\times 30$ + (hc[i]/innings[i])$\times 10$ + cost\_RunWicket[i]$\times 25$ + cost\_HardHitting[i]$\times 15$
\end{center}
Considering the experience factor for each player, the final rank of the $i$-th middle order batsman is calculated as
\begin{center}
middle\_rank(i) = middle(i)$\times$ cost$_x$fact(i)$\times$ (curr\_middle[i]/mean\_middle) + curr\_middle[i]
\end{center}
where mean\_middle is the average score of all the middle order batsmen.
Based on the middle\_rank, we have sorted all the batsmen in descending order of their score. The top five middle order batsmen, according to our scoring scheme is shown in Table~\ref{tab:middle}.
\begin{table}[htb]
\centering
\caption{Top five middle order batsmen according to our ranking scheme}
\begin{tabular}{|c|c|}
\hline
Batsman & Points\\
\hline
AB de Villiers & 183.9566 \\
MS Dhoni & 169.7258\\
DA Warner & 163.6285\\
V Kohli & 150.5608\\
KD Karthik & 137.0331\\
\hline
\end{tabular}
\label{tab:middle}
\end{table}
Once again, the names in this ranking do not require any justification. It is worthwhile to note that Dhoni is present in this list also, and his score is slightly higher than his score as an opener. This shows that Dhoni is more effective as a middle order batsman.
\subsection{Finisher}
Finishers usually have the task of scoring quick runs at the end of the match. Naturally, strike rate and hard hitting are the most important factors for any finisher. It is difficult for a finisher to score big runs regularly since they usually get to play very few overs. Therefore, the average score is not considered for these players. Running between the wickets is also an important factor for these batsmen. These players are also expected to remain not out and win the match for the team.
In accordance with the above requirements, we have calculated the relative score of the $i$-th finisher (finisher(i)) as follows
\begin{center}
finisher(i) = cost\_SR(i)$\times 40$ + cost\_HardHitting(i)$\times 40$ + not\_out(i)$\times 5$ + cost\_RunWicket(i)$\times 15$
\end{center}
The current form of the $i$-th finisher (curr\_finisher[i]) is calculated considering only the feature scores from the last season.
\begin{center}
curr\_finisher[i] = cost\_SR[i]$\times 40$ + cost\_HardHitting[i]$\times 40$ + not\_out[i]$\times 5$ + cost\_RunWicket[i]$\times 15$
\end{center}
Here, mean\_finisher is the average score of all the finishers. Eventually, the total score of the $i$-th finisher, considering the experience factor, is calculated as follows.
\begin{center}
finisher\_rank(i) = finisher(i)$\times$ cost$_x$fact(i)$\times$ (curr\_finisher[i]/mean\_finisher) + curr\_finisher[i]
\end{center}
Based on the finisher\_rank score, the top five finishers in the IPL are shown in Table~\ref{tab:finish}, which clearly shows that Dhoni should be used as a finisher rather than as an opener or middle-order batsman.
\begin{table}[htb]
\centering
\caption{Top five finishers according to our ranking scheme}
\begin{tabular}{|c|c|}
\hline
Batsman & Points\\
\hline
MS Dhoni & 364.3758 \\
DJ Bravo & 248.9014\\
AB de Villiers & 223.4076\\
YK Pathan & 215.7580\\
KD Karthik & 214.2518\\
\hline
\end{tabular}
\label{tab:finish}
\end{table}
Having obtained the score for each player in these three categories, we assign one or more labels (O (Opener), M (Middle Order), F (Finisher)) to the players. The category in which the player has the maximum score is naturally assigned as a label for that player. However, if a player has a higher (or equal) rank in some other category, then that category is also assigned to that player. Such players can be used interchangeably among those categories. For example, Dhoni is assigned only as a finisher since both his rank and his score are higher as a finisher than in the other two categories. However, de Villiers has a higher score as a finisher, but a better rank as a middle order or opening batsman, so he can be used interchangeably among these three categories. Similarly, Karthik can be used both as a middle order batsman and as a finisher.
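Under our reading of this rule (assign the cluster with the highest score, plus any cluster in which the player's rank is at least as good as his rank in that best-scoring cluster), the label assignment can be sketched as follows; the example values are hypothetical and only echo the de Villiers discussion above.
\begin{verbatim}
# Sketch: assign cluster labels (O, M, F) from per-cluster scores and ranks.
def assign_labels(scores, ranks):
    best = max(scores, key=scores.get)      # cluster with the highest score
    labels = {best}
    for cluster in ranks:
        if ranks[cluster] <= ranks[best]:   # equal or better rank elsewhere
            labels.add(cluster)
    return labels

# Hypothetical example: highest score as a finisher, better rank elsewhere.
scores = {"O": 173.6, "M": 184.0, "F": 223.4}
ranks = {"O": 1, "M": 1, "F": 3}            # 1 = best rank
print(assign_labels(scores, ranks))         # {'O', 'M', 'F'}
\end{verbatim}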
\section{Analyzing features for bowlers}
Similar to batsmen, we have considered a set of parameters for bowlers and have quantified them. The features which have been considered are as follows -
\begin{itemize}
\item \textbf{Wicket Per Ball}: It is defined as the number of wickets taken per ball.
\begin{center}
wicket\_per\_ball = (\# wickets taken)/(\# balls)
\end{center}
\item \textbf{Average}: It denotes the number of runs conceded per wicket taken.
\begin{center}
Ave = (\# runs conceded)/(\# wickets taken)
\end{center}
\item \textbf{Economy rate}: Economy rate for a bowler is defined as the number of runs conceded per over bowled.
\begin{center}
Eco = (\# runs conceded)/(\# overs bowled) = (\# runs conceded * 6)/(\# balls)
\end{center}
\end{itemize}
We have not clustered the bowlers into groups. Instead, we have combined two aspects of good bowling into a single heuristic. A bowler who can take 4 or 5 wickets in a match should be included in the team. However, it is better to pick a bowler who takes 1 or 2 wickets per match rather than a bowler who takes 4 or 5 wickets once in a while. Strike rate and consistency have been jointly quantified for the $i$-th bowler as
\begin{center}
bowler(i) = (4*four(i) + 5*five(i) + wicket(i))*6/ball(i)
\end{center}
where four(i) and five(i) denote the number of matches in which the bowler took 4 and 5 wickets respectively, whereas wicket(i) denotes the total number of wickets taken in those matches where the bowler did not take 4 or 5 wickets. Taking the other features into consideration, we divide a total weight of 100 as follows -
\begin{center}
bowler\_val(i) = wicket\_per\_ball(i)*35 + bowler(i)*35 + (1/Ave(i))*10 + (1/Eco(i))*10
\end{center}
The current form of a bowler (curr\_bowler\_val) is calculated similarly considering only the last season's values. The total point of a bowler, considering both current and overall form, is denoted as
\begin{center}
final\_bowler(i) = bowler\_val(i) $\times$ (curr\_bowler\_val(i)/mean\_bowler) $\times$ cost$_x$fact(i) + curr\_bowler\_val(i)
\end{center}
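A Python sketch of the bowler heuristic above is given below; the variable names are illustrative, and the sketch assumes that the bowler has bowled at least one ball and taken at least one wicket, so that the average and economy are defined.
\begin{verbatim}
# Sketch of the bowler features and the weighted bowler score.
def bowler_consistency(fours, fives, other_wickets, balls):
    # 4- and 5-wicket hauls are up-weighted; remaining wickets counted once
    return (4 * fours + 5 * fives + other_wickets) * 6 / balls

def bowler_value(wickets, balls, runs_conceded, fours, fives, other_wickets):
    wicket_per_ball = wickets / balls
    average = runs_conceded / wickets
    economy = runs_conceded * 6 / balls
    consistency = bowler_consistency(fours, fives, other_wickets, balls)
    return (wicket_per_ball * 35 + consistency * 35
            + (1 / average) * 10 + (1 / economy) * 10)
\end{verbatim}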
Based on this ranking scheme, we show the top 5 bowlers in Table~\ref{tab:bowl}.
\begin{table}[htb]
\centering
\caption{Top five bowlers according to our ranking scheme}
\begin{tabular}{|c|c|}
\hline
Bowler & Points\\
\hline
A. Tye & 335.3772 \\
A. Mishra & 296.390\\
S. Narine & 254.321\\
P. Chawala & 223.7809\\
R. Jadeja & 223.283\\
\hline
\end{tabular}
\label{tab:bowl}
\end{table}
In the next section we provide the algorithms for selecting a team of 15 players, where the budget is fixed.
\section{Greedy algorithm for team selection}
In this section, we propose two greedy algorithms for team selection. Each team, containing $n$ players, is partitioned into the following buckets: $B$ = \{Opener, Middle-order, Finisher, Bowler\}, where each bucket $B[i]$ is a set of $k_i$ players such that
\begin{equation}
\label{eq:com}
\sum_{i \in B} k_i = n
\end{equation}
Each team is allotted a \emph{value}, which emulates the total budget for a team. The number of players $k_i$ in each bucket $B[i]$ is decided by the user, and the \emph{unit} for each bucket is $unit(B_i) = (value*k_i)/4$.
\subsection{Assigning credit points to players}
Players in each cluster are further assigned credit points based on their ranking. This helps us to emulate the base price of a player. If a cluster contains $c_n$ players, and it is partitioned into $c_p$ credit point groups, then each group contains $c_n$/$c_p$ players. The first $c_n/c_p$ players are assigned to the highest credit point group, the next $c_n/c_p$ players to the second highest credit point group, and so on. Finally, each group is assigned a credit point that decreases as we go down the groups. This step is necessary because the fixed budget of each team has been emulated as a fixed value for each team; the total credit of the team should not exceed this fixed value.
In the example of the following subsection, we have considered four credit groups with valuations 10, 9, 8 and 7. However, the number of groups, as well as the valuations, can be varied according to the team selection criteria.
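A short sketch of this credit-group assignment is shown below, assuming the players of a cluster are already sorted from best to worst rank; handling any remainder players by placing them in the lowest group is an assumption of ours, since the description above takes $c_n$ to be divisible by $c_p$.
\begin{verbatim}
# Sketch: split a ranked cluster of players into credit-point groups.
def assign_credits(ranked_players, credits=(10, 9, 8, 7)):
    group_size = max(1, len(ranked_players) // len(credits))
    credit_of = {}
    for pos, player in enumerate(ranked_players):
        group = min(pos // group_size, len(credits) - 1)
        credit_of[player] = credits[group]
    return credit_of
\end{verbatim}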
\subsection{First greedy algorithm}
In our first algorithm, we consider wicketkeepers as a separate bucket. Equation~\ref{eq:com} is therefore modified as $\sum_{i \in B} k_i + w = n$, where $w$ is the number of wicketkeepers. Algorithm~\ref{greedy1} shows our first algorithm for team selection.
\begin{algorithm}[htb]
\caption{Greedy Algorithm 1 for team selection}
\begin{algorithmic}[1]
\REQUIRE The pool of players clustered into one or more of the buckets \emph{opener, middle-order, finisher} and \emph{bowler} along with their corresponding rank. The total number of players $n$ in a team, the total valuation \emph{value} of the team, the number of players $k_i$ in each bucket $B[i]$ and the number of wicketkeepers $w$ in the team, such that $\sum_{i \in B} k_i + w = n$.
\ENSURE An optimal team of $n$ players.
\STATE unit $\leftarrow$ value/5
\FORALL{$b \in$ \{Wicketkeeper, Opener, Middle-order, Finisher, Bowler\}}
\STATE $cap_b$ $\leftarrow$ $unit \times k_b$
\STATE $min_b$ $\leftarrow$ minimum credit point of a player in bucket $b$
\STATE rem $\leftarrow$ $cap_b$
\FOR{\emph{pos} in 1 to $k_b$}
\IF{rem $< min_b$}
\WHILE{True}
\STATE j $\leftarrow$ pos
\WHILE{rem $< min_b$}
\STATE j = j-1
\IF{(credit at j)-1 $\geq min_b$}
\STATE credit at j = (credit at j)-1
\STATE rem = rem+1
\ENDIF
\IF{rem $\geq min_b$}
\STATE \algorithmicbreak
\ENDIF
\ENDWHILE
\IF{rem $\geq min_b$}
\STATE \algorithmicbreak
\ENDIF
\ENDWHILE
\ENDIF
\STATE Assign the highest credit $\leq$ rem in pos
\STATE rem = rem - assigned credit
\ENDFOR
\ENDFOR
\end{algorithmic}
\label{greedy1}
\end{algorithm}
Algorithm~\ref{greedy1} assigns the best possible credit to each player position. The total credit assigned within each bucket is bounded by its capacity, which is determined by the value of \emph{unit}. If the algorithm comes across any position where the remaining credit is less than the minimum credit in the player pool, then it backtracks and reduces the credits assigned in the previous positions until a player is assignable in the current position. This, being a greedy algorithm, carries the risk that it may end up assigning a few players with the best rankings along with a few players with very low rankings.
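A simplified re-implementation of this per-bucket credit assignment is sketched below. It fills each position in a bucket with the highest available credit that fits within the bucket's remaining capacity, and omits the backtracking step of Algorithm~\ref{greedy1}; the example values mirror the wicketkeeper bucket discussed in the following paragraphs.
\begin{verbatim}
# Simplified sketch of the greedy credit assignment (no backtracking).
def fill_bucket(capacity, slots, credit_levels=(10, 9, 8, 7)):
    assigned, remaining = [], capacity
    for _ in range(slots):
        # highest credit level that still fits within the remaining capacity
        choice = next((c for c in credit_levels if c <= remaining), None)
        if choice is None:
            break              # Algorithm 1 would backtrack at this point
        assigned.append(choice)
        remaining -= choice
    return assigned

# e.g. a two-position bucket with unit 9 has capacity 18 -> credits [10, 8]
print(fill_bucket(capacity=18, slots=2))
\end{verbatim}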
We now produce a team of 15 players using the proposed algorithm. If a total value of $150$ or more is assigned to the team, then players of credit 10 can be selected for each position. Furthermore, a very low value can lead to a very poor team. For our example, we have chosen a value of 135, such that the unit is 9. Furthermore, in our example team, we shall have two wicketkeepers, two openers, three middle-order batsman, two finishers and six bowlers.
Since the wicketkeeper bucket has two positions and the unit is 9, a total credit of 18 can be assigned to the two wicketkeepers. We first assign 10 points to the first position, and the remaining 8 points are assigned to the second position. From the ranking of players who are also wicketkeepers, Dhoni is assigned to the first position with 10 points, and S. Samson is assigned to the second position with 8 points. The team of 15 players obtained using Algorithm~\ref{greedy1} is shown in Table~\ref{tab:team1}.
\begin{table}[htb]
\centering
\caption{A team of 15 players, with a total credit point of 135, selected using Algorithm~\ref{greedy1}}
\begin{tabular}{|c|c|c|}
\hline
Position & Player & Credit Point\\
\hline
\multirow{2}{*}{Wicketkeeper} & M.S. Dhoni & 10 \\
& S. Samson & 8\\
\hline
\multirow{2}{*}{Opener} & D. Warner & 10\\
& K.L. Rahul & 8\\
\hline
\multirow{3}{*}{Middle-order} & V. Kohli & 10\\
& A.B. de Villiers & 10\\
& F. du Plesis & 7\\
\hline
\multirow{2}{*}{Finisher} & D. Bravo & 10\\
& R. Pant & 8\\
\hline
\multirow{6}{*}{Bowlers} & A. Tye & 10\\
& A. Mishra & 10\\
& T. Boult & 9\\
& S. Al Hassan & 9\\
& K. Jadav & 9\\
& M. Johnson & 7\\
\hline
\end{tabular}
\label{tab:team1}
\end{table}
\subsection{Second Greedy Algorithm}
In the team selected using Algorithm~\ref{greedy1} (Table~\ref{tab:team1}), both the wicketkeepers are finishers. Therefore, the selected team ends up with four finishers. To avoid this scenario, the second greedy algorithm, which is otherwise similar to Algorithm~\ref{greedy1}, adds the restriction that the two wicketkeepers should not belong to the same bucket. With this restriction, the team is exactly the same as that in Table~\ref{tab:team1}, except that instead of S. Samson we select P. Patel, who is an opener, as the second wicketkeeper.
This second algorithm can easily be further modified to constrain the cluster of the wicketkeeper. For example, one can impose a restriction such that one of the wicketkeepers must be an opener. Our proposed algorithm is flexible enough to handle such restrictions. Moreover, using this algorithm along with the aforementioned ranking, a franchise can easily determine the best alternate player for a position if one of their target players is not available. For example, both Warner and Gayle have credit point 10, but the rank (and point score) of Warner is higher than that of Gayle. Therefore, if Warner is already selected by some other team, then he can be replaced with Gayle. If no player of credit 10 is available, then that position can be filled with players of credit 9, and so on.
\section{Conclusion}
In this paper, we have shown a heuristic method for IPL team selection. For each player, we have considered some traditional and derived features, and have quantified them. We have clustered the players into one or more of the clusters - Opener, Middle-order, Finisher and Bowler - according to the score achieved from those features. We have also taken into consideration both the current form and the experience of a player for this ranking. The ranking obtained by our heuristic scheme is in accordance with the known player rankings in the IPL. Finally, we have proposed two greedy algorithms to select the best possible team from this ranking when the total credit point of the team and the number of players in each bucket are fixed.
The future scope of this paper is to incorporate two higher level clusters of batting and bowling allrounders. The selection of the team can also include some more flexible buckets where allrounders are given higher preference than batsmen and bowlers. A trade-off between the inclusion of an allrounder in the team and a batsman or bowler with a higher credit point can be studied. Furthermore, the greedy algorithm for team selection has the shortcoming that it may select some high ranking players along with some very low ranking ones. A dynamic programming approach may be studied to ensure more or less equal quality players in the team.
\section*{Abstract}
Given a set of sequences comprised of time-ordered events, sequential pattern mining is useful to identify frequent subsequences from different sequences or within the same sequence.
However, in sport, these techniques cannot determine the importance of particular patterns of play to good or bad outcomes, which is often of greater interest to coaches and performance analysts.
In this study, we apply a recently proposed supervised sequential pattern mining algorithm called safe pattern pruning (SPP) to 490 labelled event sequences representing passages of play from one rugby team's matches from the 2018 Japan Top League.
We compare the SPP-obtained patterns that are the most discriminative between scoring and non-scoring outcomes from both the team's and opposition teams' perspectives, with the most frequent patterns obtained with well-known unsupervised sequential pattern mining algorithms when applied to subsets of the original dataset, split on the label.
Our results showed that linebreaks, successful lineouts, regained kicks in play, repeated phase-breakdown play, and failed exit plays by the opposition team were identified as the patterns that discriminated most between the team scoring and not scoring.
Opposition team linebreaks, errors made by the team, opposition team lineouts, and repeated phase-breakdown play by the opposition team were identified as the patterns that discriminated most between the opposition team scoring and not scoring.
It was also found that, by virtue of its supervised nature as well as its pruning and safe-screening properties, SPP obtained a greater variety of generally more sophisticated patterns than the unsupervised models that are likely to be of more utility to coaches and performance analysts.
\section*{Introduction}
Large amounts of data are now being captured in sport as a result of the increased use of GPS tracking and video analysis systems, as well as enhancements in computing power and storage, and there is great interest in making use of this data for performance analysis purposes.
A wide variety of methods have been used in the analysis of sports data, ranging from statistical methods to, more recently, machine learning and data mining techniques.
Among the various analytical frameworks available in sports analytics, in this paper, we adopt an approach to extract events from sports matches and analyze sequences of events.
The most basic events-based approach is based on the analysis of the \emph{frequencies} of events.
These frequencies can be used as performance indicators \cite{hughes2002use} by comparing the frequency of each event in positive outcomes (winning, scoring points, etc.) and negative outcomes (losing, conceding points, etc.) in order to investigate which events are commonly associated with these outcomes.
However, frequency-based analyses have drawbacks in that the information contained in the order of events cannot be exploited.
In this study, we consider a sequence of events, and refer to a partial sequence of events a \emph{sequential pattern} or simply a \emph{pattern (of play)}.
In sports, the occurrence of certain events in a particular order often has a strong influence on outcomes, so it is useful to use patterns as a basic analytical unit.
Invasion sports such as rugby (as well as soccer and basketball, for example) have many events and patterns that occur very frequently while having a paucity of events that are important for scoring.
For instance, in soccer, a pattern consisting of an accurate cross followed by a header that is on target will occur much less frequently than a pattern consisting of repeated passes between players, but the former pattern is likely to be of much greater interest to coaches and performance analysts because there is a good chance that the pattern may lead to a goal being scored.
The computational framework for finding patterns from sequential data that have specific characteristics is known as \emph{sequential mining} in the field of data mining.
The most basic problem setup in sequential mining is to enumerate frequent patterns, which is called \emph{frequent sequential mining}.
Although the total number of patterns (i.e., the number of ordered sequences of all possible events) is generally very large, it is possible to efficiently enumerate patterns that appear more than a certain frequency by making effective use of branch-and-bound techniques.
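To make the notions of a pattern and its frequency concrete, the following Python sketch computes the support of a candidate pattern by brute force over a toy set of sequences; the mining algorithms discussed in this paper obtain the same counts far more efficiently by pruning the search space, and the event names used here are purely illustrative.
\begin{verbatim}
# Naive illustration of ordered-subsequence containment and pattern support.
def contains(sequence, pattern):
    """True if `pattern` occurs in `sequence` as an ordered subsequence."""
    events = iter(sequence)
    return all(any(e == p for e in events) for p in pattern)

def support(sequences, pattern):
    return sum(contains(s, pattern) for s in sequences)

toy = [["kick", "phase", "breakdown", "phase", "try"],
       ["scrum", "phase", "breakdown", "error"]]
print(support(toy, ["phase", "breakdown", "phase"]))   # 1
\end{verbatim}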
Frequent sequential mining is categorised as an \emph{unsupervised learning technique} in the terminology of machine learning.
When applying frequent sequential mining to data from sport, there are several options.
The first option is to simply extract the frequent patterns from the entire dataset.
The drawback of this approach is that it is not possible to distinguish whether a pattern leads to good or bad outcomes.
The second option is to split the dataset into a ``good-outcome" dataset and a ``bad-outcome" dataset, and perform frequent sequential mining on each dataset.
The third option is to perform frequent sequential mining on the entire dataset to identify frequent patterns, and then create a machine learning model that uses the patterns as features to predict whether the outcomes are good or bad.
The disadvantage of the second and third options is that the process of pattern extraction and the process of relating the patterns to the ``goodness" of the outcomes are conducted separately.
Unlike unsupervised mining, a mining method that directly extracts patterns that are associated with good or bad outcomes is called \emph{supervised mining}.
Roughly speaking, by using supervised mining, we can directly find patterns that have different frequencies depending on the outcomes, thus we can find more direct effects on the outcomes than by simply combining unsupervised mining, as described above.
\subsection*{Related Work}
\subsubsection*{Sequential pattern mining}
Sequential pattern mining \cite{agrawal1995mining} involves discovering frequent subsequences as patterns from a database that consists of ordered event sequences, with or without strict notions of time \cite{mabroukeh2010taxonomy}.
Originally applied for the analysis of biological sequences \cite{wang2004scalable, ho2005sequential, exarchos2008mining, hsu2007identification}, sequential pattern mining techniques have also been applied to various other domains including XML document classification \cite{garboni2005sequential}, keyword and key-phrase extraction \cite{feng2011keyword, xie2014document, xie2017efficient}, as well as next item/activity prediction and recommendation systems \cite{yap2012effective,salehi2014personalized,ceci2014completion,wright2015use,tsai2015location,tarus2018hybrid}.
For an overview of the field of sequential pattern mining, we refer the reader to \cite{fournier2017survey}.
One of the first sequential pattern mining algorithms was GSP \cite{srikant1996mining}, which was based on earlier work in which the A-priori algorithm was proposed by the same authors \cite{agarwal1994fast}.
SPADE \cite{zaki2001spade}, SPAM \cite{ayres2002sequential}, and the pattern-growth algorithm PrefixSpan \cite{pei2004mining} were proposed to address some limitations that were identified with the GSP algorithm.
PrefixSpan is known as a pattern-growth algorithm, since it grows a tree that extends from a singleton (a set with a single event) and adds more events in descendant nodes.
More recently, CM-SPAM and CM-SPADE \cite{fournier2014fast} as well as Fast \cite{salvemini2011fast} have been proposed to provide further improvements in computational efficiency and therefore speed.
It should be noted that these frequently applied sequential mining algorithms listed above are unsupervised, i.e., are applied to unlabelled sequence data.
Safe pattern pruning (SPP) was proposed by \cite{nakagawa2016safe,sakuma2019efficient}, and combines a convex optimisation technique called safe screening \cite{ghaoui2010safe} with sequential pattern mining.
SPP is supervised and is applied to labelled data, i.e., to datasets consisting of labelled sequences.
SPP uses PrefixSpan as a building block to grow the initial pattern tree over all possible patterns in a database; the tree is then pruned according to a criterion which guarantees that if a node corresponding to a particular pattern is pruned, all patterns corresponding to its descendant nodes are not required for the predictive model (Fig~\ref{fig-spp-tree-pruning}).
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{spp_tree.PNG}
\caption{{\bf SPP pruning}. One of the mechanisms within SPP identifies and deletes patterns that do not contribute to the model before performing the optimization. For example, if pattern \textit{t} does not satisfy the SPP pruning criterion specified in \cite{sakuma2019efficient}, the sub-tree below pattern \textit{t} is deleted.}
\label{fig-spp-tree-pruning}
\end{figure}
All of the patterns that remain after pruning are then multiplied by weights in the form of a linear model, and these weights are obtained by solving an optimization problem; prior to solving, however, safe screening is used to eliminate weights that are guaranteed not to be discriminative (i.e., to have values of zero) at the optimal solution.
SPP has been applied to datasets consisting of animal trajectories \cite{sakuma2019efficient}; however, compared with animal trajectories, sports data often contains a greater diversity of events.
\subsubsection*{Application of sequential pattern mining techniques in sport}
Unsupervised sequential pattern mining techniques have been applied to data from sport, focusing primarily on the identification, interpretation and visualization of sequential patterns.
Table \ref{tbl-sport-spm} summarizes previous studies that have applied sequential pattern mining techniques to datasets in sport.
CM-SPAM has been applied in order to conduct technical tactical analysis in judo \cite{la2017ontology}.
Sequential data, obtained using trackers, has been used to test for significant trends and interesting sequential patterns in the context of the training of a single cyclist over an extended period of time \cite{hrovat2015interestingness}.
Decroos et al. \cite{decroos2018automatic} combined clustering and CM-SPADE in an application to data from soccer, using a five-step approach, which is presented in Table \ref{tbl-sport-spm}.
Their ranking function allowed the user, e.g., a coach, to assign higher weights to events that are of higher relevance, such as shots and crosses, compared to normal passes, which are very frequent but not necessarily relevant.
\begin{table}[!ht]
\begin{adjustwidth}{-2.25in}{0in}
\centering
\caption{
{\bf Prior studies that have applied sequential pattern mining techniques in sport.}}
\begin{tabular}{|p{0.08\linewidth}|l|p{0.09\linewidth}|l|p{0.3\linewidth}|p{0.2\linewidth}|}
\thickhline
\textbf{Study} &
\textbf{Sport} &
\textbf{Model Used} &
\textbf{Model Type} &
\textbf{Summary of Approach} &
\textbf{Evaluation Metrics} \\ \thickhline
Hrovat (2015) &
Cycling &
SPADE &
Unsupervised &
Applied the sequential pattern mining algorithm SPADE to identify frequent sequential patterns, calculated interestingness measures (p-values) for these frequent patterns, and visualized these patterns for increasing/decreasing daily and duration trends &
Support, permutation test p-values \\ \hline
La Puma \& Giorno (2017) &
Judo &
CM-SPAM &
Unsupervised &
Identified patterns using sequential pattern mining for the tactical analysis of judo techniques &
Support \\ \hline
Decroos (2018) &
Soccer &
CM-SPADE &
Unsupervised &
clustered phases based on spatio-temporal components, ranked these clusters, mined the clusters to identify frequent sequential patterns, used a ranking function (a weighted support function) - in which a coach can assign higher weights to more relevant events - to score obtained patterns, interpreted the obtained patterns &
Support (weighted by user's judgement weighting of the relevance of events), and identified the top-ranked frequent sequences in the clusters \\ \thickhline
\end{tabular}%
\begin{flushleft}
\end{flushleft}
\label{tbl-sport-spm}
\end{adjustwidth}
\end{table}
\subsubsection*{Analysis of sequences in rugby union}
In the sport of rugby union (hereafter referred to simply as rugby) specifically, some previous studies have analyzed matches at the sequence level by analyzing the duration of sequences.
For example, the duration of the sequences of plays leading to tries at the 1995 Rugby World Cup (RWC) were studied by \cite{carter20011995}.
In a study of the 2003 RWC, \cite{van2006movement} found that teams that were able to create movements that lasted longer than 80 seconds were more successful.
More recently, \cite{coughlan2019they} applied K-modes cluster analysis using sequences of play in rugby, and found that scrums, line-outs and kick receipts were common approaches that led to tries being scored in the 2018 Super Rugby season.
Recently, \cite{watson2020integrating} used convolutional and recurrent neural networks to predict the outcomes (territory gain, retaining possession, scoring a try, and conceding/being awarded a penalty) of sequences of play, based on event order and their on-field locations.
\subsection*{Motivation and Contributions}
In this study, we apply SPP, a supervised sequential pattern mining model, to data consisting of event sequences from all of the matches played by a professional rugby union team in their 2018 Japan Top League season.
The present study is motivated by the fact that, although sequential pattern mining techniques have been applied to sport, only unsupervised models appear to have been used to date.
In addition, no form of sequential pattern mining technique, unsupervised or supervised, appears to have been applied to the analysis of sequences of play in the sport of rugby union.
As a basis for comparison, we also compare the SPP-obtained subsequences with those obtained by well-known unsupervised sequential pattern mining algorithms (PrefixSpan, GSP, Fast, CM-SPADE and CM-SPAM) when they are applied to subsets of the original labelled data, split on the label.
The main contributions of this study are the comparison of the usefulness of supervised and unsupervised sequential pattern mining models that are applied to event sequence data in sport, the application of a supervised sequential pattern mining model to event sequence data in sport, and the application of a sequential pattern mining model for the analysis of sequences of play in rugby.
\subsection*{Notation}
The number of unique event symbols is denoted as $m$ and the set of those event symbols is denoted as ${\cal S} := \{s_1, \ldots, s_m\}$.
In this paper, we refer to sequences and subsequences as \emph{passages} of play and \emph{patterns} of play (or simply \emph{patterns}), respectively.
Let $n$ denote the number of sequences in the dataset ($n$=490 in our dataset).
Sequences with the labels 1 and -1 are denoted as ${\cal G}_{+}, {\cal G}_{-} \subseteq [n]$ and are of size $n_+ := |{\cal G}_+|, n_- := |{\cal G}_-|$, respectively.
The dataset for building the SPP model is
\begin{align*}
\{(\bm g_i, y_i)\}_{i \in [n]},
\end{align*}
where
$\bm g_i$
represents the $i$-th sequence/passage of play. Each sequence $\bm g_i$ takes a label from
$y_i \in \{\pm 1\}$
and can be written as
\begin{align*}
\bm g_i = \langle g_{i1}, g_{i2}, \ldots, g_{iT(i)} \rangle, i \in [n],
\end{align*}
where
$g_{it}$
is the $t$-th symbol of the $i$-th sequence, which takes one of the event symbols in ${\cal S}$,
and $T(i)$ indicates the length of the $i$-th sequence, i.e., the number of events in this particular sequence.
Patterns of play are denoted as $\bm q_1, \bm q_2, \ldots$, each of which is also a sequence of event symbols:
\begin{align*}
\bm q_j = \langle q_{j1}, q_{j2}, \ldots, q_{jL(j)} \rangle, j = 1, 2, \ldots,
\end{align*}
where $L(j)$ is the length of pattern $\bm q_j$ for $j = 1, 2, \ldots$.
The relationship whereby sequence $\bm g_i$ contains subsequence $\bm q_j$ is represented as $\bm q_j \sqsubseteq \bm g_i$.
The set of all possible patterns contained in any sequence
$\{\bm g_i\}_{i \in [n]}$
is denoted as
${\cal Q} = \{ \bq_i \}_{i \in [d]}$, where $d$ is the number of possible patterns (large in general).
\section*{Materials and Methods}
\label{sec-materials-methods}
\subsection*{Data}
We obtained XML data generated from video tagged in Hudl Sportscode (\url{https://www.hudl.com/products/sportscode}) by the performance analyst of one of the teams in the Japan Top League competition (not named for reasons of confidentiality).
Written consent was obtained to use the data for research purposes.
Seasons are comprised of a number of matches, matches are made up of sequences of play, which are, in turn, comprised of events.
Our dataset consisted of all of this particular team's matches in their 2018 season against each of the opposition teams they faced.
These matches consist of passages of play (i.e., sequences of events); however, in the original dataset, each match was one long sequence.
One approach is to label whole-match sequences with win/loss outcomes; however, in our initial trials, this did not produce interesting results, since it is obvious that sequences containing a greater number of scoring events will be found within match-sequences labelled as wins.
Therefore, we generated a dataset that is of greater granularity by defining rules that delimit matches into sequences representing passages of play (we outline these in the following subsection).
The 24 unique events (12 unique events for the team and opposition teams), in our data are listed in Table \ref{tbl-original-events}, and some are also depicted in Fig \ref{fig-rugby-events}.
The XML data also contained a more granular level of data than these 24 events represent (i.e., with more detailed events---in other words, a larger number of events); however, in order to reduce computational complexity, the higher level of the data was considered.
\begin{table}[!ht]
\begin{adjustwidth}{-2.25in}{0in}
\centering
\caption{
{\bf Unique events in the original XML data.} Events prefixed by "O-" are performed by the opposition team, while those that are not a performed by the team.}
\begin{tabular}{|l|l|l|}
\thickhline
{\bf event ID} & {\bf event} & {\bf event description} \\ \thickhline
1 & Restart Receptions & Team receives a kick restart made by the opposition team\\ \hline
2 & Phase & Period between breakdowns (team in possession of the ball) \\ \hline
3 & Breakdown & Team player is tackled, resulting in a ruck \\ \hline
4 & Kick in Play & Kick within the field of play (rather than to touch) made by the team\\ \hline
5 & Penalty Conceded & Team gives away a penalty, opposition may re-gain possession \\ \hline
6 & Kick at Goal & Team attempts kick at goal\\ \hline
7 & Quick Tap & Quick restart of play by the team following a free kick awarded to them\\ \hline
8 & Lineout & Ball is thrown in by the team \\ \hline
9 & Error & Mistake made by the team, e.g., lost possession, forward pass, etc. \\ \hline
10 & Scrum & Set piece in which the forwards attempt to push the opposing team off the ball \\ \hline
11 & Try Scored & Team places the ball down over opposition team's line (five points) \\ \hline
12 & Line Breaks & Team breaches the opposition team's defensive line \\ \hline
13 & O-Restart Receptions & Opposition team receives a kick restart made by the team\\ \hline
14 & O-Phase & Period between breakdowns (opposition team in possession of the ball)\\ \hline
15 & O-Breakdown & Opposition player is tackled, resulting in a ruck\\ \hline
16 & O-Kick in Play & Kick within the field of play (rather than to touch) made by the opposition team\\ \hline
17 & O-Penalty Conceded & Opposition team gives away a penalty, team may re-gain possession \\ \hline
18 & O-Kick at Goal & Opposition team attempts kick at goal\\ \hline
19 & O-Quick Tap & Quick restart of play by the opposition team following a free kick awarded to them\\ \hline
20 & O-Lineout & Ball is thrown in by the opposition team\\ \hline
21 & O-Error & Mistake made by the opposition team, e.g., lost possession, forward pass, etc. \\ \hline
22 & O-Scrum & Set piece in which the forwards attempt to push the team off the ball \\ \hline
23 & O-Try Scored & Opposition team places the ball down over the team's line (five points) \\ \hline
24 & O-Line Breaks & Opposition team breaches the team's defensive line \\ \thickhline
\end{tabular}
\begin{flushleft}
\end{flushleft}
\label{tbl-original-events}
\end{adjustwidth}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{rugby_events.PNG}
\caption{{\bf Key events in rugby matches.} The photographs used as the original images are listed in parentheses. All of them are licensed under the unsplash.com license (\url{https://unsplash.com/license}). Top left: Kick at goal (\url{https://unsplash.com/photos/xJSPP3H8XTQ});
Bottom left: Lineout (\url{https://unsplash.com/photos/CTEvFbFpVC8});
Center top: Kick restart/Kick-off (\url{https://unsplash.com/photos/OMdge7F2FyA});
Center bottom: Scrum (\url{https://unsplash.com/photos/y5H3\_7OobJw});
Top Right: Linebreak (\url{https://unsplash.com/photos/XAlKHW9ierw)};
Middle Right: Beginning of a phase (\url{https://unsplash.com/photos/fqrzserMsX4)};
Bottom Right: Breakdown (\url{https://unsplash.com/photos/WByu11skzSc)}}
\label{fig-rugby-events}
\end{figure}
\subsection*{Methods}
\subsubsection*{Delimiting matches into sequences}
Our dataset was converted into labelled event sequences by delimiting each match into passages of play (Fig~\ref{fig-process-flow-1}).
The rules used to delimit matches into sequences of events (passages of play) should ideally produce passages that begin and end at logical points in the match, e.g., when certain events occur, when play stops, or when possession changes (e.g., \cite{liu2018using}), and should result in sequences which are neither overly long nor overly short.
In this study, a passage of play was defined to start with either a kick restart, scrum, or lineout, which are events that result in play temporarily stopping and therefore represent natural delimiters for our dataset.
When there is a kick restart, scrum (except for a scrum reset where a scrum follows another scrum), or lineout, this event becomes the first event in a new event sequence; otherwise, if a try is scored or a kick at goal occurs, a new passage of play also begins.
Applying these rules (also shown in Fig~\ref{fig-process-flow-1}) resulted in a delimited dataset consisting of 490 sequences.
Each of these sequences were made up of events from Table \ref{tbl-original-events}.
At this stage, the delimited dataset is unlabelled, with the scoring events (try scored, kick at goal) for the team and opposition teams contained in the sequences.
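A Python sketch of these delimiting rules is shown below. The event names follow Table~\ref{tbl-original-events}; the exact treatment of scrum resets and of the events immediately following a score is simplified here and should be read as an illustration of the rules rather than our exact implementation.
\begin{verbatim}
# Sketch of the rules used to delimit a match into passages of play.
STARTERS = {"Restart Receptions", "Scrum", "Lineout",
            "O-Restart Receptions", "O-Scrum", "O-Lineout"}
SCORING = {"Try Scored", "Kick at Goal", "O-Try Scored", "O-Kick at Goal"}
SCRUMS = {"Scrum", "O-Scrum"}

def delimit(match_events):
    sequences, current, previous = [], [], None
    for event in match_events:
        scrum_reset = event in SCRUMS and previous in SCRUMS
        if current and event in STARTERS and not scrum_reset:
            sequences.append(current)      # starter event opens a new passage
            current = []
        current.append(event)
        if event in SCORING:               # a score also ends the passage
            sequences.append(current)
            current = []
        previous = event
    if current:
        sequences.append(current)
    return sequences
\end{verbatim}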
\begin{figure}[!h]
\begin{adjustwidth}{-2.25in}{0in}
\includegraphics[width=1.5\textwidth]{fig1-process-flow-part1.PNG}
\caption{{\bf Illustration of the procedure to delimit the raw XML data into labelled sequences of events.}}
\label{fig-process-flow-1}
\begin{flushleft}
\end{flushleft}
\end{adjustwidth}
\end{figure}
\subsubsection*{Experimental dataset creation and comparative approach}
The delimited dataset described was then divided into two datasets.
In the first, which we call the scoring dataset, we consider the case where the sequences are from the team's scoring perspective.
In this dataset, the label $y_i = +1$ represents points being scored or attempted.
Note that while a try scored was certain in terms of points being scored, a kick at goal (depicted in the top-left of Fig \ref{fig-rugby-events}) is not always successful.
In our data, only the kick at goal being attempted (event id 6) was available---not whether the goal was actually successful or not.
However, since it is more important to be able to identify points-scoring opportunities than whether or not the kick was ultimately successful (which is determined by the accuracy of the goal kicker), we assume that 100\% of kicks at goal resulted in points being scored.
In the scoring dataset, the label $y_i = +1$ was assigned to the sequences from the original delimited dataset if a try was scored or a kick at goal was made by the team in sequence $i$.
If there was no try scored and no kick at goal made by the team in sequence $i$, the label $y_i = -1$ was assigned.
Then, since the label now identifies scoring/not scoring, the events that relate to the team scoring---Try scored (event ID = 11) and Kick at goal (event ID = 6)---were removed from the event sequences.
In the second, which we call the conceding dataset, we consider the case where the sequences are from the team's conceding perspective, or equivalently, from the opposition teams' scoring perspective.
In the conceding dataset, the label $y_i = +1$ was assigned to the sequences from the original delimited dataset if a try was scored or a kick at goal was made by the \textit{opposition} team in sequence $i$.
If there was no try scored and no kick at goal made by the \textit{opposition} team in sequence $i$, the label $y_i = -1$ was assigned.
Then, since the label now identifies the opposition team scoring/not scoring, the events that relate to the \textit{opposition} team scoring---O-Try Scored (event ID = 23) and O-Kick at Goal (event ID = 18)---were removed from the event sequences.
The lists of events for the original delimited, scoring and conceding datasets are presented in Table \ref{table1}.
The process applied to create the scoring and conceding datasets from the original delimited dataset is shown in the upper half of Fig~\ref{fig-process-flow-2}.
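A sketch of how the scoring dataset can be derived from the delimited event-ID sequences is given below (event IDs as in Table~\ref{tbl-original-events}); the conceding dataset is produced analogously using the opposition scoring events, IDs 18 and 23.
\begin{verbatim}
# Sketch: build the scoring dataset from the delimited event-ID sequences.
TEAM_SCORING_IDS = {6, 11}    # Kick at Goal and Try Scored (by the team)

def make_scoring_dataset(delimited_sequences):
    labelled = []
    for seq in delimited_sequences:
        label = +1 if any(e in TEAM_SCORING_IDS for e in seq) else -1
        events = [e for e in seq if e not in TEAM_SCORING_IDS]
        labelled.append((label, events))
    return labelled
\end{verbatim}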
\begin{table}[!ht]
\centering
\caption{
{\bf Event lists for the original, scoring and conceding datasets.}}
\begin{tabular}{|l|l|l|l|}
\thickhline
{\bf event ID} & {\bf original} & {\bf scoring} & {\bf conceding} \\ \thickhline
1 & Restart Receptions & Restart Receptions & Restart Receptions \\ \hline
2 & Phase & Phase & Phase \\ \hline
3 & Breakdown & Breakdown & Breakdown \\ \hline
4 & Kick in Play & Kick in Play & Kick in Play \\ \hline
5 & Penalty Conceded & Penalty Conceded & Penalty Conceded \\ \hline
6 & Kick at Goal & & Kick at Goal \\ \hline
7 & Quick Tap & Quick Tap & Quick Tap \\ \hline
8 & Lineout & Lineout & Lineout \\ \hline
9 & Error & Error & Error \\ \hline
10 & Scrum & Scrum & Scrum \\ \hline
11 & Try Scored & & Try Scored \\ \hline
12 & Line Breaks & Line Breaks & Line Breaks \\ \hline
13 & O-Restart Receptions & O-Restart Receptions & O-Restart Receptions \\ \hline
14 & O-Phase & O-Phase & O-Phase \\ \hline
15 & O-Breakdown & O-Breakdown & O-Breakdown \\ \hline
16 & O-Kick in Play & O-Kick in Play & O-Kick in Play \\ \hline
17 & O-Penalty Conceded & O-Penalty Conceded & O-Penalty Conceded \\ \hline
18 & O-Kick at Goal & O-Kick at Goal & \\ \hline
19 & O-Quick Tap & O-Quick Tap & O-Quick Tap \\ \hline
20 & O-Lineout & O-Lineout & O-Lineout \\ \hline
21 & O-Error & O-Error & O-Error \\ \hline
22 & O-Scrum & O-Scrum & O-Scrum \\ \hline
23 & O-Try Scored & O-Try Scored & \\ \hline
24 & O-Line Breaks & O-Line Breaks & O-Line Breaks \\ \thickhline
label & - & Points Scored & O-Points Scored\\ \hline
& $n$=490 & $n_+$=86, $n_-$=404 & $n_+$=44, $n_-$=446\\ \thickhline
\end{tabular}
\begin{flushleft}
\end{flushleft}
\label{table1}
\end{table}
\begin{figure}[!h]
\begin{adjustwidth}{-2.25in}{0in}
\includegraphics[width=1.5\textwidth]{fig1-process-flow-part2.PNG}
\caption{{\bf Illustration of dataset creation and experimental approach.}
Illustration of the procedures to create the datasets from the original delimited dataset to be used in the experiments and to compare the unsupervised and supervised sequential pattern mining models.}
\label{fig-process-flow-2}
\begin{flushleft}
\end{flushleft}
\end{adjustwidth}
\end{figure}
The SPP algorithm (software is available at \url{https://github.com/takeuchi-lab/SafePatternPruning}) was applied to the scoring and conceding datasets.
As a basis for comparison, we compare the obtained subsequences ($q_{j}$s) from SPP with those obtained by the unsupervised algorithms: PrefixSpan, CM-SPAM, CM-SPADE, GSP and Fast.
The SPMF pattern mining package \cite{fournier2014spmf} (v2.42c) was used for the application of the five unsupervised sequential pattern mining algorithms to our dataset.
Since the unsupervised models use unlabelled data, while support values of the patterns of play can be obtained, we cannot obtain weights for the patterns.
For a fairer comparison between the unsupervised models and the supervised model, SPP, we assume prior knowledge of the sequence labels when applying the unsupervised models.
Thus, the unsupervised models were applied to the dataset, which we call ``scoring+1," containing the sequences where the team actually scored, and to the ``conceding+1" dataset, containing the sequences where the team actually conceded points (i.e., the opposition team scored points).
The dataset creation process and comparative approach is presented in Fig~\ref{fig-process-flow-2}.
\subsubsection*{Obtaining pattern weights with safe pattern pruning}
As mentioned, our data consists of sequences comprised of events from Table \ref{tbl-original-events}, which are labelled with an outcome: either +1 or -1, e.g. \newline
\texttt{-1 22 22 17}
\newline
\texttt{1 8 11 2 6} \newline
\texttt{-1 1 2 3 2 9} \newline
\texttt{-1 20 21} \newline
\texttt{-1 10 10 2 3 2 3 2 3 2 3 2 17} \newline
\texttt{1 8 11 2 6} \newline
\texttt{-1 1 2 3 2 3 9} \newline
\texttt{-1 22 16 2} \newline
\texttt{-1 13 14 16}
\newline
...
\newline
We are interested in using SPP to identify subsequences of events that discriminate between outcome +1 and outcome -1.
For instance, in the dataset above, the subsequence [2,3,2] is potentially a discriminative pattern, since it appears in three sequences labelled -1 and none labelled 1, while [11,2,6] is also potentially discriminative since it appears in two sequences labelled 1 and none labelled -1.
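This informal reasoning can be verified directly on the example data above; the sketch below is illustrative only and assumes the standard order-preserving, not necessarily contiguous, notion of subsequence containment used in sequential pattern mining.
\begin{verbatim}
def contains(sequence, pattern):
    """True if `pattern` occurs in `sequence` as an order-preserving
    (not necessarily contiguous) subsequence."""
    it = iter(sequence)
    return all(any(e == p for e in it) for p in pattern)

toy = [(-1, [22, 22, 17]), (1, [8, 11, 2, 6]), (-1, [1, 2, 3, 2, 9]),
       (-1, [20, 21]), (-1, [10, 10, 2, 3, 2, 3, 2, 3, 2, 3, 2, 17]),
       (1, [8, 11, 2, 6]), (-1, [1, 2, 3, 2, 3, 9]), (-1, [22, 16, 2]),
       (-1, [13, 14, 16])]

for pattern in ([2, 3, 2], [11, 2, 6]):
    pos = sum(contains(s, pattern) for y, s in toy if y == 1)
    neg = sum(contains(s, pattern) for y, s in toy if y == -1)
    print(pattern, "->", pos, "sequences labelled +1,", neg, "labelled -1")
\end{verbatim}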
SPP involves taking linear combinations of the subsequences with weights, e.g., $w_1$[2,3,2] + $w_2$[11,2,6] + \ldots, for each sequence, and then using an optimization model to calculate these weights.
Discriminative patterns are those whose weights have positive absolute value (i.e., are non-zero) at the optimal solution.
A classifier based on a sparse linear combination of patterns can be written as
\begin{align}
f(\bg_i ; {\cal Q}) = \sum_{\bq_j \in {\cal Q}} w_j I(\bq_j \sqsubseteq \bg_i) + b,
\label{eq:linear-weighted-subsequence-model}
\end{align}
where $I(\cdot)$ is an indicator function that takes the value 1 if sequence $\bg_i$ contains subsequence $\bq_j$ and 0 otherwise; and $w_j \in \mathbb{R}$ and $b \in \mathbb{R}$ are parameters of the linear model, which are estimated by solving the following minimisation problem (as well as its dual maximization problem; see \cite{sakuma2019efficient} for details of the pruning criterion):
\begin{align}
\min_{\bw, b} \sum_{i \in [n]} \ell(y_i, f(\bg_i ; {\cal Q})) + \lambda \| \bw \|_1,
\label{eq:LWSM-optimization}
\end{align}
where
$\bw = [w_1, \ldots, w_d]^\top$ is a vector of weights, $\ell$ is a loss function and $\lambda > 0$ is a regularization parameter that can be tuned by cross-validation.
Note that, due to the combinatorial number of potential patterns of play, the size of ${\cal Q}$ is in general very large.
The goal of SPP is to reduce the size of ${\cal Q}$ by removing unnecessary patterns from the entire pattern-tree that was grown by PrefixSpan according to the SPP pruning criterion \cite{sakuma2019efficient}.
The minimization problem \eqref{eq:LWSM-optimization} was, in the present study, solved with an L1-regularised L2-Support Vector Machine (the default option -u 1 in the S3P classifier command line options \url{https://github.com/takeuchi-lab/S3P-classifier}), with 10-times-10 cross-validation used to tune the regularization parameter $\lambda$ (options -c 1 -M 1 in the S3P classifier command line options).
The maximum pattern length parameter (option -L in the S3P classifier command line options) was set to 20.
The feature vector $\bx_i = [x_{i1}, x_{i2},\ldots, x_{id}]$ is defined for the $i$th sequence $\bg_i$ as
\begin{align}
x_{ij} = I(\bq_j \sqsubseteq \bg_i), \hspace{5em} j = 1, \ldots, |{\cal Q}|.
\end{align}
In other words, the feature vectors $\bx_i = [I(\bq_1 \sqsubseteq \bg_i), I(\bq_2 \sqsubseteq \bg_i),\ldots, I(\bq_d \sqsubseteq \bg_i)]$ are binary variables that take the respective values 1 or 0 based on whether or not subsequence $\bq_j$ is contained within sequence $\bg_i$.
The squared hinge-loss function $\ell(y,f(\bx_i)) = \max\{0, 1 - y f(\bx_i) \}^2$ is used for a two-class problem like ours, in which case the optimization problem (\ref{eq:LWSM-optimization}) becomes:
\begin{align}
\min_{\bw, b} \sum_{i \in [n]} \max \left\{ 0, 1 - y_i (\bw^\top \bx_i + b) \right\}^2 + \lambda \| \bw \|_1.
\label{eq:L2-SVM}
\end{align}
Discriminative patterns are those with non-zero weights (i.e., positive absolute values) in the optimal solution to (\ref{eq:L2-SVM}) (in SPP, some weights are removed prior to solving the optimization problem by using safe screening---see \nameref{S2_Appendix} for more details).
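For illustration, the final convex problem (\ref{eq:L2-SVM}) can be solved with off-the-shelf software once the binary containment features $\bx_i$ have been built; the sketch below is not the SPP/S3P implementation used in this study (it omits the pattern-tree growth, safe screening and pruning), and, unlike (\ref{eq:L2-SVM}), scikit-learn also includes the intercept in the regularisation term.
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def fit_pattern_weights(X, y, lam=1.0):
    """X: n-by-d binary matrix, X[i, j] = 1 iff pattern q_j is contained
    in sequence g_i; y: labels in {-1, +1}.  LinearSVC with an L1 penalty
    and squared hinge loss minimises ||w||_1 + C * (sum of squared hinge
    losses), matching the problem above up to the choice C = 1/lambda."""
    clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False,
                    C=1.0 / lam)
    clf.fit(X, y)
    return clf.coef_.ravel(), clf.intercept_[0]
\end{verbatim}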
In this study, in order to exclude patterns that may have occurred merely by chance, obtained patterns ($q_{j}$s) with support of less than five were removed from all datasets.
In the case of the patterns obtained by the unsupervised models, the top five patterns with the largest support values were recorded.
In the case of the SPP-obtained patterns, the top five patterns with the largest positive $w_{j}$ values were recorded.
In addition, we restricted our analysis to patterns of play that had the highest positive weights.
For the scoring dataset, this means the patterns that had a positive contribution to the team scoring.
For the conceding dataset, this means the patterns that had a positive contribution to opposition teams scoring.
In other words, for the sake of brevity, we did not consider the patterns that had the highest contribution to ``not scoring'' and ``not conceding.''
The obtained results are presented in the following section.
\section*{Results}
\label{sec-results}
\subsection*{Analysis of sequence lengths}
Sequences contained an average of 10.6 events in the scoring dataset and 10.8 events in the conceding dataset.
The shortest sequence contained two events, and the longest contained 48 events (Table \ref{table2}).
The slight differences in mean sequence length between the scoring and conceding datasets are a result of the removal of the try and kick at goal events from the sequences in order to create the sequence outcome label (as mentioned in the Materials and Methods section above).
The sequence length distributions are positively skewed and non-normal (Fig~\ref{fig-seq-len-dists}), which was confirmed by Shapiro-Wilk tests.
Comparing these distributions shows that sequences in which points were scored were more frequent in the scoring dataset than in the conceding dataset, which reflects the strength of the team in the 2018 season.
From the team's scoring perspective, 86 out of the 490 passages of play (18\%) resulted in points being scored by the team, while from the team's conceding perspective, 44 out of the 490 passages of play (9\%) resulted in points conceded.
The sequences in which the team scored points were somewhat longer, containing 12.8 events on average, compared with 10.2 events for those in which the team did not score.
The sequences in which the team conceded points contained 11.2 events on average, compared with 10.8 events for those in which the team did not concede.
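For reference, the descriptive statistics and the normality check reported here can be reproduced with a few lines of standard Python; the variable \texttt{labelled\_sequences} is a placeholder for a list of (label, event-ID list) pairs for one dataset.
\begin{verbatim}
import numpy as np
from scipy.stats import shapiro, skew

# labelled_sequences: list of (label, [event IDs]) pairs for one dataset
lengths = np.array([len(events) for _, events in labelled_sequences])
print(f"mean={lengths.mean():.1f}  sd={lengths.std(ddof=1):.1f}  "
      f"median={np.median(lengths):.0f}  skew={skew(lengths):.1f}")
W, p = shapiro(lengths)          # a small p-value rejects normality
print(f"Shapiro-Wilk: W={W:.3f}, p={p:.3g}")
\end{verbatim}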
\begin{figure}[!h]
\includegraphics[width=1\textwidth]{scoring_conceding_plot.PNG}
\caption{{\bf Sequence length distributions.}
Distribution of sequence lengths by points-scoring outcome for the scoring and conceding datasets. Sequence length is defined as the number of events in each sequence (excluding the outcome label).}
\label{fig-seq-len-dists}
\end{figure}
\begin{table}[!ht]
\centering
\caption{
{\bf Descriptive statistics for the scoring and conceding datasets.}}
\begin{tabular}{|l|l|l|}
\thickhline
{\bf } & {\bf scoring} & {\bf conceding} \\ \thickhline
Mean & 10.6 & 10.8 \\ \hline
Standard deviation & 7.8 & 7.9\\ \hline
Minimum & 2 & 2 \\ \hline
25th percentile & 5 & 5\\ \hline
Median & 8 & 8\\ \hline
75th percentile & 15 & 15\\ \hline
Maximum & 48 & 48\\ \hline
Skewness & 1.3 & 1.4\\ \thickhline
\end{tabular}
\begin{flushleft}
\end{flushleft}
\label{table2}
\end{table}
\subsection*{Identification of important patterns of play using SPP}
\label{subsec-important-patterns}
SPP initially obtained 93 patterns when applied to the scoring dataset, of which 75 had support of 5 or higher.
Out of these 75 patterns of play, 38 had a positive weight ($w_j > 0$).
The 75 patterns with minimum support of 5 contained an average of 4.5 events, and the 38 patterns with positive weights contained an average of 5.4 events.
The longest obtained pattern in the scoring dataset contained 16 events.
Applying SPP to the conceding dataset resulted in a total of 72 patterns, of which 51 had support of 5 or higher.
Out of these 51 patterns of play, 31 had a positive weight ($w_j > 0$).
The 51 patterns with minimum support of 5 contained an average of 3.8 events, and the 31 patterns with positive weights contained an average of 4.4 events.
The longest obtained pattern in the conceding dataset contained 15 events.
The five most discriminative patterns between scoring and non-scoring outcomes (i.e., patterns with the highest positive weight contributions) were obtained by applying SPP to the scoring dataset, and are listed along with their weight values and odds ratios in Table \ref{table3}.
In the results tables, the notation [p] x n denotes that pattern p is repeated n times.
We include the odds ratio (OR) for these patterns (simply the exponential of the weight), which aids interpretation by comparing the odds of the outcome when a sequence contains a particular pattern with the odds when it does not.
The pattern in the scoring dataset with the highest weight value (0.919), which discriminated the most between scoring and non-scoring sequences, was a pattern with a single line break event (event id 12).
The OR for the linebreak pattern is exp(0.919)=2.506, meaning that the team is 2.5 times more likely to score when a line break occurs in a sequence of play than if a line break is not made in a sequence of play.
Line breaks, which involve breaking through an opposition team's line of defense (see the top-right image in Fig \ref{fig-rugby-events}), advance the attacking team forward and are thus expected to create possible scoring opportunities.
A lineout followed by phase play (8 2) was the second most discriminative pattern between scoring and not scoring, with a weight of 0.808 and an OR of 2.242, indicating that the team is 2.2 times more likely to score when a lineout followed by a phase occurs in a sequence of play than if it does not.
The third most discriminative pattern, 2 3 4 2 3 (w=0.796, OR=2.217), can be interpreted as a kick in play being made by the team and being re-gathered by the team, thus resulting in retained possession.
This indicates that the team is 2.2 times more likely to score when this pattern occurs in a sequence of play than if it does not.
The fourth most discriminative pattern, 2 3 2 3 2 3 2 3 4 (w=0.732, OR=2.079), represents four repeated phase-breakdown plays by the team, followed by the team making a kick in play, which indicates repeated retaining of possession before presumably gaining territory in the form of a kick.
This indicates that the team is 2.1 times more likely to score when this pattern occurs in a sequence of play than if it does not.
The fifth most discriminative pattern, 13 14 15 14 15 16 14 2 3 (w=0.710, OR=2.033), can be interpreted as the opposition team receiving a kick restart made by the team, attempting to exit their own territory via a kick but not finding touch, thus giving the ball back to the team from which they can potentially build phases and launch an attack.
This indicates that the team is twice as likely to score when this pattern occurs in a sequence of play than if it does not.
\begin{table}[!ht]
\begin{adjustwidth}{-2.25in}{0in}
\centering
\caption{
{\bf Top five most discriminative SPP-obtained patterns between scoring and non-scoring outcomes.}}
\begin{tabular}{|l|p{10cm}|l|l|l|}
\thickhline
\bf{pattern ($q_{j}$)} & \bf{pattern description} & \bf{support} & \bf{weight} & \bf{OR}\\ \thickhline
12 & linebreak & 77 & 0.919 & 2.506\\ \hline
8 2 & lineout, phase & 71 & 0.808 & 2.242\\ \hline
2 3 4 2 3 & phase, breakdown, kick in play, phase, breakdown & 9 & 0.796 & 2.217 \\ \hline
2 3 2 3 2 3 2 3 4 & {[}phase, breakdown{]}x4, kick in play & 9 & 0.732 & 2.079\\ \hline
13 14 15 14 15 16 14 2 3 & O-restart received, {[}O-phase, O-breakdown{]}x2, O-kick in play, phase, breakdown & 6 & 0.710 & 2.033\\ \thickhline
\end{tabular}
\begin{flushleft}
\end{flushleft}
\label{table3}
\end{adjustwidth}
\end{table}
The five most discriminative patterns between conceding and non-conceding outcomes (i.e., patterns with the highest positive weights) were obtained by applying SPP to the conceding dataset, and are listed along with their weight values and odds ratios in Table \ref{table4}.
A line break by the opposition team (event ID 24; w=0.613, OR=1.846) was the most discriminative pattern between sequences in which the team conceded and those in which it did not; in other words, it discriminated the most between the sequences in which the opposition team scored and those in which they did not.
The weight magnitude was smaller than that for the team scoring from a line break (w=0.613 vs. w=0.919), suggesting that the team has a strong defence: line breaks by the opposition were less likely to result in the opposition scoring than line breaks made by the team through the opposition defensive line were to result in the team scoring.
The OR of 1.8 indicated that the opposition team is 1.8 times more likely to score when they make a linebreak in a sequence of play than if they do not.
The second most discriminative pattern 14 9 15 (w=0.392, OR=1.479) between conceding and non-conceding outcomes can be interpreted as the opposition team being in possession of the ball, the team making some form of error, and the opposition team regaining possession.
The opposition team is 1.5 times more likely to score when this pattern occurs in a sequence of play than if it does not.
The third most discriminative pattern (20) between conceding and non-conceding outcomes was an opposition team lineout (w=0.357, OR=1.428).
The opposition team is 1.4 times more likely to score if they have a lineout in a sequence of play than if they do not.
The fourth (w=0.339, OR=1.403) and fifth (w=0.261, OR=1.299) most discriminative patterns for the conceding dataset represent repeated phase and breakdown play, with the fifth subsequence, for example, indicating the opposition team making over six repeated consecutive phases and breakdowns, suggesting the retaining of possession and building of pressure by the opposition team.
\begin{table}[!ht]
\begin{adjustwidth}{-2.25in}{0in}
\caption{
{\bf Top five most discriminative SPP-obtained patterns between conceding and non-conceding outcomes.}}
\begin{tabular}{|l|p{7cm}|l|l|l|}
\thickhline
\bf{event id pattern ($q_{j}$)} & \bf{pattern description} & \bf{support} & \bf{weight} & \bf{OR}\\ \thickhline
24 & O-Linebreak & 32 & 0.613 & 1.846\\ \hline
14 9 15 & O-phase, error, O-breakdown & 10 & 0.392 & 1.479\\ \hline
20 & O-lineout & 86 & 0.357 & 1.428\\ \hline
15 15 14 15 & O-breakdown, O-breakdown, O-phase, O-breakdown & 5 & 0.339 & 1.403 \\ \hline
15 14 15 14 15 14 15 14 15 14 15 14 15 & {[}O-breakdown, O-phase{]}x6, O-breakdown & 16 & 0.261 & 1.299\\ \thickhline
\end{tabular}
\begin{flushleft}
\end{flushleft}
\label{table4}
\end{adjustwidth}
\end{table}
\subsection*{Comparison of SPP-obtained patterns to those obtained by unsupervised models}
\label{subsec-comparison-unsupervised}
Tables \ref{table5} and \ref{table6} show the top five subsequences in terms of their support from the scoring+1 and conceding+1 datasets.
\begin{table}[!ht]
\caption{
{\bf Top five PrefixSpan-obtained patterns of play with the largest support: scoring+1 dataset.}}
\begin{tabular}{|l|l|l|l|l|l|}
\thickhline
\bf{PrefixSpan} & \bf{CM-SPAM} & \bf{CM-SPADE} & \bf{GSP} & \bf{Fast} & \bf{support}\\ \thickhline
2 & 2 & 2 & 2 & 2 & 84\\ \hline
2 3 & 3 & 3 & 3 & 3 & 60\\ \hline
3 & 2 3 & 2 3 & 2 3 & 2 3 & 60\\ \hline
2 2 & 2 2 & 3 2 & 2 2 & 2 2 & 59\\ \hline
2 3 2 & 2 3 2 & 2 2 & 3 2 & 3 2 & 59\\ \thickhline
\end{tabular}
\label{table5}
\end{table}
\begin{table}[!ht]
\caption{
{\bf Top five PrefixSpan-obtained patterns of play with the largest support: conceding+1 dataset.}}
\begin{tabular}{|l|l|l|l|l|l|}
\thickhline
\bf{PrefixSpan} & \bf{CM-SPAM} & \bf{CM-SPADE} & \bf{GSP} & \bf{Fast} & \bf{support}\\ \thickhline
14 & 14 & 14 & 14 & 14 & 39\\ \hline
14 15 & 15 & 15 & 15 & 15 & 33\\ \hline
15 & 14 15 & 14 15 & 14 15 & 14 15 & 33\\ \hline
14 14 & 14 14 & 15 14 & 14 14 & 14 14 & 29\\ \hline
14 15 14 & 14 15 14 & 14 14 & 15 14 & 15 14 & 29\\ \thickhline
\end{tabular}
\label{table6}
\end{table}
The obtained results show that the unsupervised models detected common events and patterns, i.e., breakdowns and phases.
Repeated breakdown and phase play is a means of retaining possession of the ball and building pressure (see the middle and bottom images on the right-hand side of Fig \ref{fig-rugby-events}).
Longer repeated breakdown and phase plays were also identified by SPP.
However, the patterns obtained by the unsupervised models are not particularly useful for coaches or performance analysts, since they merely reflect common, repeated patterns rather than interesting ones.
Because the supervised SPP approach uses sequences representing passages of play labelled with points-scoring outcomes, its computed weights provide a measure of the importance of patterns of play to these outcomes.
In addition, compared to the unsupervised models, the supervised SPP model obtained a greater variety of patterns of play, i.e., not only those containing breakdowns and/or phases, and also discovered more sophisticated patterns.
\section*{Discussion}
\label{sec-discussion}
In this study, a supervised sequential pattern mining model called safe pattern pruning (SPP) was applied to data from professional rugby union in Japan, consisting of sequences in the form of passages of play that are labelled with points scoring outcomes.
The obtained results suggest that the SPP model was useful in detecting complex patterns (patterns of play) that are important to scoring outcomes.
SPP was able to identify relatively sophisticated, discriminative patterns of play, which make sense in terms of their interpretation, and which are potentially useful for coaches and performance analysts for own- and opposition-team analysis in order to identify vulnerabilities and tactical opportunities.
Considering both the scoring and conceding perspectives of the team yielded insight that would be useful both to the team itself and to opposition teams that are due to play them.
For both the team and their opposition teams during the 2018 season, linebreaks were found to be most associated with scoring.
For both the team and their opposition teams, lineouts were found to be more beneficial to generate scoring opportunities than scrums.
These results are consistent with \cite{coughlan2019they}, who found that lineouts followed by a driving maul are common approaches to scoring tries (albeit in a different competition, Super Rugby), and with \cite{sasaki2007scoring}, who found that around one-third of tries came from lineouts in the Japan Top League in 2003 to 2005---the highest of any try source.
As well as creating lineouts or perhaps prioritising them over scrums, for opposition teams playing the team, effective strategies may include maintaining possession with repeated phase-breakdown play (by aiming for over six repetitions), shutting down the team's ability to regain kicks, and making sure to find touch on exit plays from kick restarts made by the team.
As mentioned, compared to the unsupervised models, the supervised SPP model obtained a greater variety of patterns that were also more complex.
This is likely due to the advantage of the supervised (i.e., labelled) nature of SPP as well as the safe screening and pattern pruning mechanisms of SPP, which prune out irrelevant sequential patterns and model weights in advance.
The approach highlighted the potential utility of supervised sequential pattern mining as an analytical framework for performance analysis in sport, and more specifically, the potential usefulness of sequential pattern mining techniques for performance analysis in rugby.
Although the results obtained are encouraging, a limited amount of data from one sport was used.
Also, spatial information such as field position was not available in the data, which may have improved the analysis.
Although our analysis distinguished which team performed a particular event, it did not consider which player performed it; this may be interesting to investigate in future work.
A limitation of SPP is that, although we considered the order of events within the sequences and their label, the method does not consider the order of sequences within matches, which could also be of informative value (e.g., a particular pattern occurring in the second half of a match may be more important than if it occurs in the first half).
Furthermore, although SPP was useful for the specific dataset in this study, its usefulness is to some degree dependent on the structure of the input data and the specific definition of the sequences and labels.
For instance, applying the approach to a dataset in which entire matches are the sequences and win/loss outcomes are the labels does not tend to produce interesting results: it is self-evident that sequences containing more scoring events will be more associated with wins, so on such datasets SPP would simply pick up the scoring events.
In future work, it would be interesting to apply the approach to a larger amount of data from rugby, as well as to similarly structured datasets in other sports in order to confirm its efficacy.
\paragraph*{S1 Dataset.}
\label{S1_Dataset}
The delimited sequence data that is described in the paper is available on GitHub: \url{https://github.com/rbun013/Rugby-Sequence-Data}.
\paragraph*{S2 Appendix.}
\label{S2_Appendix}
{\bf Safe Screening and Regularization Path Initialization.}
Some weights are removed prior to solving (\ref{eq:L2-SVM}) using safe screening, which corresponds to finding $j$ such that $w_j^* = 0$ in the optimal solution $\bw^* := [w_1^*, \ldots, w_{d}^*]^\top$ of the optimization problem (\ref{eq:L2-SVM}).
Such weights do not affect the optimal solution even if they are removed beforehand.
In the optimal solution $\bw^*$ of the optimization problem {\rm (\ref{eq:LWSM-optimization})}, the set of $j$ such that $|w_j^*| > 0$ is called the active set, and is denoted ${\cal A} \subseteq {\cal Q}$.
In this case, even if only the subsequence patterns included in ${\cal A}$ are used, the same optimal solution as when using all the subsequence patterns can be obtained.
Thus, if one solves
\begin{align}
(\bw^{\prime *}_{{\cal A}},b^{\prime *}) := \mathop{\rm argmin}_{\bw,b} &
\sum_{i \in [n]} \ell(y_i, f(\bg_i ; \{ \bq_j \}_{j \in {\cal A}})) + \lambda \| \bw \|_1,
\label{eq:primal-problem-active}
\end{align}
%
then it is guaranteed that $\bw^* = \bw^{\prime *}_{{\cal A}}$ and $b^* = b^{\prime *}$.
\\
In practice,
the $\lambda$ parameter is found
based on a model selection technique
such as cross-validation.
In model selection,
a sequence of solutions, a so-called regularization path, with various penalty parameters must be trained.
The regularization path of the problem
\eqref{eq:LWSM-optimization}, $\{\lambda_{0},\lambda_{1}, \ldots, \lambda_K\}$,
is usually computed with decreasing $\lambda$
because sparser solutions are obtained for larger $\lambda$.
The initial values for computing the regularization path are set to $\bm w^* \xleftarrow[]{} \bm 0$,
$b^* \xleftarrow[]{} \bar{y}$ (where $\bar{y}$ is the sample mean of $\{y_i\}_{i \in [n]}$) and $\lambda_{\rm 0} \xleftarrow[]{} \lambda_{\rm max}$ (see \cite{sakuma2019efficient} for how $\lambda_{\rm max}$ is calculated and for further details of the safe pattern pruning model and its safe-screening mechanism).
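As a rough illustration only (it omits the $\lambda_{\rm max}$ rule and the safe-screening machinery of \cite{sakuma2019efficient} that make SPP efficient), a regularization path can be traced by refitting the L1-regularised squared-hinge SVM along a decreasing sequence of $\lambda$ values and recording the active set at each step:
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def regularisation_path(X, y, lambdas):
    """Record the active set A = {j : |w_j| > 0} for each lambda,
    visiting the lambdas in decreasing order (larger lambda -> sparser w)."""
    path = []
    for lam in sorted(lambdas, reverse=True):
        clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False,
                        C=1.0 / lam)
        clf.fit(X, y)
        path.append((lam, np.flatnonzero(clf.coef_.ravel())))
    return path
\end{verbatim}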
\section*{Acknowledgments}
This work was partially supported by MEXT KAKENHI (20H00601, 16H06538), JST CREST (JPMJCR1502), and RIKEN Center for Advanced Intelligence Project.
\nolinenumbers
\section*{An overview on athletic performance prediction and our contributions}
Performance prediction and modeling are cornerstones of sports medicine, essential in training and assessment of athletes with implications beyond sport, for example in the understanding of aging, muscle physiology, and the study of the cardiovascular system.
Existing research on athletic performance focuses either on (A) explaining world records~\cite{lietzke1954analytical, henry1955prediction, riegel1980athletic, katz1994fractal, savaglio2000human, garcia2005middle}, (B) equivalent scoring~\cite{purdy1974computer,purdy1976computer}, or (C) modelling of individual physiology.
Currently, however, there is no parsimonious model which is able to simultaneously explain individual physiology (C) and collective performance (A,B).
We present such a model: a non-linear low-rank model derived from a database of UK athletes. As an important feature, it leverages an individual power law, which explains the power laws known to apply to world records and allows us to derive individual training parameters from prior performance data, without recourse to physiological measurements. Predictions obtained using our approach are the most accurate to date, with an average prediction error of under 4 minutes (2\% rel.MAE and 3\% rel.RMSE out-of-sample, see Tables~\ref{tab:rrmse_time},~\ref{tab:rmae_time} and appendix S.I.b) for elite performances.
We anticipate that our framework will allow us to leverage existing insights in the study of world record performances, collective performances, and especially sports medicine for a better understanding of human physiology.
Our work builds on the three major research strands in prediction and modeling of running performance, which we briefly summarize:
{\bf (A) Power law models of performance} posit a power law dependence $t = c\cdot s^\alpha$ between duration of the distance run $t$ and the distance $s$, for constants $c$ and $\alpha$.
Power law models have been known to describe world record performances across sports for over a century~\cite{kennelly1906approximate}, and have been applied extensively to running performance~\cite{lietzke1954analytical, henry1955prediction, riegel1980athletic, katz1994fractal, savaglio2000human, garcia2005middle}. These power laws have been applied by practitioners for prediction: the Riegel formula~\cite{riegel1977time} predicts performance by fitting $c$ to each athlete and leaving $\alpha =1.06$. The power law approach has the benefit of \emph{modelling} performances in a scientifically parsimonious way.
{\bf (B) Scoring tables}, such as those of the international association of athletics federations (IAAF), render performances over disparate distances comparable by presenting them on a single scale. These tables have been published by sports associations for almost a century~\cite{purdy1974history}
and catalogue, rather than model, performances of equivalent standard. Performance predictions can be obtained from scoring tables by forecasting a time with the same score as an existing attempt, as implemented in the popular Purdy Points scheme~\cite{purdy1974computer,purdy1976computer}. The scoring table approach has the benefit of \emph{describing} performances in an empirically accurate way.
{\bf (C) Explicit modeling of performance related physiology} is an active subfield of sports science. Several physiological parameters are known to be related to athletic performance, including critical speed (= VO2max)~\cite{hill1924muscular,billat1996significance}, blood lactate concentration and the anaerobic threshold~\cite{wasserman1973anaerobic,bosquet2002methods}. Physiological parameters can be used (C.i) to make direct predictions when clinical measurements are available~\cite{noakes1990peak,billat1996use,bundle2003high}, or (C.ii) to obtain theoretical models describing physiological processes~\cite{keller1973ia, peronnet1989mathematical,van1994optimisation,di2003factors}. This approach has the benefit of \emph{explaining} performances physiologically.
All three approaches (A),(B),(C) have appealing properties, as explained above, but none provides a complete treatment of athletic performance prediction:
(A) individual performances do not follow the parsimonious power law perfectly; (B) the empirically accurate scoring tables do not provide a simple interpretable relationship.
\emph{Neither} (A) nor (B) can deal with the fact that athletes may differ from one another in multiple ways.
The clinical measurements in (C.i) are informative but usually available only for a few select athletes, typically at most a few dozen (as opposed to the 164,746 considered in our study). The interpretable models in (C.ii) are usually designed not with the aim of predicting performance but to explain physiology or to estimate physiological parameters from performances; thus these methods are not directly applicable without additional work.
The approach we present unifies the desirable properties of (A),(B) and (C), while avoiding the aforementioned shortcomings.
We obtain (A) a parsimonious model for individual athletic performance that is (B) empirically derived from a large database of UK athletes. It yields the best performance predictions to date (rel.MAE 2\%, average error 3-4 min for elite athletes) and (C) unveils hidden descriptors for individuals which we find to be related to physiological characteristics.
Our approach bases predictions on \emph{Local Matrix Completion} (LMC), a machine learning technique which posits the existence of a small number of explanatory variables which describe the performance of individual athletes.
Application of LMC to a database of athletes allows us to derive a parsimonious physiological model describing athletic performance of \emph{individual} athletes. We discover that a \emph{three-number-summary} for each individual explains performance over the full range of distances from 100m to the Marathon; the three numbers relate to: (1) the endurance of an athlete, (2) the relative balance between speed and endurance, and
(3) specialization over middle distances. The first number explains most of the individual differences over distances greater than 800m, and may be interpreted as describing the presence of \emph{an individual power law for each athlete}, which holds with remarkably high precision; while world-record data have been shown to follow an approximate power-law with an exponent $\alpha$, we show that the performances of \emph{each athlete}, $i$,
follow a power-law with a distinct exponent $\alpha_i$. Vitally, we show that these power-laws reflect the data more accurately than the power-law for world records.
We anticipate that this individual power law and the three-number summary will allow for exact quantitative assessment in the science of running and related sports.
\begin{figure}
\begin{center}
\includegraphics[width=150mm,clip=true,trim= 5mm 20mm 5mm 20mm]{../../figures/select_examples/exp_illustration/three_athletes_anno.pdf} \\
\end{center}
\caption{Non-linear deviation from the power law in individuals as the central phenomenon. Top left: performances of world record holders. Curves labelled by athletes are their known best performances (y-axis) at that event (x-axis). Individual performances deviate non-linearly from the world record power law. Top right: a good model will take into account specialization, illustrated by example. Hypothetical performance curves of three athletes, green, red and blue, are shown; the task is to predict green on 1500m from all other performances. Dotted green lines are predictions. State-of-the-art methods such as Riegel or Purdy predict green performance on 1500m close to blue and red; a realistic predictor for 1500m performance of green - such as LMC - will predict that green outperforms red and blue on 1500m, since blue and red being better on 400m indicates that, out of the three athletes, green specializes most on longer distances. Bottom: using local matrix completion as a mathematical prediction principle by filling in an entry in a $(3\times 3)$ sub-pattern. Schematic illustration of the algorithm.
\label{fig:illustration}}
\end{figure}
\section*{Local Matrix Completion and the Low-Rank Model}
It is well known that world records over distinct distances are held by distinct athletes---no one single athlete holds all running world records. Since world record data obey an approximate power law, this implies that the individual performance of each athlete deviates from this power law.
The left top panel of Figure~\ref{fig:illustration} displays world records and the corresponding individual performances of world record holders in logarithmic coordinates---an exact power law would follow a straight line. The world records align closely to a fictitious line, while individuals deviate non-linearly. Also notable is the kink in the world records which makes them deviate from an exact straight line, a phenomenon which we refer to as the ``broken power law''~\cite{savaglio2000human} and which has so far remained unexplained.
Thus any model for individual performances must model this individual, non-linear variation - and will, optimally, explain the broken power law observed for world records as an epiphenomenon of such variation over individuals. We explain how the LMC scheme captures individual variation in a typical scenario:
Consider three athletes (taken from the data base) as shown in the right top panel of Figure~\ref{fig:illustration}. The 1500m performance of the green athlete is not known and is to be predicted. All three athletes - green, blue and red - have similar performance on 800m. Any classical method taking only that information into account will predict green performing similarly on 1500m to the blue and the red, e.g.~somewhere in-between. However, this is unrealistic, since it does not take into account specialization: looking at the 400m performance, one can see that the red athlete is quickest over short distances, followed by the blue and then by the green, whose relative speed surpasses the remaining athletes over longer distances. Using this additional information leads to the more realistic prediction that the green athlete will out-perform red and blue on 1500m. Supplementary analysis (S.IV) validates that the phenomenon presented in the example is prevalent in the whole data.
LMC is a quantitative method for taking into account this event specialization. A schematic overview of the simplest variant is displayed in the bottom panel of Figure~\ref{fig:illustration}: to predict an event for an athlete (figure: 1500m for green) we find a 3-by-3-pattern $A$ with exactly one missing entry, which means the two other athletes (figure: red and blue) have attempted similar events and have data available. Explanation of the green athlete's curve by the red and the blue is mathematically modelled by demanding that the data of the green athlete is given as a weighted sum of the data of the red and the blue. The green athlete's data is such a linear combination whenever the \emph{determinant} of the pattern vanishes, which we write as $\det(A) = 0$. The determinant may be thought of as a quantitative measure for describing when the number of degrees of freedom in a set of variables is lower than the number of variables, implying that the variables
are in a sense redundant. This redundancy is exactly what is exploited in applying LMC.
A prediction is made by solving the equation $\det(A) = 0$ for ``?''. For more accuracy, candidate solutions from multiple 3-by-3-patterns are averaged in a way that minimizes the expected error in approximation. We will consider variants of the algorithm which use $n$-by-$n$-patterns, with $n$ corresponding to the complexity of the model (we later show $n=4$ to be optimal). See the methods appendix for an exact description of the algorithm used.
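The determinant step can be made concrete numerically: since the determinant is an affine function of the single unknown entry, $\det(A)=0$ can be solved from two trial evaluations. The sketch below produces one candidate solution from one $3\times 3$ pattern only; the full LMC procedure combines many such candidates with variance-minimising weights (see the methods appendix), and the use of log-time coordinates here is an assumption for illustration.
\begin{verbatim}
import numpy as np

def lmc_candidate(A, i, j):
    """Fill the single missing entry A[i, j] of a square pattern by solving
    det(A) = 0; the determinant is affine in that entry, so two evaluations
    suffice (assumes the corresponding cofactor is non-zero)."""
    A0, A1 = A.copy(), A.copy()
    A0[i, j], A1[i, j] = 0.0, 1.0
    d0, d1 = np.linalg.det(A0), np.linalg.det(A1)
    return -d0 / (d1 - d0)        # root of x -> d0 + (d1 - d0) * x

# hypothetical 3x3 pattern of log-times (rows: athletes, columns: events)
A = np.array([[4.60, 5.45, 6.30],
              [4.55, 5.42, 6.32],
              [4.65, 5.50, np.nan]])
print(lmc_candidate(A, 2, 2))
\end{verbatim}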
The LMC prediction scheme is an instance of the more general local low-rank matrix completion framework introduced in~\cite{kiraly15matrixcompletion}, here applied to performances in the form of a numerical table (= matrix) with columns corresponding to events and rows to athletes. The cited framework is the first matrix completion algorithm which allows prediction of single missing entries as opposed to all entries. While matrix completion has proved vital in predicting consumer behaviour and recommender systems, we find that existing approaches which predict all entries at once cannot cope with the non-standard distribution of missingness and the noise associated with performance prediction in the same way as LMC can (see findings and supplement S.II.a). See the methods appendix for more details of the method and an exact description.
In a second step, we use the LMC scheme to fill in all missing performances (over all events considered---100m, 200m etc.) and obtain a parsimonious low-rank model that explains individual running times $t$ in terms of distance $s$ by:
\begin{align}
\log t = \lambda_1 f_1(s) + \lambda_2 f_2(s) + \dots + \lambda_r f_r(s) \label{eq:model},
\end{align}
with components $f_1$, $f_2$, $\dots$ that are universal over athlete, and coefficients $\lambda_1$, $\lambda_2$, $\dots$ which depend on the athlete under consideration.
The number of components $r$ is known as the \emph{rank} of the model and measures its complexity; when considering the data in matrix form, $r$ translates to
\emph{matrix rank}.
The Riegel power law is a very special case, demanding that $\lambda_1 = 1.06$ for every athlete, $f_1(s) = \text{log}~s$ and $\lambda_2 f_2(s) = c$ for a constant $c$ depending on the athlete. Our analyses will show that the best model is of rank $r = 3$ (meaning that we consider patterns or matrices of size $n \times n = 4 \times 4$, since $n=r+1$), and all three coefficients $\lambda_1,\dots,\lambda_3$ vary across all athletes. Remarkably, we find that $f_1(s) = \text{log}~s$, yielding an individual power law; the coefficient $\lambda_1$ which varies over athletes thus has a natural interpretation as an individual power law exponent.
We remark that first filling in the entries with LMC and only then fitting the model is crucial due to data which is non-uniformly missing (see supplement S.II.a). More details on our methodology can be found in the methods appendix.
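Once the performance matrix has been completed, the shared components $f_i$ and the athlete-specific coefficients $\lambda_i$ of Equation~\eqref{eq:model} can be read off a truncated singular value decomposition. The sketch below shows only this step; the LMC completion itself is described in the methods appendix, and how the overall scale is split between coefficients and components (here the singular values are absorbed into the coefficients) is a convention.
\begin{verbatim}
import numpy as np

def low_rank_model(X_logtime, rank=3):
    """X_logtime: completed athletes-by-events matrix of log-times.
    Returns per-athlete coefficients (lambda_1..lambda_r) and the shared
    components (f_1..f_r evaluated at the event distances)."""
    U, s, Vt = np.linalg.svd(X_logtime, full_matrices=False)
    components = Vt[:rank]                   # rows have unit Euclidean norm
    coefficients = U[:, :rank] * s[:rank]    # singular values absorbed here
    return coefficients, components

# rank-r reconstruction: X_logtime ~ coefficients @ components
\end{verbatim}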
\subsection*{Data Set, Analyses and Model Validation}
The basis for our analyses is the online database {\tt www.thepowerof10.info}, which catalogues British individuals' performances achieved in officially ratified athletics competitions since 1954.
The excerpt we consider dates from August 3, 2013. It contains (after error removal) records of 164,746 individuals of all genders, ranging from amateur to elite, young to old, comprising a total of 1,417,432 individual performances over 10 distances, from 100m to Marathon (42,195m).
As performances for the two genders distribute differently, we present only results on the 101,775 male athletes in the main corpus of the manuscript; female athletes and subgroup analyses are considered in the supplementary results.
The data set is available upon request, subject to approval by British Athletics. Full code of our analyses can be obtained from [download link will be provided here after acceptance of the manuscript].
Adhering to state-of-the-art statistical practice (see ~\cite{efron1983estimating,kohavi1995study,efron1997improvements,browne2000cross}), all prediction methods are validated \emph{out-of-sample}, i.e., by using only a subset of the data for estimation of parameters (training set) and computing the error on predictions made for a distinct subset (validation or test set). As error measures, we use the root mean squared error (RMSE) and the mean absolute error (MAE), estimated by leave-one-out validation for 1000 data points omitted at random.
We would like to stress that out-of-sample error is the correct way to evaluate \emph{predictive goodness}, as opposed to merely reporting goodness-of-fit in-sample, since fitting data that the method has already seen does not qualify as prediction, for which the data must be unseen.
More details on the data set and our validation setup can be found in the supplementary material.
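The validation protocol can be sketched as follows (illustrative only; the predictor, the per-event normalisation and the variable names are assumptions consistent with the description above):
\begin{verbatim}
import numpy as np

def loo_relative_errors(X, predict_entry, n_eval=1000, seed=0):
    """Hide one observed performance at a time, predict it from the rest,
    and normalise the error by the event (column) mean.
    predict_entry(X_masked, i, j) can be any single-entry predictor."""
    rng = np.random.default_rng(seed)
    observed = np.argwhere(~np.isnan(X))
    picks = rng.choice(len(observed), size=min(n_eval, len(observed)),
                       replace=False)
    col_means = np.nanmean(X, axis=0)
    errors = []
    for i, j in observed[picks]:
        X_masked = X.copy()
        X_masked[i, j] = np.nan
        errors.append((predict_entry(X_masked, i, j) - X[i, j]) / col_means[j])
    errors = np.array(errors)
    return np.sqrt(np.mean(errors**2)), np.mean(np.abs(errors))  # rel. RMSE, MAE
\end{verbatim}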
\section*{Findings on the UK athletes data set}
{\bf (I) Prediction accuracy.} We evaluate prediction accuracy of ten methods, including our proposed method, LMC. We include, as
{\it naive baselines:} (1.a) imputing the event mean, (1.b) imputing the nearest neighbour, (1.c) log-linear regression per athlete; as representative of the {\it state-of-the-art in quantitative sports science:} (2.a) the Riegel formula, (2.b) a power-law predictor with estimated athlete-independent exponent estimated from the data, (2.c) the Purdy points scheme \cite{purdy1974computer}; as representatives for the {\it state-of-the-art in low-rank matrix completion:} (3.a) imputation by expectation maximization on a multivariate Gaussian \cite{dempster1977maximum} (3.b) nuclear norm minimization~\cite{candes2009exact,candes2010power}.\\
We instantiate our {\it low-rank local matrix completion} (LMC) in two variants: (4.a) rank 1, and (4.b) rank 2.
Methods (1.a), (1.b), (2.a), (2.c), (4.a) require at least one observed performance per athlete, methods (1.c), (2.b), (4.b) require at least two observed performances in distinct events. Methods (3.a), (3.b) will return a result for any number of observed performances (including zero). Prediction accuracy is therefore measured by out-of-sample RMSE and MAE on the athletes who have attempted at least three distances, so that the two necessary performances remain when one is removed for leave-one-out validation. Prediction is further restricted to the fastest 95-percentile of performances to reduce the effect of outliers. Whenever a method requires the predicting events to be specified, the events closest in log-distance to the event to be predicted are taken. The accuracy of predicting time (normalized w.r.t.~the event mean), log-time, and speed is measured.
We also repeat this scenario for the year of best performance and a random calendar year. Moreover, for completeness and comparison we
treat 2 additional cases: the top 25\% of athletes and athletes who have attempted at least 4 events, each in log time.
More details on methods and validation are presented in the methods appendix.
The results are displayed in Table~\ref{tab:compare_methods} (RMSE) and supplementary Table~\ref{tab:compare_methods_mae} (MAE). Of all benchmarks, Purdy points (2.c) and Expectation Maximization (3.a) perform best. LMC in rank 2 substantially outperforms Purdy points and Expectation Maximization (two-sided Wilcoxon signed-rank test significant at $p \le$ 1e-4 on the validation samples of absolute prediction errors); rank 1 outperforms Purdy points on the year of best performance data ($p=$5.5e-3) for the best athletes, and is on a par on athletes up to the 95th percentile. Both rank 1 and 2 outperform the power law models ($p\le$1e-4), the improvement in RMSE over the power-law reaches over 50\% for data from the fastest 25\% of athletes.
{\bf (II) The rank (number of components) of the model.} Paragraph (I) establishes that LMC is the best method for prediction. LMC assumes a fixed number of prototypical athletes, viz.~the rank $r$, which is the complexity parameter of the model.
We establish the optimal rank by comparing prediction accuracy of LMC with different ranks. The rank $r$ algorithm needs $r$ attempted events for prediction, thus $r+1$ observed events are needed for validation. Table~\ref{tab:determine_rank} displays prediction accuracies for LMC ranks $r=1$ to $r=4$, on the athletes who have attempted $k > r$ events, for all possible $k \le 5$. The data is restricted to the top 25\% in the year of best performance in order to obtain a high signal to noise ratio. We observe that rank 3 outperforms all other ranks, when applicable; rank 2 always outperforms rank 1 (both $p\le$1e-4).
We also find that the improvement of rank 2 over rank 1 depends on the event predicted: improvement is 26.3\% for short distances (100m,200m), 29.3\% for middle distances (400m,800m,1500m), 12.8\% for the mile to half-marathon, and 3.1\% for the Marathon (all significant at $p$=1e-3 level) (see Figure~\ref{fig:individual_events}).
These results indicate that inter-athlete variability is greater for short and middle distances than for the Marathon.
{\bf (III) The three components of the model.} The findings in (II) imply that the best low-rank model assumes $3$ components. To estimate the components ($f_i$ in Equation~\eqref{eq:model}) we impute all missing entries in the data matrix of the top 25\% athletes who have attempted 4 events and compute its singular value decomposition (SVD)~\cite{golub1970singular}. From the SVD, the exact form of components can be directly obtained as the right singular vectors (see methods appendix). We obtain three components in log-time coordinates, which are displayed in the left hand panel of Figure~\ref{fig:svs}. The first component for log-time prediction is linear (i.e., $f_1(s) = \log s$ in~\ref{eq:model}) to a high degree of precision ($R^2 = 0.9997$) and corresponds to an \emph{individual} power law, applying distinctly to each athlete. The second and third components are non-linear; the second component decreases over short sprints and increases over the remainder, and the third component resembles a parabola with extremum at middle distances.
In speed coordinates, the first, individual power law component does not display the ``broken power law'' behaviour of the world records. Deviations from an exact line can be explained by the second and third component (Figure~\ref{fig:svs} middle).
The three components together explain the world record data and its ``broken power law'' remarkably more accurately than a simple linear power law trend - with the rank 3 model fitting the world records almost exactly (Figure~\ref{fig:svs} right).
{\bf (IV) The three athlete-specific coefficients.} From the SVD, model coefficients for each athlete ($\lambda_1,\lambda_2,\lambda_3$ in ~\ref{eq:model}) are obtained from the entries of the left singular vectors (see methods appendix). Since the first coefficient in log-time is the coefficient of log-distance (of $f_1(s) = \log s$), it is identical to a power law exponent for the given athlete, whence we term it the \emph{individual exponent}. Since all three coefficients summarize the athlete, we refer to them collectively as the \emph{three-number-summary}.
(IV.i) Figure~\ref{fig:scores} displays scatter plots and Spearman correlations between the coefficients and performance over the full range of distances. The individual exponent correlates with performance on distances greater than 800m. The second and third coefficient correlate with the athlete's preferred distance: the second coefficient correlates positively with performance over short distances and displays a non-linear dependence with performance over middle distances.
Over long distances, the second coefficient is positively correlated with performance.
The third coefficient correlates with performance over middle distances. All dependencies are non-linear, with the notable exception of the individual exponent on distances exceeding 800m, hence the application of Spearman correlations.
(IV.ii) Figure~\ref{fig:scatter} top displays the three-number-summary for the top 95\% athletes in the data base. The athletes are separated into (at least) four classes, correlating with the athlete's preferred distance. A phase transition can be observed over middle distances.
(IV.iii) Figure~\ref{fig:scatter} bottom left shows that a low individual exponent correlates positively with performance in an athlete's preferred event. The individual exponents are higher on average (median=1.12; 5th, 95th percentiles=1.10,1.15) than the world record exponents estimated by Riegel~\cite{riegel1980athletic} (1.08 for elite athletes, 1.06 for senior athletes).
(IV.iv) Figure~\ref{fig:scatter}, bottom right, shows that in cross-section, the individual exponent decreases with age until 20 years, and subsequently increases.
\begin{figure}
\begin{center}
$\begin{array}{c c c}
\includegraphics[width=45mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs/svs_log_time} &
\includegraphics[width=45mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_r2_comparison/r2.pdf} &
\includegraphics[width=45mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/world_record_experiments/exp_wr_residuals/wr_residuals.pdf}
\end{array}$
\end{center}
\caption{The three components of the low-rank model, and explanation of the world record data. Left: the components displayed (unit norm, log-time vs log-distance). Tubes around the components are one standard deviation, estimated by the bootstrap. The first component is an exact power law (straight line in log-log coordinates); the last two components are non-linear, describing transitions at around 800m and 10km. Middle: Comparison of first component and world record to the exact power law (log-time vs log-distance). Right: Least-squares fit of rank 1-3 models to the world record data (log-speed vs log-distance).
\label{fig:svs}}
\end{figure}
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=160mm,clip=true,trim= 5mm 70mm 5mm 60mm]{../../figures/SingularVectors/exp_svs/score1_by_event_anno.pdf}
\end{array}$
\end{center}
\caption{Matrix scatter plot of three-number-summary vs performance. For each of the scores in the three-number-summary (rows) and every event distance (columns), the plot matrix shows: a scatter plot of performances (time) vs the coefficient score of the top 25\% (on the best event) athletes who have attempted at least 4 events. Each scatter plot in the matrix is colored on a continuous color scale according to the absolute value of the scatter sample's Spearman rank correlation (red = 0, green = 1).
\label{fig:scores}
}
\end{figure}
\begin{figure}
\begin{center}
$\begin{array}{c c}
\includegraphics[width=55mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs/score_scatter3_1.png} &
\includegraphics[width=55mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs/score_scatter3_famous.png}\\
\includegraphics[width=55mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs/spec_exponent_standard.png} &
\includegraphics[width=55mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_aging/exponent_age_specialization.png}
\end{array}$
\end{center}
\caption{Scatter plots exploring the three-number-summary. Top left and right: 3D scatter plot of the three-number-summary, colored by preferred distance and shown from two different angles. A negative value for the second score indicates a sprinter, a positive value an endurance runner. In the top right panel, the summaries of the elite athletes Usain Bolt (world record holder, 100m, 200m), Mo Farah (world leader over distances between 1500m and 10km), Haile Gebrselassie (former world record holder from 5km to Marathon) and Takahiro Sunada (100km world record holder) are shown; for comparison we also display the hypothetical data of an athlete
who holds all world records.
The results show that the elite athletes trace a frontier around the population: all elite athletes have a low individual exponent.
Bottom left: preferred distance vs individual exponents, color is percentile on preferred distance. Bottom right: age vs. exponent, colored by preferred distance.
\label{fig:scatter}}
\end{figure}
\begin{table}
\begin{center}
\input{../../tables/elite_scores.tex}
\end{center}
\caption{Estimated component scores ($\lambda_i$) in log(time) coordinates of selected elite athletes. The scores $\lambda_1$, $\lambda_2$, $\lambda_3$ are defined by Equation~\eqref{eq:model} and may be interpreted as the contribution of each component to performance for a given athlete. Since component 1 is a power-law (see the top-left of Figure~\ref{fig:svs}), $\lambda_1$ may be interpreted as the individual exponent. See the bottom right panel of Figure~\ref{fig:scatter} for a scatter plot of athletes.}
\label{tab:elite_scores}
\end{table}
{\bf (V) Phase transitions.} The data exhibit a phase transition around 1500m: the second and third component makes a zero transition (Figure~\ref{fig:svs}); the association of the first two scores with performance changes (Figure~\ref{fig:scores}); the three-number-summary scatter plot exhibits a bottleneck (Figure~\ref{fig:scatter}).
The supplement contains further experiments corroborating a transition, see (S.IV).
{\bf (VI) Universality over subgroups.} Qualitatively and quantitatively similar results to the above can be deduced for female athletes, and subgroups stratified by age or training standard; LMC remains an accurate predictor, and the low-rank model has similar form. See supplement (S.II.b).
\section*{Discussion and Outlook}
We have presented the most accurate existing predictor for running performance---local low-rank matrix completion (LMC) (observation I); its predictive power confirms the validity of a three-component model that offers a parsimonious explanation for many known phenomena in quantitative sports science, including answers to some of the major open questions of the field. More precisely, we establish:
{\bf The individual power law.} In log-time coordinates, the first component of our physiological model is linear with high accuracy, yielding an individual power law (observation III). This is a novel and rather surprising finding, since, although world-record performances are known to obey a power law~\cite{lietzke1954analytical, henry1955prediction, riegel1980athletic, katz1994fractal, savaglio2000human, garcia2005middle}, there is no reason to \emph{a-priori} suppose that the performance of individuals is governed by a power law. This \emph{parsimony a-posteriori} unifies (A) the parsimony of the power law with (B) the empirical correctness of scoring tables. To what extent this individual power law is exact is to be determined in future studies.
{\bf An explanation of the world record data.} The broken power law on world records can be seen as a consequence of the individual power law and the non-linearity in the second and third component (observation III) of our low-rank model. The breakage point in the world records can be explained by different contributions of the non-linear components in the different individuals holding the world records.
Thus both \emph{the power law and the broken power law on world record data can be understood as epiphenomena of the individual power law and its non-linear corrections}.
{\bf Universality of our model.} The low-rank model remains unchanged when considering different subgroups of athletes, stratified by gender, age, or calendar year; what changes is only the individual three-number-summaries (observation VI). This shows the low-rank model to be universal for running.
{\bf The three-number-summary reflects an athlete's training state.} Our predictive validation implies that the number of components of our model is three (observation II), which yields three numbers describing the training state of a given athlete (observation IV). The most important summary is \emph{the individual exponent} for the individual power law which describes overall performance (IV.iii). The second coefficient describes whether the athlete has greater endurance (positive) or speed (negative); the third describes specialization over middle distances (negative) vs short and long distances (positive). All three numbers together clearly separate the athletes into four clusters, which fall into two clusters of short-distance runners and one cluster of middle- and long-distance runners respectively (IV.i).
Our analysis provides strong evidence that the three-number-summary captures physiological and/or social/behavioural characteristics of the athletes, e.g., training state, specialization, and which distance they choose to attempt.
While the data set does not allow us to separate these potential influences or to make statements about cause and effect, we conjecture that combining the three-number-summary with specific experimental paradigms will lead to a clarification; further, we conjecture that a combination of the three-number-summary with additional data, e.g. training logs, high-frequency training measurements or clinical parameters, will lead to a better understanding of (C) existing physiological models.\\
Some novel physiological insights can be deduced from leveraging our model on the UK athletics data base:
\begin{itemize}
\itemsep0em
\item We find that the higher rank LMC predictor is most effective for the longer-sprints and middle distances, and less effective in comparison to the rank 1 predictor over the marathon distance.
This may be explained by some middle-distance runners using a high maximum velocity to coast, whereas other runners use greater endurance to run closer to their maximum speed for the duration of the race; it would be interesting to check empirically whether the type of running (coasting vs endurance) is the physiological correlate of the specialization summary. If this were verified, it could imply that (presently) there is {\bf only one way to be a fast marathoner}, i.e., possessing a high level of endurance - as opposed to being able to coast relative to a high maximum speed. In any case, the low-rank model predicts that a marathoner who is not close to world class over 10km is unlikely to be a world class marathoner.
\item The {\bf phase transition around 1 km} (observation V) provides additional observational evidence for a transition in the complexity of the physiology underlying performance between long and short distances.
\item An open question in the physiology of aging is whether power or endurance capabilities diminish faster with age. Our analysis provides cross-sectional evidence that {\bf training standard decreases with age, and specialization shifts towards endurance}. This confirms observations of Rittweger et al.~\cite{rittweger2009sprint} on masters world-record data. There are multiple possible explanations for this, for example a longitudinal development of the same kind, or selection bias due to older athletes preferring longer distances; our model renders these hypotheses amenable to quantitative validation.
\item We find that there are a number of {\bf high-standard athletes who attempt distances different from their inferred best distance}; most notably a cluster of young athletes (under 25 years) who run short distances, and a cluster of older athletes (over 40 years) who run long distances, but who could achieve better percentiles on longer and shorter distances, respectively.
On the other hand, the third component of our model implies the existence of {\bf athletes with very strong specialization in their best event}; there are indeed high profile examples of such athletes, such as Zersenay Tadese, who holds the half-marathon world best performance (58:23) but has yet to produce a marathon performance even close to this in quality (best performance, 2:10:41).
\end{itemize}
We also anticipate our framework to be fruitful in providing {\bf new methods to the practitioner}:
\begin{itemize}
\itemsep0em
\item Individual predictions are crucial in {\bf race planning}---especially for predicting a target performance for events such as the Marathon for which months of preparation are needed;
the ability to accurately select a realistic target speed will make the difference between an athlete achieving a personal best performance and dropping out of the race from exhaustion.
\item Predictions and the three-number-summary yield a concise description of the runner's specialization and training state, thus are of immediate use in {\bf training assessment and planning}, for example in determining the potential effect of a training scheme or finding the optimal event(s) for which to train.
\item The presented framework allows for the derivation of novel and more accurate {\bf scoring schemes} including scoring tables for any type of population.
\item Predictions for elite athletes allow for a more precise {\bf estimation of odds and betting risk}. For example, we predict that a fair race between Mo Farah and Usain Bolt is over 506m (479--542m with 90\% probability),
Chris Lemaitre and Adam Gemili have the calibre to run 43.5 ($\pm0.3$) and 43.2 ($\pm0.2$) seconds, respectively, over 400m, and Kenenisa Bekele is capable at his best of a 2:00:36 marathon ($\pm90$ secs). We predict that an athlete with
identical 1500m speed to Bekele but 2 seconds faster over 5000m and 7 seconds faster over 10000m should be capable of running a two hour marathon ($\pm85$ secs).
\end{itemize}
We further conjecture that the physiological laws we have validated for running will be immediately transferable to any sport where a power law has been observed on the collective level, such as swimming, cycling and horse racing.
\section*{Acknowledgments}
We thank Ryota Tomioka for providing us with his code for matrix completion via nuclear norm minimization, and for advice on its use. We thank Louis Theran for advice regarding the implementation of local matrix completion in higher ranks.
\section*{Author contributions}
DAJB conceived the original idea of applying machine learning and LMC to athletic performance prediction, and acquired the data.
DAJB and FJK jointly contributed to methodology and the working form of the LMC prediction algorithm. FJK conceived the LMC algorithm in higher ranks; DAJB designed its concrete implementation and adapted the LMC algorithm for the performance prediction problem. DAJB and FJK jointly designed the experiments and analyses. DAJB carried out the experiments and analyses. DAJB and FJK jointly wrote the paper and are jointly responsible for presentation, discussion and interpretation.
\section*{Methods}
The following provides a guideline for reproducing the results.
Raw and pre-processed data in MATLAB and CSV formats is available upon request, subject to approval by British Athletics.
Complete and documented source code of algorithms and analyses can be obtained from ...
\subsection*{Data Source}
The basis for our analyses is the online database {\tt www.thepowerof10.info}, which catalogues British individuals' performances achieved in officially ratified athletics competitions since 1954, including Olympic athletic events (field and non-field events), non-Olympic athletic events, cross country events and road races of all distances.
With permission of British Athletics, we obtained an excerpt of the database by automated querying of the freely accessible parts of {\tt www.thepowerof10.info}, restricted to ten types of running events: 100m, 200m, 400m, 800m, 1500m, the Mile, 5000m (track and road races), 10000m (track and road races), Half-Marathon and Marathon. Other types of running events were available but excluded from the present analyses to prevent incomparability and significantly different subgroups; the reasons for exclusion were a smaller total number of attempts (e.g.~3000m), a different population of athletes (e.g.~3000m is mainly attempted by younger athletes), and varying conditions (steeplechase/hurdles and cross-country races).
For each athlete we have two kinds of data, as follows:
(a) athlete ID, gender, date of birth (\texttt{athletes.csv}).
(b) individual attempts on running events until August 3, 2013, where for each attempt athlete ID, event type, date of the attempt, and performance in seconds are recorded (\texttt{events.csv}).
The data set is available upon request, subject to approval by British Athletics.
\subsection*{Data Cleaning}
Our excerpt of the database contains (after error and duplication removal) records of 164,746 individuals of all genders, ranging from amateur to elite and from young to old, and a total of 1,410,789 individual performances for 10 different types of events, ranging from 100 m to the Marathon (42,195 m).
Gender is available for all entries in the database (101,775 male, 62,971 female). The dates of birth of 114,168 athletes are missing (recorded as January 1, 1900 in \texttt{athletes.csv} due to particulars of the automated querying); the date of birth of six athletes is set to missing due to a recorded age at recorded attempts of eight years or less.
For the above athletes, a total of 1,410,789 attempts are recorded: 192,947 over 100m, 194,107 over 200m, 109,430 over 400m, 239,666 over 800m, 176,284 over 1500m, 6,590 at the Mile distance, 96,793 over 5000m (track and road races), 161,504 over 10000m (track and road races), 140,446 for the Half-Marathon and 93,033 for the Marathon. Dates of the attempt are set to missing for 225 of the attempts that record January 1, 1901, and one of the attempts that records August 20, 2038. A total of 44 attempts whose reported performances are better than the official world records of their time, or extremely slow, are removed from the working data set, leaving a total of 1,407,432 recorded attempts in the cleaned data set.
\subsection*{Data Preprocessing}
The events and athletes data sets are collated into $(10\times 164,746)$-tables/matrices of performances, where
the $10$ columns correspond to events and the $164,746$ rows to individual athletes. Rows are indexed increasingly by athlete ID, columns by the type of event. Each entry of the table/matrix contains one performance (in seconds) of the athlete by which the row is indexed, at the event by which the column is indexed, or a missing value. If the entry contains a performance, the date of that performance is stored as meta-information.
We consider two different modes of collation, yielding one table/matrix of performances of size $(10\times 164,746)$ each.
In the first mode, which in Tables~\ref{tab:compare_methods} ff.~is referenced as {\bf ``best''}, one proceeds as follows. First, for each individual athlete, one finds that athlete's best event, measured by population percentile. Then, for each type of event which was attempted by that athlete within a year before that best event, the best performance for that type of event is entered into the table. If a certain event was not attempted in this period, it is recorded as missing.
For the second mode of collation, which in Tables~\ref{tab:compare_methods} ff.~is referenced as {\bf ``random''}, one proceeds as follows. First, for each individual athlete, a calendar year is uniformly randomly selected among the calendar years in which that athlete has attempted at least one event. Then, for each type of event which was attempted by that athlete within the selected calendar year, the best performance for that type of event is entered into the table. If a certain event was not attempted in the selected calendar year, it is recorded as missing.
The first collation mode ensures that the data is of high quality: athletes are close to optimal fitness, since
their best performance was achieved in this time period. Moreover, since fitness was at a high level,
it is plausible that the number of injuries incurred was low; indeed it can be observed that the number of attempts per event is higher in this period, effectively decreasing the influence of noise and the chance that outliers are present after collation.
The second collation mode is used to check whether, and if so how strongly, the results depend on the athletes being close to optimal fitness.
In both cases choosing a narrow time frame ensures that performances are relevant to one another for prediction.
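For concreteness, the ``best'' collation mode can be sketched in Python as follows (a simplified sketch only; the column names \texttt{athlete\_id}, \texttt{event}, \texttt{date} and \texttt{seconds} are illustrative, and the actual pipeline may differ in detail):
\begin{verbatim}
import pandas as pd

def collate_best(events: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the 'best' collation: one row per athlete, one column per event."""
    ev = events.copy()
    # population percentile within each event type (smaller time = better rank)
    ev["pctl"] = ev.groupby("event")["seconds"].rank(pct=True)
    rows = []
    for athlete, grp in ev.groupby("athlete_id"):
        best = grp.loc[grp["pctl"].idxmin()]             # athlete's best attempt overall
        window = grp[(grp["date"] <= best["date"]) &
                     (grp["date"] >= best["date"] - pd.Timedelta(days=365))]
        # best performance per event type within a year before the best attempt
        rows.append(window.groupby("event")["seconds"].min().rename(athlete))
    return pd.DataFrame(rows)                            # NaN marks a missing event
\end{verbatim}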
\subsection*{Athlete-Specific Summary Statistics}
For each given athlete, several summaries are computed based on the collated matrix.
Performance {\bf percentiles} are computed for each event that athlete attempts in relation to the other athletes' performances on the same event. These column-wise, event-specific percentiles yield a percentile matrix with the same filling pattern as the collated matrix.
The {\bf preferred distance} for a given athlete is the geometric mean of the attempted events' distances. That is, if $s_1,\dots, s_m$ are the distances for the events which the athlete has attempted, then $\tilde{s} = (s_1\cdot s_2\cdot\ldots\cdot s_m)^{1/m}$ is the preferred distance.
The {\bf training standard} for a given athlete is the mean of all performance percentiles in the corresponding row.
The {\bf no.~events} for a given athlete is the number of events attempted by an athlete in the year of the data considered ({\bf best} or {\bf random}).
Note that the percentiles yield a mostly physiological description; the preferred distance is a behavioural summary since it describes the type of events the athlete attempts. The training standard combines both physiological and behavioural characteristics.
Percentiles, preferred distance, and training standard depend on the collated matrix. Whenever rows of the collated matrix are removed, subsequent references to these statistics refer to, and are computed on, the reduced matrix; this affects the percentiles and therefore the training standard, which is always relative to the athletes remaining in the collated matrix.
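The summaries above can be computed from a collated matrix along the following lines (a minimal Python sketch; the convention that faster performances receive higher percentiles is an assumption made for illustration):
\begin{verbatim}
import numpy as np

def athlete_summaries(P, distances):
    """P: (athletes x events) collated performances in seconds, NaN = missing;
    distances: event distances in metres. Simplified sketch of the summaries."""
    n, m = P.shape
    pctl = np.full((n, m), np.nan)
    for j in range(m):                             # column-wise, event-specific percentiles
        obs = np.flatnonzero(~np.isnan(P[:, j]))
        ranks = P[obs, j].argsort().argsort()      # rank 0 = fastest time
        pctl[obs, j] = 100.0 * (1 - ranks / max(len(obs) - 1, 1))
    attempted = ~np.isnan(P)
    log_d = np.where(attempted, np.log(np.asarray(distances))[None, :], np.nan)
    preferred = np.exp(np.nanmean(log_d, axis=1))  # geometric mean of attempted distances
    standard = np.nanmean(pctl, axis=1)            # training standard = mean percentile
    n_events = attempted.sum(axis=1)               # no. events
    return pctl, preferred, standard, n_events
\end{verbatim}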
\subsection*{Outlier Removal}
Outliers are removed from the data in both collated matrices. An outlier score for each athlete/row is obtained as the difference between the maximum and the minimum of all performance percentiles of that athlete. The five percent of rows/athletes with the highest outlier scores are removed from the matrix.
\subsection*{Prediction: Evaluation and Validation}
Prediction accuracy is evaluated on row-sub-samples of the collated matrices, defined by (a) a potential subgroup, e.g., given by age or gender, (b) degrees-of-freedom constraints in the prediction methods that require a certain number of entries per row, and (c) a certain performance percentile range of athletes.
The row-sub-samples referred to in the main text and in Tables~\ref{tab:compare_methods} ff.~are obtained by (a) retaining all rows/athletes in the subgroup specified by gender, or age at their best event, (b) retaining all rows/athletes with at least the stated {\bf no.~events} $n$ of non-missing entries, and discarding all rows/athletes with $(n-1)$ or fewer non-missing entries, then (c) retaining all athletes in a certain percentile range. The percentiles referred to in (c) are computed as follows: first, for each column, in the data retained after step (b), percentiles are computed. Then, for each row/athlete, the best of these percentiles is selected as the score over which the overall percentiles are taken.
The accuracy of prediction is measured empirically in terms of out-of-sample root mean squared error (RMSE) and mean absolute error (MAE), with RMSE, MAE, and standard deviations estimated from the empirical sample of residuals obtained in 1000 iterations of leave-one-out validation.
Given the row-sub-sample matrix obtained from (a), (b), (c), prediction and thus leave-one-out validation is done in two ways:
(i) predicting the left-out entry from potentially all remaining entries. In this scenario, the prediction method may have access to the performance of the athlete in question which lie in the future of the event to be predicted, though only performances of other events are available; and\\
(ii) predicting the left-out entry from all remaining entries of other athletes, but only from those events of the athlete in question that lie in the past of the event to be predicted. In this task, temporal causality is preserved on the level of the single athlete for whom prediction is done; though information about other athletes' results that lie in the future of the event to be predicted may be used.
The third option (iii), where predictions are made only from past events, has not been studied, for two reasons: the size of the data set makes re-collating the data for every single prediction per method and group a computationally expensive task, and a group-wise sampling bias would be introduced, skewing the measures of goodness, since the population of athletes in the earlier periods differs in many respects from that in the more recent ones.
We further argue that, in the absence of such technical issues, evaluation as in (ii) would be equivalent to (iii), since the performances of two randomly picked athletes, no matter how they are related temporally, can in our opinion be modelled as statistically independent; positing the contrary would be equivalent to postulating that any given athlete's performance is very likely to be directly influenced by a large number of other athletes' performance histories, an assumption that appears scientifically implausible to us.
Given the above, due to the equivalence of (ii) and (iii) and the issues occurring in (iii) exclusively, we conclude that (ii) is preferable to (iii) from a scientific and statistical viewpoint.
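The leave-one-out protocol of scenario (i) can be sketched as follows (Python; \texttt{predict} stands for any of the prediction routines described below, e.g.~LMC, and its exact signature is illustrative):
\begin{verbatim}
import numpy as np

def loo_rmse_mae(P, predict, n_iter=1000, rng=None):
    """Hide one observed entry at a time, predict it from the remaining entries,
    and summarise the residuals by RMSE and MAE (sketch of scenario (i))."""
    rng = np.random.default_rng() if rng is None else rng
    rows, cols = np.where(~np.isnan(P))
    residuals = []
    for _ in range(n_iter):
        k = rng.integers(len(rows))
        i, j = rows[k], cols[k]
        P_train = P.copy()
        P_train[i, j] = np.nan                    # hide the entry to be predicted
        residuals.append(predict(P_train, i, j) - P[i, j])
    r = np.asarray(residuals)
    return np.sqrt(np.mean(r ** 2)), np.mean(np.abs(r))   # RMSE, MAE
\end{verbatim}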
\subsection*{Prediction: Target Outcomes}
The principal target outcome for the prediction is ``performance'', which we present to the prediction methods in three distinct parameterisations. This corresponds to passing not the raw performance matrices obtained in the section ``Data Pre-processing'' to the prediction methods, but re-parameterized variants where the non-missing entries undergo a univariate variable transform. The three parameterizations of performance considered in our experiments are the following:\\
(a) {\bf normalized}: performance as the time in which the given athlete (row) completes the event in question (column), divided by the average time in which the event in question (column) is completed in the sub-sample;\\
(b) {\bf log-time}: performance as the natural logarithm of time in seconds in which the given athlete (row) completes the event in question (column);\\
(c) {\bf speed}: performance as the average speed in meters per second, with which the given athlete (row) completes the event in question (column).\\
The words in boldface indicate which parameterisation is referred to in Table~\ref{tab:compare_methods}. The error measures, RMSE and MAE, are evaluated in the same parameterisation in which prediction is performed. We do not evaluate performance directly in un-normalized time units, as in this representation performances between 100m and the Marathon span several orders of magnitude, which would skew the measures of goodness heavily towards accuracy on the Marathon. Thus even if only a measure of goodness on the Marathon were desired, a more consistent and comparable measure can be obtained by restricting RMSE and MAE in one of the parameterisations (a)--(c) to the event in question.
In the supplement we also check the difference in prediction when prediction is conducted in one parameterization and evaluation in another.
\subsection*{Prediction: Models and Algorithms}
In the experiments, a variety of prediction methods are used to perform prediction from the performance data, given as described in ``Prediction: Target Outcomes'', evaluated by the measures as described in the section ``Prediction: Evaluation and Validation''.
Each method is encapsulated as a routine which predicts a missing entry when given the (training entries in the) performance matrix. The methods can be roughly divided in four classes: (1) baselines which are very generic and naive methods in the context, (2) representatives of the state-of-the-art in practical performance prediction, (3) representatives of the state-of-the-art in matrix completion, and (4) our proposed method and its variants.
The generic baselines are:\\
(1.a) {\bf mean}: predicting the mean over all performances for the same event.
(1.b) {\bf $k$-NN}: $k$-nearest neighbor prediction. The $k$ most similar athletes that have attempted the event are determined as the minimizers of the normalized distance on the other events, and the mean of these athletes' performances on the event to be predicted is taken.
We then compare against the state of the art methods for predicting running performance:
(2.a) {\bf Riegel}: The Riegel power law formula with exponent 1.06.
(2.b) {\bf power-law}: A power-law predictor, as per the Riegel formula, but with the exponent estimated from the data. The exponent is the same for all athletes and estimated as the minimizer of the residual sum of squares.
(2.c) {\bf ind.power-law}: A power-law predictor, as per the Riegel formula, but with the exponent estimated from the data. The exponent may be different for every athlete and is estimated as the minimizer of the residual sum of squares.
(2.d) {\bf Purdy}: Prediction by calculation of equivalent performances using the Purdy points scheme~\cite{purdy1974computer}. Purdy points are calculated using the measurements given by the Portuguese scoring tables, which estimate the maximum velocity for a given distance in a straight line, and adjusting for the cost of traversing curves and the time required to reach race velocity.
The performance with the same number of points as the predicting event is imputed.
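For illustration, the Riegel predictor (2.a) and the estimation of a power-law exponent underlying (2.b) and (2.c) can be sketched as follows (Python; here the exponent is fitted by least squares in log-coordinates, which is one possible instantiation of the residual-sum-of-squares minimization described above):
\begin{verbatim}
import numpy as np

def riegel_predict(t_known, s_known, s_target, alpha=1.06):
    """Riegel formula (2.a): t_target = t_known * (s_target / s_known) ** alpha."""
    return t_known * (s_target / s_known) ** alpha

def fit_power_law_exponent(times, distances):
    """Exponent for t = c * s^alpha from one athlete's attempts (as in 2.c);
    pooling all athletes' attempts instead gives the shared exponent of (2.b)."""
    log_s, log_t = np.log(distances), np.log(times)
    alpha, log_c = np.polyfit(log_s, log_t, 1)   # slope = exponent, intercept = log c
    return alpha, np.exp(log_c)
\end{verbatim}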
We then compare against state of the art methods for general matrix completion:
(3.a) {\bf EM}: Expectation maximization algorithm on a multivariate Gaussian distribution using the log-time data. Missing entries are initialized by the mean of each column. The updates are terminated when
the percentage increase in log-likelihood is less than 0.1\%. For a review of the EM algorithm see \cite{bishop2006pattern}.
(3.b) {\bf Nuclear Norm}: Matrix completion via nuclear norm minimization~\cite{candes2009exact}. We use the regularized algorithm ... in an implementation by Ryota Tomioka.
Our novel prediction method is an instance of local matrix completion:
(4.a-d) {\bf LMC rank $r$}: local matrix completion for the low-rank model, with a rank $r = 1,\dots,4$.
Our algorithm follows the local/entry-wise matrix completion paradigm in~\cite{kiraly15matrixcompletion}. It extends the rank $1$ local matrix completion method described in~\cite{KirThe13RankOneEst} to arbitrary ranks.
Our implementation uses: determinants of size $(r+1)\times(r+1)$ as the only circuits; the weighted variance minimization principle in~\cite{KirThe13RankOneEst}; the linear approximation for the circuit variance outlined in the appendix of~\cite{blythe2014algebraic}; modelling circuits as independent for the co-variance approximation.
We further restrict to circuits supported on the event to be predicted and the $r$ log-distance closest events.
For the convenience of the reader, we describe in the section ``Prediction: Local Matrix Completion'' below the exact way in which the local matrix completion principle is instantiated.
\subsection*{Prediction: Local Matrix Completion}
The LMC algorithm we use is an instance of Algorithm~5 in~\cite{kiraly15matrixcompletion}, where, as detailed in the last section, the circuits are all determinants, and the averaging function is the weighted mean which minimizes variance, in first order approximation, following the strategy outlined in~\cite{KirThe13RankOneEst} and~\cite{blythe2014algebraic}.
The LMC rank $r$ algorithm is described below in pseudo-code. For readability, we use bracket notation $M[i,j]$ (as in R or MATLAB) instead of the usual subscript notation $M_{ij}$ for sub-setting matrices. The notation $M[:,(i_1,i_2,\dots, i_r)]$ corresponds to the sub-matrix of $M$ with columns $i_1,\dots, i_r$. The notation $M[k,:]$ stands for the whole $k$-th row. Also note that the row and column removals in Algorithm~\ref{alg:LMC} are only temporary for the purpose of computation, within the boundaries of the algorithm, and do not affect the original collated matrix.
\begin{algorithm}[ht]
\caption{ -- Local Matrix Completion in Rank $r$.\newline
\textit{Input:} An athlete $a$, an event $s^*$, the collated data matrix of performances $M$. \newline
\textit{Output:} An estimate/denoising for the entry $M[a,s^*]$ \label{alg:LMC}}
\begin{algorithmic}[1]
\State Determine distinct events $s_1,\dots, s_r\neq s^*$ which are log-closest to $s^*$, i.e., minimize $\sum_{i=1}^r( \log s_i-\log s^*)^2$
\State Restrict $M$ to those events, i.e., $M\gets M[:,(s^*,s_1,\dots, s_r)]$
\State Let $v$ be the vector containing the indices of rows in $M$ with no missing entry.
\State $M\gets M[(v,a),:]$, i.e., remove all rows with missing entries from $M$, except $a$.
\For{ $i = 1$ to $400$ }
\State Uniformly randomly sample distinct athletes $a_1,\dots,a_r\neq a$ among the rows of $M$.
\State Solve the circuit equation $\det M[(a,a_1,\dots, a_r),(s^*,s_1,\dots, s_r)] = 0$ for the unknown entry $M[a,s^*]$, obtaining a number $m_i$.
\State Let $A_0,A_1\gets M[(a,a_1,\dots, a_r),(s^*,s_1,\dots, s_r)]$.
\State Assign $A_0[a,s^*]\gets 0$, and $A_1[a,s^*]\gets 1$.
\State Assign the weight $w_i\gets\frac{1}{|\det A_0 + \det A_1|}+\frac{|\det A_0|}{(\det A_0 - \det A_1)^2}$
\EndFor
\State Compute $m^*\gets \left(\sum_{i=1}^{400}w_im_i\right)\cdot\left(\sum_{i=1}^{400} w_i\right)^{-1}$
\State Return $m^*$ as the estimated performance.
\end{algorithmic}
\end{algorithm}
The bagged variant of LMC in rank $r$ repeatedly runs LMC rank $r$ with choices of events different from the log-closest, weighting the results obtained from different choices of $s_1,\dots, s_r$. The weights are obtained from $5$-fold cross-validation on the training sample.
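A minimal Python sketch of Algorithm~\ref{alg:LMC} for a single entry is given below (simplified: it assumes that athlete $a$'s performances on the $r$ predictor events are observed and omits the bagged variant; function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def lmc_predict(M, a, s_star, distances, r=2, n_samples=400, rng=None):
    """Sketch of LMC rank r for one entry M[a, s_star]; NaN marks missing entries."""
    rng = np.random.default_rng() if rng is None else rng
    others = [j for j in range(M.shape[1]) if j != s_star]
    closest = sorted(others, key=lambda j: (np.log(distances[j])
                                            - np.log(distances[s_star])) ** 2)[:r]
    A = M[:, [s_star] + closest]                   # target column first, then predictors
    full = [i for i in range(A.shape[0])
            if i != a and not np.isnan(A[i]).any()]  # rows fully observed on these columns
    estimates, weights = [], []
    for _ in range(n_samples):
        rows = rng.choice(full, size=r, replace=False)
        B = A[np.concatenate(([a], rows)).astype(int), :]   # (r+1) x (r+1); B[0,0] unknown
        B0, B1 = B.copy(), B.copy()
        B0[0, 0], B1[0, 0] = 0.0, 1.0
        d0, d1 = np.linalg.det(B0), np.linalg.det(B1)
        if np.isclose(d0, d1):
            continue                               # determinant cannot be solved for the entry
        estimates.append(-d0 / (d1 - d0))          # root of det(B) = 0, linear in B[0,0]
        weights.append(1.0 / abs(d0 + d1) + abs(d0) / (d0 - d1) ** 2)  # as in Algorithm 1
    if not estimates:
        return np.nan
    return float(np.average(estimates, weights=weights))
\end{verbatim}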
\subsection*{Obtaining the Low-Rank Components and Coefficients}
For each of the three target outcomes detailed in ``Prediction: Target Outcomes'' (normalized, log-time, and speed), three low-rank components
$f_1,\dots, f_3$ and corresponding coefficients $\lambda_{1},\dots, \lambda_{3}$ for each athlete are obtained. Each component $f_i$ is a vector of length 10, with entries corresponding to events. Each coefficient is a scalar, potentially different per athlete.
To obtain the components and coefficients, we consider the data matrix for the specific target outcome, sub-sampled to contain the athletes who have attempted four or more events and the top 25\% percentiles, as described in ``Prediction: Evaluation and Validation''. In this data matrix, all missing values are imputed using the rank 3 local matrix completion algorithm, as described in (4.c) of ``Prediction: Models and Algorithms'', to obtain a complete data matrix $M$. For this matrix, the singular value decomposition $M=USV^\top$ is computed, see~\cite{golub1970singular}. We take $f_i$ to be the $i$-th column of $V$, and the coefficient $\lambda_j$ for the $k$-th athlete to be $U_{kj}$. The singular value decomposition
has the property that these $f_i$ and $\lambda_j$ are guaranteed to be least-squares estimators for the components and coefficients.
The coefficients obtained from log-time representation are used as the three-number-summary referenced in the main corpus of the manuscript; $\lambda_1$ is the individual exponent.
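In code, this step amounts to a singular value decomposition of the imputed matrix (Python sketch; following the convention above, the coefficients are read off from $U$ without singular-value scaling, which is one possible convention):
\begin{verbatim}
import numpy as np

def low_rank_summary(M, k=3):
    """Components f_1..f_k and per-athlete coefficients from a completed matrix M
    (e.g. the log-time matrix after LMC rank 3 imputation)."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    f = Vt[:k]         # each row is one component over the 10 events
    lam = U[:, :k]     # row j holds athlete j's coefficients (three-number-summary for k = 3)
    return f, lam
\end{verbatim}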
\subsection*{Computation of standard error and significance}
Standard errors are computed via independent bootstrap sub-sampling on the rows of the data set (athletes).
A method is considered significantly better performing than another when error regions at the 95\% confidence level (= mean over repetitions $\pm$ 1.96 standard errors) do not intersect.
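A minimal sketch of this procedure (Python; \texttt{statistic} is any summary of interest, e.g.~a method's RMSE on the resampled rows):
\begin{verbatim}
import numpy as np

def bootstrap_se(data, statistic, n_boot=100, rng=None):
    """Bootstrap standard error of statistic(data) under resampling of rows (athletes)."""
    rng = np.random.default_rng() if rng is None else rng
    data = np.asarray(data)
    n = len(data)
    stats = [statistic(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return float(np.std(stats, ddof=1))

# two methods are considered significantly different if the intervals
# mean +/- 1.96 * SE over repetitions do not intersect
\end{verbatim}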
\subsection*{Calculating a fair race}
We describe here the procedure for calculating a fair racing distance with error bars between two athletes: athlete 1 and athlete 2. We first calculate predictions
for all events. Provided that athlete 1 is quicker on some events and athlete 2 is quicker on others, then calculating a fair race is feasible.
If athlete 1 is quicker on shorter events then athlete 2 is typically quicker on all longer events beyond a certain distance. In that case, we can find the shortest race $s_i$ whereby athlete 2 is predicted to be
quicker; then a fair race lies between $s_i$ and $s_{i-1}$. The performance curves in log-time vs. log-distance of both athletes will be locally approximately linear.
We thus interpolate the performance curves between $\text{log}(s_i)$ and $\text{log}(s_{i-1})$---the crossing point gives the position of a fair race in log-coordinates.
We obtain confidence intervals by repeating this procedure on bootstrap subsamples of the dataset.
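The interpolation step can be sketched as follows (Python; it assumes, as described above, that athlete 1 is predicted faster on the shortest events and athlete 2 on the longest):
\begin{verbatim}
import numpy as np

def fair_race_distance(s, t1, t2):
    """Crossing point of two athletes' predicted performance curves in
    log-time vs log-distance. s: event distances (m), ascending;
    t1, t2: predicted times (s) for athlete 1 and athlete 2."""
    s, t1, t2 = map(np.asarray, (s, t1, t2))
    i = int(np.argmax(t1 > t2))        # shortest event where athlete 2 is predicted faster
    gap = np.log(t1) - np.log(t2)      # log-time gap; changes sign between s[i-1] and s[i]
    x0, x1 = np.log(s[i - 1]), np.log(s[i])
    x_star = x0 - gap[i - 1] * (x1 - x0) / (gap[i] - gap[i - 1])  # zero crossing of the gap
    return float(np.exp(x_star))
\end{verbatim}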
\section*{Supplementary Analyses}
This appendix contains a series of additional experiments supplementing those in the main corpus. It contains the following observations:\\
{\bf (S.I) Validation of the LMC prediction framework.}\\
{\bf (S.I.a) Evaluation in terms of MAE.} The results in terms of MAE are qualitatively similar to those in RMSE; MAEs smaller than the corresponding RMSEs indicate the presence of outliers.\\
{\bf (S.I.b) Evaluation in terms of time prediction.} The results are qualitatively similar to measuring goodness in RMSE and MAE of log-time. LMC rank 2 has an average error of approximately 3\% when predicting the top 25\% of male athletes.\\
{\bf (S.I.c) Prediction for individual events.} LMC outperforms the other predictors on each type of event. The benefit of higher rank is greatest for middle distances.\\
{\bf (S.I.d) Stability w.r.t.~the unit measuring performance.} LMC predicts performance (in time units) equally well when performances are presented in log-time or in time normalized by the event average. Prediction from the speed representation is worse when the rank 2 predictor is used.\\
{\bf (S.I.e) Stability w.r.t.~the events used in prediction.} LMC performs equally well when predicting from the closest-distance events and when using a bagged version which uses all observed events for prediction.\\
{\bf (S.I.f) Stability w.r.t.~the event predicted.} LMC performs well both when the predicted event is close to those observed and when the predicted event is further from those observed, in terms of event distance.\\
{\bf (S.I.g) Temporal independence of performances.} There are no differences between predictions made only from past events and predictions made from all available events (in the training set).\\
{\bf (S.I.h) Run-time comparisons.} LMC is by orders of magnitude the fastest among the matrix completion methods.\\
{\bf (S.II) Validation of the low-rank model.}\\
{\bf (S.II.a) Synthetic validation.} In a synthetic low-rank model of athletic performance that is a proxy to the real data, the model, its rank, and the singular components can be correctly recovered by the exact same procedure as on the real data. The generative assumption of low rank is therefore appropriate.\\
{\bf (S.II.b) Universality in sub-groups.} Quality of prediction, the low-rank model, its rank, and the singular components remain mostly unchanged when considering subgroups male/female, older/younger, elite/amateur.\\
{\bf (S.III) Exploration of the low-rank model.}\\
{\bf (S.III.a) Further exploration of the three-number-summary.} The three number summary also correlates with specialization and training standard.\\
{\bf (S.III.b) Preferred distance vs optimal distance.} Most but not all athletes prefer to attend the event at which they are predicted to perform best. A notable number of younger athletes prefer distances shorter than optimal, and some older athletes prefer distances longer than optimal.\\
{\bf (S.IV) Pivoting and phase transitions.} The pivoting phenomenon in Figure~\ref{fig:illustration}, right panel, is found in the data for any three close-by distances up to the Mile, with anti-correlation between the shorter and the longer distance. Above 5000m, a change in the shorter of the three distances positively correlates with a change in the longer distance.\\
\noindent
{\bf (S.I.a) Evaluation in terms of MAE.} Table~\ref{tab:compare_methods_mae} reports on the goodness of prediction methods in terms of MAE. Compared with the RMSE table~\ref{tab:compare_methods}, the MAEs tend to be smaller than the RMSEs, indicating the presence of outliers. The relative goodness of the methods when compared to each other is qualitatively the same.\\
\noindent
{\bf (S.I.b) Evaluation in terms of time prediction.} Tables~\ref{tab:rrmse_time} and~\ref{tab:rmae_time} report on the goodness of prediction methods in terms of the relative RMSE and MAE of predicting time. Relative measures are chosen to avoid bias towards the longer events. The results are qualitatively and quantitatively very similar to the log-time results in Tables~\ref{tab:compare_methods} and~\ref{tab:compare_methods_mae}; this can be explained by the fact that, mathematically, the RMSE and MAE in log-coordinates approximate the relative RMSE and MAE well for small errors.\\
\noindent
{\bf (S.I.c) Individual Events.}
Performance of LMC rank 1 and rank 2 on the ten different events is displayed in Figure~\ref{fig:individual_events}.
The reported goodness is out-of-sample RMSE of predicting log-time, on the top 25 percentiles of Male athletes who have attempted 3 or more events, of events in their best year of performance. The reported RMSE for a given event is the mean over 1000 random prediction samples, standard errors are estimated by the bootstrap.\\
The relative improvement of rank 2 over rank 1 tends to be greater for shorter distances below the Mile. This is in accordance with observation (IV.i) which indicates that the individual exponent is the best descriptor for longer events, above the Mile.\\
\noindent
{\bf (S.I.d) Stability w.r.t.~the measure of performance.} In the main experiment, the LMC model is learnt on the same measure of performance (log-time, speed, normalized) which is predicted. We investigate whether the measure of performance on which the model is learnt influences the prediction by learning the LMC model on either measure and comparing all predictions in the log-time measure. Table~\ref{tab:choose_feature} shows goodness of prediction when the model is learnt in either measure of performance; this checks the effect of calibration in one coordinate system and testing in another. The reported goodness is out-of-sample RMSE of predicting log-time, on the top 25 percentiles of male athletes who have attempted 3 or more events, of events in their best year of performance. The reported RMSE for a given event is the mean over 1000 random prediction samples; standard errors are estimated by the bootstrap.\\
We find that there is no significant difference in prediction goodness when learning the model in log-time coordinates or normalized time coordinates. Learning the model in speed coordinates leads to a significantly better prediction than log-time or normalized time when LMC rank 1 is applied, but to a worse prediction with LMC rank 2. As overall prediction with LMC rank 2 is better, log-time or normalized time are the preferable units of performance.\\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=100mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/Accuracy/exp_individual_events/individual_events.pdf}
\end{array}$
\end{center}
\caption{The figure displays the results of prediction by event for the top 25\% of male athletes who attended $\geq3$ events in their year of best performance. RMSE is displayed on the $y$-axis against distance on the $x$-axis. \label{fig:individual_events}}
\end{figure}
\noindent
{\bf (S.I.f) Stability w.r.t.~the event predicted.}
We consider here the effect of the ratio between the predicted event and the closest predictor. For data of the best 25\% of Males
in the {\bf best} year, we compute the log-ratio of the closest predicting distance and the predicted distance for Purdy Points,
the power-law formula and LMC in rank 2.
See Figure~\ref{fig:by_deviation}, where this log-ratio is plotted against the error. The results show that LMC is far more robust when the predicting distance is far from the predicted distance.\\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=140mm,clip=true,trim= 20mm 0mm 20mm 0mm]{../../figures/Accuracy/exp_compare_methods/by_deviations}
\end{array}$
\end{center}
\caption{The figure displays the absolute log ratio in distance predicted and predicting distance vs. absolute relative error per athlete.
We see that LMC in rank 2 is particularly robust for large ratios in comparison to the power-law and Purdy Points. Data is taken from
the top 25\% of male athletes with {\bf no.~events}$\geq 3$ in the {\bf best }year.
\label{fig:by_deviation}}
\end{figure}
\noindent
{\bf (S.I.g) Temporal independence of performances.}
We check here whether the results are affected by using only temporally prior events in predicting an athlete's performance, see section ``Prediction: Evaluation and Validation'' in ``Methods''. For that, we compute out-of-sample RMSEs when predictions are made only from events that precede the predicted event.
Table~\ref{tab:rrmse_causal} reports out-of-sample RMSE of predicting log-time, on the top 25 percentiles of Male athletes who have attempted 3 or more events, of events in their best year of performance. The reported RMSE for a given event is the mean over 1000 random prediction samples, standard errors are estimated by the bootstrap.
The results are qualitatively similar results to those of Table~\ref{tab:compare_methods} where all events are used in prediction.\\
\noindent
{\bf (S.I.h) Run-time comparisons.} We compare the run-time cost of a single prediction for the three matrix completion methods LMC, nuclear norm minimization, and EM. The other (non-matrix completion) methods are fast or depend only negligibly on the matrix size. We measure the run time of LMC rank 3 for completion of a single entry for matrices of $2^8,2^9,\dots,2^{13}$ athletes, generated as described in (S.II.a). This is repeated 100 times.
For a
fair comparison, the nuclear norm minimization algorithm is run with a hyper-parameter already pre-selected by cross validation.
The results are displayed in Figure~\ref{fig:runtime}; LMC is faster by orders of magnitude than nuclear norm and EM and is very robust to the
size of the matrix. The reason computation initially speeds up as the matrix grows beyond the smallest sizes is that, for the smallest matrices, $4\times 4$ minors, which are required for rank 3 estimation, are not available; the algorithm must then attempt all ranks lower than 3 to find sufficiently many minors.
\\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=60mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/RunTime/exp_runtime/runtime.pdf}
\end{array}$
\end{center}
\caption{The figure displays mean run-times for the 3 matrix completion algorithms tested in the paper: Nuclear Norm, EM and LMC (rank 3). Run-times (y-axis) are recorded for completing a single entry in a matrix of size indicated by the x-axis. The averages are over 100 repetitions, standard errors are estimated by the bootstrap.
\label{fig:runtime}}
\end{figure}
\noindent
{\bf (S.II.a) Synthetic validation.} To validate the assumption of a generative low-rank model, we investigate prediction accuracy and recovery of singular vectors in a synthetic model of athletic performance.
Synthetic data for a given number of athletes is generated as follows:
For each athlete, a three-number summary $(\lambda_1,\lambda_2,\lambda_3)$ is generated independently from a Gaussian distribution with same mean and variance as the three-number-summaries in the real data and with uncorrelated entries.
Matrices of performances are generated from the model
\begin{align}
\text{log}(t)= \lambda_1 f_1(s) + \lambda_2 f_2(s) + \lambda_3 f_3(s)+ \eta(s) \label{eq:synthetic_model}
\end{align}
where $f_1,f_2,f_3$ are the three components estimated from the real data and $\eta(s)$ is a stationary zero-mean Gaussian white noise process with adjustable variance. We take the components estimated in log-time coordinates from the top 25\% of male athletes who have attempted at least 4 events as the three components of the model. The distances $s$ are the same ten event distances as on the real data. In each experiment, the standard deviation of $\eta(s)$ is set to a fixed value, as specified below.
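A minimal sketch of the generative procedure (Python; \texttt{f} denotes the $3\times 10$ matrix of estimated components, and \texttt{mu}, \texttt{var} the empirical mean and variance of the real three-number-summaries; names are illustrative):
\begin{verbatim}
import numpy as np

def generate_synthetic(n_athletes, f, mu, var, noise_std, rng=None):
    """Synthetic log-time matrix: log t = lambda_1 f_1 + lambda_2 f_2 + lambda_3 f_3 + noise."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.normal(mu, np.sqrt(var), size=(n_athletes, 3))  # uncorrelated summaries
    noise = rng.normal(0.0, noise_std, size=(n_athletes, f.shape[1]))
    return lam @ f + noise                                    # (n_athletes x 10) log-times
\end{verbatim}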
{\bf Accuracy of prediction:} We synthetically generate a matrix of $1000$ athletes according to the model in Equation~\ref{eq:synthetic_model}, taking as distances the same distances measured on the real data.
Missing entries are randomized according to two schemes:
(a) 6 (out of 10) uniformly random missing entries per row/athlete.
(b) per row/athlete, four entries that are consecutive in terms of distance are non-missing, chosen uniformly at random.
We then apply LMC rank 2 and nuclear norm minimization for prediction. This setup is repeated 100 times for ten different standard deviations of $\eta$ between 0.01 and 0.1. The results are displayed in Figure~\ref{fig:toy_accuracy}.
LMC performance is generally better than nuclear norm; LMC performance is also robust to the pattern of missingness, while nuclear norm minimization is negatively affected by clustering in the rows. RMSE of LMC approaches zero with small noise variance, while RMSE of nuclear norm minimization does not.
Comparing the performances with Table~\ref{tab:compare_methods}, an assumption of a noise level of $\mbox{Std}(\eta) = 0.01$ seems plausible. The performance of nuclear norm on the real data is explained by a mix of the sampling schemes (a) and (b).
\begin{figure}
\begin{center}
\includegraphics[width = 100mm]{../../figures/Assumptions/exp_same_pattern_acc/accuracy.pdf}
\end{center}
\caption{LMC and Nuclear Norm prediction accuracy on the synthetic low-rank data. $x$-axis denotes the noise level (standard deviation of additive noise in log-time coordinates); $y$-axis is out-of-sample RMSE predicting log-time. Left: prediction performance when (a) the missing entries in each row are uniformly random. Right: prediction performance when (b) the observed entries are consecutive. Error bars are one standard deviation, estimated by the bootstrap.}
\label{fig:toy_accuracy}
\end{figure}
{\bf Recovery of model components.} We synthetically generate a matrix which has a size and pattern of observed entries identical to the matrix of top 25\% of male athletes who have attempted at least 4 events in their best year.
We then complete all missing entries of the matrix using LMC rank 3. After this initial step we estimate singular components
using SVD, exactly as on the real data. Confidence intervals are estimated by a bootstrap on the rows with 100 iterations.
The results are displayed in Figure~\ref{fig:accuracy_singular_vectors}.
\begin{figure}
\begin{center}
\includegraphics[width = 100mm]{../../figures/Assumptions/exp_same_pattern_svs/svs}
\end{center}
\caption{Accuracy of singular component estimation with missing data on synthetic model of performance. $x$-axis is distance, $y$-axis is components in log-time.
Left: singular components of data generated according to Equation~\ref{eq:synthetic_model} with all data present. Right: singular components of data generated according to Equation~\ref{eq:synthetic_model} with missing entries
estimated with LMC in rank 3; the observation pattern and number of athletes is identical to the real data. The tubes denote one standard deviation estimated by the bootstrap.}
\label{fig:accuracy_singular_vectors}
\end{figure}
One observes that the first two singular components are recovered almost exactly, while the third is a bit deformed in the process.\\
\noindent
{\bf (S.II.b) Universality in sub-groups.} We repeat the methodology described above and obtain the three components in the following sub-groups:
female athletes, older athletes (> 30 years), and amateur athletes (25-95 percentile range of training standard). Male athletes have been considered in the main corpus. For female and older athletes, we restrict to the top 95\% percentiles of the respective groups for estimation.
Figure~\ref{fig:svs_subgroups} displays the estimated components of the low-rank model. The individual power law is found to be unchanged in all groups considered. The second and third component vary between the groups but resemble the components for the male athletes. The empirical variance of the second and third component is higher, which may be explained by a slightly reduced consistency in performance, or a reduction in sample size. Whether there is a genuine difference in form or whether the variation is explained by different three-number-summaries in the subgroups cannot be answered from the dataset considered.
Table~\ref{tab:generalize} displays the prediction results in the three subgroups. Prediction accuracy is similar but a bit worse when compared to the male athletes. Again this may be explained by reduced consistency in the subgroups' performance.\\
\begin{figure}
\begin{center}
$\begin{array}{c c c}
\includegraphics[width=50mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs_subgroups/old} &
\includegraphics[width=50mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs_subgroups/ama} &
\includegraphics[width=50mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs_subgroups/fem}
\end{array}$
\end{center}
\caption{The three components of the low-rank model in subgroups. Left: for older runners. Middle: for amateur runners = best event below 25th percentile. Right: for female runners. Tubes around the components are one standard deviation, estimated by the bootstrap. Compare Figure~\ref{fig:svs}. \label{fig:svs_subgroups}}
\end{figure}
\noindent
{\bf (S.III.a) Further exploration of the three-number-summary.} Scatter plots of preferred distance and training standard against the athletes' three-number-summaries are displayed in Figure~\ref{fig:meaning_components}.
The training standard correlates predominantly with the individual exponent (score 1); score 1 vs. standard---$r=-0.89$ ($p \le 0.001$); score 2 vs. standard---$r=0.22$ ($p \le 0.001$); score 3 vs. standard---$r=0.031$ ($p = 0.07$);
all correlations are Spearman correlations with significance computed using a $t$-distribution approximation to the correlation coefficient under the null.
On the other hand, the preferred distance correlates with all three numbers in the summary, especially the second: score 1 vs. log(specialization)---$r=0.29$ ($p \le 0.001$); score 2 vs. log(specialization)---$r=-0.58$ ($p \le 0.001$); score 3 vs. log(specialization)---$r=-0.14$ ($p \le 0.001$).
The dependency between the third score and specialization is non-linear with an optimal value close to middle distances. We stress that low correlation does not imply low predictive power; the summary should be considered as a whole, and the LMC predictor is non-linear. Also, we observe that correlations increase when considering only performances over certain distances, see Figure~\ref{fig:svs}.\\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=130mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs/score_vs_standard.png} \\
\includegraphics[width=130mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/SingularVectors/exp_svs/score_vs_spec.png}
\end{array}$
\end{center}
\caption{Scatter plots of training standard vs.~three-number-summary (top) and preferred distance vs.~three-number-summary (bottom).
\label{fig:meaning_components}}
\end{figure}
\noindent
{\bf (S.III.b) Preferred distance vs optimal distance.} For the top 95\% male athletes who have attempted 3 or more events, we use LMC rank 2 to compute which percentile they would achieve in every event. We then determine the distance of the event at which they would achieve the best percentile, to which we will refer as the ``optimal distance''. Figure~\ref{fig:best_event_pred} shows for each athlete the difference between their preferred and optimal distance.
It can be observed that the large majority of athletes prefer to attempt events in the vicinity of those they would be best at. There is a group of young athletes who attempt events shorter than the predicted optimal distance, and a group of older athletes who attempt events longer than optimal. One may hypothesize that both groups could be explained by social phenomena: young athletes usually start by training on shorter distances for a while, no matter their actual training state, and older athletes may be biased towards attempting endurance-type events.
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=100mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/Aging/exp_predicted_optimal_event/best_event_predicted.png}
\end{array}$
\end{center}
\caption{Difference of preferred distance and optimal distance, versus age of the athlete, colored by specialization distance. Most athletes prefer the distance they are predicted to be best at. There is a mismatch of best and preferred distance for a group of younger athletes who have greater potential over longer distances, and for a group of older athletes whose potential is maximized over shorter distances than attempted.
\label{fig:best_event_pred}}
\end{figure}
\noindent
{\bf (S.IV) Pivoting and phase transitions.} We look more closely at the pivoting phenomenon illustrated in Figure~\ref{fig:illustration} top right, and the phase transition discussed in observation (V). We consider the top $25\%$ of men who have attempted at least 3 events, in their best year.
We compute 10 performances of equivalent standard by using LMC in rank 1 in log-time coordinates, by
setting a benchmark performance over the marathon and sequentially predicting each lower distance (marathon
predicts HM, HM predicts 10km etc.). This yields equivalent benchmark performances $t_{1},\dots,t_{10}$.
We then consider triples of consecutive distances $s_{i-1},s_i,s_{i+1}$ (excluding the Mile, since it is close in distance to the 1500m) and study the pivoting behaviour on the data set, by carrying out the prediction in Figure~\ref{fig:illustration}.
More specifically, in each triple, we predict the performance on that distance $s_{i+1}$ using LMC rank 2, from the performances over the distances $s_{i-1}$ and $s_{i}$. The prediction is made in two ways, once with and once without perturbation of the benchmark performance at $s_{i-1}$, which we then compare. Intuitively, this corresponds to comparing the red to the green curve in Figure~\ref{fig:illustration}. In mathematical terms:
\begin{enumerate}
\item we obtain a prediction $\widehat{t}_{i+1}$ for the distance $s_{i+1}$ from the benchmark performances $t_{i}$, $t_{i-1}$ and consider this as the unperturbed prediction, and
\item we obtain a prediction $\widehat{t}_{i+1}+\delta(\epsilon)$ for the distance $s_{i+1}$ from the benchmark performance $t_{i}$ on $s_i$ and the perturbed performance $(1+\epsilon)t_{i-1}$ on the distance $s_{i-1}$, considering this as the perturbed prediction.
\end{enumerate}
We record these estimates for $\epsilon = -0.1,-0.09,\dots,0,0.01,\dots,0.1$ and calculate the relative change of the perturbed prediction with respect to the unperturbed one, which is $\delta(\epsilon)/\widehat{t}_{i+1}$. The results are displayed in Figure~\ref{fig:pivots}.
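The perturbation analysis can be sketched as follows (Python; \texttt{predict\_pair} stands for the LMC rank 2 prediction of the longer event from the two shorter ones and is assumed to be given):
\begin{verbatim}
import numpy as np

def pivot_curve(t_prev, t_mid, predict_pair, epsilons=np.arange(-0.10, 0.105, 0.01)):
    """Relative change of the prediction for the longer event s_{i+1} when the
    benchmark performance t_{i-1} on the shorter event is scaled by (1 + eps)."""
    t_hat = predict_pair(t_prev, t_mid)            # unperturbed prediction for s_{i+1}
    return [(predict_pair((1 + eps) * t_prev, t_mid) - t_hat) / t_hat
            for eps in epsilons]
\end{verbatim}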
We find that for pivot distances $s_{i}$ shorter than 5km, a slower performance over the shorter distance $s_{i-1}$ leads to
a faster predicted performance over the longer distance $s_{i+1}$, insofar as this is predicted by the rank 2 predictor.
On the other hand, we find that for pivot distances greater than or equal to 5km, a faster performance over the shorter distance
also implies a faster performance over the longer distance.
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=100mm,clip=true,trim= 0mm 0mm 0mm 0mm]{../../figures/HigherRank/exp_negative_correlation/pivots}
\end{array}$
\end{center}
\caption{Pivot phenomenon in the low-rank model. The figure quantifies the strength and sign of pivoting as in Figure~\ref{fig:illustration}, top right, at different middle distances $s_i$ (x-axis). The computations are based on equivalent log-time performances $t_{i-1},t_i,t_{i+1}$ at consecutive triples $s_{i-1},s_i,s_{i+1}$ of distances. The y-coordinate indicates the signed relative change of the LMC rank 2 prediction of $t_{i+1}$ from $t_{i-1}$ and $t_i$, when $t_i$ is fixed and $t_{i-1}$ undergoes a relative change of $1\%, 2\%,\ldots, 10\%$ (red curves, line thickness is proportional to change), or $-1\%, -2\%,\ldots, -10\%$ (blue curves, line thickness is proportional to change).
For example, the largest peak corresponds to a middle distance of $s_i = 400m$. When predicting 800m from 400m and 200m, the predicted log-time $t_{i+1}$ (= 800m performance) decreases by 8\% when $t_{i-1}$ (= 200m performance) is increased by 10\% while $t_i$ (= 400m performance) is kept constant.
\label{fig:pivots}}
\end{figure}
\noindent
\normalsize
\pagestyle{empty}
\bibliographystyle{plain}
{\small
\section*{An overview on athletic performance prediction and our contributions}
Performance prediction and modeling are cornerstones of sports medicine, essential in training and assessment of athletes with implications beyond sport, for example in the understanding of aging, muscle physiology, and the study of the cardiovascular system.
Existing research on athletic performance focuses either on (A) explaining world records~\cite{lietzke1954analytical, henry1955prediction, riegel1980athletic, katz1994fractal, savaglio2000human, garcia2005middle}, (B) equivalent scoring~\cite{purdy1974computer,purdy1976computer}, or (C) modelling of individual physiology~\cite{hill1924muscular,billat1996significance,wasserman1973anaerobic,noakes1990peak,billat1996use,keller1973ia, peronnet1989mathematical,van1994optimisation}.
Currently, however, there is no parsimonious model which is able to simultaneously explain individual physiology (C) and collective performance (A,B).
We present such a model, a non-linear low-rank model derived from a database of UK athletes. It leverages an individual power law which explains the power laws known to apply to world records, and which allows us to derive athlete-individual training parameters from prior performance data. Performance predictions obtained using our approach are the most accurate to date, with an average prediction error of under 4 minutes (2\% rel.MAE and 3\% rel.RMSE out-of-sample, see Tables~\ref{tab:rrmse_time},~\ref{tab:rmae_time} and appendix S.I.b) for elite performances.
We anticipate that our framework will allow us to leverage existing insights in the study of world record performances and sports medicine for an improved understanding of human physiology.
Our work builds on the three major research strands in prediction and modeling of running performance, which we briefly summarize:
{\bf (A) Power law models of performance} posit a power law dependence $t = c\cdot s^\alpha$ between the duration of the distance run $t$ and the distance $s$, for constants $c$ and $\alpha$.
Power law models have been known to describe world record performances across sports for over a century~\cite{kennelly1906approximate}, and have been applied extensively to running performance~\cite{lietzke1954analytical, henry1955prediction, riegel1980athletic, katz1994fractal, savaglio2000human, garcia2005middle}. These power laws have been applied by practitioners for prediction: the Riegel formula~\cite{riegel1977time} predicts performance by fitting $c$ to each athlete and fixing $\alpha =1.06$ (derived from world-record performances). The power law approach has the benefit of \emph{modelling} performances in a scientifically parsimonious way.
{\bf (B) Scoring tables}, such as those of the international association of athletics federations (IAAF), render performances over disparate distances comparable by presenting them on a single scale. These tables have been published by sports associations for almost a century~\cite{purdy1974history}
and catalogue, rather than model, performances of equivalent standard. Performance predictions may be obtained from scoring tables by forecasting a time with the same score as an existing attempt, as implemented in the popular Purdy Points scheme~\cite{purdy1974computer,purdy1976computer}. The scoring table approach has the benefit of \emph{describing} performances in an empirically accurate way.
{\bf (C) Explicit modeling of performance related physiology} is an active subfield of sports science. Several physiological parameters are known to be related to athletic performance; these include maximal oxygen uptake (\.VO$_2$-max) and critical speed (speed at \.{V}O$_2$-max) \cite{hill1924muscular,billat1996significance}, blood lactate concentration, and the anaerobic threshold~\cite{wasserman1973anaerobic,bosquet2002methods}.
Physiological parameters may be used (C.i) to make direct predictions when clinical measurements are available~\cite{noakes1990peak,billat1996use,bundle2003high}, or (C.ii) to obtain theoretical models describing physiological processes~\cite{keller1973ia, peronnet1989mathematical,van1994optimisation,di2003factors}. These approaches have the benefit of \emph{explaining} performances physiologically.
All three approaches (A),(B),(C) have appealing properties, as explained above, but none provides a complete treatment of athletic performance prediction:
(A) individual performances do not follow the parsimonious power law perfectly; (B) the empirically accurate scoring tables do not provide a simple interpretable relationship.
\emph{Neither} (A) nor (B) can deal with the fact that athletes may differ from one another in multiple ways.
The clinical measurements in (C.i) are informative but usually available only for a few select athletes, typically at most a few dozen (as opposed to the 164,746 considered in our study). The interpretable models in (C.ii) are usually designed not with the aim of predicting performance but to explain physiology or to estimate physiological parameters from performances; thus these methods are not directly applicable without additional work.
The approach we present unifies the desirable properties of (A),(B) and (C), while avoiding the aforementioned shortcomings.
We obtain (A) a parsimonious model for individual athletic performance that is (B) empirically derived from a large database of UK athletes. It yields the best performance predictions to date (2\% average error for elite athletes on all events, average error 3-4 min for Marathon, see Table~\ref{tab:rmae_time}) and (C) unveils hidden descriptors for individuals which we find to be related to physiological characteristics.
Our approach bases predictions on \emph{Local Matrix Completion} (LMC), a machine learning technique which posits the existence of a small number of explanatory variables which describe the performance of individual athletes.
Application of LMC to a database of athletes allows us, in a second step, to derive a parsimonious physiological model describing athletic performance of \emph{individual} athletes. We discover that a \emph{three-number-summary} for each individual explains performance over the full range of distances from 100m to the Marathon. The three-number-summary relates to: (1) the endurance of an athlete, (2) the relative balance between speed and endurance, and
(3) specialization over middle distances. The first number explains most of the individual differences over distances greater than 800m, and may be interpreted as the exponent of \emph{an individual power law for each athlete}, which holds with remarkably high precision, on average. The other two numbers describe individual, non-linear corrections to this individual power law. Vitally, we show that the individual power law with its non-linear corrections reflects the data more accurately than the power law for world records.
We anticipate that the individual power law and the three-number-summary will allow for exact quantitative assessment in the science of running and related sports.
\begin{figure}
\begin{center}
\includegraphics[width=150mm,clip=true,trim= 5mm 20mm 5mm 20mm]{figures/select_examples/exp_illustration/three_athletes_anno.pdf} \\
\end{center}
\caption{Non-linear deviation from the power law in individuals as the central phenomenon. Top left: performances of world record holders and a selection of random athletes. Curves labelled by athletes are their known best performances (y-axis) at that event (x-axis). Black crosses are world record performances. Individual performances deviate non-linearly from the world record power law. Top right: a good model should take specialization into account, illustrated by example. Hypothetical performance curves of three athletes, green, red and blue, are shown; the task is to predict green on 1500m from all other performances. Dotted green lines are predictions. State-of-the-art methods such as Riegel or Purdy predict a green performance on 1500m close to blue and red; a realistic predictor for the 1500m performance of green, such as LMC, will predict that green is outperformed by red and blue on 1500m, since the fact that blue and red are worse on 400m indicates that, of the three athletes, green specializes most in shorter distances. Bottom: using local matrix completion as a mathematical prediction principle by filling in an entry in a $(3\times 3)$ sub-pattern. Schematic illustration of the algorithm.
\label{fig:illustration}}
\end{figure}
\section*{Local Matrix Completion and the Low-Rank Model}
It is well known that world records over distinct distances are held by distinct athletes---no single athlete holds all running world records. Since world record data obey an approximate power law, this implies that the individual performance of each athlete deviates from this power law.
The left top panel of Figure~\ref{fig:illustration} displays world records and the corresponding individual performances of world record holders in logarithmic coordinates---an exact power law would follow a straight line. The world records align closely to a straight line, while individuals deviate non-linearly. Also notable is the kink in the world records, which makes them deviate from an exact straight line, yielding a ``broken power law'' for world records~\cite{savaglio2000human}.
Any model for individual performances must capture this individual, non-linear variation and will, optimally, explain the broken power law observed for world records as an epiphenomenon of such variation over individuals. In the following paragraphs we explain how the LMC scheme captures individual variation in a typical scenario.
Consider three athletes (taken from the data base) as shown in the right top panel of Figure~\ref{fig:illustration}. The 1500m performance of the green athlete is not known and is to be predicted. All three athletes, green, blue and red, have similar performance on 800m. Any classical method for performance prediction which only takes that information into account will predict that green performs similarly on 1500m to the blue and the red, e.g.~somewhere in-between. However, this is unrealistic, since it does not take into account event specialization: looking at the 400m performance, one can see that the red athlete is slowest over short distances, followed by the blue and then by the green, which indicates that, of the three athletes, green specializes most in shorter distances, while red and blue can be expected to be relatively stronger over longer distances. Using this additional information leads to the more realistic prediction that the green athlete will be out-performed by red and blue on 1500m. Supplementary analysis (S.IV) validates that the phenomenon presented in the example is prevalent throughout the data set.
LMC is a quantitative method for taking into account this event specialization. A schematic overview of the simplest variant is displayed in the bottom panel of Figure~\ref{fig:illustration}: to predict an event for an athlete (figure: 1500m for green) we find a 3-by-3-pattern of performances, denoted by $A$, with exactly one missing entry; this means that the two other athletes (figure: red and blue) have attempted similar events and have data available. Explanation of the green athlete's curve by the red and the blue is mathematically modelled by demanding that the data of the green athlete is given as a weighted sum of the data of the red and the blue; i.e., more mathematically, the green row is a linear combination of the blue and the red row. By a classical result in matrix algebra, the green row is a linear combination of red and blue whenever the \emph{determinant} of $A$, a polynomial function in the entries of $A$, vanishes; i.e., $\det(A) = 0$.
A prediction is made by solving the equation $\det(A) = 0$ for ``?''. To increase accuracy, candidate solutions from multiple 3-by-3-patterns (obtained from many triples of athletes) are averaged in a way that minimizes the expected error in approximation. We will consider variants of the algorithm which use $n$-by-$n$-patterns, $n$ corresponding to the complexity of the model (we later show $n=4$ to be optimal). See the methods appendix for an exact description of the algorithm used.
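As an illustration of this determinant step, the following minimal sketch (in Python, with hypothetical log-times loosely matching the example above; not the implementation used in our experiments) solves $\det(A)=0$ for the single missing entry, exploiting the fact that the determinant is an affine function of that entry:
\begin{verbatim}
import numpy as np

def complete_entry(A, i, j):
    """Solve det(A) = 0 for the single missing entry A[i, j].

    The determinant is affine in A[i, j]:
    det(A) = det(A0) + A[i, j] * (det(A1) - det(A0)),
    where A0, A1 are copies of A with the missing entry set to 0 and 1.
    """
    A0, A1 = A.copy(), A.copy()
    A0[i, j], A1[i, j] = 0.0, 1.0
    d0, d1 = np.linalg.det(A0), np.linalg.det(A1)
    return d0 / (d0 - d1)

# hypothetical 3-by-3 pattern of log-times (rows: athletes, columns: 400m, 800m, 1500m);
# the green athlete's 1500m entry (row 0, column 2) is unknown
A = np.array([[3.88, 4.700, 0.00],   # green, entry to be predicted
              [3.95, 4.710, 5.40],   # red
              [3.92, 4.705, 5.42]])  # blue
print(complete_entry(A, 0, 2))
\end{verbatim}
With these hypothetical numbers, the predicted log-time for the green athlete is approximately 5.45 (about 3:53), slower than the 1500m times of both red and blue, in line with the reasoning above.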
The LMC prediction scheme is an instance of the more general local low-rank matrix completion framework introduced in~\cite{kiraly15matrixcompletion}, here applied to performances in the form of a numerical table (or matrix) with columns corresponding to events and rows to athletes. The cited framework is the first matrix completion algorithm which allows prediction of single missing entries as opposed to all entries. While matrix completion has proved vital in predicting consumer behaviour and recommender systems, we find that existing approaches which predict all entries at once cannot cope with the non-standard distribution of missingness and the noise associated with performance prediction in the same way as LMC can (see findings and supplement S.II.a). See the methods appendix for more details of the method and an exact description.
In a second step, we use the LMC scheme to fill in all missing performances (over all events considered---100m, 200m etc.) and obtain a parsimonious low-rank model that explains individual running times $t$ in terms of distance $s$ by:
\begin{align}
\log t = \lambda_1 f_1(s) + \lambda_2 f_2(s) + \dots + \lambda_r f_r(s) \label{eq:model},
\end{align}
with components $f_1$, $f_2$, $\dots$ that are universal over athletes, and coefficients $\lambda_1$, $\lambda_2$, $\dots$, $\lambda_r$ which summarize the athlete under consideration.
The number of components and coefficients $r$ is known as the \emph{rank} of the model and measures its complexity; when considering the data in matrix form, $r$ translates to
\emph{matrix rank}.
The Riegel power law is a very special case, demanding that $\lambda_1 = 1.06$ for every athlete, $ f_1(s) = \text{log}~s$ and $\lambda_2 f_2(s) = c$ for a constant $c$ depending on the athlete. Our analyses will show that the best model has rank $r = 3$ (meaning that above we consider patterns or matrices of size $n \times n = 4\times 4$, since, as above, $n=r+1$). This means that the model has $r=$ \emph{three} universal components $f_1(s), f_2(s), f_3(s)$, and every athlete is described by their individual three-coefficient-summary $\lambda_1,\lambda_2,\lambda_3$. Remarkably, we find that $f_1(s) = \text{log}~s$, yielding an individual power law; the corresponding coefficient $\lambda_1$ thus has the natural interpretation as an individual power law exponent.
We remark that first filling in the entries with LMC and only then fitting the model is crucial due to data which is non-uniformly missing (see supplement S.II.a). More details on our methodology can be found in the methods appendix.
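The role of the rank $r$ and of the $(r+1)\times(r+1)$ patterns used by LMC can be illustrated with a small synthetic sketch (Python; the components and coefficients below are randomly generated placeholders, not the fitted ones): every $4\times 4$ sub-pattern of an exactly rank-3 matrix has vanishing determinant, so a single missing entry of such a pattern is determined by the remaining fifteen.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# synthetic exactly-rank-3 "log-time" matrix: 200 athletes x 10 events,
# built from placeholder coefficients (200 x 3) and components (3 x 10)
Lambda = rng.normal(size=(200, 3))
F = rng.normal(size=(3, 10))
M = Lambda @ F

# every (r+1) x (r+1) = 4 x 4 sub-pattern of a rank-3 matrix has determinant 0,
# so one missing entry of such a pattern is determined by the other fifteen
A = M[np.ix_([0, 1, 2, 3], [0, 1, 2, 3])].copy()
truth = A[0, 0]                      # pretend this entry is missing
A0, A1 = A.copy(), A.copy()
A0[0, 0], A1[0, 0] = 0.0, 1.0        # the determinant is affine in the missing entry
d0, d1 = np.linalg.det(A0), np.linalg.det(A1)
print(d0 / (d0 - d1), truth)         # the two values agree up to rounding error
\end{verbatim}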
\subsection*{Data Set, Analyses and Model Validation}
The basis for our analyses is the online database {\tt www.thepowerof10.info}, which catalogues British individuals' performances achieved in officially ratified athletics competitions since 1954.
The excerpt we consider dates from August 3, 2013. It contains (after error removal) records of 164,746 individuals of both genders, ranging from the amateur to the elite, young to old, comprising a total of 1,417,432 individual performances over 10 different distances: 100m, 200m, 400m, 800m, 1500m, the Mile, 5km, 10km, Half-Marathon, Marathon (42,195m). All British records over the distances considered are contained in the dataset; the 95th percentile performances for the 100m, 1500m and Marathon are 15.9, 6:06.5 and 6:15:34, respectively.
As performances for the two genders distribute differently, we present only results on the 101,775 male athletes in the main corpus of the manuscript; female athletes and subgroup analyses are considered in the supplementary results.
The data set is available upon request, subject to approval by British Athletics. Full code of our analyses can be obtained from [download link will be provided here after acceptance of the manuscript].
Adhering to state-of-the-art statistical practice (see ~\cite{efron1983estimating,kohavi1995study,efron1997improvements,browne2000cross}), all prediction methods are validated \emph{out-of-sample}, i.e., by using only a subset of the data for estimation of parameters (training set) and computing the error on predictions made for a distinct subset (validation or test set). As error measures, we use the root mean squared error (RMSE) and the mean absolute error (MAE), estimated by leave-one-out validation for 1000 single performances omitted at random.
We would like to stress that out-of-sample prediction error is the correct way to evaluate the quality of \emph{prediction}, as opposed to merely reporting goodness-of-fit in-sample: outputting an estimate for an instance that the method has already seen does not qualify as prediction.
More details on the data set and our validation setup can be found in the supplementary material.
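A minimal sketch of this validation loop is given below (Python; the routine \texttt{predict(M\_train, i, j)} is a hypothetical stand-in for any of the prediction methods compared in the findings, and $M$ is a performance matrix with \texttt{NaN} marking missing entries):
\begin{verbatim}
import numpy as np

def rmse(residuals):
    return np.sqrt(np.mean(np.square(residuals)))

def mae(residuals):
    return np.mean(np.abs(residuals))

def leave_one_out(M, predict, n_trials=1000, rng=None):
    """Leave-one-out validation on a performance matrix M (athletes x events).

    predict(M_train, i, j) must return an estimate of the held-out entry
    M[i, j] from the remaining non-missing entries of M_train.
    """
    rng = rng or np.random.default_rng(0)
    observed = np.argwhere(~np.isnan(M))
    picks = rng.choice(len(observed), size=min(n_trials, len(observed)),
                       replace=False)
    residuals = []
    for i, j in observed[picks]:
        M_train = M.copy()
        M_train[i, j] = np.nan          # hold out one performance
        residuals.append(predict(M_train, i, j) - M[i, j])
    return rmse(residuals), mae(residuals)
\end{verbatim}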
\section*{Findings on the UK athletes data set}
{\bf (I) Prediction accuracy.} We evaluate prediction accuracy of ten methods, including our proposed method, LMC. We include, as
{\it naive baselines:} (1.a) imputing the event mean, (1.b) imputing the average of the $k$-nearest neighbours; as representatives of the {\it state-of-the-art in quantitative sports science:} (2.a) the Riegel formula, (2.b) a power-law predictor with exponent estimated from the data, which is the same for all athletes, (2.c) a power-law predictor with exponent estimated from the data, one exponent per athlete, (2.d) the Purdy points scheme \cite{purdy1974computer}; as representatives for the {\it state-of-the-art in matrix completion:} (3.a) imputation by expectation maximization on a multivariate Gaussian \cite{dempster1977maximum}, and (3.b) nuclear norm minimization~\cite{candes2009exact,candes2010power}.\\
We instantiate our {\it low-rank local matrix completion} (LMC) in two variants: (4.a) rank 1, and (4.b) rank 2.
Methods (1.a), (1.b), (2.a), (2.b), (2.d), (4.a) require at least one observed performance per athlete; methods (2.c), (4.b) require at least two observed performances in distinct events. Methods (3.a), (3.b) will return a result for any number of observed performances (including zero). Prediction accuracy is therefore measured by evaluating the RMSE and MAE out-of-sample on the athletes who have attempted at least three distances, so that the two necessary performances remain when one is removed for leave-one-out validation. Prediction is further restricted to the best 95\% of athletes (measured by performance in the best event) to reduce the effect of outliers. Whenever a method requires the predicting events to be specified, the events which are closest in log-distance to the event to be predicted are taken. The accuracies of predicting time (normalized w.r.t.~the event mean), log-time, and speed are measured.
We repeat this validation setup for the year of best performance and a random calendar year. Moreover, for completeness and comparison we treat two additional cases: the top 25\% of athletes, and athletes who have attempted at least 4 events, each in log-time.
More details on methods and validation are presented in the methods appendix.
The results are displayed in Table~\ref{tab:compare_methods} (RMSE) and supplementary Table~\ref{tab:compare_methods_mae} (MAE). Of all benchmarks, Purdy points (2.d) and Expectation Maximization (3.a) perform best. LMC in rank 2 substantially outperforms Purdy points and Expectation Maximization (two-sided Wilcoxon signed-rank test significant at $p \le$ 1e-4 on the validation samples of absolute prediction errors); rank 1 outperforms Purdy points on the year of best performance data ($p=$5.5e-3) for the best athletes, and is on a par on athletes up to the 95th percentile. Both rank 1 and 2 outperform the power law models ($p\le$1e-4), the improvement in RMSE over the power-law reaches over 50\% for data from the fastest 25\% of athletes.
{\bf (II) The rank (number of components) of the model.} Paragraph (I) establishes that LMC is the best method for prediction. LMC assumes a fixed number of prototypical athletes, viz.~the rank $r$, which is the complexity parameter of the model.
We establish the optimal rank by comparing prediction accuracy of LMC with different ranks. The rank $r$ algorithm needs $r$ attempted events for prediction, thus $r+1$ observed events are needed for validation. Table~\ref{tab:determine_rank} displays prediction accuracies for LMC ranks $r=1$ to $r=4$, on the athletes who have attempted $k > r$ events, for all $k \le 5$. The data is restricted to the top 25\% in the year of best performance in order to obtain a high signal to noise ratio. We observe that rank 3 outperforms all other ranks, when applicable; rank 2 always outperforms rank 1 (both $p\le$1e-4).
We also find that the improvement of rank 2 over rank 1 depends on the event predicted: improvement is 26.3\% for short distances (100m,200m), 29.3\% for middle distances (400m,800m,1500m), 12.8\% for the mile to half-marathon, and 3.1\% for the Marathon (all significant at $p$=1e-3 level) (see Figure~\ref{fig:individual_events}).
These results indicate that inter-athlete variability is greater for short and middle distances than for Marathon.
{\bf (III) The three components of the model.} The findings in (II) imply that the best low-rank model assumes $3$ components. To estimate the components ($f_i$ in Equation~\eqref{eq:model}) we impute all missing entries in the data matrix of the top 25\% athletes who have attempted 4 events and compute its singular value decomposition (SVD)~\cite{golub1970singular}. From the SVD, the exact form of components can be directly obtained as the right singular vectors (in a least-squares sense, and up to scaling, see methods appendix). We obtain three components in log-time coordinates, which are displayed in the left hand panel of Figure~\ref{fig:svs}. The first component for log-time prediction is linear (i.e., $f_1(s) \propto \log s$ in Equation~\eqref{eq:model}) to a high degree of precision ($R^2 = 0.9997$) and corresponds to an \emph{individual} power law, applying distinctly to each athlete. The second and third components are non-linear; the second component decreases over short sprints and increases over the remainder, and the third component resembles a parabola with extremum positioned around the middle distances.
In speed coordinates, the first, individual power law component does not display the ``broken power law'' behaviour of the world records. Deviations from an exact line can be explained by the second and third component (Figure~\ref{fig:svs} middle).
The three components together explain the world record data and its ``broken power law'' far more accurately than a simple linear power law trend---with the rank 3 model fitting the world records almost exactly (Figure~\ref{fig:svs} right;
rank 1 component: $R^2 = 0.99$; world-record data: $R^2 = 0.93$).
{\bf (IV) The three athlete-specific coefficients.} The three summary coefficients for each athlete ($\lambda_1,\lambda_2,\lambda_3$ in Equation~\eqref{eq:model}) are obtained from the entries of the left singular vectors (see methods appendix). Since all three coefficients summarize the athlete, we refer to them collectively as the \emph{three-number-summary}.
(IV.i) Figure~\ref{fig:scores} displays scatter plots and Spearman correlations between the coefficients and performance over the full range of distances. The individual exponent correlates with performance on distances greater than 800m. The second coefficient correlates positively with performance over short distances and displays a non-linear association with performance over middle distances.
The third coefficient correlates with performance over middle distances. The associations for all three coefficients are non-linear, with the notable exception of the individual exponent on distances exceeding 800m, hence the application of Spearman correlations.
(IV.ii) Figure~\ref{fig:scatter} top displays the three-number-summary for the top 95\% athletes in the data base. The athletes appear to separate into (at least) four classes, which associate with the athlete's preferred distance. A qualitative transition can be observed over middle distances. Three-number-summaries of world class athletes (not all in the UK athletics data base), computed from their personal bests, are listed in Table~\ref{tab:elite_scores}; they are also shown as highlighted points in Figure~\ref{fig:scatter} top right. The elite athletes trace a frontier around the population: all elite athletes have a low individual exponent. A hypothetical athlete holding all the world records is also shown in Figure~\ref{fig:scatter} top right, obtaining an individual exponent which comes close to the world record exponent estimated by Riegel~\cite{riegel1980athletic}.
(IV.iii) Figure~\ref{fig:scatter} bottom left shows that a low individual exponent correlates positively with performance in an athlete's preferred event. The individual exponents are higher on average (median=1.12; 5th, 95th percentiles=1.10,1.15) than the world record exponents estimated by Riegel~\cite{riegel1980athletic} (1.08 for elite athletes, 1.06 for senior athletes).
(IV.iv) Figure~\ref{fig:scatter} bottom right shows that in cross-section, the individual exponent decreases with age until 20 years, and subsequently increases.
\begin{figure}
\begin{center}
$\begin{array}{c c c}
\includegraphics[width=45mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs/svs_log_time} &
\includegraphics[width=45mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_r2_comparison/r2.pdf} &
\includegraphics[width=45mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/world_record_experiments/exp_wr_residuals/wr_residuals.pdf}
\end{array}$
\end{center}
\caption{The three components of the low-rank model, and explanation of the world record data. Left: the components displayed (unit norm, log-time vs log-distance). Tubes around the components are one standard deviation, estimated by the bootstrap. The first component is an exact power law (straight line in log-log coordinates); the last two components are non-linear, describing transitions at around 800m and 10km. Middle: Comparison of first component and world record to the exact power law (log-speed vs log-distance). Right: Least-squares fit of rank 1-3 models to the world record data (log-speed vs log-distance).
\label{fig:svs}}
\end{figure}
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=160mm,clip=true,trim= 5mm 70mm 5mm 60mm]{figures/SingularVectors/exp_svs/score1_by_event_anno.pdf}
\end{array}$
\end{center}
\caption{Matrix scatter plot of the three-number-summary vs performance. For each of the scores in the three-number-summary (rows) and each event distance (columns), the plot matrix shows: a scatter plot of performances (time) vs the coefficient score of the top 25\% (on the best event) athletes who have attempted at least 4 events. Each scatter plot in the matrix is colored on a continuous color scale according to the absolute value of the scatter sample's Spearman rank correlation (red = 0, green = 1).
\label{fig:scores}
}
\end{figure}
\begin{figure}
\begin{center}
$\begin{array}{c c}
\includegraphics[width=55mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs/score_scatter3_1.png} &
\includegraphics[width=55mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs/score_scatter3_famous.png}\\
\includegraphics[width=55mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs/spec_exponent_standard.png} &
\includegraphics[width=55mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_aging/exponent_age_specialization.png}
\end{array}$
\end{center}
\caption{Scatter plots exploring the three-number-summary. Top left and right: 3D scatter plot of three-number-summaries of athletes in the data set, colored by preferred distance and shown from two angles. A negative value for the second score indicates that the athlete is a sprinter, a positive value an endurance runner. In the top right panel, the summaries of the elite athletes Usain Bolt (world record holder, 100m, 200m), Mo Farah (world beater over distances between 1500m and 10km), Haile Gebrselassie (former world record holder from 5km to Marathon) and Takahiro Sunada (100km world record holder) are shown; summaries are estimated from their personal bests. For comparison we also display the hypothetical data of an athlete
who holds all world records.
Bottom left: preferred distance vs individual exponents, color is percentile on preferred distance. Bottom right: age vs. exponent, colored by preferred distance.
\label{fig:scatter}}
\end{figure}
\begin{table}
\begin{center}
\input{tables/elite_scores.tex}
\end{center}
\caption{Estimated three-number-summary ($\lambda_i$) in log(time) coordinates of selected elite athletes. The scores $\lambda_1$, $\lambda_2$, $\lambda_3$ are defined by Equation~\eqref{eq:model} and may be interpreted as the contribution of each component to performance for a given athlete. Since component 1 is a power-law (see the top-left of Figure~\ref{fig:svs}), $\lambda_1$ may be interpreted as the individual exponent. See the top right panel of Figure~\ref{fig:scatter} for a scatter plot including these athletes.
\label{tab:elite_scores}
}
\end{table}
{\bf (V) Phase transitions.} We observe two transitions in behaviour between short and long distances.
The data exhibit a phase transition around 800m: the second component exhibits a kink and the third component makes a zero transition (Figure~\ref{fig:svs}); the association of the first two scores with performance shifts from the second to the first score (Figure~\ref{fig:scores}). The data also exhibit a transition around 5000m. We find that for distances shorter than 5000m, holding the performance on a given event constant, increasing the standard on shorter events leads to a decrease in the predicted standard on longer events, and vice versa. For distances greater than 5000m this behaviour reverses: holding the event performance constant, increasing the standard on shorter events leads to an increase in the predicted standard on longer events.
See supplementary section (S.IV) for details.
{\bf (VI) Universality over subgroups.} Qualitatively and quantitatively similar results to the above can be deduced for female athletes, and subgroups stratified by age or training standard; LMC remains an accurate predictor, and the low-rank model has similar form. See supplement (S.II.b).
\section*{Discussion and Outlook}
We have presented the most accurate existing predictor for running performance---local low-rank matrix completion (finding I); its predictive power confirms the validity of a three-component model (finding II) that offers a parsimonious explanation for many known phenomena in the quantitative science of running, including answers to some of the major open questions of the field. More precisely, we establish:
{\bf The individual power law.} In log-time coordinates, the first component of our physiological model is linear with high accuracy, yielding an individual power law (finding III). This is a novel and rather surprising finding, since, although world-record performances are known to obey a power law~\cite{lietzke1954analytical, henry1955prediction, riegel1980athletic, katz1994fractal, savaglio2000human, garcia2005middle}, there is no reason to \emph{a-priori} suppose that the performance of individuals is governed by a power law. This \emph{parsimony a-posteriori} unifies (A) the parsimony of the power law with (B) the empirical correctness of scoring tables. To what extent this individual power law is exact is to be determined in future studies.
{\bf An explanation of the world record data.} The broken power law on world records can be seen as a consequence of the individual power law and the non-linearity in the second and third component (finding III) of our low-rank model. The breakage point in the world records can be explained by the differing contributions in the non-linear components of the distinct individuals holding the world records.
Thus both \emph{the power law and the broken power law on world record data can be understood as epiphenomena of the individual power law and its non-linear corrections}.
{\bf Universality of our model.} The low-rank model remains unchanged when considering different subgroups of athletes, stratified by gender, age, or calendar year; what changes is only the individual three-number-summaries (finding VI). This shows the low-rank model to be universal for running.
{\bf The three-number-summary reflects an athlete's training state.} Our predictive validation implies that the number of components of our model is three (finding II), which yields three numbers describing the training state of a given athlete (finding IV). The most important summary is \emph{the individual exponent} for the individual power law which describes overall performance (IV.iii). The second coefficient describes whether the athlete has greater endurance (positive) or speed (negative); the third describes specialization over middle distances (negative) vs short and long distances (positive). All three numbers together clearly separate the athletes into four clusters, which fall into two clusters of short-distance runners and one cluster of middle- and long-distance runners (IV.ii).
Our analysis provides strong evidence that the three-number-summary captures physiological and/or social/behavioural characteristics of the athletes, e.g., training state, specialization, and which distance an athlete chooses to attempt.
While the data set does not allow us to separate these potential influences or to make statements about cause and effect, we conjecture that combining the three-number-summary with specific experimental paradigms will lead to a clarification; further, we conjecture that a combination of the three-number-summary with additional data, e.g. training logs, high-frequency training measurements or clinical parameters, will lead to a better understanding of (C) existing physiological models.\\
Some novel physiological insights can be deduced from leveraging our model on the UK athletics data base:
\begin{itemize}
\itemsep0em
\item We find that, in comparison to the rank 1 predictor, the higher-rank LMC predictor is most effective for the longer sprints and middle distances; the improvement of the higher rank over the rank 1 version is lowest over the Marathon distance.
This may be explained by some middle-distance runners using a high maximum velocity to coast whereas other runners use greater endurance to run closer to their maximum speed for the duration of the race; it would be interesting to check empirically whether the type of running (coasting vs endurance) is the physiological correlate of the specialization summary. If this were verified, it could imply that (presently) there is {\bf only one way to be a fast marathoner}, i.e., possessing a high level of endurance---as opposed to being able to coast relative to a high maximum speed. In any case, the low-rank model predicts that a marathoner who is not close to world class over 10km is unlikely to be a world class marathoner.
\item The {\bf phase transitions} which we observe (finding V) provide additional observational evidence for a transition in the complexity of the physiology underlying performance between long and short distances.
This finding is bolstered by the observation that the improvement of the rank 2 predictor over the rank 1 predictor is larger for short and middle distances than for long distances.
Our results may have implications for existing hypotheses and findings in sports science on the differences in physiological determinants of long and short distance running respectively.
These include differences in the muscle fibre types contributing to performance (type I vs. type II) \cite{saltin1977fibre, hoppeler1985endurance}, whether the race length demands energy primarily from aerobic or anaerobic metabolism \cite{bosquet2002methods, faude2009lactate},
which energy systems are mobilized (glycolysis vs. lipolysis) \cite{brooks1994balance, venables2005determinants} and whether the race terminates before the onset of a \.VO$_2$ slow component \cite{borrani2001v, poole1994vo2}.
We conjecture that the combination of our methodology with experiments will shed further light on these differences.
\item An open question in the physiology of aging is whether power or endurance capabilities diminish faster with age. Our analysis provides cross-sectional evidence that {\bf training standard decreases with age, and specialization shifts away from endurance}. This confirms observations of Rittweger et al.~\cite{rittweger2009sprint} on masters world-record data.
There are multiple possible explanations for this, for example longitudinal changes in specialization, or selection bias due to older athletes preferring longer distances; our model renders these hypotheses amenable to quantitative validation.
\item We find that there are a number of {\bf high-standard athletes who attempt distances different from their inferred best distance}; most notably a cluster of young athletes (under 25 years) who run short distances, and a cluster of older athletes (over 40 years) who run long distances, but who we predict would perform better on longer resp.~shorter distances.
Moreover, the third component of our model implies the existence of {\bf athletes with very strong specialization in their best event}; there are indeed high profile examples of such athletes, such as Zersenay Tadese, who holds the half-marathon world best performance (58:23) but has yet to produce a marathon performance even close to this in quality (best performance, 2:10:41).
\end{itemize}
We also anticipate that our framework will prove fruitful in {\bf equipping the practitioner with new methods for prediction and quantification}:
\begin{itemize}
\itemsep0em
\item Individual predictions are crucial in {\bf race planning}---especially for predicting a target performance for events such as the Marathon for which months of preparation are needed;
the ability to accurately select a realistic target speed will make the difference between an athlete achieving a personal best performance and dropping out of the race from exhaustion.
\item Predictions and the three-number-summary yield a concise description of the runner's specialization and training state and are thus of immediate use in {\bf training assessment and planning}, for example in determining the potential effect of a training scheme or finding the optimal event(s) for which to train.
\item The presented framework allows for the derivation of novel and more accurate {\bf scoring schemes} including scoring tables for any type of population.
\item Predictions for elite athletes allow for a more precise {\bf estimation of quotas and betting risk}. For example, we predict that a fair race between Mo Farah and Usain Bolt is over 492m (374-594m with 95\% probability), that Chris Lemaitre and Adam Gemili have the calibre to run 43.5 ($\pm1.3$) and 43.2 ($\pm1.3$) seconds respectively over 400m, and that Kenenisa Bekele is capable at his best of a 2:00:36 marathon ($\pm3.6$ mins).
\end{itemize}
We further conjecture that the physiological laws we have validated for running will be immediately transferable to any sport where a power law has been observed on the collective level, such as swimming, cycling, and horse racing.
\section*{Acknowledgments}
We thank Ryota Tomioka for providing us with his code for matrix completion via nuclear norm minimization~\cite{tomioka2010extension}, and for advice on its use. We thank Louis Theran for advice regarding the implementation of local matrix completion in higher ranks.
We thank Klaus-Robert M\"uller for helpful comments on earlier versions of the manuscript.
DAJB was supported by a grant from the German Research Foundation, research training group GRK 1589/1 ``Sensory Computation in Neural Systems''.
FK was partially supported by Mathematisches Forschungsinstitut Oberwolfach (MFO). This research was partially carried out at MFO with the support of FK's Oberwolfach Leibniz Fellowship.
\section*{Author contributions}
DAJB conceived the application of LMC to athletic performance prediction, and acquired the data.
DAJB and FJK jointly contributed to methodology and the working form of the LMC prediction algorithm. FJK conceived the LMC algorithm in higher ranks; DAJB designed its concrete implementation and adapted the LMC algorithm for the performance prediction problem. DAJB and FJK jointly designed the experiments and analyses. DAJB carried out the experiments and analyses. DAJB and FJK jointly wrote the paper and are jointly responsible for presentation, discussion and interpretation.
\newpage
\section*{Methods}
The following provides a guideline for reproducing the results.
Raw and pre-processed data in MATLAB and CSV formats is available upon request, subject to approval by British Athletics.
Complete and documented source code of algorithms and analyses can be obtained from [download link will be provided here after acceptance of the manuscript].
\subsection*{Data Source}
The basis for our analyses is the online database {\tt www.thepowerof10.info}, which catalogues British individuals' performances achieved in officially ratified athletics competitions since 1954, including Olympic athletic events (field and non-field events), non-Olympic athletic events, cross country events and road races of all distances.
With permission of British Athletics, we obtained an excerpt of the database by automated querying of the freely accessible parts of {\tt www.thepowerof10.info}, restricted to ten types of running events: 100m, 200m, 400m, 800m, 1500m, the Mile, 5000m (track and road races), 10000m (track and road races), Half-Marathon and Marathon. Other types of running events were available but excluded from the present analyses; the reasons for exclusion were a smaller total of attempts (e.g. 3000m), a different population of athletes (e.g. 3000m is mainly attempted by younger athletes), and varying conditions (steeplechase/hurdles and cross-country races).
The data set consists of two tables: \texttt{athletes.csv}, containing records of individual athletes, with fields: athlete ID, gender, date of birth; and \texttt{events.csv}, containing records of individual attempts on running events until August 3, 2013, with fields: athlete ID, event type, date of the attempt, and performance in seconds.
The data set is available upon request, subject to approval by British Athletics.
\subsection*{Data Cleaning}
Our excerpt of the database contains (after error and duplication removal) records of 164,746 individuals of both genders, ranging from the amateur to the elite, young to old, and a total of 1,410,789 individual performances for 10 different types of events
(see previous section).
Gender is available for all athletes in the database (101,775 male, 62,971 female). The dates of birth of 114,168 athletes are missing (recorded as January 1, 1900 in \texttt{athletes.csv} due to particulars of the automated querying); the date of birth of six athletes is set to missing due to a recorded age of eight years or less at the recorded attempts.
For the above athletes, a total of 1,410,789 attempts are recorded: 192,947 over 100m, 194,107 over 200m, 109,430 over 400m, 239,666 over 800m, 176,284 over 1500m, 6,590 at the Mile distance, 96,793 over 5000m (track and road races), 161,504 over 10000m (track and road races), 140,446 for the Half-Marathon and 93,033 for the Marathon. Dates are set to missing for 225 attempts recorded as January 1, 1901, and for one attempt recorded as August 20, 2038. A total of 44 attempts whose reported performances are better than the official world records of their time, or extremely slow, are removed from the working data set, leaving a total of 1,407,432 recorded attempts in the cleaned data set.
\subsection*{Data Preprocessing}
The events and athletes data sets are collated into $(164,746\times 10)$-tables/matrices of performances, where
the $10$ columns correspond to events and the $164,746$ rows to individual athletes. Rows are indexed increasingly by athlete ID, columns by the type of event. Each entry of the table/matrix contains one performance (in seconds) of the athlete by which the row is indexed, at the event by which the column is indexed, or a missing value. If the entry contains a performance, the date of that performance is stored as meta-information.
We consider two different modes of collation, each yielding one table/matrix of performances of size $(164,746\times 10)$.
In the first mode, which in Tables~\ref{tab:compare_methods} ff.~is referenced as {\bf ``best''}, one proceeds as follows. First, for each individual athlete, one finds the best event of each individual, measured by population percentile. Then, for each type of event which was attempted by that athlete within a year before that best event, the best performance for that type of event is entered into the table. If a certain event was not attempted in this period, it is recorded as missing.
For the second mode of collation, which in Tables~\ref{tab:compare_methods} ff.~is referenced as {\bf ``random''}, one proceeds as follows. First, for each individual athlete, a calendar year is uniformly randomly selected among the calendar years in which that athlete has attempted at least one event. Then, for each type of event which was attempted by that athlete within the selected calendar year, the best performance for that type of event is entered into the table. If a certain event was not attempted in the selected calendar year, it is recorded as missing.
The first collation mode ensures that the data is of high quality: athletes are close to optimal fitness, since
their best performance was achieved in this time period. Moreover, since fitness was at a high level,
it is plausible that the number of injuries incurred was low; indeed it can be observed that the number of attempts per event is higher in this period, effectively decreasing the influence of noise and the chance that outliers are present after collation.
The second collation mode is used to check whether and, if so how strongly, the results depend on the athletes being close to optimal fitness.
In both cases choosing a narrow time frame ensures that performances are relevant to one another for prediction.
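For concreteness, a sketch of the ``best'' collation mode is given below (Python with pandas; the column names \texttt{athlete\_id}, \texttt{event}, \texttt{date} and \texttt{seconds} are hypothetical stand-ins for the fields described above, and the sketch omits the error handling of the full implementation):
\begin{verbatim}
import pandas as pd

def collate_best(events):
    """'best' collation: for each athlete, the best performance per event
    achieved within one year before the date of that athlete's best event
    (best measured by population percentile; lower percentile = faster)."""
    events = events.copy()
    events["pctl"] = events.groupby("event")["seconds"].rank(pct=True)
    rows = {}
    for athlete, att in events.groupby("athlete_id"):
        best = att.loc[att["pctl"].idxmin()]          # the athlete's best event
        window = att[(att["date"] <= best["date"]) &
                     (att["date"] > best["date"] - pd.DateOffset(years=1))]
        rows[athlete] = window.groupby("event")["seconds"].min()
    return pd.DataFrame(rows).T                       # athletes x events, NaN = missing
\end{verbatim}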
\subsection*{Athlete-Specific Summary Statistics}
For each given athlete, several summaries are computed based on the collated matrix.
Performance {\bf percentiles} are computed for each event which an athlete attempts in relation to the other athletes' performances on the same event. These column-wise, event-specific percentiles yield a percentile matrix with the same filling pattern (pattern of missing entries) as the collated matrix.
The {\bf preferred distance} for a given athlete is the geometric mean of the attempted events' distances. That is, if $s_1,\dots, s_m$ are the distances for the events which the athlete has attempted, then $\tilde{s} = (s_1\cdot s_2\cdot\ldots\cdot s_m)^{1/m}$ is the preferred distance.
The {\bf training standard} for a given athlete is the mean of all performance percentiles in the corresponding row.
The {\bf no.~events} for a given athlete is the number of events attempted by an athlete in the year of the data considered ({\bf best} or {\bf random}).
Note that the percentiles yield a mostly physiological description; the preferred distance is a behavioural summary since it describes the type of events the athlete attempts. The training standard combines both physiological and behavioural characteristics.
Percentiles, preferred distance, and training standard depend on the collated matrix. At any point when rows of the collated matrix are removed, future references to those statistics refer to and are computed for the matrix where those have been removed; this affects the percentiles and therefore the training standard which is always relative to the athletes in the collated matrix.
\subsection*{Outlier Removal}
Outliers are removed from the data in both collated matrices. An outlier score for each athlete/row is obtained as the difference between the maximum and the minimum of all performance percentiles of the athlete. The five percent of rows/athletes with the highest outlier scores are removed from the matrix.
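The athlete-specific summaries and the outlier score can be computed directly from the collated matrix; the following sketch (Python; a simplified illustration of the definitions above rather than the exact implementation, and it assumes that every athlete has at least one observed attempt) makes the definitions concrete:
\begin{verbatim}
import numpy as np

DISTANCES = np.array([100, 200, 400, 800, 1500, 1609.34,
                      5000, 10000, 21097.5, 42195.0])

def percentile_matrix(M):
    """Column-wise (event-specific) percentiles of a performance matrix M
    (athletes x events, times in seconds, NaN = missing); here a higher
    percentile means a faster performance."""
    P = np.full_like(M, np.nan, dtype=float)
    for j in range(M.shape[1]):
        obs = ~np.isnan(M[:, j])
        ranks = M[obs, j].argsort().argsort()          # 0 = fastest
        P[obs, j] = 100.0 * (1.0 - ranks / max(obs.sum() - 1, 1))
    return P

def summaries(M):
    P = percentile_matrix(M)
    attempted = ~np.isnan(M)
    preferred = np.exp((attempted * np.log(DISTANCES)).sum(1)
                       / attempted.sum(1))             # geometric mean distance
    standard = np.nanmean(P, axis=1)                   # training standard
    outlier_score = np.nanmax(P, axis=1) - np.nanmin(P, axis=1)
    return preferred, standard, outlier_score
\end{verbatim}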
\subsection*{Prediction: Evaluation and Validation}
Prediction accuracy is evaluated on row-sub-samples of the collated matrices, defined by (a) a potential subgroup, e.g., given by age or gender, (b) degrees-of-freedom constraints in the prediction methods that require a certain number of entries per row, and (c) a certain performance percentile range of athletes.
The row-sub-samples referred to in the main text and in Tables~\ref{tab:compare_methods} ff.~are obtained by (a) retaining all rows/athletes in the subgroup specified by gender, or age in the best event, (b) retaining all rows/athletes with at least {\bf no.~events} or more entries non-missing, and discarding all rows/athletes with strictly less than {\bf no.~events} entries non-missing, then (c) retaining all athletes in a certain percentile range. The percentiles referred to in (c) are computed as follows: first, for each column, in the data retained after step (b), percentiles are computed. Then, for each row/athlete, the best of these percentiles is selected as the score over which the overall percentiles are taken.
The accuracy of prediction is measured empirically in terms of out-of-sample root mean squared error (RMSE) and mean absolute error (MAE), with RMSE, MAE, and standard deviations estimated from the empirical sample of residuals obtained in 1000 iterations of leave-one-out validation.
Given the row-sub-sample matrix obtained from (a), (b), (c), prediction and thus leave-one-out validation is done in two ways:
(i) predicting the left-out entry from potentially all remaining entries. In this scenario, the prediction method may have access to performances of the athlete in question which lie in the future of the event to be predicted, though only performances in other events are available;
(ii) predicting the left-out entry from all remaining entries of other athletes, but only from those events of the athlete in question that lie in the past of the event to be predicted. In this task, temporal causality is preserved on the level of the single athlete for whom prediction is done; though information about other athletes' results that lie in the future of the event to be predicted may be used.
The third option (iii), in which predictions are made only from past events, has not been studied, for two reasons: the size of the data set makes re-collation of the data for every single prediction, per method and group, a computationally expensive task; and a potential group-wise sampling bias would be introduced, skewing the measures of prediction quality, since the population of athletes in the older attempts differs in many respects from that in the more recent attempts.
We further argue that, in the absence of such technical issues, evaluation as in (ii) would be equivalent to (iii), since the performances of two randomly picked athletes, no matter how they are related temporally, can in our opinion be modelled as statistically independent; positing the contrary would be equivalent to postulating that any given athlete's performance is very likely to be directly influenced by the performance histories of a large number of other athletes, an assumption that appears to us to be scientifically implausible.
Given the above, the equivalence of (ii) and (iii), and the fact that the technical issues occur in (iii) exclusively, we conclude that (ii) is preferable to (iii) from a scientific and statistical viewpoint.
\subsection*{Prediction: Target Outcomes}
The principal target outcome for the prediction is ``performance'', which we present to the prediction methods in three distinct parameterisations. This corresponds to passing not the raw performance matrices obtained in the section ``Data Preprocessing'' to the prediction methods, but re-parameterized variants where the non-missing entries undergo a univariate variable transform. The three parameterisations of performance considered in our experiments are the following:\\
(a) {\bf normalized}: performance as the time in which the given athlete (row) completes the event in question (column), divided by the average time in which the event in question (column) is completed in the sub-sample;\\
(b) {\bf log-time}: performance as the natural logarithm of time in seconds in which the given athlete (row) completes the event in question (column);\\
(c) {\bf speed}: performance as the average speed in meters per second, with which the given athlete (row) completes the event in question (column).\\
The terms in bold face indicate which parameterisation is referred to in Table~\ref{tab:compare_methods}. The error measures, RMSE and MAE, are evaluated in the same parameterisation in which prediction is performed. We do not evaluate performance directly in un-normalized time units, as in this representation performances between 100m and the Marathon span 4 orders of magnitude (base-10), which would skew the measures of goodness heavily towards accuracy over the Marathon.
Unless stated otherwise, predictions are made in the same parameterisation on which the models are learnt.
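As an illustration, the three parameterisations amount to the following elementwise transforms of the collated matrix (Python sketch; \texttt{DISTANCES} holds the ten event distances in metres):
\begin{verbatim}
import numpy as np

DISTANCES = np.array([100, 200, 400, 800, 1500, 1609.34,
                      5000, 10000, 21097.5, 42195.0])

def parameterize(M_seconds, mode):
    """Re-parameterize an (athletes x events) matrix of times in seconds."""
    if mode == "normalized":                 # time divided by the event mean
        return M_seconds / np.nanmean(M_seconds, axis=0)
    if mode == "log-time":                   # natural logarithm of the time
        return np.log(M_seconds)
    if mode == "speed":                      # average speed in metres per second
        return DISTANCES / M_seconds
    raise ValueError(mode)
\end{verbatim}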
\subsection*{Prediction: Models and Algorithms}
In the experiments, a variety of prediction methods are used to perform prediction from the performance data, given as described in ``Prediction: Target Outcomes'', evaluated by the measures as described in the section ``Prediction: Evaluation and Validation''.
In the code available for download, each method is encapsulated as a routine which predicts a missing entry when given the (training entries in the) performance matrix. The methods can be roughly divided in four classes: (1) naive baselines, (2) representatives of the state-of-the-art in prediction of running performance, (3) representatives of the state-of-the-art in matrix completion, and (4) our proposed method and its variants.
The naive baselines are:\\
(1.a) {\bf mean}: predicting the mean over all performances for the same event.
(1.b) {\bf $k$-NN}: $k$-nearest neighbours prediction. The parameter $k$ is obtained as the minimizer of out-of-sample RMSE on five groups of 50 randomly chosen validation data points from the training set, from among $k=1, k=5$, and $k=20$.
The representatives of the state-of-the-art in predicting running performance are:
(2.a) {\bf Riegel}: The Riegel power law formula with exponent 1.06.
(2.b) {\bf power-law}: A power-law predictor, as per the Riegel formula, but with the exponent estimated from the data. The exponent is the same for all athletes and estimated as the minimizer of the residual sum of squares.
(2.c) {\bf ind.power-law}: A power-law predictor, as per the Riegel formula, but with the exponent estimated from the data. The exponent may be different for each athlete and is estimated as the minimizer of the residual sum of squares.
(2.d) {\bf Purdy}: Prediction by calculation of equivalent performances using the Purdy points scheme~\cite{purdy1974computer}. Purdy points are calculated by using the measurements given by the Portuguese scoring tables which estimate the maximum velocity for a given distance in a straight line, and adjust for the cost of traversing curves and the time required to reach race velocity.
The performance with the same number of points as the predicting event is imputed.
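As an illustration of the power-law predictors (2.a)-(2.c), the following sketch (Python; the performances are hypothetical) shows the fixed-exponent Riegel prediction and the per-athlete least-squares fit:
\begin{verbatim}
import numpy as np

def riegel_predict(t_known, s_known, s_target, alpha=1.06):
    """(2.a): Riegel formula, t = c * s**alpha with alpha fixed at 1.06."""
    return t_known * (s_target / s_known) ** alpha

def fit_individual_power_law(distances, times):
    """(2.c): least-squares fit of log t = log c + alpha * log s to one
    athlete's observed performances; needs at least two distinct events."""
    alpha, log_c = np.polyfit(np.log(distances), np.log(times), 1)
    return alpha, np.exp(log_c)

# hypothetical athlete: 800m in 2:05 and 1500m in 4:05; predict the 5000m
alpha, c = fit_individual_power_law([800.0, 1500.0], [125.0, 245.0])
print(riegel_predict(245.0, 1500.0, 5000.0))   # Riegel prediction from the 1500m
print(alpha, c * 5000.0 ** alpha)              # individually fitted power law
\end{verbatim}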
The representatives of the state-of-the-art in matrix completion are:
(3.a) {\bf EM}: Expectation maximization algorithm assuming a multivariate Gaussian model for the rows of the performance matrix in log-time parameterisation. Missing entries are initialized by the mean of each column. The updates are terminated when the percent increase in log-likelihood is less than 0.1\%. For a review of the EM-algorithm see~\cite{bishop2006pattern}.
(3.b) {\bf Nuclear Norm}: Matrix completion via nuclear norm minimization~\cite{candes2009exact, tomioka2010extension}, in the regularized version and implementation by~\cite{tomioka2010extension}.
The variants of our proposed method are as follows:
(4.a-d) {\bf LMC rank $r$}: local matrix completion for the low-rank model, with rank $r = 1,2,3,4$. (4.a) is LMC rank 1, (4.b) is LMC rank 2, and so on.
Our algorithm follows the local/entry-wise matrix completion paradigm in~\cite{kiraly15matrixcompletion}. It extends the rank $1$ local matrix completion method described in~\cite{KirThe13RankOneEst} to arbitrary ranks.
Our implementation uses: determinants of size $(r+1)\times (r+1)$ as the only circuits; the weighted variance minimization principle in~\cite{KirThe13RankOneEst}; the linear approximation for the circuit variance outlined in the appendix of~\cite{blythe2014algebraic}; modelling circuits as independent for the co-variance approximation.
We further restrict to circuits supported on the event to be predicted and the $r$ log-distance closest events.
For the convenience of the reader, we describe the exact way in which the local matrix completion principle is instantiated in the section ``Prediction: Local Matrix Completion'' below.
In the supplementary experiments we also investigate two aggregate predictors to study the potential benefit of using other distances for prediction:
(5.a) {\bf bagged power law}: bagging the power law predictor with estimated coefficient (2.b) by a weighted average of predictions obtained from different events. The weighting procedure is described below.
(5.b) {\bf bagged LMC rank 2}: estimation by LMC rank 2 where determinants can be supported at any three events, not only on the closest ones (as in line 1 of Algorithm~\ref{alg:LMC} below). The final, bagged predictor is obtained as a weighted average of LMC rank 2 running on different triples of events. The weighting procedure is described below.
The averaging weights for (5.a) and (5.b) are both obtained from the Gaussian radial basis function kernel $\exp \left( \gamma \Delta \Delta^\top\right)$, where $\Delta = \text{log}(\mathbf{s}_{p})-\text{log}(s_{p'})$
and $\mathbf{s}_{p}$ is the vector of predicting distances and $s_{p'}$ is the predicted distance. The kernel width $\gamma$ is a parameter of the bagging. As $\gamma$ approaches 0, aggregation approaches averaging and thus the ``standard'' bagging predictor. As $\gamma$ approaches $-\infty$, the aggregate prediction approaches the non-bagged variants (2.b) and (4.b).
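For illustration, the weight attached to one set of predicting events can be computed as follows (Python sketch; the two event sets below are arbitrary examples):
\begin{verbatim}
import numpy as np

def bagging_weight(predicting_distances, predicted_distance, gamma):
    """Gaussian RBF weight exp(gamma * Delta Delta^T), with Delta the vector of
    log-distance differences between predicting and predicted events (gamma <= 0)."""
    delta = np.log(np.asarray(predicting_distances)) - np.log(predicted_distance)
    return np.exp(gamma * delta @ delta)

# weights for predicting the Marathon from two hypothetical event pairs
w_near = bagging_weight([10000.0, 21097.5], 42195.0, gamma=-1.0)
w_far = bagging_weight([400.0, 800.0], 42195.0, gamma=-1.0)
print(w_near, w_far)   # events closer in log-distance receive the larger weight
\end{verbatim}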
\subsection*{Prediction: Local Matrix Completion}
The LMC algorithm we use is an instance of Algorithm~5 in~\cite{kiraly15matrixcompletion}, where, as detailed in the last section, the circuits are all determinants, and the averaging function is the weighted mean which minimizes variance, in first order approximation, following the strategy outlined in~\cite{KirThe13RankOneEst} and~\cite{blythe2014algebraic}.
The LMC rank $r$ algorithm is described below in pseudo-code. For readability, we use bracket notation $M[i,j]$ (as in R or MATLAB) instead of the usual subscript notation $M_{ij}$ for sub-setting matrices. The notation $M[:,(i_1,i_2,\dots, i_r)]$ corresponds to the sub-matrix of $M$ with columns $i_1,\dots, i_r$. The notation $M[k,:]$ stands for the whole $k$-th row. Also note that the row and column removals in Algorithm~\ref{alg:LMC} are only temporary for the purpose of computation, within the boundaries of the algorithm, and do not affect the original collated matrix.
\begin{algorithm}[ht]
\caption{ -- Local Matrix Completion in Rank $r$.\newline
\textit{Input:} An athlete $a$, an event $s^*$, the collated data matrix of performances $M$. \newline
\textit{Output:} An estimate/denoising for the entry $M[a,s^*]$ \label{alg:LMC}}
\begin{algorithmic}[1]
\State Determine distinct events $s_1,\dots, s_r\neq s^*$ which are log-closest to $s^*$, i.e., minimize $\sum_{i=1}^r( \log s_i-\log s^*)^2$
\State Restrict $M$ to those events, i.e., $M\gets M[:,(s^*,s_1,\dots, s_r)]$
\State Let $v$ be the vector containing the indices of rows in $M$ with no missing entry.
\State $M\gets M[(v,a),:]$, i.e., remove all rows with missing entries from $M$, except $a$.
\For{ $i = 1$ to $400$ }
\State Uniformly randomly sample distinct athletes $a_1,\dots,a_r\neq a$ among the rows of $M$.
\State Solve the circuit equation $\det M[(a,a_1,\dots, a_r),(s^*,s_1,\dots, s_r)] = 0$ for the missing entry $M[a,s^*]$, obtaining a number $m_i$.
\State Let $A_0,A_1\gets M[(a,a_1,\dots, a_r),(s^*,s_1,\dots, s_r)]$.
\State Assign $A_0[a,s^*]\gets 0$, and $A_1[a,s^*]\gets 1$.
\State Compute $\sigma_i\gets\frac{1}{|\det A_0 + \det A_1|}+\frac{|\det A_0|}{(\det A_0 - \det A_1)^2}$
\State Assign the weight $w_i\gets \sigma_i^{-2}$
\EndFor
\State Compute $m^*\gets \left(\sum_{i=1}^{400}w_im_i\right)\cdot\left(\sum_{i=1}^{400} w_i\right)^{-1}$
\State Return $m^*$ as the estimated performance.
\end{algorithmic}
\end{algorithm}
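For readers who prefer code to pseudo-code, the following condensed Python sketch mirrors Algorithm~\ref{alg:LMC} (it is a simplified illustration rather than the implementation used for the reported results; in particular, columns are assumed to be ordered by distance, the log-closest events are approximated by index distance, and the athlete is assumed to have observed the $r$ chosen predicting events):
\begin{verbatim}
import numpy as np

def lmc_predict(M, a, j_star, r, n_samples=400, rng=None):
    """Local Matrix Completion in rank r, mirroring the algorithm above.

    M: (athletes x events) log-time matrix with np.nan marking missing entries;
    a: index of the athlete; j_star: index of the event to be predicted."""
    rng = rng or np.random.default_rng(0)
    # the r events closest to j_star (stand-in for the log-closest events)
    others = sorted(range(M.shape[1]), key=lambda j: abs(j - j_star))[1:r + 1]
    cols = [j_star] + others
    # rows (other than a) fully observed on the chosen columns
    full = [i for i in range(M.shape[0])
            if i != a and not np.isnan(M[i, cols]).any()]
    estimates, weights = [], []
    for _ in range(n_samples):
        rows = [a] + list(rng.choice(full, size=r, replace=False))
        A0 = M[np.ix_(rows, cols)].copy()
        A1 = A0.copy()
        A0[0, 0], A1[0, 0] = 0.0, 1.0          # missing entry set to 0 resp. 1
        d0, d1 = np.linalg.det(A0), np.linalg.det(A1)
        if np.isclose(d0, d1):                 # degenerate circuit, skip it
            continue
        estimates.append(d0 / (d0 - d1))       # solve det = 0 for the entry
        sigma = 1.0 / abs(d0 + d1) + abs(d0) / (d0 - d1) ** 2
        weights.append(sigma ** -2)
    return np.average(estimates, weights=weights)
\end{verbatim}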
The bagged variant of LMC in rank $r$ repeatedly runs LMC rank $r$ with choices of events different from the log-closest, weighting the results obtained from different choices of $s_1,\dots, s_r$. The weights are obtained from $5$-fold cross-validation on the training sample.
\subsection*{Obtaining the Low-Rank Components and Coefficients}
We obtain three low-rank components
$f_1,\dots, f_3$ and corresponding coefficients $\lambda_{1},\dots, \lambda_{3}$ for each athlete by considering the data in log-time coordinates. Each component $f_i$ is a vector of length 10, with entries corresponding to events. Each coefficient is a scalar, potentially different per athlete.
To obtain the components and coefficients, we consider the data matrix for the specific target outcome, sub-sampled to contain the athletes who have attempted four or more events and the top 25\% percentiles, as described in ``Prediction: Evaluation and Validation''. In this data matrix, all missing values are imputed using the rank 3 local matrix completion algorithm, as described in (4.c) of ``Prediction: Models and Algorithms'', to obtain a complete data matrix $M$. For this matrix, the singular value decomposition $M=USV^\top$ is computed, see~\cite{golub1970singular}.
We take the components $f_2,f_3$ to be the $2$-nd and $3$-rd right singular vectors, which are the $2$-nd and $3$-rd columns of $V$. The component $f_1$ is a re-scaled version of the $1$-st column $v$ of $V$, such that $f_1(s) \approx \log s$, where the natural logarithm is taken. More precisely, $f_1 := \alpha v$, where the re-scaling factor $\alpha$ is the minimizer of the sum of squared residuals of $\alpha v(s) - \log (s)$ over $s$ ranging over the ten event distances.
The three-number-summary referenced in the main corpus of the manuscript is obtained as follows: for the $k$-th athlete we obtain from the matrix of left singular vectors the entries $U_{kj}$. The second and third score of the three-number-summary are obtained as $\lambda_2 = U_{k2}$ and $\lambda_3= U_{k3}$. The individual exponent is $\lambda_1 = \alpha^{-1}\cdot U_{k1}$.
The singular value decomposition has the property that the $f_i$ and $\lambda_j$ are guaranteed to be least-squares estimators for the components and the coefficients in a projection sense.
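For concreteness, this computation can be sketched as follows (Python/NumPy; the function name \texttt{three\_number\_summary} and its interface are our own, and, following the convention above, the scores are read off the columns of $U$):
\begin{verbatim}
import numpy as np

def three_number_summary(M, event_dists):
    """M: completed (athletes x 10) matrix of log-times;
    event_dists: the ten event distances.
    Returns the components (f1, f2, f3) and the per-athlete scores."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    v1, v2, v3 = Vt[0], Vt[1], Vt[2]          # right singular vectors
    logs = np.log(np.asarray(event_dists, dtype=float))
    # least-squares re-scaling so that f1(s) is approximately log(s)
    alpha = (v1 @ logs) / (v1 @ v1)
    f1, f2, f3 = alpha * v1, v2, v3
    lam1 = U[:, 0] / alpha                     # individual exponent
    lam2, lam3 = U[:, 1], U[:, 2]              # second and third score
    return (f1, f2, f3), np.column_stack([lam1, lam2, lam3])
\end{verbatim}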
\subsection*{Computation of standard error and significance}
Standard errors for the singular vectors (components of the model of Equation~\ref{eq:model}) are computed via independent bootstrap sub-sampling on the rows of the data set (athletes).
Standard errors for prediction accuracies are obtained by bootstrapping of the predicted performances (1000 per experiment).
A method is considered to perform significantly better than another when error regions at the 95\% confidence level (= mean over repetitions $\pm$ 1.96 standard errors) do not intersect.
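A minimal sketch of these computations (Python/NumPy; the helper names are ours) is:
\begin{verbatim}
import numpy as np

def bootstrap_se(values, n_boot=1000, seed=0):
    """Standard error of the mean of `values` by nonparametric bootstrap."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = [rng.choice(values, size=values.size, replace=True).mean()
             for _ in range(n_boot)]
    return float(np.std(means, ddof=1))

def error_region(errors):
    """95% error region: mean over repetitions +/- 1.96 standard errors."""
    m, se = float(np.mean(errors)), bootstrap_se(errors)
    return m - 1.96 * se, m + 1.96 * se

def significantly_better(errors_a, errors_b):
    """Method A beats method B if A's error region lies entirely below B's."""
    _, hi_a = error_region(errors_a)
    lo_b, _ = error_region(errors_b)
    return hi_a < lo_b
\end{verbatim}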
\subsection*{Predictions and three-number-summary for elite athletes}
Performance predictions and three-number-summaries for the selected elite athletes in Table~\ref{tab:elite_scores} and Figure~\ref{fig:scatter} are obtained from their personal best times. The relative standard error of the predicted performances is estimated to be the same as the relative RMSE of predicting time, as reported in Table~\ref{tab:compare_methods}.
\subsection*{Calculating a fair race}
Here we describe the procedure for calculating a fair racing distance with error bars between two athletes: athlete 1 and athlete 2. We first calculate predictions
for all events. Provided that athlete 1 is quicker on some events and athlete 2 is quicker on others, calculating a fair race is feasible.
If athlete 1 is quicker on shorter events then athlete 2 is typically quicker on all longer events beyond a certain distance. In that case, we can find the shortest race $s_i$ whereby athlete 2 is predicted to be
quicker; then a fair race lies between $s_i$ and $s_{i-1}$. The performance curves in log-time vs. log-distance of both athletes will be locally approximately linear.
We thus interpolate the performance curves between $\text{log}(s_i)$ and $\text{log}(s_{i-1})$---the crossing point gives the position of a fair race in log-coordinates.
We obtain confidence intervals by repeating this procedure after sampling data points around the estimated performances with standard deviation equal to the RMSE (see Table~\ref{tab:compare_methods}) on the top 25\% of athletes in log-time.
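A minimal sketch of the interpolation step (Python/NumPy; the function name and interface are illustrative) is given below; it takes the predicted times of both athletes over the event distances and returns the crossing point of the two locally linear performance curves in log-time versus log-distance.
\begin{verbatim}
import numpy as np

def fair_race_distance(dists, t1, t2):
    """dists: event distances in ascending order;
    t1, t2: predicted times of athletes 1 and 2 over these distances.
    Returns the distance at which the two performance curves cross."""
    ld = np.log(np.asarray(dists, dtype=float))
    diff = np.log(t1) - np.log(t2)        # > 0 where athlete 2 is quicker
    i = int(np.argmax(diff > 0))          # shortest event where athlete 2 wins
    if i == 0 or diff[i] <= 0:
        raise ValueError("no crossing: one athlete is quicker on all events")
    # linear interpolation of diff between log(dists[i-1]) and log(dists[i])
    w = -diff[i - 1] / (diff[i] - diff[i - 1])
    return float(np.exp(ld[i - 1] + w * (ld[i] - ld[i - 1])))
\end{verbatim}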
\newpage
\section*{Supplementary Analyses}
This appendix contains a series of additional experiments supplementing those in the main corpus. It contains the following findings:\\
{\bf (S.I) Validation of the LMC prediction framework.}\\
{\bf (S.I.a) Evaluation in terms of MAE.} The results in terms of MAE are qualitatively similar to those in RMSE; the MAEs being smaller than the RMSEs indicates the presence of outliers.\\
{\bf (S.I.b) Evaluation in terms of time prediction.} The results are qualitatively similar to measuring prediction accuracy in RMSE and MAE of log-time. LMC rank 2 has an average error of approximately 2\% when predicting the top 25\% of male athletes.\\
{\bf (S.I.c) Prediction for individual events.} LMC outperforms the other predictors on each type of event. The benefit of higher rank is greatest for middle distances.\\
{\bf (S.I.d) Stability w.r.t.~the unit measuring performance.} LMC predicts performance (measured in time units) equally well when performances are represented in log-time or in time normalized by the event average; representing performances as speed yields worse predictions with the rank 2 predictor.\\
{\bf (S.I.e) Stability w.r.t.~the event predicted.} LMC performs well both when the predicted event is close to those observed and when it is further from those observed, in terms of event distance.\\
{\bf (S.I.f) Stability w.r.t.~the events used in prediction.} LMC performs equally well when predicting from the closest-distance events and when using a bagged version which uses all observed events for prediction.\\
{\bf (S.I.g) Temporal independence of performances.} There are no differences between predictions made only from past events and predictions made from all available events (in the training set).\\
{\bf (S.I.h) Run-time comparisons.} LMC is by orders of magnitude the fastest among the matrix completion methods.\\
{\bf (S.II) Validation of the low-rank model.}\\
{\bf (S.II.a) Synthetic validation.} In a synthetic low-rank model of athletic performance that is a proxy to the real data, the singular components of the model can be correctly recovered by the exact same procedure as on the real data. The generative assumption of low rank is therefore appropriate.\\
{\bf (S.II.b) Universality in sub-groups.} Quality of prediction, the low-rank model, its rank, and the singular components remain mostly unchanged when considering subgroups male/female, older/younger, elite/amateur.\\
{\bf (S.III) Exploration of the low-rank model.}\\
{\bf (S.III.a) Further exploration of the three-number-summary.} The three number summary also correlates with specialization and training standard.\\
{\bf (S.III.b) Preferred distance vs optimal distance.} Most but not all athletes prefer to attempt the event at which they are predicted to perform best. A notable number of younger athletes prefer distances shorter than optimal, and some older athletes prefer distances longer than optimal.\\
{\bf (S.IV) Pivoting and phase transitions.} The pivoting phenomenon in Figure~\ref{fig:illustration}, right panel, is found in the data for any three close-by distances up to the Mile, with anti-correlation between the shorter and the longer distance. Above 5000m, a change in the shorter of the three distances positively correlates with a change in the longer distance.\\
\noindent
{\bf (S.I.a) Evaluation in terms of MAE.} Table~\ref{tab:compare_methods_mae} reports on the goodness of the prediction methods in terms of MAE. Compared with the RMSE (Table~\ref{tab:compare_methods}), the MAEs tend to be smaller than the RMSEs, indicating the presence of outliers. The relative prediction accuracy of the methods when compared to each other is qualitatively the same.\\
\noindent
{\bf (S.I.b) Evaluation in terms of time prediction.} Tables~\ref{tab:rrmse_time} and~\ref{tab:rmae_time} report on the prediction accuracy of the methods tested in terms of the relative RMSE and MAE of predicting time. Relative measures are chosen to avoid bias towards the longer events. The results are qualitatively and quantitatively very similar to the log-time results in Tables~\ref{tab:compare_methods} and~\ref{tab:compare_methods_mae}; this can be explained by the fact that, mathematically, the RMSE and MAE in log-coordinates approximate the relative RMSE and MAE well for small errors.\\
\noindent
{\bf (S.I.c) Individual Events.}
Prediction accuracy of LMC rank 1 and rank 2 on the ten different events is displayed in Figure~\ref{fig:individual_events}.
The reported prediction accuracy is out-of-sample RMSE of predicting log-time, on the top 25 percentiles of Male athletes who have attempted 3 or more events, of events in their best year of performance. The reported RMSE for a given event is the mean over 1000 random prediction samples, standard errors are estimated by the bootstrap.\\
The relative improvement of rank 2 over rank 1 tends to be greater for shorter distances below the Mile. This is in accordance with observation (IV.i) which indicates that the individual exponent is the best descriptor for longer events, above the Mile.\\
\noindent
{\bf (S.I.d) Stability w.r.t.~the measure of performance.} In the main experiment, the LMC model is learnt on the same measure of performance (log-time, speed, normalized) which is predicted. We investigate whether the measure of performance on which the model is learnt influences the prediction by learning the LMC model on each of the measures and comparing all predictions using the log-time measure. Table~\ref{tab:choose_feature} displays prediction accuracy when the model is learnt in any one of the measures of performance. Here we check the effect of calibrating in one coordinate system and testing in another. The reported goodness is the out-of-sample RMSE of predicting log-time, on the top 25 percentiles of male athletes who have attempted 3 or more events, of events in their best year of performance. The reported RMSE for a given event is the mean over 1000 random prediction samples; standard errors are estimated by the bootstrap.\\
We find that there is no significant difference in prediction goodness when learning the model in log-time coordinates or normalized time coordinates. Learning the model in speed coordinates leads to a significantly better prediction than log-time or normalized time when LMC rank 1 is applied, but to a worse prediction with LMC rank 2. As overall prediction with LMC rank 2 is better, log-time or normalized time are the preferable units of performance.\\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=100mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/Accuracy/exp_individual_events/individual_events.pdf}
\end{array}$
\end{center}
\caption{The figure displays the results of prediction by event for the top 25\% of male athletes who attended $\geq3$ events in their year of best performance. For each
event the prediction accuracy of LMC in rank 1 (blue) is compared to prediction accuracy in rank 2 (red).
RMSE is displayed on the $y$-axis against distance on the $x$-axis; the error bars extend two standard deviations of the bootstrapped RMSE either side of the RMSE.
\label{fig:individual_events}}
\end{figure}
\noindent
{\bf (S.I.e) Stability w.r.t.~the event predicted.}
We consider here the effect of the ratio between the predicted event and the closest predictor. For data of the best 25\% of Males
in the year of best performance ({\bf best}), we compute the log-ratio of the closest predicting distance and the predicted distance for Purdy Points,
the power-law formula and LMC in rank 2.
See Figure~\ref{fig:by_deviation}, where this log ratio is plotted against the error. The results show that LMC is far more robust when the predicting distance is far from the predicted distance.\\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=140mm,clip=true,trim= 20mm 0mm 20mm 0mm]{figures/Accuracy/exp_compare_methods/by_deviations}
\end{array}$
\end{center}
\caption{The figure displays the absolute log ratio in distance predicted and predicting distance vs. absolute relative error per athlete.
In each case the log ratio in distance is displayed on the $x$-axis and the absolute errors of single data points on the $y$-axis.
We see that LMC in rank 2 is particularly robust for large ratios in comparison to the power-law and Purdy Points. Data is taken from
the top 25\% of male athletes with {\bf no.~events}$\geq 3$ in the {\bf best }year.
\label{fig:by_deviation}}
\end{figure}
\noindent
{\bf (S.I.f) Stability w.r.t.~the events used in prediction.} We compare whether we can improve prediction by using all events an athlete has attempted, by using one of the aggregate predictors (5.a) bagged power law or (5.b) bagged LMC rank 2. The kernel width $\gamma$ for the aggregate predictors is chosen from $-0.001,-0.01,-0.1,-1,-10$ as the minimizer of out-of-sample RMSE on five groups of 50 randomly chosen validation data points from the training set. The validation setting is the same as in the main prediction experiment.\\
Results are displayed in Table~\ref{tab:learn_weights}. We find that prediction accuracy of (2.b) power law and (5.a) bagged power law is not significantly different, nor is (4.b) LMC rank 2 significantly different from (5.b) bagged LMC rank 2 (both $p>0.05$; Wilcoxon signed-rank on the absolute residuals). Even though the kernel width selected is in the majority of cases $\gamma = -1$ and not $\gamma = -10$, the incorporation of all events does not lead to an improvement in prediction accuracy in our aggregation scheme.
We find there is no significant difference ($p>0.05$; Wilcoxon signed-rank on the absolute errors) between the bagged and vanilla LMC for the top 95\% of runners. This demonstrates
that the relevance of closer events for prediction may be learnt from the data. The same holds for the bagged version of the power-law formula. \\
\noindent
{\bf (S.I.g) Temporal independence of performances.}
We check here whether the results are affected by using only temporally prior attempts in predicting an athlete's performance, see section ``Prediction: Evaluation and Validation'' in ``Methods''. To this end, we compute out-of-sample RMSEs when predictions are made only from events attempted earlier in time.
Table~\ref{tab:rrmse_causal} reports out-of-sample RMSE of predicting log-time, on the top 25 percentiles of Male athletes who have attempted 3 or more events, of events in their best year of performance. The reported RMSE for a given event is the mean over 1000 random prediction samples, standard errors are estimated by the bootstrap.
The results are qualitatively similar to those of Table~\ref{tab:compare_methods} where all events are used in prediction.\\
\noindent
{\bf (S.I.h) Run-time comparisons.} We compare the run-time cost of a single prediction for the three matrix completion methods LMC, nuclear norm minimization, and EM. The other (non-matrix completion) methods are fast or depend only negligibly on the matrix size. We measure run time of LMC rank 3 for completion of a single entry for matrices of $2^8,2^9,\dots,2^{13}$ athletes, generated as described in (S.II.a). This is repeated 100 times.
For a
fair comparison, the nuclear norm minimization algorithm is run with a hyper-parameter already pre-selected by cross validation.
The results are displayed in Figure~\ref{fig:runtime}; LMC is faster by orders of magnitude than nuclear norm and EM and is very robust to the
size of the matrix. The reason computation is slower for the smallest matrix sizes is that $4\times 4$ minors, which are required for rank 3 estimation, are not always available; thus the algorithm must attempt all ranks lower than 3 to find sufficiently many minors.
\\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=60mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/RunTime/exp_runtime/runtime.pdf}
\end{array}$
\end{center}
\caption{The figure displays mean run-times for the 3 matrix completion algorithms tested in the paper: Nuclear Norm, EM and LMC (rank 3). Run-times (y-axis) are recorded for completing a single entry in a matrix of size indicated by the x-axis. The averages are over 100 repetitions, standard errors are estimated by the bootstrap.
\label{fig:runtime}}
\end{figure}
\noindent
{\bf (S.II.a) Synthetic validation.} To validate the assumption of a low-rank generative model, we investigate prediction accuracy and recovery of singular vectors in a synthetic model of athletic performance.
Synthetic data for a given number of athletes is generated as follows:
For each athlete, a three-number summary $(\lambda_1,\lambda_2,\lambda_3)$ is generated independently from a Gaussian distribution with the same mean and variance as the three-number-summaries measured on the real data and with uncorrelated entries.
Matrices of performances are generated from the model
\begin{align}
\text{log}(t)= \lambda_1 f_1(s) + \lambda_2 f_2(s) + \lambda_3 f_3(s)+ \eta(s) \label{eq:synthetic_model}
\end{align}
where $f_1,f_2,f_3$ are the three components estimated from the real data and $\eta(s)$ is a stationary zero-mean Gaussian white noise process with adjustable variance. We take the components estimated in log-time coordinates from the top 25\% of male athletes who have attempted at least 4 events as the three components of the model. The distances $s$ are the same ten event distances as on the real data. In each experiment the standard deviation of $\eta(s)$ is set to a fixed value, which is specified below.
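A sketch of this generative step (Python/NumPy; the function name and interface are ours; the components and the moments of the three-number-summaries are assumed to be supplied from the real-data estimates) is:
\begin{verbatim}
import numpy as np

def synthetic_performances(n_athletes, f, lam_mean, lam_var, noise_sd, seed=0):
    """Generate an (n_athletes x 10) matrix of synthetic log-times.

    f        -- (3 x 10) array whose rows are f1, f2, f3 from the real data,
    lam_mean -- length-3 mean of the three-number-summaries,
    lam_var  -- length-3 variances (entries are generated uncorrelated),
    noise_sd -- standard deviation of the white noise eta(s).
    """
    rng = np.random.default_rng(seed)
    lam = rng.normal(lam_mean, np.sqrt(lam_var), size=(n_athletes, 3))
    eta = rng.normal(0.0, noise_sd, size=(n_athletes, f.shape[1]))
    return lam @ f + eta    # log t = lam1*f1 + lam2*f2 + lam3*f3 + eta
\end{verbatim}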
{\bf Accuracy of prediction:} We synthetically generate a matrix of $1000$ athletes according to the model of Equation~\eqref{eq:synthetic_model}, taking
as distances the same distances measured on the real data.
Missing entries are randomized according to two schemes:
(a) 6 (out of 10) uniformly random missing entries per row/athlete.
(b) per row/athlete, four distance-consecutive entries are non-missing, with the location of the non-missing block chosen uniformly at random.
We then apply LMC rank 2 and nuclear norm minimization for prediction. This setup is repeated 100 times for ten different standard deviations of $\eta$ between 0.01 and 0.1. The results are displayed in Figure~\ref{fig:toy_accuracy}.
LMC outperforms nuclear norm minimization; LMC is also robust to the pattern of missingness, while nuclear norm minimization is negatively affected by clustering in the rows. The RMSE of LMC approaches zero as the noise variance becomes small, while the RMSE of nuclear norm minimization does not.
Comparing the performances with Table~\ref{tab:compare_methods}, an assumption of a noise level of $\mbox{Std}(\eta) = 0.01$ seems plausible. The performance of nuclear norm minimization on the real data is explained by a mix of the sampling schemes (a) and (b).
\begin{figure}
\begin{center}
\includegraphics[width = 100mm]{figures/Assumptions/exp_same_pattern_acc/accuracy.pdf}
\end{center}
\caption{LMC and Nuclear Norm prediction accuracy on the synthetic low-rank data. $x$-axis denotes the noise level (standard deviation of additive noise in log-time coordinates); $y$-axis is out-of-sample RMSE predicting log-time. Left: prediction performance when (a) the missing entries in each row are uniform. Right: prediction performance when (b) the observed entries are consecutive. Error bars are one standard deviation, estimated by the bootstrap.}
\label{fig:toy_accuracy}
\end{figure}
{\bf Recovery of model components.} We synthetically generate a matrix which has a size and pattern of observed entries identical to the matrix of top 25\% of male athletes who have attempted at least 4 events in their best year.
We set $\mbox{Std}(\eta) = 0.01$, which was shown to be plausible in the previous section.
We then complete all missing entries of the matrix using LMC rank 3. After this initial step we estimate singular components
using SVD, exactly as on the real data. Confidence intervals are estimated by a bootstrap on the rows with 100 iterations.
The results are displayed in Figure~\ref{fig:accuracy_singular_vectors}.
\begin{figure}
\begin{center}
\includegraphics[width = 100mm]{figures/Assumptions/exp_same_pattern_svs/svs}
\end{center}
\caption{Accuracy of singular component estimation with missing data on synthetic model of performance. $x$-axis is distance, $y$-axis is components in log-time.
Left: singular components of data generated according to Equation~\ref{eq:synthetic_model} with all data present. Right: singular components of data generated according to Equation~\ref{eq:synthetic_model} with missing entries
estimated with LMC in rank 3; the observation pattern and number of athletes is identical to the real data. The tubes denote one standard deviation estimated by the bootstrap.}
\label{fig:accuracy_singular_vectors}
\end{figure}
One observes that the first two singular components are recovered almost exactly, while the third is slightly deformed. This is due to the smaller singular value of the third component.
\\
\noindent
{\bf (S.II.b) Universality in sub-groups.} We repeat the methodology for component estimation described above and obtain the three components in the following sub-groups:
female athletes, older athletes (> 30 years), and amateur athletes (25-95 percentile range of training standard). Male athletes were considered in the main corpus. For female and older athletes, we restrict to the top 95 percentiles of the respective groups for estimation.
Figure~\ref{fig:svs_subgroups} displays the estimated components of the low-rank model. The individual power law is found to be unchanged in all groups considered. The second and third component vary between the groups but resemble the components for the male athletes. The empirical variance of the second and third component is higher, which may be explained by a slightly reduced consistency in performance, or a reduction in sample size. Whether there is a genuine difference in form or whether the variation is explained by different three-number-summaries in the subgroups cannot be answered from the dataset considered.
Table~\ref{tab:generalize} displays the prediction results in the three subgroups. Prediction accuracy is similar but slightly worse when compared to the male athletes. Again this may be explained by reduced consistency in the subgroups' performances.\\
\begin{figure}
\begin{center}
$\begin{array}{c c c}
\includegraphics[width=50mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs_subgroups/old} &
\includegraphics[width=50mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs_subgroups/ama} &
\includegraphics[width=50mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs_subgroups/fem}
\end{array}$
\end{center}
\caption{The three components of the low-rank model in subgroups. Left: for older runners. Middle: for amateur runners = best event below 25th percentile. Right: for female runners. Tubes around the components are one standard deviation, estimated by the bootstrap. The components are the analogous components for the subgroups described as computed in the left-hand panel of Figure~\ref{fig:svs}. \label{fig:svs_subgroups}}
\end{figure}
\noindent
{\bf (S.III.a) Further exploration of the three-number-summary.} Scatter plots of preferred distance and training standard against the athletes' three-number-summaries are displayed in Figure~\ref{fig:meaning_components}.
The training standard correlates predominantly with the individual exponent (score 1); score 1 vs. standard---$r=-0.89$ ($p \le 0.001$); score 2 vs. standard---$r=0.22$ ($p \le 0.001$); score 3 vs. standard---$r=0.031$ ($p = 0.07$);
all correlations are Spearman correlations with significance computed using a $t$-distribution approximation to the correlation coefficient under the null.
On the other hand preferred distance is associated with all three numbers in the summary, especially the second; score 1 vs. log(specialization)---$r=0.29$ ($p \le 0.001$); score 2 vs. log(specialization)---$r=-0.58$ ($p \le 0.001$); score 3 vs. log(specialization)---$r=-0.14$ ($p \le 0.001$).
The association between the third score and specialization is non-linear with an optimal value around the middle distances. We stress that low correlation does not imply low predictive power; the summary should be considered as a whole, and the LMC predictor is non-linear. Also, we observe that correlations increase when considering only performances over certain distances, see Figure~\ref{fig:svs}.\\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=130mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs/score_vs_standard.png} \\
\includegraphics[width=130mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/SingularVectors/exp_svs/score_vs_spec.png}
\end{array}$
\end{center}
\caption{Scatter plots of training standard vs.~three-number-summary (top) and preferred distance vs.~three-number-summary.
In each case the individual exponents, 2nd and 3rd scores ($\lambda_2$, $\lambda_3$) are displayed on the $y$-axis and
the log-preferred distance and training standard on the $x$-axis.
\label{fig:meaning_components}}
\end{figure}
\noindent
{\bf (S.III.b) Preferred distance vs optimal distance.} For the top 95\% of male athletes who have attempted 3 or more events, we use LMC rank 2 to compute which percentile they would achieve in each event. We then determine the distance of the event at which they would achieve the best percentile, to which we refer as the ``optimal distance''. Figure~\ref{fig:best_event_pred} shows for each athlete the difference between their preferred and optimal distance.
It can be observed that the large majority of athletes prefer to attempt events in the vicinity of their optimal event. There is a group of young athletes who attempt events which are shorter than the predicted optimal distance, and a group of old athletes attempting events which are longer than optimal. One may hypothesize that both groups could be explained by social phenomena: young athletes usually start to train on shorter distances, regardless of their potential over long distances. Older athletes may be biased to attempting endurance type events. \\
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=100mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/Aging/exp_predicted_optimal_event/best_event_predicted.png}
\end{array}$
\end{center}
\caption{Difference of preferred distance and optimal distance, versus age of the athlete, colored by specialization distance. Most athletes prefer the distance they are predicted to be best at. There is a mismatch of best and preferred for a group of younger athletes who have greater potential over longer distances, and for a group of older athletes whose potential is maximized over shorter distances than attempted.
\label{fig:best_event_pred}}
\end{figure}
\noindent
{\bf (S.IV) Pivoting and phase transitions.} We look more closely at the pivoting phenomenon illustrated in Figure~\ref{fig:illustration} top right, and the phase transition discussed in observation (V). We consider the top $25\%$ of male athletes who have attempted at least 3 events, in their best year.
We compute 10 performances of equivalent standard by using LMC in rank 1 in log-time coordinates, by
setting a benchmark performance over the marathon and sequentially predicting each lower distance (marathon
predicts HM, HM predicts 10km etc.). This yields equivalent benchmark performances $t_{1},\dots,t_{10}$.
We then consider triples of consecutive distances $s_{i-1},s_i,s_{i+1}$ (excluding the Mile since it is close in distance to the 1500m) and study the pivoting behaviour on the data set, by performing the analogous prediction displayed in Figure~\ref{fig:illustration}.
More specifically, for each triple, we predict the performance on the distance $s_{i+1}$ using LMC rank 2, from the performances over the distances $s_{i-1}$ and $s_{i}$. The prediction is performed in two ways, once with and once without perturbation of the benchmark performance at $s_{i-1}$, which we then compare. Intuitively, this corresponds to comparing the red to the green curve in Figure~\ref{fig:illustration}. In mathematical terms:
\begin{enumerate}
\item We obtain a prediction $\widehat{t}_{i+1}$ for the distance $s_{i+1}$ from the benchmark performances $t_{i}$, $t_{i-1}$ and consider this as the unperturbed prediction, and
\item We obtain a prediction $\widehat{t}_{i+1}+\delta(\epsilon)$ for the distance $s_{i+1}$ from the benchmark performance $t_{i}$ on $s_i$ and the perturbed performance $(1+\epsilon)t_{i-1}$ on the distance $s_{i-1}$, considering this as the perturbed prediction.
\end{enumerate}
We record these estimates for $\epsilon = -0.1,-0.09,\dots,0,0.01,\dots,0.1$ and calculate the relative change of the perturbed prediction with respect to the unperturbed one, which is $\delta(\epsilon)/\widehat{t}_{i+1}$. The results are displayed in Figure~\ref{fig:pivots}.
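The perturbation step can be sketched as follows (Python/NumPy; \texttt{lmc\_rank2\_predict} stands for any rank 2 LMC predictor of $t_{i+1}$ from the pair $(t_{i-1},t_i)$ and is assumed rather than shown):
\begin{verbatim}
import numpy as np

def pivot_response(lmc_rank2_predict, t_prev, t_mid, eps_grid):
    """Relative change of the predicted t_{i+1} when t_{i-1} is perturbed
    by a relative amount eps while t_i is kept fixed."""
    t_hat = lmc_rank2_predict(t_prev, t_mid)          # unperturbed prediction
    changes = []
    for eps in eps_grid:
        t_pert = lmc_rank2_predict((1.0 + eps) * t_prev, t_mid)
        changes.append((t_pert - t_hat) / t_hat)      # delta(eps) / t_hat
    return np.array(changes)

# e.g. eps_grid = np.linspace(-0.10, 0.10, 21)
\end{verbatim}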
We find that for pivot distances $s_{i}$ shorter than 5km, a slower performance on the shorter distance $s_{i-1}$ leads to a faster performance over the longer distance $s_{i+1}$, insofar as this is predicted by the rank 2 predictor.
On the other hand we find that for pivot distances greater than or equal to 5km, a faster performance over the shorter distance
also implies a faster performance over the longer distance.
\begin{figure}
\begin{center}
$\begin{array}{c}
\includegraphics[width=100mm,clip=true,trim= 0mm 0mm 0mm 0mm]{figures/HigherRank/exp_negative_correlation/pivots}
\end{array}$
\end{center}
\caption{Pivot phenomenon in the low-rank model. The figure quantifies the strength and sign of pivoting as in Figure~\ref{fig:illustration}, top right, at different middle distances $s_i$ (x-axis). The computations are based on equivalent log-time performances $t_{i-1},t_i,t_{i+1}$ at consecutive triples $s_{i-1},s_i,s_{i+1}$ of distances. The y-coordinate indicates the signed relative change of the LMC rank 2 prediction of $t_{i+1}$ from $t_{i-1}$ and $t_i$, when $t_i$ is fixed and $t_{i-1}$ undergoes a relative change of $1\%, 2\%,\ldots, 10\%$ (red curves, line thickness is proportional to change), or $-1\%, -2\%,\ldots, -10\%$ (blue curves, line thickness is proportional to change).
For example, the largest peak corresponds to a middle distance of $s_i = 400m$. When predicting 800m from 400m and 200m, the predicted log-time $t_{i+1}$ (= 800m performance) decreases by 8\% when $t_{i-1}$ (= 200m performance) is increased by 10\% while $t_i$ (= 400m performance) is kept constant.
\label{fig:pivots}}
\end{figure}
\noindent
\normalsize
\pagestyle{empty}
\bibliographystyle{plain}
\section{Introduction}
Billiards are dynamical systems generated by the motion of a point particle
along the geodesics on a compact Riemannian manifold $Q$ with boundary. Upon
hitting the boundary of $Q$, the particle changes its velocity according to
the law of elastic reflections. The studies of chaotic billiard systems were
pioneered by Sina\v{\i} in his seminal paper \cite{Si70} on dispersing
billiards. A major feature of billiards which makes them arguably the most
visual dynamical systems is that all their dynamical and statistical
properties are completely determined by the shape of the billiard table $Q$
and in fact by the structure of the boundary $\partial Q$.
Studies of convex billiards, which started much earlier, demonstrated that convex billiards can have regular dynamics and even be integrable. Such examples are billiards in circles or in squares, which everybody studied (without knowing that they were studying billiards) in middle or high school.
Jacobi proved integrability of billiards in ellipses by introducing
elliptical coordinates in which the equations of motion are separated.
Birkhoff conjectured that ellipses are the only two-dimensional smooth convex tables which generate completely integrable billiards. Later
Lazutkin \cite{La} proved that all two-dimensional convex billiards with
sufficiently smooth boundary admit caustics and hence they can not be ergodic
(see also \cite{Dou82}).
The first examples of hyperbolic and ergodic billiards with dispersing as
well as with focusing components were constructed in \cite{Bu74A}. A closer
analysis of these examples allowed one to realize that there is another mechanism of chaos (hyperbolicity) besides the mechanism of dispersing which generates hyperbolicity in dispersing billiards. This made it possible to construct hyperbolic and ergodic billiards which do not have dispersing components on the boundary \cite{Bu74B,Bu79}. Some billiards on convex tables also belong to this category. The {\it first} one was a table whose boundary consists of a major arc and a chord connecting its two end
points. Observe that this billiard is essentially equivalent to the one
enclosed by two circular arcs symmetric with respect to the cutting chord.
This billiard belongs to the class of (chaotic) flower-like billiards. The
boundaries of these tables have the smoothness of order $C^0$. The stadium
billiard (which became strangely much more popular than the others) appeared
as one of many examples of convex ergodic billiards with smoother ($C^1$)
boundary. Observe that in a flower-like billiard, all the circles generated
by the corresponding petals (circular arcs) completely lie within the
billiard table (flower).
In the present paper we consider (not necessarily small) perturbations of the
first class of chaotic focusing billiards, i.e., with boundary made of a
major arc and a chord. While keeping the major arc, we replace the chord (a
neutral or zero curvature component of the boundary) by a circular arc with
smaller curvature. This type of billiard was constructed in \cite{CMZZ}, with certain numerical results. We prove rigorously that the corresponding
billiard tables generate hyperbolic billiards under the conditions that the
chord is not too long, and the new circular arc has sufficiently small
curvature. More precisely, we assume that the length of the chord does not
exceed the radius of the circular component of the boundary, see Theorem
\ref{main}. This condition is a purely technical one, and we conjecture that
the hyperbolicity holds without this restriction.
It is worthwhile to recall that the defocusing mechanism of chaos was one of a
few examples where discoveries of new mechanisms/laws of nature were made in
mathematics rather than in physics. No wonder that physicists did not believe
that, even though rigorous proofs were present, until they check it
numerically. After that, stadia and other focusing billiards were built in
physics labs over the world. However, intuitive ``physical" understanding of
this mechanism always was that defocusing must occur after any reflection off
the focusing part of the boundary. Indeed, there are just a few very special
classes of chaotic (hyperbolic) billiards where this condition was violated
(see e.g. \cite{BD}). However, these billiards were specially constructed to
get hyperbolic billiards. To the contrary, hyperbolicity of asymmetric lemons
came as a complete surprise to everybody. This class of chaotic billiards
forces physicists as well as mathematicians to reconsider their understanding
of this fundamental mechanism of chaos.
Our billiards can also be viewed as far-reaching generalizations of classical
lemon-type billiards. The lemon billiards were introduced by Heller and
Tomsovic \cite{HeTo} in 1993, by taking the intersection of two unit disks,
while varying the distance between their centers, say $b$. This family of
billiards has been extensively studied numerically in the physics literature in
relation with the problems of quantum chaos (see \cite{MHA,ReRe}). The
coexistence of the elliptic islands and chaotic region has also been observed
numerically for all of the lemon tables as long as $b\neq 1$. Therefore, the
lemon table with $b=1$ is the {\it only} possible billiard system with
complete chaos in this family. See also \cite{LMR99,LMR01} for the studies of
classical and quantum chaos of lemon-type billiards with general quadric
curves.
\begin{figure}[h]
\begin{overpic}[width=2.5in]{basic.eps}
\put(70,53){\small$Q$}
\put(90,47){\small$r$}
\put(23,63){\small$R$}
\put(52,44){\small$b$}
\put(80,65){\small$A$}
\put(80,17){\small$B$}
\end{overpic}
\caption{Basic construction of an asymmetric lemon table $Q(b,R)$. }
\label{basic}
\end{figure}
The lemon tables were embedded into a 3-parameter family--the {\it asymmetric
lemon billiards} in \cite{CMZZ}, among which the ergodicity is {\it no
longer} an exceptional phenomenon. More precisely, let $Q(r,b,R)$ be the
billiard table obtained as the intersection of a disk $D_r$ of radius $r$
with another disk $D_R$ of radius $R> r$, where $b>0$ measures the distance
between the centers of these two disks (see Fig.~\ref{basic}). Without loss
of generality, we will assume $r=1$ and denote the lemon table by
$Q(b,R)=Q(1,b,R)$. Restrictions on $b$ and $R$ will be specified later on to
ensure the hyperbolicity of the billiard systems on these asymmetric lemon
tables. On one hand, these billiard tables have extremely simple shape, as
the boundary of the billiard table $Q(b,R)$ only consists of two circular
arcs. Yet on the other hand, these systems already exhibit rich dynamical
behaviors, as it has been numerically observed in \cite{CMZZ} that there
exists an infinite strip $\mathcal{D}\subset [1,\infty)\times [0,\infty)$,
such that for any $(b,R)\in \mathcal{D}$, the billiard system on $Q(b,R)$ is
ergodic.
In this paper we give a rigorous proof of the hyperbolicity on a class of
asymmetric lemon billiards $Q(b,R)$. Our approach is based on the analysis of
continued fractions generated by the billiard orbits, which were introduced
by Sina\v{\i} \cite{Si70}, see also \cite{Bu74A}. Continued fractions are
intrinsic objects for billiard systems, and therefore they often provide
sharper results than those one gets by the abstract cone method, which deals
with hyperbolic systems of any nature and does not explore directly some
special features of billiards. In fact, already in the fundamental paper
\cite{Si70} invariant cones were immediately derived from the structure of
continued fractions generated by dispersing billiards. The study of
asymmetric lemon billiards demonstrates that defocusing mechanism can
generate chaos in much more general setting than it was thought before.
\subsection{Main results}
Let $b>0$ and $R>1$ be two positive numbers, $Q=Q(b,R)$ be the asymmetric
lemon table obtained by intersecting the unit disc $D_1$ with $D_R$, where
$b>0$ measures the distance between the two centers of $D_1$ and $D_R$. Let
$\Gamma=\partial Q$ be the boundary of $Q$, and $\Gamma_1$ be the circular
boundary component of $Q$ on the disk $D_1$, and $\Gamma_R$ be the circular
boundary component of $Q$ on the disk $D_R$. Let $A$ and $B$ be the points of intersection of $\Gamma_1$ and $\Gamma_R$, which we will call the
{\it corner} points of $Q$. It is easy to see the following two extreme
cases: $Q(b,R)=D_1$ when $b\le R-1$, and $Q(b,R)=\emptyset$ when $b\ge 1+R$.
So we will assume $b\in (R-1,R+1)$ for the rest of this paper.
We first review some properties of periodic points of the billiard system on
$Q(b,R)$. It is easy to see that there is no fixed point, and exactly one
period 2 orbit colliding with both arcs\footnote{There are some other period
2 orbits which only collide with $\Gamma_1$. These orbits are parabolic.},
say $\mathcal{O}_2$, which moves along the segment passing through both
centers. The following result is well known, see \cite{Woj86} for example.
\begin{lem}
The orbit $\mathcal{O}_2$ is hyperbolic if $1<b<R$, is parabolic if $b=1$ or
$b=R$, and is elliptic if $b<1$ or $b>R$.
\end{lem}
It has been observed in \cite{CMZZ} that under the condition $b<1$ or $b>R$,
$\mathcal{O}_2$ is actually nonlinearly stable (see also \cite{OP05}). That
is, the orbit $\mathcal{O}_2$ is surrounded by some islands. Therefore, the
following is a necessary condition such that the billiard system on $Q(b,R)$
is hyperbolic.
\vskip.1in
\noindent\textbf{(A0)} The parameters $(b,R)$ satisfy $\max\{R-1,1\}< b< R$.
\vskip.1in
In this paper we prove that the billiard system on $Q(b,R)$ is completely
hyperbolic under the assumption \textbf{(A0)} and some general assumptions
\textbf{(A1)}--\textbf{(A3)}. As these assumptions are rather technical, we
will state them in Section \ref{assumption}.
\begin{thm}\label{hyper}
Let $Q(b,R)$ be an asymmetric lemon table satisfying the assumptions
\textbf{(A0)}--\textbf{(A3)}. Then the billiard system on $Q(b,R)$ is
hyperbolic.
\end{thm}
The proof of Theorem \ref{hyper} is given in Section \ref{provehyper}.
To provide more intuition for these conditions, we consider a special class of asymmetric lemon billiards. We first cut the unit disk $D_1$ by a chord with end points $A$ and $B$, and let $\Gamma_{1}$ be the major arc of the unit circle with end points $A,B$. Denote by $Q_0$ the larger part of the disk
whose boundary contains $\Gamma_1$. By the classical defocusing mechanism,
the billiard on $Q_0$ is hyperbolic and ergodic. Now replace the chord with a
circular arc on the circle $D_R$, for some large radius $R$. Note that the
distance between the two centers is given by
$b=(R^2-|AB|^2/4)^{1/2}-(1-|AB|^2/4)^{1/2}$. The resulting table
$Q(R):=Q(b,R)$ can be viewed as a perturbation of $Q_0$. The following
theorem shows that the billiard system on $Q_0$, while being nonuniformly
hyperbolic, is robustly hyperbolic under suitable perturbations.
\begin{thm}\label{main}
Let $\Gamma_{1}$ be the major arc of the unit circle whose end points $A,B$
satisfy $|AB|<1$. Then there exists $R_\ast>1$ such that for
each $R\ge R_\ast$, the billiard system on the table $Q(R)$ with two
corners at $A,B$ is hyperbolic.
\end{thm}
The proof of Theorem \ref{main} is given in Section \ref{provemain}.
\begin{remark}
The hyperbolicity of the billiard system guarantees that a typical
(infinitesimal) wave front in the phase space grows exponentially fast along
the iterations of the billiard map. Therefore, one can say that these
billiards in Theorem \ref{main} still demonstrate the defocusing mechanism.
However, the circle completing each of the boundary arcs of the table
$Q(b,R)$ {\it contains} the entire table. Therefore, the defocusing mechanism
can generate hyperbolicity even in the case when the separation condition is
strongly violated.
\end{remark}
The assumption $|AB|<1$ in Theorem \ref{main} is a purely technical one, and
it is used only once in the proof of Theorem \ref{main} to ensure $n^\ast\ge 6$
(see the definition of $n^\ast$ in \S \ref{provemain}). Clearly this
assumption $|AB|<1$ is stronger than the assumption that $\Gamma_1$ is a
major arc. We conjecture that as long as $\Gamma_1$ is a major arc, the
billiard system on $Q(R)$ is completely hyperbolic for any large enough
$R$.
\begin{conj}
Fix two points $A$ and $B$ on $\partial D_1$ such that $\Gamma_1$ is a major
arc. Then the billiard system on $Q(b,R)$ is hyperbolic if the center of the
disc $D_R$ lies outside of the table.
\end{conj}
To ease the task of reading, we provide here a list of the notations used in this paper.
\begin{center}
{The List of Notations}
\end{center}
\begin{flushleft}
\begin{tabular}{ l p{3.9in} }
$Q(r,b,R)=D_r\cap D_R$ & the asymmetric lemon table as the intersection of
$D_r$ with $D_R$. We usually set $r=1$ and denote it by $Q(b,R)=Q(1,b,R)$. We also
denote $Q(R)=Q(b,R)$ if $b$ is determined by $R$.\\
$\Gamma=\partial Q(b,R)$ & the boundary of $Q(b,R)$, which consists of two
arcs: $\Gamma_1$ and $\Gamma_R$.\\
$\mathcal{M}=\Gamma\times [-\pi/2,\pi/2]$ & the phase space with coordinate
$x=(s,\varphi)$,
which consists of $\mathcal{M}_1$ and $\mathcal{M}_R$.\\
$\mathcal{F}$ & the billiard map on the phase space $\mathcal{M}$ of $Q(b,R)$.\\
$\mathcal{S}_1$ & the set of points in $\mathcal{M}$ where $\mathcal{F}$ is not well defined or
not
smooth.\\
$\mathcal{B}^{\pm}(V)$ & the curvature of the orthogonal transversal of the beam of
lines generated by a tangent vector $V\in T_x\mathcal{M}$ before and after
the reflection at $x$, respectively.\\
$d(x)=\rho\cdot \cos\varphi$ & the half of the chord cut out by the
trajectory of the billiard orbit in the disk $D_\rho$, where $\rho\in\{1,R\}$
is given by $x\in
\mathcal{M}_\rho$.\\
$\mathcal{R}(x)=-\frac{2}{d(x)}$ & the reflection parameter that measures the
increment of the curvature after reflection.\\
$\tau(x)$ & the distance from the current position of $x$ to the next reflection with $\Gamma$.\\
$\chi^{\pm}(\mathcal{F},x)$ & the Lyapunov exponents of $\mathcal{F}$ at the point $x$.
\end{tabular}
\begin{tabular}{ l p{3.9in} }
$\eta(x)$ & the number of successive reflections of $x$ on the arc
$\Gamma_\sigma$, where $\sigma$ is given by $x\in\mathcal{M}_\sigma$.\\
$\hat \mathcal{M}_1=\mathcal{M}_1\backslash \mathcal{F}\mathcal{M}_1$ & the set of points
that first enter $\mathcal{M}_1$. Similarly we define $\hat \mathcal{M}_R$.\\
$M_n=\{x\in\hat\mathcal{M}_1:\eta(x)=n\}$ & the set of points in $\hat \mathcal{M}_1$ having
$n$ reflections on $\Gamma_1$ before hitting $\Gamma_R$.\\
$\hat F(x)=\mathcal{F}^{j_0+j_1+2}x$ & the first return map of $\mathcal{F}$ on the subset
$\hat\mathcal{M}_1$, where $j_0$ is the number of reflections of $x$ on $\Gamma_1$,
and $j_1$ is the number of reflections of $x_1=\mathcal{F}^{j_0+1}x$ on $\Gamma_R$.\\
$\hat\tau_k=\tau_k-j_k\hat d_k-j_{k+1}\hat d_{k+1}$ & a notation for short,
where $\tau_k=\tau(x_k)$,
$d_k=d(x_k)$, $\hat d_k=\frac{d_k}{j_k+1}$.\\
$\hat \mathcal{R}_k=-\frac{2}{\hat d_k}$ & a notation for short.\\
$M=\bigcup_{n\ge 0}\mathcal{F}^{\lceil n/2\rceil}M_n$ & a subset of $\mathcal{M}_1$. Compare
with the set $\hat \mathcal{M}_1=\bigcup_{n\ge 0}M_n$.\\
$F(x)=\mathcal{F}^{i_0+i_1+i_2+2}x$ & the first return map of $\mathcal{F}$ on $M$, where
$i_0$ is the number of reflections of $x$ on $\Gamma_1$, $i_1$ is the number
of reflections of $x_1=\mathcal{F}^{i_0+1}x$ on $\Gamma_R$, and $i_2$ is the number
of
reflections of $x_2=\mathcal{F}^{i_1+1}x_1$ on $\Gamma_1$ before entering $M$.\\
$\bar \tau_k=\tau_k-i_k\hat d_k-d_{k+1}$ & a notation for short. Only $\bar
\tau_1$ is used in this paper.
\end{tabular}
\end{flushleft}
\section{Preliminaries for general convex billiards}
Let $Q\subset \mathbb{R}^2$ be a compact convex domain with piecewise smooth
boundary, $\mathcal{M}$ be the space of unit vectors based at the boundary
$\Gamma:=\partial Q$ pointing inside of $Q$. The set $\mathcal{M}$ is endowed with
the topology induced from the tangent space $TQ$. A point $x\in\mathcal{M}$
represents the initial status of a particle, which moves along the ray
generated by $x$ and then makes an elastic reflection after hitting $\Gamma$.
Denote by $x_1\in\mathcal{M}$ the new status of the particle right after this
reflection. The billiard map, denoted by $\mathcal{F}$, maps each point $x\in\mathcal{M}$ to
the point $x_1\in\mathcal{M}$. Note that each point $x\in\mathcal{M}$ has a natural
coordinate $x=(s, \varphi)$, where $s\in [0,|\Gamma|)$ is the arc-length
parameter of $\Gamma$ (oriented counterclockwise), and $\varphi\in
[-\pi/2,\pi/2]$ is the angle formed by the vector $x$ with the inner normal
direction of $\Gamma$ at the base point of $x$. In particular, the phase
space $\mathcal{M}$ can be identified with a cylinder $\Gamma\times[-\pi/2,\pi/2]$.
The billiard map preserves a smooth probability measure $\mu$ on $\mathcal{M}$, where
$d\mu=(2|\Gamma|)^{-1}\cdot\cos\varphi\, ds\, d\varphi$.
For our lemon table $Q=Q(b,R)$, the boundary $\Gamma=\partial Q$ consists of
two parts $\Gamma_1$ and $\Gamma_R$. A collision vector based at the corner point $A$ or $B$ has $s$-coordinate $s=0$ or $s=|\Gamma_1|$, respectively.
Then we can view $\mathcal{M}$ as the union of two closed rectangles:
$$\mathcal{M}_{1}:=\{(s,\varphi)\in\mathcal{M}\,:\,0\le s \le |\Gamma_1|\}\,\,\,\,\,\,\text{ and
}\,\,\,\,\,\mathcal{M}_{R}:=\{(s,\varphi)\in\mathcal{M}\,:\,|\Gamma_1|\le s\le |\Gamma|\}.$$ For
any point $x=(s,\varphi)\in \mathcal{M}$, we define $d(x)=\cos\varphi$ if
$x\in\mathcal{M}_1$, and $d(x)=R\cos\varphi$ if $x\in\mathcal{M}_R$. Geometrically, the
quantity $2d(x)$ is the length of the chord in the complete disk ($D_1$ or
$D_R$) determined by the trajectory of $x$.
Let $\mathcal{S}_0=\{(s,\varphi)\in\mathcal{M}: s=0\text{ or } s=|\Gamma_1|\}$ be the set of
post-reflection vectors $x\in\mathcal{M}$ that are based at one of the corners $A$ or
$B$. We define $\mathcal{S}_1=\mathcal{S}_0\cup \mathcal{F}^{-1}\mathcal{S}_0$ as the set of points on which
$\mathcal{F}$ is not well-defined. Note that $\mathcal{F}^{-1}\mathcal{S}_0$ consists of $4$
monotone curves $\varphi=\varphi_i(s)$ ($1\le i\le 4$) in $\mathcal{M}$. Moreover, we
define $\mathcal{S}_{-1}:=\mathcal{S}_0\cup \mathcal{F}\mathcal{S}_0$. The set $\mathcal{S}_{\pm 1}$ is called the
{\it singular set} of the billiard map $\mathcal{F}^{\pm 1}$.
\begin{figure}[htb]
\centering
\begin{overpic}[width=85mm]{wave.eps}
\put(25,43){\small$\Gamma$}
\put(31,27){\small$x$}
\put(25,18){\small$\gamma(t)$}
\put(90,42){\small$\sigma$}
\put(66,28){\small$L(0)$}
\put(76,23){\small$s(0)$}
\put(70,10){\small$L(t)$}
\end{overpic}
\caption{\small{A bundle of lines generated by $\gamma$, and the cross-section $\sigma$.}}
\label{wavefront}
\end{figure}
A way to understand chaotic billiards lies in the study of infinitesimal
families of trajectories. More precisely, let $x\in \mathcal{M}\setminus\mathcal{S}_1$,
$V\in T_x\mathcal{M}$, $\gamma: (-\varepsilon_0,\varepsilon_0)\to\mathcal{M}$, $t
\mapsto\gamma(t)=(s(t),\varphi(t))$ be a smooth curve for some $\varepsilon_0>0$,
such that $\gamma(0)=x$ and $\gamma'(0)=V$. Clearly the choice of such a
smooth curve is not unique. Each point in the phase space $\mathcal{M}\subset TQ$ is
a unit vector on the billiard table $Q$. Let $L(t)$ be the line that passes
through the vector $\gamma(t)$, see Fig. \ref{wavefront}. Putting these lines
together, we get a beam of post-reflection lines, say $\mathcal{W}^{+}$, generated by
the path $\gamma$. Let $\sigma$ be the orthogonal cross-section of this
bundle passing through the point $s(0)\in\Gamma$. Then the post-reflection
curvature of the tangent vector $V$, denoted by $\mathcal{B}^+(V)$, is defined as the
curvature of $\sigma$ at the point $s(0)$. Similarly we define the
pre-reflection curvature $\mathcal{B}^{-}(V)$ (using the beam of dashed lines in Fig.
\ref{wavefront}).
Note that $\mathcal{B}^{\pm}(V)$ depend only on $V$, and are independent of the
choices of curves tangent to $V$. These two quantities are related by the
equation
\begin{equation}\label{fpm}
\mathcal{B}^+(V)-\mathcal{B}^-(V)=\mathcal{R}(x),
\end{equation}
where $\mathcal{R}(x):=-2/d(x)$ is the {\it{reflection parameter}} introduced in
\cite{Si70}, see also \cite[\S 3.8]{CM06}. In fact, \eqref{fpm} is the
well-known {\it Mirror Equation} in geometric optics. Note that $\mathcal{R}(x)>0$ on
dispersing components and $\mathcal{R}(x)<0$ on focusing components of the boundary
$\partial Q$. Since we mainly use $\mathcal{B}^-(V)$ in this paper, we drop the minus
sign and simply denote it by $\mathcal{B}(V)=\mathcal{B}^-(V)$.
Let $\tau(x)$ be the distance from the current position of $x$ to the next
reflection with $\Gamma$. According to (\ref{fpm}), one gets the evolution
equation for the curvatures of the pre-reflection wavefronts of $V$ and its
image $V_1=D\mathcal{F}(V)$ at $\mathcal{F} x$:
\begin{equation}\label{ff1}
\mathcal{B}_1(V):=\mathcal{B}(V_1)=\frac{1}{\tau(x)+\dfrac{1}{\mathcal{R}(x)+\mathcal{B}(V)}}.
\end{equation}
More generally, let $x\in \mathcal{M}\setminus \mathcal{S}_1$ be a point with $\mathcal{F}^k x\notin
\mathcal{S}_1$ for all $1\le k\le n$, $V\in T_x\mathcal{M}$ be a nonzero tangent vector, and
$V_{n}=D\mathcal{F}^{n} V$ be its forward iterations. Then by iterating the formula
\eqref{ff1}, we get
\begin{align}
\mathcal{B}(V_{n})&=\frac{1}{\tau(x_{n-1})+\dfrac{1}{\mathcal{R}(x_{n-1})+\dfrac{1}{\tau(x_{n-2})
+\dfrac{1}{\mathcal{R}(x_{n-2})+\dfrac{1}{\ddots+\dfrac{1}{\tau(x)+\dfrac{1}{\mathcal{R}(x)+\mathcal{B}(V)}}}}}}}
\label{ff2}
\end{align}
with $x_{k}=\mathcal{F}^{k}x$. See also \cite[\S 3.8]{CM06} for Eq. \eqref{ff1} and
\eqref{ff2}.
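As a purely numerical illustration (not used in any proof), the recursion \eqref{ff1} can be iterated directly; the following sketch (in Python, with names of our choosing) propagates the pre-reflection curvature along an orbit, given the sequences of free path lengths $\tau(x_k)$ and reflection parameters $\mathcal{R}(x_k)$.
\begin{verbatim}
def propagate_curvature(B0, taus, Rs):
    """Iterate the recursion B_{k+1} = 1 / (tau_k + 1 / (R_k + B_k)).

    B0      -- pre-reflection curvature of the initial tangent vector,
    taus[k] -- free path length tau(x_k),
    Rs[k]   -- reflection parameter R(x_k) = -2 / d(x_k).
    Returns the list [B_0, B_1, ..., B_n] of pre-reflection curvatures.
    """
    curvatures = [B0]
    for tau, R in zip(taus, Rs):
        curvatures.append(1.0 / (tau + 1.0 / (R + curvatures[-1])))
    return curvatures
\end{verbatim}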
\vskip.3cm
For convenience, we introduce the standard notations for continued fractions
\cite{Kh64}. In the following, we will denote $[a]:=\frac{1}{a}$. The reader
should not confuse this with the integer part of $a$, which is never used in
this paper\footnote{We use the ceiling function $\lceil
t\rceil=\min\{n\in\mathbb{Z}: n\ge t\}$ in \S \ref{induced}, and the floor
function $\lfloor t\rfloor=\max\{n\in\mathbb{Z}: n\le t\}$ in \S
\ref{provemain}.}.
\begin{definition}
Let $a_n$, $n\ge0$ be a sequence of real numbers. The finite continued
fraction $[a_1,a_2,\cdots, a_n]$ is defined inductively by:
\begin{align*}
[a_1]=\frac{1}{a_1},\, [a_1,a_2]&=\frac{1}{a_1+[a_2]},\, \cdots,\,
[a_1,a_2,\cdots, a_n]=\frac{1}{a_1+[a_2,\cdots, a_n]}.
\end{align*}
Moreover, we denote $[a_0;a_1,a_2,\cdots, a_n]=a_0+[a_1,a_2,\cdots, a_n]$.
\end{definition}
Using this notation, we see that the evolution \eqref{ff2} of the curvatures
of $V_n=D\mathcal{F}^n(V)$ can be re-written as
\begin{equation}\label{ffn}
\mathcal{B}(V_n)=[\tau(x_{n-1}),\mathcal{R}(x_{n-1}),\tau(x_{n-2}),
\mathcal{R}(x_{n-2}),\cdots,\tau(x),\mathcal{R}(x)+\mathcal{B}(V)].
\end{equation}
Note that Eq. \eqref{ffn} is a recursive formula and hence can be extended
{\it formally} to an infinite continued fraction.
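For concreteness, a finite continued fraction in this notation can be evaluated from the inside out; a short sketch (Python, with names of our choosing) is given below. Applying it to \eqref{ffn} with $n=2$, i.e.\ \texttt{cf(tau1, R1, tau0, R0 + B0)}, reproduces the recursion \eqref{ff1} iterated twice.
\begin{verbatim}
def cf(*a):
    """Evaluate [a_1, a_2, ..., a_n] = 1/(a_1 + 1/(a_2 + ... + 1/a_n))."""
    val = 0.0
    for x in reversed(a):
        val = 1.0 / (x + val)
    return val

def cf_head(a0, *a):
    """Evaluate [a_0; a_1, ..., a_n] = a_0 + [a_1, ..., a_n]."""
    return a0 + cf(*a)
\end{verbatim}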
We will need the following basic properties of continued fractions to perform
some reductions. Let $x=[a_1,\cdots,a_n]$ be a finite continued fraction.
Then we can combine two finite continued fractions in the following ways:
\begin{align}
[b_1,\cdots,b_n+x]&=[b_1,\cdots,b_n,a_1,\cdots,a_n],\label{combine}\\
[b_1,\cdots,b_n,x]&=[b_1,\cdots,b_n+a_1,a_2,\cdots,a_n],\label{combine2}\\
[b_1,\cdots,b_n,0,a_1,\cdots,a_n]&=[b_1,\cdots,b_n+a_1,a_2,\cdots,a_n].\label{add0}
\end{align}
\begin{pro}\label{cont}
Suppose $a,b,c$ are real numbers such that $B:=a+c+abc\neq 0$. Then the
relation
\begin{align}
[\cdots,x,a,b,c,y,\cdots]&=[\cdots,x+A,B,C+y,\cdots]\label{ABC}
\end{align}
holds for any finite or infinite continued fractions, where $A=\frac{bc}{B}$
and $C=\frac{ab}{B}$.
\end{pro}
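Proposition~\ref{cont} can be checked numerically with the evaluator sketched above; the values below are illustrative.
\begin{verbatim}
def cf(*a):                     # evaluator from the sketch above
    val = 0.0
    for x in reversed(a):
        val = 1.0 / (x + val)
    return val

# check of the contraction identity above on sample values
a, b, c, x, y = 0.7, -1.3, 2.1, 0.4, 1.9
B = a + c + a * b * c
A, C = b * c / B, a * b / B
assert abs(cf(x, a, b, c, y) - cf(x + A, B, C + y)) < 1e-12
\end{verbatim}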
Let $Q$ be a bounded domain with piecewise smooth boundary, $\mathcal{F}$ be the
billiard map on the phase space $\mathcal{M}$ over $Q$. Then the limit $\displaystyle
\chi^+(\mathcal{F},x)=\lim_{n\to\infty}\frac{1}{n}\log\|D_x\mathcal{F}^n\|$, whenever it
exists, is said to be a {\it Lyapunov exponent} of the billiard map $\mathcal{F}$ at
the point $x$. Since $\mathcal{F}$ preserves the smooth measure $\mu$, the other
Lyapunov exponent at $x$ is given by $\chi^-(\mathcal{F},x)=-\chi^+(\mathcal{F},x)$. Then the
point $x$ is said to be hyperbolic, if $\chi^+(\mathcal{F},x)>0$. Moreover, the
billiard map $\mathcal{F}$ is said to be (completely) {\it hyperbolic}, if
$\mu$-almost every point $x\in \mathcal{M}$ is a hyperbolic point. By Oseledets {\it
Multiplicative Ergodic Theorem}, we know that $\chi^+(\mathcal{F},x)$ exists for
$\mu$-a.e. $x\in \mathcal{M}$, and there exists a measurable splitting
$T_x\mathcal{M}=E^u_x\oplus E^s_x$ over the set of hyperbolic points, see
\cite{CM06}.
It is well known that the hyperbolicity of a billiard map is related to the
{\it convergence} of the continued fraction given in Eq. \eqref{ffn} as
$n\to\infty$. In particular, the following proposition reveals the relations
between them. See \cite{Bu74A,CM06, Si70}.
\begin{pro}
Let $x\in \mathcal{M}$ be a hyperbolic point of the billiard map $\mathcal{F}$. Then the
curvature $\mathcal{B}^u(x):=\mathcal{B}(V^u_x)$ of a unit vector $V^u_x\in E^u_x$ is given
by the following infinite continued fraction:
$$\mathcal{B}^u(x)=[\tau(x_{-1}),\mathcal{R}(x_{-1}),\tau(x_{-2}),\mathcal{R}(x_{-2}),\cdots,
\tau(x_{-n}),\mathcal{R}(x_{-n}),\cdots].$$
\end{pro}
Finally we recall an invariant property for consecutive reflections on
focusing boundary components by comparing the curvatures of the iterates of
different tangent vectors. Given two distinct points $a$ and $b$ on the unit
circle $\mathbb{S}^1$, denote by $(a,b)$ the interval from $a$ to $b$
counterclockwise. Given three distinct points $a,b,c$ on $\mathbb{S}^1$,
denote by $a\prec b \prec c$ if $b\in (a,c)$. Endow
$\mathbb{R}\cup\{\infty\}\simeq \mathbb{S}^1$ with the relative position
notation $\prec$ on $\mathbb{S}^1$.
\begin{pro}[\cite{Do91}]\label{order}
Let $X,Y,Z\in T_x\mathcal{M}$ be three tangent vectors at $x\in \mathcal{M}$ satisfying
$\mathcal{B}(X)\prec\mathcal{B}(Y)\prec\mathcal{B}(Z)$. Then for each $n\in\mathbb{Z}$, the iterates
$D\mathcal{F}^nX$, $D\mathcal{F}^nY$ and $D\mathcal{F}^nZ$ satisfy
$$\mathcal{B}(D\mathcal{F}^nX)\prec\mathcal{B}(D\mathcal{F}^nY)\prec\mathcal{B}(D\mathcal{F}^nZ).$$
\end{pro}
\section{Continued fractions for asymmetric lemon billiards}\label{cfrac}
In this section we construct two induced maps of the billiard system
$(\mathcal{M},\mathcal{F})$ on two different but closely related subsets of the phase space
$\mathcal{M}$, and then study the evolutions of continued fractions of the curvatures
$\mathcal{B}(V)$ under these induced maps. Let $Q(b,R)$ be an asymmetric lemon table
obtained as the intersection of a disk of radius 1 with a disk of radius
$R>1$, $\Gamma=\partial Q$, $\mathcal{M}=\Gamma\times [-\pi/2,\pi/2]$ be the phase
space of the billiard map on $Q$. Note that $\mathcal{M}$ consists of two parts:
$\mathcal{M}_1:=\Gamma_1\times[-\pi/2,\pi/2]$ and
$\mathcal{M}_R=\Gamma_R\times[-\pi/2,\pi/2]$, the sets of points in $\mathcal{M}$ based on
the arc $\Gamma_{1}$ and $\Gamma_{R}$, respectively. Assume that $\Gamma_1$
is a major arc.
For $\sigma\in \{1,R\}$, $x\in \mathcal{M}_{\sigma}\setminus \mathcal{S}_1$, let $\eta(x)$ be
the number of successive reflections of $x$ on the arc $\Gamma_{\sigma}$.
That is,
\begin{equation}
\eta(x)=\sup\{n\geq 0\,:\, \mathcal{F}^{k} x\in \mathcal{M}_{\sigma} \text{ for all } k=0,\cdots,n\}.
\end{equation}
For example, $\eta(x)=0$ if $\mathcal{F} x\notin \mathcal{M}_{\sigma}$, and $\eta(x)=\infty$
if $\mathcal{F}^kx\in \mathcal{M}_\sigma$ for all $k\ge 0$. Let $N=\{x\in
\mathcal{M}_1\,:\,\eta(x)=\infty \}$. One can easily check that each point $x\in N$
is either periodic or belongs to the boundary $\{(s,\varphi)\in \mathcal{M}_1\,:\,
\varphi=\pm \pi/2\}$. In particular, $N$ is a null set with $\mu(N)=0$.
Let $\hat \mathcal{M}_{1}:=\{x\in \mathcal{M}_{1}:\mathcal{F}^{-1}x\notin \mathcal{M}_{1}\}$ be the set of
points first entering $\mathcal{M}_1$. Similarly we define $\hat \mathcal{M}_{R}$. The
restriction of $\eta$ on $\hat \mathcal{M}_{1}$ induces a measurable partition of
$\hat \mathcal{M}_{1}$, whose cells are given by $M_n:=\eta^{-1}\{n\}\cap\hat \mathcal{M}_1$
for all $n\geq 0$. Each cell $M_n$ contains all first reflection vectors on
the arc $\Gamma_1$ that will experience exactly $n$ further reflections on
$\Gamma_1$ (hence $n+1$ in total) before hitting $\Gamma_R$. Then it is easy to check that
\begin{equation}\label{decomp}
\mathcal{M}_1=N\cup\bigcup_{n\ge 0}\bigcup_{0\le k\le n}\mathcal{F}^k M_n.
\end{equation}
\subsection{The first induced map of $\mathcal{F}$ on $\mathcal{M}_1$}
Let $x_0\in \hat \mathcal{M}_1$, and $j_0=\eta(x_0)$ be the number of successive
reflections of $x_0$ on $\Gamma_1$. Similarly, we denote $x_1=\mathcal{F}^{j_0+1}
x_0$, and $j_1=\eta(x_1)$. Then the first return map $\hat F$ of $\mathcal{F}$ on
$\hat\mathcal{M}_1$ is given by
$$\hat Fx:=\mathcal{F}^{j_0+j_1+2}x.$$
Note that similar induced systems have appeared in many references on
billiards with convex boundary components; see \cite{CM06,CZ05, M04}. In the
systems considered in these references, the induced systems were shown to be
(uniformly) hyperbolic. However, for our billiard systems on $Q(b,R)$, it is
rather difficult to prove hyperbolicity for this type of induced map.
Thus we introduce a new induced map in the next subsection. To make a
comparison, we next investigate the properties of the induced map
$(\hat\mathcal{M}_1, \hat F)$.
To simplify the notations, we denote by $\tau_0:=\tau(\mathcal{F}^{j_0} x_0)$ the
length of the free path of $\mathcal{F}^{j_0} x_0$, and by $\tau_1:=\tau(\mathcal{F}^{j_1}
x_1)$ the length of the free path of $\mathcal{F}^{j_1} x_1$. Moreover, let
$d_k=d(x_k)$, $\hat d_k=\frac{d_k}{j_k+1}$, $\hat\mathcal{R}_k=-2/\hat d_k$, $\hat
\tau_k=\tau_k-j_k\hat d_k-j_{k+1}\hat d_{k+1}$, for $k=0,1$. Note that $\hat
d_k=d_k$ and $\hat\mathcal{R}_k=\mathcal{R}_k$ if $j_k=0$, and $\hat\tau_k=\tau_k$ if
$j_k=j_{k+1}=0$.
\vskip.1in
Using the relations in Proposition \ref{cont}, we can reduce the long
continued fraction to a shorter one:
\begin{lem}\label{reduce1}
Let $x\in\hat \mathcal{M}_1$, $V\in T_x\mathcal{M}$ and $\hat V_1=D\hat F(V)$. Then $\mathcal{B}(\hat
V_1)$ is given by the continued fraction:
\begin{align}\label{fract}
\mathcal{B}(\hat V_1)&=[\tau_1-j_1\hat d_1,\hat\mathcal{R}_1,
\hat\tau_0,\hat\mathcal{R}_0,-j_0\hat d_0, \mathcal{B}(V)].
\end{align}
\end{lem}
\begin{remark}
Note that in the case $j_0=0$ and $j_1=0$, $\hat F x =\mathcal{F}^2 x$, and the
relation \eqref{fract} reduces to the formula \eqref{ffn} with $n=2$:
$\mathcal{B}(\hat V_1)=[\tau_1,\mathcal{R}_1, \tau_0,\mathcal{R}_0+\mathcal{B}(V)]$.
\end{remark}
\begin{proof}
Suppose a point $x\in \mathcal{M}$ has $m$ consecutive reflections on a circular arc
$\Gamma_\sigma$, where $\sigma\in\{1,R\}$. That is, $\mathcal{F}^{i}x\in
\mathcal{M}_{\sigma}$, for $i=0, \cdots, m$. In this case we always have
$$d(\mathcal{F}^{i}x)= d(x),\, \mathcal{R}(\mathcal{F}^{i}x)=-2/d(x),\, 0\le i\le m,
\text{ and } \tau(\mathcal{F}^{i}x)=2d(x), \, 0\le i< m.$$
Then for any $V\in
T_{x}\mathcal{M}_{\sigma}$,
\begin{align}
\mathcal{B}(D\mathcal{F}^{m+1} V)&=[\tau(\mathcal{F}^{m}x),\underbrace{\mathcal{R}(x),
2 d(x), \mathcal{R}(x),\cdots,2 d(x)}_\text{$m$ times },\mathcal{R}(x)+\mathcal{B}(V)]\nonumber\\
&=[\tau(\mathcal{F}^{m}x),\mathcal{R}(x)/2,-2m\cdot d(x),\mathcal{R}(x)/2+\mathcal{B}(V)],\label{redu0}
\end{align}
see \cite[\S 8.7]{CM06}. Applying this reduction process for each of the two
reflection series on $\Gamma_1$ and $\Gamma_R$ respectively, we see that
\begin{align}
\mathcal{B}(\hat V_1)=[&\tau_1,\mathcal{R}_1/2,-2j_1d_1,\mathcal{R}_1/2,
\tau_0,\mathcal{R}_0/2,-2j_0d_0,\mathcal{R}_0/2+\mathcal{B}(V)].\label{reduction1}
\end{align}
Now we rewrite the last segment in \eqref{reduction1} as
$[\cdots,\mathcal{R}_0/2+\mathcal{B}(V)]=[\cdots,\mathcal{R}_0/2,0,\mathcal{B}(V)]$ by Eq. \eqref{add0}. Then
applying Eq. \eqref{ABC} to the segment $(a,b,c)=(\mathcal{R}_0/2,-2j_0d_0,\mathcal{R}_0/2)$
in Eq. \eqref{reduction1}, we get
\begin{align*}
B_0&:=a+c+abc=\mathcal{R}_0-(\mathcal{R}_0/2)^2\cdot 2j_0d_0
=-\frac{2+2j_0}{d_0}=-\frac{2}{\hat d_0}=\hat\mathcal{R}_0,\\
A_0&:=\frac{bc}{B_0}=-2j_0d_0\cdot \mathcal{R}_0/2\cdot(-\frac{\hat d_0}{2})=-j_0\cdot\hat d_0,\\
C_0&:=\frac{ab}{B_0}=-2j_0d_0\cdot \mathcal{R}_0/2\cdot(-\frac{\hat d_0}{2})=-j_0\cdot\hat d_0.
\end{align*}
Putting them together with \eqref{reduction1}, we have
\begin{align}
\mathcal{B}(\hat V_1)&=
[\tau_1,\mathcal{R}_1/2,-2j_1d_1,\mathcal{R}_1/2,
\tau_0,\mathcal{R}_0/2,-2j_0d_0,\mathcal{R}_0/2,0,\mathcal{B}(V)]\nonumber\\
&=[\tau_1,\mathcal{R}_1/2,-2j_1d_1,\mathcal{R}_1/2,\tau_0+A_0,B_0,0+C_0,\mathcal{B}(V)]\nonumber\\
&=[\tau_1,\mathcal{R}_1/2,-2j_1d_1,\mathcal{R}_1/2,\tau_0-j_0\cdot\hat d_0,
\hat\mathcal{R}_0,-j_0\cdot\hat d_0,\mathcal{B}(V)].\label{inter1}
\end{align}
Similarly we can apply Eq. \eqref{ABC} to the segment
$(\mathcal{R}_1/2,-2j_1d_1,\mathcal{R}_1/2)$, and get $B_1=\hat \mathcal{R}_1$,
$A_1=C_1=-j_1\cdot\hat d_1$. Then we can continue the computation from
\eqref{inter1} and get
\begin{align*}
\mathcal{B}(\hat V_1)&=[\tau_1,\mathcal{R}_1/2,-2j_1d_1,\mathcal{R}_1/2,
\tau_0-j_0\cdot\hat d_0,\hat\mathcal{R}_0,-j_0\cdot\hat d_0,\mathcal{B}(V)]\\
&=[\tau_1+A_1,B_1,\tau_0+C_1-j_0\cdot\hat d_0,\hat\mathcal{R}_0,-j_0\cdot\hat d_0,\mathcal{B}(V)]\\
&=[\tau_1-j_1\hat d_1,\hat\mathcal{R}_1,
\hat\tau_0,\hat\mathcal{R}_0,-j_0\hat d_0, \mathcal{B}(V)],
\end{align*}
where $\hat d_k=\frac{d_k}{j_k+1}$, $\hat\mathcal{R}_k=-2/\hat d_k$ for $k=0,1$, and
$\hat \tau_0=\tau_0-j_0\hat d_0-j_{1}\hat d_{1}$. This completes the proof.
\end{proof}
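Lemma \ref{reduce1} also admits a direct numerical sanity check (an illustration only). One lists the unreduced fraction of $\mathcal{B}(\hat V_1)$, namely the free path $\tau_1$, the $j_1+1$ reflection terms $\mathcal{R}_1$ on $\Gamma_R$ separated by free paths $2d_1$, the free path $\tau_0$, and the $j_0+1$ reflection terms $\mathcal{R}_0$ on $\Gamma_1$ separated by free paths $2d_0$ (the last one augmented by $\mathcal{B}(V)$), and compares it with the reduced form \eqref{fract}. In the Python sketch below the parameters are arbitrary positive numbers, with no geometric realizability required; only the relation $\mathcal{R}_\sigma=-2/d_\sigma$ and the bracket convention $[a_1,\dots,a_n]=1/(a_1+1/(\cdots+1/a_n))$ are used.
\begin{verbatim}
import random

def cf(entries):
    # [a_1, ..., a_n] = 1/(a_1 + 1/(a_2 + ... + 1/a_n))
    val = entries[-1]
    for a in reversed(entries[:-1]):
        val = a + 1.0 / val
    return 1.0 / val

random.seed(2)
worst = 0.0
for _ in range(10000):
    j0, j1 = random.randint(0, 4), random.randint(0, 4)
    d0, d1 = random.uniform(0.3, 1.0), random.uniform(1.0, 3.0)
    t0, t1, BV = (random.uniform(0.1, 2.0) for _ in range(3))
    R0, R1 = -2.0 / d0, -2.0 / d1
    # unreduced chain of B(hat V_1), read backwards from the last free path
    raw = [t1] + [R1, 2 * d1] * j1 + [R1, t0] + [R0, 2 * d0] * j0 + [R0 + BV]
    # reduced chain from Lemma reduce1
    hd0, hd1 = d0 / (j0 + 1), d1 / (j1 + 1)
    red = [t1 - j1 * hd1, -2.0 / hd1,
           t0 - j0 * hd0 - j1 * hd1, -2.0 / hd0, -j0 * hd0, BV]
    lhs, rhs = cf(raw), cf(red)
    worst = max(worst, abs(lhs - rhs) / max(1.0, abs(lhs)))
print("largest relative discrepancy:", worst)   # zero up to rounding errors
\end{verbatim}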
It is clear that the formula in Eq. \eqref{fract} is recursive. For example,
let $\hat V_2=D\hat F^2(V)$, $\tau_2=\tau_0(\hat Fx)$ and $\tau_3=\tau_1(\hat
Fx)$ be the lengths of the free paths for $\hat Fx$. Then we have
\[\mathcal{B}(\hat
V_2) =[\tau_3-j_3\hat d_3,\hat\mathcal{R}_3, \hat\tau_2,\hat\mathcal{R}_2, \hat\tau_1,\hat\mathcal{R}_1,
\hat\tau_0,\hat\mathcal{R}_0,-j_0\hat d_0, \mathcal{B}(V)].\] %
More generally, using the backward iterates $x_{-n}=\hat F^{-n}x$ of $x$, and
the related notations (for example, $d_{-2n}=d_0(x_{-n})$, and
$d_{1-2n}=d_1(x_{-n})$), we get a formal continued fraction
\begin{equation}\label{tradition}
[\tau_0-j_0\hat d_0, \hat\mathcal{R}_0,\hat\tau_{-1}, \hat\mathcal{R}_{-1},
\hat\tau_{-2},\cdots, \hat\mathcal{R}_{1-2n},\hat\tau_{-2n},\hat\mathcal{R}_{-2n},\hat\tau_{-2n-1},\cdots]
\end{equation}
where $\hat d_k=\frac{d_k}{j_k+1}$, $\hat\mathcal{R}_k=-2/\hat d_k$, $\hat
\tau_k=\tau_k-j_k\hat d_k-j_{k+1}\hat d_{k+1}$, for each $k\le -1$.
\begin{remark}
In the dispersing billiard case, each entry of the continued fraction
\eqref{ffn} is positive. Then the Seidel--Stern Theorem (see \cite{Kh64}) implies
that the limit of \eqref{ffn} (as $n\to\infty$) always exists, since the
total time $\tau_0+\cdots+\tau_n\to\infty$. For the reduced continued
fraction \eqref{tradition}, it is clear that $\hat \mathcal{R}_{-n}<0$ for all $n\ge
0$. Moreover, $\hat\mathcal{R}_{-2n}\le\mathcal{R}(x_{-n})\le -2$ for each $n\ge 1$, since
the radius of the small disk is set to $r=1$. Therefore, $\sum_n
\hat\mathcal{R}_{-n}(x)$ always diverges. However, the Seidel--Stern Theorem is not
applicable to determine the convergence of \eqref{tradition}, since the terms
$\hat \tau_k$ have no definite sign. This is the reason why we need to
introduce a new return map $(M,F)$ instead of using $(\hat \mathcal{M}_1,\hat F)$ to
investigate the hyperbolicity.
\end{remark}
\subsection{New induced map and its analysis}\label{induced}
Denote by $\lceil t\rceil$ the smallest integer larger than or equal to the
real number $t$. We consider a new subset, which consists of ``middle''
sliding reflections on $\Gamma_1$. More precisely, let
\begin{equation}
M:=\bigcup_{n\ge0}\mathcal{F}^{\lceil n/2\rceil}M_n= M_0\cup \mathcal{F} M_1\cup \mathcal{F}
M_2\cup\cdots\cup \mathcal{F}^{\lceil n/2\rceil}M_n\cup\cdots.
\end{equation}
Let $F$ be the first return map of $\mathcal{F}$ with respect to $M$. Clearly the
induced map $F:M\to M$ is measurable and preserves the conditional measure
$\mu_M$ of $\mu$ on $M$, which is given by $\displaystyle \mu_M(A)=\mu(A)/\mu(M)$, for
any Borel measurable set $A\subset M$.
For each $x\in M$, we introduce the following notations:
\begin{enumerate}
\setlength{\itemsep}{4pt}
\item let $i_0=\eta(x)\ge0$ be the number of forward reflections of $x$
on $\Gamma_1$, $\tau_0:=\tau(\mathcal{F}^{i_0}x)$
be the distance between the last reflection on $\Gamma_1$ and the first
reflection on $\Gamma_R$. Let $d_0:=d(x)$ and $\mathcal{R}_0:=\mathcal{R}(x)$, which
stay the same along this series of reflections on $\Gamma_1$;
\item let $i_1=\eta(x_1)\ge0$ be the number of reflections of
$x_1=\mathcal{F}^{i_0+1}x$ on $\Gamma_R$, $\tau_1:=\tau(\mathcal{F}^{i_1}x_1)$ be the
distance between the last reflection on $\Gamma_R$ and the next
reflection on $\Gamma_1$. Let $d_1=d(x_1)$, and $\mathcal{R}_1=\mathcal{R}(x_1)$,
which stay the same along this series of reflections on $\Gamma_R$;
\item let $i_2=\lceil\eta(x_2)/2\rceil\ge0$ be the number\footnote{ Our
choice of $i_2=\lceil\eta(x_2)/2\rceil$ in Item (3), instead of using
$\eta(x_2)$, is due to the fact that $M$ is the union of the sets
$\mathcal{F}^{\lceil n/2\rceil}M_n$, $n\ge 0$.} of reflections of
$x_2=\mathcal{F}^{i_1+1}x_1$ on $\Gamma_1$ till the return to $M$. Let
$d_2=d(x_2)$, $\mathcal{R}_2=\mathcal{R}(x_2)$, which stay the same along this series
of reflections on $\Gamma_1$.
\end{enumerate}
Then the first return map $F$ on $M$ is given explicitly by
$Fx=\mathcal{F}^{i_0+i_1+i_2+2}x$. Note that $Fx=\mathcal{F}^{i_2}x_2$ lies on the same
reflection series on $\Gamma_1$ as $x_2$, so that $d_2=d(Fx)$ (in the above notations).
The following result is the analog of Lemma \ref{reduce1} on the reduction of
continued fractions for the new induced return map $F$:
\begin{lem}\label{reduce2}
Let $x\in M$, $V\in T_x\mathcal{M}$ and $V_1=DF(V)$. Then $\mathcal{B}(V_1)$ is given by the
continued fraction:
\begin{align}\label{fract2}
\mathcal{B}(V_1)&=[d_2,\frac{2i_2}{d_2},\bar\tau_1,\hat\mathcal{R}_1,
\hat\tau_0,\hat\mathcal{R}_0,-i_0\hat d_0,\mathcal{B}(V)],
\end{align}
where $\hat d_k=\frac{d_k}{i_k+1}$, $\hat\mathcal{R}_k=-2/\hat d_k$,
$\hat\tau_0=\tau_0-i_0\hat d_0-i_{1}\hat d_{1}$, and
$\bar\tau_1=\tau_1-i_1\hat d_1-d_{2}$.
\end{lem}
Note that \eqref{fract2} is not as compact as \eqref{fract}. It involves
three types of quantities: the original type ($i_0$, $i_2$ and $d_2$), the
first variation ($\hat\mathcal{R}_k$, $\hat \tau_0$ and $\hat d_0$), and the second
variation $\bar \tau_1$. No first-variation quantity $\hat d_2$ is needed for
the last reflection series, which enters the final formula \eqref{fract2}
only through $d_2$ and $i_2$.
\begin{remark}\label{remark-i2}
Note that in the case $i_2=0$, \eqref{fract2} reduces to \eqref{fract}:
\begin{align*}
\mathcal{B}(V_1)&=[d_2,0,\bar\tau_1,\hat\mathcal{R}_1, \hat\tau_0,\hat\mathcal{R}_0,-i_0\hat d_0,\mathcal{B}(V)]
=[d_2+\bar\tau_1,\hat\mathcal{R}_1, \hat\tau_0,\hat\mathcal{R}_0,-i_0\hat d_0,\mathcal{B}(V)]\\
&=[\tau_1-i_1\hat d_1,\hat\mathcal{R}_1, \hat\tau_0,\hat\mathcal{R}_0,-i_0\hat d_0,\mathcal{B}(V)].
\end{align*}
\end{remark}
\begin{proof}
We first consider an intermediate step. That is, let $\hat
V_1=D\mathcal{F}^{i_0+i_1+2}(V)$ and $V_1=D\mathcal{F}^{i_2}(\hat V_1)$. Applying the
reduction process \eqref{redu0} for each of the series of reflection of
lengths $(i_0,i_1)$ (as in the proof of Lemma \ref{reduce1}), we get that
\begin{align*}
\mathcal{B}(\hat V_1)=[\tau_1-i_1\hat d_1,\hat\mathcal{R}_1, \hat\tau_0,\hat\mathcal{R}_0,-i_0\hat d_0,\mathcal{B}(V)].
\end{align*}
This completes the proof of \eqref{fract2} when $i_2=0$ (see Remark
\ref{remark-i2}). In the following we assume $i_2\ge 1$. In this case we have
$\mathcal{B}(V_1)=[\underbrace{2 d_2, \mathcal{R}_2,\cdots,2d_2,\mathcal{R}_2}_\text{$i_2$
times}+\mathcal{B}(\hat V_1)]$. To apply the reduction \eqref{redu0}, we need to
consider
\begin{equation}\label{supply}
\frac{1}{\mathcal{B}(V_1)+\mathcal{R}_2}=[\underbrace{\mathcal{R}_2,2 d_2,
\cdots,\mathcal{R}_2,2d_2}_\text{$i_2$ times},\mathcal{R}_2+\mathcal{B}(\hat V_1)].
\end{equation}
Then we apply the reduction \eqref{redu0} to Eq. \eqref{supply} and get $\displaystyle
\frac{1}{\mathcal{B}(V_1)+\mathcal{R}_2}=[\mathcal{R}_2/2,-2i_2 d_2,\mathcal{R}_2/2+\mathcal{B}(\hat V_1)]$, which
is equivalent to
\begin{equation}\label{inverse}
\mathcal{B}(V_1)=-\mathcal{R}_2/2+[-2i_2 d_2,\mathcal{R}_2/2+\mathcal{B}(\hat V_1)]
=[0,-\mathcal{R}_2/2,-2i_2 d_2,\mathcal{R}_2/2,0,\mathcal{B}(\hat V_1)].
\end{equation}
Applying the relation
\eqref{ABC} to the segment $(a,b,c)=(-\mathcal{R}_2/2,-2i_2d_2,\mathcal{R}_2/2)$, we get
\begin{align*}
B&:=a+c+abc=0+(\mathcal{R}_2/2)^2\cdot 2i_2d_2=\frac{2i_2}{d_2},\\
A&:=\frac{bc}{B}=-2i_2d_2\cdot \mathcal{R}_2/2\cdot\frac{d_2}{2i_2}=d_2,\\
C&:=\frac{ab}{B}=2i_2d_2\cdot \mathcal{R}_2/2\cdot\frac{d_2}{2i_2}=-d_2.
\end{align*}
Putting them together with Eq. \eqref{inverse}, we have
\begin{align*}
\mathcal{B}(V_1)=[d_2,\frac{2i_2}{d_2},-d_2,\mathcal{B}(\hat V_1)]
&=[d_2,\frac{2i_2}{d_2},\bar\tau_1,\hat\mathcal{R}_1,
\hat\tau_0,\hat\mathcal{R}_0,-i_0\hat d_0,\mathcal{B}(V)],
\end{align*}
where $\bar\tau_1=\tau_1-i_1\hat d_1-d_{2}$ follows from \eqref{combine2}.
This completes the proof of \eqref{fract2}.
\end{proof}
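The same kind of numerical check applies to Lemma \ref{reduce2}: prepend the $i_2$ pairs $(2d_2,\mathcal{R}_2)$ to the unreduced fraction of $\mathcal{B}(\hat V_1)$ and compare with \eqref{fract2}. The Python sketch below is again only an illustration, with arbitrary positive parameters and $\mathcal{R}_\sigma=-2/d_\sigma$.
\begin{verbatim}
import random

def cf(entries):
    # [a_1, ..., a_n] = 1/(a_1 + 1/(a_2 + ... + 1/a_n))
    val = entries[-1]
    for a in reversed(entries[:-1]):
        val = a + 1.0 / val
    return 1.0 / val

random.seed(3)
worst = 0.0
for _ in range(10000):
    i0, i1, i2 = (random.randint(0, 3) for _ in range(3))
    d0, d2 = random.uniform(0.3, 1.0), random.uniform(0.3, 1.0)
    d1 = random.uniform(1.0, 3.0)
    t0, t1, BV = (random.uniform(0.1, 2.0) for _ in range(3))
    R0, R1, R2 = -2.0 / d0, -2.0 / d1, -2.0 / d2
    raw = ([2 * d2, R2] * i2 + [t1] + [R1, 2 * d1] * i1 + [R1, t0]
           + [R0, 2 * d0] * i0 + [R0 + BV])
    hd0, hd1 = d0 / (i0 + 1), d1 / (i1 + 1)
    red = [d2, 2 * i2 / d2, t1 - i1 * hd1 - d2, -2.0 / hd1,
           t0 - i0 * hd0 - i1 * hd1, -2.0 / hd0, -i0 * hd0, BV]
    lhs, rhs = cf(raw), cf(red)
    worst = max(worst, abs(lhs - rhs) / max(1.0, abs(lhs)))
print("largest relative discrepancy:", worst)   # zero up to rounding errors
\end{verbatim}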
\section{Hyperbolicity of asymmetric lemon billiards}\label{Sec:4}
In this section we first list several general sufficient conditions that
ensure the hyperbolicity of the asymmetric lemon-type billiards (see the
statement below and the proof of Theorem \ref{hyper}), then we verify these
conditions for a set of asymmetric lemon tables. Let $Q(b,R)$ be an
asymmetric lemon table satisfying (A0), that is, $\max\{R-1,1\}<b<R$. Let $M$
be the subset introduced in \S\ref{induced}. We divide the set $M$ into three
disjoint regions $X_k$, $k=0,1,2$, which are given by
\begin{enumerate}
\item[(a)] $X_0=\{x\in M\,:\, i_1(x)\ge 1\}$;
\item[(b)] $X_1=\{x\in M\,:\, i_1(x)=0, \text{ and } i_2(x)\ge1\}$;
\item[(c)] $X_2=\{x\in M\,:\, i_1(x)=i_2(x)=0\}$.
\end{enumerate}
We first make a simple observation:
\begin{lem}
For each $x$ with $i_1(x)=0$, one has $\tau_0+\tau_1>d_0+d_2$.
\end{lem}
\begin{proof}
Suppose $i_1(x)=0$, and let $p_1$ be the reflection point of $x_1$ on $\Gamma_R$.
Then we take the union of $Q(b,R)$ with its mirror, say $Q^\ast(b,R)$, along
the tangent line $L$ of $\Gamma_R$ at $p_1$, and extend the pre-collision
path of $x_1$ beyond the point $p_1$, which will intersect $\partial
Q^\ast(b,R)$ at the mirror point of the reflection point $p_2$ of $\mathcal{F} x_1$,
say $p^\ast_2$ (see Fig.~\ref{mirror}). Clearly the distance
$|p_2^\ast-p_1|=|p_2-p_1|=\tau_1$. By the assumption that $b>R-1$, one can
see that the tangent line $L$ cuts out a major arc on $\partial D_1$ (clearly
larger than $\Gamma_1$), and the point $p_2^\ast$ lies outside of the unit
disk $D_1$. Therefore, $\tau_0+\tau_1>2d_0$. Similarly, we have
$\tau_0+\tau_1>2d_2$. Putting them together, we get $\tau_0+\tau_1>d_0+d_2$.
\end{proof}
\begin{figure}[htb]
\centering
\begin{overpic}[width=60mm]{mirror.eps}
\put(51,32){$p_1$}
\put(32,63){$p_2$}
\put(64,65){$p_2^\ast$}
\put(51,5){$L$}
\put(39,46){$\tau_1$}
\put(38,23){$\tau_0$}
\end{overpic}
\caption{The case with $i_1(x)=0$: there is only one reflection on $\Gamma_R$.}
\label{mirror}
\end{figure}
\subsection{Main Assumptions and their analysis}\label{assumption}
In the following we list the assumptions on $X_i$'s.
\noindent(\textbf{A1}) For $x\in X_0$: $i_0\ge1$, $i_2\ge1$ and
\begin{equation}\label{assume1}
\tau_0<(1-\frac{1}{2(1+i_0)}) d_0+\frac{i_1}{1+i_1}d_1,\quad
\tau_1<(1-\frac{1}{2i_2})d_2+\frac{i_1}{1+i_1}d_1;
\end{equation}
\noindent(\textbf{A2}) For $x\in X_1$: $\frac{d_0}{1+i_0}< d_1$ and
$\tau_0+\tau_1<(1-\frac{1}{2(1+i_0)})d_0+ d_1+(1-\frac{1}{2i_2})d_2$;
\noindent(\textbf{A3}) For $x\in X_2$: $\tau_0\le d_1/2$.
\vskip.3cm
To prove Theorem \ref{hyper}, it suffices to verify hyperbolicity of the
first return map $F:M\to M$, obtained by restricting $\mathcal{F}$ on $M$. For each
$x\in M$, $V\in T_x\mathcal{M}$, we let $\mathcal{B}(V)=\mathcal{B}^-(V)$ be the pre-reflection
curvature of $V$. Note that $\mathcal{B}(V)$ determines $V$ uniquely up to a scalar.
Let $V^d_x\in T_x\mathcal{M}$ be the unit vector corresponding to the incoming beam
with curvature $\mathcal{B}(V^d_x)=1/d(x)$, and $V^{p}_x\in T_x\mathcal{M}$ be the unit
vector corresponding to the parallel incoming beam $\mathcal{B}(V^p_x)=0$,
respectively.
\begin{pro}\label{suffness}
Let $x\in M$, $i_k$, $d_k$ and $\hat d_k$, $k=0,1,2$ be the corresponding
quantities of $x$ given in \S \ref{induced}, and $Fx=\mathcal{F}^{i_0+i_1+i_2+2}x$ be
the first return map of $\mathcal{F}$ on $M$. Then
we have
\noindent $\mathrm{(I)}$. $0<\mathcal{B}(DF(V_x^d))<1/d_2$ if one of the
following conditions holds:
\begin{enumerate}
\item[(\textbf{D1})] $\tau_1-i_1\hat d_1-d_{2}+[-\frac{2}{\hat
d_1},\tau_0-d_0-i_{1}\hat d_{1}]>0$,
\item[(\textbf{F1})] $\tau_1-i_1\hat d_1-d_{2}+[-\frac{2}{\hat
d_1},\tau_0-d_0-i_{1}\hat d_{1}]<-\frac{d_2}{2i_2}$.
\end{enumerate}
\vskip.2cm
\noindent $\mathrm{(II)}$. $0<\mathcal{B}(DF(V_x^p))<1/d_2$ if one of the following
conditions holds:
\begin{enumerate}
\item[(\textbf{D2})] $\tau_1-i_1\hat d_1-d_{2}+[-\frac{2}{\hat d_1},
\tau_0-(1-\frac{1}{2(1+i_0)})d_0-i_{1}\hat d_{1}]>0$,
\item[(\textbf{F2})] $\tau_1-i_1\hat d_1-d_{2}+[-\frac{2}{\hat d_1},
\tau_0-(1-\frac{1}{2(1+i_0)})d_0-i_{1}\hat d_{1}]<-\frac{d_2}{2i_2}$.
\end{enumerate}
\vskip.2cm
\noindent $\mathrm{(III)}$. $0<\mathcal{B}(DF(V_x^p))<\mathcal{B}(DF(V_x^d))<1/d_2$ if one
of the following conditions holds:
\begin{enumerate}
\item[(\textbf{P1})] One of the paired conditions\footnote{In the
following we will use the term \textbf{(D1)-(D2)}, which is short for
``both conditions \textbf{(D1)} and \textbf{(D2)}''. Similarly, we
use the term \textbf{(D1)-(D2)-(P1)}, which is short for ``all three
conditions \textbf{(D1)}, \textbf{(D2)} and \textbf{(P1)}''} (that
is, \textbf{(D1)-(D2)} or \textbf{(F1)-(F2)}) holds and
\begin{equation}\label{check} [\tau_1-i_1\hat d_1-d_{2},-\frac{2}{\hat
d_1},\tau_0-(1-\frac{1}{2(1+i_0)})d_0-i_{1}\hat
d_{1}]>[\tau_1-i_1\hat d_1-d_{2},-\frac{2}{\hat
d_1},\tau_0-d_0-i_{1}\hat d_{1}].
\end{equation}
\item[\textbf{(P2)}] \textbf{(D1)-(F2)} hold.
\end{enumerate}
\end{pro}
The statement of Proposition \ref{suffness} is rather technical, but the
proof is quite straightforward. Geometrically, it gives different criteria for
when the cone bounded by $V_x^p$ and $V_x^d$ is mapped under $DF$ to the cone
bounded by $V_{Fx}^p$ and $V_{Fx}^d$, see Proposition \ref{returnf} for more
details.
\begin{proof}
Note that $\mathcal{B}(V^d_x)=1/d_0(x)$ and $\mathcal{B}(V^p_x)=0$. So the curvatures of
$DF(V_x^d)$ and $DF(V_x^p)$ are given by
\begin{align*}
\mathcal{B}(DF(V_x^d))&=[d_2,\frac{2i_2}{d_2},\bar\tau_1,-\frac{2}{\hat d_1},\hat\tau_0,
-\frac{2}{\hat d_0},-i_0\hat d_0+d_0]
=[d_2,\frac{2i_2}{d_2},\bar\tau_1,-\frac{2}{\hat d_1},\hat\tau_0-\hat d_0]\\
&=[d_2,\frac{2i_2}{d_2},\bar\tau_1,
-\frac{2}{\hat d_1},\tau_0-d_0-i_{1}\hat d_{1}];\\
\mathcal{B}(DF(V_x^p))&=[d_2,\frac{2i_2}{d_2},\bar\tau_1,-\frac{2}{\hat d_1},\hat\tau_0,
-\frac{2}{\hat d_0}]=[d_2,\frac{2i_2}{d_2},\bar\tau_1,
-\frac{2}{\hat d_1},\hat\tau_0-\frac{\hat d_0}{2}]\\
&=[d_2,\frac{2i_2}{d_2},\bar\tau_1,
-\frac{2}{\hat d_1},\tau_0-pd_0-i_{1}\hat d_{1}],
\text{ where }p=1-\frac{1}{2(1+i_0)}.
\end{align*}
Here we use the facts that $\hat\tau_0=\tau_0-i_0\hat d_0-i_{1}\hat d_{1}$
and $[\cdots,a,b,0]=[\cdots,a]$ whenever $b\neq 0$. Then it is easy to see
that (I) $0<\mathcal{B}(DF(V_x^d))<1/d_2$ is equivalent to
\begin{equation} \label{equi1}
[\frac{2i_2}{d_2},\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-d_0-i_{1}\hat
d_{1}]>0.
\end{equation}
Moreover, \eqref{equi1} holds if and only if one of the following conditions
holds:
\begin{itemize}
\item $[\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-d_0-i_{1}\hat d_{1}]>0$,
which corresponds to \textbf{(D1)};
\item $0>[\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-d_0-i_{1}\hat
d_{1}]>-\frac{2i_2}{d_2}$, which corresponds to \textbf{(F1)}.
\end{itemize}
We can derive \textbf{(D2)} and \textbf{(F2)} from (II) in the same way.
\vskip.3cm
Condition (III) $0<\mathcal{B}(DF(V_x^p))<\mathcal{B}(DF(V_x^d))<1/d_2$ is
equivalent to
\begin{align}
&[\frac{2i_2}{d_2},\bar\tau_1,
-\frac{2}{\hat d_1},\tau_0-pd_0-i_{1}\hat d_{1}]>[\frac{2i_2}{d_2},\bar\tau_1,
-\frac{2}{\hat d_1},\tau_0-d_0-i_{1}\hat d_{1}]>0.\label{equi2}
\end{align}
Then \eqref{equi2} holds if and only if one of the following conditions
holds:
\begin{itemize}
\item $0<[\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-pd_0-i_{1}\hat
d_{1}]<[\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-d_0-i_{1}\hat d_{1}]$,
which is \textbf{(D1)-(D2)-(P1)};
\item $[\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-pd_0-i_{1}\hat
d_{1}]<[\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-d_0-i_{1}\hat
d_{1}]<-\frac{d_2}{2i_2}$, which is \textbf{(F1)-(F2)-(P1)};
\item $[\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-d_0-i_{1}\hat d_{1}]>0$
and $[\bar\tau_1, -\frac{2}{\hat d_1},\tau_0-pd_0-i_{1}\hat
d_{1}]<-\frac{d_2}{2i_2}$, which is \textbf{(D1)-(F2)}, or equivalently, \textbf{(P2)}.
\end{itemize}
This completes the proof of the proposition.
\end{proof}
\begin{remark}
It is worth pointing out the following observations:
\begin{itemize}
\setlength{\itemsep}{3pt}
\item[(1).] The conditions \textbf{(F1)} and \textbf{(F2)} are empty when
$i_2=0$.
\item[(2).] The arguments in the proof of Proposition \ref{suffness}
apply to general billiards with several circular boundary components.
That is, for any consecutive circular components that a billiard
orbit passes, say $\Gamma_k$, $k=0,1,2$, let $i_k$ be the number of
reflections of the orbit on $\Gamma_k$. Then we have the same
characterizations for $Fx=\mathcal{F}^{i_0+i_1+i_2+2}x$.
\item[(3).] In the case when all arcs $\Gamma_k$ lie on the same circle,
the left-hand sides of \textbf{(D1)} and \textbf{(F1)} are zero, and (I) always fails.
This is quite natural, since the circular billiard is well-known to
be not hyperbolic, but parabolic.
\end{itemize}
\end{remark}
The following is an application of Proposition \ref{suffness} to asymmetric
lemon tables satisfying Assumptions \textbf{(A0)}--\textbf{(A3)}.
\begin{pro}\label{returnf}
Let $Q(b,R)$ be an asymmetric lemon billiard satisfying the assumptions
\textbf{(A0)}--\textbf{(A3)}. Then for a.e. $x\in M$,
\begin{equation}\label{cone}
0=\mathcal{B}(V_{Fx}^p)<\mathcal{B}(DF(V_x^p))<\mathcal{B}(DF(V_x^d))<\mathcal{B}(V_{Fx}^d)=1/d(Fx).
\end{equation}
\end{pro}
Compared to Proposition \ref{suffness}, the statement of Proposition
\ref{returnf} is very clear, but the proof is quite technical and a little
bit tedious. From the proof, one can see that $\frac{d_0}{1+i_0}<d_1$ is a
very subtle assumption to guarantee $d_0<pd_0+\frac{d_1}{2}$. One can check
that \eqref{cone} may fail if $d_0>pd_0+\frac{d_1}{2}$. This is why we need
$\frac{d_0}{1+i_0}<d_1$ in the assumption (\textbf{A2}).
\begin{proof}
It suffices to show that for a.e. point $x\in M$, one of the following
combinations holds: \textbf{(D1)-(D2)-(P1)}, or \textbf{(F1)-(F2)-(P1)}, or
\textbf{(D1)-(F2)}.
\vskip.3cm
\noindent{\bf Case 1.} Let $x\in X_0$. Then Eq. \eqref{assume1} implies that
\[\tau_0-d_0-i_{1}\hat d_{1}<\tau_0-(1-\frac{1}{2(1+i_0)})d_0-i_{1}\hat
d_{1}<0,\quad \tau_1-i_1\hat d_1-d_{2}<-\frac{d_2}{2i_2}.\]
Therefore, \textbf{(F1)-(F2)-(P1)} hold.
\vskip.3cm
\noindent{\bf Case 2.} Let $x\in X_1$. Note that $i_1=0$ and $\hat d_1=d_1$.
We denote $p=1-\frac{1}{2(1+i_0)}$ and $q=1-\frac{1}{2i_2}$, and rewrite the
corresponding conditions using $i_1=0$:
\begin{enumerate}
\item[(\textbf{D1}$'$).] $\tau_1-d_{2}+[-\frac{2}{d_1},\tau_0-d_0]>0$, or
equivalently,
$\tau_1-d_{2}+\cfrac{1}{-\frac{2}{d_1}+\frac{1}{\tau_0-d_0}}>0$;
\item[(\textbf{F1}$'$).] $\tau_1-d_{2}+[-\frac{2}{d_1},
\tau_0-d_0]<-\frac{d_2}{2i_2}$, or equivalently,
$\tau_1-qd_{2}+\cfrac{1}{-\frac{2}{d_1}+\frac{1}{\tau_0-d_0}}<0$;
\item[(\textbf{D2}$'$).] $\tau_1-d_{2}+[-\frac{2}{d_1}, \tau_0-pd_0]>0$, or
equivalently,
$\tau_1-d_{2}+\cfrac{1}{-\frac{2}{d_1}+\frac{1}{\tau_0-pd_0}}>0$;
\item[(\textbf{F2}$'$).] $\tau_1-d_{2}+[-\frac{2}{d_1},
\tau_0-pd_0]<-\frac{d_2}{2i_2}$, or equivalently,
$\tau_1-qd_{2}+\cfrac{1}{-\frac{2}{d_1}+\frac{1}{\tau_0-pd_0}}<0$;
\item[(\textbf{P1}$'$).] $[\tau_1-d_{2}, -\frac{2}{d_1},
\tau_0-pd_0]<[\tau_1-d_{2},-\frac{2}{d_1}, \tau_0-d_0]$.
\end{enumerate}
Note that $0<\tau_0<d_0+d_1$. There are several subcases when $\tau_0$ varies
in $(0,d_0+d_1)$, see Fig. \ref{subcases}.
\begin{figure}[htb]
\centering
\begin{overpic}[width=140mm]{subcases.eps}
\put(14,-1){$0$}
\put(24,-1){$pd_0$}
\put(38,-1){$d_0$}
\put(47,-1){$pd_0+\frac{d_1}{2}$}
\put(61,-1){$d_0+\frac{d_1}{2}$}
\put(74,-1){$d_0+d_1$}
\put(94,-1){$\tau_0$}
\put(19,5){(a)}
\put(33,5){(b)}
\put(46,5){(c)}
\put(57,5){(d)}
\put(72,5){(e)}
\end{overpic}
\caption{Subcases of Case 2 according to the values of $\tau_0$.
Note that $d_0<pd_0+\frac{d_1}{2}$,
which follows from the assumption that
$\frac{d_0}{1+i_0}<d_1$.}
\label{subcases}
\end{figure}
\noindent{\bf Subcase (a).} Let $\tau_0< pd_0$. Hence $\tau_1>d_2$ and
$0<\frac{1}{\tau_1-d_2}<\frac{1}{d_0-\tau_0}<\frac{1}{pd_0-\tau_0}$. Then we
claim that \textbf{(D1$'$)-(D2$'$)-(P1$'$)} hold.
\noindent{\it Proof of Claim.} Note that $\tau_0+\tau_1> d_0+d_2$, or equally
$\tau_1-d_2> d_0-\tau_0>0$. So
$\frac{1}{\tau_1-d_{2}}<\frac{1}{d_0-\tau_0}<\frac{2}{d_1}+\frac{1}{d_0-\tau_0}$
which implies (\textbf{D1}$'$). Similarly,
$\frac{1}{\tau_1-d_{2}}<\frac{1}{pd_0-\tau_0}<\frac{2}{d_1}+\frac{1}{pd_0-\tau_0}$
implies (\textbf{D2}$'$). If both terms in (\textbf{P1}$'$) are positive,
then (\textbf{P1$'$}) is equivalent to $[-\frac{2}{d_1},
\tau_0-pd_0]>[-\frac{2}{d_1}, \tau_0-d_0]$. Analogously, if both terms in
(\textbf{P1}$'$) are negative, then (\textbf{P1}$'$) is equivalent to
$[\tau_0-pd_0]<[\tau_0-d_0]$, or equivalently, $\tau_0-pd_0>\tau_0-d_0$,
which holds trivially. This completes the proof of the claim.
\vskip.2cm
\noindent{\bf Subcase (b).} Let $pd_0\le \tau_0< d_0$. The proof of
(\textbf{D1}$'$) in the previous case is still valid. For (\textbf{D2}$'$),
we note that $\tau_0<d_0<pd_0+\frac{d_1}{2}$ and hence
$-\frac{2}{d_1}+\frac{1}{\tau_0-pd_0}>0$. So (\textbf{D2}$'$) follows. Since
both terms are positive, (\textbf{P1}$'$) follows from $[-\frac{2}{d_1},
\tau_0-pd_0]>0>[-\frac{2}{d_1}, \tau_0-d_0]$.
\vskip.2cm
\noindent{\bf Subcase (c).} Let $d_0\le\tau_0<pd_0+d_1/2$. There are two
subcases:
$\bullet$ $\tau_1> d_2$. The proof is similar to Case (b), and
\textbf{(D1$'$)-(D2$'$)-(P1$'$)} follows.
$\bullet$ $\tau_1\le d_2$. Note that $0<\frac{1}{\tau_0-pd_0} -\frac{2}{d_1}
<\frac{1}{\tau_0-d_0} -\frac{2}{d_1}< \frac{1}{d_{2}-\tau_1}$, from which
\textbf{(D1$'$)-(D2$'$)-(P1$'$)} follows.
\vskip.2cm
\noindent{\bf Subcase (d).} Let $pd_0+d_1/2\le \tau_0<d_0+d_1/2$, which is
equivalent to $\frac{1}{\tau_0-pd_0}-\frac{2}{d_1}<0
<\frac{1}{\tau_0-d_0}-\frac{2}{d_1}$. Using the inequality
$\tau_0+\tau_1>d_0+d_2$ (which holds here since $i_1=0$), we see that (\textbf{D1}$'$) always holds (as in Subcase
2(c)). Using the condition in \textbf{(A2)} that $\tau_0+\tau_1<pd_0+d_1+qd_2$, we see that
$\tau_1-qd_2<\frac{d_1}{2}$. There are two subcases:
$\bullet$ $\tau_1-qd_2<0$. Then (\textbf{F2}$'$) holds trivially.
$\bullet$ $\tau_1-qd_2>0$. Then $\frac{1}{\tau_1-qd_2}>\frac{2}{d_1}$ implies
(\textbf{F2}$'$).
\vskip.2cm
\noindent{\bf Subcase (e).} Let $\tau_0\ge d_0+d_1/2$. There are also two
subcases:
$\bullet$ $\tau_1-qd_2<0$. Then $\frac{1}{\tau_1-qd_2}<0
<\frac{2}{d_1}-\frac{1}{\tau_0-d_0}< \frac{2}{d_1}-\frac{1}{\tau_0-pd_0}$,
and \textbf{(F1$'$)-(F2$'$)-(P1$'$)} hold.
$\bullet$ $\tau_1-qd_2>0$. Then $\frac{1}{\tau_1-qd_2}>\frac{2}{d_1}$ implies
\textbf{(F1$'$)-(F2$'$)-(P1$'$)}.
\vskip.3cm
\noindent{\bf Case 3.} Let $x\in X_2$. Then $\tau_0<d_1/2$ by assumption. The
proof of Case 2 (a)-(c) also works in this case, and
\textbf{(D1$'$)-(D2$'$)-(P1$'$)} hold.
\end{proof}
\subsection{Proof of Theorem \ref{hyper}}\label{provehyper}
Let $\mathcal{S}_\infty=\bigcup_{n\in\mathbb{Z}}\mathcal{F}^{-n}\mathcal{S}_1$. Note that
$\mu(\mathcal{S}_\infty)=0$. Let $x\in M\backslash \mathcal{S}_\infty$. The push-forward
$V_n^p(x)=DF^n(V^p_{F^{-n}x})$ and $V_n^d(x)=DF^n(V^d_{F^{-n}x})$ are well
defined for all $n\ge 1$, and together generate two sequences of tangent
vectors in $T_x\mathcal{M}$. By Propositions \ref{order} and \ref{returnf}, we
get the following relation inductively:
\begin{align*}
0=\mathcal{B}(V_x^p)<\cdots<
\mathcal{B}(V^p_n(x))&<\mathcal{B}(V^p_{n+1}(x))\\
&<\mathcal{B}(V_{n+1}^d(x))<\mathcal{B}(V^d_n(x))<\mathcal{B}(V^d_x)=1/d(x).
\end{align*}
Therefore $\displaystyle \mathcal{B}^u(x):=\lim_{n\to\infty} \mathcal{B}(V^d_n(x))$ exists, and
$0<\mathcal{B}^u(x)<1/d(x)$. Let $E^u_x=\langle V^u_x\rangle$ be the corresponding
subspace in $T_x\mathcal{M}$ for all $x\in M\backslash \mathcal{S}_\infty$.
Let $\Phi:\mathcal{M}\to\mathcal{M}, (s,\varphi)\mapsto (s,-\varphi)$ be the time reversal
map on $\mathcal{M}$. Then we have $\mathcal{F}\circ\Phi=\Phi\circ\mathcal{F}^{-1}$, and
$F\circ\Phi=\Phi\circ F^{-1}$. In particular, $V^s_x=D\Phi(V^u_{\Phi x})$
satisfies $-1/d(\Phi x)<\mathcal{B}(V^s_x)=-\mathcal{B}(V^u_{\Phi x})<0$, which corresponds
to a stable vector for each $x\in M\backslash \mathcal{S}_{\infty}$. Therefore,
$T_x\mathcal{M}=E^u_x\oplus E^s_x$ for every point $x\in M\backslash \mathcal{S}_{\infty}$,
and such a point $x$ is a hyperbolic point of the induced billiard map $F$.
Next we show that the original billiard map $\mathcal{F}$ is hyperbolic. It suffices
to show that the Lyapunov exponent $\chi^+(\mathcal{F},x)>0$ for a.e. $x\in M$, since
$(\mathcal{M},\mathcal{F},\mu)$ is a (discrete) suspension over $(M,F,\mu_M)$ with respect to
the first return time function, say $\xi_M$, which is given by
$\xi_M(x)=i_0+i_1+i_2+2$ (see \S~\ref{induced}). Note that $\int_M \xi_M
d\mu_M=\frac{1}{\mu(M)}$. So the average return time
$\overline{\xi}(x)=\lim_{k\to\infty}\frac{\xi_k(x)}{k}$ exists for a.e. $x\in
M$, where $\xi_k(x)=\xi_M(x)+\cdots+\xi_M(F^{k-1}x)$ is the $k$-th return
time of $x$ to $M$. Moreover, $1\le \overline{\xi}(x)<\infty$ for a.e. $x\in
M$. Then for a.e. $x\in M$, we have
\[\chi^+(\mathcal{F},x)=\lim_{n\to\infty}\frac{1}{n}\log\|D_x\mathcal{F}^n\|
=\lim_{k\to\infty}\frac{k}{\xi_k(x)}\cdot\frac{1}{k}\log\|D_xF^k\|
=\frac{1}{\overline{\xi}(x)}\cdot\chi^+(F,x)>0.\] This completes the proof.
\qed
\subsection{Proof of Theorem \ref{main}}\label{provemain}
Let $\Gamma_1$ be a major arc of the unit circle with endpoints $A,B$
satisfying $|AB|<1$. For $R>1$, it is easy to see that
$b=(R^2-|AB|^2/4)^{1/2}-(1-|AB|^2/4)^{1/2}>R-1$. Moreover, $b>1$ as long as
$R>2$. In the following, we will assume $R>2$. Therefore, (\textbf{A0})
holds. Then it suffices to show that the assumptions
(\textbf{A1})--(\textbf{A3}) hold for the tables $Q(R)=Q(b,R)$ considered in
Theorem \ref{main}.
\begin{figure}[htb]
\centering
\begin{overpic}[width=50mm]{assumption.eps}
\put(53,68){$y_1$}
\put(53,29){$y_2$}
\put(3,43){$O$}
\put(60,81){$A$}
\put(59,15){$B$}
\end{overpic}
\caption{First restriction on $R$.
The thickened pieces on $\Gamma_1$ are related to $U$.}
\label{assume-fig0}
\end{figure}
Let $y_1=\overrightarrow{AB}$ and $y_2=\overrightarrow{BA}$ be the two points
in the phase space $\mathcal{M}$ moving along the chord $AB$, see Fig.
\ref{assume-fig0}. Note that $y_i\in\mathcal{S}_0$. Since $|AB|<1$, we have $\angle
AOB < \frac{\pi}{3}$, or equally, $n^\ast:=\left\lfloor\frac{2\pi}{\angle
AOB}\right\rfloor\ge 6$, where $\lfloor t\rfloor$ is the largest integer
smaller than or equal to $t$. Then for each $i=1,2$, there is a small
neighborhood $U_i\subset \mathcal{M}$ of $y_i$, such that for any point $x\in
U_i\cap\mathcal{M}_1\backslash \mathcal{F}^{-1}\mathcal{M}_1$, one has
\begin{itemize}
\item $\mathcal{F} x$ enters $\mathcal{M}_R$ and stays on $\mathcal{M}_R$ for another $i_1$
iterates, where $i_1=\eta(\mathcal{F} x)$,
\item $\mathcal{F}^{i_1+2}x\in M_n$ for some $n\ge n^\ast-1=5$.
\end{itemize}
See Section \ref{cfrac} for the definitions of $\eta(\cdot)$ and $M_n$. Let
$U=(U_1\cup U_2)\cap\mathcal{M}_1\backslash \mathcal{F}^{-1}\mathcal{M}_1$. See Fig.
\ref{assume-fig0}, where the bold pieces on $\Gamma_1$ indicate the bases of
$U$. Note that the directions of vectors in $U$ are close to the vertical
direction and are pointing to some points on $\Gamma_R$.
Before moving on to the verification of the assumptions
(\textbf{A1})--(\textbf{A3}), we need the following lemma. Note that for each
$x\in M$, $\mathcal{F}^{i_0}x$ is the last reflection of $x\in M$ on $\Gamma_1$
before hitting $\Gamma_R$. That is, $\mathcal{F}^{i_0}x=\mathcal{F}^{-1}x_1$. See Fig.
\ref{assume-left} and \ref{assume-right} for two illustrations.
\begin{lem}\label{largeR} For
any $\epsilon>0$, there exists $R_0=R(\epsilon)>2$ such that for any $R\ge
R_0$, the following hold for the induced map $F:M\to M$ of the billiard table
$Q(R)$. That is, for any point $x\in M$ with $d_1\le 4$,
\begin{enumerate}
\item $\mathcal{F}^{i_0}x\in U$ and $\mathcal{F}^{i_0+i_1+2}x\in \bigcup_{n\ge 5}M_n$;
\item the reflection points of $\mathcal{F}^{i_0}x$ and of $\mathcal{F}^{i_0+i_1+2}x$,
both on $\Gamma_1$, are $\epsilon$-close to the corners $\{A,B\}$;
\item the total length of the trajectory from $\mathcal{F}^{i_0}x$ to
$\mathcal{F}^{i_0+i_1+2}x$ is bounded from above by $|AB|+\epsilon$.
\end{enumerate}
\end{lem}
Note that the number 4 in `$d_1\le 4$' is chosen to simplify the presentation
of the proof of Theorem \ref{main}. The above lemma holds for any number
larger than $2|AB|$. In the following we will use $d(x)=\rho\cdot\sin\theta$,
where $\theta=\frac{\pi}{2}-\varphi$ is the angle from the direction of $x$
to the tangent line of $\Gamma_\rho$ at $x$.
\begin{proof}[Proof of Lemma \ref{largeR}]
Let $p_1$ be the reflection point of $x_1$ on $\Gamma_R$, and let $\theta_1$
be the angle between the direction of $x_1$ with the tangent line $L$ of
$\Gamma_R$ at $p_1$. Then $\sin\theta_1\le \frac{4}{R}$, since $d(x_1)=d_1\le
4$. Consider the vertical line that passes through $p_1$. Clearly the length
of the chord on $\Gamma_R$ cut out by this vertical line is less than $|AB|$,
and hence less than 1. Therefore, the angle $\theta_2$ between the vertical
direction with the tangent direction of $\Gamma_R$ at $p_1$ satisfies
$\sin\theta_2< \frac{1}{2R}$.
(1). Note that the angle $\theta$ between $\mathcal{F}^{i_0}x$ and the vertical
direction is $\theta_1\pm\theta_2$, which satisfies $|\sin
\theta|\le\sin\theta_1 +\sin\theta_2< \frac{5}{R}$. Then $\mathcal{F}^{i_0}x\in U$ if
$R$ is large enough. Moreover, $\mathcal{F}^{i_0+i_1+2}x=\mathcal{F}^{i_1+1}x_1\in
\bigcup_{n\ge 5}M_n$ by our choice of $U$.
(2). Let $P$ be a point on $\Gamma_1$, and $\theta(A)$ be the angle between
the vertical direction with the line $PA$. Similarly we define $\theta(B)$.
Let $\theta_\ast$ be the minimal angle of $\theta(A)$ and $\theta(B)$ among
all possible choices of $P$ on $\Gamma_1$ that is not $\epsilon$-close to
either $A$ or $B$. Then the angle between the vertical direction with the
line connecting $P$ to any point on $\Gamma_R$ is bounded from below by
$\theta_\ast$. Let $x_1\in\mathcal{M}_R$. The angle $\theta(x_1)$ between the
direction of $x_1$ with the tangent direction of $\Gamma_R$ at the base point
of $x_1$ satisfies $\theta(x_1)\ge \theta_\ast-\arcsin \frac{1}{2R}$. Then
$d_1=R\cdot\sin\theta(x_1)\ge R\sin\theta_\ast-\frac{1}{2}$. So it suffices
to assume $R_0= \frac{5}{\sin\theta_\ast}$.
(3). By enlarging $R_0$ if necessary, the conclusion follows from (2), since
the two reflections are close to the corners and the arc $\Gamma_R$ is almost
flat.
\end{proof}
\begin{figure}[htb]
\centering
\begin{minipage}{0.45\linewidth}
\centering
\begin{overpic}[height=70mm]{assumption1.eps}
\put(10,72){$\tau_1$}
\put(10,21){$\tau_0$}
\put(24,56){$L$}
\put(22,33){$p_1$}
\put(18,83){$A$}
\put(18,14){$B$}
\end{overpic}
\caption{The case that there are multiple reflections on $\Gamma_R$.}
\label{assume-left}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\centering
\begin{overpic}[height=70mm]{assumption2.eps}
\put(11,68){$\tau_1$}
\put(11,27){$\tau_0$}
\put(21,79){$L$}
\put(24,54){$p_1$}
\put(18,83){$A$}
\put(18,14){$B$}
\end{overpic}
\caption{The case that there is a single reflection on $\Gamma_R$.}
\label{assume-right}
\end{minipage}
\end{figure}
Now we continue the proof of Theorem \ref{main}.
(1). To verify (\textbf{A1}), we assume $x\in X_0$, which means $i_1\ge1$
(see Fig. \ref{assume-left} for an illustration). Then we have $d_1<|AB|<1$,
which implies $i_0\ge 2$ and $i_2\ge 3$. Then we have
$1-\frac{1}{2(1+i_0)}\ge \frac{5}{6}$, and $1-\frac{1}{2i_2}\ge \frac{5}{6}$.
Therefore, a sufficient condition for (\textbf{A1}) is
\begin{equation}
\tau_0< \frac{5}{6}d_0+\frac{1}{2}d_1,
\quad
\tau_1< \frac{5}{6}d_2+\frac{1}{2}d_1.
\end{equation}
Since $\tau_0,\tau_1<2d_1$, we only need to show that $\frac{3}{4}\tau_0<
\frac{5}{6}d_0$ and $\frac{3}{4}\tau_1< \frac{5}{6}d_2$, or equivalently,
$\tau_0< \frac{10}{9}d_0$ and $\tau_1< \frac{10}{9}d_2$. To this end, we
first set $R_1= 100$: then for each $R\ge R_1$, the angle $\theta$ of the
vertical direction (assume that $AB$ is vertical) and the tangent line of any
point on $\Gamma_R$ satisfies $\sin\theta<\frac{1}{2R}\le \frac{1}{200}$.
Then the angle $\theta_0$ between the vertical direction and the free path
corresponding to $\tau_0$ satisfies $\sin\theta_0\le
\sin\theta+\frac{d_1}{R}\le \frac{1}{100}$ (since $2d_1<1$). This implies
$2d_0\ge (1-0.05)\cdot|AB|$. On the other hand, by making $R_1$ even larger
if necessary, we can assume $\tau_0+\tau_1+2i_1d_1\le (1+0.05)\cdot|AB|$ for
$x\in X_0$ (see Lemma \ref{largeR}). Then we have
\[ \tau_0< \frac{\tau_0+2d_1}{2}<\frac{1.05}{2}\cdot|AB|
\le \frac{1.05}{2}\cdot\frac{2d_0}{0.95}<\frac{10}{9}d_0.
\]
Similarly, we have $\tau_1< \frac{10}{9}d_2$.
(2). To verify (\textbf{A2}), we assume $x\in X_1$, which means $i_1=0$ and
$i_2\ge 1$ (see Fig. \ref{assume-right}). Both relations in (\textbf{A2}) are
trivial if $d_1\ge 4$. In the case $d_1<4$, one must have $i_0\ge 2$, $i_2\ge
3$ (by the assumption $R>R_0$). Note that $|AB|<\tau_0+\tau_1<4d_1$. Pick
$R_2$ large enough, such that for any $R\ge R_2$, for any point $x_1\in
\mathcal{M}_R$ with $d_1< 4$, the angle $\theta$ of the direction corresponding to
$x_1$ with the vertical direction is bounded by $0.05$ (recall that $AB$ is
vertical). Then we have\footnote{This is a very rough estimate, but is
sufficient for our need.} $2d_0\le \frac{4}{3}\cdot|AB|$. This implies
\[\frac{d_0}{1+i_0}\le \frac{1}{3}d_0\le \frac{2}{9}\cdot|AB|<\frac{8}{9}d_1<d_1.\]
For the second inequality in (\textbf{A2}), by making $R_2$ even larger if
necessary (see Lemma \ref{largeR}), we can assume that
$\tau_0+\tau_1\le1.04\cdot|AB|$, $2d_0,2d_2\ge 0.95\cdot|AB|$. Combining with
the fact that $d_1>|AB|/4$, we have
\[\frac{5}{6}d_0+d_1+\frac{5}{6}d_2>
\frac{5}{6}\cdot \frac{0.95}{2}\cdot|AB|+\frac{1}{4}\cdot|AB|
+\frac{5}{6}\cdot \frac{0.95}{2}\cdot|AB|
>1.04\cdot|AB|.\]
Therefore, $\tau_0+\tau_1<\frac{5}{6}d_0+d_1+\frac{5}{6}d_2$, and
(\textbf{A2}) follows.
(3). The condition (\textbf{A3}) on $X_2$ holds trivially, since $i_2=0$
implies that $d_1\ge 4>2\tau_0$.
Then we set $R_\ast=\max\{R_i:i=0,1,2\}$. This completes the proof of Theorem
\ref{main}. \qed
\section{Introduction}
Millions of mesmerized fans watched the outcome of Super Bowl 2017. The New England Patriots had a spectacular
comeback defeating the Atlanta Falcons in a historical game. When the Falcons opened up a 28-3 lead at
the start of the third quarter, no one would bet on the Patriots. And yet they won in an amazing way.
After the game, endless discussions pondered whether the Falcons blew it or the Patriots won it.
Was it luck or skill that played the main role that evening? Either luck or skill, watching a solid victory
melt and dissolve into thin air brings the feeling that the world is unpredictable and uncontrollable.
There have been empirical and theoretical studies showing that unpredictability cannot
be avoided~\cite{Watts2016}.
Indeed, Martin et al.~\cite{Watts2016} showed that realistic bounds on predicting outcomes in social systems
imposes drastic
limits on what the best performing models can deliver. And yet, accurate prediction is a holy grail
avidly sought
in financial markets~\cite{chen2009}, sports~\cite{chen2016predicting},
arts and entertainment award events~\cite{haughton2015oscar},
and politics~\cite{tumasjan2010}.
Among all these domains, sports competitions appears as a specially attractive field of study for three main reasons.
The first one is that a sports league forms a relatively isolated system, with few external and unrestrained influences.
Furthermore, it is replicated over time approximately in the same conditions and under the same rules.
The second reason is the large
amount of data available that makes it possible to learn their statistical patterns. The third reason is their popularity,
which fuels a multibillion business that includes television, advertisement, and a huge betting market. All these
business players would benefit from better understanding of predictability in sports.
Sports outcomes are a mix of skill and of good and bad luck~\cite{Owen2013, fortmaxcy2003, Zimbalist2002}. This mix is responsible for a large part of the appeal of sports~\cite{Chan2009}.
While an individual player may find fun equally with a pure skill game (such as chess)
or with a pure luck game (such as a lottery), for the typical large sports audience the mix aspect is the
great attraction. If a pure skill game, it would be completely predictable and potentially
boring~\cite{fort2011}.
There is also the suspicion among researchers that a competition based on pure luck will
lower audience interest and hence revenue~\cite{Owen2013}.
If it were a pure luck game, the final champions would not be deemed worthy of admiration, as their ranks would not result from
their merits, and not enough positive psychological emotions would be aroused~\cite{khanin2000emotions}.
Thus, the mixture makes prediction feasible, but very hard. According to Blastland and Dilnot~\cite{numbersgame},
the teams favored by bettors win half the time in soccer, 60\% of the time in baseball,
and 70\% of the time in football and in basketball. Despite the large amount of money involved,
there are no algorithms capable of producing accurate predictions and there is some evidence they
will never be found~\cite{Watts2016}.
Thus, the objective of this paper is to propose a general framework to measure the role of skill in sports leagues' outcomes. In order to do that, we propose a hypothesis test to verify if skill actually plays a significant role in the league. If it does, our framework identifies which teams are significantly more and less skilled than others. Finally, for leagues in which skill plays a significant role, we present a probabilistic graphical model to learn about the teams' skills.
As the first contribution of this paper,
we introduce the \textit{skill} coefficient, denoted by $\phi$, that measures the
departure between the observed final season score distribution and what one could expect from a competition
based on pure luck plus contextual circumstances. The larger
the value of $\phi$, the more distant the league is from a pure-luck competition. From $\phi$, we devise two techniques
to characterize the role of skill in sports leagues. First, we propose a significance test for $\phi$,
which shows, for instance, that most leagues' outcomes in
certain sports, such as handball, are indistinguishable from a random ordering of the teams.
Second, we propose a technique based on $\phi$ that identifies which teams are significantly more (and less) skilled than the others in the league.
By this technique, it is possible to show, for example, that in leagues where the competition is intense and
the financial stakes are very high, such as the \emph{Primera Divisi\'{o}n}
Spanish soccer league, the removal of only 3 teams out of 20 renders the competition completely random.
We also show through the $\phi$ coefficient the unequivocal presence of rare (or anomalous) phenomena in some leagues.
In these cases, the final scores end up very close to each other, with very little difference between the teams.
We show that this similarity is so extreme that there is almost no possibility that it could have been
generated by a completely random competition, let alone a competition where different skills are present.
The value of $\phi$ increases with the weight of the skill component but this relationship is unknown and nonlinear.
It does not answer a most relevant question: how much of the observed variation
in the final scores can be assigned to skill or luck? A large value of $\phi$
may be due to the presence of few outliers with extreme skills or it can be due to a distribution without
outliers but with excessive kurtosis. Hence, our second contribution is a probabilistic graphical model
that aims to disentangle and measure the relative weights of skills and randomness in the final scores.
We assume a random effects model with a generalized linear model allowing for features to explain
the skill differences between the teams. We model the large number of pairwise matchups in a sports competition
with a generalization of the Bradley-Terry paired comparison model that uses latent skill factors and
random effects due to the luck factor. Based on this graphical model, we are able to estimate the
luck component in the competition and so answer to the question: by how much skill determines the result?
An additional result is that the analysis of the coefficients associated with the features
suggest a number of management strategies to build a highly competitive team.
As our final contribution, we present a deep and comprehensive characterization of the presence of luck and skill in sports leagues using our proposed $\phi$ coefficient. We collected and analyzed all games from 198 sports leagues of 4 different sports: basketball, soccer, volleyball and handball. In total, 1503 seasons from 84 different countries were analyzed. From this analysis, we revealed which sport is more likely to have a league whose outcomes could be completely explained by chance. Also, we showed, for each sport, the expected percentage of teams that should be removed from a sports league to make it similar to a pure-luck competition. In this direction, by using $\phi$ over the historical outcomes of sports leagues, we characterize how stable (or dynamic) the skills of the teams in a league are.
\section{Related Work}
There is a large literature aiming at predicting the final outcome of individual sports games.
Vra\v{c}ar \emph{et al.} \cite{Vracar2016} proposed an ingenious model based on Markov process
coupled with a multinomial logistic regression approach to predict each consecutive point in a basketball match.
A large number of features included the current context in the progression of a game. Hence, they used the
game time, the points difference at that moment, as well as the opposing teams' characteristics.
Another model for the same situation of the sequence of successive points was proposed by \cite{peel2015}
where the focus was to verify the influence of the leading gap size (is there a restoration force)
and the identity of the last team to score (anti-persistence).
\cite{Gabel2012} and \cite{Merritt2014} found that extremely simple stochastic models
fitted the empirical data very well. The common pattern is that events occur randomly,
according to a homogeneous Poisson process
with a sport-specific rate. Given an event, its points are assigned to one of the teams
by flipping a game-specific biased coin (a Bernoulli process).
Vaz de melo \cite{vaz2012forecasting} proposed features to represent the network
effects between the players on the teams performance.
In \cite{ben2006parity}, the authors fit a idealized and simple theoretical model to empirical
data in order to estimate a so-called upset probability $q$, the chance that a worse team wins.
Although simple, the model fits quite well to the data and they conclude that $q \approx 0.45$
in the case of soccer and baseball, while $q \approx 0.35$ for basketball and football.
These numbers give a rough idea of the level of randomness present in the sports leagues.
The within game prediction of each successive point is a hard task as the most successful
models typically calculate the most extreme probability around 0.65
for a given team to score next, conditional on the model features \cite{peel2015}.
Probabilistic graphical models have been used also.
Chen and Joachims presented in \cite{chen2016predicting} a probabilistic framework for predicting
the outcome of pairwise matchups considering contextual characteristics of the game, such as the
weather or the type of tennis field.
Predicting the final ranking in a league is a different task but equally hard to perform well.
In this case, one is interested in estimating the chance that the strongest team is the final
winner or the probability that a weak team ended up being the champion.
Besides the within-game factors, there is also effects due to the competition
design~\cite{Chan2009, ben2013randomness, BenNaimKorea}.
Chetrite studied in \cite{chetrite2015number} the number of potential winners in
competitions considering how their abilities are distributed.
Spiegelhalter studied in \cite{futebol1} how to measure luck and ability in a championship by
comparing the sample variance with the theoretical variance expected when all teams have the same ability.
A ranking of NBA teams was produced by Pelechrinis~\cite{pelechrinis2016sportsnetrank}, who applied the PageRank algorithm to a network built among the teams; this approach reports an accuracy of about 67\%.
The keen interest from industry and academia in sports have spur many recent papers aiming at guiding
the tactical moves during the game. \cite{wang2015} discovered the best tactical patterns of soccer teams
by mining the historical match logs. A method which recommends the best serve to a tennis player
in a given context was developed by~\cite{wei2015}.
Data mining technique were employed by~\cite{van2016} to discover patterns accounting for spatial
and temporal aspects of volleyball matches to guide performance.
\cite{brooks2016} proposed a soccer player ranking system based on the value of passes completed.
Our paper, as well as all these other papers, has the objective of understanding the dynamics of competitive
sports in order to help team management and tactics development, and to provide more entertainment and profits.
\section{Disentangling luck and skill}
In this section, our aim is to define a metric by which performance can be assessed with respect to a baseline where the teams have the same ability or skill and the tournament is determined by pure chance. It does not mean that all teams have the same
probability of winning a given game as it depends on the context in which the match occurs. In soccer games, for example,
the home team usually has a higher chance of winning. This is due to the presence of a favorable audience, and not
because of any intrinsic skill the team may have.
An analysis of the empirical frequency of home wins, away wins, and draws in nine successive seasons of the Brazilian S\'{e}rie A soccer league shows that the probability of an away win is
about half the home win (0.25 versus 0.50, respectively), with little variation in
time. Other contextual variables could influence the outcome of a game.
For example, \cite{chen2016predicting} consider the
different winning probabilities the same male tennis player has depending on which type of field he plays, such as
indoor or outdoor court or the type of court surface (Grass, Clay, Hard, Carpet). In this paper, we consider only the home/away
context by lack of additional information, but our model is easily extended if there is more contextual information available.
\subsection{A coefficient to measure skill and luck}
Let $X_h$ be a random variable representing the points earned by a team when it plays in a match at home.
Similarly, let $X_a$ be associated with the points earned when the team plays away.
The probability distribution of $X_h$ and $X_a$ is sports-specific, depending on the scoring system
associated with each match. In basketball, for example, there are no ties and each game results in one
point for either team. Hence $X_h$ and $X_a$ are simply binary Bernoulli random variables.
In the case of soccer the situation is different:
a team earns either 3, 1, or 0 points depending on its number of goals
being larger, equal, or smaller than its competitor, respectively.
Let $P_h$, $P_t$ and $P_a$ be the probabilities that a home team wins, a tie, and a home team loses,
respectively, with the restriction that they are non-negative numbers and $P_h + P_t + P_a = 1$.
Hence, the random variable $X_h$ has a multinomial distribution with possible values 3, 1, or 0
with probabilities $P_h$, $P_t$, and $P_a$, respectively. As a consequence, its
expected value $\mathbb{E}(X_h)$ is equal to $\mu_{X_h} = 3 P_{h} + P_{t}$
and the variance is $\sigma^2_{X_h}= 9 P_{h} + P_{t} - \mu^2_{X_h}$.
Similarly, $X_a$ has mean and variance given by
$\mu_{X_a} = P_{t} + 3P_{a}$ and $\sigma^2_{X_a}= P_{t} + 9 P_{a} - \mu^2_{X_a}$.
In a round-robin tournament composed of $k+1$ teams, each one of them plays $2k$ times, once at home and once
at the opponent's field. Let $X_{hi}$ and $X_{ai}$ be the scores earned by a team when playing its $i$-th
game at home and away, respectively. Then $Y_{2k} = \sum_{i=1}^{k} (X_{hi} + X_{ai})$
is the total random score a given team obtains at the end of a tournament. Assuming that the probabilities
$P_h$, $P_t$, and $P_a$ are the same for all teams is equivalent to assume that skills play no part in the tournament.
Any final score is the result of pure luck, mediated only by the contextual variables. The round-robin
system, with one game at home and one away, and with the same number of games for all teams involved,
guarantees that the distribution of its final score $Y_{2k}$ is the same for all teams.
Considering the stochastic independence between the games and a sufficiently large number $k$ of games
in a tournament, we have approximately a Gaussian distribution:
$ Y_{2k} \sim N(\mu_{2k},\sigma^2_{2k}) $
where $\mu_{2k} = k(\mu_{X_h}+\mu_{X_a})$ and $\sigma^2_{2k}=k(\sigma^2_{X_h}+\sigma^2_{X_a})$.
In summary, if all teams had the equal skill, after $k$ matches at home and $k$ away matches, the distribution
of final scores $Y_{2k}$ follows approximately a normal distribution with parameters $\mu_{2k}$ and
$\sigma^2_{2k}$. This represents our baseline model, against which the observed empirical results are to be contrasted.
The construction of this distribution depends on the specific sport under analysis,
on the number of games in the season, and on the number of teams.
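To make the baseline concrete, the short Python sketch below (an illustration only) computes the per-match moments and the season-level parameters for a soccer-type 3/1/0 scoring system, using the rounded home and away win frequencies quoted above for the Brazilian S\'{e}rie A (hence $P_t=0.25$ by complement) and a hypothetical double round-robin league with $k+1=20$ teams.
\begin{verbatim}
# Baseline (equal-skills) model for a 3/1/0 scoring system.
P_h, P_t, P_a = 0.50, 0.25, 0.25    # home win, tie, away win
k = 19                              # k home games and k away games per team

mu_h = 3 * P_h + P_t                # E[X_h]
var_h = 9 * P_h + P_t - mu_h ** 2   # Var[X_h]
mu_a = 3 * P_a + P_t                # E[X_a]
var_a = 9 * P_a + P_t - mu_a ** 2   # Var[X_a]

mu_2k = k * (mu_h + mu_a)           # mean of Y_2k under pure luck
var_2k = k * (var_h + var_a)        # variance of Y_2k under pure luck
print(mu_2k, var_2k)                # 52.25 60.5625
\end{verbatim}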
In practice, the probabilities $P_h$, $P_t$, and $P_a$ are estimated from the data through the simple frequencies.
For example, $P_h = W_h/N$ where $N$ is the total number of matches in a season and $W_h$ is the number of times
there was a home win. Analogously, we estimate $P_t = W_t/N$ and $P_a = W_a/N$.
At the end of a season, we have the observed value of the final score $Y_{2k}$ for each team.
This allows us to obtain a non-parametric, model-free estimate of its variance by calculating the empirical
variance $s^2$. We compare the observed variability of $Y_{2k}$ relatively to its expected value under the equal skills
assumption:
\begin{equation}
\phi = \frac{s^2-\sigma^2_{2k}}{s^2}
\label{eq:phi}
\end{equation}
The value of $\phi$ lies in $(-\infty,1]$. Values around zero are compatible with seasons following the random model,
where the teams have equal skills and luck is the only driver of victory. Positive values of $\phi$ indicate an excess
variability in the final scores due to the additional variation induced by the different teams' skills.
The closer to 1, the more dominant the skill factor. What may come as a surprise is the possibility of negative
values for $\phi$. Values smaller than zero indicate that the teams' outcomes have a variability much smaller than
that expected from the luck factor alone. This could happen, for example, due to some kind of offsetting mechanism along the
season or to some colluding scheme among the teams. Although this is not a common situation, we find clear evidence of
negative $\phi$ values in our data analysis (see Section \ref{sec:dataanalysisphi}).
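The computation of $\phi$ for a single season can be summarized in a few lines. The sketch below is a minimal illustration (not the code used in the paper); it assumes the season results are given as a list of per-match outcomes and an array of per-team final scores under the 3/1/0 scoring.
\begin{lstlisting}[language=Python]
import numpy as np

def phi_coefficient(match_results, final_scores, k):
    """match_results: list of 'H' (home win), 'T' (tie), 'A' (away win).
    final_scores: array with each team's total points Y_2k.
    k: number of home (= away) games played by each team."""
    N = len(match_results)
    P_h = sum(r == 'H' for r in match_results) / N
    P_t = sum(r == 'T' for r in match_results) / N
    P_a = sum(r == 'A' for r in match_results) / N
    mu_h, mu_a = 3 * P_h + P_t, P_t + 3 * P_a
    var_h = 9 * P_h + P_t - mu_h ** 2
    var_a = P_t + 9 * P_a - mu_a ** 2
    sigma2_2k = k * (var_h + var_a)     # variance under the random model
    s2 = np.var(final_scores, ddof=1)   # empirical variance of Y_2k
    return (s2 - sigma2_2k) / s2
\end{lstlisting}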
\subsection{How much sampling variation is there in \texorpdfstring{$\phi$}{phi}?}
Since $\phi$ is an empirical measure based on the outcomes of a season,
even when the random model of no skill differences is true, it is virtually
impossible to observe $\phi$ exactly equal to zero, its expected value.
We therefore need a measure of uncertainty to build confidence intervals and to judge the distance between
the observed data and the random model.
We build a confidence interval for $\phi$ using Monte Carlo simulations under the random model.
With the probabilities $P_h$, $P_t$, $P_a$, we generate multinomial trials playing the teams against each other
according to the same game schedule followed during the season. We calculate the simulated $Y_{2k}$ outcomes
and their empirical variances. By repeating this independently, we obtain the Monte Carlo distribution of
$\phi$ when there are no skill differences between the teams and take the 95\% interval with endpoints given by
the 2.5\% and 97.5\% percentiles.
The confidence interval provides a yardstick to judge the value of $\phi$ calculated with the real data.
If the observed $\phi$ value is inside the interval, it is not significantly different from 0. This means that
there is no evidence that the league/season deviates from the random model. In contrast, a $\phi$ value above the
upper interval endpoint is a strong indicator that the teams' skills are different, while a $\phi$ value
below the lower endpoint leads to the conclusion that the season has significantly less variability
than the random model.
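A minimal sketch of this Monte Carlo interval is given below (again illustrative, not the paper's implementation). It replays the observed schedule, drawing each match outcome from the estimated $(P_h, P_t, P_a)$, and collects the simulated $\phi$ values.
\begin{lstlisting}[language=Python]
import numpy as np

def phi_null_interval(schedule, P_h, P_t, P_a, sigma2_2k, n_sims=1000):
    """schedule: list of (home_team, away_team) index pairs for the season."""
    rng = np.random.default_rng(0)
    n_teams = 1 + max(max(h, a) for h, a in schedule)
    phis = []
    for _ in range(n_sims):
        scores = np.zeros(n_teams)
        outcomes = rng.choice(['H', 'T', 'A'], size=len(schedule),
                              p=[P_h, P_t, P_a])
        for (home, away), res in zip(schedule, outcomes):
            if res == 'H':
                scores[home] += 3
            elif res == 'T':
                scores[home] += 1
                scores[away] += 1
            else:
                scores[away] += 3
        s2 = np.var(scores, ddof=1)
        phis.append((s2 - sigma2_2k) / s2)   # phi under the random model
    return np.percentile(phis, [2.5, 97.5])  # 95% Monte Carlo interval
\end{lstlisting}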
Based on those seasons with $\phi$ values significantly greater than 0, a question of interest is:
how many teams should be removed from the league/season in order to make it random?
To answer this question, we order the teams according to their points in the season and remove the team most distant from the
average team score. This team could be the best or the worst team in the season. In case of a tie between the best and the worst team, the best one is removed. If the tie is between two top teams or two bottom teams,
we use alphabetical order to decide which one to remove.
After selecting which team should be removed from the league/season, we recalculate the final scores of the league/season excluding all matches involving this team. The $\phi$ value and the confidence interval are both recalculated. If the new $\phi$ value is inside the confidence interval, the algorithm stops. Otherwise, it keeps removing teams until the confidence interval contains the new $\phi$.
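The removal procedure can be sketched as the loop below, where \texttt{compute\_phi\_and\_interval}, \texttt{final\_scores} and \texttt{drop\_team} are hypothetical helpers standing for the recomputation of $\phi$ and its Monte Carlo interval, the points per team, and the reduced schedule; the sketch only illustrates the stopping rule described above.
\begin{lstlisting}[language=Python]
def make_league_random(matches, compute_phi_and_interval, final_scores,
                       drop_team):
    """Returns the list of teams removed until phi falls inside the interval."""
    removed = []
    phi, (lo, hi) = compute_phi_and_interval(matches)
    while phi > hi:                      # phi still significantly above 0
        scores = final_scores(matches)   # dict: team name -> total points
        avg = sum(scores.values()) / len(scores)
        # remove the team farthest from the average score; ties favour the
        # best team, then alphabetical order, as described in the text
        team = max(sorted(scores), key=lambda t: (abs(scores[t] - avg),
                                                  scores[t]))
        removed.append(team)
        matches = drop_team(matches, team)   # drop all its matches
        phi, (lo, hi) = compute_phi_and_interval(matches)
    return removed
\end{lstlisting}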
\subsection{Where do these skills come from?}
As the final part of this section, we present a probabilistic graphical model to learn about the teams' skills when
their $\phi$ value is positive and outside the confidence interval. More important than simply learning the
different skill levels of each team is being able to factor them in terms of explanatory features.
What are the most important factors explaining the different skills the teams show in a tournament?
We propose a probabilistic graphical model that takes into account the features that lead to the
variable skill while also allowing for the presence of luck.
A famous model for paired comparisons is the Bradley-Terry model~\cite{chen2009}.
This model assumes that there are positive
quantities $\alpha_1, \alpha_2, \ldots, \alpha_n$ representing the skills of $n$ teams. The probability
that team $i$ beats team $j$ in a game is given by the relative size of $\alpha_i$ with respect to $\alpha_j$:
\[ \pi_{ij} = \mathbb{P}(i \mbox{ beats } j) = \frac{\alpha_i}{\alpha_i + \alpha_j} \: .
\]
Taking the positive parameters $\alpha_i$ to an unrestricted scale through the transformation $\alpha_i = \exp(\beta_i)$,
the Bradley-Terry formulation implies a convenient linear logistic model:
\[ \log\left( \frac{\pi_{ij}}{1-\pi_{ij}} \right) = \beta_i - \beta_j \: .
\]
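As a small numerical illustration of this equivalence (with made-up skill values), the snippet below computes $\pi_{ij}$ both from the $\alpha$'s and from the logistic form in the $\beta$'s.
\begin{lstlisting}[language=Python]
import math

beta_i, beta_j = 0.8, 0.2             # illustrative skills on the log scale
alpha_i, alpha_j = math.exp(beta_i), math.exp(beta_j)

pi_ratio = alpha_i / (alpha_i + alpha_j)               # Bradley-Terry form
pi_logit = 1.0 / (1.0 + math.exp(-(beta_i - beta_j)))  # logistic form
print(pi_ratio, pi_logit)             # both approx. 0.6457
\end{lstlisting}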
The convenience of this linear logistic formulation is the possibility of extending the Bradley-Terry model to
incorporate features that may help to estimate and to explain the differences between the skills $\alpha_i$.
Let $\mathbf{x}_i$ be a $d$-dimensional vector with a set of features for team $i$ that may correlate with its
skill $\alpha_i$. We want to learn the relevance of each feature present in $\mathbf{x}_i$ on the specification
of the $\alpha_i$ skill. To do this, we need to specify a likelihood function for the random data holding the
season outcomes.
Most works using the Bradley-Terry approach adopt a binomial trial likelihood, considering only the final result,
win or loss, of a given game~\cite{chen2009}.
We decided to also take into account the final score of the game, as it should be
correlated with the relative skills of the teams involved.
Indeed, a team with a large skill $\alpha_i$ playing against a team with a small skill $\alpha_j$
should not only have a large probability of winning but should also tend to win by a larger score difference.
We therefore adapt the Bradley-Terry model to a Poisson likelihood that accounts for the number of points or goals,
rather than only the binary winning indicator.
\subsubsection{The likelihood}
In a tournament with $K$ games and $n$ teams having skills $\alpha_1, \alpha_2, \ldots, \alpha_n$, let $N_k$ be the
combined score of the home and away teams in the $k$-th match of the season and $Y_k$ be the home team score in the same match.
If we imagine each of the $N_k$ total points resulting in a success or a failure for the home team, the
random variable $Y_k$ will have a binomial distribution conditioned on $N_k$ and on the teams' skills.
As the number of points can be very large (as in basketball games), making the likelihood
computationally costly, the binomial distribution is approximated by a Poisson distribution with $N_k$
acting as an offset for the expected mean.
We additionally allow unmeasured features and contextual variables to affect the distribution of $Y_k$ by
introducing a random effect factor $\varepsilon_k$ for each match.
In summary, conditioned on the parameters and random effects, each $k$-th game has
\begin{equation}
Y_k \sim \mbox{Poisson}\left( N_k ~ \frac{\alpha_{h(k)}}{\alpha_{h(k)} + \alpha_{a(k)}} + \varepsilon_k \right)
\label{eq:distribY01}
\end{equation}
where $h(k)$ and $a(k)$ are the indices of the home and away teams participating in the $k$-th season game.
The presence of $\varepsilon_k$ is essential. Without it, the outcome variability induced by the Poisson distribution
would be completely determined by the skills themselves through the $\alpha$ terms. We need to add an overdispersion
component, the $\varepsilon_k$ random effect, to account for the large variation in the game outcomes.
The skill coefficients $\alpha_1, \alpha_2, \ldots, \alpha_n$ are fully determined by intrinsic characteristics
of each team. We use the canonical link function associated with the Poisson distribution to obtain a log-linear model
\[ \log(\alpha_i) = \mathbf{w}^T \mathbf{x}_i
\]
Let $\mathcal{D} = \{ \mathbf{x}_i, Y_k, N_k, \mbox{ for } k=1,\ldots,K, \mbox{ and } i=1, \ldots, n \}$
be the observed dataset. The likelihood of the parameters $\mathbf{w} \in \mathbb{R}^d$ and $\varepsilon_1, \ldots, \varepsilon_K$
for all $k=1, \ldots, K$ games in the league based on $\mathcal{D}$ is given by
\begin{align*}
& L(\mathbf{w}, \{ \varepsilon_k \} | \mathcal{D}) \propto \prod_{k=1}^K \left(N_k \frac{\alpha_{h(k)}}{\alpha_{h(k)} +
\alpha_{a(k)}} + \varepsilon_k \right)^{y_k} \\
& ~~~ \times \exp\left( - N_k \frac{\alpha_{h(k)}}{\alpha_{h(k)} + \alpha_{a(k)}} - \varepsilon_k \right)
\end{align*}
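For concreteness, the (unnormalized) log-likelihood implied by this expression can be coded as below. This is an illustrative sketch, not the paper's implementation; \texttt{x} holds the $n \times d$ feature matrix, and \texttt{home}, \texttt{away}, \texttt{N}, \texttt{y} hold $h(k)$, $a(k)$, $N_k$ and $y_k$ for the $K$ games.
\begin{lstlisting}[language=Python]
import numpy as np

def log_likelihood(w, eps, x, home, away, N, y):
    """Log of the Poisson likelihood with rate N_k*alpha_h/(alpha_h+alpha_a)+eps_k,
    dropping the y_k! terms, which do not involve the parameters."""
    alpha = np.exp(x @ w)                     # alpha_i = exp(w^T x_i)
    rate = N * alpha[home] / (alpha[home] + alpha[away]) + eps
    if np.any(rate <= 0):                     # Gaussian eps_k can push the
        return -np.inf                        # rate below zero; reject
    return np.sum(y * np.log(rate) - rate)
\end{lstlisting}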
\subsubsection{Hierarchical Priors}
The prior distribution over the $\mathbf{w}$ weight components is a multivariate Gaussian distribution:
$p(\mathbf{w}|\tau_w) \sim \mathcal{N}(\mathbf{w}; \mathbf{0}, \tau^{-1}_w \mathbf{I})$, where
$\tau^{-1}_w$ is a precision parameter. We also assume independent prior
zero-mean Gaussian distributions for the random effects $\varepsilon_1, \ldots, \varepsilon_K$.
Hence, the $K$-dimensional vector $\bg{\varepsilon} = (\varepsilon_1, \ldots, \varepsilon_K)$
also follows a multivariate Gaussian conditioned on a precision parameter:
$p(\bg{\varepsilon} | \tau_{\varepsilon} ) \sim \mathcal{N}(\bg{\varepsilon}; \mathbf{0}, \tau^{-1}_{\varepsilon} \mathbf{I})$.
These prior distributions are the aspect of the Bayesian model that acts as a probabilistic regularization method.
We complete the specification by assigning Gamma hyper-prior distributions to the precision parameters:
\[ p(\tau_{w} | a, b) \sim \mathcal{G}(\tau_w ; a, b) \quad \textrm{and} \quad p(\tau_{\varepsilon} | c, d) \sim \mathcal{G}(\tau_{\varepsilon} ; c, d) \]
where $\mathcal{G}(\tau; a, b)$ denotes a Gamma distribution with shape $a$ and rate $b$ (and analogously for $c$ and $d$). Figure \ref{grafo} shows a representation of the Bayesian model developed in this paper.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\node[obs] (y) {$Y_{k}$};%
\node[latent,above=of y,xshift=1.5cm] (a) {$\bg{\alpha}$}; %
\node[obs,right =of a, xshift=1cm] (x) {$\mathbf{x}_i$}; %
\node[latent,above=of a] (w) {$\mathbf{w}$}; %
\node[latent,above =of y, xshift=-1.5cm] (sigma) {$\varepsilon_k$}; %
\plate [inner sep=.3cm,xshift=.02cm,yshift=.2cm] {plate1} {(y)(a)(sigma)} {for($k$ in $1:K$)}; %
\plate [inner sep=.3cm,xshift=.02cm,yshift=.2cm] {plate2} {(x)(a)} {for($i$ in $1:n$)};
\edge {w,x} {a};
\edge{sigma}{y};
\edge[double]{a}{y}
\end{tikzpicture}
\caption{Probabilistic Graphical Model for the Bayesian skills model.
The number of teams is $n$ and $K$ is the total number of matches in the season.}
\label{grafo}
\end{figure}
\subsubsection{Bayesian inference via MCMC algorithms}
The Bayesian inference is based on the posterior distribution of the $\mathbf{w}$ weight coefficients, the random effects $\bg{\varepsilon}$, and the precision hyper-parameters $\tau_w$ and $\tau_{\varepsilon}$. It
is proportional to the product of the likelihood and the prior distributions:
\begin{align*}
& p(\mathbf{w}, \bg{\varepsilon}, \tau_w, \tau_{\varepsilon} | \mathcal{D}) \propto \prod_{k=1}^K \left( N_k \frac{\alpha_{h(k)}}{\alpha_{h(k)} +
\alpha_{a(k)}} + \varepsilon_k \right)^{y_k} \\
& ~~~ \times \exp\left( - N_k \frac{\alpha_{h(k)}}{\alpha_{h(k)} + \alpha_{a(k)}} - \varepsilon_k \right)
\tau_w^{d/2} \exp\left( -\frac{\tau_w}{2} \mathbf{w}^T \mathbf{w} \right) \\
& ~~~ \times \tau_{\varepsilon}^{K/2} \exp\left( -\frac{\tau_{\varepsilon}}{2} \bg{\varepsilon}^T \bg{\varepsilon} \right)
p(\tau_{w} | a, b) \, p(\tau_{\varepsilon} | c, d)
\end{align*}
where we eliminate all multiplicative factors not involving the unknown parameters.
The Metropolis-Hastings algorithm, a member of the Markov chain Monte Carlo class,
is used to make inference about the coefficients in $\mathbf{w}$
through a sequential sampling approach:
\lstinputlisting{Codigo1.R}
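For readers without access to the R listing above, the following Python sketch illustrates the flavour of a random-walk Metropolis-Hastings update for $\mathbf{w}$ (with $\bg{\varepsilon}$ and the precisions held fixed for simplicity); it is not the implementation used in the paper, and \texttt{log\_posterior} is a hypothetical function returning the log of the unnormalized posterior.
\begin{lstlisting}[language=Python]
import numpy as np

def metropolis_w(log_posterior, w0, n_iter=10000, step=0.02, seed=0):
    """Random-walk Metropolis-Hastings over the weight vector w."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    samples, accepted = [], 0
    for _ in range(n_iter):
        proposal = w + step * rng.standard_normal(w.shape)
        log_ratio = log_posterior(proposal) - log_posterior(w)
        if np.log(rng.uniform()) < log_ratio:  # accept with prob min(1, ratio)
            w, accepted = proposal, accepted + 1
        samples.append(w.copy())
    return np.array(samples), accepted / n_iter  # chain and acceptance rate
\end{lstlisting}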
\section{Datasets}
The 1503 seasons and their games studied in the first part of this paper were collected from the site \url{www.betexplorer.com}. The database has 270713 matches from 198 leagues, played between January 2007 and July 2016. The games took place in 84 different countries across the Americas, Europe, Asia, Africa and Oceania.
We restrict the data collection to sports leagues adopting a double round-robin tournament system. This is a scheduling scheme where each team plays against every other team at least twice. A round-robin system provides a certain statistical stability for the results, preventing one single bad game from eliminating a team from the competition. It is also considered a fair system, as it gives the same opportunities to all competitors. For our data analysis, it has the advantage of equalizing the number of matches in a tournament for all teams. The final ranking is determined by the accumulation of scores obtained along the competition. In our data collection, we also require
a tournament to have more than 7 teams and more than 5 seasons, for statistical stability purposes. We collected data from basketball (42 leagues, 310 seasons), volleyball (51 leagues, 328 seasons), handball (25 leagues, 234 seasons)
and soccer (80 leagues, 631 seasons). The information about the NBA teams used in the second part of the paper was collected from \url{www.basketball-reference.com}. From this site we collected the NBA players and their teams since 2004, the players' salaries and the Player Efficiency Rating (PER) in each season.
\section{Results}
\label{sec:results}
\subsection{The skill coefficient \texorpdfstring{$\phi$}{phi}} \label{sec:dataanalysisphi}
We calculated our skill coefficient $\phi$ for all seasons and all sports in our database.
We show the results aggregated by sports in Figure \ref{phi_value}.
The more compressed the values are towards the upper limit 1, the farther the league is from the random model and the more important the skill component is
in explaining the final scores. Basketball appears as the sport
where skill has the largest influence on the final results.
Volleyball comes second, followed by soccer and then handball.
This is in accordance with previous analyses using other methods~\cite{ben2006parity, numbersgame}.
\begin{figure}[!h]
\includegraphics[scale=0.40]{escala_phi_esportes.png}
\caption{Boxplot of the $\phi$ values of the 1503 seasons, separated by sport.}
\label{phi_value}
\end{figure}
Considering the structure of each sport, it is possible to explain these results. Players usually score many points in basketball and volleyball games. Due to the long sequence of relevant events, it is more difficult for a less skilled team to win a game out of pure luck. However, in soccer and handball, there are far fewer relevant events leading to points.
In soccer, for example, the average number of goals per match is 2.62. This makes it easier for a less skilled team to win a match due to a single lucky event. Even if a large skill difference is present in soccer,
it has less opportunity to be revealed as compared to basketball and volleyball.
We evaluated, for each sport, which factor, luck or skill, has more influence on its leagues' results. If the observed value of $\phi$ is not significantly different from zero (according to the confidence interval), we classify the corresponding season as random. Otherwise, it is classified as ``skill''. All basketball leagues and 99.39\% of volleyball seasons have a strong skill component in their final results.
In contrast, in handball and soccer, pure luck can be the single factor in as many as
17.95\% and 7.13\% of the seasons, respectively.
\begin{comment}
\begin{table}[!h]
\centering
\caption{Breaking down the calculated $\phi$ values by sports and luck or skill components.}\label{tab2}
\begin{tabular}{l|cc|cc|c}
\hline
\multirow{2}{*}{Sport}&\multicolumn{2}{c|}{Skill}
&\multicolumn{2}{c|}{Random}& \multirow{2}{*}{Total}\\ \cline{2-5}
& Freq. & \%& Freq. & \% & \\ \hline\hline
Basketball & 310 & 100\% & 0 & 0\% & 310 \\
Handball & 192 & 82.05\% & 42 & 17.95\% & 234 \\
Soccer & 586 & 92.87\% & 45 & 7.13\% & 631 \\
Volleyball & 326 & 99.39\% & 2 & 0.61\% & 328 \\ \hline
Total & 1414 & 94.08\% & 89 & 5.92\% & 1503 \\
\hline
\end{tabular}
\end{table}
\end{comment}
The values of $\phi$ for basketball and volleyball in Figure \ref{phi_value} are close to the maximum value of 1
and this could mislead one into thinking that luck plays no role in these sports. If this were so, these
sports would lose much of their appeal~\cite{Owen2013}. Skill has a strong influence on the results
but it does not completely determine the final ranking. In each season, some teams have more extreme
skills while the others have approximately the same skill level. It is that first group that drives
the $\phi$ coefficient towards, and close to, its maximum value. Were they absent, the season would resemble a lottery.
To demonstrate that, in Figure \ref{percent} we show the percentage of teams that should be removed from the league/season to make it random. Basketball (50\%) and volleyball (40\%) are the sports requiring relatively more teams to be removed. This reflects the results of Figure \ref{phi_value}. Handball (14\%) and soccer (19\%), having typically lower values of $\phi$, need relatively fewer teams removed
to end up with a random league. To exemplify how different these sports are, consider two important soccer leagues,
the Spanish \textit{Primera Divisi\'{o}n} and the English Premier League. In each season from 2007 to 2016,
the average number of teams that needed to be removed was 3.2 and 4.9, respectively. As these leagues have 20 teams in the
competition, these averages correspond to 16\% and 25\%, respectively. More importantly, the teams to be removed are almost the
same every season. For example, in the Spanish league, the 3.2 teams to be removed on average included Real Madrid 10 times
and Barcelona 9 times out of the 10 seasons considered. This means that, by removing 2 teams out of 20 in the \textit{Primera Divisi\'{o}n},
we end up with a competition whose ranking is essentially a random shuffle among the remaining teams.
In the English Premier League, Manchester United appeared 9 times out of 10 seasons, with other teams appearing as often as 7 times,
in the list of teams that, if removed, render the tournament basically random. Consider now the NBA basketball competition. Reflecting the large skill differences among the teams, we need to remove between 17 and 25 teams out of 30 to
make it look like a random competition. Also, the teams to be removed are highly variable from season to season, in contrast with soccer.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.35]{escala_phi_quant_retirados.png}
\caption{Percentage of teams that must be removed to make a league/season random.}
\label{percent}
\end{figure}
Panel (a) in Figure \ref{moreseason} shows the $\phi$ coefficient by season for some popular and financially important sport leagues. There is some slight trend but $\phi$ stays essentially stationary around a league-specific constant.
For the NBA, $\phi$ is always above 0.95. The value for the Premier League stays around 0.77,
while the Primera Divisi\'{o}n and the S\'{e}rie A end up around 0.80 and 0.63, respectively.
\begin{figure*}
\centering
\includegraphics[scale=0.5]{graficos7b.png}
\caption{(a): Value of $\phi$ by season; (b) Cumulative $\phi$; (c) \% to be removed to make the league random.}
\label{moreseason}
\end{figure*}
We consider these 4 leagues to study how the $\phi$ coefficient changes when the matches
are accumulated over the seasons as if they comprised a single tournament.
At a given year $t \leq 2016$, we consider all matches that occurred since 2007.
Hence, the $\phi$ coefficient for, say, the 2009 season now considers all matches between the teams that played in the 2007, 2008, and 2009 seasons
as if they were a single championship.
The 4 leagues selected were the \textit{NBA} basketball league (from the USA) and three soccer leagues: the \textit{Premier League} (from England),
the \textit{Primera Divisi\'{o}n} (from Spain) and the \textit{S\'{e}rie A} (from Brazil). The results are shown in panel (b)
of Figure \ref{moreseason}.
For the NBA, the Premier League and the Primera Divisi\'{o}n, there is an increasing trend in the $\phi$ coefficient as the matches are accumulated, which implies that the relative skills of the teams are stable over the years, i.e., the accumulated point difference between good and bad teams keeps increasing. For the Brazilian soccer league, on the other hand, $\phi$ is decreasing, meaning that, in the long term, the teams are more similar to each other, with no small group of teams dominating the entire league, as Barcelona and Real Madrid
do in the \textit{Primera Divisi\'{o}n} in almost all seasons.
The third panel (c) in Figure \ref{moreseason} shows the proportion of teams that need to be removed in each season
for the league's final ranking to turn into a completely random shuffle of the teams. The percentages show
little evidence of trending during this period. The NBA is a league with a large spread in skills,
requiring the removal of up to 62\% of the teams to make it random. The three soccer leagues require less,
with the English competition around 25\%, the Spanish around 12\% and the Brazilian S\'{e}rie A around 8\%.
The most extreme $\phi$ coefficient in Figure \ref{phi_value} was observed in the Algerian soccer league Division 1,
being equal to -1.93 in the 2014-2015 season. Negative values are not very common and this was an extreme value,
with $\phi \approx -0.6$ being the second smallest
value. The Algerian outlier is only figuratively represented in the plot, at a vertical height different from its true
coordinate, for better visualization of the results. In this league and season, the teams' scores were very similar
and all teams still had chances of winning the championship very close to the end of the season.
A newspaper headlined ``Algerian League is so tight all 16 teams can mathematically still win the title with
four rounds of matches to go''~\cite{argelia}.
During the third week, Albert Bodjongo,
an athlete playing for the JSK team, was tragically killed.
At the end of a home game between JSK and USM Alger, which
JSK lost by 2 to 1, angry JSK fans hit Bodjongo in the head with a projectile thrown onto the field. The athlete died
a few hours later in a hospital. As a consequence, the Algerian Football Federation
suspended all football indefinitely and ordered the closure of the stadium where the incident took place.
The season was eventually resumed later. It is not clear how much this traumatic event can explain the extremely negative value reached by $\phi$ in this case.
\subsection{Skills in the NBA}
We chose one league in particular, the National Basketball Association (NBA), to fit our Bayesian
model, due to its feature data availability, its prestige and the high value of the $\phi$ coefficient in most seasons.
The NBA season is divided into two parts: the regular season and the playoffs. We considered only the matches of the
regular season, because at this stage all teams play against each other at least twice (at home and away).
As basketball does not have ties, a team either earns one point if it wins or zero points if it loses.
The seasons have 30 teams divided into the Eastern and Western conferences.
We have two types of features in the vector $\mathbf{x}_i$ to model the differences in skills.
A subset of them are statistics associated with the teams, such as the average of the 5 highest player salaries.
Other features are built from a network that considers players and teams, as in~\cite{vaz2012forecasting}. This network is represented by a graph where players and teams are nodes. The graph varies by year. In a given year, two players are connected by an edge if they played together at some point during the previous 6 seasons and if they both still play in the NBA. A player and a team are connected by an edge if the player played for the team at some moment in the previous 6 seasons and if he still plays in the NBA, either for that team or for another one. The complete
list of features in $\mathbf{x}_i$ associated with team $i$ in a given season is the following:
\begin{adjustwidth*}{1em}{0em}
\noindent \textbf{CO}: A binary indicator, equal to $E$ if the team belongs to the Eastern conference and $W$ if it belongs to the Western conference; \newline
\noindent \textbf{A5}: The average of the top 5 player salaries; \newline
\noindent \textbf{A6-10}: The average of the 6th to 10th highest salaries; \newline
\noindent \textbf{SD}: The standard deviation of the player salaries; \newline
\noindent \textbf{AP}: The average Player Efficiency Rating (PER) of the team's players in the last year.
The PER was created by John Hollinger and is a system to rate players according to their performance. The league-average PER is set
equal to 15. \newline
\noindent \textbf{VL}: Team Volatility, measuring how much a team changes its players between seasons.
It is defined as $\Delta d_t^y=d_t^y-d_t^{y-1}$, where $d_t^y$ is the degree of team $t$ in year $y$. \newline
\noindent \textbf{RV}: Roster Aggregate Volatility, measuring how much the players have been transferred from other teams in past years.
It is defined as $\sum\Delta d_t^y = \sum_{\forall v\in R^y_t } d_v^t / (y-y0_v)$, where $R^y_t$ is the roster of team $t$ in year $y$, $d_v^t$ is the degree of player $v$, and $y0_v$ is the year of the first season of player $v$; that is, it aggregates the career-length-normalized degrees of all team members in the roster. \newline
\noindent \textbf{CC}: Team Inexperience, the graph clustering coefficient. \newline
\noindent \textbf{RC}: Roster Aggregate Coherence, measuring the strength of the relationship among the players. A high value of $\hat{cc}_t^y$ means that the team roster has played together for a substantial amount of time or that few changes have occurred. The metric is defined as $\hat{cc}_t^y = avg(cc_v^t\times(y-y0_v)),\forall v \in R_t^y$, where $cc_v^t$ is the clustering coefficient of player $v$ of team $t$. \newline
\noindent \textbf{SI}: Roster Size, the number of players.
\end{adjustwidth*}
All variables are standardized to have mean zero and standard deviation 1.
An extensive search over combinations of the features was carried out and the final model was selected
using the \textit{Deviance Information Criterion} (DIC) \cite{spiegelhalter2002bayesian}, which
balances goodness of fit and model complexity. When several models are compared,
the one with the lowest DIC value is preferred.
We ran 10000 iterations of the Metropolis-Hastings algorithm and used a burn-in of 2000 iterations.
The average acceptance rate of the proposed new parameter values was 0.395,
a reasonable value, with a standard deviation of 0.02.
\begin{table*}[!h]
\centering
\caption{DIC value for some of the best models by season.}
\begin{tabular}{cl|c|c|c|c|c}
\hline
\multicolumn{2}{c|}{\multirow{2}{*}{Model Features}}
& \multicolumn{5}{c}{DIC}\\ \cline{3-7}
& & 2012 & 2013 &2014 &2015 &2016\\ \hline
1 &CO+A5+AP+VL+RC+SI & \textbf{-683589.7} &\textbf{-871853.9} &\textbf{-901784.2} &\textbf{-890471.0} &\textbf{-922389.1}\\
2 &CO+A5+A6+AP+SD+VL+RC+SI & -678306.4& -865637.7& -895432.8& -882730.4& -915692.8\\
3 &CO+A5+AP+SD+VL+RC+SI & -681779.5& -869800.7& -899301.4& -888251.6& -920525.5\\
4 &CO+A5+A6+AP+VL+RV+CC+RC+SI & -675140.2 &-860787.3 &-890335.5 &-87362.2 &-908063.3\\ \hline
\end{tabular}
\label{dic}
\end{table*}
\subsection{Results of the Bayesian model}
Table \ref{dic} presents the DIC values for some of the best models.
We conclude that model 1, including the variables
$CO+A5+AP+VL+RC+SI$, is the best one in every single year we analysed. Variables $VL$ and $RC$ were also relevant
in~\cite{vaz2012forecasting}. The variables $CO$, $A5$, and $AP$ were not present in~\cite{vaz2012forecasting},
but our analysis shows that they influence the teams' skill in addition to the effect of $VL$ and $RC$.
Figure \ref{model} displays three plots with the results of model 1. The bar plot shows the correlation between the teams' skills estimated by the Bayesian model (their posterior means) and the number of games won during the regular season. The average over the seasons is 0.7399, a high value.
We can see from these plots that high skills lead to a larger number of wins, but this relationship is not deterministic. This high correlation can be
appreciated in the second plot, a scatter plot of the teams' skill estimates $\hat{\alpha}_i$ against the games won by the $i$-th team in the 2016 season. Each circle is a team. The symbol inside the circle represents its situation after the playoffs: numbers are the final ranking order, and the letters P and N represent teams that lost their first playoff game and teams that did not reach the playoffs, respectively. This scatter plot shows that our skill
estimates $\hat{\alpha}$ based on the regular season data correlate very well with the total number of
wins at the end of the playoffs.
However, some surprising results can occur due to luck and additional events during the playoff season.
Hence, the final ranking does not have a perfect linear relationship with the number of wins. For example, the team with the highest estimated skill ended in the 4th position of the final ranking; the team with the second highest skill value was second in number of victories but ended up only in the 8th position. The champion had the 4th highest skill estimate and the 4th highest number of games won. The third plot in Figure \ref{model} presents the coefficients estimated by model 1 in each season. The average of the 5 highest salaries (A5) and the average PER (AP) have a positive effect on skill, as one might expect. Except for the 2015 season, the Roster Aggregate Coherence (RC) also has a positive effect. On the other hand, except for the 2014 season, the roster size (SI) has a negative effect.
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.48]{graficos1.png}
\caption{Bar plot of the correlation between estimated skills and number of wins by season;
scatter plot of estimated skills versus number of wins for the NBA 2016 season;
$\mathbf{w}$ coefficients for different seasons.}
\label{model}
\end{figure*}
Table \ref{tab:underwins} shows an important result. Based on the estimated skills $\alpha_i$, for each match
we are able to identify which team is the more skilled one. Hence, looking at the frequencies classified according to
the relative skills $\alpha_{h(k)}$ and $\alpha_{a(k)}$, we can estimate the probability $\mathbb{P}(U)$
that the underdog wins. It is stable over the seasons and approximately equal to $0.36$. This is the luck component
in the NBA outcomes. Our graphical model allows us to be much more refined than this.
We also estimate the probability that the underdog wins given that it plays away from home ($\mathbb{P}(U|A)$)
and at home ($\mathbb{P}(U|H)$). Being an average, the unconditional
probability lies between these two conditional probabilities. Playing at home rather than
away increases the underdog's winning probability by approximately $0.45 - 0.27 = 0.18$, a substantial advantage.
\begin{table}[!h]
\centering
\caption{Unconditional and conditional probabilities that an underdog team wins.}
\label{tab:underwins}
\begin{tabular}{c|ccc|cc}
\hline
Season & $\mathbb{P}(U)$ & $\mathbb{P}(U|A)$ & $\mathbb{P}(U|H)$
& $\mathbb{P}(U|A, R+)$ & $\mathbb{P}(U|H, R+)$ \\
\hline
2012 & 0.35 & 0.27 & 0.44 & 0.25 & 0.24 \\
2013 & 0.36 & 0.25 & 0.47 & 0.16 & 0.13 \\
2014 & 0.37 & 0.29 & 0.45 & 0.15 & 0.14 \\
2015 & 0.37 & 0.30 & 0.44 & 0.22 & 0.20 \\
2016 & 0.34 & 0.26 & 0.43 & 0.19 & 0.13 \\ \hline
Mean & 0.36 & 0.27 & 0.45 & 0.19 & 0.17 \\ \hline
\end{tabular}
\end{table}
We also consider the underdog winning probability when the adversary is one of the teams
removed by our $\phi$-based procedure to make the league random. We consider only the removed teams that are among
the most skilled. We denote these probabilities by $\mathbb{P}(U|A, R+)$ and $\mathbb{P}(U|H, R+)$ in Table
\ref{tab:underwins}. The previous probabilities should obviously decrease and, indeed, this is what
the table shows. The home advantage almost disappears in this case.
These proportions have more variability because they are calculated from a smaller
number of games.
\section{Conclusions}
The purpose of this paper was to study the relative roles of skill and luck in some competitive sports.
Through the $\phi$ coefficient, it was possible to determine that basketball is the sport in which skill plays the largest role, in comparison with volleyball, soccer and handball. Although the final score of a volleyball match is given in sets, limited to 5 per match and at most 3 per team, the long sequence of points required to win those sets makes this sport more skill-driven than soccer and handball. It was also possible to find which teams should be removed from a league/season in order to make it random. In basketball and volleyball, one needs to remove on average 50\% and 40\% of the teams, respectively, while in soccer removing only 20\% turns the season into a random tournament.
For leagues where the coefficient $\phi$ is positive and outside the confidence interval, we proposed a probabilistic graphical model to learn the teams' skills. We tested this model on the NBA league from the USA, due to the large amount of data available and to its $\phi$ coefficient being close to 1. Using the features conference, average of the top 5 salaries, average PER, team volatility, roster aggregate coherence and roster size, we found an average correlation of 0.7399 between the estimated team skills and the number of games won during the regular season.
Our graphical model allows us to decompose the relative weights of luck and skill in a competition.
We found that, in the NBA seasons analysed, an underdog team wins a match about 35\% of the time, representing the luck component
in the NBA outcomes. We also estimated that playing at home adds 0.18 to the underdog's winning probability with respect to
the probability of 0.27 it has when playing away from home.
Looking at the most relevant features, our results suggest a management strategy to maximize the formation of teams with high skills, the controllable aspect of team performance. Building smaller rosters allows for higher salaries,
which also increases the odds of attracting high-PER players and increases rapport (roster aggregate coherence).
Smaller, more cohesive teams have fewer conflicts and so are easier to manage.
The final message of our paper is that sometimes the culprit is luck, about 35\% of the time in the NBA. It is hard to beat luck in sports. This is what makes the joy and cheer of the crowds. Or will anyone forget the evening of Super Bowl 2017?
\section{Acknowledgments}
The authors want to thank CNPq, CAPES, and FAPEMIG, for providing support for this
work.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Injuries in sports are a major cost. When a star player is hurt, not only does the team continue paying the player, but the injury also impacts the team's performance and fan interest. In the MLB, teams spend an average of \$450 million on players on the disabled list and an additional \$50 million on replacement players each year, an average of \$500 million per year \cite{conte2016injury}. In baseball, pitcher injuries are some of the most costly and common, estimated as high as \$250 million per year \cite{forbesinj}, about half the total cost of injuries in the MLB.
As a result, there are many studies on the causes, effects and recovery times of injuries caused by pitching. Mehdi et al. \cite{mehdi2016latissimus} studied the duration of lat injuries (back muscle and tendon area) in pitchers, finding an average recovery time of 100 days without surgery and 140 days for pitchers who needed surgery. Marshall et al. \cite{marshall2018implications} found pitchers with core injuries took an average of 47 days to recover and 37 days for hip/groin injuries. These injuries not only affect the pitcher, but can also result in the team losing games and revenue. Pitching is a repetitive action; starting pitchers throw roughly 2500 pitches per season in games alone, far more when including warm-ups, practice, and spring training. Due to such high use, injuries in pitchers are often caused by overuse \cite{lyman2002effect} and early detection of injuries could reduce severity and recovery time \cite{hreljac2005etiology,hergenroeder1998prevention}.
Modern computer vision models, such as convolutional neural networks (CNNs), allow machines to make intelligent decisions directly from visual data.
Training a CNN to accurately detect injuries in pitchers from video data alone would be extremely beneficial to teams and athletes, as it requires no sensors, tests, or monitoring equipment other than a camera. A CNN trained on videos of pitchers could detect slight changes in their form that could be an early sign of an injury or even cause an injury. The use of computer vision to monitor athletes can provide team physicians, trainers and coaches with additional data to monitor and protect athletes.
CNN models have already been successfully applied to many video recognition tasks, such as activity recognition \cite{carreira2017quo}, activity detection \cite{piergiovanni2018super}, and recognition of activities in baseball videos \cite{mlbyoutube2018}. In this paper, we introduce the problem of injury detection/prediction in MLB pitchers and experimentally evaluate the ability of CNN models to detect and predict injuries in pitchers from video data alone.
\section{Related Work}
\paragraph{Video/Activity Recognition}
Video activity recognition is a popular research topic in computer vision~\cite{aggarwal11,karpathy2014large,simonyan2014two,wang2011action,ryoo13}. Early works focused on hand-crafted features, such as dense trajectories~\cite{wang2011action}, and showed promising results. Recently, convolutional neural networks (CNNs) have out-performed the hand-crafted approaches \cite{carreira2017quo}. A standard multi-stream CNN approach takes RGB frames and optical flow \cite{simonyan2014two,repflow2019} or RGB frames at different frame-rates \cite{feichtenhofer2018slowfast} as input, capturing different features for classification. 3D (spatio-temporal) convolutional models have been trained for activity recognition tasks~\cite{tran2014c3d,carreira2017quo,piergiovanni2018evolving}. To train these CNN models, large-scale datasets such as Kinetics~\cite{kay2017kinetics} and Moments-in-Time \cite{monfort2019moments} have been created.
Other works have explored using CNNs for temporal activity detection \cite{piergiovanni2018evolving, shou2017cdc, dave2017predictive} and studied the use of temporal structure or pooling for recognition \cite{piergiovanni2017learning, ng2015beyond}. Current CNNs are able to perform quite well on a variety of video-based recognition tasks.
\paragraph{Injury detection and prediction}
Many works have studied prediction and prevention of injuries in athletes by developing models based on simple data (e.g., physical stats or social environment) \cite{andersen1988model,ivarsson2017psychosocial} or cognitive and psychological factors (stress, life support, identity, etc.) \cite{maddison2005psychological,falkstein1999prediction}. Others made predictions based on measured strength before a season \cite{pontillo2014prediction}. Placing sensors on players to monitor their movements has been used to detect pitching events, but not injury detection or prediction \cite{lapinski2009distributed,murray2017automatic}. Further, sonography (ultra-sound) of elbows has been used to detect injuries by human experts \cite{harada2006using}.
To the best of our knowledge, there is no work exploring real-time injury detection in game environments. Further, our approach requires no sensors other than a camera. Our model makes predictions from only the video data.
\section{Data Collection}
Modern CNN models require a sufficient amount of data (i.e., samples) for both training and evaluation.
As pitcher injuries are fairly rare, especially compared to the number of pitches thrown while not injured, the collection and preparation of data is extremely important.
We need to make the best use of the available example videos while removing non-pitch-related bias from the data.
In this work, we consider the task of injury prediction as a binary classification problem. That is, we label a video clip of a pitch either as `healthy' or `injured'. We assume the last $k$ pitches thrown before a pitcher was placed on the disabled list to be `injured' pitches.
If an injury occurred during practice or other non-game environment, we do not include that data in our dataset (as we do not have access to video data outside of games). We then collect videos of the TV broadcast of pitchers from several games not near the date of injury as well as the game they were injured in. This provides sufficient `healthy' as well as `injured' pitching video data.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/overfit-examples.pdf}
\caption{Examples of (top-left) the pitch count being visible, (bottom-left) unique colored shirts in the background and (right) the same pitcher in different uniforms and ballparks. These features allow the model to overfit to data that will not generalize or properly represent whether or not the pitcher is injured.}
\label{fig:overfit}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/pitch-flow-ex.pdf}
\caption{Example of cropped RGB frames and the computed optical flows.}
\label{fig:flow}
\end{figure*}
The challenge in our dataset construction is that simply taking the broadcast videos and (temporally) segmenting each pitch interval is not sufficient. We found that the model often overfits to the pitch count on the scoreboard, the teams playing, or the exact pitcher location in the video (as camera position can slightly vary between ballparks and games), rather than focusing on the actual pitching motion. Spatially cropping the pitching region is also insufficient, as there could be an abundant amount of irrelevant information in the spatial bounding box. The model then overfits to the jersey of the pitcher, time of day or weather (based on brightness, shadows, etc.) or even a fan in a colorful shirt in the background (see Fig. \ref{fig:overfit} for examples). While superstitious fans may find these factors meaningful, they do not have any real impact on the pitcher or his injuries.
To address all these challenges, we first crop the videos to a bounding box containing just the pitcher. We then convert the images to greyscale and compute optical flow, as it captures high-resolution motion information while being invariant to appearance (i.e., jersey, time of day, etc.). Optical flow has commonly been used for activity detection tasks \cite{simonyan2014two} and is beneficial as it captures appearance-invariant motion features \cite{sevilla2018integration}. We then use the optical flow frames as input to a CNN model trained for our binary injury classification task. This allows the model to predict a pitcher's injury based solely on motion information, ignoring the irrelevant features. Examples of the cropped frames and optical flows are shown in Fig. \ref{fig:flow}.
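As an illustration of this preprocessing step (a minimal sketch, not our exact pipeline), the function below crops each frame to a manually specified pitcher bounding box, converts it to greyscale and computes dense optical flow between consecutive frames using OpenCV's Farneback method.
\begin{lstlisting}[language=Python]
import cv2
import numpy as np

def cropped_flow(frames, box):
    """frames: list of BGR images; box: (x, y, w, h) around the pitcher.
    Returns a list of 2-channel optical flow fields for consecutive frames."""
    x, y, w, h = box
    grey = [cv2.cvtColor(f[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
            for f in frames]
    flows = []
    for prev, nxt in zip(grey[:-1], grey[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)      # (dx, dy) per pixel
    return flows
\end{lstlisting}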
\paragraph{Dataset} Our dataset consists of pitches from broadcast videos of 30 games from the 2017 MLB season. It contains injuries from 20 different pitchers, 4 of which had multiple injuries in the same season. Each pitcher has an average of 100 healthy pitches from games not near the date of injury as well as pitches from the game in which they were injured. The data contains 12 left-handed pitchers and 8 right-handed pitchers and 10 different injuries (back strain, arm strain, finger blister, shoulder strain, UCL tear, intercostal strain, sternoclavicular joint, rotator cuff, hamstring strain, and groin strain). There are 5479 pitches in the dataset, about 273 per pitcher, providing sufficient data to train a video CNN. Using $k=20$ results in 469 `injured' pitches and 5010 `healthy' pitches, as some pitchers threw fewer than 20 pitches before being injured.
\section{Approach}
We use a standard 3D spatio-temporal CNN trained on the optical flow frames. Specifically, we use I3D \cite{carreira2017quo} with 600 optical flow frames as input, at a resolution of $460\times 600$ (cropped to just the pitcher from the $1920\times 1080$ video), taken from a 10 second clip of a pitch at 60 fps. We use high frame-rate and high-resolution inputs to allow the model to learn the very small differences between `healthy' and `injured' pitches. We initialize I3D with the optical flow stream pre-trained on the Kinetics dataset \cite{kay2017kinetics} to obtain good initial weights.
We train the model to minimize the binary cross entropy:
\begin{equation}
\mathcal{L} = -\sum_i \left( y_i \log p_i + (1-y_i) \log (1-p_i) \right)
\end{equation}
where $y_i$ is the label (injured or not) and $p_i$ is the model's prediction for sample $i$. We train for 100 epochs with a learning rate of 0.1 that is decayed by a factor of 10 every 25 epochs. We use dropout with a rate of 0.5 during training.
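A simplified training loop matching this setup is sketched below, under stated assumptions: \texttt{model} is an I3D-style network with a single logit output, \texttt{loader} yields cropped-flow clips with binary labels, and the optimizer choice is ours for illustration, as it is not specified above.
\begin{lstlisting}[language=Python]
import torch

def train(model, loader, epochs=100, lr=0.1, device='cuda'):
    model.to(device).train()
    criterion = torch.nn.BCEWithLogitsLoss()          # binary cross entropy
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25,
                                                gamma=0.1)  # /10 every 25 epochs
    for _ in range(epochs):
        for clips, labels in loader:                  # flow clips, 0/1 labels
            clips, labels = clips.to(device), labels.float().to(device)
            logits = model(clips).squeeze(-1)         # assumes one logit/clip
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
\end{lstlisting}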
\section{Experiments}
We conduct extensive experiments to determine what the model (i.e., the I3D video CNN) is capable of learning and how well it generalizes. We compare (1) models learned per pitcher, testing how well they generalize to other pitchers, (2) models learned from a set of left-handed or right-handed pitchers, and (3) models trained on a set of several pitchers. We evaluate on both seen and unseen pitchers and seen and unseen injuries. We also compare models trained on specific injury types (e.g., back strain, UCL injury, finger blisters) and analyze how early we can detect an injury solely from video data.
Since this is a binary classification task, we report the following evaluation metrics:
\begin{itemize}
\item Accuracy (correct examples/total examples)
\item Precision (correct injured/predicted injured)
\item Recall (correct injured/total injured)
\item $F_1$ scores
\end{itemize}
where
\begin{equation}
F_1 = \frac{1}{0.5\left(\cfrac{1}{recall}+\cfrac{1}{precision}\right)}
\end{equation}
All values are measured between 0 and 1, where 1 is perfect for the given measure.
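These quantities follow directly from the confusion counts; a short helper (illustrative only) is given below.
\begin{lstlisting}[language=Python]
def binary_metrics(y_true, y_pred):
    """y_true, y_pred: lists of 0/1 labels (1 = injured)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
\end{lstlisting}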
\subsection{Per-player model}
We first train a model for each pitcher in the dataset. We consider the last 20 pitches thrown by a pitcher before being placed on the disabled list (DL) as `injured' and all other pitches thrown by the pitcher as `healthy'. We use half the `healthy' pitches and half the `injured' pitches as training data and the other half as test data. All the pitchers in the test set were seen during training. In Table \ref{tab:per-pitcher} we compare the results of our model for 10 different pitchers. For some pitchers, such as Clayton Kershaw or Boone Logan, the model was able to accurately detect their injury, while for other pitchers, such as Aaron Nola, it was unable to reliably detect the injury.
\begin{table}[]
\centering
\begin{tabular}{l|cccc}
\toprule
Pitcher & Acc & Prec & Rec & $F_1$ \\
\midrule
Aaron Nola & .92 & .50 & .34 & .42 \\
Clayton Kershaw & .96 & 1. & .75 & .86 \\
Corey Kluber & .95 & 1. & .33 & .50 \\
Aroldis Chapman & .83 & .50 & 1. & .67\\
Boone Logan & .98 & .96 & .97 & .97 \\
Brett Anderson & .96 & .75 & .97 & .85 \\
Brandon Finnegan & .95 & .75 & .99 & .87 \\
Austin Brice & .87 & .89 & .75 & .82 \\
AJ Griffin & .91 & .98 & .33 & .50\\
Adalberto Mejia & .96 & .83 & .78 & .81\\
\midrule
Average & .93 & .82 & .72 & .75 \\
\bottomrule
\end{tabular}
\caption{Results of predicting a pitcher's injury where the last 20 pitches thrown are injured. A model was trained for each pitcher. The model performs well for some pitchers and poorly for others.}
\label{tab:per-pitcher}
\end{table}
To determine how well the models generalize, we evaluate the trained models on different pitchers. Our results are shown in Table \ref{tab:pitcher-to-pitcher}. We find that for some pitcher pairs the transfer works reasonably well, such as Liberatore and Wainwright or Brice and Wood. However, for other pitchers, it does not generalize at all. This is not surprising, as pitchers have various throwing motions, use different arms, etc.; in fact, it is quite interesting that it generalizes at all.
\begin{table}[]
\centering
\begin{tabular}{ll|cccc}
\toprule
Train Pitcher & Test Pitcher & Acc & Prec & Rec & $F_1$ \\
\midrule
Liberatore & Wainwright & .33 & .17 & .87 & .27 \\
Wainwright & Liberatore & .29 & .23 & .85 & .24 \\
Wood & Liberatore & .63 & 0. & 0. & 0. \\
Chapman & Finnegan & .75 & .23 & .18 & .19 \\
Brice & Wood & .43 & .17 & .50 & .24 \\
Wood & Brice & .42 & .15 & .48 & .22 \\
Chapman & Bailey & .23 & 0. & 0. & 0.\\
Mejia & Kulber & .47 & 0. & 0. & 0.\\
\bottomrule
\end{tabular}
\caption{Comparing how well a model trained on one pitcher transfers to another pitcher. As throwing motions vary greatly between pitchers, the models do not generalize too well.}
\label{tab:pitcher-to-pitcher}
\end{table}
\subsection{By pitching arm}
To further examine how well the model generalizes, we train the model on the 12 left-handed (or 8 right-handed) pitchers; half the data is used for training and the other half of held-out pitches is used for evaluation. This allows us to determine if the model is able to learn injuries and throwing motions for multiple pitchers or if it must be specific to each pitcher. Here, all the test data is from pitchers seen during training. We also train a model on all 20 pitchers and test on held-out pitches. Our results are shown in Table \ref{tab:by-arm1}. We find that these models perform similarly to the per-pitcher models, suggesting that the model is able to learn multiple pitchers' motions. Training on all pitchers does not improve performance, likely because left-handed and right-handed pitchers have very different throwing motions.
\begin{table}[]
\centering
\begin{tabular}{l|cccc}
\toprule
Arm & Acc & Prec & Rec & $F_1$ \\
\midrule
Lefty & .95 & .85 & .77 & .79 \\
Righty & .94 & .81 & .74 & .74 \\
All pitchers & .91 & .75 & .73 & .74 \\
\bottomrule
\end{tabular}
\caption{Classification of pitcher injury trained on the 12 left handed or 8 right handed pitchers and evaluated on held-out data.}
\label{tab:by-arm1}
\end{table}
\paragraph{Unseen pitchers} In Table \ref{tab:by-arm2} we report results for a model trained on 6 left-handed (or 4 right-handed) pitchers and tested on the other 6 left-handed (or 4 right-handed) pitchers not seen during training. For these experiments, the last 20 pitches thrown were considered `injured.' We find that when training with more pitcher data, the model generalizes better than when transferring from a single pitcher, but it still performs quite poorly. Further, training on both left-handed and right-handed pitchers together reduces performance. This suggests that models will struggle to predict injuries for pitchers they have not seen before, and that left-handed and right-handed pitchers should be treated separately.
\begin{table}[]
\centering
\begin{tabular}{l|cccc}
\toprule
Arm & Acc & Prec & Rec & $F_1$ \\
\midrule
Lefty & .42 & .25 & .62 & .43 \\
Righty & .38 & .22 & .54 & .38 \\
All pitchers & .58 & .28 & .43 & .35 \\
\bottomrule
\end{tabular}
\caption{Classification of pitcher injury trained on the 6 left handed or 4 right handed pitchers and evaluated on the other 6 left handed or 4 right handed pitchers.}
\label{tab:by-arm2}
\end{table}
To determine if a model needs to see a specific pitcher injured before it can detect that pitcher's injury, we train a model with `healthy' and `injured' pitches from 6 left-handed pitchers (4 right-handed), and only `healthy' pitches from the other 6 left-handed (4 right-handed) pitchers. We use half of the unseen pitchers' `healthy' pitches as training data and all 20 unseen `injured' pitches plus the other half of the unseen `healthy' pitches as testing data. Our results are shown in Table \ref{tab:by-arm3}, confirming that training in this manner generalizes to the unseen pitchers' injuries, nearly matching the performance of the models trained on all the pitchers (Table \ref{tab:by-arm1}). This suggests that the models can predict pitcher injuries even without seeing a specific pitcher with an injury.
\begin{table}[]
\centering
\begin{tabular}{l|cccc}
\toprule
Arm & Acc & Prec & Rec & $F_1$ \\
\midrule
Lefty & .67 & .62 & .65 & .63 \\
Righty & .71 & .65 & .68 & .67 \\
All pitchers & .82 & .69 & .72 & .71 \\
\bottomrule
\end{tabular}
\caption{Classification performance of our model when trained using `healthy' + `injured' pitch data from half of the pitchers and some `healthy' pitches from the other half. The model was applied to unseen pitches from the other half of the pitchers to measure the performance. This confirms that the model can generalize to \textbf{unseen pitcher injuries} using only `healthy' examples.}
\label{tab:by-arm3}
\end{table}
\paragraph{Lefty vs Righty Models} To further test how well the model generalizes, we evaluate the model trained on left-handed pitchers on the right-handed pitchers, and similarly the right-handed model on left-handed pitchers. We also try horizontally flipping the input images, effectively making a left-handed pitcher appear as a right-handed pitcher (and vice versa). Our results, shown in Table \ref{tab:by-arm4}, show that the learned models do not generalize to pitchers throwing with the other arm, but by flipping the images, the models generalize significantly better, giving performance comparable to the unseen-pitcher setting (Table \ref{tab:by-arm2}). By additionally including flipped `healthy' pitches of the unseen pitchers, we can further improve performance. This suggests that flipping an image is sufficient to match the learned motion information of an injured pitcher throwing with the other arm.
\begin{table}
\small
\centering
\begin{tabular}{l|cccc}
\toprule
Arm & Acc & Prec & Rec & $F_1$ \\
\midrule
Left-to-Right & .22 & .05 & .02 & .03 \\
Right-to-Left & .14 & .07 & .05 & .05 \\
Left-to-Right + Flip & .27 & .35 & .42 & .38 \\
Right-to-Left + Flip & .34 & .38 & .48 & .44 \\
Left-to-Right + Flip + `Healthy' & .57 & .54 & .57 & .56 \\
Right-to-Left + Flip + `Healthy' & .62 & .56 & .55 & .56 \\
\bottomrule
\end{tabular}
\caption{Classification of the left-handed model tested on right handed pitchers and the right handed model tested on left handed pitchers. We also test horizontally flipping the images, which makes a left handed pitcher appear as a right handed pitcher (and vice versa). }
\label{tab:by-arm4}
\end{table}
\begin{table*}
\centering
\begin{tabular}{l|cccc|cccc}
\toprule
& \multicolumn{4}{|c|}{Lefty} & \multicolumn{4}{|c}{Righty}\\
Injury & Acc & Prec & Rec & $F_1$ & Acc & Prec & Rec & $F_1$ \\
\midrule
Back Strain & .95 & .67 & .74 & .71 & .95 & .71 & .76 & .73 \\
Finger Blister & .64 & .06 & .02 & .05 & .64 & .03 & .01 & .02 \\
Shoulder Strain & .94 & .82 & .89 & .85 & .96 & .95 & .92 & .94 \\
UCL Tear & .92 & .74 & .72 & .74 & .94 & .71 & .67 & .69 \\
Intercostal Strain & .94 & .84 & .87 & .86 & .92 & .82 & .84 & .83 \\
Sternoclavicular & .92 & .64 & .68 & .65 & .93 & .65 & .67 & .66 \\
Rotator Cuff & .86 & .58 & .61 & .59 & .86 & .57 & .60 & .59 \\
Hamstring Strain & .98 & .89 & .92 & .91 & .99 & .93 & .95 & .94 \\
Groin Strain & .93 & .85 & .83 & .84 & .92 & .85 & .84 & .86 \\
\bottomrule
\end{tabular}
\caption{Evaluation of the model for each injury. The model is able to detect some injuries well (e.g., hamstring strains) and others quite poorly (e.g., finger blisters).}
\label{tab:by-inj}
\end{table*}
\begin{table*}
\centering
\begin{tabular}{l|ccccc}
\toprule
& $k=10$ & $k=20$ & $k=30$ & $k=50$ & $k=75$ \\
\midrule
Lefty & .68 & .63 & .65 & .48 & .47 \\
Righty & .64 & .67 & .69 & .52 & .44 \\
\bottomrule
\end{tabular}
\caption{$F_1$ score for models trained to predict the last $k$ pitches as `injured.' The model produces its most accurate predictions when the last 10 to 30 pitches are labeled as `injured' but performs quite poorly for 50 or more pitches.}
\label{tab:how-early}
\end{table*}
\subsection{Analysis of Injury Type}
We can further analyze the model's performance on specific injuries. The 10 injuries in our dataset are: back strain, arm strain, finger blister, shoulder strain, UCL tear, intercostal strain, sternoclavicular joint, rotator cuff, hamstring strain, and groin strain. For this experiment, we train a separate model for left-handed and right-handed pitchers, then compare the evaluation metrics for each injury for each throwing arm. We use half the pitchers for training data plus half the `healthy' pitches from the other pitchers. We evaluate on the unseen `injured' pitches and the other half of the unseen `healthy' pitches.
In Table \ref{tab:by-inj}, we show our results. Our model performs quite well for most injuries, especially hamstring and back injuries. These likely lead to the most noticeable changes in a pitcher's motion, allowing the model to more easily determine if a pitcher is hurt. For some injuries, like finger blisters, our model performs quite poorly. Pitchers likely do not significantly change their motion due to a finger blister, as only the finger is affected.
\subsection{How early can an injury be detected?}
Following the best setting, we use half the pitchers plus half of the `healthy' pitches of the remaining pitchers as training data and evaluate on the remaining data (i.e., the setting used for Table \ref{tab:by-arm3}). We vary $k$, the number of pitches thrown before being placed on the disabled list, to determine how early the model can detect an injury. In Table \ref{tab:how-early}, we show our results. The model performs best when given 10-30 `injured' samples, and produces poor results when the last 50 or more pitches are labeled as `injured.' This suggests that 10-30 samples are enough to train the model while still containing sufficiently different motion patterns related to an injury. When using the last 50 or more pitches, the injury has not yet significantly impacted the pitcher's throwing motion.
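For clarity, a minimal sketch of this labelling scheme (ours; names and values are illustrative): given the chronologically ordered pitches of an injured pitcher, the last $k$ pitches before the disabled-list stint are marked `injured' and the rest `healthy'.
\begin{verbatim}
import numpy as np

def label_last_k(num_pitches, k):
    # 0/1 label per chronologically ordered pitch of an injured pitcher:
    # the final k pitches before the disabled-list stint are 'injured' (1)
    k = min(k, num_pitches)
    labels = np.zeros(num_pitches, dtype=int)
    labels[num_pitches - k:] = 1
    return labels

print(label_last_k(200, 30).sum())   # 30 pitches labelled 'injured'
\end{verbatim}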
\section{Evaluating the Bias in the Dataset}
To confirm that our model is not fitting to game-specific data and that such game-specific information is not present in our optical flow input, we train an independent CNN model to predict which game a given pitch is from. The results, shown in Table \ref{tab:game-prediction}, show that when given cropped optical flow as input, the model is unable to determine which game a pitch is from, but is able to do so when given RGB features. This confirms both that our cropped flow is a good input and that the model is not fitting to game-specific data.
\begin{table*}[]
\centering
\begin{tabular}{l|ccccc}
\toprule
Pitcher & Guess & RGB & Cropped RGB & Flow & Cropped Flow \\
\midrule
Boone Logan & 0.44 & 0.97 & 0.95 & 0.76 & 0.47 \\
Clayton Kershaw & 0.44 & 0.94 & 0.85 & 0.73 & 0.45 \\
Adalberto Mejia & 0.54 & 0.98 & 0.78 & 0.55 & 0.55 \\
Aroldis Chapman & 0.25 & 0.86 & 0.74 & 0.57 & 0.26 \\
\bottomrule
\end{tabular}
\caption{Accuracy of predicting which game a pitch is from using different input features. A lower value means that the input data carries less game-specific bias, which is better for injury detection. The model is able to accurately determine the game using RGB features. However, when using the cropped flow it achieves nearly random-guess performance, confirming that with cropped flow the model is not fitting to game-specific details. Note that the random-guess baseline varies depending on how many games (and pitches per game) are in the dataset for a given pitcher. }
\label{tab:game-prediction}
\end{table*}
We further analyze the model to confirm that our input does not suffer from temporal bias by trying to predict the temporal ordering of pitches. Here, we give the model two pitches as input, and it must predict whether the first pitch occurs before or after the second pitch. We only train this model on pitches from games where there was no injury. The results are shown in Table \ref{tab:order-prediction}: we find that the model is unable to predict the temporal ordering of pitches, which suggests that it is fitting to actual injury-related motion and not to some other temporal feature.
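A minimal sketch of how such ordering pairs can be built from an injury-free game (our own illustration, not the paper's code):
\begin{verbatim}
import random

def make_order_pairs(pitches, n_pairs, seed=0):
    # pitches: list of clips from one injury-free game, in chronological
    # order; returns (clip_a, clip_b, label) tuples with label 1 iff a
    # precedes b in the game
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(pitches)), 2)
        pairs.append((pitches[i], pitches[j], int(i < j)))
    return pairs
\end{verbatim}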
\begin{table}[]
\centering
\begin{tabular}{l|c}
\toprule
Pitcher & Accuracy \\
\midrule
Boone Logan & 0.49 \\
Clayton Kershaw & 0.53 \\
Adalberto Mejia & 0.54 \\
Aroldis Chapman & 0.51 \\
\midrule
All Average & 0.51\\
\bottomrule
\end{tabular}
\caption{Predicting whether a pitch occurs before or after another pitch; random guessing yields 50\%. We used only pitches from games where no injury occurred to determine whether the model was able to find any temporal relationship between the pitches. Ideally, this accuracy would be 0.5, meaning that the pitch ordering is indistinguishable from random. We find the model is not able to predict the ordering of pitches, suggesting that it is fitting to actual motion differences caused by injury and not to unrelated temporal data.}
\label{tab:order-prediction}
\end{table}
\section{Discussion and Conclusions}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/vis.png}
\caption{Visualization of an `injured' pitch using the method from \cite{feichtenhofer2018have}. This does not give an interpretable image of why the decision was made, but shows that the model is capturing spatio-temporal pitching motions.}
\label{fig:vis}
\end{figure}
We introduced the problem of detecting and predicting injuries in pitchers from only video data. However, there are many possible limitations and extensions to our work. While we showed that CNNs can reliably detect and predict injuries, due to the somewhat limited size of our dataset and the scarcity of injury data in general, it is not clear exactly how well this will generalize to all pitchers, or to pitchers at different levels of baseball (e.g., high school pitchers throw much more slowly than professionals). While optical flow provides a reasonable input feature, it does lose some detailed information which could be beneficial for injury detection. The use of higher-resolution and higher frame-rate data could further improve performance. Further, since our method is based on CNNs, it is extremely difficult to determine why or how a decision is made. We applied the visualization method from Feichtenhofer et al. \cite{feichtenhofer2018have} to our model and data to try to interpret why a certain pitch was classified as an injury. However, this just provided a rough visualization over the pitcher's throwing motion, providing no real insight into the decision. We show an example visualization in Fig. \ref{fig:vis}. It confirms the model is capturing spatio-temporal pitching motions, but does not explain why or how the model detects injuries. This is perhaps the largest limitation of our work (and of CNN-based methods in general), as a classification score alone is very limited information for athletes and trainers.
As many injuries in pitchers are due to overuse, representing an injury as a sequence of pitches, rather than treating each pitch as an individual event, could be beneficial. This would allow models to detect changes in motion or form over time, leading to better predictions and possibly more interpretable decisions. However, training such sequential models would require far more injury data to learn from, as 10-20 samples would not be enough. The use of additional data, both `healthy' and `injured', would further improve performance. Determining the optimal inputs and designing models specific to baseball/pitcher data could further help.
Finally, how early an injury would have to be detected or predicted to actually reduce recovery time remains unknown.
In conclusion, we proposed the new problem of detecting and predicting injuries in pitchers from only video data. We extensively evaluated our approach to determine how well it performs and generalizes across pitchers and injuries, and how early reliable detection can be done.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Predicting the outcome of tennis matches has attracted much attention within sport analytics over the years for a number of applications. For example, prediction models can provide coaches useful feedback about how players are improving over time and who they should be able to beat. Further, prediction models could help assess fan engagement and determine who is the favourite player, by how much, and who is currently the best player. See, for example, \cite{Glickman, Klassen, Barnett, Newton, Gilsdorf, Gomes, Smith, Irons, KOV2016} and references therein. \par
It is nowadays generally acknowledged that the service is one of the most important elements in tennis. Indeed, it has been observed that the serving player wins more points than the receiving player in elite tennis \citep{Lees}. With the advances in racquet technologies, most top male players can hit service speeds of over 200 km/h. \cite{KMR} point out that if the serving speed reaches the receiver's reaction threshold, it becomes virtually impossible for the receiving player to return the ball. In the extreme, a strong serve strategy that rarely gets broken reduces the competitiveness of the game, and this may result in a loss of spectator interest. For this reason, the International Tennis Federation (ITF) monitors the importance of the serve and can undertake measures, such as slowing surface speeds, to ensure the game's combativeness is not endangered. \par
While it is reasonable to assume that the serve advantage gets lost as the rally length increases, there are only a few contributions in the literature attempting to quantify the serve advantage and relate it to rally length via a statistical model. An early contribution is given by \cite{DB}, where the authors describe the advantage of serving in elite tennis by comparing points won by both the server and the receiver for a given rally length. They conclude that the serve advantage is lost after the 4th rally shot on men's first serve. Subsequently, \cite{KOV} proposes a Bayesian hierarchical model to estimate player-specific serve curves that also adjust for the opponent rally abilities. In particular, the author uses an exponential decay function to model the decline in serve advantage plus a random effect, representing the difference between the rally ability of the opponents.\par
In this paper, we focus on the role of the serve advantage in winning a point as a function of the rally length. Our approach falls into the Bradley-Terry class of models \citep{BradlyTerry} and builds upon \cite{KOV}. We propose a Bayesian isotonic logistic regression model by representing the logit of the probability of winning a point on serve, $f$, as a linear combination of B-spline basis functions, with athlete-specific basis function coefficients. We point out that while the term isotonic is used to denote regression models where monotonicity is imposed everywhere, in our application we may also want to accommodate monotonicity only in a subinterval of the function domain. The smoothness of $f$ is controlled by the order of the B-splines, while their shape is controlled by the associated control polygon $\mathcal{C}$ \citep{deBoor}. In particular, to ensure the serve advantage is non-increasing with rally length, we constrain the spline function $f$ to be non-increasing by controlling its control polygon. This essentially results in imposing a constraint on the coefficients of the spline function. Further, we allow the probability of winning on serve to also depend on the rally abilities of the opponents. We note that the rally advantage component of the model draws on \cite{KOV}, but we extend it further to study how the different types of court (e.g., clay, hard) may impact on the player's rally ability. It is indeed well known that some players favour and perform better on particular surfaces (e.g., Nadal holds 11 French Open (clay) titles and 2 Wimbledon (grass) titles). Each surface material presents its own unique characteristics and provides different challenges to the players, with certain playing styles working better on some types of court and less effectively on others. For example, a grass court is the fastest type of court because of its low bounce capacity. Players must get to the ball more quickly than on clay or hard courts, thus players with a stronger serve will generally perform better on grass. The rally advantage component of our model reflects how a player is likely to perform on a particular surface, and this in turn affects the win probability. \par
Our contribution is twofold: first, the basis function decomposition allows for a more flexible modelling of the longitudinal curve for serve advantage than that attainable via an exponential decay function. Our hierarchical Bayesian framework further accommodates the borrowing of information across the trajectories of the different athletes, and allows for out-of-sample prediction; second, our construction allows for the inclusion of covariates (e.g., the type of court) in the modelling of the rally abilities of the opponents. Therefore, it becomes possible to examine how covariates impact on the rally abilities, and ultimately on the distribution of the serve advantage curves. \par
The remainder of the paper is organised as follows. In Section~\ref{data} we provide an overview of the data used for our analysis. In Section~\ref{model} we present our hierarchical Bayesian isotonic logistic model. In Section~\ref{comparison} we compare our model with \cite{KOV}. Section~\ref{results} presents the results of our real data analysis, and conclusions are outlined in Section~\ref{conclusions}.
\section{Grand slam data} \label{data}
We consider point-by-point data for main-draw singles Grand Slam matches from 2012 forward. Organisations such as the ITF and Grand Slam tournaments record some data on professional tennis matches, but rarely make it available to the public. In this paper, we use data scraped from the four Grand Slam websites shortly after each event by Jeff Sackmann\footnote{\label{Jeff}https://github.com/JeffSackmann}. The data is also available with the \texttt{R} package \texttt{deuce} \citep{deuce}. \par
There are four Grand Slam tournaments, namely, the Australian Open, French Open, Wimbledon, and US Open. Matches in these tournaments are organised by two associations: the Association of Tennis Professionals (ATP), comprising all the matches played by male athletes, and the Women's Tennis Association (WTA), comprising all matches played by female athletes. We consider the male and female tournaments separately, thus obtaining two datasets. For both datasets, we include players with three matches or more in the training data. For a more robust inference, we consider only rally lengths between zero and thirty, counting as zero the first shot played by the server. For both datasets, we extract the following variables: rally length, the series of return hits of the ball from a player to the opponent (an integer between zero and thirty); the ID (name and surname) of the players serving and receiving, respectively; an indicator variable denoting if the server wins the point; the tournament name, used to derive the type of court in the different tournaments. Indeed, the Australian Open and US Open tournaments are played on a hard court, the French Open is played on clay, while Wimbledon is played on grass. Unfortunately, other information of potential interest, e.g. the serve's speed and direction, is not available for every rally. Table \ref{tab:1} reports some summary statistics about the dataset. In Table \ref{tab:rallies} we report the total number of rallies in the ATP and WTA tournaments, respectively. Short rallies, i.e. rally lengths smaller than or equal to $4$, constitute $90\%$ of the rallies played during the Grand Slam tournaments.
\begin{table*}\centering
\begin{tabular}{@{}rrrrr@{}}\toprule
& \multicolumn{4}{c}{\textbf{Tournament}} \\
\cmidrule{1-5}
& \textbf{US Open} & \textbf{Aus Open} &\textbf{ French Open} & \textbf{Wimbledon} \\ \midrule
\textbf{Matches}\\
ATP & 147 & 106 & 192 & 214 \\
WTA & 158 & 106 & 186 & 118 \\
\textbf{Players}\\
ATP & 108 & 107 & 132 & 107 \\
WTA & 108 & 127 & 119 & 104 \\
\bottomrule
\end{tabular}
\caption{Number of matches and players in the four Grand Slam tournaments by association from 2012 forward. The data was collected by Jeff Sackmann$^{\text{\ref{Jeff}}}$, and is also available with the \texttt{R} package \texttt{deuce} \citep{deuce}.}
\label{tab:1}
\end{table*}
\begin{table*}\centering
\begin{tabular}{@{}rrrr@{}}\toprule
\cmidrule{1-4}
&\textbf{Short rallies} &\textbf{Long rallies} & \textbf{Total}\\ \midrule
ATP & 130577 & 14933 & 145510\\
WTA & 71592 & 10288 & 81880\\
\bottomrule
\end{tabular}
\caption{Number of rallies by association. We define as short a rally whose length is less than or equal to 4. }
\label{tab:rallies}
\end{table*}
In Figure~\ref{fig:rally30} we report the observed relative frequency of rallies won by the server given the number of shots. It appears that the server has a higher chance of winning the point at odd rally lengths compared to even rally lengths. This pattern can be explained by the fact that even-numbered rallies end on the server's racquet, so he/she can win or make a mistake. Since the $y$-axis reports the observed relative frequency of winning for the server, all the even-numbered shots in Figure \ref{fig:rally30} represent the case in which the server wins a point with a \textit{winner}. A \textit{winner} is a shot that is not reached by the opponent and wins the point. Occasionally, the term is also used to denote a serve that is reached but not returned into the court. On the other hand, the odd-numbered rallies are the winners or errors made by the receiver. In particular, the odd-numbered shots in Figure \ref{fig:rally30} represent the errors made by the receivers. \par
\begin{figure}[!h]
\centering
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=8cm]{Pmen1.pdf}\\
\end{center}
\end{minipage}\ \
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=8cm]{Pwomen1.pdf}\\
\end{center}
\end{minipage}
\caption{Observed frequency of rallies won by the server given the number of shots for the ATP tournament (left) and the for the WTA (right). The blue points represent the odd-shots, while pink points are the even-numbered rallies. The size of a point $(x,y)$ is proportional to the number of the server's victories with rally length equal to $x$ divided by the total number of points won by the server. }\label{fig:rally30}
\end{figure}
Because errors are more common than winners, we aggregate odd and even rally lengths (see also \cite{KOV}). We obtain a vector of integers, where 1 corresponds to rally lengths equal to zero or one, 2 corresponds to values 2 or 3 of rally length, and so on. This ensures that the same set of outcome types for the server and receiver is represented within each group. The resulting frequencies are shown in Figure \ref{fig:rally15}. We observe that after the first shot the server's chance of winning the point drastically decreases. This is clear for both men and women. As conventional wisdom suggests, the server has the highest chance of winning a point at the beginning of the rally, owing to the strength of the serve. As the rally progresses, the serve advantage is expected to become increasingly small and to have progressively less influence on the outcome of the rally with each additional shot taken.
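In code, this aggregation amounts to an integer division; the sketch below reflects our reading of the grouping, and the handling of the single value 30 is an assumption.
\begin{verbatim}
def group_rally(rally_length):
    # 0 or 1 -> 1, 2 or 3 -> 2, ..., 28 or 29 -> 15; values of 30 are
    # capped at 15 here (an assumption about this edge case)
    return min(rally_length // 2 + 1, 15)

assert [group_rally(x) for x in (0, 1, 2, 3, 29)] == [1, 1, 2, 2, 15]
\end{verbatim}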
\begin{figure}[!h]
\centering
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=7cm]{Pmen2.pdf}\\
\end{center}
\end{minipage}\ \
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=7cm]{Pwomen2.pdf}\\
\end{center}
\end{minipage}
\caption{Conditional percentage of winning a point given the number of shots for the ATP tournament (left) and the for the WTA (right). Since we aggregated the odd and even results, rally length is between 1 and 15. The size of a point $(x,y)$ is proportional to the number of the server's victories with rally length equal to $x$ divided by the total number of points won by the server. \label{fig:rally15}}
\end{figure}
\section{Hierarchical Bayesian isotonic logistic regression model}\label{model}
In this section, we present our Bayesian hierarchical isotonic regression approach to model the serve advantage. Let $x \in X $ be the discrete variable representing rally length, where $X = \{L,\ldots, U\}$ is the set of all integers between $L$ and $U$. Let $Y_{i,j}$ be a binary random variable which is equal to one if server $i$ wins the point against receiver $j$, and zero otherwise. We assume
\begin{equation} \label{eq:likelihood}
Y_{i,j}|p_{i,j}(x) \sim \text{Bernoulli}(p_{i,j}(x)), \quad x\in X
\end{equation}
where $p_{i,j}(x)$ is the probability that server $i$ wins a point against receiver $j$ at rally length $x$, e.g $\mathbb{P}[Y_{i,j}=1\vert x]$. We consider two components to model $p_{i,j}(x)$, the first describing the serve advantage and the second representing the rally ability of the players. Specifically, we model the logit of $p_{i,j}(x)$ as follows:
\begin{equation}
\text{logit }\mathbb{P}(Y_{i,j}=1 \vert x) = \text{logit} \left[ p_{i,j}(x) \right] = f_i(x) + (\alpha_i -\alpha_j), \quad x \in X
\label{eq:mainmodel}
\end{equation}
with
\begin{equation}
f_i(s) = \sum_{m=1}^{M}\beta_{i,m} b_{m}(s), \quad s \in [L, U] \subset \mathbb{R}
\label{eq:basis}
\end{equation}
for $i= 1,\dots, n_s,$ $j = 1,\dots, n_r$ and $i \neq j$, where $n_s$ is the total number of servers and $n_r$ the total number of receivers. The $\alpha$'s are athlete-specific parameters representing the rally ability of each player. We observe that the function $f(s)$ (the index $i$ is omitted for simplicity) is defined for each $s$ in the continuous interval $[L, U]$; however, only its values $f(x)$, for $x$ in the discrete set $X \subseteq [L, U]$, enter the sampling model in Equation~\eqref{eq:mainmodel}. The continuous structure of $f(s)$ takes into account the overall trend of the serve advantage (i.e., it estimates the drop of the serve advantage as the rally length increases), accommodating the longitudinal structure of the data. \par
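As a concrete numerical illustration (ours, not the implementation used for the analyses in this paper), the following sketch evaluates $p_{i,j}(x)$ for one server-receiver pair with \texttt{scipy}; the knot vector is the one adopted later in Section~\ref{comparison}, while the coefficient and rally-ability values are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline
from scipy.special import expit   # inverse logit

# order k = 4 (cubic) B-spline basis on [1, 15], M = 9 basis functions
knots = np.array([1, 1, 1, 1, 2, 3, 4, 7, 11, 15, 15, 15, 15], dtype=float)
degree = 3                        # scipy uses the degree, i.e. order - 1
beta_i = np.array([2.0, 1.5, 1.0, 0.6, 0.4, 0.3, 0.2, 0.1, 0.0])
alpha_i, alpha_j = 0.3, 0.1       # rally abilities of server i, receiver j

f_i = BSpline(knots, beta_i, degree)         # serve-advantage curve f_i(s)
x = np.arange(1, 16)                         # rally-length groups 1..15
p_ij = expit(f_i(x) + (alpha_i - alpha_j))   # logit^{-1}(f_i(x) + a_i - a_j)
print(np.round(p_ij, 2))
\end{verbatim}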
The conditional log odds for the probability of the $i$-th server winning a point against the $j$-th receiver is a non-linear function of the rally length, $x$. Function $f_i(s)$ in Eq.~\eqref{eq:basis} represents the decay part of the model via a linear combination of basis functions $\{b_m(s)\}_{m=1}^M$, where $M$ is the dimension of the spline basis. In particular, we opted for B-splines basis functions of order $k$ on $[L, U] \subset \mathbb{R}$, and the $\beta_{i,m}$'s are athlete-specific basis function coefficients. Specifically, let $M \geq k \geq 1$ and $\mbox{\boldmath $t$} \equiv \{t_m\}_{m = 1}^{M+k}$ be a non-decreasing sequence of knots such that $t_m < t_{m+k}$ for all $m$, and $t_k = L$ and $U = t_{M+1}$. Function $f_i(s)$ is a linear combination of the B-splines $b_1, \ldots, b_M$, and is called a spline function of order $k$ and knot sequence $\mbox{\boldmath $t$}$ \citep{deBoor}. In other words, each $f_i$ is a piecewise polynomial of degree $(k-1)$ with breakpoints $t_m$, and the polynomials are $k - 1 - \mbox{Card}(t_m)$ times continuously differentiable at $t_m$. Here $\mbox{Card}(t_m)$ denotes the cardinality of $\{t_j: t_j = t_m\}$. Moreover, we recall that spline $b_m$ has support on the interval
$[t_m,t_{m+k}[$ and here we are going to assume $t_1=\dots=t_{k}=L$ and $t_{M+1}=\dots=t_{M+k}=U$.
A more extensive presentation of spline functions is given in \cite{deBoor}. \par
In our model, the spline function is defined on the whole interval $[L, U]$ and does not go to zero for high values of rally length. Indeed, by looking at the last value of rally length in our application, e.g. $x = 15$, it is clear that the logit of the conditional probability of $i$ winning the point against $j$ reduces to
\begin{equation*}
\text{logit } \mathbb{P}(Y_{i,j}=1 \vert x=15) = \sum_{m=1}^{M}\beta_{i,m} b_m(15) + (\alpha_i -\alpha_j). \label{x15}
\end{equation*}
We consider this as an asymptote, describing the server's ($i$-th player) log-odds of winning a point against opponent $j$ when the serve advantage has vanished. We can interpret parameter $\alpha_i$ as the rally ability of the $i$-$th$ player. When the serve advantage vanishes, the probability of winning the point depends on the discrepancy between the rally abilities of the two players plus a constant, obtained from the B-splines basis.\par
In the next sections, we provide more details regarding the modelling of the serve advantage and the rally ability, respectively.
\subsection{Modelling the serve advantage}
\label{serveadvantage}
Hereafter we will denote a spline function as \emph{partially monotone} if $f_i(s)$ is monotone only in a sub-interval of its domain $[L,U]$, for example if $f_i(s)$ is monotone decreasing in $[L_0, U] \subset [L,U]$. To simplify the notation, we will omit index $i$ that denotes individual-specific objects, so we will let $f_i = f$ and $\beta_{i,m} = \beta_m$. Further, the spline coefficients $\{\beta_m\}_{m=1}^M$ will be also referred to as \emph{control points} in the following.
Given that the serve advantage is expected to decrease as rally length increases, the spline function $f(s)$ should be non-increasing in $[L, U]$. While the non-increasing behaviour can be directly learnt from the data for small values of rally length, this could be harder to achieve for large values of rally length due to data sparsity in this part of the function domain. In other words, we may have to impose that the spline function is non-increasing for large values of rally length. Given a threshold $L_0$, we may allow $f(s)$ to be free to vary for small values of rally length (i.e., for $s<L_0$), while it is crucial to ensure that $f(s)$ is non-increasing as $s$ goes above $L_0$. Thus, we would like $f$ to be partially monotone, according to our definition. Then, we need to investigate which condition the spline function must verify to guarantee the partial monotonicity constraint. To this end we will first provide the following definition.
\theoremstyle{definition}
\begin{definition}[\textbf{Control Polygon}] \label{def:controlpol}
Let $\mbox{\boldmath $t$} \equiv \{t_m\}_{m = 1}^{M+k}$ be a non-decreasing sequence of knots and let $f(s)=\sum_{m=1}^{M}\beta_mb_m(s)$ be a spline function of order $k > 1$ and knot sequence $\mbox{\boldmath $t$}$. The \textit{control polygon} $\mathcal{C}(s)$ of $f(s)$ is defined as the piecewise linear function with vertices at $(\overline{t}_m, \beta_m)_{m = 1}^M$, where $$\overline{t}_m = \frac{t_{m+1} + \ldots + t_{m + k-1}}{k-1}$$ is called the $m$th knot average.
\end{definition}
We note that $\overline{t}_m < \overline{t}_{m+1}$ because it is assumed that $t_m < t_{m+k}$ for all $m$. The left panel of Figure \ref{fig:splinefunction} provides an illustrative example of a spline function with its associated control polygon. The spline function has order $k=4$ and knot vector $$\mbox{\boldmath $t$} = (1, 1, 1, 1, 2, 3, 4, 7, 11, 15, 15, 15, 15).$$ The control points $\{\beta_m\}_{m=1}^M$ are randomly drawn from a standard Normal distribution. The control polygon approximates the spline function $f$, and the approximation becomes more accurate as the number of control points increases.
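For reference, the knot averages of the knot vector used in Figure~\ref{fig:splinefunction} can be computed as follows (a short sketch of ours, using 0-based array indexing):
\begin{verbatim}
import numpy as np

knots = np.array([1, 1, 1, 1, 2, 3, 4, 7, 11, 15, 15, 15, 15], dtype=float)
k, M = 4, 9                                  # spline order, basis dimension

# m-th knot average: mean of the k-1 knots t_{m+1}, ..., t_{m+k-1}
knot_avg = np.array([knots[m + 1:m + k].mean() for m in range(M)])
print(knot_avg)   # approx. [1, 1.33, 2, 3, 4.67, 7.33, 11, 13.67, 15]

# the control polygon interpolates the vertices (knot_avg[m], beta[m])
\end{verbatim}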
\begin{figure}[h!]
\centering
\includegraphics[width=16cm, height = 8cm]{example}
\caption{Examples of spline functions (black solid lines) and associated control polygons (piecewise linear lines). In both panels, the spline functions were generated assuming B-spline functions of order $k = 4$ on $[1, 15]$ and with knot sequence $\mbox{\boldmath $t$} = (1, 1, 1, 1, 2, 3, 4, 7, 11, 15, 15, 15, 15)$, and $M = 9$ is the dimension of the spline basis. Left panel: control points are generated as $\beta_m \stackrel{iid}{\sim}N(0,1)$, for $m = 1, \ldots, M$. Right panel: the control polygon is restricted to be non-increasing in $[\bar{t}_{m_{L_0} -k} = \bar{t}_3= 2, 15]$, and the resulting spline function is non-increasing from the smallest knot greater than $2$. Black rug bars indicate the knot averages, while orange rug bars denote the interior knots. }
\label{fig:splinefunction}
\end{figure}
To ensure the serve advantage is non-increasing with rally length, we need to control the shape of the spline function $f$ on an interval $[L_0, U] \subseteq [L, U]$. To do so, we will follow the notation and the construction of \cite{france} hereafter. In particular, for all $s \in [L, U]$ we denote by $t_{m_s}$ the smallest knot greater than $s$, with the proviso that $m_s=M+1$ if $s$ belongs to $]t_{M},U]$. Further, we assume that $t_m < t_{m+1}$ for all
$m \notin \{1,\dots,k-1,M+1,\dots,M+k\}$ so that $t_{m_s - 1} < s \leq t_{m_s}$ for all $s \in [L, U]$. We identify splines $b_{m_{L_0}}, \ldots, b_{M}$ as those whose support intersects with $[L_0, U]$. In other words, $[t_m , t_{m+k}] \cap [L_0, U] \neq \emptyset$ for $m \in \{m_{L_0}-k, \ldots, m_U-1\}$ and $[t_m , t_{m+k}] \cap [L_0, U] = \emptyset$ for $m \notin \{m_{L_0}-k, \ldots, m_U-1\}$, and the spline function restricted to the interval $[L_0,U]$
reduces to
\begin{equation}
f_{[L_0,U]}(s) = \sum_{m=m_{L_0} - k}^{M}\beta_{m} b_{m}(s), \quad s \in [L_0, U]
\label{eq:basisreduced}
\end{equation}
We define the restricted control polygon on $[L_0, U]$ as the control polygon
associated to the spline \eqref{eq:basisreduced}, that is, the piecewise
linear function that interpolates the vertexes $P_m := (\bar{t}_m , \beta_m )$
for $m = m_{L_0} - k,\dots, M$. We denote by $\mathcal{C}_{[L_0,U]}(s)$ this
restricted control polygon, which is defined for $s\in[\bar{t}_{m_{L_0}-k},\bar{t}_M]$. We also observe that $\mathcal{C}_{[L_0,U]}$
is defined on an interval that contains $[L_0,U]$. Indeed,
$\bar{t}_{m_{L_0}-k}\le L_0 $ and $\bar{t}_{M}=U$.
The spline function $f(s)$ can be restricted to be non-increasing on $[L_0, U]$ by imposing that the associated restricted control polygon $\mathcal{C}_{[L_0,U]}(s)$ is non-increasing as the following Proposition states.
\begin{proposition}
Let $f(s)$ be a spline function with $s \in [L,U] \subset \mathbb{R}$, and let $\mathcal{C}(s)$ be the associated control polygon. Consider a real $L_0$ such that $L \leq L_0$. If the restricted $\mathcal{C}_{[L_0,U]}(s)$ is non-increasing on its support $[\bar{t}_{m_{L_0}-k},\bar{t}_M]$, then the restricted spline function $f_{[L_0,U]}(s)$ is also non-increasing on $[L_0,U]$. \label{prop:1}
\end{proposition}
The proof of Proposition \ref{prop:1} is given in Appendix \ref{A}. \cite{france} remark that if the control polygon is unimodal in $[L_0, U]$, then $f$ is unimodal or monotone on $[L_0, U]$, but it is possible to force $f$ to be unimodal by increasing the number of knots.
Controlling the shape of the control polygon reduces to controlling the magnitude of the sequence of control points $\{\beta_m\}_{m=1}^M$. For example, for a non-increasing constraint on $[L_0, U]$, the broken line with vertexes $(\bar{t}_m, \beta_m)_{m_{L_0}-k}^M$ is non-increasing if $\beta_{m_{L_0}-k} \geq \cdots \geq \beta_{M}$. In particular, we impose that
the spline coefficients of the restricted spline satisfy
\begin{equation}
\beta_{m}\le \beta_{m-1}\quad m \in \left\{m_{L_0}-k+1,\dots, M\right\}
\label{eq:condizione_beta}
\end{equation}
Thus we have that the spline coefficients $\beta_1,\dots,\beta_{m_{L_0}-k}$ are free, while the spline coefficients $\beta_{m_{L_0}-k+1},\dots,\beta_M$ must be
chosen such that condition in Equation~\eqref{eq:condizione_beta} is satisfied.
The right panel of Figure \ref{fig:splinefunction} shows an example of a spline function whose shape is constrained to be non-increasing on $[L_0 = 3, U = 15]$. Given knots $\mbox{\boldmath $t$} = (1, 1, 1, 1, 2, 3, 4, 7, 11, 15, 15, 15, 15)$, it is simple to realise that $m_{L_0} = 7$. Thus, $\beta_m \stackrel{iid}{\sim}N(0,1)$, for $m = 1, \ldots, m_{L_0}-k = 3$, while the remaining coefficients are such that $\beta_3 \geq \beta_4 \geq \beta_5 \geq \ldots \geq \beta_{9}$. The resulting control polygon is decreasing from $\bar{t}_3 = 2$, and the spline function decreases from $L_0 = 3$. \par
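A quick numerical check of Proposition~\ref{prop:1} (an illustration of ours, with arbitrary coefficient values): leaving $\beta_1$ and $\beta_2$ free while imposing $\beta_3 \geq \beta_4 \geq \dots \geq \beta_9$ yields a spline function that is non-increasing on $[3, 15]$.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

knots = np.array([1, 1, 1, 1, 2, 3, 4, 7, 11, 15, 15, 15, 15], dtype=float)
# beta_1, beta_2 free; beta_3 >= beta_4 >= ... >= beta_9
beta = np.array([0.2, 1.1, 0.8, 0.5, 0.5, 0.1, -0.2, -0.6, -0.9])

f = BSpline(knots, beta, 3)                 # order 4 = degree 3
grid = np.linspace(3, 15, 500)
print(np.all(np.diff(f(grid)) <= 1e-10))    # True: non-increasing on [3, 15]
\end{verbatim}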
\subsection{A new prior for the isotonic model} \label{sec:prior_isotonic}
Now, let us return to the athlete-specific notation, that is, let us denote with $f_i(s)$ the spline function and with $\beta_{i,m}$ the spline coefficients for athlete $i$. Hereafter, we will construct the spline function $f_i(s)$ as (an intercept plus) a combination of $M$ B-spline functions defined on the closed set $[L,U] \subset \mathbb{R}$, with order $k$ and knot vector $\boldsymbol{t}=\left \{t_1,\dots,t_{M+k} \right\}$. For rally lengths $x \in [L,L_0]$, the decreasing behaviour of the spline function $f$ is learnt from the data (Figure \ref{fig:rally15}), whereas the non-increasing trend for rally lengths larger than $L_0$ is ensured by controlling the trend of the restricted spline $f_{[L_0,U]}(s)$.
In order to specify a Bayesian model which takes into account the constraints on $f_i(s)$, we have to specify a prior distribution on the spline coefficients $\beta_1,\dots,\beta_M$ such that the conditions described in Section~\ref{serveadvantage} are satisfied.
With this goal in mind, the free spline coefficients are given a Normal prior distribution with mean $\beta_{m}$ and variance $\sigma^2_{\beta_m}$:
\begin{equation}
\beta_{i m}|\beta_{m},\sigma_{\beta_m }^2 \overset{\text{iid}}{\sim} \mathcal{N}(\beta_{m},\sigma_{\beta_m }^2) \qquad \text{for} \qquad m = 1,\cdots, m_{L_0}-k
\label{eq:priorbeta}
\end{equation}
The prior mean and precision $\tau^2_m= \frac{1}{\sigma_{\beta_m }^2}$ are given conditionally conjugate prior distributions:
\begin{equation}
\beta_{m}|\beta_{0},\sigma^{2}_{\beta_0} \sim \mathcal{N}(\beta_{0},\sigma^{2}_{\beta_0}) \quad \text{and} \quad \tau^2_m|r_{\tau},s_{\tau} \sim \Gamma\left(\frac{r_{\tau}}{s_{\tau}^2},\left(\frac{r_{\tau}}{s_{\tau}}\right)^2\right),
\end{equation}
where $r_{\tau}$ is the mean and $s_{\tau}$ is the variance of $\tau_m^2$. Further:
$$\beta_{0} \sim \mathcal{N}(0,100), \qquad \frac{1}{\sigma_{\beta_0}^2} \sim \Gamma(0.1,0.1)$$
$$r_{\tau} \sim \mathcal{U}(0,10), \qquad s_{\tau} \sim \mathcal{U}(0,10)$$
With regard to the constrained coefficients, we need to ensure the condition in Equation~\eqref{eq:condizione_beta} is verified. Therefore, we define these parameters recursively by letting
\begin{equation}
\beta_{i, m} := \beta_{i, m-1} - \varepsilon_{i, m}, \quad m=m_{L_0}-k+1,\dots,M
\label{eq:priorbeta2}
\end{equation}
where $\varepsilon_{i,m}$ are random decrements with
\begin{equation}
\varepsilon_{i,m}|r_{\varepsilon},s_{\varepsilon} \sim \Gamma\left(\frac{r_{\varepsilon}}{s_{\varepsilon}^2},\left(\frac{r_{\varepsilon}}{s_{\varepsilon}}\right)^2\right), \qquad \text{where} \quad r_{\varepsilon} \sim \mathcal{U}(0,10) \quad \text{and} \quad s_{\varepsilon} \sim \mathcal{U}(0,10)
\label{eq:priorepsilon}
\end{equation}
The last equation shows how a spline basis can be easily constrained to be non-increasing, while still retaining its essential flexibility. \par
For our application, we choose the same setup discussed in the example of Section~\ref{serveadvantage}. The constraint $L_0=3$ is consistent with the empirical conclusion of \cite{DB}, namely, that the serve advantage is lost after the 4th rally shot. We assume independent Normal priors as in Equation \ref{eq:priorbeta} for the first three spline coefficients ($m=1,\dots,3$) and we adopt the recursive construction \eqref{eq:priorbeta2}, with the prior in \eqref{eq:priorepsilon}, for the remaining coefficients ($m=4,\dots,9$).
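As an illustration of the recursive construction in Equations~\eqref{eq:priorbeta2}--\eqref{eq:priorepsilon}, the sketch below (ours) produces one prior draw of a constrained coefficient vector; the hyperparameter values are arbitrary, and we parameterise the gamma distribution by its mean $r_{\varepsilon}$ and variance $s_{\varepsilon}$, which is our reading of Equation~\eqref{eq:priorepsilon}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# three free Gaussian coefficients, then six positive decrements drawn
# from a gamma distribution with mean r_eps and variance s_eps
beta_free = rng.normal(loc=1.0, scale=0.5, size=3)        # beta_1..beta_3
r_eps, s_eps = 0.3, 0.05
eps = rng.gamma(r_eps**2 / s_eps, s_eps / r_eps, size=6)  # decrements
beta = np.concatenate([beta_free, beta_free[-1] - np.cumsum(eps)])
assert np.all(np.diff(beta[2:]) <= 0)   # beta_3 >= beta_4 >= ... >= beta_9
\end{verbatim}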
\subsection{Modelling the rally ability}
\label{sec:rallying}
As outlined above, parameter $\alpha_i$ in Equation~\eqref{eq:mainmodel} can be interpreted as the rally ability of server $i$. It is clear that parameters $\alpha_i$ and $\alpha_j$ in Equation~\eqref{eq:mainmodel} are not identifiable, that is, adding the same constant to both parameters leaves $(\alpha_i - \alpha_j)$, and thus inference, unchanged. Non-identifiability of the $\mbox{\boldmath $\alpha$}$'s is not a concern if one is solely interested in learning the logit of $p_{i,j}(x)$. However, we are also interested in direct inference on the rally ability parameters, thus we need to include an identifiability constraint \citep{gelfand1999identifiability,Baio}. In particular, we adopt a sum-to-zero constraint by setting
$$\sum_{i=1}^{N}{\alpha_i}=0,$$
where $N$ is the total number of players in the dataset. We specify a Gaussian prior distribution for the rally ability parameter as in \cite{KOV}:
\begin{equation}
\alpha_i|\alpha_{ 0},\sigma_{\alpha} \sim \mathcal{N}(\alpha_{0},\sigma^2_{\alpha}), \qquad {\text{for}} \qquad i = 1, \ldots, N-1 \label{modelalpha}
\end{equation}
Finally, we specify the following conditionally conjugate non-informative priors on the hyperparameters:
$$\alpha_0 \sim \mathcal{N}(0,100) \qquad \qquad \frac{1}{\sigma^2_{\alpha}} \sim \Gamma(0.1,0.1) $$
\subsection{Estimating a court effect} \label{sec:future}
In the Grand Slam tournaments, players play on three different types of court: clay, grass, and hard. The tennis season begins with hard courts, then moves to clay, grass, and back again to hard courts. While grass was very popular in the past, nowadays only Wimbledon is played on this surface. Each surface elicits different ball speed, bounce height, and sliding characteristics. For example, grass courts produce little friction with the ball, which will typically bounce low and at high speed on this court. Conversely, clay slows down the ball a little and allows players more time to return it, resulting in longer rallies. Players have to adapt their technique effectively to the surface. However, adapting training and playing schedules is extremely physically demanding on the player. As a result, it is very difficult for one player to dominate across all the courts, and thus all the slams \citep{Starbuck2016}. \par
It is therefore reasonable to state that the surface type can impact on a player's performance. \cite{atp} study the effect of the different courts for ATP players using a Bradley-Terry model and conclude that taking this information into account leads to improved rankings of the players. In our model, it is straightforward to include the court as a covariate within a regression model for the rally ability of each player, and to observe the best player for each court. In particular, we define the probability of winning a point on serve given both the rally length and the type of court:
\begin{equation}
\text{logit } p_{i,j}(x,c) = \text{logit }\mathbb{P}[Y_{i,j}=1\vert x,c] = \sum_{m=1}^{M}\beta_{i,m} b_{m}(x) + (\alpha_{i,c} -\alpha_{j,c} ),
\label{modelcourt}
\end{equation}
for $i= 1,\dots, n_s, j = 1,\dots, n_r$ and $i \neq j$, where $n_s$ is the total number of servers, $n_r$ the total number of receivers, and $M$ is the dimension of the splines basis. Here index $c$ denotes the type of court, with $c \in \left\{1,2,3\right\}$, where 1 means clay, 2 stands for grass and 3 means hard. \par
Adding this covariate to our model does not affect the serve advantage, which is modeled as in Section~\ref{serveadvantage}. Conversely, we now have a subject-specific vector of rally abilities $\mbox{\boldmath $\alpha$}_i = (\alpha_{i,1}, \alpha_{i,2}, \alpha_{i,3})^\top$, where $\alpha_{i,c}$ refers to court type $c$. To ensure the $\mbox{\boldmath $\alpha$}_i$'s are identifiable for all players, we impose
$$\sum_{i=1}^{N}\sum_{c=1}^3 \alpha_{i,c} = 0,$$
where $N$ is the total number of players in the dataset. We specify a Gaussian prior distribution on the rally ability parameters:
\begin{equation}
\alpha_{i,c}|\alpha_{ 0},\sigma_{\alpha} \sim \mathcal{N}(\alpha_{0},\sigma^2_{\alpha}), \label{alphacourt}
\end{equation}
for $i = 1, \ldots, N-1$, $c=1, 2, 3$, and for $i = N$ and $c = 1, 2$. Finally, we specify the following conditionally conjugate non-informative priors on the hyper-parameters:
$$\alpha_0 \sim \mathcal{N}(0,100) \qquad \qquad \frac{1}{\sigma^2_{\alpha}} \sim \Gamma(0.1,0.1) $$
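In practice, the sum-to-zero constraint can be handled by treating all but one of the court-specific rally abilities as free and fixing the last one to minus their sum; a minimal sketch (ours, with an arbitrary ordering convention) is given below.
\begin{verbatim}
import numpy as np

def complete_alphas(alpha_free, N):
    # alpha_free: the 3*N - 1 unconstrained court-specific rally abilities,
    # ordered player by player (clay, grass, hard); the remaining entry
    # (player N on hard court) is fixed by the sum-to-zero constraint
    alpha = np.append(alpha_free, -alpha_free.sum())
    return alpha.reshape(N, 3)

A = complete_alphas(np.random.default_rng(3).normal(size=3 * 4 - 1), N=4)
print(np.isclose(A.sum(), 0.0))   # True
\end{verbatim}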
\section{Model comparison}\label{comparison}
In this section, we compare different models in order to identify the best one for the tennis data. In particular, we consider the pair-comparison exponential decay model proposed by \cite{KOV}, and three versions of our Bayesian isotonic logistic regression (BILR) model: 1) no constraints on the spline coefficients, thus the spline function is free of monotonicity constraints; 2) set $L_0 = 3$ ($U=15$) and impose an order constraint on the coefficients of the B-splines with support in $(3, 15]$, thus the resulting spline function is non-increasing in $(3, 15]$ (partially monotone); and 3) spline function constrained to be non-increasing in $[1, 15]$, with an order constraint on all basis function coefficients, $\beta_1 \geq \beta_2 \geq \ldots \geq \beta_M$. Setting 2) draws on \cite{DB}, who observe that the serve advantage is lost after the 4th rally shot on men's first serve.
To compare the performance of the different methods, we compute four goodness-of-fit indices broadly used in the Bayesian framework. In particular, we consider the Log Pseudo Marginal Likelihood (LPML) \citep{LPML}, which derives from predictive considerations and leads to pseudo Bayes factors for choosing among models. Further, we compute the Deviance Information Criterion (DIC) \citep{DIC}, which penalizes a model for its number of parameters, and the Watanabe-Akaike information criterion (WAIC) \citep{WAIC}. The latter can be interpreted as a computationally convenient approximation to cross-validation and is not affected by the dimension of the parameter vector. Finally, we also compute the root mean squared error (RMSE). \par
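For reference, both LPML and WAIC can be computed from a matrix of pointwise log-likelihood evaluations over the posterior draws; the sketch below (ours) follows the standard definitions, with WAIC reported on the deviance scale and the effective number of parameters given by the posterior variance of the pointwise log-likelihood.
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    # loglik: (S, n) matrix of pointwise log-likelihoods over S draws
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)            # deviance scale

def lpml(loglik):
    # log pseudo marginal likelihood: sum of log conditional predictive
    # ordinates, each a harmonic mean of the pointwise likelihoods
    S = loglik.shape[0]
    return np.sum(np.log(S) - logsumexp(-loglik, axis=0))
\end{verbatim}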
Prior to implementing the BILR model, one has to choose the order of the B-spline bases $k$, the number of knots and their location, which together determine the dimension of the spline basis. We recall that $M$ in Eq.~\eqref{eq:basis} is determined as $M = k$ $+ $ number of interior knots.
Further, one has to choose the sub-interval of the spline function domain where monotonicity is to be imposed. For setting 2) above, this sub-interval is chosen to be $(3, 15]$. We performed some preliminary sensitivity analysis to investigate changes in the performance of the BILR model due to different choices of the number of bases, their degree, the knot locations, and the range $[L_0, U)$. Results of our sensitivity analysis are reported in \cite{tesiOrani}. For all three versions of our BILR model 1)-3) above, the spline functions $f_i(s)$, with $i = 1, \ldots, n_s$, are constructed from a B-spline basis of dimension $M = 9$ as in Eq.~\eqref{eq:basis}, defined on the closed interval $\left[L,U\right]=\left[1,15\right]$. The spline functions have order $k=4$ and knot vector $\mbox{\boldmath $t$} = (1, 1, 1, 1, 2, 3, 4, 7, 11, 15, 15, 15, 15)$. This choice maximises the LPML criterion and minimises the DIC and the WAIC, as reported in \cite{tesiOrani}.
In Figure \ref{fig:constrained} we compare the fit obtained for Andy Murray with the exponential decay model and the three versions of our BILR model. The exponential decay model (top left) has a decreasing behaviour until $x=5$; after that, the chance of winning a point is a constant given by the difference between Murray's rally ability and the average rally ability of his opponents (to plot the figure, we substituted $\alpha_j$ in Eq. \eqref{eq:mainmodel} with $\bar \alpha_i$, obtained by averaging all the $\alpha_j$ for $j\neq i$). Our BILR model with no constraints (top right) allows for an increasing behaviour in the probability of winning the point for some intermediate values of rally length, and this behaviour is unlikely to be justified in practice. While for small values of rally length the decreasing trend in serve advantage can be learned from the data, for large values of rally length this behaviour must be imposed through the model given data sparsity. We recall that short rallies, i.e. $x\leq 4$, constitute $90\%$ of the rallies in the dataset. In this data-rich part of the domain, no constraint is needed to adequately describe the data. Conversely, for long rallies, the decreasing behaviour imposed via the prior on the coefficients leads to a model which is not influenced by outliers. Both the model with the monotonicity constraint in $(3, 15]$ (bottom left) and the model with all spline coefficients constrained to be non-increasing (bottom right) display a non-increasing behaviour for large values of rally length.
\begin{figure}[h!]
\centering
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=6cm]{posteriorMurrayK.pdf}\\
\end{center}
\end{minipage}\ \
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=6cm]{Murraynocon.pdf}\\
\end{center}
\end{minipage}\ \
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=6cm]{Murray6con.pdf}\\
\end{center}
\end{minipage}\ \
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=6cm]{Murrayallcon.pdf}\\
\end{center}
\end{minipage}
\caption{Probability of winning a point as a function of rally length for Andy Murray estimated with the exponential decay model (top left), BILR with unconstrained splines (top right), BILR with an order constraint on the coefficients of the splines with support in $(3, 15]$ (bottom left), and BILR with an order constraint on all spline coefficients (bottom right). The points represent the observed data, while the black lines are the posterior mean estimates of the probability of winning a point as a function of rally length obtained with these models. The blue dashed lines are the $95\%$ credible intervals. \label{fig:constrained}}
\end{figure}
For a quantitative evaluation of the performance of the four approaches, we compute the goodness-of-fit measures for these models, reported in Table \ref{tab:4}.
\begin{table}[!h]\centering
\begin{tabular}{@{}lrrrrr@{}}\toprule
& \multicolumn{4}{c}{\textbf{Goodness-of-fit-Measures}} \\
\cmidrule{1-5}
& \textbf{LPML} & \textbf{WAIC} &\textbf{DIC} &\textbf{RMSE}\\ \midrule
\textbf{Exponential model} & -52813.7 & 105821.3 & 105876 & 22.52\\
\textbf{Spline models}\\
No constraints & -52760.4 & 105853.9 & 105818 & 20.71\\
Non-increasing in $(3,15]$ & -52739.1 & 105747.6 & 105828 & 19.32\\
Non-increasing in $[1,15]$ & -52744.9 & 105790.4 & 105875 & 20.37\\
\bottomrule
\end{tabular}
\caption{Predictive goodness-of-fit measures for different model settings.}\label{tab:4}
\end{table}
Although no dramatic difference in performance emerges, the BILR model under setting 2) above (spline function non-increasing in $(3, 15]$) simultaneously maximises the LPML criterion and minimises the WAIC, DIC, and RMSE. According to the results in Table \ref{tab:4}, we therefore select the model with six constrained splines. Thus, our final model has a spline function for server $i$:
\begin{equation}
f_i(s) = \sum_{m=1}^{9} \beta_{i, m} b_m(s), \quad \text{where} \quad s \in [1,15],
\end{equation}
with six constrained splines, that is, $ \beta_{i,4} \geq \beta_{i,5} \geq \beta_{i,6} \geq \beta_{i,7} \geq \beta_{i,8} \geq \beta_{i,9}$ for all servers $i=1,\dots, n_s$.
\section{Results} \label{results}
In this Section, we report results of the model fitted to point-by-point data for main-draw singles Grand Slam matches from 2012 forward, which were described in Section~\ref{data}. We divide both the male and female datasets into training and test sets. In both training sets we have $90$ randomly chosen servers, while the receivers are $140$ in the male training set and $139$ in the female training set. Conversely, in the male test set we have $50$ servers and $140$ receivers, whereas in the female test set we have $49$ servers and $139$ receivers. We fit model \eqref{eq:likelihood}-\eqref{modelalpha} separately on both male and female training sets, and perform predictions on the hold-out test sets. Our aim is to predict the conditional probability of winning a point for servers in the test sets by borrowing information from the training set results. \par
The posterior update of the model parameters was performed via Gibbs sampling, implemented with the \texttt{rjags} package \citep{rjags} in the \texttt{R} programming language \citep{R}. Posterior summaries were based on $20,000$ draws from the posteriors, with a burn-in of 1000 iterations and thinning every 20 iterations to reduce the autocorrelation in the posterior samples. Convergence of the Markov chains has been assessed by visual inspection and by using the \texttt{coda} package \citep{coda2006}. The sampler appeared to converge rapidly and mix efficiently.
Summaries of the serve advantage model parameters suggest a strong serve advantage. The probability of the server winning conditional on the point ending on serve, $\mathbb{P}(Y_{ij} = 1 \vert x = 1)$, is $0.83$, with $95\%$ credible interval (C.I.) $(0.75,0.94)$, for men, and $0.69$, with $95\%$ C.I. $(0.55,0.83)$, for women. When the serve advantage is lost, e.g. $x=15$, the probability of winning a point is mainly driven by the rally ability of the server relative to that of the opponent. In this case, $\mathbb{P}(Y_{ij} = 1 \vert x = 15)$ has $95\%$ credible intervals $(0.51,0.64)$ for men and $(0.46,0.55)$ for women, respectively.
\begin{figure}[!h]
\centering
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=6cm]{Nadal.pdf}
\end{center}
\end{minipage}\ \
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=6cm]{Federer.pdf}
\end{center}
\end{minipage}
\caption{Probability of winning a point as a function of rally length for Rafael Nadal and Roger Federer. The points represent the observed data, while the line is the posterior mean estimate of the probability of winning a point as a function of rally length obtained with the model. The blue dashed lines are the $95\%$ credible intervals. \label{fig:mod} }
\end{figure}
In Figure \ref{fig:mod} we show the estimated probability of winning a point as a function of rally length for Rafael Nadal and Roger Federer. The posterior mean curves are very similar: both players have the highest chance of winning the point on serve, and then this probability decreases. It is evident that the posterior mean estimate of the probability of winning a point on serve does not undergo an exponential decay, and a similar pattern is observed for other players as well.
We remark again that the curve $f_i(s)$ estimates a global trend, namely it describes how the serve advantage drops with rally length. Nevertheless, the value $f_i(x)$, for $x = 1,2, \dots, 15$, is the estimate of the serve advantage for athlete $i$ at the (discrete) rally length $x$. In the figure we plot the posterior estimate of $f_i(s)$ as a continuous trajectory to underline the longitudinal structure of the data.
\begin{figure}[h!]
\centering
\includegraphics[width=13cm,height=14cm]{ServeVsRallynuovo.pdf}
\caption{The posterior median rally ability $\alpha_i$ estimated under the baseline model (Section~\ref{sec:rallying}), on the $x$-axis, against the total serve advantage $\left(f_i(0) - f_i(15)\right)$ (Eq.~\eqref{eq:basis}), on the $y$-axis, for male players in the ATP tournaments. The red lines represent the median rally ability of male athletes, parallel to the $y$-axis, and the median of the serve advantage. The red points indicate the top four players of the ATP tournaments. We only display those athletes whose 95\% CI for serve advantage and rally ability do not include zero. \label{fig:ServevsRally} }
\end{figure}
Figure \ref{fig:ServevsRally} displays the estimated posterior median serve advantage versus the estimated posterior median rally ability. Specifically, the $x$-axis displays the posterior median of $\alpha_i$ estimated under the baseline model (Section~\ref{sec:rallying}), whereas the $y$-axis displays the total serve advantage $\left(f_i(0) - f_i(15)\right)$, where $f_i(s)$ is defined in Eq.~\eqref{eq:basis}. Consider the three top players according to the ATP singles ranking as of January 2019, namely, Roger Federer, Rafael Nadal and Novak Djokovic. We notice that Djokovic excels in terms of rally ability, confirming that he is better in defence than in attack. Conversely, Federer wins more at the first shot than in the long play. Finally, Nadal stands out in terms of both serve advantage and rally ability. Figure \ref{fig:ServevsRallyWTA} displays the same plot for the female dataset. We observe that Serena Williams excels in the long play, while Angelique Kerber and Simona Halep are better on serve. Caroline Wozniacki displays a good balance between serve and rally abilities.
\begin{figure}[h!]
\centering
\includegraphics[width=13cm,height=14cm]{ServeVsRallynuovoWTA.pdf}
\caption{The posterior median rally ability $\alpha_i$ estimated under the baseline model (Section~\ref{sec:rallying}), on the $x$-axis, against the total serve advantage $\left(f_i(0) - f_i(15)\right)$ (Eq.~\eqref{eq:basis}), on the $y$-axis, for female players in the WTA tournaments. The red lines represent the median rally ability of female athletes, parallel to the $y$-axis, and the median of the serve advantage. The red points indicate the top four players of the WTA tournaments. We only display those athletes whose 95\% CI for serve advantage and rally ability do not include zero. \label{fig:ServevsRallyWTA} }
\end{figure}
Further, we want to investigate the effect of the court surface on the rally ability. To this end we fit the extension of our model described in Section~\ref{sec:future}. We report the posterior median estimates of the rally ability, along with 95\% credible intervals, on the three different courts for the best players in the ATP and WTA tournaments. We also compute the posterior median estimate, with 95\% credible intervals, of ${\alpha_i}$ obtained with the model which does not take the court effect into account (Equation \eqref{eq:mainmodel}). These estimates, reported in Appendix~\ref{rallyabilities}, are used to rank the athletes and to understand how the different courts impact the game of these top players.
\begin{tiny}
\begin{table}[h!]\centering
\begin{tabular}{@{}lcllll@{}}\toprule
& & & \multicolumn{3}{c}{\textbf{Type of court}} \\
\cmidrule{4-6}
& \multicolumn{1}{c}{\textbf{Ranking}} & \multicolumn{1}{c}{\textbf{Baseline}} & \multicolumn{1}{c}{\textbf{Clay}} & \multicolumn{1}{c}{\textbf{Grass}} & \multicolumn{1}{c}{\textbf{Hard}} \\ \midrule
\textbf{ATP}\\
& 1 & Djokovic (0.35) & Nadal (0.52) & Federer (0.28) & Djokovic (0.34)\\
& 2 & Nadal (0.31) & Djokovic (0.29) & Djokovic (0.28) & Nadal (0.28)\\
& 3 & Federer (0.16) & Federer (0.09) & Nadal (0.17) & Federer (0.14)\\
\textbf{WTA}\\
& 3 & Wozniacki (0.17) & Halep (0.20) & Kerber (0.16) & Wozniacki (0.21)\\
& 1 & Halep (0.11) & Wozniacki (0.09)& Halep (0.11) & Kerber (0.15)\\
& 2 & Kerber (0.03) & Kerber (0.01) & Wozniacki (0.11) & Halep (0.11)\\
\bottomrule
\end{tabular}
\caption{Ranking of the players on different court surfaces. The second column lists the official ATP and WTA year-end final rankings (by points) for singles for the 2018 championships season.}\label{tab:6}
\end{table}
\end{tiny}
The results (Table \ref{tab:6}) confirm common knowledge about these athletes. Djokovic and Nadal are both great at rallying. Djokovic is good on all courts, while Nadal is very good on clay and hard courts but weaker on grass. Federer appears to be weaker in rallying compared to the other two athletes, though he is the strongest on grass courts.
Regarding the WTA tournament, Angelique Kerber is good at rallying on both hard and grass courts, but underperforms on clay. Caroline Wozniacki is good on all types of court, and in fact she is the player with the highest estimated rally ability $\alpha$ among the three female athletes. Simona Halep is good on clay, but does not outperform the other players on either grass or hard courts. In general, however, the female athlete with the highest estimated $\alpha$ in the WTA dataset is Serena Williams (Figure~\ref{fig:ServevsRallyWTA}). The median estimate of her rally ability $\alpha$ in the baseline model is 0.26 (0.18-0.35), whereas the estimates on clay, grass, and hard courts are, respectively, 0.17 (0.05-0.30), 0.17 (0.11-0.21) and 0.27 (0.17-0.37).
\begin{figure}[!h]
\centering
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=6cm]{posteriorSimon.pdf}
\end{center}
\end{minipage}\ \
\begin{minipage}{7cm}
\begin{center}
\includegraphics[width=7cm,height=6cm]{EugenieBouchard.pdf}
\end{center}
\end{minipage}
\caption{Probability of winning a point as a function of rally length for Gilles Simon, on the left, and for Eugenie Bouchard, on the right. The points represent the real data, while the line is the estimated posterior mean probability of winning a point as a function of rally length obtained with the model. The blue dashed lines are the $95\%$ credible intervals.} \label{fig:SimHal}
\end{figure}
In Figure \ref{fig:SimHal} we observe the out-of-sample prediction for two players belonging to the male and female test sets, Gilles Simon and Eugenie Bouchard. The estimated probability of winning a point for a server in the test set (e.g., the black solid curve in Figure \ref{fig:SimHal}) is obtained by drawing the basis function coefficients $\beta_{i,1}, \dots, \beta_{i,M}$ and the positive random decrements $\{\varepsilon_{i,m}\}_{m = m_{L_0}-k+1}^{M}$ as per Equations \eqref{eq:priorbeta}, \eqref{eq:priorepsilon}, \eqref{eq:priorbeta2}, using the posterior estimates of the non-subject-specific parameters, that is, $\beta_{m}$, $\sigma_{\beta_m}^2$, $r_{\varepsilon}$ and $s_{\varepsilon}$. The rally ability is estimated in the training phase only. The strength of the hierarchical model is the ability to infer the conditional probability of winning for a hold-out subject by borrowing strength from the athletes in the training dataset. The estimated trajectories for these players are in line with the observed realisations given by the points in Figure \ref{fig:SimHal}. \par
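A sketch of this prediction step is given below (ours, not the authors' \texttt{rjags} code); the dictionary keys, the toy posterior draws and the parameterisation of the gamma decrements by their mean and variance are all our own assumptions.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline
from scipy.special import expit

def predictive_curve(post, knots, x, alpha_diff=0.0, seed=4):
    # post: posterior-draw arrays for the population-level parameters;
    #   'beta_mean', 'beta_sd' : (S, 3) draws for the free coefficients
    #   'r_eps', 's_eps'       : (S,) draws governing the decrements
    # alpha_diff: assumed difference in rally abilities (0 for an
    # "average" opponent); returns the Monte Carlo mean of p(win | x)
    rng = np.random.default_rng(seed)
    S = post['r_eps'].shape[0]
    probs = np.empty((S, len(x)))
    for s in range(S):
        beta = np.empty(9)
        beta[:3] = rng.normal(post['beta_mean'][s], post['beta_sd'][s])
        shape = post['r_eps'][s] ** 2 / post['s_eps'][s]
        scale = post['s_eps'][s] / post['r_eps'][s]
        beta[3:] = beta[2] - np.cumsum(rng.gamma(shape, scale, size=6))
        probs[s] = expit(BSpline(knots, beta, 3)(x) + alpha_diff)
    return probs.mean(axis=0)

# toy posterior draws, for illustration only
S = 100
post = {'beta_mean': np.tile([2.0, 1.0, 0.5], (S, 1)),
        'beta_sd':   np.full((S, 3), 0.3),
        'r_eps':     np.full(S, 0.3),
        's_eps':     np.full(S, 0.05)}
knots = np.array([1, 1, 1, 1, 2, 3, 4, 7, 11, 15, 15, 15, 15], dtype=float)
print(np.round(predictive_curve(post, knots, np.arange(1, 16)), 2))
\end{verbatim}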
\section{Conclusions} \label{conclusions}
In this paper, we presented a framework for modelling the serve advantage in elite tennis. Our approach extends \cite{KOV} by replacing a simple exponential decay function for the serve advantage with a B-spline basis function decomposition, thus achieving more flexible results. Constraints on the basis function coefficients guarantee that the serve advantage is non-increasing with rally length. As in \cite{KOV}, we allow the conditional probability of winning on serve to also depend on the rally abilities of the two players, and investigate how the different types of court may affect these rally abilities. When the exponential decay function in \cite{KOV} goes to zero, the conditional probability of winning a point is only given by the difference between the two rally abilities, and is thus constant. Conversely, our spline function is defined on $[1,15]$ by construction, and therefore non-zero everywhere in the spline domain. This results in higher uncertainty, as represented by wider credible intervals for large values of rally length.
This should be considered a positive feature of our model, which is able to reflect larger uncertainty in the presence of sparser data.
Our results show a sort of trade-off between serve advantage and rally ability. The most successful tennis players in the dataset show higher rally ability (above the training median value) relative to their serve advantage. Indeed, if two players have the same chance of winning the point on the first shot, the match is more likely to be won by the player with the higher rally ability. We can conclude that although the serve is important, what makes a tennis player great is his/her rally ability.\par
Although motivated by the analysis of tennis data, our methodology can be applied to pair-comparison data in general, with applications ranging from experimental psychology to the analysis of sports tournaments to genetics.
\section{Experiment and Results}\label{sec:CaseStudy}
We present the results of the computation of the methods for all games played by the Golden State Warriors and Cleveland Cavaliers in the 2016 season of the National Basketball Association (NBA). There are two reasons for choosing these two teams in this season. First, they were the most consistent teams in the league in recent years, and they did not change any aspects of their regular rotation during this season. Second, these two teams won all three championships during the past three years. Following common practice~\cite{mikolajec2013game}, we consider the Finals to be the environment that best shows the players' true characteristics and their value to their team. Furthermore, we can also analyze more players' data and reduce the volatility caused by changes in the lineup~\cite{blei2003latent}. More importantly, through rigorous mathematical derivation, we can compare the parameters computed directly from real Finals data with the parameters trained in our system, in order to verify the accuracy of the training system. Due to page limitations, we only analyze some games, randomly selected from all competitions, and we evaluate the exact values of two players (Stephen Curry and Andrew Bogut) in this paper. The results of other games and players are presented in the appendix.
\subsection{Data}
The data for all games in the 2016 season of the NBA were downloaded from the SportVU website. SportVU is an optical tracking system installed by the NBA on all 30 courts to collect real-time data~\cite{siegle2013design}. We generate a sequence of points based on rounds, and we also present the data for each team as a table (Table~\ref{table1} shows 20 games for the Cleveland Cavaliers). In addition, we introduce manually collected passing data (Table~\ref{table2} shows the number of passes for the Cleveland Cavaliers), which include the Finals of the 2015 and 2016 seasons, to verify the effectiveness of the parameters.
We set up a Java web project to present all of our results. The passing networks and graphs were created and analyzed using Java and NetworkX~\cite{hagberg2008exploring}. The algorithm itself was implemented in Python.
\begin{table}[!ht]
\centering
\caption{ \bf The datasets of 20 games for the Cleveland Cavaliers.}
\label{table1}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\bf number & \bf date & \bf rounds & \bf 0 pt ratio (\%) & \bf 1 pt ratio (\%) & \bf 2 pt ratio (\%) & \bf 3 pt ratio (\%) & \bf result \\
\hline
1 & 2015/10/27 & 111 & 56.76 & 9.01 & 26.13 & 8.11 & F \\
\hline
2 & 2015/10/28 & 101 & 48.51 & 10.89 & 27.72 & 12.87 & S \\
\hline
3 & 2015/10/30 & 107 & 46.73 & 16.82 & 30.84 & 5.61 & S \\
\hline
4 & 2015/11/2 & 101 & 43.56 & 14.85 & 33.66 & 7.92 & S \\
\hline
5 & 2015/11/4 & 115 & 51.3 & 20 & 22.61 & 6.09 & S \\
\hline
6 & 2015/11/6 & 100 & 47 & 8 & 35 & 10 & S \\
\hline
7 & 2015/11/8 & 108 & 49.07 & 15.74 & 27.78 & 7.41 & S \\
\hline
8 & 2015/11/10 & 117 & 40.17 & 28.21 & 22.22 & 9.4 & S \\
\hline
9 & 2015/11/13 & 109 & 53.21 & 16.51 & 24.77 & 5.5 & S \\
\hline
10 & 2015/11/14 & 118 & 54.24 & 14.41 & 19.49 & 11.86 & F \\
\hline
11 & 2015/11/17 & 100 & 50 & 12 & 27 & 11 & F \\
\hline
12 & 2015/11/19 & 101 & 36.63 & 23.76 & 28.71 & 10.89 & S \\
\hline
13 & 2015/11/21 & 104 & 45.19 & 15.38 & 28.85 & 10.58 & S \\
\hline
14 & 2015/11/23 & 103 & 45.63 & 12.62 & 24.27 & 17.48 & S \\
\hline
15 & 2015/11/25 & 98 & 50 & 13.27 & 22.45 & 14.29 & F \\
\hline
16 & 2015/11/27 & 106 & 50 & 17.92 & 24.53 & 7.55 & S \\
\hline
17 & 2015/11/28 & 104 & 55.77 & 10.58 & 25 & 8.65 & S \\
\hline
18 & 2015/12/1 & 106 & 54.72 & 18.87 & 17.92 & 8.49 & F \\
\hline
19 & 2015/12/4 & 113 & 47.79 & 17.7 & 25.66 & 8.85 & F \\
\hline
20 & 2015/12/5 & 109 & 56.88 & 14.68 & 22.94 & 5.5 & F \\
\hline
\end{tabular}
\begin{flushleft} The table is derived from part of the data in our appendix.
\end{flushleft}
\end{table}
\begin{table}[!ht]
\centering
\caption{ \bf The number of passes in the finals of the Cleveland Cavaliers.}
\label{table2}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
 & \bf A & \bf B & \bf C & \bf D & \bf E & \bf F & \bf G & \bf H & \bf I & \bf J & \bf K & \bf L & \bf M\\
\hline
\bf A & 95 & 36 & 20 & 17 & 112 & 13 & 7 & 3 & 2 & 2 & 2 & 0 & 1\\
\hline
\bf B & 20 & 78 & 4 & 9 & 54 & 15 & 2 & 11 & 1 & 2 & 0 & 1 & 1\\
\hline
\bf C & 10 & 0 & 65 & 2 & 17 & 5 & 1 & 2 & 0 & 1 & 0 & 0 & 0\\
\hline
\bf D & 13 & 5 & 1 & 95 & 35 & 11 & 1 & 18 & 0 & 3 & 0 & 0 & 1\\
\hline
\bf E & 79 & 73 & 45 & 52 & 411 & 99 & 24 & 84 & 6 & 35 & 2 & 0 & 21\\
\hline
\bf F & 20 & 8 & 7 & 16 & 55 & 121 & 3 & 12 & 1 & 7 & 0 & 0 & 6\\
\hline
\bf G & 6 & 1 & 3 & 0 & 9 & 8 & 18 & 4 & 0 & 0 & 0 & 0 & 0\\
\hline
\bf H & 4 & 13 & 6 & 19 & 108 & 32 & 5 & 161 & 0 & 29 & 0 & 1 & 12\\
\hline
\bf I & 2 & 0 & 1 & 0 & 3 & 2 & 6 & 0 & 6 & 0 & 0 & 0 & 0\\
\hline
\bf J & 2 & 0 & 0 & 2 & 22 & 5 & 0 & 11 & 0 & 48 & 0 & 0 & 0\\
\hline
\bf K & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 0 & 0\\
\hline
\bf L & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\
\hline
\bf M & 2 & 1 & 1 & 2 & 10 & 7 & 0 & 4 & 0 & 0 & 0 & 0 & 32\\
\hline
\end{tabular}
\begin{flushleft} The table is derived from part of the data in our appendix. The letters (A $\sim$ M) represent the players of the Cleveland Cavaliers.
\end{flushleft}
\end{table}
\subsection{Case Study}
In this section, we demonstrate through a verification experiment that the proposed method, which combines networks with a hidden Markov model, is a feasible approach. The experimental results show that it yields a more accurate and stable evaluation of the exact values of basketball players.
In this experiment, we have 13 Golden State Warriors players recorded during the season, for whom the semantics of the basketball games are incorporated. We denote them by letters (e.g., A represents Stephen Curry). Each round has one of four possible outcomes (0 points, 1 point, 2 points, or 3 points). Thus, we set the known parameters of the model as follows: $\mathcal{M} = \left\lbrace A, B, C, \dots, L, M\right\rbrace$ and $\mathcal{N} = \left\lbrace 0,1,2,3 \right\rbrace$. The observation sequence $O_{i}$ of our selected games is fed into the algorithm, from which we obtain the parameters $A$ (matrix of state transition probabilities), $B$ (matrix of observation probabilities), and $\pi$ (initial probability distribution). According to the above analysis, the parameters that cannot be obtained directly from the original large dataset can be derived by the algorithm in our system. Using these parameters, we provide a novel behavior analysis and individual evaluation method for basketball. We now combine the method with the data from the basketball competition and present the experiment in detail from two perspectives.
\begin{Method}
The method for verifying the accuracy of the parameters trained in VPHMM
\end{Method}
It is well known that in hidden Markov models, if the state transition matrix coincides, the other two parameters coincide as well; this follows from the estimation formulas~\cite{lecun2015deep}. In other words, if we show that the trained state transition matrix $A$ agrees with the matrix computed from its definition using the actual pass counts, then we can verify the effectiveness of the parameters trained by the system. To obtain the parameters from the definitions, we collected the passing data for the NBA Finals in 2015 and 2016 manually (see Table~\ref{table2}). In this way, we can calculate the state transition probability matrix from its definition and compare the result with the one trained by our system. Moreover, we use a graphical representation~\cite{kumar2016mega7} to show the data intuitively.
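To make this verification step concrete, the following Python sketch illustrates it on synthetic data; the pass counts, the ``trained'' matrix and all numbers are hypothetical placeholders, not the actual SportVU data or the fitted VPHMM output. It normalizes manually collected pass counts into an interaction-probability matrix, measures its deviation from a trained matrix, and renders the two matrices as grayscale grids in the spirit of Fig~\ref{fig3}.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical 13 x 13 interaction matrices for one game: `empirical` is
# computed from manually collected pass counts, `trained` stands in for the
# output of the training system (here simulated by adding small noise).
counts = rng.integers(1, 60, size=(13, 13)).astype(float)
empirical = counts / counts.sum()
trained = np.clip(empirical + rng.normal(scale=0.002, size=empirical.shape), 0, None)
trained /= trained.sum()

# Deviation between the two parameter sets.
print("mean abs deviation:", np.abs(empirical - trained).mean())

# Side-by-side grayscale grids (lighter = more interaction).
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, mat, title in zip(axes, [empirical, trained], ["observed", "trained"]):
    im = ax.imshow(mat, cmap="gray")
    ax.set_title(title)
    fig.colorbar(im, ax=ax, fraction=0.046)
plt.show()
\end{verbatim}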
Fig~\ref{fig3} illustrates the probability that players X and Y interact (i.e., pass the ball to each other) in four games. The more interactions that occurred between two players, the lighter the color displayed in the corresponding grid cell, as indicated by the reference scale on the right-hand side of each grid. There are eight pictures, each composed of a 13 $\times$ 13 grayscale lattice. They are divided into 4 groups; both figures in a group describe the same game, with the parameters calculated in two different ways. The top row shows the result obtained from the real observations, and the bottom row shows the result of our algorithm.
By comparing and analyzing the data of the four groups (see Fig~\ref{fig3}), it can be observed that the deviation of the parameters is within an acceptable range~\cite{cai2019personalized}. In our experiment, we carefully compared all the game parameters of the different teams in the 2016 and 2017 NBA Finals. Thus, we can conclude that the parameters trained by our system are applicable and that the training method is feasible.
\begin{figure}[!ht]
\centering
{
\label{fig:subfig_a25}
\includegraphics[width = 1\textwidth]{Fig3}
}
\caption{{\bf There are 8 figures, which represent 13 players' relationships in four games. The scale on the right-hand side of each figure displays the probability that player X will interact with player Y.} There are four groups (a and e, b and f, c and g, and d and h), which are a comparison of the matrix of transition probability. Figures (a $\sim$ d) are authentic data, and (e $\sim$ h) are trained by the algorithm.}
\label{fig3}
\end{figure}
\begin{Method}
A novel criterion to evaluate the exact values of players.
\end{Method}
Duch et al.~\cite{duch2010quantifying} proposed that evaluating any network of a team requires determining the relative value of its individual members. In the same way, the value of a player also depends on the network of his team. The network here is directly determined by the interactions between players on the basketball court. To address this problem in a comprehensive way, we propose a novel criterion for basketball that quantifies the involvement of players in the network, the probability distribution of their points, and the probability that a player is chosen as the organizer, in both won and lost games. For our analyses, we used the parameters trained by our algorithm to compare the different players. We describe the methodology for evaluating the exact values of players and illustrate its concrete steps with two players (i.e., Stephen Curry and Andrew Bogut) of the Golden State Warriors.
Through the training system, we obtain the three parameters (matrix of transition probability $A=\{a_{ij}\}$, matrix of observation probability $B=\{b_{ij}\}$, and initial probability distribution $\pi=\{\pi_{i}\}$) of the Golden State Warriors in the 2016 season. In our methodology, we have defined the parameters $A$, $B$, and $\pi$. Now, we visualize each of the parameters to compare the values of the players. In the data visualization, the matrix of transition probability is shown as a weighted graph (Fig~\ref{fig1}), the matrix of observation probability is shown as a pie chart (Fig 4), and the initial state probability distribution is shown as a bar chart (Fig 5). In the case study, we randomly choose two games of the Golden State Warriors to evaluate Stephen Curry's and Andrew Bogut's exact values to the team~\cite{kearse2012geneious}.
For the purpose of evaluating the exact values of Stephen Curry and Andrew Bogut of the Golden State Warriors, we generate the network representation for every game involving the Golden State Warriors. Figure 1(a) shows a game chosen randomly among the Warriors' wins, while Figure 1(b) shows a game chosen randomly among the Warriors' losses. In this paper, we take these two games, combined with the three features captured by the parameters, as examples. Before examining each parameter individually, we first compare the two weighted graphs. There is a clear difference in the density of passes between the won game and the lost game, which provides an important reference for the subsequent analysis. The evaluation method is instantiated through the parameters as follows:
\begin{enumerate}
\item
By using data from the matrix of transition probability, we construct individual weighted graphs, which show each player's linkages from that player's point of view. Figure 6 shows the networks of Stephen Curry; we can observe that the networks are very different between the won game and the lost game. We analyze the results of the game from two perspectives.
\begin{itemize}
\item
According to the comparison between the two games, we can observe that when the game is won, Curry's passes made and passes received are more evenly distributed across all his teammates. In the lost game, he was more inclined to interact with only a few core players. In other words, his interaction with his teammates is very important to the outcome of the game.
\item
In addition, whether the game is won or lost, Stephen Curry interacts with almost all of the players. Figure 7 presents the network of another Golden State Warriors player, Andrew Bogut. We can intuitively see that his passing and receiving arrows in the two games are both fewer than those of Stephen Curry. We can also observe that this player mostly receives the ball, which means that he depends more on passes from his teammates than on creating opportunities for himself~\cite{blanchard2009cohesiveness}.
\end{itemize}
\item From the data visualizations, the matrix of observation probability was presented as a pie chart. This pattern showed the significance of Stephen Curry to the team.
Comparing the two charts (i.e., Figure 4a and Figure 4c), we can intuitively see a great difference in the performance of Stephen Curry between the two games. When he did not play at his proper level (e.g., Fig 4c), the probability of losing the game was larger. It was also observed that Andrew Bogut's performance in the two games was almost identical (see Fig 4b and Fig 4d).
\item According to the initial probability distribution, we transformed the probability distribution into a bar chart. The chart shows the probability of each player being the organizer of a round. By analyzing Fig 5, we can examine how the probabilities of all players (e.g., Stephen Curry and Andrew Bogut) change between the different games. In Fig 5a and Fig 5b, the X-axis represents each player, and the Y-axis indicates the probability that a player is chosen as the organizer.
By analyzing the two histograms, we can draw a clear conclusion. Stephen Curry is the organizer whom the team relies on in every game. Moreover, through the comparison of probabilities, a higher proportion of passes involving Stephen Curry is conducive to the game's success. By contrast, the proportions of the other players (e.g., Andrew Bogut) are relatively small and have little influence.
\end{enumerate}
We drew some interesting conclusions by comparing the two players in the two different games. All players from different teams can be added to the comparison in this way. From the results, it was possible to visually identify the differential influences of the performances of various players on the result of the game. The quantification and visualization of the value of a player to his team can provide coaches and managers with useful knowledge for deciding on the roster of their teams.
\section{Conclusion}\label{sec:Conclusion}
In this paper, we presented an approach to evaluate the exact values of basketball players to their team using SportVU data collected from the NBA. For this purpose, we first defined and trained three parameters describing the passing network, the probability of a player being chosen as the organizer, and the distribution of the player's scores. To verify the effectiveness of the parameters trained by the system, we used a mathematical derivation of the estimation formulas and calculated the deviation between the output of the algorithm and the real data. We accomplished this by using an algorithm called the variant parameter hidden Markov model (VPHMM), which was used to analyze the players' networks.
To obtain a better evaluation, we visualized the three parameters as a weighted graph, a pie chart, and a bar chart, respectively. The overall goal of this work was to create an assistive tool that team managers, coaches, and players can use when planning to improve the team and to prepare for an upcoming opponent. The experiment showed that our method of evaluating basketball players' values within teams is superior to previous strategies. We believe this method can also play a role in other team sports.
\section{Introduction}\label{sec:introduction}
Behavioral analysis of basketball players and the measurement of their values are crucially important for building an admirable basketball team. Without an elaborate assessment system for basketball players, the team manager is unable to devise a strategy by which to utilize athletes of different types and values~\cite{gudmundsson2017spatio}. Such a system can not only provide suggestions for team managers for formulating strategies but also play important roles in the construction of coaching tactics and in the players' own training.
Conventional systems often use concrete statistical data, such as scoring, assists, and rebounds, to measure a player's behavior in a game. On the basis of these data, they study the changes in individual players' physical states in competition and try to divide the players' roles in team sports (such as a leading role versus a supporting role), so as to assess their players' values~\cite{carling2010analysis,wang2007learning,8634658}. According to Svensson's survey report~\cite{svensson2005testing}, such an approach has been shown to be insufficient insofar as it ignores the implicit abilities behind the direct data and cannot offer proper strategies for improving the total performance of a team with an unaltered roster. The introduction of cameras on the game court provides opportunities to accurately analyze any specific player in detail. For example, SportVU~\cite{cornacchia2017survey}, which is dedicated to the domain of sports, distinguishes concrete ball passes and records the instantaneous positions of each player. The massive amounts of data produced by such recording systems enable theoretical analyses with mathematical tools, such as graph theory and hidden Markov modeling, to uncover useful information that can correctly assess players' values.
The players in a game can be viewed as isolated points that are connected by ball passing and changes of position, forming an integrated network. One of the first attempts at network-based sports analysis was made by Gould and Gatrell~\cite{gould1979structural}, who studied the simplicial complexes of the passing network in the 1977 FA Cup Final. However, their analysis did not receive considerable attention due to the limitations of the hardware facilities of the time.
Since the mid-2000s, a series of works have appeared that introduced social network analysis to team sports. Bourbousson et al.~\cite{bourbousson2010team} presented an efficient method for distinguishing the relationships among basketball players on the offensive end using graph theory. Passos et al.~\cite{passos2011networks} provided some insights on graph theory in team sports by analyzing datasets of player interactions during attacking sequences in water polo. In 2012, Fewell et al.~\cite{fewell2012basketball} introduced a transition graph for basketball games wherein the vertices indicate the five traditional player positions (point guard, shooting guard, small forward, power forward, and center). In that work, the network properties of degree centrality, clustering, entropy, and flow centrality across teams and positions were used to characterize the game. The results show that coordination built on local interactions does not by itself guarantee that a team wins its matches. Nevertheless, Duch et al.~\cite{duch2010quantifying} applied the network model to analyze the passing accuracy and performance of soccer players. Recently, a series of works by Clemente et al.~\cite{ clemente2015general, clemente2015using} demonstrated that network metrics can be applied to characterize the cooperation of players, which is a powerful tool for helping coaches understand specific properties of their teams and for supporting decision making to improve the sports training process based on match analysis.
These achievements provide the basis with which to utilize graph theory to evaluate the individual values of players. Indeed, the researchers~\cite{clemente2015using, grund2012network, pena2012network} evaluated the performance of players by network density and heterogeneity. However, it is challenging to evaluate players' exact values on their own team~\cite{albert2000error}. The challenge is that players produce a large amount of data when they interact on the court, and the quantitative analysis of this information cannot be achieved by these scholars' methods~\cite{9284088}. To solve this problem, we use the interactive data between players to construct the network and then apply a machine learning algorithm to analyze these networks.
We introduce a novel criterion to better discover and analyze datasets for evaluating the exact value of a player to his team. This criterion combines three features of players that match the actual situation in a game. The estimation of each player's three features with a machine learning algorithm is the key to analyzing his value to the team. The first feature includes passing efficiency and its specific effect on scoring, which indirectly measures the scores that one player achieves. The second feature involves the probability of a player being chosen as the organizer of the next offense; it offers an overarching perspective on a player's role in a game. The last feature is the direct scoring generated by a player, which includes foul shots and three-point shots. Under this criterion, we construct a passing network that incorporates the semantics of the game, providing an accurate representation of the passing relationships between players, and we quantify these three features by means of three parameters: the state transition probabilities indicate the passing relationships, the initial probabilities represent the probability of a player being chosen as the organizer, and the observation probabilities describe the distribution of the points that the player scores in a round. To estimate these three parameters, we propose a variant parameter hidden Markov model (VPHMM), which is a parameter estimation paradigm that inherits advantages from hidden Markov models~\cite{Ingle2016Hidden,eddy1998profile,8750782}.
The main contributions of this paper include the following:
\begin{itemize}
\item A novel criterion is introduced to evaluate the player's performance from a team-level perspective.
\item A variant parameter hidden Markov model is proposed to estimate the parameters that indicate three features of the player's performance.
\item A network-based hidden Markov model is applied to team sports for the first time.
\end{itemize}
The rest of the paper is organized as follows. We describe in detail the methods in Section 2, which includes network analyses and parameter estimation. In Section 3, the results of the measures are empirically demonstrated. Section 4 is the conclusion.
\section{Methodology}\label{sec:Methodology}
We originally aimed to record and analyze passing networks in basketball in order to evaluate the exact value of each player to his team. However, because most sources only provide aggregate data across all games, the passing networks were constructed using the parameters trained in our system. When we found that the system, which is based on a hidden Markov model, could train very accurate parameters, we realized that the evaluation method exceeded our expectations.
All three parameters obtained by the algorithm play an important role in the network, which is built from detailed data on the players in the game~\cite{emmert2016fifty}. The subtle performances of a player on the court can be captured by his three parameters. We applied these parameters to basketball games to demonstrate that the approach is feasible and precise in evaluating the value of a basketball player to his team.
\subsection{Network Analyses}
To characterize a basketball match as a network, the first step is to define the standard by which the players are linked. We construct a finite $n \times n$ network where the vertices represent the players and the edges represent linkages between players (e.g., when a player passes the ball to another player). A `round' is defined as the period from the moment a team obtains possession of the basketball to the moment when possession is recovered by the opposition~\cite{passos2011networks}. The contribution of a player to a team can then be inferred from the individual passing network and, in particular, from centrality and statistical measures, which define the value of a player according to different parameters~\cite{pena2012network}. We provide a novel measure that defines three parameters and discuss their meaning in the context of basketball.
\begin{definition}
(Matrix of state transition probability). Assume that there is a set of players $\mathcal{M} = \left\lbrace 1,2,...,m\right\rbrace$; we define $\omega_{i,j}$ to be the number of passes from player $i$ to player $j$, and $\omega_i = \sum_{j=1}^{m}\omega_{i,j}$ to be the total number of passes made by player $i$. Thus, the total number of passes in a game is $\sum_{k=1}^{m} \omega_{k}$. The matrix of transition probability $A$ is then given by
\begin{equation}
A_{i,j} = \frac{ \omega_{i,j}}{\sum_{k=1}^{m} \omega_{k}},
\end{equation}
\begin{itemize}
\item for every player $i \in \mathcal{M}$;
\item the case $i = j$ corresponds to the player not passing the ball.
\end{itemize}
\end{definition}
\begin{definition}
(Matrix of observation probability). Assume that there is a set of observations $\mathcal{N} = \left\lbrace 1,2,...,n\right\rbrace$; we define $b_{j,k}$ to be the number of players $j$ in observation $k$. Thus, the total observations of this player in a game should be $\sum_{j=1}^{n} $b$_{j,k}$. Therefore, the matrix of transition probability $B$ can be solved:
\begin{equation}
B_{j,k} = \frac{b_{j,k}}{\sum_{i=1}^{n} b_{i}},
\end{equation}
\begin{itemize}
\item For every observation of basketball game $k \in \mathcal{N}$.
\end{itemize}
\end{definition}
\begin{definition}
(Initial probability distribution). We define $\pi$ as the probability distribution of players being chosen as an organizer for the offense.
\end{definition}
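As an illustration of these three definitions, the following Python sketch computes $A$, $B$ and $\pi$ from raw counts for a hypothetical three-player roster; all numbers are invented for illustration and are not taken from the SportVU data.
\begin{verbatim}
import numpy as np

# omega[i, j]: number of passes from player i to player j (Definition 2.1).
omega = np.array([[ 5, 12,  3],
                  [ 9,  2,  7],
                  [ 4,  6,  1]], dtype=float)

# b_counts[j, k]: number of rounds in which player j's possession ends with
# outcome k in {0, 1, 2, 3} points (Definition 2.2).
b_counts = np.array([[30,  5, 14,  6],
                     [22,  9, 11,  3],
                     [18,  4,  9,  2]], dtype=float)

# organizer_counts[i]: number of rounds in which player i is chosen as the
# organizer of the offense (Definition 2.3).
organizer_counts = np.array([40.0, 25.0, 12.0])

A = omega / omega.sum()                              # Definition 2.1
B = b_counts / b_counts.sum(axis=1, keepdims=True)   # Definition 2.2 (rows sum to 1)
pi = organizer_counts / organizer_counts.sum()       # Definition 2.3

print(A.round(3), B.round(3), pi.round(3), sep="\n\n")
\end{verbatim}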
We generated weighted graphs from the cumulative pass probabilities, which can be obtained from Definition 2.1. Previous researchers \cite{fewell2012basketball} usually set $m$ to five, using five letters to represent the five traditional basketball positions. Although this produces a clearer network diagram by highlighting the most important players, it ignores the value of role players. This may distort the reality of the game, so the resulting picture can differ from the actual event. The network diagram (see Figure 1) that we set up here does not ignore any player. The blue circles represent those involved in the match who had playing time. An arrow's direction represents the pass direction: the origin of the arrow indicates the player who passed the ball, and the arrowhead indicates the player who received the ball. The width and color of the arrows denote the numbers and percentages of passes from one player to another during games. Specifically, thicker arrows represent more passes taking place between players, and thinner arrows illustrate fewer passes. The red arrows indicate that the number of passes from one player to a specific player is more than 20 percent of his total passes. When the percentage is between 10 and 20, the arrows are colored orange. The blue arrows indicate that this percentage is between 5 and 10, and the green arrows that it is between 1 and 5. The grey arrows represent percentages of less than 1, the minimum percentage.
This weighted graph not only captures the details but also intuitively reveals the important players in the game. However, when our dataset was analyzed in detail, almost all nodes were connected, making some nodes difficult to distinguish within the team's network. To solve this problem, we refined the graph (Figure 2) to each player's perspective. Each player's individual network contains his passing arrows and receiving arrows. When evaluating the values of the players, these individual networks intuitively indicate the relevance or centrality of a player.
\begin{figure}[!ht]
\centering
\subfigure[]{
\label{fig:subfig_a}
\includegraphics[width = 0.52\textwidth]{Fig1a}
}
\hspace{-0.5in}
\subfigure[]{
\label{fig:subfig_a}
\includegraphics[width = 0.52\textwidth]{fig1b}
}
\caption{ {\bf Network representation for analyzed games played by the Golden State Warriors (a: a win; b: a loss).}
The blue circles represent the players involved in the game who had playing time. The direction of the arrows represents the pass direction. The origin of the arrow
indicates the player who passed the ball, and the arrowhead indicates the player who received the ball. The width and color of the arrows denote
the numbers and percentages of passes from one player to another during games. Specifically, thicker arrows represent more passes having occurred
between players, and thinner arrows indicate fewer passes having occurred among players. The red arrows indicate that the number of passes from one player
to the specified player is more than 20 percent of his total passes. When the percentage is between 10 and 20, the arrows are colored orange.
Likewise, the blue arrows indicate that this percentage is between 5 and 10, and the green arrows indicate that this percentage is between 1 and 5. The gray arrows indicate that the percentage is less than 1, which is the minimum percentage represented by color.}
\label{fig1}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure[]{
\label{fig:subfig_a}
\includegraphics[width = 0.51\textwidth]{Fig2a}
}
\hspace{-0.5in}
\subfigure[]{
\label{fig:subfig_a}
\includegraphics[width = 0.51\textwidth]{Fig2b}
}
\caption{{\bf Network representation for the analyzed games from the point of view of A (i.e., Stephen Curry).} Fig 2(a) presents the network of A with respect to receiving the ball from others, and Fig 2(b) presents the network of A with respect to passing the ball to other players.}
\label{fig2}
\end{figure}
In addition, the method of evaluation also includes the score distribution of the player. The score distribution defines the probability of different numbers of points scored by the player in a round.
The last parameter we used in our method represents the probability of the player being the organizer. These three parameters trained in our system represent the aforementioned features. We will illustrate the application in detail in Section 3 with an example about basketball.
\subsection{Parameter Training}
In the definitions of the previous subsection, we defined the following parameters: the matrix of transition probability $A=\{a_{ij}\}$, the matrix of observation probability $B=\{b_{ij}\}$, and the initial probability distribution $\pi=\{\pi_{i}\}$. We provide the definitions and mathematical procedures of our VPHMM algorithm in this subsection.
We define the players to be the finite set of states M = \{$\alpha_{1}$,$\alpha_{2}$,$\alpha_{3}$,...,$\alpha_{m}$\}, and we define the finite set of observations N =\{$\varGamma_{1}$,$\varGamma_{2}$,$\varGamma_{3}$,...,$\varGamma_{n}$\}. These two sets, which are easily obtained from the large datasets, are used as input for Algorithm~\ref{alg:A} to estimate the three parameters (i.e., $A$, $B$, and $\pi$). We now present a practical example to show how the proposed algorithm works.
\begin{algorithm}
\caption{Parameters training}
\label{alg:A}
\begin{algorithmic}[1]
\REQUIRE The finite set of states $M$; the finite set of observations $N$; The initial value $\lambda^{(0)}=(A^{(0)},B^{(0)},\pi^{(0)})$
\ENSURE Matrix of transition probability $A$; Matrix of observation probability $B$; Initial state probability distribution $\pi$; The final value $\lambda^{(n+1)}=(A^{(n+1)},B^{(n+1)},\pi^{(n+1)})$
\STATE $n = 0$.
\WHILE{$n < T$}
\STATE $a_{ij}^{(n+1)}=\frac{\sum_{t=1}^{T-1}\zeta_t(i,j)}{\sum_{t=1}^{T-1}\gamma_t(i)}$.
\STATE $b_j(k)^{(n+1)}=\frac{\sum_{t=1,\,o_t=v_k}^{T}\gamma_t(j)}{\sum_{t=1}^{T}\gamma_t(j)}$.
\STATE $\pi_i^{(n+1)}=\gamma_1(i)$.
\STATE $n = n+1$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:A} is based on the classical unsupervised learning (Baum--Welch) algorithm for hidden Markov models~\cite{leos2017analysis}. We combine this estimation method with the properties of network theory to train the parameters and evaluate the players' values in basketball matches. The estimation formulas, which incorporate the semantics of the basketball game~\cite{wei:2015:pst:2783258.2788598}, are as follows:
In the method, $a_{ij}$ denotes the probability of player $i$ passing to another player $j$. We calculate $a_{ij}$ as follows:
\begin{equation}
a_{ij}=\frac{\sum_{t=1}^{T-1}\zeta_t(i,j)}{\sum_{t=1}^{T-1}\gamma_t(i)}.
\end{equation}
$b_j(k)^{(n+1)}$ represents the probability of player $j$ obtaining $b$ points in one round. Its estimation formula is
\begin{equation}
b_j(k)^{(n+1)}=\frac{\sum_{t=1,\,o_t=v_k}^{T}\gamma_t(j)}{\sum_{t=1}^{T}\gamma_t(j)}.
\end{equation}
$\pi_i^{(n+1)}$ represents the rate of player $i$ being chosen as the organizer, and the parameter can be trained from
\begin{equation}
\pi_i^{(n+1)}=\gamma_1(i).
\end{equation}
Following~\cite{abdel2014convolutional}, $\gamma_t(i)$ is defined as follows:
\begin{equation}
\gamma_t(i) = P(i_t=q_i|O,\lambda)=\frac{\alpha_t(i)\beta_t(i)}{\sum_{i=1}^N\sum_{j=1}^N\alpha_t(i)a_{ij}b_j(o_{t+1})\beta_{t+1}(j)},
\end{equation}
where $\alpha_t(i)$ and $\beta_t(i)$ are the forward and backward variables, which are computed recursively from the beginning and from the end of the observation sequence, respectively.
$\zeta_t(i,j)$ is the probability of being in state $q_i$ in round $t$ and in state $q_j$ in round $t+1$, given the observations; it is computed from the forward and backward variables~\cite{lecun2015deep} as follows:
\begin{equation}
\zeta_t(i,j) = \frac{P(i_t=q_i,i_{t+1}=q_j,O|\lambda)}{P(O|\lambda)}=\frac{\alpha_t(i)a_{ij}b_j(o_{t+1})\beta_{t+1}(j)}{\sum_{i=1}^N\sum_{j=1}^N\alpha_t(i)a_{ij}b_j(o_{t+1})\beta_{t+1}(j)}.
\end{equation}
The joint probability $P(i_t=q_i,i_{t+1}=q_j,O|\lambda)$ of two consecutive rounds can be calculated as follows:
\begin{equation}
P(i_t=q_i,i_{t+1}=q_j,O|\lambda)=\alpha_t(i)a_{ij}b_j(o_{t+1})\beta_{t+1}(j).
\end{equation}
Then, we present their derivation in detail based on the games of the Golden State Warriors in the 2016 season. In our set, we have the known observation sequence $O = (o_{1},o_{2},o_{3},...,o_{T})$ and $o_{T} \in \mathcal{N}$.
The states' sequence is $I = (i_{1},i_{2},i_{3},...,i_{T})$ and $i_{T} \in \mathcal{M}$.
In our method, the observation sequence $O$ represents the sequence of round outcomes in the basketball match, and $T$ is the number of rounds in a game. The state sequence corresponds to the players; in our example, A is Stephen Curry and B is Andrew Bogut.
Here $\lambda$ denotes the parameters to be optimized, while $\overline{\lambda}$ denotes the current parameter estimate. Based on these, we obtain the following:
\begin{equation}
Q(\lambda,\overline{\lambda})=\sum_I \log P(O,I|\lambda)\,P(O,I|\overline{\lambda}).
\end{equation}
In the case study, $P(O|\overline{\lambda})$ can be calculated directly from the original data. Following the development of the algorithm for HMMs in~\cite{abdel2014convolutional}, we expand $Q(\lambda,\overline{\lambda})$ using the known values as
\begin{equation}
\begin{split}
Q(\lambda,\overline{\lambda})=\sum_I \log\pi_{i_1}\,P(O,I|\overline{\lambda})+
\sum_I\Bigl(\sum_{t=1}^{\tau-1}\log a_{i_t,i_{t+1}}\Bigr)P(O,I|\overline{\lambda})+\\\sum_I\Bigl(\sum_{t=1}^{\tau} \log b_{i_t}(o_t)\Bigr)P(O,I|\overline{\lambda}).
\end{split}
\end{equation}
Thus, three parameters have occurred in the formula. We deduce the values of these parameters as follows:
\begin{enumerate}
\item[(1)] $\pi_i$:
from the above calculations, we obtain
\begin{equation}
\sum_I \log\pi_{i_1}\,P(O,I|\overline{\lambda})=\sum_{i=1}^N \log\pi_i\,P(O,i_1=i|\overline{\lambda}).
\end{equation}
In a basketball match, some player is always chosen as the organizer of a round. Consequently, $\pi_i$ satisfies the constraint
\begin{equation}
\sum_{i=1}^N\pi_i=1.
\end{equation}
Using the Lagrange multiplier method, the Lagrangian is written as
\begin{equation}
\sum_{i=1}^N \log\pi_i\,P(O,i_1=i|\overline{\lambda})+\gamma\Bigl(\sum_{i=1}^N\pi_i-1\Bigr).
\end{equation}
Taking the partial derivative with respect to $\pi_i$ and setting it to zero,
\begin{equation}
\frac{\partial}{\partial \pi_i}\Biggl[\sum_{i=1}^N \log\pi_i\,P(O,i_1=i|\overline{\lambda})+\gamma\Bigl(\sum_{i=1}^N\pi_i-1\Bigr)\Biggr]=0,
\end{equation}
we obtain
\begin{equation}
P(O,i_1=i|\overline{\lambda})+\gamma\pi_i=0.
\end{equation}
Summing over $i$ yields
\begin{equation}
\gamma=-P(O|\overline{\lambda}).
\end{equation}
Hence we obtain $\pi_i$ as
\begin{equation}
\pi_i=\frac{P(O,i_1=i|\overline{\lambda})}{P(O|\overline{\lambda})}.
\end{equation}
\end{enumerate}
\begin{enumerate}
\item [(2)] $a_{ij}$: the derivation is analogous to that of $\pi_i$. The transition probabilities out of each player sum to one, which gives the constraint
\begin{equation}
\sum_{j=1}^Na_{ij}=1.
\end{equation}
Using the same Lagrange multiplier method as above, we obtain
\begin{equation}
\sum_I\Bigl(\sum_{t=1}^{\tau-1}\log a_{i_t,i_{t+1}}\Bigr)P(O,I|\overline{\lambda})=
\sum_{i=1}^N\sum_{j=1}^N\sum_{t=1}^{\tau-1}\log a_{ij}\,P(O,i_t=i,i_{t+1}=j|\overline{\lambda}),
\end{equation}
\begin{equation}
a_{ij}=\frac{\sum_{t=1}^{\tau-1}P(O,i_t=i,i_{t+1}=j|\overline{\lambda})}{\sum_{t=1}^{\tau-1}P(O,i_t=i|\overline{\lambda})}.
\end{equation}
\end{enumerate}
\begin{enumerate}
\item [(3)] $b_j(k)$: the derivation process is the same as (2),
\end{enumerate}
\begin{equation}
\sum_{k=1}^Mb_{j}(k)=1,
\end{equation}
\begin{equation}
\sum_I\Bigl(\sum_{t=1}^{\tau} \log b_{i_t}(o_t)\Bigr)P(O,I|\overline{\lambda})=\sum_{j=1}^{N}\sum_{t=1}^{\tau}\log b_{j}(o_t)\,P(O,i_t=j|\overline{\lambda}),
\end{equation}
\begin{equation}
b_j(k)=\frac{\sum_{t=1}^{\tau}P(O,i_t=j|\overline{\lambda})I(o_t=v_k)}{\sum_{t=1}^{\tau}P(O,i_t=j|\overline{\lambda})}.
\end{equation}
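For readers who want to reproduce the training step, the following self-contained Python sketch implements one pass of the forward--backward recursions and the re-estimation formulas above for a generic discrete HMM; the toy roster, the uniform initial parameters and the random observation sequence are placeholders and not the actual game data.
\begin{verbatim}
import numpy as np

def baum_welch_step(O, A, B, pi):
    """One re-estimation step of the update formulas above for a discrete HMM.

    O: observation sequence of integer symbol indices (length T);
    A: (N, N) transition matrix; B: (N, K) observation matrix; pi: (N,).
    Scaling is omitted for brevity, so the sketch suits short sequences only.
    """
    O = np.asarray(O)
    N, T = A.shape[0], len(O)

    # Forward variables alpha_t(i) and backward variables beta_t(i).
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])

    likelihood = alpha[T - 1].sum()           # P(O | lambda)
    gamma = alpha * beta / likelihood         # gamma_t(i)

    zeta = np.zeros((T - 1, N, N))            # zeta_t(i, j)
    for t in range(T - 1):
        zeta[t] = (alpha[t][:, None] * A * B[:, O[t + 1]][None, :]
                   * beta[t + 1][None, :]) / likelihood

    # Re-estimation formulas as in the algorithm above.
    new_A = zeta.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[O == k].sum(axis=0) / gamma.sum(axis=0)
    new_pi = gamma[0]
    return new_A, new_B, new_pi

# Toy run: 3 players (states), 4 round outcomes (observations), random data.
rng = np.random.default_rng(1)
A0, B0, pi0 = np.full((3, 3), 1 / 3), np.full((3, 4), 1 / 4), np.full(3, 1 / 3)
O = rng.integers(0, 4, size=30)
A1, B1, pi1 = baum_welch_step(O, A0, B0, pi0)
print(pi1.round(3))
\end{verbatim}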
\section*{Acknowledgments}
The research work described herein was funded by the Science and Technology Planning Project of Guangdong Province (No. 2016B010124012, 2016B090920095).
\section*{References}
\section{Introduction}
Football is a typical low-scoring game and matches are frequently decided by single events. While several factors like extraordinary individual performances, individual errors, injuries, refereeing errors or just lucky coincidences are hard to forecast, each team has its strengths and weaknesses (e.g., defense and attack) and most results reflect the qualities of the teams. We follow this idea in order to derive probabilities
for the exact result of a single match between two national teams, which involves the following four ingredients for both teams:
\begin{itemize}
\item Elo ranking
\item attack strength
\item defense strength
\item location of the match
\end{itemize}
The complexity of the tournament with billions of different outcomes makes it very difficult to obtain accurate estimates of the probabilities of certain events. Therefore, we do \textit{not} aim at forecasting the exact outcome of the tournament, but we want to make the discrepancy between the participating teams \textit{quantifiable} and to measure the chances of each team to reach certain stages of the tournament or to win the cup. In particular, since the groups are already drawn and the tournament structure for each team (in particular, the way to the final) is set, the idea is to measure whether a team has a rather simple or hard way to the final.
\par
Since this is a technical report with the aim of presenting simulation results, we omit a detailed description of the state of the art and refer to \cite{gilch:afc19} and \cite{gilch-mueller:18} for a discussion of related research articles and a comparison to related models and covariates under consideration.
\par
As a quantitative measure of the strengths of the participating teams, we use the \textit{Elo ranking} (\texttt{http://en.wikipedia.org/wiki/World\_Football\_Elo\_Ratings}) instead of the FIFA ranking (which has been a simplified Elo ranking since July 2018), since the calculation of the FIFA ranking changed over time and the Elo ranking is more widely used in football forecast models. See also \cite{GaRo:16} for a discussion of this topic and a justification of the Elo ranking. The model under consideration shows a good fit, and the obtained forecasts are conclusive and give \textit{quantitative insights} into each team's chances.
\section{The model}
\label{sec:model}
\subsection{Preliminaries}
\label{subsec:goals}
The simulation in this article works as follows: each single match is modeled as $G_{A}$:$G_{B}$, where $G_{A}$ (resp.~$G_{B}$) is the number of goals scored by team A (resp.~by team B). The exact result of each single match is forecasted, from which we are able to simulate the course of the whole tournament. Even the most probable tournament outcome has a probability very close to zero of actually being realized. Hence, deviations of the true tournament outcome from the model's most probable one are not only possible, but most likely. However, simulations of the tournament yield estimates of the probabilities for each team to reach certain stages of the tournament and allow us to make the different teams' chances \textit{quantifiable}.
\par
We are interested in giving quantitative insights into the following questions:
\begin{enumerate}
\item Which team has the best chances to become new European champion?
\item How big are the probabilities that a team will win its group or will be eliminated in the group stage?
\item How big is the probability that a team will reach a certain stage of the tournament?
\end{enumerate}
\subsection{Involved data}
The main idea is to predict the exact outcome of a single match based on a regression model which takes the following individual characteristics into account:
\begin{itemize}
\item Elo ranking of the teams
\item Attack and defense strengths of the teams
\item Location of the match (either one team plays at home or the match takes place on neutral ground)
\end{itemize}
We use an Elo rating system, see \cite{Elo:78}, which includes modifications to take various football-specific variables (like home advantage, goal difference, etc.) into account. The Elo ranking is published on the website \texttt{eloratings.net}, from which all historic match data was also retrieved.
\par
We give a quick introduction to the formula for the Elo ratings, which uses the typical form as described in \texttt{http://en.wikipedia.org/wiki/World\_Football\_Elo\_Ratings}: let $\mathrm{Elo}_{\mathrm{before}}$ be the Elo points of a team before a match; then the Elo points $\mathrm{Elo}_{\mathrm{after}}$ after the match against an opponent with Elo points $\mathrm{Elo}_{\mathrm{Opp}}$ is calculated as follows:
$$
\mathrm{Elo}_{\mathrm{after}}= \mathrm{Elo}_{\mathrm{before}} + K\cdot G\cdot (W-W_e),
$$
where
\begin{itemize}
\item $K$ is a weight index regarding the tournament of the match (World Cup matches have weight $60$, while continental tournaments have weight $50$)
\item $G$ is a number taking into account the goal difference:
$$
G=\begin{cases}
1, & \textrm{if the match is a draw or won by one goal,}\\
\frac{3}{2}, & \textrm{if the match is won by two goals,}\\
\frac{11+N}{8}, & \textrm{where $N$ is the goal difference otherwise.}
\end{cases}
$$
\item $W$ is the result of the match: $1$ for a win, $0.5$ for a draw, and $0$ for a defeat.
\item $W_e$ is the expected outcome of the match calculated as follows:
$$
W_e=\frac{1}{10^{-\frac{D}{400}}+1},
$$
where $D=\mathrm{Elo}_{\mathrm{before}}-\mathrm{Elo}_{\mathrm{Opp}}$ is the difference of the Elo points of both teams.
\end{itemize}
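To make the update rule concrete, here is a minimal Python sketch of a single Elo update following the formula above; the function name, the example ratings and the chosen $K$ are illustrative, and further refinements used by \texttt{eloratings.net} (such as a home-advantage adjustment) are not part of the formula above and are therefore omitted.
\begin{verbatim}
def elo_update(elo_a, elo_b, goals_a, goals_b, K=50):
    """Update team A's Elo rating after a match against team B."""
    diff = abs(goals_a - goals_b)
    if diff <= 1:                      # draw or won by one goal
        G = 1.0
    elif diff == 2:                    # won by two goals
        G = 1.5
    else:                              # won by N >= 3 goals
        G = (11 + diff) / 8
    W = 1.0 if goals_a > goals_b else 0.5 if goals_a == goals_b else 0.0
    D = elo_a - elo_b
    W_e = 1 / (10 ** (-D / 400) + 1)
    return elo_a + K * G * (W - W_e)

# Illustrative example: a team rated 2100 beats a team rated 2013 by 2:0 in a
# continental championship match (K = 50).
print(round(elo_update(2100, 2013, 2, 0), 1))
\end{verbatim}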
The Elo ratings on 8 June 2021 for the top $5$ participating nations in the UEFA EURO 2020 (in this rating) were as follows:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Belgium & France & Portugal & Spain & Italy \cr
\hline
2100 & 2087 & 2037 & 2033 & 2013 \cr
\hline
\end{tabular}
\end{center}
The forecast of the exact result of a match between teams $A$ and $B$ is modelled as
$$
G_A\ : \ G_B,
$$
where $G_A$ and $G_{B}$ are the numbers of goals scored by team $A$ and $B$.
The model is based on a \textit{Zero-Inflated Generalized Poisson} (ZIGP) regression model, where we assume $(G_{A}, G_{B})$ to be a bivariate zero-inflated generalized Poisson distributed random variable. The distribution of $(G_{A}, G_{B})$
will depend on the current Elo ranking $\elo{A}$ of team $A$, the Elo ranking $\elo{B}$ of team $B$ and the location of the match (that is, one team either plays at home or the match is taking place on neutral playground). The model is fitted
using all matches of the participating teams between 1 January 2014 and 7 June 2021. The historic match data is weighted according to the following criteria:
\begin{itemize}
\item Importance of the match
\item Time depreciation
\end{itemize}
In order to weigh the historic match data for the regression model we use the following date weight function for a match $m$:
$$
w_{\textrm{date}}(m)=\Bigl(\frac12\Bigr)^{\frac{D(m)}{H}},
$$
where $D(m)$ is the number of days ago when the match $m$ was played and $H$ is the half period in days, that is, a match played $H$ days ago has half the weight of a match played today. Here, we choose the half period as $H=365\cdot 3$ days, i.e., $3$ years; compare with Ley, Van de Wiele and Van Eetvelde \cite{Ley:19}.
\par
For weighing the importance of a match $m$, we use the match importance ratio in the FIFA ranking which is given by
$$
w_{\textrm{importance}}(m)=\begin{cases}
4, & \textrm{if $m$ is a World Cup match},\\
3, & \textrm{if $m$ is a continental championship/Confederation Cup match},\\
2.5, & \textrm{if $m$ is a World Cup or EURO qualifier/Nations League match},\\
1, & \textrm{otherwise}.\\
\end{cases}
$$
The overall importance of a single match from the past will be assigned as
$$
w(m)= w_{\textrm{date}}(m) \cdot w_{\textrm{importance}}(m).
$$
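A small Python sketch of this overall weighting, with an illustrative match-type encoding of our own choosing, is:
\begin{verbatim}
def match_weight(days_ago, match_type, half_period_days=3 * 365):
    """Overall weight w(m) of a historical match m, as defined above."""
    importance = {
        "world_cup": 4.0,
        "continental_or_confed_cup": 3.0,
        "qualifier_or_nations_league": 2.5,
        "friendly": 1.0,
    }[match_type]
    date_weight = 0.5 ** (days_ago / half_period_days)
    return date_weight * importance

# A World Cup match played exactly one half period (three years) ago:
# date weight 0.5 times importance 4 gives an overall weight of 2.0.
print(match_weight(3 * 365, "world_cup"))
\end{verbatim}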
In the following subsection we explain the model for forecasting a single match, which in turn is used for simulating the whole tournament and determining the likelihood of success for each participant.
\subsection{Nested Zero-Inflated Generalized Poisson Regression}
We present a \textit{dependent} Zero-Inflated Generalized Poisson regression approach for estimating the probabilities of the exact result of single matches. Consider a match between two teams $A$ and $B$, whose outcome we want to estimate in terms of probabilities.
The numbers of goals $G_A$ and $G_B$ scored by teams $A$ and $B$ shall be random variables which follow a zero-inflated generalised Poisson-distribution (ZIGP). Generalised Poisson distributions generalise the Poisson distribution by adding a dispersion parameter; additionally, we add a point measure at $0$, since the event that no goal is scored by a team typically is a special event. We recall the definition that a discrete random variable $X$ follows a \textit{Zero-Inflated Generalized Poisson distribution (ZIGP)} with Poisson parameter $\mu>0$, dispersion parameter $\varphi\geq 1$ and zero-inflation $\omega\in[0,1)$:
$$
\mathbb{P}[X=k]=\begin{cases}
\omega + (1-\omega) \cdot e^{-\frac{\mu}{\varphi}}, & \textrm{if $k=0$,}\\
(1-\omega)\cdot \frac{\mu\cdot \bigl(\mu+(\varphi-1)\cdot k\bigr)^{k-1}}{k!}\varphi^{-k} e^{-\frac{1}{\varphi}\bigl(\mu+(\varphi-1)k\bigr)}, & \textrm{if $k\in\mathbb{N}$};
\end{cases}
$$
compare, e.g., with Consul \cite{Co:89} and Stekeler \cite{St:04}. If $\omega=0$ and $\varphi=1$, then we obtain just the classical Poisson distribution. The advantage of ZIGP is now that we have an additional dispersion parameter. We also note that
\begin{eqnarray*}
\mathbb{E}(X) &=& (1-\omega)\cdot \mu, \\
\mathrm{Var}(X) &=& (1-\omega)\cdot \mu \cdot (\varphi^2 + \omega \mu).
\end{eqnarray*}
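The following Python sketch implements the ZIGP probability mass function and the moment formulas above, together with a quick numerical sanity check; the parameter values are arbitrary illustrative choices.
\begin{verbatim}
import math

def zigp_pmf(k, mu, phi, omega):
    """P[X = k] for the ZIGP distribution with the parametrisation above."""
    if k == 0:
        return omega + (1 - omega) * math.exp(-mu / phi)
    gp = (mu * (mu + (phi - 1) * k) ** (k - 1) / math.factorial(k)
          * phi ** (-k) * math.exp(-(mu + (phi - 1) * k) / phi))
    return (1 - omega) * gp

def zigp_mean_var(mu, phi, omega):
    """Mean and variance formulas quoted above."""
    return (1 - omega) * mu, (1 - omega) * mu * (phi ** 2 + omega * mu)

# Sanity check with arbitrary parameter values: the pmf over 0..60 sums to
# (almost) one and the empirical mean matches the closed-form mean.
mu, phi, omega = 1.6, 1.2, 0.04
probs = [zigp_pmf(k, mu, phi, omega) for k in range(61)]
print(round(sum(probs), 6), round(sum(k * p for k, p in enumerate(probs)), 4),
      zigp_mean_var(mu, phi, omega))
\end{verbatim}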
The idea is now to model the number $G$ of scored goals of a team by a ZIGP distribution, whose parameters depend on the opponent's Elo ranking and the location of the match. Moreover, the number of goals scored by the weaker team (according to the Elo ranking) does additionally depend on the number of scored goals of the stronger team.
\par
We now explain the regression method in more detail.
In the following we will always assume that $A$ has a \textit{higher} Elo score than $B$. This assumption can be justified, since usually the better team dominates the weaker team's tactics. Moreover, the number of goals the stronger team scores has an impact on the number of goals of the weaker team. For example, if team $A$ scores $5$ goals, it is more likely that $B$ also scores $1$ or $2$ goals, because the defense of team $A$ lacks concentration due to the expected victory. If the stronger team $A$ scores only $1$ goal, it is more likely that $B$ scores no goal or just one goal, since team $A$ focusses more on the defense and tries to secure the victory.
\par
Denote by $G_A$ and $G_B$ the number of goals scored by teams $A$ and $B$. Both $G_A$ and $G_B$ shall be ZIGP-distributed: $G_A$ follows a ZIGP-distribution with parameter $\mu_{A|B}$, $\varphi_{A|B}$ and $\omega_{A|B}$, while $G_B$ follows a ZIGP-distribution with Poisson parameter $\mu_{B|A}$, $\varphi_{B|A}$ and $\omega_{B|A}$. These parameters are now determined as follows:
\begin{enumerate}
\item In the first step we model the strength of team $A$ in terms of the number of scored goals $\tilde G_{A}$ as a function of the opponent's Elo score $\elo{}=\elo{B}$ and the location of the match. The location parameter $\mathrm{loc}_{A|B}$ is defined as:
$$
\mathrm{loc}_{A|B}=\begin{cases}
1, & \textrm{if $A$ plays at home},\\
0, & \textrm{if the match takes place on neutral playground},\\
-1, & \textrm{if $B$ plays at home}.
\end{cases}
$$
The parameters of the distribution of $\tilde G_A$ are modelled as follows:
\begin{equation}\label{equ:independent-regression1}
\begin{array}{rcl}
\log \mu_A\bigl(\elo{B}\bigr)&=& \alpha_0^{(1)} + \alpha_1^{(1)} \cdot \elo{B} + \alpha_2^{(1)} \cdot \mathrm{loc}_{A|B},\\
\varphi_A &= & 1+e^{\beta^{(1)}}, \\
\omega_A &= & \frac{\gamma^{(1)}}{1+\gamma^{(1)}},
\end{array}
\end{equation}
where $\alpha_0^{(1)},\alpha_1^{(1)},\alpha_2^{(1)},\beta^{(1)},\gamma^{(1)}$ are obtained via ZIGP regression. Here, $\tilde G_A$ is a model for the goals scored by team $A$ which does \textit{not} take into account the defense skills of team $B$.
\item Teams of similar Elo scores may have different strengths in attack and defense. To take this effect into account we model the number $\check G_A$ of goals team $B$ receives against a team of higher Elo score $\elo{}=\elo{A}$ using a ZIGP distribution with mean parameter $\nu_{B}$, dispersion parameter $\psi_B$ and zero-inflation parameter $\delta_B$ as follows:
\begin{equation}\label{equ:independent-regression2}
\begin{array}{rcl}
\log \nu_B\bigl(\elo{A}\bigr) & =& \alpha_0^{(2)} + \alpha_1^{(2)} \cdot \elo{A} + \alpha_2^{(2)} \cdot \mathrm{loc}_{B|A},\\
\psi_B &= & 1+e^{\beta^{(2)}}, \\
\delta_B &= & \frac{\gamma^{(2)}}{1+\gamma^{(2)}},
\end{array}
\end{equation}
where $\alpha_0^{(2)},\alpha_1^{(2)},\alpha_2^{(2)},\beta^{(2)},\gamma^{(2)}$ are obtained via ZIGP regression. Here, we model the number of scored goals of team $A$ as the goals against $\check G_A$ of team $B$.
\item Team $A$ shall on average score $(1-\omega_A)\cdot \mu_{A}(\elo{B})$ goals against team $B$ (modelled by $\tilde G_A$), but team $B$ shall receive on average $(1-\omega_B)\cdot \nu_{B}(\elo{A})$ goals against (modelled by $\check G_A$). As these two values rarely coincide, we model the number of goals $G_A$ as a ZIGP distribution with parameters
\begin{eqnarray*}
\mu_{A|B} &:= & \frac{\mu_A\bigl(\elo{B}\bigr)+\nu_B\bigl(\elo{A}\bigr)}{2},\\
\varphi_{A|B} &:=& \frac{\varphi_A + \psi_B}{2},\\
\omega_{A|B} &:= & \frac{\omega_A + \delta_B}{2}.
\end{eqnarray*}
\item The number of goals $G_B$ scored by $B$ is assumed to depend on the Elo score $E_A=\elo{A}$, the location $\mathrm{loc}_{B|A}$ of the match and additionally on the outcome of $G_A$. Hence, we model $G_B$ via a ZIGP distribution with Poisson parameters $\mu_{B|A}$, dispersion $\varphi_{B|A}$ and zero inflation $\omega_{B|A}$ satisfying
\begin{equation}\label{equ:nested-regression1}
\begin{array}{rcl}
\log \mu_{B|A} &=& \alpha_0^{(3)} + \alpha_1^{(3)} \cdot E_A+\alpha_2^{(3)}\cdot \mathrm{loc}_{B|A} +\alpha_3^{(3)} \cdot G_A,\\
\varphi_{B|A} &:= &1+e^{\beta^{(3)}}, \\
\omega_{B|A} &:= & \frac{\gamma^{(3)}}{1+\gamma^{(3)}},
\end{array}
\end{equation}
where the parameters $\alpha_0^{(3)},\alpha_1^{(3)} ,\alpha_2^{(3)} ,\alpha_3^{(3)} ,\beta^{(3)} ,\gamma^{(3)} $ are obtained by ZIGP regression.
\item The result of the match $A$ vs. $B$ is simulated by realizing $G_A$ first and then realizing $G_B$ conditionally on the realization of $G_A$; a short illustrative sketch of this simulation scheme is given after this list.
\end{enumerate}
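The following Python sketch outlines this two-stage simulation under simplifying assumptions: the regression coefficients, dispersion and zero-inflation values are invented placeholders (not the fitted values of our model), and the averaging of the attack and defense regressions in steps 2--3 is collapsed into a single mean for brevity.
\begin{verbatim}
import math
import random

def zigp_pmf(k, mu, phi, omega):
    # Same ZIGP probability mass function as in the previous subsection.
    if k == 0:
        return omega + (1 - omega) * math.exp(-mu / phi)
    gp = (mu * (mu + (phi - 1) * k) ** (k - 1) / math.factorial(k)
          * phi ** (-k) * math.exp(-(mu + (phi - 1) * k) / phi))
    return (1 - omega) * gp

def sample_zigp(mu, phi, omega, max_goals=15):
    # Inverse-cdf sampling over 0..max_goals; the truncated tail mass is
    # negligible for realistic football scores.
    u, cdf = random.random(), 0.0
    for k in range(max_goals + 1):
        cdf += zigp_pmf(k, mu, phi, omega)
        if u <= cdf:
            return k
    return max_goals

def simulate_match(elo_a, elo_b, loc_a, coef_a, coef_b_given_a, phi, omega):
    # Stronger team A: mean depends on B's Elo and the location (steps 1-3,
    # with the averaging of the attack and defense regressions omitted here).
    mu_a = math.exp(coef_a[0] + coef_a[1] * elo_b + coef_a[2] * loc_a)
    g_a = sample_zigp(mu_a, phi, omega)
    # Weaker team B: mean depends on A's Elo, the location and the realized
    # number of goals of A (step 4).
    mu_b = math.exp(coef_b_given_a[0] + coef_b_given_a[1] * elo_a
                    + coef_b_given_a[2] * (-loc_a) + coef_b_given_a[3] * g_a)
    g_b = sample_zigp(mu_b, phi, omega)
    return g_a, g_b

# Illustrative call with made-up regression coefficients.
random.seed(3)
print(simulate_match(2087, 1936, loc_a=-1,
                     coef_a=(1.8, -0.0007, -0.2),
                     coef_b_given_a=(3.3, -0.0015, 0.2, -0.1),
                     phi=1.15, omega=0.04))
\end{verbatim}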
For a better understanding, we give an example and consider the match France vs. Germany, which takes place in Munich, Germany: France has $2087$ Elo points while Germany has $1936$ points. Against a team of Elo score $1936$, France is assumed to score, without zero-inflation, on average
$$
\mu_{\textrm{France}}(1936)=\exp\bigl(1.895766 -0.0007002232 \cdot 1936- 0.2361780 \cdot (-1)\bigr)=1.35521
$$
goals, and France's zero inflation is estimated as
$$
\omega_{\textrm{France}} = \frac{e^{-3.057658}}{1+e^{-3.057658}}=0.044888.
$$
Therefore, France is assumed to score on average
$$
(1-\omega_{\textrm{France}})\cdot \mu_{\textrm{France}}(1936) = 1.32516
$$
goals against Germany. Vice versa, Germany receives in average without zero-inflation
$$
\nu_{\textrm{Germany}}(2087)=\exp(-3.886702 +0.002203437 \cdot 2087 -0.02433679\cdot 1)=1.988806
$$
goals, and the zero-inflation of Germany's goals against is estimated as
$$
\delta_{\mathrm{Germany}} = \frac{e^{-5.519051}}{1+e^{-5.519051}}=0.003993638.
$$
Hence, on average Germany receives
$$
(1-\delta_{\mathrm{Germany}})\cdot \nu_{\textrm{Germany}}(2087) = 1.980863
$$
goals against when playing against an opponent of Elo strength $2087$.
Therefore, the number of goals that France will score against Germany is modelled as a ZIGP distributed random variable with mean
$$
\Bigl(1-\frac{\omega_{\textrm{France}}+\delta_{\mathrm{Germany}}}{2}\Bigr)\cdot \frac{\mu_{\textrm{France}}(1936)+ \nu_{\textrm{Germany}}(2087)}{2}=1.627268.
$$
The average number of goals that Germany scores against a team of Elo score $2087$, given that $G_A$ goals against have been conceded, is modelled by a ZIGP distributed random variable with parameters
$$
\mu_{\textrm{Germany}|\textrm{France}} = \exp\bigl( 3.340300 -0.0014539752\cdot 2087 -0.089635003 \cdot G_A + 0.21633103\cdot 1\bigr);
$$
e.g., if $G_A=1$ then $\mu_{\textrm{Germany}|\textrm{France}}=1.54118$.
\par
As a final remark, we note that the presented dependent approach may also be justified through the definition of conditional probabilities:
$$
\mathbb{P}[G_A=i,G_B=j] = \mathbb{P}[G_A=i]\cdot \mathbb{P}[G_B=j \mid G_A=i] \quad \forall i,j\in\mathbb{N}_0.
$$
For a comparison of this model with similar Poisson models, we refer once again to \cite{gilch:afc19} and \cite{gilch-mueller:18}. All calculations were performed with R (version 3.6.2). In particular, the presented model generalizes the models used in \cite{gilch-mueller:18} and \cite{gilch:afc19} by adding a dispersion parameter, zero-inflation and a regression approach that weights historical data according to importance and time depreciation.
\subsection{Goodness of Fit Tests}\label{subsubsection:gof}
We check the goodness of fit of the ZIGP regressions in (\ref{equ:independent-regression1}) and (\ref{equ:independent-regression2}) for all participating teams. For each team $\mathbf{T}$ we calculate the following $\chi^{2}$-statistic from the team's past matches:
$$
\chi_\mathbf{T} = \sum_{i=1}^{n_\mathbf{T}} \frac{(x_i-\hat\mu_i)^2}{\hat\mu_i},
$$
where $n_\mathbf{T}$ is the number of matches of team $\mathbf{T}$, $x_i$ is the number of goals scored by team $\mathbf{T}$ in match $i$, and $\hat\mu_i$ is the estimated ZIGP regression mean as a function of the opponent's historical Elo points.
\par
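For reference, the statistic and an approximate $p$-value can be computed as in the following illustrative Python sketch (the paper's computations were carried out in R); the degrees-of-freedom correction shown here is an assumption and should be adjusted to the convention actually used.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def gof_chi2(scored_goals, fitted_means, n_params=3):
    # scored_goals: observed goals x_i of the team in its past matches.
    # fitted_means: estimated ZIGP regression means mu_i for those matches.
    # n_params: assumed degrees-of-freedom correction (number of regression
    #           parameters); adjust to the convention actually used.
    x = np.asarray(scored_goals, dtype=float)
    mu = np.asarray(fitted_means, dtype=float)
    stat = float(np.sum((x - mu) ** 2 / mu))
    p_value = chi2.sf(stat, df=len(x) - n_params)
    return stat, p_value
\end{verbatim}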
We observe that almost all teams have a very good fit. In Table \ref{table:godness-of-fit:gc} the $p$-values for some of the top teams are given.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Team & Belgium & France & Portugal & Spain & Italy
\\
\hline
$p$-value & 0.98 &0.15 & 0.34 & 0.33 &0.93
\\
\hline
\end{tabular}
\caption{Goodness of fit test for the ZIGP regression in (\ref{equ:independent-regression1}) for the top teams. }
\label{table:godness-of-fit:gc}
\end{table}
Only Germany has a low $p$-value of $0.05$; all other teams have a $p$-value of at least $0.14$, and most have a much higher $p$-value.
\par
We also calculate a $\chi^{2}$-statistic for each team which measures the goodness of fit for the regression in (\ref{equ:independent-regression2}) which models the number of goals against. The $p$-values for the top teams are given in Table \ref{table:godness-of-fit2:gc}.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Team & Netherlands & France & Germany & Spain & England
\\
\hline
$p$-value & 0.26 &0.49 & 0.27 & 0.76 &0.29
\\
\hline
\end{tabular}
\caption{Goodness of fit test for the ZIGP regression in (\ref{equ:independent-regression2}) for some of the top teams. }
\label{table:godness-of-fit2:gc}
\end{table}
Let us remark that some countries, like Italy or Portugal, have a very poor $p$-value. However, the effect is rather limited since, by construction of our model, regression (\ref{equ:independent-regression2}) mainly plays a role for weaker teams.
\par
Finally, we test the goodness of fit for the regression in (\ref{equ:nested-regression1}) which models the number of goals against of the weaker team in dependence of the number of goals which are scored by the stronger team; see Table \ref{table:godness-of-fit3:gc}.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Team & Germany & England & Italy & Austria & Denmark
\\
\hline
$p$-value &0.06 & 0.41 & 0.91& 0.17 & 0.74
\\
\hline
\end{tabular}
\caption{Goodness of fit test for the ZIGP regression in (\ref{equ:nested-regression1}) for some of the teams. }
\label{table:godness-of-fit3:gc}
\end{table}
Only Slovakia and Sweden have poor fits according to the $p$-value while the $p$-values of all other teams
suggest reasonable fits.
\subsection{Validation of the Model}
\label{subsec:validation}
In this subsection we compare the predictions with the actual results of the UEFA EURO 2016. For this purpose, we introduce the following notation: let $\mathbf{T}$ be a UEFA EURO 2016 participant. Then define:
$$
\mathrm{result}(\mathbf{T}) = \begin{cases}
1, &\textrm{if } \mathbf{T} \textrm{ was UEFA EURO 2016 winner}, \\
2, &\textrm{if } \mathbf{T} \textrm{ went to the final but didn't win the final},\\
3, &\textrm{if } \mathbf{T} \textrm{ went to the semifinal but didn't win the semifinal},\\
4, &\textrm{if } \mathbf{T} \textrm{ went to the quarterfinal but didn't win the quarterfinal},\\
5, &\textrm{if } \mathbf{T} \textrm{ went to the round of last 16 but didn't win this round},\\
6, &\textrm{if } \mathbf{T} \textrm{ went out of the tournament after the round robin}
\end{cases}
$$
E.g., $\mathrm{result(Portugal)}=1$, $\mathrm{result(Germany)}=2$, or $\mathrm{result(Austria)}=6$. For every UEFA EURO 2016 participant $\mathbf{T}$ we set the simulation result probability as $p_i(\mathbf{T}):=\mathbb{P}[\mathrm{result}(\mathbf{T})=i]$.
In order to compare the different simulation results with the actual outcome, we use the following distance functions (a computational sketch is given after the list):
\begin{enumerate}
\item \textbf{Maximum-Likelihood-Distance:} The error of team $\mathbf{T}$ is in this case defined as
$$
\mathrm{error}(\mathbf{T}) := \bigl| \mathrm{result}(\mathbf{T}) - \mathrm{argmax}_{j=1,\dots,6}\, p_j(\mathbf{T})\bigr|.
$$
The total error score is then given by
$$
MLD = \sum_{\mathbf{T} \textrm{ UEFA EURO 2016 participant}} \mathrm{error}(\mathbf{T})
$$
\item \textbf{Brier Score:}
The error of team $\mathbf{T}$ is in this case defined as
$$
\mathrm{error}(\mathbf{T}) := \sum_{j=1}^6 \bigl( p_{j}(\mathbf{T}) -\mathds{1}_{[\mathrm{result}(\mathbf{T})=j]} \bigr)^2.
$$
The total error score is then given by
$$
BS = \sum_{\mathbf{T} \textrm{ UEFA EURO 2016 participant}} \mathrm{error}(\mathbf{T})
$$
\item \textbf{Rank-Probability-Score (RPS):}
The error of team $\mathbf{T}$ is in this case defined as
$$
\mathrm{error}(\mathbf{T}) := \frac{1}{5} \sum_{i=1}^5 \left( \sum_{j=1}^i \bigl( p_{j}(\mathbf{T}) -\mathds{1}_{[\mathrm{result}(\mathbf{T})=j]} \bigr) \right)^2.
$$
The total error score is then given by
$$
RPS = \sum_{\mathbf{T} \textrm{ UEFA EURO 2016 participant}} \mathrm{error}(\mathbf{T})
$$
\end{enumerate}
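The three error scores can be computed per team as in the following illustrative Python sketch (helper names and the toy probabilities are ours); summing the per-team errors over all participants yields the total scores MLD, BS and RPS.
\begin{verbatim}
import numpy as np

def mld_error(p, result):
    # |result - argmax_j p_j|, with ranks coded 1 (champion) to 6 (group stage).
    return abs(result - (int(np.argmax(p)) + 1))

def brier_error(p, result):
    e = np.eye(len(p))[result - 1]           # one-hot vector of the true outcome
    return float(np.sum((np.asarray(p) - e) ** 2))

def rps_error(p, result):
    e = np.eye(len(p))[result - 1]
    cum = np.cumsum(np.asarray(p) - e)[:-1]  # cumulative differences over ranks 1..5
    return float(np.mean(cum ** 2))

# Toy example (illustrative probabilities, not the simulation output):
p_team = [0.10, 0.15, 0.20, 0.25, 0.20, 0.10]
print(mld_error(p_team, 2), brier_error(p_team, 2), rps_error(p_team, 2))
\end{verbatim}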
We applied the model to the UEFA EURO 2016 tournament and compared the predictions with the basic Nested Poisson Regression model from \cite{gilch:afc19}.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|}
\hline
Error function & ZIGP & Nested Poisson Regression
\\
\hline
Maximum Likelihood Distance & 22 & 26 \\
Brier Score & 17.52441& 18.68 \\
Rank Probability Score &5.280199 & 5.36
\\
\hline
\end{tabular}
\caption{Validation of ZIGP model compared with Nested Poisson Regression measured by different error functions. }
\label{table:validation:gc}
\end{table}
Hence, the presented ZIGP regression model seems to be a suitable improvement of the Nested Poisson Regression model introduced in \cite{gilch:afc19} and \cite{gilch-mueller:18}.
\section{UEFA EURO 2020 Forecast}
Finally, we come to the simulation of the UEFA EURO 2020, which allows us to answer the questions formulated in Section \ref{subsec:goals}. We simulate each single match of the UEFA EURO 2020 according to the model presented in Section \ref{sec:model}, which in turn allows us to simulate the whole UEFA EURO 2020 tournament. After each simulated match we update the Elo ranking according to the simulation results. This honours teams that are in good shape during a tournament and may perform better than expected.
Overall, we perform $100{,}000$ simulations of the whole tournament, where we reset the Elo ranking at the beginning of each single tournament simulation.
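The following illustrative Python sketch (again, not the original R code) outlines one such tournament run: all scheduled matches are played with the ZIGP match model, and the Elo scores are updated after every simulated match and reset before the next run. The Elo update shown here uses a standard logistic expectation with a fixed $K$-factor, which is an assumption; the exact update rule of the underlying Elo rating may differ.
\begin{verbatim}
K_FACTOR = 20.0   # assumption: fixed K-factor; the actual Elo update may differ

def elo_expectation(elo_a, elo_b):
    return 1.0 / (1.0 + 10.0 ** ((elo_b - elo_a) / 400.0))

def update_elo(elo, team_a, team_b, goals_a, goals_b):
    score_a = 1.0 if goals_a > goals_b else (0.5 if goals_a == goals_b else 0.0)
    exp_a = elo_expectation(elo[team_a], elo[team_b])
    elo[team_a] += K_FACTOR * (score_a - exp_a)
    elo[team_b] += K_FACTOR * ((1.0 - score_a) - (1.0 - exp_a))

def simulate_tournament(schedule, base_elo, play_match, rng):
    # schedule:   list of (team_a, team_b) fixtures; in the full simulation the
    #             knockout pairings are derived from the simulated group results.
    # base_elo:   dict team -> Elo points before the tournament.
    # play_match: callable (elo_a, elo_b, rng) -> (goals_a, goals_b), wrapping the
    #             ZIGP regressions and the sampler sketched earlier.
    elo = dict(base_elo)                     # reset Elo at the start of every run
    results = []
    for team_a, team_b in schedule:
        g_a, g_b = play_match(elo[team_a], elo[team_b], rng)
        update_elo(elo, team_a, team_b, g_a, g_b)
        results.append((team_a, team_b, g_a, g_b))
    return results

# Repeating simulate_tournament 100,000 times and tallying how far each team
# advances yields the probabilities reported in the tables below.
\end{verbatim}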
\subsection{Single Matches}
Since the basic element of our simulation is the simulation of single matches, we illustrate how the results of single matches are quantified. Group A starts with the match between Turkey and Italy in Rome. According to our model, we obtain the probabilities presented in Figure \ref{table:TR-IT} for the result of this match: the most probable scores are a $1:0$ or $2:0$ victory for Italy or a $1:1$ draw.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=13cm]{TURITA.jpeg}
\end{center}
\caption{Probabilities for the score of the match Turkey vs. Italy (group A) in Rome.}
\label{table:TR-IT}
\end{figure}
\subsection{Group Forecast}
In the following Tables \ref{tab:EMgroupA}--\ref{tab:EMgroupF} we present the probabilities obtained from our simulation for the group stage, where we give the probabilities of winning the group, becoming runner-up, qualifying for the round of last 16 as one of the best-ranked third-placed teams (Third Q), or being eliminated in the group stage.
In Group F, the toughest group of all with world champion France, European champion Portugal and Germany, a head-to-head race between these three teams is expected for the first and second place.
\begin{table}[H]
\centering
\begin{tabular}{|rcccc|}
\hline
Team & GroupFirst & GroupSecond & Third Q & Prelim.Round \\
\hline
Italy & 39.8 \% & 28.3 \% & 15.8 \% & 16.10 \%\\
Switzerland & 24.1 \% & 26.9 \% & 19.6 \% & 29.40 \%\\
Turkey & 23.7 \% & 25.5 \% & 19.1 \% & 31.80 \%\\
Wales & 12.5 \% & 19.3 \% & 18.7 \% & 49.60 \%\\
\hline
\end{tabular}
\caption{Probabilities for Group A}
\label{tab:EMgroupA}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|rcccc|}
\hline
Team & GroupFirst & GroupSecond & Third Q & Prelim.Round \\
\hline
Belgium & 72.8 \% & 21.2 \% & 4.6 \% & 1.30 \%\\
Denmark & 19.9 \% & 42.9 \% & 17.9 \% & 19.30 \%\\
Finland & 3.8 \% & 14.4 \% & 15.4 \% & 66.40 \%\\
Russia & 3.5 \% & 21.4 \% & 18.4 \% & 56.70 \%\\
\hline
\end{tabular}
\caption{Probabilities for Group B}
\label{tab:EMgroupB}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|rcccc|}
\hline
Team & GroupFirst & GroupSecond & Third Q & Prelim.Round \\
\hline
Netherlands & 57.8 \% & 27.5 \% & 9.7 \% & 5.00 \% \\
Ukraine & 27.6 \% & 35.4 \% & 17.4 \% & 19.60 \%\\
Austria & 10.6 \% & 24.9 \% & 24.9 \% & 39.70 \%\\
North Macedonia & 4.1 \% & 12.3 \% & 14.5 \% & 69.20 \%\\
\hline
\end{tabular}
\caption{Probabilities for Group C}
\label{tab:EMgroupC}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|rcccc|}
\hline
Team & GroupFirst & GroupSecond & Third Q & Prelim.Round \\
\hline
England & 54.5 \% & 27.5 \% & 11.1 \% & 6.90 \%\\
Croatia & 26.5 \% & 32.9 \% & 17.6 \% & 22.90 \%\\
Czechia & 12.4 \% & 23.2 \% & 22.3 \% & 42.10 \%\\
Scotland & 6.6 \% & 16.4 \% & 17.7 \% & 59.20 \%\\
\hline
\end{tabular}
\caption{Probabilities for Group D}
\label{tab:EMgroupD}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|rcccc|}
\hline
Team & GroupFirst & GroupSecond & Third Q & Prelim.Round \\
\hline
Spain & 71.9 \% & 19.8 \% & 5.9 \% & 2.40 \%\\
Sweden & 12.6 \% & 34 \% & 20.6 \% & 32.80 \%\\
Poland & 12 \% & 31.6 \% & 21.6 \% & 34.90 \%\\
Slovakia & 3.6 \% & 14.6 \% & 14.1 \% & 67.60 \%\\
\hline
\end{tabular}
\caption{Probabilities for Group E}
\label{tab:EMgroupE}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|rcccc|}
\hline
Team & GroupFirst & GroupSecond & Third Q & Prelim.Round \\
\hline
France & 37.7 \% & 30.4 \% & 17.9 \% & 14.00 \%\\
Germany & 32.4 \% & 30.3 \% & 19.9 \% & 17.40 \%\\
Portugal & 26.4 \% & 29.9 \% & 23 \% & 20.60 \%\\
Hungary & 3.5 \% & 9.5 \% & 11.8 \% & 75.10 \%\\
\hline
\end{tabular}
\caption{Probabilities for Group F}
\label{tab:EMgroupF}
\end{table}
\subsection{Playoff Round Forecast}
Our simulations yield the following probabilities for each team to win the tournament or to reach certain stages of the tournament. The results are presented in Table \ref{tab:EURO2020}. The ZIGP regression model favors Belgium, followed by the reigning world champion France and by Spain. The remaining teams have significantly lower chances to win the UEFA EURO 2020.
\begin{table}[H]
\centering
\begin{tabular}{|rlllll|}
\hline
Team & Champion & Final & Semifinal & Quarterfinal & Last16 \\
\hline
Belgium & 18.4 \% & 29.1 \% & 47.7 \% & 68.7 \% & 98.5 \% \\
France & 15.4 \% & 24.9 \% & 38.8 \% & 58.4 \% & 85.9 \% \\
Spain & 13 \% & 22.5 \% & 38.9 \% & 68.1 \% & 97.7 \% \\
England & 7.8 \% & 14.8 \% & 26.8 \% & 50.5 \% & 93 \% \\
Portugal & 7.7 \% & 15.5 \% & 28.6 \% & 47.7 \% & 79.5 \% \\
Netherlands & 7.1 \% & 14.7 \% & 28.9 \% & 54.4 \% & 95 \% \\
Germany & 6.1 \% & 13.6 \% & 27.4 \% & 47.3 \% & 82.6 \% \\
Italy & 4.8 \% & 10.7 \% & 22.7 \% & 47.7 \% & 83.9 \% \\
Turkey & 3.7 \% & 8 \% & 17.1 \% & 35.2 \% & 68.2 \% \\
Denmark & 3.5 \% & 9.2 \% & 20.8 \% & 40.7 \% & 79.7 \% \\
Croatia & 3.4 \% & 8.3 \% & 17.7 \% & 38.1 \% & 76.9 \% \\
Switzerland & 3.2 \% & 7.9 \% & 17.9 \% & 37.6 \% & 70.6 \% \\
Ukraine & 1.8 \% & 5.1 \% & 13.2 \% & 34.4 \% & 80.4 \% \\
Poland & 1.2 \% & 3.9 \% & 10.9 \% & 29.6 \% & 66.6 \% \\
Sweden & 1.1 \% & 3.9 \% & 10.8 \% & 30.9 \% & 68.6 \% \\
Wales & 0.6 \% & 2.2 \% & 7.2 \% & 19.7 \% & 50.6 \% \\
Czechia & 0.4 \% & 1.7 \% & 5.9 \% & 19.6 \% & 57.9 \% \\
Russia & 0.2 \% & 1 \% & 4.1 \% & 12.9 \% & 41.5 \% \\
Finland & 0.1 \% & 0.7 \% & 2.6 \% & 9 \% & 32.3 \% \\
Austria & 0.1 \% & 0.6 \% & 3.3 \% & 15 \% & 60.3 \% \\
Slovakia & 0.1 \% & 0.7 \% & 2.7 \% & 10.1 \% & 34 \% \\
Hungary & 0.1 \% & 0.6 \% & 2.6 \% & 8.3 \% & 24.7 \% \\
Scotland & 0.1 \% & 0.5 \% & 2.3 \% & 10.6 \% & 40.7 \% \\
North Macedonia & 0 \% & 0.1 \% & 0.9 \% & 5.5 \% & 30.8 \% \\
\hline
\end{tabular}\caption{UEFA EURO 2020 simulation results for the teams' probabilities to proceed to a certain stage}
\label{tab:EURO2020}
\end{table}
\section{Final remarks}
\label{sec:discussion}
As we have shown in Subsection \ref{subsec:validation}, the proposed ZIGP model with weighted historical data seems to improve on the model which was applied in \cite{gilch:afc19} to the CAF Africa Cup of Nations 2019.
For further discussion of adaptations and alternative models, we refer once again to the discussion sections in \cite{gilch-mueller:18} and \cite{gilch:afc19}.
\bibliographystyle{apalike}
\section{Introduction}\label{sec:into}
Videos have become one of the most popular media in our everyday life, and video understanding has drawn much attention from researchers in recent years, including video tagging \cite{yue2015beyond,abu2016youtube,wang2019knowledge}, retrieval \cite{gabeur2020multi,liu2019use,dong2021dual}, action recognition \cite{karpathy2014large,feichtenhofer2019slowfast,wei2021masked} and localization \cite{nawhal2021activity,xu2021boundary,dai2017temporal}. There are many applications of video understanding, such as recommendation systems \cite{cai2022heterogeneous,yan2019multi,liu2019building}. One of the most attractive applications is understanding sports videos, which can benefit coaching \cite{wadsworth2020use,bao2021dynamic}, player training \cite{bertasius2017baller,sri2021toward} and sports broadcasting \cite{liu2017deep,wang2019football}. To better understand sports videos, localizing and recognizing the actions in untrimmed videos play crucial roles; however, these are challenging tasks with unsolved problems.
First, there are many types of sports, such as team sports like football, basketball and volleyball, and individual sports like tennis, table tennis, golf and gymnastics, and each of them has specific actions. Therefore, it is difficult to build a dataset covering all sports and their specific actions. Moreover, the durations of actions in different sports are diverse. For example, the strokes in tennis and table tennis are extremely fast, usually less than one second, while actions in soccer can last several seconds, such as long passes and corner kicks. Hence, it is difficult to model actions with such varying lengths using a single model.
Second, data annotation of sports actions is challenging. Compared with common actions in our daily life, like walking or riding bicycles, which can be easily recognized by annotators, it can be difficult to identify the actions in sports videos, for example, the Axel jump, Flip jump and Lutz jump in figure skating; hence, professional players should be involved in data annotation. In addition, players often perform deceptive actions; for example, table tennis players can perform similar-looking strokes but serve balls with different kinds of spin (top spin, back spin and side spin), which makes it difficult for annotators to distinguish one action from another.
Recently, researchers have paid much attention to sports videos to address these challenges, including building datasets \cite{shao2020finegym,jiang2020soccerdb,martin2018sport,kulkarni2021table,deliege2021soccernet} and proposing new models \cite{martin2021three,koshkina2021contrastive,thilakarathne2021pose}. In this paper, we focus on a specific sport -- table tennis -- and propose a dataset -- \textit{Ping Pang Action} ($\mathbf{P^2A}$) -- for action recognition and localization to facilitate research on fine-grained action understanding. The properties of $\mathbf{P^2A}$ are as follows:
\begin{itemize}[leftmargin=*]
\item We annotate each stroke in videos, including the category of the stroke and the indices of the starting and end frames. Plus, the stroke labels are confirmed by professional players, including Olympic table tennis players.
\item To the best of our knowledge, $\mathbf{P^2A}$ is the largest dataset for table tennis analysis, composed of 2,721 untrimmed broadcasting videos, and the total length is 272 hours. Though \cite{martin2018sport,kulkarni2021table} propose table tennis datasets, the number of videos is smaller and they use self-recorded videos where there is only one player, which is much easier than using broadcasting videos. Moreover, the datasets proposed in \cite{martin2018sport,kulkarni2021table} are only for action recognition, whereas $\mathbf{P^2A}$ can also be used for action localization.
\item The actions are fast and dense. The action length ranges from 0.3 seconds to 3 seconds, but more than 90\% of the actions are less than 1 second. Typically, there are around 15 actions in 10 seconds, which is challenging for localization.
\end{itemize}
With these properties, $\mathbf{P^2A}$ is a rich dataset for research on action recognition and localization, and a benchmark to assess models (see Section \ref{sec:dataset} for more details). We further benchmark several existing widely used action recognition and localization models using $\mathbf{P^2A}$, finding that $\mathbf{P^2A}$ is relatively challenging for both recognition and localization.
To sum up, the main contributions of this work are twofold. First, we develop a new challenging dataset -- $\mathbf{P^2A}$ -- for fast action recognition and dense action localization, which provides high-quality annotations confirmed by professional table tennis players. Compared with existing table tennis datasets, $\mathbf{P^2A}$ uses broadcasting videos instead of self-recorded ones, making it more challenging and flexible. In addition, $\mathbf{P^2A}$ is the largest dataset for table tennis analysis. Second, we benchmark a number of existing recognition and localization models using $\mathbf{P^2A}$, finding that it is difficult to localize dense actions and recognize actions with imbalanced categories, which could inspire researchers to come up with novel models for dense action
detection in the future.
\section{Related Work}
\paragraph{\textbf{Action Localization}}
The task of action localization is to find the beginning and end frames of actions in untrimmed videos. Normally, there are three steps in an action localization model: first, video feature extraction using deep neural networks; second, classification; and finally, suppression of noisy predictions. Z. Shou \textit{et al.} \cite{shou2016temporal} propose a temporal action localization model based on multi-stage CNNs, where a deep proposal network is employed to identify the candidate segments of a long untrimmed video that contain actions, then a classification network is applied to predict the action labels of the candidate segments, and finally the localization network is fine-tuned. Alternatively, J. Yuan \textit{et al.} \cite{yuan2016temporal} employ \textit{Improved Dense Trajectories} (IDT) to localize actions; however, IDT is time-consuming and memory-intensive. In contrast, S. Yeung \textit{et al.} \cite{yeung2016end} treat action localization as a decision-making process, where an agent observes video frames and decides where to look next and when to make a prediction. Similar to Faster-RCNN \cite{ren2015faster}, X. Dai \textit{et al.} \cite{dai2017temporal} propose a temporal context network (TCN) for action localization, where proposal anchors are generated and the proposals with high scores are fed into a classification network to obtain the action categories. To generate better temporal proposals, T. Lin \textit{et al.} \cite{lin2018bsn} propose a boundary sensitive network (BSN), which adopts a local-to-global fashion; however, BSN is not a unified framework for action localization and its proposal feature is too simple to capture temporal context. To address these issues, T. Lin \textit{et al.} \cite{lin2019bmn} propose a boundary matching network (BMN). Recently, M. Xu \textit{et al.} \cite{xu2021boundary} introduce the pre-training-fine-tuning paradigm into action localization, where the model is first pre-trained on a boundary-sensitive synthetic dataset and then fine-tuned on the human-annotated untrimmed videos. Instead of using CNNs to extract video features, M. Nawhal \textit{et al.} \cite{nawhal2021activity} employ a graph transformer, which significantly improves the performance of temporal action localization.
\paragraph{\textbf{Action Recognition}}
Action recognition lies at the heart of video understanding, an elementary module that draws much attention from researchers. A simple deep learning based model is proposed by A. Karpathy \textit{et al.} \cite{karpathy2014large}, where a 2D CNN is independently applied to each video frame. To capture temporal information, the fusion of frame features is used; however, this simple approach cannot sufficiently capture the motion of objects. A straightforward way to mitigate this problem is to directly introduce motion information into action recognition models. Hence, K. Simonyan \textit{et al.} \cite{simonyan2014two} propose a two-stream framework, where the spatial-stream CNN takes a single frame as input and the temporal-stream CNN takes a multi-frame optical flow as input. However, the model proposed by \cite{simonyan2014two} only employs shallow CNNs, while L. Wang \textit{et al.} \cite{wang2016temporal} investigate different architectures, and the prediction for a video is the fusion of its segments' predictions. To capture temporal information without introducing extra computational cost, J. Lin \textit{et al.} \cite{lin2019tsm} propose a temporal shift module, which can fuse the information from neighboring frames using only 2D CNNs. Another family of action recognition models is 3D based. D. Tran \textit{et al.} \cite{tran2015learning} propose a deep 3D CNN, termed C3D, which employs 3D convolution kernels. However, it is difficult to optimize a 3D CNN, since it has many more parameters than a 2D CNN, while J. Carreira \textit{et al.} \cite{carreira2017quo} employ a mature architecture design and better-initialized model weights, leading to better performance. All the above-mentioned models adopt a single frame rate, while C. Feichtenhofer \textit{et al.} \cite{feichtenhofer2019slowfast} propose the SlowFast framework, where the slow path uses a low frame rate, the fast path employs a higher temporal resolution, and the features are fused in successive layers.
Recently, researchers have paid more attention to the video transformer -- a larger model with multi-head self-attention layers. G. Bertasius \textit{et al.} \cite{bertasius2021space} propose a video transformer model, termed TimeSformer, where video frames are divided into patches and space-time attention is imposed on the patches. TimeSformer has a larger capacity than CNNs, in particular when trained on a large dataset. Similarly, A. Arnab \textit{et al.} \cite{arnab2021vivit} propose a transformer-based model, namely ViViT, which divides video frames into non-overlapping tubes instead of frame patches. Another advantage of transformer-based models is that it is easy to use self-supervised learning approaches to pre-train them. C. Wei \textit{et al.} \cite{wei2021masked} propose a pre-training method, where the transformer-based model is trained to predict the \textit{Histograms of Oriented Gradients} (HoG) \cite{dalal2005histograms} of the masked patches and is then fine-tuned on downstream tasks. Using the pre-training-fine-tuning paradigm, we can further improve the performance of action recognition.
In this paper, we adopt multiple widespread action recognition and localization models to conduct experiments on our proposed dataset -- $\mathbf{P^2A}$ -- showing that $\mathbf{P^2A}$ is relatively challenging for both action recognition and localization, since the actions are fast and the categories are imbalanced (see Sections \ref{sec:dataset} and \ref{sec:eval} for more details).
\section{\textbf{P$^2$A}{} Dataset} \label{sec:dataset}
Our goal in establishing the PingPangAction dataset is to introduce a challenging benchmark with professional and accurate annotations for the regime of intensive and dense human action understanding.
\subsection{Dataset Construction}
\textbf{Preparation.} The procedure of data collection takes the following steps. First, we collect the raw broadcasting video clips of International/Chinese top-tier championships and tournaments from the ITTF (International Table Tennis Federation) Museum \& China Table Tennis Museum. We are authorized to download video clips from 2017 to 2021 (five years), including the Tokyo 2020 Summer Olympic table tennis competition, World Cups, World Championships, etc.
To facilitate further processing (e.g., storing and annotating), we split the whole game videos into 6-minute chunks, while ensuring the records are complete, distinctive and of standard high resolution, e.g., 720p ($1280\times 720$) and 1080p ($1920\times 1080$). The short video chunks are then ready for annotation in the next step.
\textbf{Annotation.} Since the actions (i.e., the ping pong strokes) in the video chunks are extremely dense and fast-moving, it is challenging to recognize and localize all of them accurately. Thus, we cooperate with a team of table tennis professionals and referees to regroup the actions into broad types according to the similarity of their characteristics, namely the so-called 14-classes (14c) and 8-classes (8c). As the name implies, 14c categorizes all the strokes in our collected video chunks into 14 classes, which are shown in Fig.~\ref{fig:camera} (right side). Compared to 14c, 8c further refines and combines the classes into eight higher-level ones, where the mapping relationships are shown accordingly. Specifically, since there are seven kinds of service classes at the beginning of each game round (i.e., the first stroke by the serving player), we combine these service classes while keeping the non-serving classes separate. Note that all the actions/strokes mentioned here are those performed in the formal turns/rounds, either by the front-view player or by the back-view player as seen from the broadcasting cameras (the blue-shirt and red-shirt player in Fig.~\ref{fig:camera}); strokes appearing in highlights (e.g., possibly in a zoom-in scale with slow motion), Hawk-Eye replays (e.g., with the focus on ball tracking), and game summaries (e.g., possibly in a zoom-out scale with a scoreboard) are ignored as redundant actions/strokes. For each of the action types in 14c/8c, we define a relatively clear indicator for the start frame and the end frame to localize the specific action/stroke. For example, we label a non-serving stroke in a +/- 0.5 second window around the time instant when the racket touches the ball; since a serving action/stroke may take a long time before the touching moment, we label it in a +0.5/-1.5 second window as an adjustment. The window size serves as a reference for the annotator and can be varied according to the actual duration of the specific action. We further show the detailed distribution of the action/stroke classes in Section 3.2.
Once we have established the well-structured classes of actions, the annotators label all the collected broadcasting video clips. Unfortunately, the first edition of the annotated dataset achieves only around $85\%$ labeling precision under a sampling inspection by experts (i.e., international table tennis referees). This is understandable: even for trained annotators, labeling the actions/strokes of table tennis players is challenging due to 1) the fast-moving and dense actions; 2) the deceptive actions; and 3) the fully or partially occluded actions. To improve the quality of the labeled dataset, we work with a crew of table tennis professionals to further correct the wrong or missing samples of the first edition. The revised edition achieves about $95\%$ labeling precision on average, which is confirmed by the invited international table tennis referee~\cite{wufei}.
Note that, beyond labeling the segment of each action/stroke, we also record some additional information along with it (after finishing each action), which includes the placement of the ball on the table, whether the point was won or lost, the player who performed the action (the name together with the front-view or back-view status in that stroke), whether it is forehand or backhand, and the serving condition (serve or not). The full list of labeling items is provided in the open-sourced dataset repository on Github\footnote{The link will be released later.}.
\textbf{Calibration.} The last annotation step is to clean the dataset for further research purposes. Since the 6-minute video dataset contains some chunks without valid game rounds/turns (e.g., the opening of the game, intermissions and breaks, and the award ceremony), we filter out those unrelated chunks and focus only on action-related videos. Then, to accomplish the main goals of this research, which are the tasks of \emph{recognition} and \emph{localization}, we further screen the video chunks and retain the qualified ones meeting the following criteria:
\begin{itemize}[leftmargin=*]
\item \textbf{\emph{Camera Angle}} -- The most common camera angle for broadcasting is the top view, where the cameras are mounted at a relatively high position above the court and look down from behind the two players. As shown in Fig.~\ref{fig:camera}, we only select the videos recorded in the standard top view and remove those with other views (e.g., side view, ``bird's eye'' view, etc.) to keep the whole dataset consistent. Note that broadcasting recordings with non-standard angles rarely (less than $5\%$) appear in our target pool of competition videos, where only a few WTT (World Table Tennis Championships) broadcasting videos experimentally use other types of camera angles.
\item \textbf{\emph{Game Type}} -- Considering the possible mutual interference in doubles or mixed-doubles games, we only include singles games in \textbf{P$^2$A}{}. In this case, at most two players appear simultaneously in each frame, where usually one player is located near the top of the frame and the other is at the bottom.
\item \textbf{\emph{Audio Track}} -- Although we do not leverage the audio information for the localization and recognition tasks, the soundtracks are kept along with the videos and can be explored or incorporated in future research. We also remove videos with broken or missing soundtracks.
\item \textbf{\emph{Video Resolution}} -- We select the broadcasting videos with a resolution equal to or higher than 720P ($1280\times 720$) in MP4 format and drop those with low frame quality.
\item \textbf{\emph{Action Integrity}} -- Since we trim the original broadcasting videos into 6-minute chunks, some actions/strokes may be cut off by the splitting. In this case, we still label those broken actions (about $0.2\%$), and it is optional whether to incorporate the broken actions in a specific task.
\end{itemize}
In the next subsection, we introduce the basic statistics of the calibrated \textbf{P$^2$A}{} dataset and provide some guidance on pre-processing that might be helpful for specific tasks.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.47\textwidth]{camera_class.pdf}
\end{center}
\vspace{-3mm}
\caption{The camera angles \& the classes of stroke/action in \textbf{P$^2$A}{} dataset.}
\label{fig:camera}
\end{figure}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.4]{action_example.pdf}
\vspace{-3mm}
\caption{An example of ``Side Spin Serve'' action in the consecutive frames from \textbf{P$^2$A}{} dataset.}
\label{fig:action_example}
\end{figure*}
\renewcommand{\thempfootnote}{\fnsymbol{mpfootnote}}
\begin{table*}[h]
\begin{minipage}{\textwidth}
\caption[Comparison of sports video datasets]{Comparison of existing sports video datasets for action detection and related purposes.}
\label{tab:comparison}
\vspace{-3mm}
\begin{center}
\scalebox{1}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Dataset & Duration & Samples & Segments & Classes & Tasks\footnote[2]{We denote the task of recognition, localization, and segmentation as ``Rec'', ``Loc'', and ``Seg''.} & Density\footnote[3]{We calculate the Density here as the value of Segments/Samples.} & Action Length & Source & Year \\ [0.5ex]
\hline\hline
Olympic~\cite{niebles2010modeling} & 20 h & 800 & 800 & 16 & Rec + Loc & 1.00 & 5s $\sim$ 20s & YouTube & 2010\\
\hline
UCF-Sports~\cite{soomro2014action} & 15 h & 150 & 2,100 & 10 & Rec + Loc & 14.00 & 2s $\sim$ 12s & Broadcast TV & 2014\\
\hline
ActivityNet~\cite{caba2015activitynet}\footnote[1]{We also include some popular video-based action detection datasets that are not limited to sports-related actions but contain a considerable proportion of them as a sub-genre.} & 648 h & 14,950 & 23,064 & 200 & Rec + Loc & 1.54 & 10s $\sim$ 100s & YouTube & 2015\\
\hline
TenniSet~\cite{faulkner2017tenniset} & 9 h & 380 & 3,568 & 10 & Rec + Loc & 9.40 & 1.08s $\sim$ 2.84s & YouTube & 2017 \\
\hline
OpenTTGames~\cite{voeikov2020ttnet} & 5 h & 12 & 4,271 & 3 & Rec + Seg & 355.92 & 0.12s $\sim$ 0.13s & Self Recorded & 2020 \\
\hline
FineGym~\cite{shao2020finegym} & 708 h & 303 & 4,885 & 530 & Rec + Loc & 16.12 & 8s $\sim$ 55s & Broadcast TV & 2020 \\
\hline \hline
\textbf{\textbf{P$^2$A}{}} & 272 h & 2,721 & 139,075 & 14 & Rec + Loc & 52.70 & 0.32s $\sim$ 3s & Broadcast TV & 2022 \\
\hline
\end{tabular}}
\end{center}
\end{minipage}
\end{table*}
\subsection{Dataset Analysis}
In this section, we conduct a comprehensive analysis of the statistics of \textbf{P$^2$A}{}, which is the foundation of the follow-up research and also serves as the motivation for establishing \textbf{P$^2$A}{} in the video analysis domain. The analysis consists of four parts: (1) general statistics, (2) category distribution, (3) densities, and (4) a summary. In the rest of this subsection, we go through each part in detail.
\noindent\textbf{General Statistics.}
The \textbf{P$^2$A}{} dataset consists of 2,721 annotated 6-minute-long video clips, containing 139,075 labeled action segments and lasting 272 hours in total. They are extracted from over 200 table tennis competitions, involving almost all the top-tier ones during 2017-2021. These events include the World Table Tennis Championships, the Table Tennis World Cup, the Olympics, the ITTF World Tour, the Asian Table Tennis Championships, the National Games of China, as well as the China Table Tennis Super League. Since we intend to establish an action-focused table tennis dataset, the quality and standard of the actions are expected to be relatively high, and the official HD broadcasting videos selected from the above-mentioned top-tier competitions lay a good foundation as the source of the target actions.
For the general statistics of actions, Table~\ref{tab:comparison} presents the basic statistics of \textbf{P$^2$A}{} compared with popular sports-related datasets. The action length in \textbf{P$^2$A}{} varies from 0.32 to 3 seconds, which is significantly shorter than in most of the other datasets except OpenTTGames and TenniSet. Since OpenTTGames targets only in-game events (i.e., ball bounces, net hits, or empty event targets) without any stroke annotations, its action length is even shorter and its Density considerably higher. Compared to these two table tennis related datasets, our \textbf{P$^2$A}{} dataset has a much longer Duration (over 30 times, with more samples and segments), which matters for accurately reflecting the correlation and distribution of actions/strokes across different classes. Furthermore, compared to the sources of OpenTTGames and TenniSet, \textbf{P$^2$A}{} leverages broadcast TV games with proper calibration by an international table tennis referee, which are more official and professional. Compared to commonly used datasets such as ActivityNet and FineGym, \textbf{P$^2$A}{} focuses on actions/strokes with dense and fast-moving characteristics. Specifically, the action length of \textbf{P$^2$A}{} is around 30 times shorter and its density is 5 to 50 times higher than these two datasets, which means \textbf{P$^2$A}{} introduces a series of actions/strokes at a totally different granularity. We later show that state-of-the-art models confront a huge challenge when applied to \textbf{P$^2$A}{}. Although the number of classes of \textbf{P$^2$A}{} is relatively small due to the nature of table tennis itself, the action recognition and localization tasks are far from easy on \textbf{P$^2$A}{} for most of the mainstream baselines. In the next subsection, we show the distribution of categories in \textbf{P$^2$A}{}, which is one of the vital factors causing the aforementioned difficulties.
\begin{figure*}[hpt!]
\centering
\vspace{-5mm}
\includegraphics[scale=0.20]{cls_count_dur.pdf}
\vspace{-3mm}
\caption{Sorted distribution of the number of actions (with duration) from each class (Ver. 14c) in the \textbf{P$^2$A}{} dataset.}
\label{fig:num_cls}
\end{figure*}
\begin{figure*}[h]
\vspace{-5mm}
\captionsetup[subfloat]{captionskip=2pt}
\centering
\subfloat[Serving Actions (7 classes)]{\includegraphics[width=0.25\textwidth]{cls_count_dur_serve.pdf}}
\subfloat[Non-Serving Actions (7 classes) ]{\includegraphics[width=0.25\textwidth]{cls_count_dur_non_serve.pdf}}
\subfloat[Combined Actions (8 classes)]{\includegraphics[width=0.25\textwidth]{cls_count_dur_8.pdf}}
\vspace{-3mm}
\caption{Sorted distribution of the number of actions from the serving, non-serving, combined classes (Ver. 8c) in the \textbf{P$^2$A}{} dataset.}
\label{fig:two_version}
\end{figure*}
\noindent\textbf{Category Distribution.}
The right side of Figure~\ref{fig:camera} shows the targeted categories of actions/strokes in \textbf{P$^2$A}{}. The \textbf{P$^2$A}{} dataset has two main branches of strokes -- \textbf{Serve} and \textbf{Non-serve} -- each of which contains 7 types of sub-strokes. Since strokes may be performed by different players within the same video frames, we annotate strokes from both the front-view and the back-view players without time overlaps. For example, Figure~\ref{fig:action_example} shows consecutive annotated frames of a front-view player producing a ``Side Spin Serve'' at the start of a turn. Notice that this player stands at the far end of the game table and turns from the side view to the front view, then keeps the front view for the rest of the turn. A similar annotation procedure is applied to record the strokes of the player at the near end of the game table.
As aforementioned, the \textbf{P$^2$A}{} dataset comes in two versions, 14c (14 classes) and 8c (8 classes). The 14c version flattens all the predefined classes in Figure~\ref{fig:camera} and treats them equally in the subsequent tasks. We measure the number of strokes for each of the classes in Figure~\ref{fig:num_cls}. Overall, the drive category contains more than half of the samples (we sort the categories in descending order), which indicates a category imbalance in \textbf{P$^2$A}{}. We also observe an interesting point: the non-serve categories generally last a shorter duration than the serve categories (as shown by the blue bars). This is because of the long preparation of the serve strokes and could be an important feature to distinguish the serve from the non-serve strokes. Another fact is that the non-serve categories dominate the whole dataset, with seven non-serve categories among the left-most eight categories (in descending order). This imbalance leads to the creation of the second version -- 8c -- which combines all the serve categories into one unified ``Serve'' category.
Figure~\ref{fig:two_version} measures the number of actions in the serve, non-serve, and combined 8c categories separately. As shown, the sub-categories are unbalanced, where ``Side Spin'' and ``Drive'' dominate the serving and non-serving actions in (a)\&(b), respectively. In the combined 8c dataset, ``Drive'' unsurprisingly takes a large proportion among all eight categories due to its frequent appearance in modern table tennis competitions. Thus, it is necessary to adopt additional sampling techniques to mitigate the side effects, and the implementation details are introduced in Section 4.
\begin{figure}[h]
\vspace{-5mm}
\captionsetup[subfloat]{captionskip=2pt}
\centering
\subfloat[All Classes]{\includegraphics[width=0.25\textwidth]{dist_all.pdf}}
\subfloat[Serve vs Non-serve Classes]{\includegraphics[width=0.228\textwidth]{dist_sns.pdf}}
\caption{Duration distribution of actions/strokes in \textbf{P$^2$A}{} dataset.}
\vspace{-3mm}
\label{fig:dist}
\end{figure}
\noindent\textbf{Densities.}
As one of the unique characteristics compared to other datasets, the high density of actions/strokes plays a vital role in \textbf{P$^2$A}{}. We analyze the density of actions/strokes from two aspects, the first being the duration distribution of each action/stroke category. Figure~\ref{fig:dist}(a) shows the duration distribution of all classes of strokes in \textbf{P$^2$A}{}. We can observe that most of the action types have relatively stable distributions (with short tails) except the ``Control'' and the ``Neutral'' types. ``Neutral'' is one of the serving strokes, representing irregular/unknown actions in the serving process, and its duration ranges widely from 0.4 to 3 seconds in Figure~\ref{fig:dist}(a). Similarly, the ``Control'' category stands for the non-serving strokes which cannot be assigned to any other class and has a long tail of at most 2 seconds. On the whole, the average duration of strokes stays around 0.5 seconds, which demonstrates that the actions in \textbf{P$^2$A}{} are fast-moving compared to the other datasets in Table~\ref{tab:comparison}. We also compare the duration distribution between the serve and non-serve classes in Figure~\ref{fig:dist}(b), where the stacked plot of both classes shows that the serve class has a slightly longer tail than the non-serve class.
As the second aspect, we measure the action frequency in each turn of the game and the appearance density over time. One turn in a game starts with a serving action by one player and ends with a point for either of the players. Thus, the action frequency reflects the nature of the table tennis competition (e.g., the strategy and the pace of the current game), namely how many strokes are delivered in a single turn. As shown in Figure~\ref{fig:dense}(a), most turns contain around 3 $\sim$ 5 actions/strokes, and the distribution is long-tailed with a maximum of 28 actions/strokes per turn. This is reasonable, as the pace of the game today is faster than ever and a point can be decided within three strokes per player (i.e., within 6 strokes summed over both players). To further differentiate \textbf{P$^2$A}{} from other sports datasets, we additionally measure the appearance density within 10 seconds, which is usually the time boundary between long-duration and short-duration sports actions. Figure~\ref{fig:dense}(b) depicts the histogram of the action counts in 10-second windows of consecutive video frames. The results follow a normal-like distribution with an average count of 15, which means actions appear densely and frequently within short time spans throughout \textbf{P$^2$A}{}. The density reaches about 1.5 actions per second on average, which demonstrates the dense-action characteristic of the \textbf{P$^2$A}{} dataset.
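As a reference for how the appearance density is measured, the following illustrative Python sketch (helper names are ours) counts the annotated actions whose start time falls into consecutive 10-second windows of a 6-minute chunk; aggregating these counts over all chunks gives a histogram of the kind shown in Figure~\ref{fig:dense}(b).
\begin{verbatim}
import numpy as np

def appearance_density(action_starts, window=10.0, chunk_len=360.0):
    # Count annotated actions whose start time (in seconds) falls into each
    # consecutive 10-second window of a 6-minute chunk.
    bins = np.arange(0.0, chunk_len + window, window)
    counts, _ = np.histogram(np.asarray(action_starts, dtype=float), bins=bins)
    return counts
\end{verbatim}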
\begin{figure}[h]
\vspace{-5mm}
\captionsetup[subfloat]{captionskip=2pt}
\centering
\subfloat[The Action Frequency]{\includegraphics[width=0.17\textwidth]{density_turn.pdf}}
\subfloat[The Appearance Density ]{\includegraphics[width=0.17\textwidth]{density_appearance.pdf}}
\vspace{-3mm}
\caption{Density of actions/strokes in each turn of game.}
\label{fig:dense}
\end{figure}
\noindent\textbf{Summary.}
In this section, we comprehensively report the statistics of \textbf{P$^2$A}{}. Specifically, we leverage rich measurements to obtain a thorough view of the volume of the dataset, the category distribution, the duration of samples, and the densities. In conclusion, the \textbf{P$^2$A}{} dataset is unique compared to the other mainstream sports datasets in the following respects,
\begin{itemize}[leftmargin=*]
\item Compared to the table tennis datasets~\cite{voeikov2020ttnet, faulkner2017tenniset}, \textbf{P$^2$A}{} has a much larger sample size (10 times $\sim$ 200 times) and a more fine-grained set of categories.
\item Compared to the large datasets in other sports domains~\cite{niebles2010modeling,soomro2014action,caba2015activitynet,shao2020finegym}, \textbf{P$^2$A}{} focuses on dense (5 times $\sim$ 50 times denser) and fast-moving (action lengths around 0.1 times as long) actions.
\item Compared to most of the aforementioned sports datasets, \textbf{P$^2$A}{} includes two players' actions back-to-back (the actions appear in turns at a fast pace), which makes the analysis more complicated due to mutual interference.
\end{itemize}
In the next section, we investigate the performance of the state-of-the-art models and solutions for the predefined action recognition and action localization tasks on our \textbf{P$^2$A}{} dataset. Then, we observe the challenges those algorithms confront and demonstrate the research potential of the \textbf{P$^2$A}{} dataset.
\section{Benchmark Evaluation}\label{sec:eval}
In this section, we present the evaluation of various methods on our \textbf{P$^2$A}{} dataset and show the difficulties that arise when applying the original designs of those methods. We then provide several suggestions to mitigate these issues and tips to improve the current models.
\subsection{Baselines}
We adopt two groups of baseline algorithms to separately tackle the action recognition and action localization tasks. For \textbf{Action Recognition}, we include the following trending algorithms,
\begin{itemize}[leftmargin=*]
\item \textbf{Temporal Segment Network (TSN)}~\cite{wang2016temporal} is a classic 2D-CNN-based solution in the field of video action recognition. It mainly addresses the recognition of long-range actions in videos and replaces dense sampling with sparse sampling of video frames, which not only captures the global information of the video but also removes redundancy and reduces the amount of computation.
\item \textbf{Temporal Shift Module (TSM)}~\cite{lin2019tsm} is a popular video action recognition model with a shift operation, which can achieve the performance of a 3D CNN while maintaining the complexity of a 2D CNN. Shifting features along the channel dimension greatly improves the exploitation of temporal information without adding any parameters or computational cost. TSM is accurate and efficient: it once ranked first on the Something-Something~\cite{goyal2017something} leaderboard. Here, we adopt the industrial-level variant from the deep learning platform PaddlePaddle~\cite{ma2019paddlepaddle}. We improve the original design with additional knowledge distillation, where the model is pretrained on the Kinetics400~\cite{carreira2017quo} dataset.
\item \textbf{SlowFast}~\cite{feichtenhofer2019slowfast} involves a Slow pathway operating at a low frame rate to capture spatial semantics, and a Fast pathway operating at a high frame rate to capture motion in the temporal domain. SlowFast is a powerful action recognition model, which ranks second on the AVA v2.1 dataset~\cite{gu2018ava}.
\item \textbf{Video-Swin-Transformer (Vid)}~\cite{liu_2021} is a video classification model based on the Swin Transformer~\cite{liu2021swin}. It utilizes the Swin Transformer's multi-scale modeling and efficient local attention module. Vid shows competitive performance in video action recognition on various datasets, including the Something-Something and Kinetics-series datasets.
\end{itemize}
For \textbf{Action Localization}, we investigate the following classical or trending algorithms,
\begin{itemize}[leftmargin=*]
\item \textbf{Boundary Sensitive Network (BSN)}~\cite{lin2018bsn} is an effective proposal generation method, which adopts a ``local to global'' fashion. BSN has achieved high recall and temporal precision on several challenging datasets, such as ActivityNet-1.3~\cite{caba2015activitynet} and THUMOS'14~\cite{wang2014action}.
\item \textbf{BSN++}~\cite{su2020bsn++} is a framework that exploits a complementary boundary regressor and relation modeling for temporal proposal generation. It ranked first on the CVPR19 ActivityNet challenge leaderboard for the temporal action localization task.
\item \textbf{Boundary-Matching Network (BMN)}~\cite{lin2019bmn} introduces the Boundary-Matching (BM) mechanism to evaluate confidence scores of densely distributed proposals, which leads to generating proposals with precise temporal boundaries as well as reliable confidence scores simultaneously. Combining with existing feature extractors (e.g., TSM), BMN can achieve state-of-the-art temporal action detection performance.
\item \textbf{Self-Supervised Learning for Semi-Supervised Temporal Action Proposal (SSTAP)}~\cite{wang2021self} is one of the self-supervised methods to improve semi-supervised action proposal generation. SSTAP leverages two crucial branches, i.e., the temporal-aware semi-supervised branch and relation-aware self-supervised branch to further refine the proposal generating model.
\item \textbf{Temporal Context Aggregation Network (TCANet)}~\cite{qing2021temporal} is the championship model of the CVPR 2020 HACS challenge leaderboard, which generates high-quality action proposals through ``local and global'' temporal context aggregation as well as complementary and progressive boundary refinement. Since it is a newly released model designed for several specific datasets (the original design shows unstable performance on \textbf{P$^2$A}{}), we slightly modify the architecture to incorporate a BMN block ahead of TCANet. The revised model is denoted as TCANet+ in our experiments.
\end{itemize}
\subsection{Experimental Setup}
\noindent\textbf{Datasets.} As aforementioned, we establish two versions -- 14-classes (14c) and 8-classes (8c) -- in \textbf{P$^2$A}{}. The difference is that all serving strokes/actions are combined into a single ``Serve'' category in the 8c version. We evaluate the baseline action recognition algorithms on both versions and compare the performances side by side. Since the performances of the current action localization algorithms on 14c are far from satisfactory, we only report the results on 8c for reference.
\noindent\textbf{Metrics.} We use the Top-1 and Top-5 accuracy as the metrics for evaluating the performance of baselines in the action recognition task. For the action localization task, we adopt the area under the Average Recall vs. Average Number of Proposals per Video (AR-AN) curve as the evaluation metric. A proposal is a true positive if its temporal intersection over union (tIoU) with a ground-truth segment is greater than or equal to a given threshold (e.g., tIoU $\geq$ 0.5). AR is defined as the mean of all recall values using tIoU thresholds between 0.5 and 0.9 (inclusive) with a step size of 0.05. AN is defined as the total number of proposals divided by the number of videos in the testing subset. We consider 100 bins for AN, centered at values between 1 and 100 (inclusive) with a step size of 1, when computing the values on the AR-AN curve. A small sketch of the tIoU and AR computation is given below.
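The core of this computation is sketched below in illustrative Python (helper names are ours; the binning of AN over 1--100 proposals per video is omitted for brevity).
\begin{verbatim}
import numpy as np

def tiou(prop, gt):
    # Temporal IoU between a proposal and a ground-truth segment, each (start, end).
    inter = max(0.0, min(prop[1], gt[1]) - max(prop[0], gt[0]))
    union = max(prop[1], gt[1]) - min(prop[0], gt[0])
    return inter / union if union > 0 else 0.0

def average_recall(proposals, ground_truths, thresholds=np.linspace(0.5, 0.9, 9)):
    # Recall of ground-truth segments, averaged over tIoU thresholds 0.5,...,0.9.
    recalls = []
    for t in thresholds:
        hits = sum(any(tiou(p, g) >= t for p in proposals) for g in ground_truths)
        recalls.append(hits / max(len(ground_truths), 1))
    return float(np.mean(recalls))
\end{verbatim}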
\noindent\textbf{Data Augmentation.}
The category distribution analysis in Section 3.2 reveals that \textbf{P$^2$A}{} is an imbalanced dataset, which might hamper the recognition task. To fully utilize the proposed \textbf{P$^2$A}{} dataset, we design a simple yet effective data augmentation method to train the baseline models. Specifically, we introduce an up/down-sampling procedure to balance the sample size of each category. First, we divide the total number of samples $N$ by the defined number of categories (8 or 14) to obtain the mean size $M$, i.e., $M = N/8$ or $N/14$. Then, we sample the actual number of action segments $S_A$ for each category within the range $M \leq S_A \leq 2M$. For those categories with fewer segments than $M$, we up-sample by random duplication. For those with far more samples than $2M$, we randomly down-sample from the original sample set and try to make the sampled actions cover all the video clips (i.e., uniform sampling from each 6-minute chunk). We apply this data augmentation strategy to all the recognition tasks and observe significant performance gains; a sketch of the procedure is given below. As shown in Figure~\ref{fig:heat}, we report the ablation results of the action recognition baseline TSM on \textbf{P$^2$A}{} with/without the data augmentation. In Figure~\ref{fig:heat}(a), the confusion matrix illustrates that TSM falsely classifies most of the strokes/actions into the ``Drive'' category without class balancing, while this kind of misclassification is significantly alleviated after we apply the data augmentation. Note that, since the localization models typically require feature extractor networks as backbones (e.g., TSM is one of the backbones), localization performance can also benefit from the designed data augmentation. The baseline algorithms in the following results section also involve this data augmentation.
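The following illustrative Python sketch (helper names are ours, and it is a simplification of the described strategy rather than the exact training code) shows the up/down-sampling procedure used to balance the classes.
\begin{verbatim}
import random
from collections import defaultdict

def balance_segments(segments, num_classes, seed=0):
    # segments: list of (segment_id, class_label) pairs; num_classes: 8 or 14.
    rng = random.Random(seed)
    by_cls = defaultdict(list)
    for seg in segments:
        by_cls[seg[1]].append(seg)

    m = len(segments) // num_classes         # target mean size M = N / #classes
    balanced = []
    for cls, items in by_cls.items():
        if len(items) < m:                   # up-sample by random duplication
            items = items + [rng.choice(items) for _ in range(m - len(items))]
        elif len(items) > 2 * m:             # down-sample uniformly at random
            items = rng.sample(items, 2 * m)
        balanced.extend(items)
    rng.shuffle(balanced)
    return balanced
\end{verbatim}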
\begin{figure}[h]
\vspace{-5mm}
\captionsetup[subfloat]{captionskip=2pt}
\centering
\subfloat[\emph{\underline{without}} class balancing]{\includegraphics[width=0.238\textwidth]{heat_1.pdf}}
\subfloat[\emph{\underline{with}} class balancing]{\includegraphics[width=0.238\textwidth]{heat_2.pdf}}
\vspace{-3mm}
\caption{Confusion matrix of action recognition results}
\label{fig:heat}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.36]{predict.pdf}
\vspace{-3mm}
\caption{Example of localisation results on \textbf{P$^2$A}{} dataset.}
\label{fig:predict}
\end{figure*}
\begin{figure*}[h]
\vspace{-5mm}
\captionsetup[subfloat]{captionskip=2pt}
\centering
\subfloat[BSN]{\includegraphics[width=0.164\textwidth]{BSN.png}}
\subfloat[BSN++]{\includegraphics[width=0.164\textwidth]{BSNPP.png}}
\subfloat[SSTAP]{\includegraphics[width=0.164\textwidth]{SSTAP.png}}
\subfloat[TCANet+]{\includegraphics[width=0.179\textwidth]{TCANetp.png}}
\subfloat[BMN]{\includegraphics[width=0.164\textwidth]{BMN.png}}
\subfloat[ALL]{\includegraphics[width=0.164\textwidth]{compare_all_auc.png}}
\vspace{-3mm}
\caption{Performance of different action localization methods on \textbf{P$^2$A}{}.}
\label{fig:fig1}
\end{figure*}
\subsection{Results}
\begin{table}[h]
\caption{Performance of different action recognition methods on \textbf{P$^2$A}{} in terms of validation accuracy.}
\vspace{-3mm}
\centering
\scalebox{0.83}{
\begin{tabular}{c|c|cc|cc}
\toprule
Actions & Method & Acc1@14c & Acc5@14c & Acc1@8c & Acc5@8c \\
\hline
\multirow{4}*{Face} &
TSN & 59.77 & 90.13 & 73.84 & 94.85\\
& TSM & \textbf{69.97} & 98.08 & \textbf{82.35} & \textbf{99.06}\\
& SlowFast & 63.19 & 92.79 & 77.41 & 96.07\\
& Vid & 68.85 & \textbf{98.16} & 81.03 & 98.96\\
\midrule
Face\&Back & TSM (Best) & 59.46 & 87.11 & 68.39 & 91.73 \\
\bottomrule
\end{tabular}}
\label{tab:recognition}
\end{table}
This section first presents the results of the action recognition tasks. Table~\ref{tab:recognition} summarizes these results in terms of top-1/top-5 accuracy on both the 8c and 14c datasets. As expected, the top-5 accuracy is higher than the top-1 accuracy for all the methods. Comparing the 8c and 14c datasets in the same setting, the baselines perform relatively better on 8c than on 14c, which confirms that the combination of serving classes is reasonable and actually eases the recognition task. In general, our refined TSM (PaddlePaddle version) achieves the highest accuracy in all four settings except the top-5 accuracy on the 14c dataset. The popular transformer-based method Vid also shows competitive performance, ranking second overall. Note that, since the same strokes/actions performed by the player facing the camera and by the player with their back towards the camera differ drastically (e.g., blocked views and mirrored motions), we mainly focus on the facing player's actions/strokes as the recognition targets. For the back-view player's actions/strokes, the performance is poor for all methods. Alternatively, we report the performance of the mixed (i.e., face and back) setting; the results are far below expectation, where even the best performer, TSM, only achieves 59.46 top-1 accuracy on the 14c dataset and 68.39 top-1 accuracy on the 8c dataset. Although the top-5 accuracy of all the methods appears promising, considering the total number of categories in \textbf{P$^2$A}{} (14 or 8 accordingly), the results are trivial to some degree. In conclusion, for the action recognition task with the mainstream baseline algorithms, \textbf{P$^2$A}{} is a considerably challenging dataset, and there is vast room for improvement and further research.
For the action localization task, we summarize the performance in Figure~\ref{fig:fig1}. We selected several representative state-of-the-art action localization methods that are publicly available and retrained them on our \textbf{P$^2$A}{} dataset. We report the area under the AR-AN curve for the baselines as the measurement of localization performance. For example, in Figure~\ref{fig:fig1}(a), we draw the AR-AN curves with tIoU values varying from 0.5 to 0.95 separately and one mean curve (solid blue curve) to represent the overall performance of BSN on \textbf{P$^2$A}{}. Among all the methods, TCANet+ shows the highest overall AUC score of 47.79 in Figure~\ref{fig:fig1}(f) and also outperforms the other baselines at each tIoU level. To compare the localization results more directly, we visualize the located action segments from each method in Figure~\ref{fig:predict}. The top figure shows, on a large scale, the predicted segments compared with the ground-truth segments in a complete 6-minute video chunk, while the bottom one zooms in on a single turn containing a series of consecutive actions/strokes by the player facing the camera. From these visual observations, we find that TCANet+ can almost reproduce the short action segments of the ground-truth labeling. However, the first serving action is barely located, and TCANet+ mis-predicts it with a much shorter segment instead. Compared to the action recognition task, action localization appears more difficult on \textbf{P$^2$A}{}, since the best AUC score falls far short of the average achievement of these baselines on ActivityNet~\cite{caba2015activitynet} (47.79 $\ll$ 68.25 on average over the top-35 solutions on the 2018 leaderboard).
\noindent\textbf{Summary.} From the above experimental results, we conclude that \textbf{P$^2$A}{} is still a very challenging dataset for both the action recognition and action localization tasks. Compared to their results on currently released sports datasets, the representative baselines hardly adapt to the proposed \textbf{P$^2$A}{} dataset, and their performance is far lower than our expectation. Thus, the \textbf{P$^2$A}{} dataset provides a good opportunity for researchers to further explore solutions in the domain of fast-moving and dense action detection. Note that the \textbf{P$^2$A}{} dataset is not limited to the action recognition and localization tasks; it also has the potential to be used for video segmentation and summarization. We are currently exploring more possibilities for \textbf{P$^2$A}{} and will update the released version to include more use cases. Due to limited space, we will release the implementation details for all the baselines in our GitHub repository together with the dataset link once the paper is published.
\section{Conclusion}
In this paper, we present a new large-scale dataset, \textbf{P$^2$A}{}, which is currently the largest publicly available table tennis action detection dataset to the best of our knowledge. We have evaluated several state-of-the-art algorithms on the introduced dataset. Experimental results show that existing approaches are heavily challenged by fast-moving and dense actions and by the uneven action distribution in video chunks. In the future, we hope that \textbf{P$^2$A}{} will become a new benchmark dataset for fast-moving and dense action detection, especially in the sports area. Through a comprehensive introduction and analysis, we intend to help follow-up users and researchers better understand the characteristics and statistics of \textbf{P$^2$A}{}, so as to fully utilize the dataset in action recognition and action localization tasks. We believe the \textbf{P$^2$A}{} dataset makes an early attempt at driving the sports video analysis industry to become more intelligent and effective.
\section{Introduction}
\subsection{Background}
As COVID-19 transmissions spread worldwide, governments announced and enforced both domestic and international travel restrictions. These restrictions affected flight networks around the world, including major airlines \cite{whitehouse-airlines} and tourism-dependent cities \cite{tourism-gdp}. All or a portion of the flights connecting the restricted countries and areas were cancelled. The date of travel restriction enforcement varies by country, and therefore the period and degree of flight reductions also vary by country and continent.
\subsection{Overview}
We quantitatively investigated how flight restrictions affect domestic and international travel in countries and continents through time-series analytics and graph algorithms. We also investigated the negative effect of these restrictions on employment and the number of infected cases in Barcelona, where the majority of revenue depends on the travel industry.
The structure of this paper is as follows. In Section \ref{sec:methods}, we introduce the flight dataset used for our analysis, compare it with other flight datasets, and discuss its limitations and potential biases. In Section \ref{sec:stat}, we summarize the statistical analysis of airport and flight data by country and continent. In Section \ref{sec:timeseries}, we analyze the number of daily flights globally as well as by region (continent and country level), and discuss the relationship between government announcements and the decline of flights. In Section \ref{sec:graph}, we visualize flight networks and apply graph analytic techniques to evaluate the quantitative impact on travelers. We examine the economic effects of travel restrictions on cities dependent on tourism industries using public data from Barcelona in Section \ref{sec:barcelona}, and we evaluate the relationship between incoming flights from Europe and the number of new infection cases in the United States in Section \ref{sec:cases}. Finally, we discuss our findings, limitations, and related work in Section \ref{sec:discussion}, and present the summary and future perspectives of our research in Section \ref{sec:conclusion}.
\section{Methods}
\label{sec:methods}
\subsection{Methods and Algorithms}
In order to evaluate the effect of travel restrictions on flights, we conducted the following analyses step by step.
First, we summarized the total number of airports, flights, and potential passengers for each country and continent from an open dataset.
Next, we counted the number of daily flights for countries and continents, and compared the period of flight reductions with the date of travel restriction enforcement.
In order to evaluate the density of the flight network quantitatively, we also applied graph analytics methods to the time-series flight data.
Finally, we analyzed time-series datasets on the number of incoming flights, the number of infected cases, and unemployment in Barcelona \cite{barcelona-tourism}, where the tourism industry is a main revenue source, and hypothesized the correlation among these data.
\subsection{The OpenSky Network Dataset}
For time-series flight analytics, we used an open dataset \cite{opensky-data} from The OpenSky Network\cite{opensky-web}. The dataset contains flight records with departure and arrival times, airport codes (origin and destination), and aircraft types.
The dataset includes the following flight information from January 1 to April 30, 2020. The dataset for a particular month is made available at the beginning of the following month. Note that the year of all of the following dates is 2020. In total, our dataset is composed of:
\begin{itemize}
\item 655,605 (35\%) international flights
\item 1,213,100 (65\%) domestic flights
\item 616 major airports covering 148 countries (out of 195 world countries)
\end{itemize}
\section{Statistical Analysis of Flight Data}
\label{sec:stat}
\subsection{Distribution of the Number of Airports}
We extracted flight data regarding 616 primary airports from 148 countries. Figure \ref{fig:airport-count} describes the number of major airports per continent and country. Approximately 33\% of the world's major airports are in North America, with 170 airports (27.6\%) in the United States. Asia and Europe account for 27\% (166) and 26\% (162) respectively, with China (5.7\%) and the United Kingdom (4.4\%) having the highest number of airports within each continent.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure3_1.png}}
\caption{Number of Major Airports per Continent and Country}
\label{fig:airport-count}
\end{figure}
\subsection{Number of Flights per Area and Country}
Figure \ref{fig:flights-count} describes the total number of flights from January 1 to April 30, 2020 for each continent and country. More than half of the flights in this dataset departed from Europe. Germany (10\%) and the United States (10.7\%) each accounted for roughly one tenth of the departing flights. In Asia, Hong Kong, the United Arab Emirates, and Singapore had the highest numbers of flights (2.9\%, 2.6\%, and 2.2\% respectively), yet there were very few flights connecting to China. The numbers of flights in South America, Africa, and Oceania were too small to analyze, therefore these continents were excluded from the following sections.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure3_2.png}}
\caption{Number of Departed Flights per Continent and Country}
\label{fig:flights-count}
\end{figure}
\subsection{Number of Potential Passengers}
The OpenSky Network dataset does not contain the number of passengers for each flight. However, since the dataset contains the aircraft type used, we used this to estimate the number of passengers for each trip. The capacity of each aircraft type was extracted from its respective Wikipedia page \cite{aircraft-passengers}; we aim to obtain more accurate information on aircraft types and their capacities for our future analysis.
The proportion of international passengers travelling from Asia was approximately 31.2\% of all passengers, while the corresponding flights were 23.5\% of overall flights. On the other hand, passengers on international flights departing from Europe account for about 46.5\% of all passengers, while such flights account for 55.5\% of all international flights. The gap between the proportions of passengers and flights comes from the difference in aircraft capacities.
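As an illustrative sketch of this passenger estimation (not the original analysis code), the aircraft type of each flight can be mapped to a seat capacity and aggregated; the file name, column names, and capacity values below are assumptions.
\begin{verbatim}
# Illustrative passenger estimation from aircraft type; not the original code.
import pandas as pd

capacity = {"A320": 180, "B738": 189, "B77W": 368, "A388": 525}  # example values

flights = pd.read_csv("flights_2020_01_04.csv")          # hypothetical file
flights["est_passengers"] = flights["typecode"].map(capacity)

per_continent = (
    flights.dropna(subset=["est_passengers"])
           .groupby("origin_continent")["est_passengers"]
           .sum()
           .sort_values(ascending=False)
)
print(per_continent)
\end{verbatim}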
\section{Descriptive Analysis of Time-series Flight Data}
\label{sec:timeseries}
\subsection{Overview}
Announcements and enforcement regarding lockdowns (border closures and travel restrictions, etc.) were implemented to slow the rate of transmissions and prevent overburdening of health facilities. The date of implementation, length, and extent vary by country. In the US, the Secretary of Health and Human Services declared a public health emergency on January 31\cite{national-emergency}. The US government suspended incoming flights from Europe (Schengen Area) on March 14\cite{us-close-eu}, and closed the border between the US and Canada on March 18\cite{us-close-canada}. Germany closed borders with neighboring countries on March 16\cite{germany-border}. Countries in the European Union agreed to reinforce border closures and restrict non-essential travels from March 17\cite{eu-close} onwards. In this section, we show the changes in the number of daily flights in each region, and discuss the correlations between travel restrictions and the number of flights in each country.
\subsection{Overall International Flights}
Figure \ref{fig:global-intl} describes the number of daily international flights in the world. Before the end of February, the daily number of international flights was approximately 8,000. However, the number of daily flights gradually decreased to 6,000 by early March and then drastically dropped to less than 1,000 (around 10\% of regular seasons) by the end of March.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure4_1.png}}
\caption{Total Number of Daily Departed International Flights}
\label{fig:global-intl}
\end{figure}
\subsection{Inter-continental Flights}
Figure \ref{fig:continent-intl} describes the number of inter-continental flights departing from each continent. Inter-continental flights from Europe, Asia, and North America decreased drastically from March 13, as many flights were canceled. This decrease probably reflects the WHO's declaration of a pandemic on March 11 and the declaration of a national emergency in the United States on March 13.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure4_2.png}}
\caption{Total Number of Daily Departed Inter-continental Flights per Continent}
\label{fig:continent-intl}
\end{figure}
\subsection{Continental Flights}
Figure \ref{fig:continent-inner} describes the number of continental flights within each continent.
In Europe, more than 3,000 flights were operating per day until the beginning of March, but dropped to 10\% of the regular number of flights from mid March to the end of March.
In Asia, the number of flights has gradually decreased from February.
In North America, the number of flights was steady until mid-March, before dropping significantly.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure4_3.png}}
\caption{Number of Daily International Flights per Continent}
\label{fig:continent-inner}
\end{figure}
In Europe, the UK, Ireland, and Germany had the highest number of flight passengers\cite{wb-passengers}. In Germany, international flights decreased from the beginning of March and then dropped after the nation closed its borders on March 16 (Figure \ref{fig:uk-de}). This reflected the EU's agreement to ban incoming travel from areas outside the EU, UK, and Switzerland on March 17.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure4_4.png}}
\caption{Number of Daily Flights from the United Kingdom and Germany to Other European Countries}
\label{fig:uk-de}
\end{figure}
Most of the continental flights were operating in North America (the US in particular) until mid-March, and about half of the number of flights operating at the beginning of January were still in operation at the end of the studied period (Figure \ref{fig:us-na}). On the other hand, the number of flights in Asia, Europe, and Oceania dropped to less than 20\% of that on January 1.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure4_5.png}}
\caption{Number of Daily International Flights from the United States in North America}
\label{fig:us-na}
\end{figure}
\subsection{Domestic and International Flights in the United States}
In the US, the number of domestic flights suddenly dropped from mid-March. However, roughly 50\% of all regular flights were still in operation on March 31 (Figure \ref{fig:us-domestic}), although the US government and airlines planned to ban or reduce flights by that point \cite{us-flights}. In April, the number of flights gradually declined but 30\% of regular flights were still in operation.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure4_6.png}}
\caption{Number of Domestic Flights in the United States}
\label{fig:us-domestic}
\end{figure}
The number of flights from European countries to the US suddenly dropped after March 14 (Figure \ref{fig:us-eu}), as the US suspended incoming travel from Europe (Schengen area) from March 13. Among all international flights, the primary destination from the US is Canada (green lines in Figure \ref{fig:us-top10}). The number of daily flights between the US and Canada gradually decreased after the two governments agreed to close their border on March 18.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure4_7.png}}
\caption{Number of Flights Between the United States and European Countries}
\label{fig:us-eu}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure4_8.png}}
\caption{Number of Flights Between the United States and top-10 Countries}
\label{fig:us-top10}
\end{figure}
\subsection{Key Takeaways}
We analyzed the number of daily flights for each country and continent, and compared the dates of travel restrictions with the decline of flights. We found that the number of international flights declined suddenly in mid-March. In Europe, the number of flights dropped to about 10\% of that in regular seasons after the EU agreed to restrict incoming travel on March 17. In the US, approximately half of all domestic flights were still in operation even at the end of March.
\section{Graph Analytics on the Flight Network}
\label{sec:graph}
\subsection{Overview}
With travel restrictions, the global flight network gradually became more sparse, which prevented travelers from:
\begin{enumerate}
\item Using direct flights to their destinations
\item Returning to their origin once they depart from the airport
\item Departing from their current location due to absence of flights in nearby airports
\end{enumerate}
By constructing daily flight networks, we quantitatively evaluated these effects through graph analytics. Each vertex and edge represent an airport and a flight, respectively.
\begin{enumerate}
\item Distribution of the number of neighboring airports and flights (degree and neighborhood distributions)
\item Strongly connected component (SCC) extraction
\item Isolated vertex extraction
\end{enumerate}
In the construction of flight networks, we aggregated multiple flights with the same origin, destination, and date into a single weighted edge, as illustrated in Figure \ref{fig:agg-edges-date}. The edge weight is the total number of flights.
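A minimal sketch of this per-date edge aggregation with networkx is shown below; the file and column names are assumptions, not the original implementation.
\begin{verbatim}
# Illustrative daily flight-network construction; column names are assumed.
import pandas as pd
import networkx as nx

flights = pd.read_csv("flights_2020_01_04.csv", parse_dates=["day"])

def daily_graph(flights, date):
    day = flights[flights["day"] == date]
    # Aggregate flights sharing origin and destination into one weighted edge.
    edges = (day.groupby(["origin", "destination"])
                .size()
                .reset_index(name="weight"))
    g = nx.DiGraph()
    g.add_weighted_edges_from(edges.itertuples(index=False, name=None))
    return g

g_apr1 = daily_graph(flights, pd.Timestamp("2020-04-01"))
\end{verbatim}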
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_1.png}}
\caption{Flight Edge Aggregations by Date}
\label{fig:agg-edges-date}
\end{figure}
While multiple flights are in operation every day, not all flights are operated every day (e.g., some run only three days a week). To detect long-term trends more precisely, we also aggregated flight edges by week as an optional preprocessing step (Figure \ref{fig:agg-edges-week}).
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_2.png}}
\caption{Flight Edge Aggregations by Week}
\label{fig:agg-edges-week}
\end{figure}
\subsection{Connecting Airports and Flights}
In regular times, many flights (edges in the flight network) connect airports (vertices in the flight network) to one another, which helps travelers reach more destinations with fewer transfers. However, travel restrictions prevent travelers from using direct flights and reaching their destinations. To measure this effect quantitatively, we computed the following metrics for each airport and each date (Figure \ref{fig:degree}).
\begin{enumerate}
\item Number of flights (edges) from the airport
\item Number of 1-hop neighboring airports from the airport (the number of reachable destinations with a nonstop flight)
\item Number of 1,2-hop neighboring airports from the airport (the number of reachable destinations with a single connecting flight)
\end{enumerate}
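The three metrics listed above can be computed per airport roughly as follows (an illustrative networkx sketch assuming a directed daily flight graph with weighted edges).
\begin{verbatim}
# Illustrative per-airport metrics on a directed daily flight graph g.
import networkx as nx

def airport_metrics(g, airport):
    out_flights = g.out_degree(airport, weight="weight")  # departing flights
    one_hop = set(g.successors(airport))                  # nonstop destinations
    two_hop = set()
    for a in one_hop:                                     # one connecting flight
        two_hop.update(g.successors(a))
    two_hop |= one_hop
    two_hop.discard(airport)
    return out_flights, len(one_hop), len(two_hop)
\end{verbatim}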
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_3.png}}
\caption{Number of Flights and Neighboring Airports from the Airport $A$}
\label{fig:degree}
\end{figure}
The color and size of each vertex in Figure \ref{fig:global-map-jan} - \ref{fig:na-map-apr} indicate the number of departed flights from the airport. From mid-March to the beginning of April, many flights were gradually canceled, except for domestic flights in the United States and international flights between the United States and Europe.
\begin{figure}[htb]
\begin{tabular}{c}
\begin{minipage}{0.9\hsize}
\centerline{\includegraphics[width=0.9\hsize]{figure5_4.png}}
\caption{Global Flight Network on January 1}
\label{fig:global-map-jan}
\end{minipage}\\
\begin{minipage}{0.9\hsize}
\centerline{\includegraphics[width=0.9\hsize]{figure5_5.png}}
\caption{Global Flight Network on April 1}
\label{fig:global-map-apr}
\end{minipage}
\end{tabular}
\end{figure}
The number of flights in Europe suddenly decreased from March 16 (the day before EU countries closed borders) to March 23. In April, only a few flights were operating and many airports became isolated.
\begin{figure}[htb]
\begin{tabular}{c}
\begin{minipage}{0.9\hsize}
\centerline{\includegraphics[width=0.9\hsize]{figure5_6.png}}
\caption{Flight Network in Europe on January 1}
\label{fig:eu-map-jan}
\end{minipage}\\
\begin{minipage}{0.9\hsize}
\centerline{\includegraphics[width=0.9\hsize]{figure5_7.png}}
\caption{Flight Network in Europe on April 1}
\label{fig:eu-map-apr}
\end{minipage}
\end{tabular}
\end{figure}
In North America, most of the flights were operating between the United States and Canada. In the United States, most of the domestic flights were operating as usual, even at the end of April.
\begin{figure}[htb]
\begin{tabular}{c}
\begin{minipage}{0.9\hsize}
\centerline{\includegraphics[width=\hsize]{figure5_8.png}}
\caption{Flight Network in North America on January 1}
\label{fig:na-map-jan}
\end{minipage}\\
\begin{minipage}{0.9\hsize}
\centerline{\includegraphics[width=\hsize]{figure5_9.png}}
\caption{Flight Network in North America on April 1}
\label{fig:na-map-apr}
\end{minipage}
\end{tabular}
\end{figure}
\subsection{Density and Clustering Coefficient}
\label{sec:density}
With denser flight networks, travelers can reach more destinations. As barometers of flight network density, we computed the following metrics.\\
\textbf{Density:} The ratio of the actual number of edges after aggregation against the number of edges of the complete graph.\\
\textbf{Clustering Coefficient:} The ratio of the total number of triangles against the total number of triplets (connected three vertices). This is mainly used for social network analytics to measure the connectivity of friendships.
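Both barometers can be obtained directly from networkx, as in the following sketch; computing the clustering coefficient on the undirected version of the graph is our assumption, since the text does not state this detail.
\begin{verbatim}
# Illustrative density and clustering coefficient of a daily flight graph g.
import networkx as nx

def density_metrics(g):
    density = nx.density(g)                              # edges / possible edges
    clustering = nx.average_clustering(g.to_undirected())  # triangle connectivity
    return density, clustering
\end{verbatim}
Applying these two functions to each daily graph would produce time series analogous to the curves shown in the figures below.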
The density (red lines in Figures \ref{fig:global-density} - \ref{fig:us-density}) of the global flight network rapidly decreased by half between March 8 and April 1. In Europe, the drastic decrease in flight network density was similar to that of the global network, falling to approximately 17\% of the peak. In the US, on the other hand, the density decreased gradually, and only by about 20\%.
From March 13, the clustering coefficient (blue lines) of the global flight network gradually decreased by half over the following two weeks. In Europe, it rapidly declined to about a quarter in the same period. In the US, the decline of the clustering coefficient was only about 10\% by the end of April.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_10.png}}
\caption{Density (red line) and Clustering Coefficient (blue line) of the Global Flight Network}
\label{fig:global-density}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_11.png}}
\caption{Density (red line) and Clustering Coefficient (blue line) of the Flight Network in Europe}
\label{fig:eu-density}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_12.png}}
\caption{Density (red line) and Clustering Coefficient (blue line) of the Flight Network in the United States}
\label{fig:us-density}
\end{figure}
\subsection{Strongly Connected Component and Isolated Airports}
In regular seasons, travelers departing from any airport can typically reach any destination and return to their airport of origin. This is because the flight network forms a large strongly connected component (SCC), within which travelers can travel between any pair of airports.
With travel restrictions, however, many flights were canceled and the large SCC (blue circle in Figure \ref{fig:scc-example}) decomposed into smaller SCCs (yellow and green circles in Figure \ref{fig:scc-example}). As a result, travelers cannot:
\begin{enumerate}
\item Go to other airports outside local flight networks
\item Return to their origin (a red edge from E to D)
\item Depart from isolated airports (a red vertex)
\end{enumerate}
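A minimal sketch of extracting the largest SCC and the isolated airports with networkx is given below, assuming the daily graph also contains vertices for airports with no flights on that day.
\begin{verbatim}
# Illustrative SCC and isolation metrics on a directed daily flight graph g.
import networkx as nx

def scc_metrics(g):
    largest_scc = max(nx.strongly_connected_components(g), key=len)
    isolated = [v for v in g.nodes if g.degree(v) == 0]   # no flights at all
    return len(largest_scc), len(isolated)
\end{verbatim}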
\begin{figure}[htb]
\centerline{\includegraphics[width=0.6\hsize]{figure5_13.png}}
\caption{An Example of SCC Decompositions of Flight Networks}
\label{fig:scc-example}
\end{figure}
In the global flight network, the size of the largest SCC (red line) decreased by 30\% from March 13 (300) to April 6 (210), and the number of isolated airports (blue line) increased by 30\% (from 310 to 400). In Europe, the largest SCC size on April 7 was less than half of March 10 (before border closures), and the number of isolated airports tripled. In the US on the other hand, the largest SCC size and isolated airports are almost stable (80 and 100 respectively). Although the largest SCC size gradually decreased from March 13 to April 6, the rate of the overall decrease is only about 10\%.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_14.png}}
\caption{Largest SCC Size (red line) and the Number of Isolated Airports (blue line) of the Global Flight Network}
\label{fig:scc-global}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_15.png}}
\caption{Largest SCC Size (red line) and the Number of Isolated Airports (blue line) of the Flight Network in Europe}
\label{fig:scc-eu}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_16.png}}
\caption{Largest SCC Size (red line) and the Number of Isolated Airports (blue line) of the Flight Network in the United States}
\label{fig:scc-us}
\end{figure}
\subsection{Diameter and Radius of Flight Network}
When many flights are canceled, travelers need more transfers to reach their destinations than in regular seasons. In order to measure the number of required transfers, we computed the eccentricity (the maximum distance from a specified vertex to all other connected vertices) of each vertex in the flight network. The eccentricity of an airport vertex corresponds to the number of required transfers to other destinations in the worst case. With travel restrictions, the diameter (the maximum eccentricity) and the radius (the minimum eccentricity) of the flight network become larger.
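These quantities can be computed on the largest SCC as in the following sketch (illustrative only); eccentricity is only defined on a strongly connected graph, which is why the computation is restricted to the largest SCC.
\begin{verbatim}
# Illustrative diameter, radius, and average eccentricity on the largest SCC.
import networkx as nx
import numpy as np

def reachability_metrics(g):
    scc = max(nx.strongly_connected_components(g), key=len)
    sub = g.subgraph(scc)
    ecc = nx.eccentricity(sub)             # per-vertex worst-case distance
    diameter, radius = max(ecc.values()), min(ecc.values())
    avg_ecc = float(np.mean(list(ecc.values())))
    return diameter, radius, avg_ecc
\end{verbatim}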
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_17.png}}
\caption{An Example of the Average Eccentricity, Diameter and Radius}
\label{fig:diaeter-example}
\end{figure}
The diameter (red lines in Figures \ref{fig:diaeter-global} - \ref{fig:diaeter-us}), radius (blue lines), and average eccentricity (green lines) of the largest SCC (whose size is shown as the black line) in the global flight network increased from April. In the flight network in Europe, the diameter and radius rapidly increased over the two weeks starting March 16. The overall trend of the flight network in the US did not change until the end of April.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_18.png}}
\caption{Diameter (red line) and Radius (blue line) of the Global Flight Network}
\label{fig:diaeter-global}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_19.png}}
\caption{Diameter (red line) and Radius (blue line) of the Europe Flight Network}
\label{fig:diaeter-eu}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_20.png}}
\caption{Diameter (red line) and Radius (blue line) of the United States Flight Network}
\label{fig:diaeter-us}
\end{figure}
\subsection{Flight Edge Weight and Density}
Although flights still exist between most of the airports, the flight frequency has gradually decreased. To measure the decline of flight frequencies, we defined a ``relative weight'' for each flight edge (a pair of connected airport vertices) and a network density based on the weighted edges.
First, we computed the moving average of the number of daily flights for each pair of vertices, then computed the ratio of the total number of flights in a week against that of the baseline week (January 12 - January 18), and defined this ratio as the relative weight of the edge. Besides the topological density described in Section \ref{sec:density}, we computed the density of the whole flight network using the weighted edges.
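One plausible reading of this weighted density is sketched below; the weekly flight-count data structure and the exact normalization are assumptions, since the text does not fully specify them.
\begin{verbatim}
# Illustrative relative edge weights and weighted density.
# weekly_counts[w] maps an (origin, destination) pair to its flight count in week w.
def weighted_density(weekly_counts, week, baseline_week, n_airports):
    base = weekly_counts[baseline_week]
    current = weekly_counts[week]
    total_relative_weight = sum(
        current.get(edge, 0) / count          # relative weight of this edge
        for edge, count in base.items() if count > 0
    )
    max_edges = n_airports * (n_airports - 1)  # directed complete graph
    return total_relative_weight / max_edges
\end{verbatim}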
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_21.png}}
\caption{An Example of Weighted Flight Edges}
\label{fig:weight-example}
\end{figure}
The blue and red lines in Figures \ref{fig:weight-global} - \ref{fig:weight-us} represent the density of the daily flight network with unweighted and weighted flight edges, respectively. The density of the flight network in Europe with weighted edges dropped to almost zero in April.
In the US, the density with weighted edges dropped from 0.025 to 0.005 in March. Although the topology of the flight network remained largely unchanged, the frequency of domestic flights in the US decreased significantly, as on other continents.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_22.png}}
\caption{Density of Global Flight Network with Unweighted (Blue) and Weighted (Red) Flight Edges}
\label{fig:weight-global}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_23.png}}
\caption{Density of Flight Network in Europe with Unweighted (Blue) and Weighted (Red) Flight Edges}
\label{fig:weight-eu}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure5_24.png}}
\caption{Density of Flight Network in the United States with Unweighted (Blue) and Weighted (Red) Flight Edges}
\label{fig:weight-us}
\end{figure}
\subsection{Key Takeaways}
We applied graph analytics to the flight data in order to evaluate the density of flight networks. Many graph metrics, such as density, maximum SCC size, and diameter, clearly indicated that the flight networks of the world as a whole and of Europe became sparser in March. We also defined a ``weighted'' flight network density that considers the frequency of flights, and showed that the flight network in the United States also became sparser from mid-March to mid-April.
\section{Impact on the Travel Industry in Barcelona}
\label{sec:barcelona}
Travel restrictions have a large impact on travel industries and tourism-dependent countries. In this section, we investigate the correlation and causal association between the decline of incoming flights, the number of infection cases, and the unemployment in Barcelona.
\subsection{Incoming Flight Reductions in Barcelona}
According to the unemployment dataset \cite{spain-workers}, the number of affected temporary workers (red line in Figure \ref{fig:bcn-workers}) gradually declined in April after it suddenly increased on March 24. The number of flights (blue line) arriving in Barcelona gradually decreased until the beginning of April. These numbers appear to be correlated, but more investigation is required to understand the causality and relationship between them.
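A simple way to begin such an investigation is to compute lagged correlations between the two daily series, as in the following illustrative sketch; the file and column names are assumptions, not part of the original analysis.
\begin{verbatim}
# Illustrative same-day and lagged correlation between two daily time series.
import pandas as pd

df = pd.read_csv("barcelona_daily.csv", parse_dates=["date"])   # hypothetical file
corr = df["incoming_flights"].corr(df["affected_workers"])      # Pearson, same day
lagged = df["incoming_flights"].shift(7).corr(df["affected_workers"])  # 1-week lag
print(f"same-day r = {corr:.2f}, 7-day-lag r = {lagged:.2f}")
\end{verbatim}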
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure6_1.png}}
\caption{Changes of the Number of Affected Temporary Workers with the Number of Incoming Flights and Estimated Passengers in Barcelona}
\label{fig:bcn-workers}
\end{figure}
\subsection{Unemployment and New Infection Cases in Barcelona}
The number of flights to Barcelona (blue line in Figure \ref{fig:bcn-cases}) has been decreasing since the beginning of March, and suddenly halved on March 16 just before Spain closed its borders\cite{spain-close}.
Then, the number of affected temporary workers (red line) suddenly increased by 4.3 times (from 15,901 on March 23 to 68,170 on March 25). It is notable that the dataset regarding the number of affected workers became available from March 23. The number of new infection cases in Spain (orange line) gradually increased from March 12 and peaked at almost 10,000 on March 25.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure6_2.png}}
\caption{Changes of the Number of Affected Temporary Workers (red line) in Barcelona and the Number of New Infection Cases (orange line) in Spain}
\label{fig:bcn-cases}
\end{figure}
\subsection{Flights to Barcelona and New Infection Cases in the Major Origin Countries}
\label{sec:new-cases}
Approximately half of all flights to Barcelona (BCN) come from France, Germany, and the UK. The government of Spain closed its borders on March 16, and the number of flights from these three countries halved by the following day. However, the number of new infection cases in these countries had been gradually increasing even before the border closures. Considering that the latent period of COVID-19 is commonly five to six days \cite{who-latent}, the continued increase in new infection cases for a week after the border closure may be due to the entry of infected individuals before the closure.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure6_3.png}}
\caption{Number of Incoming Flights and New Infection Cases of the Top-3 Origin Countries}
\label{fig:bcn-flights}
\end{figure}
\subsection{Key Takeaways}
We compared the number of incoming flights, the number of affected temporary workers, and the number of new infection cases in Barcelona to estimate how the travel restrictions affected the tourism industry. We found that the number of flights halved after Spain closed its borders on March 16, and that the number of affected workers jumped just after the government opened unemployment applications on March 23. The number of new infection cases kept increasing until March 25, after the border closures, potentially due to infections spreading during the latent period.
\section{Relationship Between Incoming Flights and Infection Cases}
\label{sec:cases}
A sequence analysis of the COVID-19 virus found that the virus type of most infection cases in New York City originated from Europe \cite{spread-nyc}. The first case was confirmed in the State of New York on February 29, and the number of infection cases gradually increased from March 15 (Figure \ref{fig:cases-us}). To investigate the effects of incoming flights from Europe, we compared the number of daily flights from Europe with the number of new infection cases in the US.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure7_1.png}}
\caption{Number of New Infection Cases in the United States}
\label{fig:cases-us}
\end{figure}
\subsection{Number of Infection Cases in European Countries and the United States}
The trend of increase in newly infected cases in Europe began a week before that in the US (Figure \ref{fig:cases-us-eu-all}). In particular, confirmed new infections in Italy started to increase at the beginning of March. Spain and Germany also confirmed cases before the US (Figure \ref{fig:cases-us-eu-top10}). The US announced travel restrictions on arrivals from Europe on March 13 (EU mainland) and March 14 (the UK and Ireland). After these restrictions, infection cases in the US gradually increased from March 15 and exceeded 30,000 on April 2.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure7_2.png}}
\caption{Comparison of the Total Number of Daily Infection Cases in the United States (bold blue line) and Europe (orange line)}
\label{fig:cases-us-eu-all}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure7_3.png}}
\caption{Comparison of the Number of Daily Infection Cases in the United States (bold blue line) and Top-10 Countries in Europe}
\label{fig:cases-us-eu-top10}
\end{figure}
\subsection{Number of Incoming Flights from Europe and Infection Cases in United States}
With the travel restrictions on flights from European countries to the United States, the number of incoming flights (blue line in Figure \ref{fig:flights-cases-us-eu}) suddenly decreased from March 13 to March 24. However, the number of infection cases (red line in Figure \ref{fig:flights-cases-us-eu}) began to increase just after these restrictions. Since the latent period of COVID-19 is commonly five to six days, as mentioned in Section \ref{sec:new-cases}, we cannot rule out the possibility that the infections had already spread before the US enforced these restrictions.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure7_4.png}}
\caption{Number of Daily Incoming Flights from Europe and New Infection Cases in the United States}
\label{fig:flights-cases-us-eu}
\end{figure}
European countries such as Italy, Spain, and Germany had already reported a certain number of infection cases before the travel restrictions. Figure \ref{fig:flights-cases-eu} shows the number of daily international flights from these countries to the US and the number of daily new infections in the US (red lines). Italy (blue line) reported more infection cases than other European countries from March 1 (Figure \ref{fig:flights-cases-us-eu}), but the number of flights from Italy to the US became almost zero just after the travel restriction was enforced. On the other hand, some flights from Spain (orange line) and Germany (green line) to the US were still in operation even after the restrictions were announced and infection cases in the US began to increase. These flights may be one of the reasons for the gradual increase of infections.
\begin{figure}[htb]
\centerline{\includegraphics[width=\hsize]{figure7_5.png}}
\caption{Number of Daily Incoming Flights from European Countries (Italy, Spain and Germany) and New Infection Cases in the United States}
\label{fig:flights-cases-eu}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\subsection{Findings}
In Section 4, we found out when and how international flights in each continent and country declined due to travel restrictions. The number of flights on April 1 had dropped to 10\% of that in regular seasons. The number of flights from Europe sharply decreased in the second half of March, after the US announced travel restrictions on most European countries on March 11 and the EU agreed to close borders on March 17. On the other hand, half of the regular domestic flights in the US were still in operation at the end of March.
Besides counting the number of daily flights, we applied several graph analytics techniques to the daily flight networks in Section 5 to explain how these networks became sparse. The analysis showed that the density of the global flight network halved in March, and that the network was divided into small strongly connected components from mid-March to the end of April.
In Section 6, we evaluated the effect of the reduction of incoming flights to Barcelona on the tourism industry (e.g., the number of affected temporary workers). The number of incoming flights dropped just after the travel restriction by the government of Spain on March 16, and the number of affected workers drastically increased after the government of Catalonia began publishing the number of these workers on March 23. However, the number of infection cases still increased even after the travel restrictions were announced.
We also investigated the relationship between incoming flights from Europe and the number of infections in the US in Section 7. We found that the trend of daily infections in Europe began a week ahead of the US, and that some European countries confirmed cases from March 1 while the number of infection cases in the US began to increase from March 15. We also found that flights from Europe were still accepted in the US even after travel restrictions were announced on March 14.
\subsection{Related Work}
Many studies have evaluated and estimated how COVID-19 infections affected international flights.
Stefano et al. \cite{IACUS2020104791} evaluated the economic impact of travel restrictions on flights. They constructed several scenarios of travel restrictions based on past pandemic cases such as SARS in 2003 and MERS in 2005, and estimated that the loss of global GDP may amount to up to 1.98\% and that the number of unemployed could reach up to 5 million.
Hien et al. \cite{LAU2020} evaluated the correlation between domestic COVID-19 cases and flight traffic volumes with the OpenSky Network dataset, and concluded that the number of international flight routes and passengers is correlated with the risk of COVID-19 exposure.
\subsection{Limitations and Possible Bias}
Since the flight dataset from the OpenSky Network is aggregated voluntarily, it has some limitations. First, some flight entries do not contain origin or destination airport codes. We excluded such entries before analysis to construct valid flight networks. Moreover, the dataset also has a significant regional bias. For example, many international and domestic flights connecting China, Oceania, South America, and some countries in Africa are missing.
\section{Conclusion}
\label{sec:conclusion}
\subsection{Overall Conclusion}
After the declaration of a pandemic by WHO on March 11 and government announcements of travel restrictions during the following week, the number of international flights around the world drastically declined, particularly in Europe. Although many domestic flights were still in operation in the US, the frequency gradually declined.
These travel restrictions affected tourism-dependent countries. In Barcelona in particular, many temporary workers filed unemployment applications soon after the government opened applications on March 23. Moreover, the number of newly infected cases increased after travel restrictions were declared on March 16, most likely due to COVID-19's latent period.
\subsection{Future Directions}
Due to the limited availability of public data, we could not conduct the same analysis as for Barcelona in Section 6 for many other countries in this paper. Therefore, in our next analysis, we aim to analyze the relationships among the number of flights, infection cases, and the economic impact for more countries and continents by combining various other datasets with the flight data. We also aim to focus on countries where the number of infections has soared, and investigate how the increase of cases and the reduction of flights affect the economy.
We also plan to trace and predict viral mutations with the flight network data. The type of COVID-19 virus varies from region to region\cite{TreeTime, Nextstrain}, and this may provide insight to analyzing the route of infections with viral types. With the daily flight network data, we hope to estimate the date and route of infections more precisely.
In order to predict the economic impacts such as propagations of revenue declines in travel industries for each region, we will apply advanced machine learning techniques for time-series graph data such as Graph Convolutional Network (GCN). We will also evaluate the importance of flight network features as well as demographic features of regions (countries and continents) in our next analysis.
\section*{Acknowledgment}
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{S}{occer} is a global and popular sport, and its attractiveness has drawn many spectators. Various studies are being performed in this area to grow and assist the sport and to meet the needs of soccer clubs and media. These studies mainly focus on estimating team tactics \cite{suzuki2019team}, tracking players \cite{manafifard2017survey} or the ball on the field \cite{kamble2019deep}, detecting events occurring in the match \cite{hong2018end,fakhar2019event,khan2018soccer,khan2018learning,jiang2016automatic}, summarizing the soccer match \cite{agyeman2019soccer,sanabria2019deep,rafiq2020scene}, and estimating ball possession statistics \cite{sarkar2019generation}. These studies are carried out using various methods and techniques, including machine learning (ML).
Artificial intelligence (AI), and more specifically ML, can assist in conducting the above-mentioned research in order to achieve better and more intelligent results. ML itself can be implemented in a variety of ways, including deep learning (DL). Detecting the events of a soccer match is one of the active research fields related to soccer. Today, various AI methods are utilized to detect events in a soccer game \cite{hong2018end,khan2018learning}, and the use of AI in this area can help achieve higher accuracy in detecting events.
Detecting the events of a soccer match has various applications. For instance, event detection can help obtain match statistics. Counting the number of free kicks, fouls, tackles, etc. in a soccer game can be done manually; however, manual counting is not only costly and time-consuming but also prone to errors. With intelligent systems based on event detection, such statistics can be calculated and used automatically. Another application of event detection is the summarization of a soccer match \cite{zawbaa2012event}. The summary of a soccer match includes the important events of the match. To prepare a useful summary, the events should be correctly identified, and using a highly accurate event detection method can improve the quality of the summarization task.
The purpose of the current study is to detect events in soccer matches. In this regard, various deep learning architectures have been developed. When using deep learning methods, there is always a need for training datasets. To this end, a rich visual dataset is first collected for our task, covering events such as penalty kick, corner kick, free kick, tackle, to substitute, and yellow and red cards, together with images of the sides and center of the field. This image dataset is then used to train the proposed networks. Solving the event detection problem involves some challenges, including the visual similarity of some events, such as yellow and red cards, which makes it difficult to separate the images of these two groups. This causes the classifier to have trouble distinguishing between yellow and red cards, resulting in incorrect detection.
Another problem in event detection in a soccer match is the issue of no-highlight frames. Not all scenes occurring in a soccer match correspond to one of the specific events given to the network for training. Some scenes that the network has not seen during training may occur in a soccer match. In such cases, the network must be able to handle these images or videos and not mistakenly categorize them as one of the given events. We solve the problem of \textit{no highlight} image detection by using a VAE, setting a threshold, and using additional classes in the image classification module.
The proposed method for detecting soccer match events uses two convolutional neural networks (CNNs) and a variational autoencoder (VAE). The current study focuses on resolving the problem of no-highlight frames, which may be wrongly classified as one of the events, as well as dealing with the similarity between red and yellow card frames. Experiments demonstrate that the proposed method improves the accuracy of image classification from 88.93\% to 93.21\%. The main reason for this increase is the use of a new fine-grain classification network to classify yellow and red cards. Also, the proposed method is able to detect no-highlight frames with high precision. All datasets and implementations are publicly available.\footnote{https://github.com/FootballAnalysis/footballanalysis}
The rest of the paper is organized as follows. Section II provides a literature review and examines the drawbacks of the works done in this area. Section III describes the proposed algorithm, its structure, and how it works. Section IV introduces the datasets collected for this study. Section V presents the experimental results and compares them with those of other papers. Finally, Section VI concludes the paper.
\section{Related Work}
The research of Duan et al. \cite{duan2005unified} is one of the first works in this area; it implements supervised learning for top-down video shot classification. The authors in \cite{tavassolipour2013event} present a method for detecting soccer events using a Bayesian network. Early methods suffered from low accuracy until methods based on DL were proposed. Presenting a method based on a convolutional network and an LSTM network, Agyeman et al. \cite{agyeman2019soccer} summarize a soccer match based on event detection, in which five events are considered: corner kicks, free kicks, goal scenes, centerline, and throw-in. This study uses the 3D-ResNet34 architecture in the convolutional network structure. One of the problems with this work is that the number of events is limited and no-highlight frames are not taken into account. Jiang et al. \cite{jiang2016automatic} first perform feature extraction using a convolutional network and then perform event detection by combining it with an RNN model. This method is limited to four events: goal, goal attempt, corner, and card. Sigari et al. \cite{sigari2015fast} employ a fuzzy inference system. The algorithm presented in this method works based on replay detection, logo detection, view type recognition, and audience excitement. This method is also limited to three events: penalty, corner, and free kick. Eleven events are classified in \cite{yu2019soccer}, covering a good number of events; however, this method is not capable of distinguishing between the red and yellow card events because of the high similarity between them. In general, the methods presented in this field work on either image, video \cite{agyeman2019soccer,jiang2016automatic}, or audio signals \cite{duxans2009audio,raventos2015automatic}. Nonetheless, some methods employ the audio and video signals simultaneously \cite{sanabria2019deep,sigari2015fast}.
Recently presented methods utilize DL architectures as the main tool for feature extraction. Among the DL architectures suggested for feature extraction, the closest flagship architecture is EfficientNet \cite{tan2019efficientnet}, which is presented in 8 different versions. Its different versions generally offer higher performance than previous models \cite{simonyan2014very,szegedy2015going,he2016deep}, while having fewer parameters and occupying less memory.
One of the challenges of the event detection problem in soccer matches, which has not been addressed adequately in the literature, is events that are very similar in appearance but are in fact separate events. For instance, in the images of yellow and red cards, only the color of the cards differs and the other parts of the image are the same. Although it may seem that both are card-showing operations and are almost the same, these two events affect the course of a soccer game very differently. In the literature, the yellow and red card events are considered as one event \cite{tavassolipour2013event}, which causes problems for event detection. The reason for this is the very high similarity of the images of these two events, which makes it very challenging to distinguish between them. In this paper, fine-grained image classification is used instead of common feature extraction architectures to solve this problem.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.4]{Images/Algorithm.jpg}
\caption{Generic block diagram of the proposed algorithm}
\label{fig:1}
\end{figure*}
Fine-grained image classification is one of the challenges in machine vision; it categorizes images that fall into the same broad category but not into the same subcategory \cite{dai2019bilinear}. For example, in tasks such as face recognition or distinguishing different breeds of dogs or species of birds, the instances share many structural similarities yet belong to different subcategories. As another example, the California gull and the ring-billed gull are two similar birds, differing only in beak pattern, and are in two separate subclasses. The main difficulty in this type of classification is that these differences are usually subtle and local. Finding these discriminating regions between two subcategories is the challenge that such methods face.
The work of Lin et al. \cite{lin2015bilinear} is one of the studies in the field of fine-grained image classification based on deep learning. In this model, two neural networks are used simultaneously. The outer product of the outputs of these two networks is mapped to a bilinear vector, and finally a softmax layer specifies the class of the image. The accuracy of this method on the CUB-200-2011 dataset \cite{wah2011caltech} is 84.1\%, while using only a single neural network in this architecture provides a maximum accuracy of 74.7\%. Fu et al. \cite{fu2017look} introduce a recurrent attention CNN framework, which receives the image at its original size and passes it through a classification network, thereby extracting the probability of its placement in each category. At the same time, after the convolutional layers of the classifier, an attention proposal network produces region parameters that are used to zoom in on the image and then crop it. The resulting new image is input, like the original image, into a network, and another attention proposal network again extracts a further part of the new image. This method reaches an accuracy of 85.3\% on the Birds dataset. In another work, the authors in \cite{zheng2017learning} propose a multi-attention convolutional neural network framework with an accuracy of 86.5\%. In 2018, Sun et al. \cite{sun2018multi} presented a new attention-based CNN architecture that learns multiple attention region features per image through the one-squeeze multi-excitation (OSME) module and then uses the multi-attention multi-class constraint (MAMC). Thanks to this structure, this method improves the accuracy of previous methods to some extent. One of the latest methods, presented in \cite{ge2019weakly}, consists of three steps. In the first step, the instances are detected and then instance segmentation is carried out. In the second step, a set of complementary parts is created from the original image. In the third step, a CNN is used for each resulting image, and the outputs of these CNNs are given to LSTM cells. The accuracy of the best model in this method on the Birds dataset reaches 90.4\%. In general, the limitation of the methods presented in this section is the accuracy they achieve, which newer methods attempt to improve.
Another issue is that a soccer match contains many scenes that do not correspond to any specific event, such as players simply walking or moments when the game is stopped. If such images are fed to the classification network, the network will mistakenly assign them to one of the defined categories, because it has been trained only to discriminate between the event classes; such a network is a traditional closed-set classifier \cite{geng2020recent}. In this setting, only known classes are seen during training, and only images of known classes are expected at test time; otherwise, the network has trouble identifying the image category, and an image that belongs to none of the categories is still forced into one of them. This type of classification does not suffice for the problem under study, since the input images may not belong to any of the categories. We therefore need a network that assigns an input image to one of the seven categories if it belongs to one of them and rejects it otherwise, instead of mislabeling it. In other words, open set recognition is required to address this problem \cite{geng2020recent}.
Open set recognition can be implemented in different ways. Cevikalp et al. \cite{cevikalp2016best} base their approach on support vector machines (SVMs), while the works in \cite{hassen2020learning} and \cite{bendale2016towards} rely on deep neural networks. Generative models are increasingly popular in this area and are commonly divided into instance-generation and non-instance-generation approaches \cite{geng2020recent}. Finding a suitable method remains challenging, and the event detection literature has not addressed this issue in depth.
\section{Proposed method}
This section describes the proposed method. The general procedure is explained first, and then the three main parts of the method are introduced: an image classification module that detects images of the defined events, a fine-grain classification module that classifies yellow and red card images, and a variational autoencoder that detects \textit{no highlight} frames.
\subsection{The Proposed Algorithm}
As depicted in Fig \ref{fig:1}, the received video is first split into frames according to its length and frame rate, and each frame is passed separately through a variational autoencoder (VAE). If the VAE loss for the input frame is smaller than a specified value, the frame is considered a candidate event frame and is given to the image classification module, which classifies it into one of nine classes. If the frame belongs to one of the center circle, right penalty area, or left penalty area categories, it is not treated as an event and is labeled as no highlight. If it belongs to one of the five events penalty kick, corner kick, tackle, free kick, or to substitute, that event is recorded for the frame. If the predicted class is the card class, the frame is passed to the fine-grain classification module, which determines whether the card is yellow or red. Finally, to detect events across the match, each event is evaluated over windows of 15 consecutive frames (seven frames before, seven frames after, and the current frame); if more than half of the frames in a window belong to an event, those 15 frames (half a second) are tagged with that event. An event cannot be counted more than once within 10 s; if it is detected repeatedly within that interval, only one occurrence is added to the event count. In the following, each module is described in detail.
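For concreteness, a minimal Python sketch of this per-frame decision logic and the 15-frame voting is given below. The module objects (\texttt{vae}, \texttt{classifier}, \texttt{card\_classifier}), their method names, and the frame rate of 30 fps (implied by 15 frames corresponding to half a second) are placeholders standing in for the components described in the following subsections, not the released implementation.
\begin{verbatim}
from collections import Counter

SCENE_CLASSES = {"center_circle", "left_penalty_area", "right_penalty_area"}

def label_frame(frame, vae, classifier, card_classifier,
                vae_threshold=328.0, conf_threshold=0.9):
    """Return an event label for one frame, or None for a no-highlight frame."""
    if vae.reconstruction_loss(frame) > vae_threshold:    # open-set rejection by the VAE
        return None
    label, confidence = classifier.predict(frame)         # 9-class EfficientNetB0 module
    if confidence < conf_threshold or label in SCENE_CLASSES:
        return None                                       # scene views count as no highlight
    if label == "card":                                   # defer card color to fine-grain module
        return card_classifier.predict(frame)             # "yellow_card" or "red_card"
    return label

def detect_events(frame_labels, fps=30, window=15, dedup_seconds=10):
    """Majority vote over 15-frame windows; count an event at most once per 10 s."""
    events, last_seen = [], {}
    half = window // 2
    for i in range(half, len(frame_labels) - half):
        votes = Counter(l for l in frame_labels[i - half:i + half + 1] if l is not None)
        if not votes:
            continue
        label, count = votes.most_common(1)[0]
        if count > half and i - last_seen.get(label, -10**9) >= dedup_seconds * fps:
            events.append((i, label))
            last_seen[label] = i
    return events
\end{verbatim}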
\begin{figure*}[hbt!]
\centering
\hspace*{-0.5cm}
\begin{tikzpicture}[scale=0.8, transform shape]
\node (y) at (0,0) {\small$y$};
\node[conv,rotate=90,minimum width=4.5cm] (conv1) at (1.25,0) {\small$\text{conv}_{16(3,3) }$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[pool,rotate=90,minimum width=4.5cm] (pool1) at (2.5,0) {\small$\text{pooling}_{max}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv2) at (3.75,0) {\small$\text{conv}_{32(3,3) }$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[pool,rotate=90,minimum width=4.5cm] (pool2) at (5,0) {\small$\text{pooling}_{max}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv3) at (6.25,0) {\small$\text{conv}_{64(3,3) }$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[pool,rotate=90,minimum width=4.5cm] (pool3) at (7.5,0) {\small$\text{pooling}_{max}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv4) at (8.75,0) {\small$\text{conv}_{128(3,3) }$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[pool,rotate=90,minimum width=4.5cm] (pool4) at (10,0) {\small$\text{pooling}_{max}$};;
\node[view,rotate=90,minimum width=4.5cm] (view4) at (11.25,0) {\small$\text{view}_{B, 1024}$};
\node[fc,rotate=90, minimum width = 2cm] (fc51) at (12.5,1.25) {\small$\text{fc}_{512, Q}$};
\node[fc,rotate=90, minimum width = 2cm] (fc52) at (12.5,-1.25) {\small$\text{fc}_{512, Q}$};
\node[view,rotate=90,minimum width=4.5cm] (kld) at (13.75,0) {\small$\text{KLD}_{\mathcal{N}}$\,+\,$\text{repa}_{\mathcal{N}}$};
\node (z) at (13.75,-5){\small$\tilde{z}$};
\node[fc,rotate=90,minimum width=4.5cm] (fc6) at (12.5,-5) {\small$\text{fc}_{Q, 512}$};
\node[view,rotate=90,minimum width=4.5cm] (view6) at (11.25,-5) {\small$\text{view}_{B, 1024, 2, 2, 2}$};
\node[up,rotate=90,minimum width=4.5cm] (up7) at (10,-5) {\small$\text{Upscaling}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv7) at (8.75,-5) {\small$\text{Conv2DT}_{128(3,3)}$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[up,rotate=90,minimum width=4.5cm] (up8) at (7.5,-5) {\small$\text{Upscaling}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv8) at (6.25,-5) {\small$\text{Conv2DT}_{64(3,3)}$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[up,rotate=90,minimum width=4.5cm] (up9) at (5,-5) {\small$\text{Upscaling}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv9) at (3.75,-5) {\small$\text{Conv2DT}_{32(3,3)}$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[up,rotate=90,minimum width=4.5cm] (up10) at (2.5,-5) {\small$\text{Upscaling}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv10) at (1.25,-5) {\small$\text{Conv2DT}_{16(3,3)}$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node (ry) at (0,-5) {\small$\tilde{y}$};
\draw[->] (y) -- (conv1);
\draw[->] (conv1) -- (pool1);
\draw[->] (pool1) -- (conv2);
\draw[->] (conv2) -- (pool2);
\draw[->] (pool2) -- (conv3);
\draw[->] (conv3) -- (pool3);
\draw[->] (pool3) -- (conv4);
\draw[->] (conv4) -- (pool4);
\draw[->] (pool4) -- (view4);
\draw[->] (view4) -- (fc51);
\draw[->] (view4) -- (fc52);
\draw[->] (fc51) -- (kld);
\draw[->] (fc52) -- (kld);
\draw[->] (kld) -- (z);
\draw[->] (z) -- (fc6);
\draw[->] (fc6) -- (view6);
\draw[->] (view6) -- (up7);
\draw[->] (up7) -- (conv7);
\draw[->] (conv7) -- (up8);
\draw[->] (up8) -- (conv8);
\draw[->] (conv8) -- (up9);
\draw[->] (up9) -- (conv9);
\draw[->] (conv9) -- (up10);
\draw[->] (up10) -- (conv10);
\draw[->] (conv10) -- (ry);
\node[rotate=90] (L) at (0, -2.5) {\small$\mathcal{L}(\tilde{y}, y) = -\ln p(y | \tilde{z})$};
\draw[-,dashed] (y) -- (L);
\draw[-,dashed] (ry) -- (L);
\node[rotate=90] (KLD) at (14.5, -5) {\small$\ensuremath{\mathrm{KL}}(q(z|y) | p(z))$};
\draw[-,dashed] (KLD) -- (z);
\end{tikzpicture}
\vskip 6px
\caption{No highlight detection module architecture (VAE)}
\label{fig:2}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.4]{Images/EfficientNet.jpg}
\caption{Image classification module architecture}
\label{fig:3}
\end{figure*}
\subsection{No Highlight Detection Module (Variational Autoencoder)}
A soccer match can include various scenes that are not necessarily a specific event, such as scenes where the director shows the faces of players on the field or on the bench, when players are walking, or when the game is stopped. These scenes are not categorized as events of a soccer match.
In general, to separate the images of the defined events from the rest of the images, three complementary actions are performed to detect no highlight frames:
\begin{enumerate}
\item The use of the VAE network to identify if the input images are similar to the \textbf{s}occer \textbf{ev}ent (SEV) dataset images
\item The use of three additional categories, namely left penalty area, right penalty area, and center circle, in the image classification module, since most free kick images resemble images from these categories (without these categories, images from the wings of the field would often receive a high score in the free kick category)
\item Applying the best threshold on the prediction value of the last layer of the EfficientNetB0 feature extraction network.
\end{enumerate}
The second and third actions are applied in the image classification module and are described later. For the first action, the VAE architecture shown in Fig \ref{fig:2} is employed to identify images that do not fall into any of the event categories. To this end, all images of the soccer training dataset are used to train the VAE; a threshold is then set on the reconstruction loss, and images whose reconstruction loss exceeds this threshold are not considered soccer event images. Images with a reconstruction loss below the threshold are treated as soccer game images and are passed to the image classification module for classification. In other words, the VAE acts as a two-class classifier that puts soccer images in one category and non-soccer images in another. The reconstruction loss is computed from the difference between the input image and its reconstruction: the further an input image lies from the training distribution, the higher its reconstruction loss, while images drawn from the same distribution as the training data yield a lower loss.
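Written out, the quantity thresholded here corresponds to the VAE objective indicated in Fig \ref{fig:2}; a standard formulation, assuming a Gaussian approximate posterior $q(z \mid y)=\mathcal{N}(\mu,\operatorname{diag}(\sigma^{2}))$ and a unit Gaussian prior $p(z)$, is
\begin{equation}
\mathcal{L}_{\mathrm{VAE}}(y) = -\ln p(y \mid \tilde{z}) - \frac{1}{2}\sum_{j=1}^{Q}\left(1+\ln\sigma_{j}^{2}-\mu_{j}^{2}-\sigma_{j}^{2}\right),
\end{equation}
where the first term is the reconstruction loss, the second term is the closed-form $\mathrm{KL}\big(q(z \mid y)\,\|\,p(z)\big)$, and $Q$ is the latent dimension. A frame is forwarded to the image classification module only if this loss is below the chosen threshold.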
\subsection{Image Classification Module}
The EfficientNetB0 architecture shown in Fig \ref{fig:3} is used to categorize images. This network classifies images into nine classes. For an image to be assigned to one of these classes, its prediction value in the final layer must exceed a threshold, which is set to 0.9; otherwise the image is labeled as a no highlight frame.
If the network outputs left penalty area, right penalty area, or center circle, the image is not treated as an event of the soccer match and is placed in the no highlight category. If it falls into one of the penalty kick, corner kick, tackle, free kick, or to substitute categories, that event is finalized and the decision is made. Finally, if the image is classified as a card, it is passed to the fine-grain classification module, where the color of the card is determined.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.6]{Images/OSME.jpg}
\caption{Fine-grain classification module architecture}
\label{fig:4}
\end{figure*}
\subsection{Fine-grain Classification Module }
The only difference between red and yellow cards is the color of the card; their images are otherwise indistinguishable. Both therefore belong to the card category but to separate subcategories. Since the main classifier does not distinguish these two classes well, the two card classes are merged into a single category when training the image classification module, and a separate subclassifier that focuses on fine details is used to separate yellow from red cards. The final architecture used in this module is shown in Fig \ref{fig:4}. It follows the architecture proposed in \cite{sun2018multi}, except that the EfficientNetB0 architecture is used instead of ResNet-50, and the network is trained on the yellow and red card data. The inputs to this network are the images that the image classification module categorized as cards, and its output determines whether each image is a yellow or a red card.
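A minimal Keras sketch of such a module is given below. It keeps only an OSME-style head with two squeeze-and-excitation branches on top of globally pooled EfficientNetB0 features and a plain cross-entropy loss, omitting the MAMC constraint of \cite{sun2018multi}; the number of branches and the reduction ratio are illustrative assumptions rather than the exact configuration used here.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

def osme_head(pooled, num_attentions=2, reduction=16):
    """One-squeeze multi-excitation head on globally pooled backbone features."""
    c = pooled.shape[-1]
    branches = []
    for _ in range(num_attentions):
        m = layers.Dense(c // reduction, activation="relu")(pooled)   # squeeze
        m = layers.Dense(c, activation="sigmoid")(m)                  # excitation weights
        branches.append(layers.Multiply()([pooled, m]))               # channel re-weighting
    return layers.Concatenate()(branches)

backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
features = osme_head(backbone.output)
outputs = layers.Dense(2, activation="softmax")(features)   # yellow card vs. red card
card_model = Model(backbone.input, outputs)
card_model.compile(optimizer="adam",
                   loss="categorical_crossentropy", metrics=["accuracy"])
\end{verbatim}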
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.6]{Images/ImageDataset.jpg}
\caption{Samples of SEV dataset}
\label{fig:5}
\end{figure*}
\section{Datasets}
In this paper, two datasets, namely the soccer event (SEV) dataset and the test event dataset, have been collected. The collection was carried out in two ways:
\begin{enumerate}
\item By crawling Internet websites, images of different games were collected; these images are unique and are not consecutive frames.
\item By taking videos of soccer games from recent seasons of the major European leagues and extracting images related to events.
\end{enumerate}
\subsection{ImageNet}
The ImageNet dataset \cite{krizhevsky2012imagenet} is used in the proposed method to initialize the weights of the EfficientNetB0 network.
\begin{table*}[h!]
\caption{Statistics of SEV dataset}\label{tbl1}
\makebox[\linewidth]{
\begin{tabular}{lccc}
\toprule
Class Name & \# of train images & \# of validation images & \# of test images\\
\midrule
Corner kick & 5000 & 500 & 500\\
Free kick & 5000 & 500 & 500\\
To Substitute & 5000 & 500 & 500\\
Tackle & 5000 & 500 & 500\\
Red card & 5000 & 500 & 500\\
Yellow card & 5000 & 500 & 500\\
Penalty kick & 5000 & 500 & 500\\
Center Circle & 5000 & 500 & 500\\
Left Penalty Area & 5000 & 500 & 500\\
Right Penalty Area & 5000 & 500 & 500\\
Sum & 50000 & 5000 & 5000\\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.5]{Images/ImageDataset2.jpg}
\caption{Samples of test event dataset}
\label{fig:6}
\end{figure*}
\begin{table*}[h!]
\caption{Statistics of test event dataset}
\makebox[\linewidth]{
\begin{tabular}{lc}
\hline
Class name & Image instance \\
\hline
Soccer events (events defined) & 1400 \\
Other soccer events (throw-in, goal kick, offside, …) & 1400\\
Other images (images not related to soccer) & 1400\\
Sum & 4200\\
\hline
\end{tabular}
}
\end{table*}
\subsection{SEV Dataset}
This dataset includes images of soccer match events that were collected specifically for this study. In total, 60,000 images were gathered in 10 categories, using the two approaches described above. Seven of the ten categories correspond to the soccer events defined in this paper, and the remaining three are used for no-highlight detection, so that their images are not mistakenly assigned to the seven main categories. Table I shows how the SEV dataset is divided into training, validation, and test sets; samples of the dataset are shown in Fig \ref{fig:5}.
\subsection{Test Event Dataset}
The test event dataset is used to evaluate the proposed method; samples are shown in Fig \ref{fig:6}. The dataset consists of three classes. The first class contains event images selected from the SEV dataset, with an equal number of images (200 instances) drawn from each of the seven defined event categories. The second class includes other soccer images in which none of these seven events appears. The third class includes images unrelated to soccer. The number of images in each class is given in Table II.
\section{Experiment and Performance Evaluation}
\subsection{Training}
All three networks are trained independently; no end-to-end training is employed. The training of the VAE network, the image classification network, and the yellow and red card classifier network is explained in the following subsections, respectively.
\subsubsection{VAE}
\begin{table*}[h!]
\caption{Simulation parameters of the variational autoencoder}
\makebox[\linewidth]{
\begin{tabular}{ll}
\hline
Parameter & Value \\
\hline \hline
Optimizer & Adam\\
Loss function & Reconstruction loss + KL loss\\
Performance metric & Loss\\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
To train this network, the images of the seven events defined in the SEV dataset are selected and given to the VAE as training data. The test and validation data of these seven categories are used to evaluate the network. The simulation parameters used for this network are summarized in Table III.
As illustrated in Fig \ref{fig:7}, the loss decreases over successive epochs on the validation data.
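As an illustrative complement to Table III, a minimal TensorFlow-style sketch of the per-image loss (reconstruction term plus the closed-form Gaussian KL term) is shown below; the tensor names and the binary cross-entropy reconstruction term are assumptions and may differ from the exact implementation.
\begin{verbatim}
import tensorflow as tf

def vae_loss(x, x_recon, z_mean, z_log_var):
    """Per-image VAE loss: reconstruction term + KL divergence to a unit Gaussian."""
    # Binary cross-entropy of the reconstruction, summed over all pixel positions
    recon = tf.reduce_sum(
        tf.keras.losses.binary_crossentropy(x, x_recon), axis=[1, 2])
    # Closed-form KL between q(z|x) = N(mu, sigma^2) and p(z) = N(0, I)
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)
    return recon + kl  # thresholding this value flags no-highlight frames
\end{verbatim}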
\subsubsection{Image Classification (9 Class)}
\begin{table*}[h!]
\caption{Simulation parameters of the image classification module}
\makebox[\linewidth]{
\begin{tabular}{ll}
\hline
Parameter & Value \\
\hline \hline
Optimizer & Adam\\
Loss function & Categorical Cross-Entropy\\
Performance metric & Accuracy\\
Total Classes & 9 (red and yellow card classes merged)\\
Augmentation & Scale, Rotate, Shift, Flip\\
Batch Size & 16\\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\begin{table*}[h!]
\caption{Simulation parameters of the fine-grain image classification module}
\makebox[\linewidth]{
\begin{tabular}{ll}
\hline
Parameter & Value \\
\hline \hline
Optimizer & Adam\\
Loss function & Categorical cross-entropy\\
Performance metric & Accuracy\\
Total Classes & 2 (red and yellow card classes)\\
Augmentation & Scale, rotate, shift, flip\\
Batch Size & 16\\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
The image classification network is first pre-trained on the ImageNet dataset with input dimensions of $224 \times 224 \times 3$. Using transfer learning, the network is then re-trained and fine-tuned on the SEV dataset, whose input images also have dimensions of $224 \times 224 \times 3$. The network is trained for 20 epochs with the simulation parameters specified in Table IV. The yellow and red card classes are merged, and their 5,000 images are used for network training together with the images of the other SEV dataset classes.
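A minimal Keras sketch of this transfer-learning setup, roughly following the parameters in Table IV, might look as follows; the directory layout and the augmentation magnitudes are illustrative assumptions.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation roughly matching Table IV: scale, rotate, shift, flip
datagen = ImageDataGenerator(zoom_range=0.2, rotation_range=15,
                             width_shift_range=0.1, height_shift_range=0.1,
                             horizontal_flip=True)
train_gen = datagen.flow_from_directory("SEV/train", target_size=(224, 224),
                                        batch_size=16, class_mode="categorical")

# ImageNet-pretrained EfficientNetB0 backbone, fine-tuned on the 9 SEV classes
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
outputs = layers.Dense(9, activation="softmax")(backbone.output)
model = Model(backbone.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_gen, epochs=20)
\end{verbatim}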
\subsubsection{Fine-grain image classification}
The network shown in Fig \ref{fig:4} is trained using the red card and yellow card classes of the SEV dataset. The data of each category are partitioned into training, test, and validation sets of 5000, 500, and 500 images, respectively. The simulation parameters used for this network are given in Table V.
\subsection{Evaluation Metrics}
Different metrics are used to evaluate the proposed method. Accuracy is the main metric for evaluating and comparing the EfficientNetB0 and fine-grain module architectures; recall and F1-score are used to determine the appropriate threshold for the EfficientNetB0 network; and precision is used to evaluate the event detection performance of the proposed method.
Accuracy measures how often the trained model predicts correctly and, as described above, is used to compare the different architectures and hyperparameters of the EfficientNetB0 and fine-grain module networks.
\begin{equation}
Accuracy = \frac{TP + TN}{P + N} = \frac{TP + TN}{TP + TN + FP + FN}
\end{equation}
The F1 score combines precision and recall; its value is one in the best case and zero in the worst case.
\begin{equation}
F_{1} = \frac{2}{ Precision^{-1} + Recall^{-1} } = \frac{TP}{TP + \frac{1}{2} \times (FP+FN)}
\end{equation}
\begin{figure*}[h!]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{Images/Result-Architecture.jpg}
\caption{Architecture}
\label{fig:71}
\end{subfigure}\hfil
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{Images/Result-Batchsize.jpg}
\caption{Batch size}
\label{fig:72}
\end{subfigure}\hfil
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\linewidth]{Images/Result-Optimizer.jpg}
\caption{Optimizer}
\label{fig:73}
\end{subfigure}\hfil
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\linewidth]{Images/Result-Learningrate.jpg}
\caption{Learning rate}
\label{fig:74}
\end{subfigure}\hfil
\caption{Comparing the results on hyperparameters and architecture for validation accuracy (image classification module)}
\label{fig:7}
\end{figure*}
Precision measures the fraction of positive predictions that are correct. It is used as a criterion when selecting the appropriate threshold.
\begin{equation}
Precision = \frac{TP}{TP + FP}
\end{equation}
Recall measures the fraction of actual positive instances that are correctly identified.
\begin{equation}
Recall = \frac{TP}{TP + FN}
\end{equation}
\subsection{Evaluation}
The various parts of the proposed method are evaluated to obtain the best model for detecting events in a soccer match. First, the algorithm must classify the images of the defined events correctly. Next, the ability of the network to detect no highlight frames is examined and the best possible model is selected. Finally, the performance of the proposed algorithm on soccer videos is examined and compared with other state-of-the-art methods.
\begin{table*}[h!]
\caption{Comparison between
Fine-grain image classification models}
\makebox[\linewidth]{
\begin{tabular}{lccc}
\hline
Method name & Epoch & Acc (red and yellow card) (\%) & Acc (CUB-200-2011) (\%) \\
\hline \hline
B-CNN \cite{lin2015bilinear}\cite{tan2019efficientnet} & 60 & 66.86 & 84.01 \\
OSME + MAMC using Res-Net50 \cite{sun2018multi} & 60 & 61.70 & 86.2 \\
\textbf{OSME + MAMC \cite{sun2018multi} using EfficientNetB0} \cite{tan2019efficientnet} & 60 & \textbf{79.90} & 88.51 \\
Ge et al. \cite{ge2019weakly} & 60 & 78.03 & \textbf{90.40} \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\subsubsection{Classification evaluation}{}
The image classification module is responsible for classifying images. To test and evaluate this network, as shown in Fig \ref{fig:7}, different architectures were trained and different hyperparameters were tested for each. As shown in Fig \ref{fig:71}, the EfficientNetB0 model achieves the best accuracy among the compared models.
If, in the above model, the card images are split into separate yellow and red card categories and the dataset is given to the network as the original 10 SEV categories for training, the test accuracy drops from 94.08\% to 88.93\%, owing to confusion between yellow and red card predictions. Consequently, yellow and red card detection is delegated to a subclassifier, and only a single card category is used in the EfficientNetB0 model.
For the subclassification of yellow and red card images, various fine-grained methods were evaluated; the results are shown in Table VI. The bilinear CNN \cite{lin2015bilinear} with an EfficientNet backbone achieves 66.86\% accuracy, while the OSME method with an EfficientNet backbone reaches 79.90\%, higher than the other methods. By contrast, the plain EfficientNetB0 architecture used in the image classifier reaches only 62.02\% on this task, a difference of 17.88\%.
Table VII compares the combination of the image classification module and the fine-grain classification module with other image classification models. The accuracy of the proposed method is 93.21\%, the best among all models. The proposed method is also faster than the other models except MobileNet and MobileNetV2; to show this, the execution time of each model was measured on 1400 images and the average run time is reported as the mean inference time in Table VII.
\begin{table*}[h!]
\caption{Comparison between image classification models on SEV dataset}
\makebox[\linewidth]{
\begin{tabular}{cccc}
\hline
Method name & Accuracy (\%) & Mean inference time (second)\\
\hline \hline
VGG16 \cite{simonyan2014very} & 83.21 & 0.510 \\
ResNet50V2 \cite{he2016identity} & 84.18 & 0.121 \\
ResNet50 \cite{he2016deep} & 84.89 & 0.135 \\
InceptionV3 \cite{szegedy2016rethinking} & 84.93 & 0.124 \\
MobileNetV2 \cite{sandler2018mobilenetv2} & 86.95 & \textbf{0.046} \\
Xception \cite{chollet2017xception} & 86.97 & 0.186 \\
NASNetMobile \cite{zoph2018learning} & 87.01 & 0.121 \\
MobileNet \cite{howard2017mobilenets} & 88.48 & 0.048 \\
InceptionResNetV2 \cite{szegedy2016inception} & 88.71 & 0.252 \\
EfficientNetB7 \cite{tan2019efficientnet} & 88.92 & 1.293 \\
EfficientNetB0 \cite{tan2019efficientnet} & 88.93 & 0.064 \\
DenseNet121 \cite{huang2017densely} & 89.47 & 0.152 \\
\textbf{Proposed method (image classification section)} & \textbf{93.21} & 0.120 \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
As shown in Table VIII, the problem of card confusion is also alleviated, and the proposed method in the image classification section separates the two categories reasonably well.
\begin{table*}[h!]
\caption{Confusion matrix proposed algorithm (image classification section)}
\makebox[\linewidth]{
\begin{tabular}{ccccccccccc}
\hline
& & & & Predicted & & & & & & \\
Actual & Center & Corner & Free kick & Left & Penalty & R Card & Right & Tackle & To Substitute & Y Card \\
\hline \hline
Center & 0.994 & 0 & 0.006 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Corner & 0 & 0.988 & 0.006 & 0 & 0 & 0.004 & 0 & 0 & 0.002 & 0 \\
Free kick & 0.004 & 0.004 & 0.898 & 0.03 & 0 & 0.002 & 0.05 & 0.006 & 0.004 & 0.002 \\
Left Penalty Area & 0.002 & 0 & 0.036 & 0.958 & 0.004 & 0 & 0 & 0 & 0 & 0 \\
Penalty kick & 0 & 0 & 0 & 0.002 & 0.972 & 0 & 0.026 & 0 & 0 & 0 \\
Red cards & 0 & 0.008 & 0.03 & 0 & 0 & 0.862 & 0 & 0.002 & 0.012 & 0.086 \\
Right Penalty Area & 0 & 0 & 0.01 & 0.002 & 0 & 0 & 0.988 & 0 & 0 & 0 \\
Tackle & 0.012 & 0.014 & 0.02 & 0.02 & 0.002 & 0.006 & 0.028 & 0.876 & 0.004 & 0.018 \\
To Substitute & 0 & 0 & 0 & 0 & 0 & 0.002 & 0.002 & 0 & 0.994 & 0.002 \\
Yellow cards & 0 & 0.022 & 0.01 & 0 & 0 & 0.131 & 0 & 0.002 & 0.006 & 0.829 \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\begin{table*}[h!]
\caption{Comparison between different threshold values (image classification module)}
\makebox[\linewidth]{
\begin{tabular}{ccc}
\hline
Threshold & F1-score (event images) (\%) & Recall (images not related to soccer) (\%) \\
\hline \hline
0.99 & 86.6 & 95.3 \\
0.98 & 88.2 & 94.2 \\
0.97 & 89.2 & 93.8 \\
0.95 & 90.5 & 93.1 \\
0.90 & 91.8 & 92.4 \\
0.80 & 92.4 & 76.9 \\
0.50 & 95.2 & 51.2 \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\subsubsection{Known or Unknown Evaluation}
To determine the threshold on the output of the last layer of the EfficientNetB0 network, different threshold values were tested and evaluated, as shown in Table IX. The value 0.90, which gives the best combination of F1-score for detecting events and recall for detecting no highlight images, was selected as the threshold for the last layer of the network.
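A simple way to reproduce such a sweep, given the softmax outputs for a mixed set of event and non-event test images, is sketched below; the label convention (class index for event images, $-1$ for images outside the defined events) and the use of scikit-learn metrics are assumptions made for illustration.
\begin{verbatim}
import numpy as np
from sklearn.metrics import f1_score, recall_score

def sweep_thresholds(probs, y_true,
                     thresholds=(0.99, 0.98, 0.97, 0.95, 0.90, 0.80, 0.50)):
    """probs: (N, 9) softmax outputs; y_true: class index, or -1 for non-event images."""
    for t in thresholds:
        conf = probs.max(axis=1)
        pred = np.where(conf >= t, probs.argmax(axis=1), -1)   # reject low-confidence frames
        event = y_true != -1
        f1 = f1_score(y_true[event], pred[event],
                      labels=np.unique(y_true[event]), average="macro")
        rej = recall_score((y_true == -1).astype(int), (pred == -1).astype(int))
        print(f"threshold={t:.2f}  event F1={f1:.3f}  no-highlight recall={rej:.3f}")
\end{verbatim}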
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.3]{Images/VAE_Loss.jpg}
\caption{Reconstruction Loss of VAE}
\label{fig:8}
\end{figure*}
The threshold on the VAE loss was also examined for different values; as shown in Fig \ref{fig:8}, a threshold of 328 gives the best separation between the categories.
The test event dataset is then used to assess how the network distinguishes no highlight images from the defined event images. This evaluation shows how many of the main event images are incorrectly classified as not related to soccer, how many soccer images outside the defined events are classified as main events or correctly placed in the no highlight category, and how many images of real events are assigned to the correct event classes. The results are provided in Table X.
\begin{table*}[h!]
\caption{The precision of the proposed algorithm}
\makebox[\linewidth]{
\begin{tabular}{ccc}
\hline
Class & Sub-class & Precision\\
\hline \hline
Soccer Events & Corner kick & 0.94 \\
--- & Free kick & 0.92 \\
--- & To Substitute & 0.98 \\
--- & Tackle & 0.91 \\
--- & Red Card & 0.90 \\
--- & Yellow Card & 0.91 \\
--- & Penalty Kick & 0.93 \\
Other soccer events & & 0.86 \\
Other images & & 0.94 \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\subsubsection{Final Evaluation}
Ten soccer matches were downloaded from the UEFA Champions League, and the proposed method was applied to detect their events. In this evaluation, the events occurring in each match are examined: the number of events correctly detected and the number incorrectly detected by the network are determined. Details of the results are given in Table XI, together with a comparison against other state-of-the-art methods.
\begin{table*}[h!]
\caption{Precision of the proposed and state-of-the-art methods on 10 soccer matches (\%)}
\makebox[\linewidth]{
\begin{tabular}{cccc}
\hline
Event name & Proposed method & BN \cite{tavassolipour2013event} & Jiang et al. \cite{jiang2016automatic}\\
\hline \hline
Corner kick & 94.16 & 88.13 & 93.91\\
Free kick & 83.31 & \--- & \--- \\
To Substitute & 97.68 & \--- & \--- \\
Tackle & 90.13 & 81.19 & \--- \\
Red Card & 92.39 & \--- & \--- \\
Yellow Card & 92.66 & \--- & \--- \\
Card & \--- & 88.83 & 93.21 \\
Penalty Kick & 88.21 & \--- & \--- \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\section{Conclusions and Future Work}
In this paper, two novel datasets for soccer event detection have been presented. The first is the SEV dataset, comprising 60,000 images in 10 categories, seven of which correspond to soccer events and three to soccer scenes; these were used to train the image classification networks. The images were taken from the top European leagues and the UEFA Champions League. The second is the test event dataset, which contains 4200 images in three categories: the first consists of the events defined in the paper, the second comprises other images of a soccer match outside those events, and the third includes images unrelated to soccer. This dataset was used to examine the ability of the network to distinguish highlight images from no highlight images.
Furthermore, a method for soccer event detection has been proposed. The method employs the EfficientNetB0 network in the image classification module to detect events in a soccer match, and a fine-grain image classification module to differentiate between red and yellow cards. Without this module, red and yellow cards would be classified directly by the image classification module with an accuracy of 88.93\%; the fine-grain module raises the accuracy to 93.21\%. To handle images outside the defined events, a VAE with a tuned loss threshold was employed, together with additional non-event classes and a confidence threshold, to better distinguish the images of the defined events from other images.
One of the challenges of the event detection problem in soccer matches, which has not been addressed adequately in the literature, is the events that are very similar in appearance but are two separate events. For instance, in the images of yellow and red cards, only the color of the cards is different and the other parts of the image are the same. Although it may come to the mind that both are card-taking operations and may be almost the same, in the soccer game these two events impact the game process significantly. In the literature, both yellow and red card events are considered as one event \cite{tavassolipour2013event}, which causes problems for event detection. The reasons for this are the very high similarity of the images of these two events, which makes it very challenging to distinguish between the two events. In this paper, fine-grained image classification is used instead of common feature extraction architectures to solve this problem in detecting such events.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.4]{Images/Algorithm.jpg}
\caption{Generic block diagram of the proposed algorithm}
\label{fig:1}
\end{figure*}
Fine-grained image classification is one of the challenges in machine vision, which categorizes images that fall into a similar category but do not fall into one subcategory \cite{dai2019bilinear}. For example, items such as face recognition, different breeds of dogs, birds, etc., despite the many structural similarities, do not fall into one subcategory and are different from each other. As another example, the California gull and Ringed-beak gull are two similar birds, differing only in beak pattern, and are in two separate subclasses. The main problem with this type of classification is that these differences are usually subtle and local. Finding these discriminating places of two subcategories is a challenge that we face in these methods.
The work of Lin et. al. \cite{lin2015bilinear} is one of the researches in the field of fine-grained image classification, which is based on deep learning. In this model, two neural networks are used simultaneously. The outer product of the outputs of these two networks is then mapped to a bilinear vector. Finally, there is a softmax layer to specify the classification of images. The accuracy of this method for the CUB-200-2011 dataset \cite{wah2011caltech} is 84.1\% while using only the neural network architecture in this architecture provides the maximum accuracy of 74.7\%. Fu et. al. \cite{fu2017look} introduce a framework of recurrent attention CNN, which receives the image with the original size and passes it through a classification network, thereby extracts the probability of its placement in each category. At the same time, after the convolutional layers of the classifier, it extracts an attention proposal network that contains region parameters that it uses to zoom in on the image and then crop it. It now inputs the resulting new image like the original image into a network, and at the same time extracts an attention proposal network, thereby re-extracting another part of the new image. This method reaches an accuracy of 85.3\% \textit{}for the Birds dataset. In another work, the authors in \cite{zheng2017learning} propose a framework of multi-attention convolutional neural network with an accuracy of 86.5\%. Following in 2018, Sun et. al. \cite{sun2018multi} presented new architecture on an attention-based CNN that learns multiple attention region features per an image through the one-squeeze multi-excitation (OSME) module and then use the multi-attention multi-class constraint (MAMC). Thanks to this structure, this method improves the accuracy of previous methods to some extent. One of the latest method presented in \cite{ge2019weakly} consists of three steps. In the first step, the instances are detected and then instance segmentation is carried out. In the second step, a set of complementary parts is created from the original image. In the third step, CNN is used for each image obtained, and the output of these CNNs is given to the LSTM cells for all images. The accuracy of the best model in this method for the Birds dataset reaches 90.4\%. In general, the problem with the methods presented in this section is the accuracy they achieved, while newer methods attempt to improve the accuracy.
Another issue is that a soccer match can include various scenes that are not necessarily a specific event, such as scenes from a soccer match where players walking, or the moments when the game is stopped. Now, if such images are applied as input to the classification network, the network will mistakenly place them in one of the defined categories. The reason is that the network is trained only to categorize images between events, and is called traditional classification network \cite{geng2020recent}. In such classifications, the known class is used during training and the known class images should be given to the network during testing. Otherwise, the network will have trouble in detecting the image category, and even though the image should not be placed in any of the categories, it will be placed incorrectly in one of the defined categories. This type of categorization does not suffice for the problem under study. Because, as explained, the input images in this problem may not belong to any of the categories. Thus, we need a network to specify the category of an input image if it falls into one of the seven categories; otherwise, the network rejects it and does not mistakenly place it in these defined categories. In other words, an open set recognition is required to address the mentioned problem \cite{geng2020recent} .
Open set recognition techniques can be implemented using different methods. Cevikalp et. al. \cite{cevikalp2016best} performs this based on support vector machine (SVM). The works in \cite{hassen2020learning} and \cite{bendale2016towards} are also based on deep neural networks. Today, the use of generative models in this area is reaching its pinnacle, which is divided into two categories: instance generation and non-instance generation \cite{geng2020recent} . Finding a suitable method is still challenging, and the literature presented for event detection has not addressed this issue profoundly.
\section{Proposed method}
This section describes the proposed method. Initially, the general procedure is explained, then all three main parts of the method are introduced, which includes an image classification module for detecting the images of the defined events, a fine-grain classification module used to classify yellow and red card images, and a variational autoencoder for detecting \textit{no highlights}.
\subsection{The Proposed Algorithm}
As depicted in Fig \ref{fig:1}, the received video is first split into several frames based on the video length and frame rate per second, and then each frame is passed separately through a variational autoencoder. If loss value of the VAE network for the input frame is smaller than a specified value, the received frame is considered as an event frame and given to the image classification module. The image classification module classifies the images into nine classes. If the input image belongs to one of the center circle categories, right penalty area, or left penalty area, it is not categorized as an event and is still classified as no highlights. Yet, if it is one of the five events: penalty kick, corner kick, tackle, free kick, to substitute, it is recorded as an event in that image. If the event is a card event, the image is given to the fine-grain classification module to determine if it is a yellow card or red card, and the color of the card is specified there. Finally, to detect events in a soccer match, each event is calculated for every 15 consecutive frames (seven frames before, seven frames after, and the current frame), and if more than half of the frames belong to an event, that 15 frames (half a second) are tagged as that event. Of course, an event cannot be repeated more than once in 10 s, and if repeated, only one of them is calculated as an event in the calculation of the number of events occurred. In the folowing each module is described in detail.
\begin{figure*}[hbt!]
\centering
\hspace*{-0.5cm}
\begin{tikzpicture}[scale=0.8, transform shape]
\node (y) at (0,0) {\small$y$};
\node[conv,rotate=90,minimum width=4.5cm] (conv1) at (1.25,0) {\small$\text{conv}_{16(3,3) }$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[pool,rotate=90,minimum width=4.5cm] (pool1) at (2.5,0) {\small$\text{pooling}_{max}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv2) at (3.75,0) {\small$\text{conv}_{32(3,3) }$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[pool,rotate=90,minimum width=4.5cm] (pool2) at (5,0) {\small$\text{pooling}_{max}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv3) at (6.25,0) {\small$\text{conv}_{64(3,3) }$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[pool,rotate=90,minimum width=4.5cm] (pool3) at (7.5,0) {\small$\text{pooling}_{max}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv4) at (8.75,0) {\small$\text{conv}_{128(3,3) }$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[pool,rotate=90,minimum width=4.5cm] (pool4) at (10,0) {\small$\text{pooling}_{max}$};;
\node[view,rotate=90,minimum width=4.5cm] (view4) at (11.25,0) {\small$\text{view}_{B, 1024}$};
\node[fc,rotate=90, minimum width = 2cm] (fc51) at (12.5,1.25) {\small$\text{fc}_{512, Q}$};
\node[fc,rotate=90, minimum width = 2cm] (fc52) at (12.5,-1.25) {\small$\text{fc}_{512, Q}$};
\node[view,rotate=90,minimum width=4.5cm] (kld) at (13.75,0) {\small$\text{KLD}_{\mathcal{N}}$\,+\,$\text{repa}_{\mathcal{N}}$};
\node (z) at (13.75,-5){\small$\tilde{z}$};
\node[fc,rotate=90,minimum width=4.5cm] (fc6) at (12.5,-5) {\small$\text{fc}_{Q, 512}$};
\node[view,rotate=90,minimum width=4.5cm] (view6) at (11.25,-5) {\small$\text{view}_{B, 1024, 2, 2, 2}$};
\node[up,rotate=90,minimum width=4.5cm] (up7) at (10,-5) {\small$\text{Upscaling}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv7) at (8.75,-5) {\small$\text{Conv2DT}_{128(3,3)}$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[up,rotate=90,minimum width=4.5cm] (up8) at (7.5,-5) {\small$\text{Upscaling}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv8) at (6.25,-5) {\small$\text{Conv2DT}_{64(3,3)}$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[up,rotate=90,minimum width=4.5cm] (up9) at (5,-5) {\small$\text{Upscaling}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv9) at (3.75,-5) {\small$\text{Conv2DT}_{32(3,3)}$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node[up,rotate=90,minimum width=4.5cm] (up10) at (2.5,-5) {\small$\text{Upscaling}$};
\node[conv,rotate=90,minimum width=4.5cm] (conv10) at (1.25,-5) {\small$\text{Conv2DT}_{16(3,3)}$\,+\,$\text{bn}$\,+\,$\ReLU$};
\node (ry) at (0,-5) {\small$\tilde{y}$};
\draw[->] (y) -- (conv1);
\draw[->] (conv1) -- (pool1);
\draw[->] (pool1) -- (conv2);
\draw[->] (conv2) -- (pool2);
\draw[->] (pool2) -- (conv3);
\draw[->] (conv3) -- (pool3);
\draw[->] (pool3) -- (conv4);
\draw[->] (conv4) -- (pool4);
\draw[->] (pool4) -- (view4);
\draw[->] (view4) -- (fc51);
\draw[->] (view4) -- (fc52);
\draw[->] (fc51) -- (kld);
\draw[->] (fc52) -- (kld);
\draw[->] (kld) -- (z);
\draw[->] (z) -- (fc6);
\draw[->] (fc6) -- (view6);
\draw[->] (view6) -- (up7);
\draw[->] (up7) -- (conv7);
\draw[->] (conv7) -- (up8);
\draw[->] (up8) -- (conv8);
\draw[->] (conv8) -- (up9);
\draw[->] (up9) -- (conv9);
\draw[->] (conv9) -- (up10);
\draw[->] (up10) -- (conv10);
\draw[->] (conv10) -- (ry);
\node[rotate=90] (L) at (0, -2.5) {\small$\mathcal{L}(\tilde{y}, y) = -\ln p(y | \tilde{z})$};
\draw[-,dashed] (y) -- (L);
\draw[-,dashed] (ry) -- (L);
\node[rotate=90] (KLD) at (14.5, -5) {\small$\ensuremath{\mathrm{KL}}(q(z|y) | p(z))$};
\draw[-,dashed] (KLD) -- (z);
\end{tikzpicture}
\vskip 6px
\caption{No highlight detection module architecture (VAE)}
\label{fig:2}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.4]{Images/EfficientNet.jpg}
\caption{Image classification module architecture}
\label{fig:3}
\end{figure*}
\subsection{No Highlight Detection Module (Variational Autoencoder)}
\paragraph A soccer match can include various scenes that are not necessarily a specific event, such as scenes from a soccer match where the director is showing the faces of the players either on the field or on the bench, or when the players are walking and the moments when the game is stopped. These scenes are not categorized as events of a soccer match.
In general, to be able to separate the images of the defined events from the rest of the images, three actions must be performed to complete each other and help us to detect no highlights. The three actions are :
\begin{enumerate}
\item The use of the VAE network to identify if the input images are similar to the \textbf{s}occer \textbf{ev}ent (SEV) dataset images
\item The use of three additional categories, that is, left penalty area, right penalty area, and center circle in the image classification module, given that most free kick images are similar to images from these categories (if these categories were not placed, the images of the wingers of the field would usually get a good score in the free-kick category)
\item Applying the best threshold on the prediction value of the last layer of the EfficientNetB0 feature extraction network.
\end{enumerate}
The second and third methods are applied to the image classification module and will be described later. In the first action, the VAE architecture is employed according to Fig \ref{fig:2} to identify images that do not fall into any of the event categories. To this end, the whole images of the soccer training dataset are given to the VAE network to be trained, then using reconstruction loss and determining a threshold on it, it is determined that the images whose reconstruction loss value are higher than a fixed threshold are not considered as the soccer images. Images with a reconstruction loss value less than a fixed threshold are categorized as the soccer game images and then they are given to the image classification module for classification. In other words, this VAE plays the role of a two-class classifier that puts soccer images in one category and non-soccer images in another category. Reconstruction loss is obtained from the difference between the input image and the reconstructed image. The more input images from a more distinctive distribution, the higher reconstruction loss value will be, and if it is trained using the same image distribution, the amount of this error will be less.
\subsection{Image Classification Module}
The EfficientNetB0 architecture which is shown in Fig \ref{fig:3} is used to categorize images. This network is responsible for classifying images between nine classes. To place an image in one of the classes of this network, its prediction value at the end layer should be higher than the threshold value which is set 0.9. Otherwise it would be selected as no highlight frame.
If one of the options left penalty area, right penalty area, and center circle is the output of this network, that image is no longer defined as an image of the event that occurred in a soccer match but is included in the no highlight category. However, if it is in one of the categories of penalty kick, corner kick, tackle, free kick, and to substitute, the event will be finalized and decision is made. Finally, if the image is classified in the card category, the image is given to the fine-grain classification module where the color of that card will be determined.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.6]{Images/OSME.jpg}
\caption{Fine-grain classification module architecture}
\label{fig:4}
\end{figure*}
\subsection{Fine-grain Classification Module }
The only difference between the red and yellow cards is their color of the card; otherwise, there exist no other differences in their image. Thus, both are in the card category but are separate in terms of subcategories. The main classification does not distinguish these two categories well, hence it is decided in the training phase of image classification module to merge these two cards into one category. Also, to separate the yellow and red cards, a separate subclassification is used that focuses on the details. The final architecture employed in this section can be seen in Fig \ref{fig:4} . Here, the proposed architecture provided in \cite{sun2018multi} is exploited, except that instead of the Res-Net50 architecture that is used in \cite{sun2018multi} , the EfficientNetB0 architecture is employed and the network is trained using the yellow and red cards data. The input images to this network are the images that were categorized as cards in the image classification module. The output of this network determines whether these images are in the yellow or red card category.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.6]{Images/ImageDataset.jpg}
\caption{Samples of SEV dataset}
\label{fig:5}
\end{figure*}
\section{Datasets}
In this paper, two datasets, that is, soccer event (SEV) and test event datasets have been collected. The collection of these two datasets has been done in two ways:
\begin{enumerate}
\item By crawling on Internet websites, images of different games were collected. These images are unique and are not consecutive frames
\item Using the video of the soccer games of the last few years of the prestigious European leagues and extracting images related to events.
\end{enumerate}
\subsection{ImageNet}
ImageNet dataset given in \cite{krizhevsky2012imagenet} is employed in the method proposed in this paper for the initial weighting of the EfficientNetB0 network.
\begin{table*}[h!]
\caption{Statistics of SEV dataset}\label{tbl1}
\makebox[\linewidth]{
\begin{tabular}{lccc}
\toprule
Class Name & \# of train images & \# of validation images & \# of test images\\
\midrule
Corner kick & 5000 & 500 & 500\\
Free kick & 5000 & 500 & 500\\
To Substitute & 5000 & 500 & 500\\
Tackle & 5000 & 500 & 500\\
Red card & 5000 & 500 & 500\\
Yellow card & 5000 & 500 & 500\\
Penalty kick & 5000 & 500 & 500\\
Center Circle & 5000 & 500 & 500\\
Left Penalty Area & 5000 & 500 & 500\\
Right Penalty Area & 5000 & 500 & 500\\
Sum & 50000 & 5000 & 5000\\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.5]{Images/ImageDataset2.jpg}
\caption{Samples of test event dataset}
\label{fig:6}
\end{figure*}
\begin{table*}[h!]
\caption{Statistics of test event dataset}
\makebox[\linewidth]{
\begin{tabular}{lc}
\hline
Class name & Number of images \\
\hline
Soccer events (defined events) & 1400 \\
Other soccer events (throw-in, goal kick, offside, …) & 1400\\
Other images (not related to soccer) & 1400\\
Sum & 4200\\
\hline
\end{tabular}
}
\end{table*}
\subsection{SEV Dataset}
This dataset includes images of soccer match events collected specifically for this study. In total, 60,000 images were collected in 10 categories, using the two collection methods described above. Seven of the ten categories correspond to the soccer events defined in this paper; the remaining three are used for no-highlight detection, so that images belonging to them are not mistakenly assigned to the seven main categories. Table I shows how the SEV dataset is divided into train, validation, and test sets, and samples of the dataset are shown in Fig \ref{fig:5}.
\subsection{Test Event Dataset}
The test event dataset is used to evaluate the proposed method; samples are shown in Fig \ref{fig:6}. The dataset consists of three classes. The first class contains event images selected from the SEV dataset, with an equal number of images (200 instances) taken from each of the seven defined event categories. The second class contains other soccer images in which none of these seven events occur. The third class contains images that are not related to soccer at all. The number of images in each class is given in Table II.
\section{Experiment and Performance Evaluation}
\subsection{Training}
All three networks are trained independently; end-to-end training is not employed. The training of the VAE, the nine-class image classifier, and the yellow and red card classifier is explained in the following subsections, respectively.
\subsubsection{VAE}
\begin{table*}[h!]
\caption{Simulation parameters of the variational autoencoder}
\makebox[\linewidth]{
\begin{tabular}{ll}
\hline
Parameter & Value \\
\hline \hline
Optimizer & Adam\\
Loss function & Reconstruction loss + KL loss\\
Performance metric & Loss\\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
To train this network, the images of the seven defined events from the SEV dataset are selected and used as training data for the VAE. The test and validation data of these seven categories are used to evaluate the network. The simulation parameters used for this network are summarized in Table III.
As illustrated in Fig \ref{fig:7}, the loss on the validation data decreases over successive epochs.
\subsubsection{Image Classification (9 Class)}
\begin{table*}[h!]
\caption{Simulation parameters of the image classification module}
\makebox[\linewidth]{
\begin{tabular}{ll}
\hline
Parameter & Value \\
\hline \hline
Optimizer & Adam\\
Loss function & Categorical Cross-Entropy\\
Performance metric & Accuracy\\
Total Classes & 9 (red and yellow card classes merged)\\
Augmentation & Scale, Rotate, Shift, Flip\\
Batch Size & 16\\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\begin{table*}[h!]
\caption{Simulation parameters of the fine-grain image classification module}
\makebox[\linewidth]{
\begin{tabular}{ll}
\hline
Parameter & Value \\
\hline \hline
Optimizer & Adam\\
Loss function & Categorical cross-entropy\\
Performance metric & Accuracy\\
Total Classes & 2 (red and yellow card classes)\\
Augmentation & Scale, rotate, shift, flip\\
Batch Size & 16\\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
The image classification network is first trained on the ImageNet collection with input dimensions of $224 \times 224 \times 3$. Using transfer learning, the network is then re-trained and fine-tuned on the SEV dataset, whose input images also have dimensions of $224 \times 224 \times 3$. The network is trained for 20 epochs with the simulation parameters specified in Table IV. The yellow and red card classes are merged, and their images are used for training together with the images of the other SEV classes.
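This class-merging step amounts to a simple label remapping applied before training; the sketch below is illustrative only (the label strings are assumptions):
\begin{verbatim}
# Merge the two card labels into a single "card" class for the
# nine-class classifier; the fine-grain module separates them later.
merge_card_labels <- function(labels) {
  ifelse(labels %in% c("red card", "yellow card"), "card", labels)
}

merge_card_labels(c("corner kick", "red card", "yellow card", "tackle"))
\end{verbatim}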
\subsubsection{Fine-grain image classification}
The network shown in Fig \ref{fig:4} is trained using the two classes of red and yellow cards from the SEV dataset. The data of each category is partitioned into training, validation, and test sets with 5000, 500, and 500 images, respectively. The simulation parameters used for this network are given in Table V.
\subsection{Evaluation Metrics}
Several metrics are used in the evaluation. Accuracy is the main metric for evaluating and comparing the candidate architectures of the image classification and fine-grain modules; recall and F1-score are used to determine the appropriate threshold value of the EfficientNetB0 network; and precision is used to evaluate the event detection performance of the proposed method.
The accuracy metric indicates how accurately the trained model predicts and, as described in this paper, is used to compare different architectures and hyperparameters of the EfficientNetB0 and fine-grain module networks.
\begin{equation}
Accuracy = \frac{TP + TN}{P + N} = \frac{TP + TN}{TP + TN + FP + FN}
\end{equation}
The F1-score considers precision and recall together; its value is one in the best case and zero in the worst case.
\begin{equation}
F_{1} = \frac{2}{ Precision^{-1} + Recall^{-1} } = \frac{TP}{TP + \frac{1}{2} \times (FP+FN)}
\end{equation}
\begin{figure*}[h!]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{Images/Result-Architecture.jpg}
\caption{Architecture}
\label{fig:71}
\end{subfigure}\hfil
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{Images/Result-Batchsize.jpg}
\caption{Batch size}
\label{fig:72}
\end{subfigure}\hfil
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\linewidth]{Images/Result-Optimizer.jpg}
\caption{Optimizer}
\label{fig:73}
\end{subfigure}\hfil
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\linewidth]{Images/Result-Learningrate.jpg}
\caption{Learning rate}
\label{fig:74}
\end{subfigure}\hfil
\caption{Comparing the results on hyperparameters and architecture for validation accuracy (image classification module)}
\label{fig:7}
\end{figure*}
Precision indicates how accurate the model is when it makes a prediction. This metric has also been used as a criterion in selecting the appropriate threshold.
\begin{equation}
Precision = \frac{TP}{TP + FP}
\end{equation}
Recall is the percentage of actual positive instances that are correctly identified.
\begin{equation}
Recall = \frac{TP}{TP + FN}
\end{equation}
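For reference, all four metrics can be computed from the confusion counts, as in the following sketch (the counts in the example call are hypothetical):
\begin{verbatim}
# Evaluation metrics from confusion counts.
metrics <- function(tp, tn, fp, fn) {
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  list(accuracy  = (tp + tn) / (tp + tn + fp + fn),
       precision = precision,
       recall    = recall,
       f1        = 2 * precision * recall / (precision + recall))
}

metrics(tp = 90, tn = 80, fp = 10, fn = 20)
\end{verbatim}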
\subsection{Evaluation}
The various parts of the proposed method are evaluated in order to obtain the best model for detecting events in a soccer match. First, the algorithm must classify the images of the defined events correctly. Next, the ability of the network to detect no-highlight frames is examined and the best model is selected. Finally, the performance of the proposed algorithm on
soccer video is examined and compared with other state-of-the-art methods.
\begin{table*}[h!]
\caption{Comparison between
Fine-grain image classification models}
\makebox[\linewidth]{
\begin{tabular}{lccc}
\hline
Method name & Epoch & Acc (red and yellow card) (\%) & Acc (CUB-200-2011) (\%) \\
\hline \hline
B-CNN \cite{lin2015bilinear}\cite{tan2019efficientnet} & 60 & 66.86 & 84.01 \\
OSME + MAMC using Res-Net50 \cite{sun2018multi} & 60 & 61.70 & 86.2 \\
\textbf{OSME + MAMC \cite{sun2018multi} using EfficientNetB0} \cite{tan2019efficientnet} & 60 & \textbf{79.90} & 88.51 \\
Ge et al.~\cite{ge2019weakly} & 60 & 78.03 & \textbf{90.40} \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\subsubsection{Classification evaluation}{}
The image classification module is responsible for classifying images. To test and evaluate this network, different architectures were trained and different hyperparameters were tested, as shown in Fig \ref{fig:7}. According to Fig \ref{fig:71}, the EfficientNetB0 model achieves the best accuracy among the tested models.
If the card images are instead split into separate yellow and red card categories and the network is trained on the original 10 categories of the SEV dataset, the test accuracy drops from 94.08\% to 88.93\%. The reason is the confusion between yellow and red card predictions in this model. Consequently, yellow and red card detection is delegated to a subclassifier, and only the merged card category is used in the EfficientNetB0 model.
For the subclassification of yellow and red card images, various fine-grained methods were evaluated; the results are shown in Table VI. The bilinear CNN method \cite{lin2015bilinear} with the EfficientNet backbone achieves 66.86\% accuracy, while the OSME+MAMC method with the EfficientNetB0 backbone reaches 79.90\%, higher than the other methods. By contrast, the plain EfficientNetB0 architecture used in the image classifier reaches only 62.02\% on this task, a difference of 17.88\%.
Table VII compares the combination of the image classification module and the fine-grain classification module with other models. As shown in Table VII, the proposed method reaches an image classification accuracy of 93.21\%, the best among all models. The proposed method is also faster than the other models, except for MobileNet, MobileNetV2, and the plain EfficientNetB0. To demonstrate this, the execution time of each model is measured on 1400 images, and the average run time is reported as the mean inference time in Table VII.
\begin{table*}[h!]
\caption{Comparison between image classification models on SEV dataset}
\makebox[\linewidth]{
\begin{tabular}{lcc}
\hline
Method name & Accuracy (\%) & Mean inference time (seconds)\\
\hline \hline
VGG16 \cite{simonyan2014very} & 83.21 & 0.510 \\
ResNet50V2 \cite{he2016identity} & 84.18 & 0.121 \\
ResNet50 \cite{he2016deep} & 84.89 & 0.135 \\
InceptionV3 \cite{szegedy2016rethinking} & 84.93 & 0.124 \\
MobileNetV2 \cite{sandler2018mobilenetv2} & 86.95 & \textbf{0.046} \\
Xception \cite{chollet2017xception} & 86.97 & 0.186 \\
NASNetMobile \cite{zoph2018learning} & 87.01 & 0.121 \\
MobileNet \cite{howard2017mobilenets} & 88.48 & 0.048 \\
InceptionResNetV2 \cite{szegedy2016inception} & 88.71 & 0.252 \\
EfficientNetB7 \cite{tan2019efficientnet} & 88.92 & 1.293 \\
EfficientNetB0 \cite{tan2019efficientnet} & 88.93 & 0.064 \\
DenseNet121 \cite{huang2017densely} & 89.47 & 0.152 \\
\textbf{Proposed method (image classification section)} & \textbf{93.21} & 0.120 \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
As shown in Table VIII, the problem of card overlap is also resolved, and the proposed method separates the two card categories reasonably well.
\begin{table*}[h!]
\caption{Confusion matrix of the proposed algorithm (image classification section)}
\makebox[\linewidth]{
\begin{tabular}{ccccccccccc}
\hline
& & & & Predicted & & & & & & \\
Actual & Center & Corner & Free kick & Left & Penalty & R Card & Right & Tackle & To Substitute & Y Card \\
\hline \hline
Center & 0.994 & 0 & 0.006 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Corner & 0 & 0.988 & 0.006 & 0 & 0 & 0.004 & 0 & 0 & 0.002 & 0 \\
Free kick & 0.004 & 0.004 & 0.898 & 0.03 & 0 & 0.002 & 0.05 & 0.006 & 0.004 & 0.002 \\
Left Penalty Area & 0.002 & 0 & 0.036 & 0.958 & 0.004 & 0 & 0 & 0 & 0 & 0 \\
Penalty kick & 0 & 0 & 0 & 0.002 & 0.972 & 0 & 0.026 & 0 & 0 & 0 \\
Red cards & 0 & 0.008 & 0.03 & 0 & 0 & 0.862 & 0 & 0.002 & 0.012 & 0.086 \\
Right Penalty Area & 0 & 0 & 0.01 & 0.002 & 0 & 0 & 0.988 & 0 & 0 & 0 \\
Tackle & 0.012 & 0.014 & 0.02 & 0.02 & 0.002 & 0.006 & 0.028 & 0.876 & 0.004 & 0.018 \\
To Substitute & 0 & 0 & 0 & 0 & 0 & 0.002 & 0.002 & 0 & 0.994 & 0.002 \\
Yellow cards & 0 & 0.022 & 0.01 & 0 & 0 & 0.131 & 0 & 0.002 & 0.006 & 0.829 \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\begin{table*}[h!]
\caption{Comparison between different threshold values (image classification module)}
\makebox[\linewidth]{
\begin{tabular}{ccc}
\hline
Threshold & F1-score (event images) (\%) & Recall (images not related to soccer) (\%) \\
\hline \hline
0.99 & 86.6 & 95.3 \\
0.98 & 88.2 & 94.2 \\
0.97 & 89.2 & 93.8 \\
0.95 & 90.5 & 93.1 \\
0.90 & 91.8 & 92.4 \\
0.80 & 92.4 & 76.9 \\
0.50 & 95.2 & 51.2 \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\subsubsection{Known or Unknown Evaluation}
To determine the threshold on the output of the last layer of the EfficientNetB0 network, different threshold values were tested and evaluated, as reported in Table IX. The value 0.90, which gives the best combined F1-score for event detection and recall for detecting no-highlight images, was selected as the threshold for the last layer of the network.
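One way to make this selection explicit is to score each candidate threshold by the sum of the two criteria in Table IX, as in the sketch below; the additive criterion is our reading of the selection rule, not a formula stated elsewhere in the paper:
\begin{verbatim}
# Threshold selection from Table IX: choose the value that maximizes
# F1 on event images plus recall on images not related to soccer.
thresholds <- c(0.99, 0.98, 0.97, 0.95, 0.90, 0.80, 0.50)
f1_event   <- c(86.6, 88.2, 89.2, 90.5, 91.8, 92.4, 95.2)
recall_ns  <- c(95.3, 94.2, 93.8, 93.1, 92.4, 76.9, 51.2)

thresholds[which.max(f1_event + recall_ns)]   # returns 0.90
\end{verbatim}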
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.3]{Images/VAE_Loss.jpg}
\caption{Reconstruction Loss of VAE}
\label{fig:8}
\end{figure*}
The threshold on the VAE reconstruction loss was also examined over a range of values; as shown in Fig \ref{fig:8}, a loss threshold of 328 gives the best separation between the categories.
The test event dataset is used to examine how the method distinguishes no-highlight images from the defined event images. It shows how many main-event images are incorrectly classified as not related to soccer, how many other soccer images are classified as main events or placed in the no-highlight category, and how many images of the real events are assigned to the correct event class rather than to categories outside the main events. The results of this evaluation are provided in Table X.
\begin{table*}[h!]
\caption{The precision of the proposed algorithm}
\makebox[\linewidth]{
\begin{tabular}{ccc}
\hline
Class & Sub-class & Precision\\
\hline \hline
Soccer Events & Corner kick & 0.94 \\
--- & Free kick & 0.92 \\
--- & To Substitute & 0.98 \\
--- & Tackle & 0.91 \\
--- & Red Card & 0.90 \\
--- & Yellow Card & 0.91 \\
--- & Penalty Kick & 0.93 \\
Other soccer events & & 0.86 \\
Other images & & 0.94 \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\subsubsection{Final Evaluation}
Ten soccer matches were downloaded from the UEFA Champions League, and event detection was carried out using the proposed method. In this evaluation, the events occurring in each match are examined:
the numbers of events correctly and incorrectly detected by the network are determined. Detailed results are given in Table XI and compared with other state-of-the-art methods.
\begin{table*}[h!]
\caption{Precision of the proposed and state-of-the-art methods on 10 soccer matches (\%)}
\makebox[\linewidth]{
\begin{tabular}{cccc}
\hline
Event name & Proposed method & BN \cite{tavassolipour2013event} & Jiang et al.~\cite{jiang2016automatic}\\
\hline \hline
Corner kick & 94.16 & 88.13 & 93.91\\
Free kick & 83.31 & \--- & \--- \\
To Substitute & 97.68 & \--- & \--- \\
Tackle & 90.13 & 81.19 & \--- \\
Red Card & 92.39 & \--- & \--- \\
Yellow Card & 92.66 & \--- & \--- \\
Card & \--- & 88.83 & 93.21 \\
Penalty Kick & 88.21 & \--- & \--- \\
\hline
\end{tabular}
}
\label{tab:d1}
\end{table*}
\section{Conclusions and Future Work}
In this paper, two novel datasets for soccer event detection have been presented. One is the SEV dataset, comprising 60,000 images in 10 categories, seven of which relate to soccer events and three to soccer scenes; it was used to train the image classification networks. The images of this dataset were taken from the top European leagues and the UEFA Champions League. The other is the test event dataset, which contains 4200 images in three categories: the first consists of the events defined in the paper, the second comprises other soccer images apart from the first-category events, and the third includes images unrelated to soccer. This dataset was used to examine the network's ability to detect and distinguish highlight images from no-highlight images.
Furthermore, a method for soccer event detection was proposed. The method employs the EfficientNetB0 network in the image classification module to detect events in a soccer match, and a fine-grain image classification module to differentiate between red and yellow cards. Without this module, red and yellow cards would be classified directly by the image classification module and the overall accuracy would be 88.93\%; with the fine-grain module the accuracy increases to 93.21\%. To address the problem of predicting images outside the defined classes, a VAE with a tuned loss threshold was employed, together with several additional image categories beyond the defined events, to better distinguish the defined event images from other images.
\subsection*{Keywords}
Probability, Combinatorics, Coin Toss,
Bernoulli Runs, Book Cricket, Monte Carlo Simulation.
\end{abstract}
\maketitle
\section{Introduction}
In the late 1980s and early 1990s, to beat afternoon drowsiness in school, one resorted to
playing ``book cricket''. The book cricket rules were quite simple.
Pick a text book and open it randomly and note the last digit of the even numbered page.
The special case is when you see a page ending with $0$ then you have lost a wicket.
If you see $8$, most of my friends would score it as a $1$ run.
The other digits $\{2,4,6\}$ would be scored at face value; that is, if you see
a page ending with $4$, you have scored $4$ runs.
We would play two national teams without any overs limit since it was
too much of a hassle to count the number of deliveries\footnote{The number of
deliveries equals the number of times we opened the book.}.
A book cricketer would construct his own team and match it up against his friend.
One of the teams would be the Indian team and the other team
would be the side the Indian team was playing at that time.
The book cricketer would play till he lost $10$ wickets\footnote{Remember losing a wicket is
when on opening the book, you see a page number that ends with a $0$.
Thus the inning ends when he sees ten page numbers that end with a $0$.}.
Thus it was like test cricket but with just one innings for each team.
The one who scores the most number of runs would be declared the winner.
In probabilistic terms, we were simulating a batsman with the following
probability mass function:
\begin{align}
\label{eqn:book:cricket:prob:model}
p_{X}(x) &= \frac{1}{5} \quad x \in \{\mathrm{out},1,2,4,6\}
\end{align}
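For concreteness, this naive model can be simulated in a few lines of \verb@R@ (a sketch written for this exposition, not part of the appendices):
\begin{verbatim}
# Simulate one innings of naive book cricket: open the book until ten
# wickets (page digit 0) have fallen, scoring 1, 2, 4 or 6 otherwise.
book_cricket_innings <- function() {
  outcomes <- c(0, 1, 2, 4, 6)   # 0 stands for "out"
  runs <- 0; wickets <- 0
  while (wickets < 10) {
    x <- sample(outcomes, 1)     # each outcome has probability 1/5
    if (x == 0) wickets <- wickets + 1 else runs <- runs + x
  }
  runs
}
set.seed(1)
book_cricket_innings()
\end{verbatim}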
With this simple model, we used to get weird results, such as a well-known batting
bunny like Narendra Hirwani scoring the most runs. So we had to change
rules for each player based on his batting ability. For example, a player like
Praveen Amre would be dismissed only if we got consecutive page numbers that ended
with a $0$. Intuitively without understanding a whole lot of probability, we had
reduced the probability of Amre being dismissed from $\frac{1}{5}$ to $\frac{1}{25}$.
In the same spirit, for Hirwani we modified the original model and made $8$ a
dismissal. Thus the probability of Hirwani being dismissed went up from
$\frac{1}{5}$ to $\frac{2}{5}$ thus reducing the chances of Hirwani getting
the highest score which did not please many people in the class\footnote{
Another reason for the lack of popularity was that it involved more book keeping.}.
In the next section, we develop a probabilistic model of cricket by refining
the book cricket model. The refinement is in the probabilities, which are updated to
approximate real-world cricketing statistics.
\section{A Probabilistic Model for Cricket}
The book cricket model described above can be thought of as a five-sided die game.
The biggest drawback of the book cricket model described in
Eq.~\eqref{eqn:book:cricket:prob:model} is the fact that all five outcomes,
\begin{align}
\Omega &= \{\mathrm{out},1,2,4,6\} \notag
\end{align}
are equally likely.
Instead of assigning uniform probabilities to all the five outcomes, a realistic
model will take probabilities from the career record.
For example,
\href{http://www.espncricinfo.com/india/content/player/30750.html}
{VVS Laxman}
has hit $4$ sixes in $3282$ balls he faced in his ODI career.
Thus one can model the probability of Laxman hitting a six as $\frac{4}{3282}$
instead of our naive book cricket probability of $\frac{1}{5}$.
To further refine this model, we have to add two more outcomes, namely
the dot ball (resulting in $0$ runs) and the scoring shot that results in $3$ runs
being scored. This results in a seven-sided die model whose sample space is:
\begin{align}
\Omega &= \{\mathrm{out},0,1,2,3,4,6\} \label{eqn:seven-sided}
\end{align}
\subsection{A Simplified Model}
We need to simplify our model since it is hard to obtain detailed statistics
for the seven-sided die model.
For example, it is pretty hard to find out how many twos and threes Inzamam-ul-Haq
scored in his career. But it is easy to obtain his career average and strike rate.
\begin{definition}
The \textbf{average} ($\mathrm{avg}$) for a \emph{batsman} is defined as the average number of runs scored
per dismissal.
\begin{align}
\mathrm{avg} &= \frac{\text{number of runs scored}} {\text{number of dismissals}} \label{eqn:def:avg}
\end{align}
In case of a \emph{bowler} the numerator becomes number of runs \emph{conceded}.
\end{definition}
\begin{definition}
The \textbf{strike rate} ($\mathrm{sr}$) for a batsman is defined as the runs scored per $100$ balls.
Thus when you divide the strike rate by $100$ you get the average runs ($r$) scored per ball.
\begin{align}
\mathrm{r} = \frac{\mathrm{sr}}{100}
&= \frac {\text{number of runs scored}} {\text{number of balls faced}} \label{eqn:def:batsman:r}
\end{align}
\end{definition}
\begin{definition}
The \textbf{economy rate} ($\mathrm{econ}$) for a bowler is defined as the runs
conceded per $6$ balls\footnote{
In the case of bowlers, strike rate ($\mathrm{sr}$) means number of deliveries
required on average for a dismissal.
So strike rate ($\mathrm{sr}$) is interpreted differently for a batsman and bowler.
An example of context sensitive information or in Object Oriented jargon overloading!
}.
Thus the average runs ($r$) conceded per ball is:
\begin{align}
\mathrm{r} = \frac{\mathrm{econ}}{6}
&= \frac {\text{number of runs conceded}} {\text{number of balls bowled}} \label{eqn:def:bowler:r}
\end{align}
\end{definition}
We need to simplify the seven-sided die probabilistic model to take advantage
of the easy availability of the above statistics: average, strike rate and
economy rate.
This simplification leads to modeling every delivery in cricket
as a simple coin-tossing experiment.
Thus the simplified model will have only two outcomes, either a \emph{scoring shot} (heads)
or a \emph{dismissal} (tails) during every delivery.
We also assign a value to the scoring shot (heads) namely the average runs scored per ball.
Now to complete our probabilistic model all we need is to estimate the probability of getting
dismissed\footnote{The probability of getting a tail.}.
From the definitions of strike rate and average, one can calculate the
average number of deliveries a batsman faces before he is dismissed ($\mathrm{bpw}$).
\begin{align}
\mathrm{bpw} &= \frac{\text{number of balls faced}} {\text{number of dismissals}} \notag \\
&= \frac{\text{number of runs scored}} {\text{number of dismissals}} \times
\frac{\text{number of balls faced}} {\text{number of runs scored}} \notag \\
&= \mathrm{avg} \times \frac{1}{\frac{\mathrm{sr}}{100}} \notag \\
&= 100 \times \left( \frac{\mathrm{avg}} {\mathrm{sr}} \right) \label{eqn:bpw}
\end{align}
Assuming that the batsman can be dismissed during any delivery,
the probability of being dismissed ($1-p$) is given by\footnote{
The formula
$\mathbb{P}\{\text{Dismissal}\} = 1-p = \frac{r} {\mathrm{avg}}$
also applies to a bowler with the $r$ being calculated using
Eq.~\eqref{eqn:def:bowler:r}.
}:
\begin{align}
\mathbb{P}\{\text{Dismissal}\} = 1-p &= \frac{1}{\text{bpw}}
= \frac{\mathrm{sr}} {100 \times \mathrm{avg}}
= \frac{r} {\mathrm{avg}} \label{eqn:batsman:prob:out}
\end{align}
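As a quick numerical illustration of Eq.~\eqref{eqn:bpw} and Eq.~\eqref{eqn:batsman:prob:out} (a sketch; the figures for Richards reappear in the next section):
\begin{verbatim}
# Balls per wicket and dismissal probability from average and strike rate.
avg <- 47; sr <- 90.2        # Sir Viv Richards' ODI career figures
r   <- sr / 100              # runs per ball
bpw <- 100 * avg / sr        # balls per wicket, Eq. (bpw)
q   <- r / avg               # probability of dismissal
c(r = r, bpw = bpw, q = q)   # r = 0.902, bpw ~ 52.1, q ~ 0.0192
\end{verbatim}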
In probability parlance, coin tossing is called a Bernoulli
trial~\cite{ross:2002:probability}.
From the perspective of a batsman, if you get a head you score $r$ runs
and if you get a tail you are dismissed.
Thus the most basic event in cricket, the ball delivered by a bowler to a
batsman is modeled by a coin toss.
Just as Markov chains form the theoretical underpinning for modeling baseball
run scoring~\cite{dangelo:2010:baseball}, Bernoulli\footnote{
Arguably, the Bernoulli's were the greatest mathematical family that ever
lived~\cite{mukhopadhyay:2001:bernoulli,polasek:2000:bernoulli}
}
trials form the basis
for cricket.
Now that we have modeled each delivery as a Bernoulli trial, we have the
mathematical tools to evaluate a batsman or a bowler.
\section{Evaluating a Batsman}
To evaluate a batsman we imagine a ``team'' consisting of eleven replicas of the same
batsman and find how many \emph{runs on average} this imaginary team will
score\footnote{
It is straightforward to apply this method to evaluate a bowler too.
}.
For example, to evaluate Sachin Tendulkar we want to find out
how many runs will be scored by a team consisting of eleven Tendulkar's.
Since we model each delivery as a Bernoulli trial, the total runs scored by this
imaginary team will be a probability distribution.
To further elaborate this point, if a team of Tendulkar's faces $300$ balls,
they score $300r$ runs if they don't lose a wicket where $r$ is defined in
Eq.~\eqref{eqn:def:batsman:r}.
They score $299r$ runs if they lose only one wicket. On the other end of the scale,
this imaginary team might be dismissed without scoring a run if it so happens
that all the first $10$ tosses turn out to be tails\footnote{
The probability of being all out without a run being scored will be astronomically
low for an imaginary team of eleven Tendulkar's!}.
Thus this imaginary team can score total runs anywhere between $[0, 300r]$
and each total is associated with a probability.
But it is difficult to interpret a full probability distribution, and it is much
easier to comprehend basic statistical summaries such as \emph{mean}
and \emph{standard deviation}.
We call the \emph{mean} as \textbf{Bernoulli runs} in this paper since
the idea was inspired by Markov runs in Baseball~\cite{dangelo:2010:baseball}.
\subsection{Bernoulli Runs}
We now derive the formula for the \emph{mean} of runs scored by a team
consisting of eleven replicas of the same batsman in an One day international
(ODI) match\footnote{The ODI has a maximum of $300$ deliveries per team.
The formulas can be derived for Twenty20 and Test matches with appropriate
deliveries limit.}.
Let $Y$ denote the number of runs scored in a ODI by this imaginary team.
It is easier to derive the formula for the mean of the total runs ($\mathbb{E}(Y)$)
scored by partitioning the various scenarios into two cases:
\begin{enumerate}
\item The team loses all the wickets ($\mathbb{E}_{\text{all-out}}(Y)$);
\item The team uses up all the allotted deliveries which implies that the team
has lost fewer than 10 wickets in the allotted deliveries ($\mathbb{E}_{\overline{\text{all-out}}}(Y)$).
\end{enumerate}
This leads to:
\begin{align}
\mathbb{E}(Y) &= \mathbb{E}_{\text{all-out}}(Y) + \mathbb{E}_{\overline{\text{all-out}}}(Y)
\label{eqn:bruns:split}
\end{align}
In the first case, where the team loses all its wickets, we can once again partition
on the delivery on which the tenth wicket was lost.
Let $b$ be the delivery on which the tenth wicket fell. The tenth wicket can fall on any
delivery between $[10,300]$.
The first nine wickets could have fallen in any one of the previous $b-1$ deliveries.
The number of possible ways the nine wickets could have fallen in $b-1$
deliveries is given by ${b-1 \choose 9}$.
The number of scoring shots (heads in coin tosses) is $b-10$.
The mean number of runs scored while losing all wickets is given by:
\begin{align}
\mathbb{E}_{\text{all-out}}(Y) &= r \sum_{b=10}^{300} (b-10)
\left(
{b-1 \choose 9} p^{(b-1)-9} (1-p)^9
\right)
(1-p)
\label{eqn:bruns:all-out}
\end{align}
The second case can be partitioned on the basis of the number of wickets ($w$) lost.
Applying the same logic, one can derive the following result for the mean
number of runs scored:
\begin{align}
\mathbb{E}_{\overline{\text{all-out}}}(Y) &=
r \sum_{w=0}^{9} (300-w) {300 \choose w} p^{300-w} (1-p)^w
\label{eqn:bruns:not-all-out}
\end{align}
Substituting Eq.~\eqref{eqn:bruns:all-out} and Eq.~\eqref{eqn:bruns:not-all-out} in
Eq.~\eqref{eqn:bruns:split} we get the following equation for the mean of the runs
scored:
\begin{multline}
\mathbb{E}(Y) = r \sum_{b=10}^{300} (b-10) {b-1 \choose 9} p^{b-10} (1-p)^{10} \\
+ r \sum_{w=0}^{9} (300-w) {300 \choose w} p^{300-w} (1-p)^w
\label{eqn:bruns:mean}
\end{multline}
One can generalize the above Eq.~\eqref{eqn:bruns:mean} to generate any moment.
The $k$th moment is given by:
\begin{multline}
\mathbb{E}(Y^{k}) = r^{k} \sum_{b=10}^{300} (b-10)^{k} {b-1 \choose 9} p^{b-10} (1-p)^{10} \\
+ r^{k} \sum_{w=0}^{9} (300-w)^{k} {300 \choose w} p^{300-w} (1-p)^w
\label{eqn:bruns:moment}
\end{multline}
The standard deviation can be obtained by
\begin{align}
\sigma_{Y} &= \sqrt{\mathbb{E}(Y^{2}) - \left( \mathbb{E}(Y) \right)^{2}}
\label{eqn:bruns:sd}
\end{align}
To make things concrete, we illustrate the calculation of Bernoulli runs
for a batsman and a bowler using the statistical programming language
\verb@R@~\cite{R:manual}. The \verb@R@ code which implements this is
listed in Appendix~\ref{appendix:formula}.
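Since the appendix is not reproduced here, the following is a minimal sketch of such a function written directly from Eq.~\eqref{eqn:bruns:moment} and Eq.~\eqref{eqn:bruns:sd}; the implementation in \verb@R/analytical.R@ may differ in detail:
\begin{verbatim}
# Bernoulli runs: mean and standard deviation of the runs scored by a
# team of eleven identical players, from the average and strike rate.
bernoulli <- function(avg, sr, balls = 300, wickets = 10) {
  r <- sr / 100              # runs per ball
  q <- r / avg               # probability of dismissal per ball
  p <- 1 - q
  moment <- function(k) {
    b <- wickets:balls       # delivery on which the last wicket falls
    all_out <- sum((b - wickets)^k * choose(b - 1, wickets - 1) *
                     p^(b - wickets) * q^wickets)
    w <- 0:(wickets - 1)     # wickets lost when the deliveries run out
    not_out <- sum((balls - w)^k * choose(balls, w) * p^(balls - w) * q^w)
    r^k * (all_out + not_out)
  }
  m <- moment(1)
  list(mean = m, sd = sqrt(moment(2) - m^2))
}
\end{verbatim}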
\begin{example}[Batsman]
\href{http://www.espncricinfo.com/westindies/content/player/52812.html}
{Sir Viv Richards}.
Richards has an $\mathrm{avg} = 47.00$ and $\mathrm{sr} = 90.20$ in ODI matches.
From Eq.~\eqref{eqn:def:batsman:r} we get $r = \frac{\mathrm{sr}}{100} = 0.9020$ and
from Eq.~\eqref{eqn:batsman:prob:out}, we get $1-p = \frac{0.9020}{47.00} = 0.01919$.
Substituting the values of $1-p$ and $r$ in Eq.~\eqref{eqn:bruns:mean} and
Eq.~\eqref{eqn:bruns:sd} we get $\mathrm{mean} = 262.84$ and
$\mathrm{sd} = 13.75$. One can interpret this result as follows: a team consisting of
eleven Richards' will score on average $262.84$ runs per ODI inning with a
standard deviation of $13.75$ runs per inning.
The code listed in Appendix~\ref{appendix:formula} is at \verb@R/analytical.R@
and can be executed as follows:
\begin{Schunk}
\begin{Sinput}
> source(file = "R/analytical.R")
> bernoulli(avg = 47, sr = 90.2)
\end{Sinput}
\begin{Soutput}
$mean
[1] 262.8434
$sd
[1] 13.75331
\end{Soutput}
\end{Schunk}
\end{example}
\begin{example}[Bowler]
\href{http://www.espncricinfo.com/westindies/content/player/51107.html}
{Curtly Ambrose}.
Ambrose
has an $\mathrm{avg} = 24.12$ and $\mathrm{econ} = 3.48$ in ODI matches.
From Eq.~\eqref{eqn:def:bowler:r}, we get $r = \frac{\mathrm{econ}}{6} = 0.58$ and
from Eq.~\eqref{eqn:batsman:prob:out}, we get $1-p = \frac{r}{\mathrm{avg}} = \frac{0.58}{24.12} = 0.024$.
Substituting the values of $1-p$ and $r$ in Eq.~\eqref{eqn:bruns:mean} and
Eq.~\eqref{eqn:bruns:sd} we get $\mathrm{mean} = 164.39$ and
$\mathrm{sd} = 15.84$. One can interpret this result as follows: a team consisting of
eleven Ambrose's will concede on average $164.39$ runs per ODI inning with a
standard deviation of $15.84$ runs per inning.
\begin{Schunk}
\begin{Sinput}
> bernoulli(avg = 24.12, sr = 3.48 * 100/6)
\end{Sinput}
\begin{Soutput}
$mean
[1] 164.3869
$sd
[1] 15.84270
\end{Soutput}
\end{Schunk}
\end{example}
\subsubsection{Poisson process}
An aside: one can also use a Poisson process to model a batsman's career.
This is because the probability of getting dismissed is
pretty small ($q \rightarrow 0$), and the number of deliveries a player faces
over his entire career ($n$) is pretty high.
The Poisson distribution can be used to model the counting of such rare events
(dismissals) with parameter $\lambda = nq$~\cite{ross:2002:probability}.
Here $\lambda$ can be interpreted as the average number of wickets
that a team will lose in $n$ balls. For example, a team of eleven Richards
will lose $300 \times 0.01919 = 5.76$ wickets on average, which explains
why his standard deviation is so low.
\subsubsection{Monte Carlo Simulation}
Another aside. The Monte Carlo simulation code for the probability model
proposed in this paper is listed in Appendix~\ref{appendix:montecarlo}
in \verb@R@.
Monte Carlo simulation can be used to verify the formula for Bernoulli runs
we have derived.
In other words, it provides another way to find the Bernoulli runs.
Also, as one refines the model it becomes difficult to obtain a closed-form
solution for the Bernoulli runs, and Monte Carlo simulation comes in handy
during such situations.
For example, it is straightforward to modify the code
to generate Bernoulli runs using the seven-sided die model presented in
Eq.~\eqref{eqn:seven-sided}. Thus any model can be simulated using
Monte Carlo.
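A minimal version of such a simulation is sketched below (the appendix code may differ in detail):
\begin{verbatim}
# Monte Carlo sketch of the coin-toss innings model.
simulate_innings <- function(avg, sr, balls = 300, wickets = 10) {
  r <- sr / 100; q <- r / avg
  runs <- 0; lost <- 0
  for (b in seq_len(balls)) {
    if (runif(1) < q) {          # tail: a wicket falls
      lost <- lost + 1
      if (lost == wickets) break
    } else {                     # head: r runs are scored
      runs <- runs + r
    }
  }
  runs
}

# Estimate Bernoulli runs for Richards by averaging many innings;
# the estimate should be close to 262.84.
set.seed(1)
mean(replicate(10000, simulate_innings(avg = 47, sr = 90.2)))
\end{verbatim}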
\section{Reward to Risk Ratio}
\href{http://www.espncricinfo.com/india/content/player/35263.html}
{Virender Sehwag}
has an $\mathrm{avg} = 34.64$ and $\mathrm{sr} = 103.27$ in ODI
matches\footnote{
The statistics for current players are up to date as of December 31, 2010.
}.
This leads to Bernoulli runs (mean) of $275.96$ and a
standard deviation of $42.99$. Thus, on average, Sehwag scores more runs
than Richards, but he is also riskier than Richards.
To quantify this, we borrow the concept of
\href{http://en.wikipedia.org/wiki/Sharpe_ratio}
{Sharpe Ratio} from the world of Financial Mathematics and we call it
Reward to Risk Ratio ($\mathrm{RRR}$).
\begin{definition}
The Reward to Risk Ratio ($\mathrm{RRR}$) for a \emph{batsman} is defined as:
\begin{align}
\mathrm{RRR} &= \frac{\mathbb{E}(Y) - c_{\text{batsman}}} {\sigma_{Y}}
\label{eqn:batsman:rrr}
\end{align}
and for a \emph{bowler} it is defined as:
\begin{align}
\mathrm{RRR} &= \frac{c_{\text{bowler}} - \mathbb{E}(Y)} {\sigma_{Y}}
\label{eqn:bowler:rrr}
\end{align}
where $\mathbb{E}(Y)$ is defined in Eq.~\eqref{eqn:bruns:mean} and
$\sigma_{Y}$ is defined in Eq.~\eqref{eqn:bruns:sd}. The constants
$c_{\text{batsman}}$ and $c_{\text{bowler}}$ are discussed below.
\end{definition}
\subsection{Constants}
The Duckworth-Lewis (D/L) method predicts an
\href{http://static.espncricinfo.com/db/ABOUT_CRICKET/RAIN_RULES/DL_FAQ.html}
{average score}\footnote{The pertinent question is No.13 in the
hyperlinked D/L FAQ.} of $235$ runs will be scored by a team in an ODI match.
Though D/L average score seems to be a good candidate for usage as the constant
in $\mathrm{RRR}$, we scale it before using it. The reason for scaling
is due to a concept named Value over Replacement player (VORP) which
comes from Baseball~\cite{wiki:vorp}.
Replacement player is a player who plays at the next rung below international
cricket\footnote{ In India, the replacement players play in Ranji Trophy.}.
A team full of replacement players will have no risk and hence no upside.
The baseball statisticians have set the scale factor for replacement players
to be $20\%$ worse than the international players. Thus for batsman
\begin{align}
c_{\text{batsman}} &= 0.8 \times 235 = 188 \label{eqn:batsman:c}
\end{align}
and for a bowler it is
\begin{align}
c_{\text{bowler}} &= 1.2 \times 235 = 282 \label{eqn:bowler:c}
\end{align}
Thus a team of replacement batsmen will end up scoring $188$ runs while
a team of replacement bowlers will concede $282$ runs.
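Building on the \verb@bernoulli()@ function sketched earlier, the reward to risk ratio can be computed as follows (a sketch; the released code may organize this differently):
\begin{verbatim}
# Reward to risk ratio, Eq. (batsman:rrr) and Eq. (bowler:rrr).
rrr <- function(avg, sr, role = c("batsman", "bowler")) {
  role <- match.arg(role)
  b <- bernoulli(avg, sr)
  if (role == "batsman") (b$mean - 0.8 * 235) / b$sd
  else                   (1.2 * 235 - b$mean) / b$sd
}

# Sehwag: avg 34.64, sr 103.27 -> roughly 2.05, matching the batsmen table.
rrr(avg = 34.64, sr = 103.27, role = "batsman")
\end{verbatim}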
We end this paper by listing Bernoulli runs, standard deviation and reward to
risk ratio for some of the Indian ODI cricketers of 2010.
\begin{table*}[hbt]
\caption{Bernoulli Runs for batsmen}
\label{tbl:batsmen}
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|}
\hline
\multicolumn{1}{|c|}{Name} &
\multicolumn{1}{|c|}{$\mathrm{avg}$} &
\multicolumn{1}{|c|}{$\mathrm{sr}$} &
\multicolumn{1}{|c|}{mean} &
\multicolumn{1}{|c|}{sd} &
\multicolumn{1}{|c|}{$\mathrm{RRR}$}
\\\hline\hline
Virender Sehwag & 34.64 & 103.27 & 275.96 & 42.99 & 2.05\\\hline
Sachin Tendulkar & 45.12 & 86.26 & 251.43 & 13.01 & 4.88\\\hline
Gautam Gambhir & 40.43 & 86.52 & 249.52 & 17.75 & 3.47\\\hline
Yuvraj Singh & 37.06 & 87.94 & 249.84 & 23.29 & 2.66\\\hline
Mahendra Singh Dhoni & 50.28 & 88.34 & 258.87 & 10.42 & 6.80\\\hline
Suresh Raina & 36.11 & 90.15 & 253.62 & 26.79 & 2.45\\\hline
Yusuf Pathan & 29.33 & 110.00 & 261.15 & 59.59 & 1.23\\\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[hbt]
\caption{Bernoulli Runs for bowlers}
\label{tbl:bowlers}
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|}
\hline
\multicolumn{1}{|c|}{Name} &
\multicolumn{1}{|c|}{$\mathrm{avg}$} &
\multicolumn{1}{|c|}{$\mathrm{econ}$} &
\multicolumn{1}{|c|}{mean} &
\multicolumn{1}{|c|}{sd} &
\multicolumn{1}{|c|}{$\mathrm{RRR}$}
\\\hline\hline
Zaheer Khan & 29.85 & 4.91 & 224.89 & 29.44 & 1.94\\\hline
Praveen Kumar & 33.57 & 5.07 & 237.30 & 25.56 & 1.75\\\hline
Ashish Nehra & 31.03 & 5.15 & 235.25 & 31.40 & 1.49 \\\hline
Harbhajan Singh & 32.84 & 4.30 & 206.19 & 15.46 & 4.90\\\hline
Yusuf Pathan & 34.06 & 5.66 & 258.45 & 34.59 & 0.68 \\\hline
Yuvraj Singh & 39.76 & 5.04 & 242.61 & 16.66 & 2.36\\\hline
\end{tabular}
\end{center}
\end{table*}
It is clear from Table~\ref{tbl:bowlers} that it is quite unfair to compare
Zaheer Khan with Harbhajan Singh: Zaheer usually operates in the manic
periods of the power plays and the slog overs, while Harbhajan bowls mainly in the
middle overs. But until we get more detailed statistics, the adjustments that go
with them have to wait.
\section*{Acknowledgments}
The statistics for the cricketers in this paper were taken from the
\href{http://www.espncricinfo.com}{\texttt{ESPNcricinfo}} website.
The paper was typeset using \LaTeX{} and calculations were performed using
\href{http://www.r-project.org}{\texttt{R}} statistical computing system.
The author thanks countless contributors who made the above software in
particular and open source software in general.
\bibliographystyle{plain}
\section{Introduction}
Integrating narrative text with visual evidence is a fundamental aspect of various investigative reporting fields such as sports journalism and intelligence analysis. Sports journalists generate stories about sporting events that typically generate tremendous amounts of data. These stories summarize the event and build support for their particular narrative (e.g. why a particular team won or lost) using the data as evidence. Intelligence analysts carry out similar tasks, collecting data as evidence in support of the narrative that they construct to inform decision makers~\cite{arcos2015intelligence}. Such a process would also be familiar to digital humanities scholars who routinely distill meta-narratives by automatic means from large corpora consisting of literary texts~\cite{moretti2013distant,jockers2013macroanalysis}. Support for linking visualizations of this information to the text is an underexplored area that could benefit each of these users.
When visual evidence is provided, it is often presented completely independently of the narrative text.
ESPN\footnote{espn.go.com}, for example, routinely presents journalistic articles on web pages that are separate from relevant videos, interactive visualizations, and box score tables that all present information about the same game event (See Figure \ref{fig:espn}). This approach may limit the reader's ability to interpret the writer's narrative in context.
For example, a reader might be interested in more detail surrounding a specific element described in the narrative text because she may not agree with the statement. Alternatively, the reader might also be interested in finding the writer's particular take on a specific player of interest or time in the game that she came across while exploring the data visualization. This bi-directional support for interpretation is currently lacking. We leverage the existing skill set of the writer and combine it with computational methods to automatically link the writer's narrative text to visualization elements in order to provide that context in both directions.
\begin{figure}[t!]
\centering
\includegraphics[width=.48\columnwidth, frame]{figures/text.png}
\includegraphics[width=.48\columnwidth, frame]{figures/box.png}
\includegraphics[width=.48\columnwidth, frame]{figures/multiple4.png}
\includegraphics[width=.48\columnwidth, frame]{figures/multiple3.png}
\caption{Sites such as www.espn.com provide the reader with several forms of content, but they are often presented independent of one another. Here we see typical narrative text on the upper left, box score in the upper right, and visualizations (shot chart and game flow time series) along the bottom. Each of these elements is shown on a separate web page on the ESPN site. All content from www.espn.com}
\label{fig:espn}
\end{figure}
\section{Related Work}
Narrative visualizations are data visualizations with embedded ``stories'' presenting particular perspectives using various embedding mechanisms~\cite{segel2010narrative}. Segel et al. \cite{segel2010narrative} discuss the utility of textual ``messaging'' and how author-driven approaches typically include heavy messaging while reader-driven approaches involve more interaction. Kosara and Mackinlay \cite{kosara2013storytelling} and Lee et al. \cite{lee2015more}, on the other hand, adopt a view of storytelling in visualization as primarily consisting of communication separate from exploratory analysis. This view matches the way journalists use various data sources including videos and notes to create compelling stories \cite{kosara2013storytelling}.
We attempt to provide equal flexibility to the reader and author. Our proposed method for coupling textual stories to data visualizations can not only assist writers in constructing their message and framing the data \cite{hullman2011visualization, kosara2013storytelling} but also create exploratory interaction mechanisms for readers. The linking of subjective narrative with objective data evidence can serve to reinforce author communication and give readers the ability to explore their own interpretations of the story and the event. This approach can be viewed as providing a ``close reading'' of the narrative, using the visualization as a means for providing the semantic context \cite{janicke2015close} as opposed to a distant reading where the actual text is replaced by abstract visual views \cite{moretti2013distant}.
We are not the first to investigate the coupling of text to visualization.
Kwon et al. \cite{kwon2014visjockey} directly, but manually, tie specific text elements to visualization views to support interpretation of the author's content. Others analyze text to automatically generate elements of narrative visualization such as annotations \cite{hullman2013contextifier, bryan2017temporal}. Dou et al. \cite{dou2012leadline} likewise analyze text, but from large-scale text corpora, to identify key ``event'' elements of who, what, when, and where, and use that information to construct a visual summary of the corpora and support exploration of it. We build on these approaches, automatically linking text to visualization elements, but focus on a single data-rich text document. In addition, we explore the use of links from the visualization back to the text.
\section{Sports Stories: Basketball}
To ground the discussion, we focus on the data-rich domain of sports, specifically basketball, for which we have a significant set of narratives and data published regularly online. Sports events produce tremendous amounts of data and are the subject of sports journalists and general public fans. Basketball is no exception. Data regarding player position and typical basketball ``statistics'' are collected for every game in the National Basketball Association (NBA)\footnote{http://stats.nba.com/}. These data typically include field goals (2pt and 3pt), assists, rebounds, steals, blocked shots, turnovers, fouls, and minutes played. The NBA has recently begun to track the individual positions of players throughout the entire game, leading to spatial data as well as various derived data such as the total number of times a player touches the ball (i.e., touches). The data is rich in space, time, and discrete nominal and quantitative attributes.
\section{Overview}
Our approach is based on a detailed analysis of narrative text to identify key elements of the story that can directly index into the visualizations (and vice versa) to create a \textit{deep coupling}. The system architecture is shown in Figure \ref{fig:overview}. Raw text is provided as input to the text processing module. This module analyzes the text to extract the four key narrative elements - who, what, when, and where (4 Ws). The coupler then uses these Ws to index from the text into the multimedia elements and vice versa. We focus solely on data visualization multimedia elements (charts, tables, etc.) as the supporting evidence; however, evidence such as video, images, and audio could be treated similarly.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.50\textwidth]{figures/architecture.png}
\end{center}
\caption{Software architecture for deep coupling of narrative text and visualization components.}
\label{fig:overview}
\end{figure}
\section{Identifying Story Elements}
A story is defined by characters, plot, setting, theme, and goal, which are also sometimes referred to as the five Ws: who, what, when, where, and why. These are the elements considered necessary to tell (or, in the case of an intelligence investigation, to uncover) a complete story \cite{arcos2015intelligence}.
In sports, there are clear mappings for these essential elements. In basketball, for example, there are players, coaches, referees, and teams (who), locations/spaces on the basketball court such as the 3-point line or inside the key (where), distinct times or time periods such as the 4th quarter, or 3-minutes left (when), and particular game ``stats'' such as shots, fouls, timeouts, touches, etc. (what). Our first step is to extract these elements.
We first segment the text into sentences and for each sentence, we seek to determine if it contains a reference to who, what, when, or where. Consider the following sentence from an Associated Press story from Game 3 of the 2017 NBA Finals between the Cleveland Cavaliers and the Golden State Warriors \footnote{http://www.espn.com/nba/recap?gameId=400954512}:\textit{
The Warriors trailed by six with three minutes left before Durant, criticized for leaving Oklahoma City last summer to chase a championship, brought them back, scoring 14 in the fourth.} From the second clause in this sentence, we would expect to extract a Who (Durant), a What (14 points), a When (the fourth), and no Where elements.
\noindent \textbf{Who:}
Extracting the story characters is straightforward, especially for a particular domain. In the NBA case, we can obtain a complete list of all teams, players, coaches, and referees in the league. A more general solution, however, could utilize a named-entity identifier such as the Stanford Core NLP Named Entity Recognizer \footnote{https://nlp.stanford.edu/software/CRF-NER.shtml}.
\noindent \textbf{What:} The ``What'' of a basketball story, or any sports story in general, refers to the game statistics of interest. While it would seem straightforward to look for mentions of numbers and/or specific stat names since numbers and names are generally used to describe these stats (e.g. 25 points, 10 rebounds, five 3-pointers), this is not always the case. Consider the many ways to refer to the points scored stat: Durant scored 25 points, Durant added 15, or even, Durant outscored the entire Cavalier team in the second quarter.
To identify stats, we take inspiration from the digital humanities \cite{franzosi2010quantitative} and build a ``story grammar'' to parse the narrative. The stats story grammar is a set of manually designed production rules for creating and/or matching variations of mentions of stats. While the grammar-based approach is appropriate for identifying most mentions of stats, it is only as effective as the grammar specification and a new variation of a stat mention will not be recognized. We therefore use the grammar to generate training data for an SVM classifier for stats.
We run the sentences of 13 stories/blog posts through the stats story grammar to label the sentences as STAT or NO STAT. We then use these positive (STAT) sentence-classification pairs as training data for the SVM classifier. Specifically, for each sentence that contains a statistic, we create a 10-word window around the statistic to use as a feature vector. All such windows are provided as training examples in order to learn the general textual context that surrounds mentions of stats. After training on approximately 600 such windows, the classifier achieves 98\% accuracy on a test data set of approximately 150 sentences and it finds additional statistic mentions that were not classified correctly by the grammar.
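To make this classification step concrete, the sketch below shows, in R, how such a classifier could be trained. The toy windows, labels, and the simple bag-of-words representation are our own illustrative choices; they are not the features or the data used in the actual system.
\begin{verbatim}
library(e1071)   # provides svm()
# Toy 10-word windows; in practice these are produced by the story grammar.
win <- c("durant added 15 in the fourth to put the game away",
         "curry scored 25 points on ten shots in the first half",
         "james grabbed 11 rebounds to go with seven assists tonight",
         "the warriors were criticized for their slow start last night",
         "the crowd was loud as the teams took the floor",
         "coach kerr praised the defensive effort after the final buzzer")
lab <- factor(c("STAT", "STAT", "STAT", "NO_STAT", "NO_STAT", "NO_STAT"))
# Simple bag-of-words features over each window (an assumption of ours).
vocab <- sort(unique(unlist(strsplit(win, " "))))
X <- t(sapply(strsplit(win, " "), function(w) as.numeric(vocab %in% w)))
colnames(X) <- vocab
fit <- svm(x = X, y = lab, kernel = "linear", scale = FALSE)
table(predicted = predict(fit, X), truth = lab)
\end{verbatim}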
\noindent \textbf{When and Where:} We currently extract both times and locations using only a grammar with acceptable results because there are a limited number of ways to specify times and places in basketball. A classifier similar to that developed for stats could also be generated for both times and places to boost performance if necessary.
\subsection{Narrative Text to Visualization Coupling}
The text analysis stage results in a list of Ws for each sentence in the narrative, stored as a .json object with W-value pairs.
This information is then provided to the ``coupling'' component. The coupler associates each sentence with the appropriate visualization(s) (i.e., those that support the specific Ws in the sentence) and initializes the visualization to show the data for the particular person (who), stat (what), time (when), and location (where) mentioned in the sentence.
In Figure \ref{fig:annotated}, the user has hovered over the sentence highlighted on the left side. The mentioned player is highlighted in the box score on the right (Kevin Durant) and his player profile is shown at the bottom left. The specific mentioned time in the game is marked in the time series visualization (3 minutes left), the specific quarter that is mentioned is selected in the time series visualization (fourth quarter), and the shot chart in the upper left of the visualization shows the shots attempted (both made and missed) by Kevin Durant in the 4th quarter. This visualization initialization provides the context for the writer's interpretation of what was important, and allows the reader to ask their own questions, such as: How many shots did Durant have to take in the 4th quarter to score 14 points?
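As an illustration of this per-sentence representation, the Ws extracted from the Durant sentence above could be serialized as follows. The field names and structure are our own sketch rather than the system's exact schema.
\begin{verbatim}
library(jsonlite)
ws <- list(
  who   = list("Kevin Durant"),
  what  = list(list(stat = "points", value = 14)),
  when  = list("three minutes left", "fourth quarter"),
  where = list()
)
toJSON(ws, auto_unbox = TRUE, pretty = TRUE)
\end{verbatim}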
\subsection{Interaction Design}
No guidelines or principles currently exist for the design of tightly coupled reader interaction with narrative text and visualization concurrently. We chose a multi-view or ``partitioned poster'' layout \cite{segel2010narrative} to allow for all visualizations to be accessible in a single view along with the narrative text. Interaction is guided by basic multi-view coordination principles \cite{north2000snap}. We assume each data table consists of items described by attributes - both of which can correspond directly to one of the four Ws of a story. Thus a visualization can be directly parameterized by one or more of the 4Ws from a sentence.
We coordinate text views with visualization views and vice versa. User interaction (e.g., mouse over, selection) with specific sentences results in the corresponding Ws being brushed in the linked visualizations. Likewise, user interaction with the visualization components results in the corresponding Ws being highlighted in the narrative text. For example, in Figure \ref{fig:annotated}, users can select a row (i.e., a player) in the box score tables and that player's data is then highlighted (brushing and linking) in all other views (e.g., shots in the shot chart and mentions in the time series). That player's profile is also displayed in detail at the bottom left (details on demand). All visualizations support tooltip details where appropriate.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.82\linewidth]{figures/annotatedExample.png}
\caption{Coupling from text to visualization. The user has selected a passage that reads: The Warriors trailed by six with three minutes left before Durant, criticized for leaving Oklahoma City last summer to chase a championship, brought them back, scoring 14 in the fourth. Story excerpt from the Associated Press. }
\label{fig:annotated}
\end{figure*}
\subsubsection{Navigation Between Text and Visualization}
The reader must be able to navigate from the narrative text to the visual representations and vice versa. As the reader browses the text, sentences are highlighted to show the current sentence of interest and the visualizations update to reflect the corresponding Ws from the highlighted text. The user can click on a sentence to indicate that he is interested in exploring the data context around that aspect of the narrative, thus, we freeze the state of the visualizations.
The user can then interact with the visualizations, using mouse-overs to get further details. Any direct interaction, such as a click on an element or brush to select a time series range, results in updating all visualizations and the highlighted narrative text given the new selected Ws. The user can now continue to explore the visualizations or can navigate back to the narrative text at any time to examine the writer's commentary with regards to the selected Ws.
\section{Case Study: Sports Narratives}
To gain an initial understanding of the potential opportunities and pitfalls that this approach presents, we conducted an informal case study with a domain expert user with 4.5 years of experience as a media producer for Bleacher Report\footnote{www.bleacherreport.com} and multiple collegiate sports media outlets. We gave the user a brief introduction to the interface and asked him to explore a game recap of Game 3 of the 2017 NBA finals and provide feedback using a ``Think Aloud'' protocol.
Our user immediately grasped the navigation concept and was easily able to move from reading/scanning the narrative text to the visualization and back to the narrative text. His general impression was that there was great value in being able to more deeply explore the specific narrative presented by the author: \textit{I love the idea of being able to experience the article, but really dial-in on a specific part of it myself.} He was also intrigued by the potential to more deeply tie the text phrasing to the visuals. \textit{If the writer emphasized the number of points Curry had from the paint, I'd be interested in seeing that visually, and also exploring for myself where else he may have scored from. I'd like to select areas like the paint, midrange, and 3-pointers and beyond to see the shots he took in those areas.}
Our user also articulated several areas for improvement. For example, he found the box score filtering by selecting a player row to be extremely useful, but wanted even more control. \textit{I'd like to select both a player row and a stat column so I could more specifically look for a specific stat for that player in the other charts like the time series view.} He also suggested that a more subtle tie from the visualization to the narrative text might be more effective. Currently, data items selected through direct interaction with the visualization cause the sentences that mentioned those items to be highlighted in the narrative. This was sometimes overwhelming because some sentences may have directly mentioned specific players and their stats while others may have simply provided commentary or mentioned the player in a secondary role. A deeper analysis of the text to understand the saliency of the sentence with regards to the selected data should be explored. Finally, our user desired a more granular link to the text. \textit{It would be useful if the text highlighted the types of elements (player, stats, times, etc.) with different colors}. Future versions could employ color encodings to distinguish the 4Ws in the text.
\section{Conclusion and Future Work}
We have presented a novel approach to automatically couple narrative text to visualizations for data-rich stories and demonstrated the approach in the basketball story domain.
This serves only as an introductory step for using text analysis to tightly integrate visualization with narrative text. There are several exciting avenues of research that remain to be explored.
First, where is the fifth W? We have not attempted to analyze the text to understand the ``Why'' of the narrative. A more nuanced narrative analysis is necessary to take semantics and topics into account when coupling to the visualization.
Our approach is carried out completely offline. One can imagine, however, a real-time version that operated online as the journalist writes the story - suggesting a set of initialized candidate visualizations to support the most recently completed sentence. Our case study user was intrigued by the possibility that the tool could influence how journalists write. \textit{I wonder if a writer might use specific phrasing as a tool for leading the reader to particular elements of the game?} It would be interesting to study how real-time interactive visualization coupling might influence the writing style of journalists.
This approach should transfer well to any sport where stories typically consist of discussions of players, teams, the field (or pitch, court, etc.), times, and game stats. We suspect it can also be extended to data-rich news stories as well, however, these will require more sophisticated algorithms for extracting the Ws given the more general language and context. Finally, a thorough evaluation of the approach is needed. We are in the process of conducting a crowdsourced study to examine the effects of this type of coupling on the reading experience.
\balance{}
\bibliographystyle{SIGCHI-Reference-Format}
\section{Introduction}
Quantitative analysis in sports analytics has received a great deal of attention in recent years. Traditional statistical research for sports analytics mainly focused on game result prediction, such as predicting the number of goals scored in soccer matches \citep{dixon1997modelling, karlis2003analysis,baio2010bayesian}, and the basketball game outcomes \citep{carlin1996improved,caudill2003predicting,cattelan2013dynamic}. More recently, fast development in game tracking technologies has greatly improved the quality and variety of collected data sources \citep{albert2017handbook}, and in turn substantially expanded the role of statistics in sports analytics, including performance evaluation of players and teams \citep{cervone2014pointwise,franks2015characterizing,wu2018modeling,hu2020bayesian,hu2021cjs}, commentator's in-game analysis and coach's decision making \citep{fernandez2018wide,sandholtz2019measuring}.
The home field advantage is a key concept that has been studied across different sports for over a century. The first public attempt to statistically quantify the home field advantage was about 40 years ago \citep{physicaleducation}, where \citet{schwartz1977home} studied the existence of home advantage in several sports and found that its effect was most pronounced in the indoor sports such as ice hockey and basketball.
Since then, statisticians and psychologists have dug their heels into trying to map out this anomaly, and there is some consensus on it. The percentages found through different studies in general float around 55-60\% of an advantage to the home team (wherein 50\% would mean there is no advantage either way). \citet{courneya1992home} found that different sports in general held different advantage percentages: 57.3\% for American football versus 69\% for soccer, for example. In a study of basketball home court advantage, \citet{calleja2018brief} found that the visiting team scored 2.8 points less, attempted 2 fewer free throws, and made 1 less free throw than their season averages. Season length is also shown to have a negative correlation with the strength of home field advantage, where sports with 100 games or more per season were shown to have significantly less advantage than a sport with fewer than 50 games per season \citep{Metaanalysis}. This finding can be explained by the fact that each game becomes less important on average as the number of games increases. Some studies have shown that the home field advantage becomes less significant in the modern era. For example, \citet{Metaanalysis} found that the home field advantage was the strongest in the pre-1950s, as opposed to any other twenty-year block leading up to 2007, the year he conducted the study. In another work, \citet{sanchez2009analysis} showed that between 2003 and 2013, the home field advantage for the UEFA Champions League decreased by 1.8\%, supporting the claim that as time goes on, the home field advantage decreases to a certain level and remains still. In general, the home field advantage tends to deteriorate as the athletes play longer on an away field.
In principle, there are four major factors associated with the home field advantage: crowd involvement, travel fatigue, familiarity of facilities, and referee bias that benefits the home teams. Several studies \citep{schwartz1977home,greer1983spectator} have shown a positive correlation between the size of the crowd and the effect of the home field advantage, an advantage that can get as high as 12\% over a team's opponents. Travel fatigue is another important factor for home field advantage \citep{Metaanalysis,calleja2018brief,NFLtimezone}, since the athletes have an overall reduction in mean wellness, including a reduction of sleep, self-reported feelings of jet lag and energy reduction; the importance of sleep and proper recovery cannot be overstated, and it can be difficult to do either of those properly in unfamiliar settings (e.g., a hotel room or a bus). Furthermore, familiarity of facilities gives the home team a strong advantage over a visiting team \citep{MovingNewStadium}. A home team not only does not have to travel to an unfamiliar facility, be away from home, and spend hours transporting themselves; they also get to use their own facility, locker room, and field. The value of knowing well the nooks and crannies of a facility should not be underestimated. Finally, giving rule advantages to the home team is very common in certain sports \citep{courneya1992home}. For example, in baseball the home team gets the advantage of batting last, giving them the final opportunity to score a run in the game. In hockey, the last line change goes to the home team.
Soccer has proven to have one of the highest home field advantages among major sports in a majority of current research. \citet{MovingNewStadium} states that the home field advantage in soccer is equivalent to 0.6 goals per game and the visual cues that come from being intimately familiar with a facility can be exponentially helpful in fast paced sports such as soccer, which could perhaps explain why the home field advantage is less pronounced in slower, stopping sports such as baseball. \citet{sanchez2009analysis} observes that the better a soccer team is, the more often the home field advantage appears, after studying home field advantage across groups of variably ranked soccer teams.
Despite the vast amount of existing work on home field advantage quantification in soccer and other sports \citep{leitner2020no,benz2021estimating,fischer2021does}, very little is known about its {\it causal effect} on team performance; and it is our goal in this paper to fill this gap. In particular, we study soccer games by analyzing a data set collected from the English Premier League (EPL) 2020-2021 season, where 380 games were played, that is, two games between each pair of 20 teams. We choose eleven team-level summary statistics as the main outcomes that represent team's performance on defensive and offensive sides, and the referee bias. We then develop a new causal inference approach for assessing the causal effect of home field advantage on these outcomes. More details about our data application is provided in Section \ref{sec:data}.
In causal inference literature (for observational studies), causal effect estimation of a binary treatment is a classic topic that has been intensively studied \citep{licausal,imbens2004nonparametric}.
Over the past few decades, a number of methods in both statistics and econometrics have been proposed to identify and estimate the average treatment effects of a particular event/treatment \citep[see a recent overview in][]{athey2017estimating} under the assumption of ignorability (also known as unconfounded treatment assignment) \citep{rosenbaum1983central}, including regression imputation, (augmented) inverse probability weighting \citep[see e.g., ][]{horvitz1952generalization, rosenbaum1983central, robins1994estimation, bang2005doubly, cao2009improving}, and matching \citep[see e.g., ][]{rubin1973matching, rosenbaum1989optimal, heckman1997matching, hansen2004full, rubin2006matched, abadie2006large, abadie2016matching}. However, the data structure in soccer games is distinguished from those in the existing literature, leading to {\it two unique challenges} in identifying the causal effects of home field advantage. First, the EPL data set is obtained neither from a randomized trial nor from observational studies as is standard in the causal inference literature, but instead is based on a collection of pair-wise matches between every two teams. For one season, each team will have one home game and one away game with each of the other 19 opponents. All matches are pre-scheduled. Thus, propensity-score-based methods can hardly gain efficiency with such a design. Secondly, for each match, there is one team with home field advantage and the other without, i.e., there are no matches on a neutral field. In fact, this is the common practice for other major professional sports leagues such as the NBA, NFL, and MLB. In other words, there is technically no control group where both teams have no home field advantage. Hence, we cannot rely on matching-based methods to estimate the causal effects. To overcome these difficulties, in this paper, we establish a hierarchical causal model to characterize the underlying true causal effects at the league and team levels, and propose a novel causal estimation approach for home field advantages with inference procedures.
Our proposed method is unique in the following aspects. First, the idea of {\it pairing home and away games} for solving the causal inference problem is novel. In fact, this idea and our proposed approach are widely applicable to general sports applications such as football, baseball, and basketball studies, and provide a valuable alternative to the existing literature that mainly relies on propensity scores. Secondly, under the proposed hierarchical model framework, both league-level and team-level causal effects are identifiable and can be conveniently estimated. Moreover, our inference procedure is developed based on linear model theory that is accessible to a wide audience, including first-year graduate students in statistics. Thirdly, our real data analysis results reveal several interesting findings from the English Premier League, which may provide new insights to practitioners in the sports industry. It is our hope that the data application presented in this paper, as well as the developed statistical methodology, can reach a wide audience, be useful for educational purposes in statistics and data science classes, and stimulate new ideas for more causal analysis in sports analytics.
The rest of the paper is organized as follows. In Section \ref{sec:data}, we
give an overview of our motivating data application and introduce several representative summary outcomes. In Section~\ref{sec:model}, we introduce the causal inference framework and propose a hierarchical causal model for both team-level and league-level home field advantage effect estimation. Extensive simulation studies are presented in Section~\ref{sec:simu} to investigate the empirical performance of our approach.
We apply our method to analyze the 2020-2021 English Premier League in Section~\ref{sec:app} and conclude with a discussion of future directions in Section~\ref{sec:disc}.
\section{Motivating Data}\label{sec:data}
We are interested in studying the causal effect of home field advantage on in-game performance during the English Premier League 2020-2021 season. As obvious as it may sound, the home field advantage is in fact not evident at first glance through the usual descriptive statistics such as the game outcomes and the number of goals scored. For example, among the 380 games in that season, the number of home wins is 144 (37.89\%), which is even smaller than the number of away wins, 153 (40.26\%). Among the total of 1024 goals scored in that season, 514 (50.19\%) were scored by the home team, and 510 (49.81\%) by the away team.
To better understand this phenomenon, we study a data set provided by Hudl \& Wyscout, a company that excels at soccer game scouting and match analysis. The data is collected from 380 games played by 20 teams in EPL, and includes
a number of statistics collected from each game that range across the defensive and offensive capabilities of the home and away teams and players.
We choose to focus on eleven in-game statistics as follows:
\begin{itemize}
\item \textbf{Attacks w/ Shot} - The number of times that an offensive team makes a forward move towards their goal (a dribble, a pass, etc) followed by a shot;
\item \textbf{Defence Interceptions} - The number of times that a defending team intercepts a pass;
\item \textbf{Reaching Opponent Box} - The number of times that a team moves the ball into their opponent's goal box;
\item \textbf{Reaching Opponent Half} - The number of times that a team moves the ball into their opponent's half;
\item \textbf{Shots Blocked} - The number of times that a defending team deflects a shot on goal to prevent scoring;
\item \textbf{Shots from Box} - The number of shots taken from the goal box;
\item \textbf{Shots from Danger Zone} - The number of shots taken from the "Danger Zone", which is a relative area in the center of the field approximately 18 yards or less from the goal;
\item \textbf{Successful Key Passes} - The number of passes that would have resulted in assists if the resulting shot had been made;
\item \textbf{Touches in Box} - Number of passes or touches that occur within the penalty area;
\item \textbf{Expected Goals (XG)} - The average likelihood a goal will be scored given the position of the player over the course of a game;
\item \textbf{Yellow Cards} - The number of yellow cards given to a team in a game.
\end{itemize}
Table \ref{tab:summary_statistics} contains the eleven summary statistics from the raw data, where each row shows the statistic, its primary role in the game (defense, offense, or referee), the means for the home team and for the away team, respectively, and the overall standard deviation. These in-game statistics are chosen because they are most relevant to studying the home field advantage and also sufficient to cover different aspects of the soccer games (e.g., team offensive and defensive performance).
\begin{table}[th]
\centering
\caption{Selected summary statistics for the 2020-2021 English Premier League.}
\label{tab:summary_statistics}
\begin{tabular}{ c c c c c }
\toprule
\textbf{Statistic} & \textbf{Role} & \textbf{Mean Home} & \textbf{Mean Away} & \textbf{Overall SD}\\
\midrule
Attacks w/ Shot & Offense & 11.547 & 9.979 & 4.910 \\
Defence Interceptions & Defense & 42.639 & 44.637 & 11.287 \\
Reaching Opponent Box & Offense & 14.779 & 13.097 & 6.135\\
Reaching Opponent Half & Offense & 58.561 & 54.968 & 12.566\\
Shots Blocked & Defense & 3.382 & 2.95 & 2.273 \\
Shots from Box & Offense & 7.5 & 6.434 & 3.594 \\
Shots from Danger Zone & Offense & 5.058 & 4.271 & 2.622\\
Successful Key Passes & Offense & 3.584 & 3.037 & 2.328\\
Touches in Box & Offense & 19.468 & 17.216 & 8.883 \\
Expected Goals (XG) & Offense & 1.577 & 1.395 & 0.867\\
Yellow Cards & Referee & 1.447 & 1.474 & 1.151\\
\bottomrule
\end{tabular}
\end{table}
To better understand the role of these selected statistics, we choose three of them (representing offense, defense, and referee), calculate the difference in these statistics between home and away games for each team, and present their distributions in Figure~\ref{fig:reaching} to Figure~\ref{fig:yellow}.
The offensive statistic is Reaching Opponent Half, and its distribution for different teams is illustrated in Figure \ref{fig:reaching}. The difference between home and away teams, in theory, should be positive overall if the home field advantage is present, negative if there is actually an advantage towards the away team, and 0 if there is no advantage either way. From the picture, we can see a clear trend for home advantage, e.g., 17 of the 20 teams have a positive mean, which implies that they reach their opponents' half more often on average when they play at home as opposed to away. This finding is in fact fairly consistent across all offensive variables. The three teams that do not show evidence of home field advantage for Reaching the Opponent's Half are Aston Villa, Chelsea, and Southampton. The top three teams from that season (Manchester City, Manchester United, and Liverpool) exhibit similar patterns in the picture.
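The net differences shown in Figure~\ref{fig:reaching} to Figure~\ref{fig:yellow} can be computed along the following lines. This is only a sketch: the data frame \texttt{epl} and its column names are hypothetical placeholders for the match-level data, and pairing each team's home and away fixtures against the same opponent is one natural way to form the differences.
\begin{verbatim}
library(dplyr); library(tidyr); library(ggplot2)
# 'epl' is a hypothetical data frame: one row per team per match, with
# (hypothetical) columns team, opponent, venue ("home"/"away"), reach_opp_half.
net_diff <- epl %>%
  select(team, opponent, venue, reach_opp_half) %>%
  pivot_wider(names_from = venue, values_from = reach_opp_half) %>%
  mutate(diff = home - away)            # home-game value minus away-game value
ggplot(net_diff, aes(x = team, y = diff)) +
  geom_boxplot() +
  geom_hline(yintercept = 0, linetype = "dashed") +
  coord_flip()
\end{verbatim}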
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{fig/Reaching_opp_half.png}
\caption{Net difference of Reaching Opponent Half for 20 teams.}
\label{fig:reaching}
\end{figure}
The Defense Interceptions is chosen as a representation for the defensive statistic; and we present its distribution of the difference between home and away teams in Figure \ref{fig:Interception}. Since defence interceptions negatively impact a team, the home field advantage will be seen here if, oppositely to the offensive statistic, the distribution of the difference is skewed negatively. Five teams do not show evidence of home field advantage: Chelsea, Crystal Palace, Everton, Fulham, and Southampton. Interestingly, the top three teams from that season (Manchester City, Manchester United, and Liverpool) all have negative means but show a different level of variability in this statistic.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{fig/Defence_Interceptions_1.png}
\caption{Net difference of Defence Interceptions for 20 teams.}
\label{fig:Interception}
\end{figure}
Finally, the referee bias is represented by the team-level number of yellow cards in each match. The difference between a team's yellow card calls at home versus away is shown in Figure~\ref{fig:yellow}. If there were a home field advantage effect in yellow cards given, the boxplot would trend negatively. In contrast to the offensive and defensive statistics, the distributions for the majority of teams are centered around the zero line, which is an indication that the referee calls are similar whether the team is at home or away. Out of 20 teams, 14 have means near zero, and the other 6 have a mix of positive and negative means.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{fig/Yellow_Cards_1.png}
\caption{Net difference of Yellow Cards for 20 teams.}
\label{fig:yellow}
\end{figure}
\section{Method}\label{sec:model}
\subsection{Framework and Assumptions}
Suppose there are $n$ different teams in the league, denoted by $\{T_1,\cdots, T_n\}$, with a total number of $N\equiv n(n-1)$ matches between any pair of two teams, $T_i$ and $T_j$, where $i\not = j$ and $i,j \in \{1,\cdots,n\}$. Without loss of generality, we assume $n\geq 3$.
Therefore, each team $T_i$ has $(n-1)$ matches with home field advantage and another $(n-1)$ without.
Define a treatment indicator $\delta_{i}=1$ if team $T_i$ has the home field advantage in a match against team $T_j$ for $j\not = i$, and $\delta_{i}=0$ otherwise.
By definition, we have $\delta_{i} + \delta_{j} =1$ for any match between $T_i$ and $T_j$.
The main outcome is defined as the net difference in the outcome of interest, i.e., $Y_{i,j} =(c_i - c_j)\in\mathbb{R}$, where $c_i$ and $c_j$ are one of the eleven statistics that we described in Section \ref{sec:data} for team $T_i$ (with home field advantage) and team $T_j$ (without home field advantage), respectively. Following the potential outcome framework \citep[see e.g., ][]{rubin1974estimating}, we define the potential outcome $Y^*(\delta_{i}=a,\delta_{j}=1-a)$ as the outcome of interest that would be observed after the match between team $T_i$ and team $T_j$, where $a=1$ or $a=0$ corresponds to that team $T_i$ or team $T_j$ has the home field advantage, respectively. As standard in the causal inference literature \citep[see e.g., ][]{rosenbaum1983central}, we make the following assumptions for any pairs of $i \neq j$.
\noindent \textbf{(A1)}. Stable Unit Treatment Value Assumption:
\begin{equation*}
\begin{split}
Y_{i,j}= \delta_{i}(1- \delta_{j})Y^*(\delta_{i}=1,\delta_{j}=0) +(1- \delta_{i}) \delta_{j} Y^*(\delta_{i}=0,\delta_{j}=1).
\end{split}
\end{equation*}
\noindent \textbf{(A2)}. Ignorability:
\begin{equation*}
\{Y^*(\delta_{i}=1,\delta_{j}=0),Y^*(\delta_{i}=0,\delta_{j}=1)\}\independent \{\delta_{i},\delta_{j}\}.
\end{equation*}
\noindent \textbf{(A3)}. Non-monotonicity: If team A dominates team B and team B dominates team C, team C can still dominate team A with a certain probability.
Assumptions (A1) and (A2) are standard in the causal inference literature \citep[see e.g.,][]{athey2017estimating}, to ensure that the causal effects are estimable from observed data. By game design, each team competes against every other team once with and once without home field advantage; hence the ignorability assumption holds automatically in our study. Assumption (A3) is to rule out the correlation between different matches that involve the same team. In reality, there are style rivalries between teams in soccer games. Hence the transitive relation does not always hold.
\subsection{Hierarchical Causal Modeling and Estimation}
In this section, we detail the proposed hierarchical causal model and its estimation and inference procedures. Specifically, we are interested in team level and league level estimators of
causal effects. To this end, we define the causal effect of home field advantage associated with team $T_i$ as
\begin{eqnarray}\label{def_betai}
\beta_i = {\mbox{E}}_j \{Y^*(\delta_{i}=1,\delta_{j}=0)-Y^*(\delta_{i}=0,\delta_{j}=0)\},~~\text{for}~i=1,\ldots,n,
\end{eqnarray}
and the causal effect of home field advantage for the entire league as
\begin{eqnarray}\label{def_delta}
\Delta = {\mbox{E}}_i \{\beta_i\}.
\end{eqnarray}
Given finite number of teams in the league, we are interested in two estimands, the average home field advantage of the league $\Delta\equiv \sum_{i=1}^n \beta_i /n $ and the team-specific home field advantage $\beta_i$.
Yet, since there is no match in neutral field in major professional sports, we always have $\delta_{i} + \delta_{j} =1$, i.e., $Y^*(\delta_{i}=0,\delta_{j}=0)$ can never be observed. To address this difficulty and estimate $\beta_i$ from the observational studies, we propose to decompose the outcome function into two parts, one corresponding to the home field advantage and the other representing the potential outcome in a hypothetical neutral field, via a mixed two-way ANOVA design. To be specific, the outcome of a match between team $T_i$ (with home field advantage) and team $T_j$ (without home field advantage) is modeled by
\begin{eqnarray}\label{main_model_beta}
Y_{i,j}=\alpha_{i,j}+ \beta_{i}+\epsilon_{i,j},
\end{eqnarray}
where $\alpha_{i,j}= {\mbox{E}} \{Y^*(\delta_{i}=0,\delta_{j}=0)\}$ is the expected net outcome between team $T_i$ and $T_j$ in a hypothetical neutral field (i.e., if there is no team taking home field advantage) based on Assumption (A3),
and $\epsilon_{i,j}$ is random noise with $\mathcal{N}(0,\sigma^2_0)$. The factorization in \eqref{main_model_beta} enables us to unravel the home field advantage at team level by utilizing the pair-wise match design in soccer games that we will discuss shortly. Assumption (A3) is required for \eqref{main_model_beta} so that we can allow independent baseline effect $\alpha_{i,j}$ for any pair of two teams in a hypothetical neutral field. In addition, we adopt a hierarchical model \citep{berry2013bayesian,chu2018bayesian,geng2020mixture} to characterize the relationship between the home field advantage of individual teams and that of the whole league as
\begin{eqnarray}\label{main_model_alpha}
\beta_{i}\sim \mathcal{N}(\Delta,\sigma^2),
\end{eqnarray}
where $\sigma^2$ describes the variation of home field advantages across different teams. Conversely, based on \eqref{main_model_beta}, we have the model for the match between team $T_j$ (with home field advantage) and team $T_i$ (without home field advantage) as
\begin{equation}\label{main_model_op}
Y_{j,i}=-\alpha_{i,j}+ \beta_{j}+\epsilon_{j,i},
\end{equation}
where the net score without home field advantage satisfies $\alpha_{i,j} = - \alpha_{j,i}$. Thus, combining \eqref{main_model_beta} and \eqref{main_model_op}, we have \begin{equation}\label{hte}
Y_{i,j} + Y_{j,i}= \beta_{i}+\beta_{j}+\epsilon_{i,j}+\epsilon_{j,i}.
\end{equation}
In other words, $\alpha_{i,j}$'s are treated as nuisance parameters and do not need to be estimated in our model. By repeating \eqref{hte} over all paired matches between team $T_i$ and team $T_j$,
for $i\not = j$ and $i,j \in \{1,\cdots,n\}$, we have
\begin{equation}\label{est3}
\underbrace{\begin{bmatrix}
1 & 1 & 0&\cdots &0&0&0\\
1 & 0 & 1&\cdots &0&0&0\\
\vdots & \vdots & \vdots &\cdots &\vdots &\vdots &\vdots \\
0 & 0 & 0&\cdots &1&0&1\\
0 & 0 & 0&\cdots &0&1&1\\
\end{bmatrix}}_{\bm{H}_{N/2\times n}}
\underbrace{\begin{pmatrix}
{\beta}_1\\
{\beta}_2\\
{\beta}_3\\
\vdots\\
{\beta}_{n-2}\\
{\beta}_{n-1}\\
{\beta}_n
\end{pmatrix}}_{\bm{\beta}_{n\times 1}}
+ \underbrace{\begin{pmatrix}
\epsilon_{1,2} + \epsilon_{2,1}\\
\epsilon_{1,3} + \epsilon_{3,1}\\
\vdots\\
\epsilon_{(n-2),n} + \epsilon_{n,(n-2)}\\
\epsilon_{(n-1),n} + \epsilon_{n,(n-1)}
\end{pmatrix}}_{\bm{\epsilon}_{N/2\times 1}}
=
\underbrace{\begin{pmatrix}
Y_{1,2} + Y_{2,1}\\
Y_{1,3} + Y_{3,1}\\
\vdots\\
Y_{(n-2),n} + Y_{n,(n-2)}\\
Y_{(n-1),n} + Y_{n,(n-1)}
\end{pmatrix}}_{\bm{Y}_{N/2\times 1}}.
\end{equation}
This motivates us to estimate the team-specific home advantage effects through the above linear equation system. Specifically, since $\bm{H}$ is a full rank matrix under $n\geq 3$, we use the following estimate of $\bm{\beta}=[\beta_1, \cdots,\beta_n]^\top$,
\begin{equation*}
\widehat{\bm{\beta}} = (\bm{H}^\top \bm{H})^{-1} \bm{H}^\top \bm{Y}.
\end{equation*}
Based on the definition that $\Delta=\sum_i \beta_i /n$, we have an estimator of $\Delta$ as
\begin{equation}\label{est1}
\widehat{\Delta} = \sum_i \widehat{\beta}_i /n,
\end{equation}
where $\widehat{\beta}_i$ is the $i$-th element in $\widehat{\bm{\beta}} $. Following the standard theory for linear regression, we establish the normality of $\widehat{\bm{\beta}} $ in the following proposition.
\begin{prop}
\label{beta_variance}
Assume the noise terms $\epsilon_{i,j}$ are independent and identically distributed (i.i.d.) Gaussian random variables, i.e., $\epsilon_{i,j}\sim \mathcal{N} (0,\sigma_0^2)$. Then, under (A1)-(A3) with $n\geq3$, we have
\begin{equation*}
\widehat{\bm{\beta}} - \bm{\beta} \sim \mathcal{N}_{n}(\bm{0},\Sigma_{\bm{\beta}}),
\end{equation*}
where $\bm{\beta}=[\beta_1,\cdots,\beta_n]^\top$ denotes the true causal effect for $n$ teams. The covariance matrix $\Sigma_{\bm{\beta}}$ can be estimated by $\widehat{\sigma}^2_{\beta}(\bm{H}^{\top}\bm{H})^{-1}$, where $\widehat{\sigma}^2_{\beta}=2||\bm{H}\widehat{\bm{\beta}}-\bm{Y}||^2_{2}/N$.
\end{prop}
Proposition \ref{beta_variance} holds for every $n \geq 3$ since the normality of $\widehat{\beta}$ is exact. A two-sided $(1-\alpha)$ marginal confidence band of $\bm{\beta}$ can be obtained as
\begin{equation}
\widehat{\bm{\beta}}\pm z_{\alpha/2}\sqrt{\text{diag}\{\widehat{\sigma}^2_{\beta}(\bm{H}^{\top}\bm{H})^{-1}\}},
\end{equation}
where $z_{\alpha/2}$ denotes the upper $\alpha/2$ quantile of the standard normal distribution,
using the estimation of $\Sigma_{\bm{\beta}}$ from Proposition \ref{beta_variance}. Since $\widehat{\Delta}$ is the sample mean of $\widehat{\beta}_i$ as indicated in \eqref{est1}, the normality of the estimated average treatment effect can be obtained immediately from Proposition \ref{beta_variance}
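To make the estimation procedure concrete, the following \texttt{R} sketch generates synthetic data from the models above and computes $\widehat{\bm{\beta}}$, the plug-in covariance of Proposition~\ref{beta_variance}, and the marginal confidence bands; the variable names and data-generating values are ours and serve only as an illustration.
\begin{verbatim}
set.seed(1)
n <- 20                                    # number of teams
beta <- rnorm(n, mean = 1, sd = 0.3)       # true team-level effects
pairs <- t(combn(n, 2))                    # unordered pairs (i, j) with i < j
m <- nrow(pairs)                           # N/2 = n(n - 1)/2 paired matches
a <- rnorm(m, 0, 2)                        # alpha_{ij}: neutral-field net ability
Yij <-  a + beta[pairs[, 1]] + rnorm(m)    # T_i at home against T_j
Yji <- -a + beta[pairs[, 2]] + rnorm(m)    # T_j at home against T_i
Z <- Yij + Yji                             # the alpha_{ij} terms cancel
H <- matrix(0, m, n)                       # design matrix of the stacked system
H[cbind(1:m, pairs[, 1])] <- 1
H[cbind(1:m, pairs[, 2])] <- 1
beta.hat <- drop(solve(crossprod(H), crossprod(H, Z)))      # (H'H)^{-1} H'Z
sigma2.beta <- 2 * sum((Z - H %*% beta.hat)^2) / (n * (n - 1))
V.beta <- sigma2.beta * solve(crossprod(H))
ci.beta <- cbind(lower = beta.hat - 1.96 * sqrt(diag(V.beta)),
                 upper = beta.hat + 1.96 * sqrt(diag(V.beta)))
\end{verbatim}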
\begin{prop}
\label{prop:sigma_delta}
Suppose that the same set of assumptions as in Proposition \ref{beta_variance} holds. Let $\Delta = \sum_i \beta_i/n$. Then
\begin{equation*}
\sqrt {n}\{\widehat{\Delta} -\Delta\} \sim \mathcal{N}(0,\sigma^2),
\end{equation*}
where an unbiased estimator for the variance $\sigma^2$ is \begin{equation}\label{eq:est}
\widehat{\sigma}^2 ={1\over n -1} \sum_{i=1}^n (\widehat{\beta}_i - \widehat{\Delta})^2 - \sum_{i=1}^n \text{diag}\{\widehat{\sigma}^2_{\beta}(\bm{H}^{\top}\bm{H})^{-1}\}_i /n
\end{equation}
and $\text{diag}(W)_i$ is the $i$-th diagonal element of matrix $W$.
\end{prop}
The first part of Proposition \ref{prop:sigma_delta} is obvious since $\widehat{\Delta}$ is a linear combination of $\widehat{\bm{\beta}}$, which follows a multivariate normal distribution. However, the estimation of $\sigma^2$ is non-trivial because the off-diagonal entries in the covariance estimator for $\Sigma_{\beta}$ do not perform well due to the high dimensionality, i.e., the number of free parameters in $\Sigma_{\beta}$
is $O(n^2)$ and our data have a sample size of the same order. Therefore, the usual quadratic-form estimator for $\sigma^2$, $n^{-1} \widehat{\sigma}^2_{\beta} \bm{1}^{\top} (\bm{H}^{\top}\bm{H})^{-1}\bm{1}$, is biased, where $\bm{1}$ is the vector of $n$ ones. To solve this problem, the key observation is to use the law of total variance as suggested in the hierarchical modeling literature \citep{berry2013bayesian,chu2018bayesian,geng2020mixture} by considering
\begin{equation*}
\sigma^2=\text{Var}\{{\mbox{E}}_i(\widehat{\beta_i}|\bm{O})\}= \text{Var}_i(\widehat{\beta_i})-{\mbox{E}}_i\{\text{Var}(\widehat{\beta_i}|\bm{O})\},
\end{equation*}
where $\bm{O}$ is the observed data. The first term $\text{Var}_i(\widehat{\beta_i})$ can be estimated by the sample variance $ \sum_i(\widehat{\beta}_i - \widehat{\Delta})^2/(n -1)$, and the second term can be estimated by $\sum_i\text{diag}\{\widehat{\sigma}^2_{\beta}(\bm{H}^{\top}\bm{H})^{-1})\}_i /n$. Both estimators are unbiased. Hence \eqref{eq:est} holds.
The two-sided $(1-\alpha)$ confidence interval of $\Delta$ thus can be constructed as
\begin{equation}
\widehat{\Delta}\pm z_{\alpha/2} \sqrt{\widehat{\sigma}^2/n},
\end{equation}
based on Proposition \ref{prop:sigma_delta}.
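Continuing the sketch above, the league-level estimate, the variance estimator in \eqref{eq:est}, and the resulting interval are obtained as follows; the truncation at zero is a small-sample safeguard of our own, not part of the formula.
\begin{verbatim}
Delta.hat  <- mean(beta.hat)
sigma2.hat <- var(beta.hat) - mean(diag(V.beta))   # law-of-total-variance estimator
ci.Delta   <- Delta.hat + c(-1, 1) * 1.96 * sqrt(max(sigma2.hat, 0) / n)
\end{verbatim}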
\begin{remark}
The normality of $\beta_i$ in \eqref{main_model_alpha} can be relaxed to other distributions with mean $\Delta$ and variance $\sigma^2$, which leads to asymptotic normality of $\widehat{\Delta}$, i.e., $\sqrt {n}\{\widehat{\Delta} -\Delta\} \stackrel{d}{\rightarrow} \mathcal{N}(0,\sigma^2)$ as $n\to \infty$, by the central limit theorem. Yet, in reality, we have a finite and usually fairly small number of teams in one league, such as $n=20$ for the EPL. Therefore we choose to keep the normality assumption.
\end{remark}
\section{Simulation}\label{sec:simu}
\subsection{Simulation Setup}
In this section, simulation studies are conducted to evaluate the empirical performance of the estimators of both $\bm{\beta}$ and $\Delta$. We consider two data generation scenarios for $\alpha_{ij}$ in our simulation. In the first scenario, we generate the ability difference between team $T_i$ and team $T_j$ as $\alpha_{ij}\overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,2^2)$ for $1\leq i<j\leq n$, and set $\alpha_{ji}=-\alpha_{ij}$. In the second scenario, we first generate the ability of each team $\text{Ab}_{i}\sim \mathcal{N}(0,2^2),i=1,\ldots,n$, and then set $\alpha_{ij}=\text{Ab}_{i}-\text{Ab}_{j}$ for $i=1,\ldots,n,\, j=1,\ldots,n$. We fix $\Delta=1$ for all the simulation studies, $\beta_i\overset{\text{i.i.d.}}{\sim} \mathcal{N}(1,0.3^2)$, and $\epsilon_{ij}\overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,\sigma^2_0), i=1,\ldots,n,\, j=1,\ldots,n$. We choose three different values for the variance of $\epsilon_{ij}$, $\sigma^2_0=0.5,1,2$, and four different values for the number of teams, $n=10,20,40,80$. In total, we generate $1,000$ independent replicates for each scenario. The performance of the estimates is evaluated by the bias, the coverage probability (CP), the sample variance of the 1,000 estimates (SV), and the mean of the 1,000 variance estimates (MV) in the following way; take $\Delta$ as an example:
\begin{equation*}
\begin{split}
&\text{Bias} = \frac{1}{1000} \sum_{i=1}^{1000}
\left(\widehat{\Delta}^{(i)} - \Delta^{(0)}\right) , \\
&\text{CP} =
\frac{1}{1000}\sum_{i=1}^{1000} 1\left\{\Delta^{(0)}\in (\Delta^{(i)}_{\alpha/2},\Delta^{(i)}_{1-\alpha/2})\right\},\\
&\text{SV} = \frac{1}{1000-1}
\sum_{i=1}^{1000}
\left(\widehat{\Delta}^{(i)} -\bar{\widehat{\Delta}} \right)^2,\\
&\text{MV} = \frac{1}{1000}
\sum_{i=1}^{1000}
\widehat{\text{Var}}(\Delta^{(i)}) ,
\end{split}
\end{equation*}
where $\Delta^{(0)}$ is the true value for $\Delta$, $\widehat{\Delta}^{(i)}$, $ (\Delta^{(i)}_{\alpha/2},\Delta^{(i)}_{1-\alpha/2})$, and $\widehat{\text{Var}}(\Delta^{(i)})$ are the point estimate, $(1-\alpha)$-level confidence bands, and variance estimate from the $i$th replicated simulation data, respectively. We set $\alpha = 0.05$ throughout the simulation.
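A rough sketch of one cell of this study (Scenario 1 with $n=20$ and $\sigma^2_0=1$) is given below; it reuses \texttt{pairs}, \texttt{H}, \texttt{m}, and \texttt{n} from the earlier sketch, and, as a simplification of ours, intervals with a non-positive variance estimate are counted as non-covering.
\begin{verbatim}
reps <- 1000
est <- cover <- numeric(reps)
for (r in seq_len(reps)) {
  b  <- rnorm(n, 1, 0.3)                      # beta_i ~ N(Delta = 1, 0.3^2)
  a  <- rnorm(m, 0, 2)                        # Scenario 1 ability differences
  Z  <- ( a + b[pairs[, 1]] + rnorm(m)) +
        (-a + b[pairs[, 2]] + rnorm(m))
  bh <- drop(solve(crossprod(H), crossprod(H, Z)))
  Dh <- mean(bh)
  Vb <- 2 * sum((Z - H %*% bh)^2) / (n * (n - 1)) * solve(crossprod(H))
  s2 <- var(bh) - mean(diag(Vb))
  est[r]   <- Dh
  cover[r] <- (s2 > 0) && (abs(Dh - 1) <= 1.96 * sqrt(s2 / n))
}
c(bias = mean(est) - 1, CP = mean(cover), SV = var(est))
\end{verbatim}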
\subsection{Simulation Results}
We first examine the simulation results for $\Delta$, as well as the
variance estimator of $\Delta$. Table \ref{tab:delta_results} summarizes the performance of the $\Delta$ estimator in different data generation scenarios, and reports the bias, coverage probability, sample variance
of the 1,000 estimates and mean of the 1,000 variance estimates. From this table, it is
clear that our proposed estimator for $\Delta$ has a very small bias that decreases quickly as $n$ increases. The coverage probabilities are also very close to the desired $95\%$ nominal coverage level, especially when $n \geq 20$. Note that the standard error of the estimated coverage probabilities is $\sqrt{.05\times.95/1,000}=0.0069$. The last two columns also confirm the accurate variance estimation for $\widehat{\Delta}$ under all the simulation scenarios; and the estimation accuracy becomes better as $n$ increases.
\begin{table}[!thp]
\centering
\caption{Summary of simulation results for $\widehat{\Delta}$. (Column (1): mean bias, (2): coverage probabilities for estimators, (3): sample variance of the 1,000 estimates, and (4): mean of the 1,000 variance estimates).}\label{tab:delta_results}
\begin{tabular}{cclcccc}
\toprule
& & & (1)& (2) & (3)& (4)\\
\midrule
Scenario 1 & $n=10$& $\sigma_0^2=0.5$&-0.0016&0.899&0.0143&0.0143\\
& & $\sigma_0^2=1$&-0.0042&0.913&0.0195&0.0198\\
& & $\sigma_0^2=2$&0.0073&0.893&0.0323&0.0345\\
\midrule
& $n=20$& $\sigma_0^2=0.5$&-0.0006&0.923&0.0061&0.0055\\
& & $\sigma_0^2=1$&-0.0059&0.914&0.0075&0.0073\\
& & $\sigma_0^2=2$&0.0055&0.950&0.0092&0.0106\\
\midrule
& $n=40$& $\sigma_0^2=0.5$&0.0002&0.928&0.0027&0.0025\\
& & $\sigma_0^2=1$&0.0023&0.941&0.0030&0.0030\\
& & $\sigma_0^2=2$&-0.0002&0.936&0.0039&0.0038\\
\midrule
& $n=80$& $\sigma_0^2=0.5$&-0.0013&0.947&0.0012&0.0012\\
& & $\sigma_0^2=1$&0.0004&0.949&0.0013&0.0013\\
& & $\sigma_0^2=2$&0.0016&0.947&0.0015&0.0015\\
\midrule
Scenario 2 & $n=10$& $\sigma_0^2=0.5$&0.0036&0.882&0.0149&0.0139\\
& & $\sigma_0^2=1$&0.0032&0.910&0.0192&0.0208\\
& & $\sigma_0^2=2$&-0.0004&0.913&0.0321&0.0344\\
\midrule
& $n=20$& $\sigma_0^2=0.5$&0.0011&0.927&0.0057&0.0055\\
& & $\sigma_0^2=1$&0.0003&0.928&0.0074&0.0072\\
& & $\sigma_0^2=2$&0.0016&0.933&0.0100&0.0107\\
\midrule
& $n=40$& $\sigma_0^2=0.5$&0.0019&0.949&0.0025&0.0025\\
& & $\sigma_0^2=1$&0.0008&0.940&0.0028&0.0029\\
& & $\sigma_0^2=2$&0.0027&0.945&0.0036&0.0038\\
\midrule
& $n=80$& $\sigma_0^2=0.5$&-0.0009&0.954&0.0018&0.0018\\
& & $\sigma_0^2=1$&0.0004&0.949&0.0013&0.0013\\
& & $\sigma_0^2=2$&-0.0010&0.955&0.0014&0.0015\\
\bottomrule
\end{tabular}
\end{table}
Next we examine the simulation results for the $\bm{\beta}$'s, as well as the
coverage probabilities of their confidence intervals. Figures \ref{fig:bias_beta1} to \ref{fig:bias_beta4} show boxplots of all the $\bm{\beta}$ estimators under the different data generation processes to illustrate the estimation performance. It is
clear from these figures that the estimator is essentially unbiased in all scenarios considered in the simulation. Furthermore, we present the empirical coverage probabilities of the confidence intervals in Figure~\ref{fig:cover_beta}. All the coverage probabilities are fairly close to the nominal level of 0.95, especially when $n \geq 20$, and they generally become more accurate
as the sample size $n$ increases.
In summary, the simulation results confirm the excellent performance (e.g., bias, variance estimation, and coverage probabilities) of our method in estimating both $\Delta$ and the $\bm{\beta}$'s. Note that our simulation scenarios also include the situation where the $\alpha_{ij}$ are not generated i.i.d. Our estimator still performs well in this case because our estimation procedure requires neither modeling assumptions on nor estimation of $\alpha_{ij}$. The simulation results validate the theoretical results, especially the variance formulas, in Propositions \ref{beta_variance} and \ref{prop:sigma_delta}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.95\textwidth]{fig/box_N10.pdf}
\caption{Simulation: Bias of $\bm{\beta}$'s for $n=10$.}
\label{fig:bias_beta1}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.95\textwidth]{fig/box_N20.pdf}
\caption{Simulation: Bias of $\bm{\beta}$'s for $n=20$.}
\label{fig:bias_beta2}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.95\textwidth]{fig/box_N40.pdf}
\caption{Simulation: Bias of $\bm{\beta}$'s for $n=40$.}
\label{fig:bias_beta3}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.95\textwidth]{fig/box_N80.pdf}
\caption{Simulation: Bias of $\bm{\beta}$'s for $n=80$.}
\label{fig:bias_beta4}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.95\textwidth]{fig/point.pdf}
\caption{Simulation: Coverage Probabilities for $\bm{\beta}$'s in different scenarios.}
\label{fig:cover_beta}
\end{figure}
\section{Analysis of EPL Data}\label{sec:app}
We focus on the eleven selected in-game statistics collected from the 20 teams in the 2020--2021 English Premier League, as described in Section \ref{sec:data}. We treat each statistic in turn as the main outcome and apply the proposed causal inference method to calculate the team-specific home field advantage $\widehat{\beta}_i$ and the league-wide home field advantage $\widehat{\Delta}$, and we then analyze the implications of the home field advantage on a team-by-team and statistic-by-statistic basis.
Figure~\ref{fig:cover_beta1} shows the 95\% confidence intervals for the $\widehat{\beta}_i$; a value of $\beta_i=0$ would indicate no home field advantage. Brighton clearly stands out here, with a wide confidence interval across all statistics.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\textwidth]{fig/All_plots_beta.pdf}
\caption{EPL data analysis: estimated $\bm{\beta}$ for all in-game statistics.}
\label{fig:cover_beta1}
\end{figure}
In Figure~\ref{fig:cover_beta1}, each of the eleven statistics has between 3 and 6 teams out of 20 with a significant $\widehat{\beta}_i$. While this is clearly not a majority, it does provide some information on the type of statistic that retains the home field advantage, which we discuss further in the next section. Conversely, we can examine the significance of the $\widehat{\beta}_i$ by team, which adds to the picture. Figure~\ref{fig:pvalue_beta} shows the p-values of the $\widehat{\beta}_i$'s for each team (x-axis and color) and for each in-game statistic (denoted by shape).
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\textwidth]{fig/Pvalues_all_teams.pdf}
\caption{EPL data analysis: p-values for $\bm{\beta}$ estimators of 20 teams.}
\label{fig:pvalue_beta}
\end{figure}
While a majority of the statistics are above the standard 0.05 threshold for significance, some teams have a majority of their in-game statistics being significant. We now look at the makeup of those particular teams and consider what makes them susceptible to the home field advantage phenomenon. The teams with the highest number of significant statistics were Fulham, Brighton, Newcastle United and Wolverhampton Wanderers. This prompted us to ask what these teams had in common, that the other teams did not, that caused them to reap the benefits of the home field advantage. The most obvious and telling common factor amongst the teams was their records: they all were ranked in the bottom $50\%$ at the end of the season, with Fulham being one of the three relegated teams in the English Premier League that year. This could imply that lower-performing teams reap higher benefits from the home field advantage than teams that excel. Teams that perform in the top rankings of the English Premier League can dominate their opponents regardless of the match location; perhaps their talent and skill prevail such that the difference in their statistics between home and away matches is nearly indiscernible. In contrast, teams that are not as talented or skilled retain every advantage of being at home, from increased confidence due to crowd involvement, to familiarity with the facilities, to a lack of travel fatigue.
The $\widehat{\Delta}$ values quantify the estimated home field advantage that a specific statistic exhibits across the league. Significant $\widehat{\Delta}$ values give us insight into the statistics that have the highest impact on the home field advantage and how much they contribute. Table~\ref{tab:delta_estimate} shows the estimated $\Delta$ values for each of the in-game statistics, their estimated standard deviations, and the corresponding p-values.
\begin{table}[!t]
\centering
\caption{EPL data analysis: estimated $\Delta$ for eleven summary statistics.}\label{tab:delta_estimate}
\begin{tabular}{ c c c c }
\toprule
\textbf{Statistic} & $\widehat{\Delta}$ & $\widehat{\sigma}$ & \textbf{P-value} \\
\midrule
Attacks w/ Shot & 1.568 & 0.556 & 0.005 \\
\hline
Defence Interceptions & -1.997 & 1.200 & 0.096 \\
\hline
Reaching Opponent Box & 1.732 & 0.608 & 0.004\\
\hline
Reaching Opponent Half & 3.592 & 0.949 & 0.000\\
\hline
Shots Blocked & 0.350 & 0.329 & 0.287\\
\hline
Shots from Box & 1.066 & 0.379 & 0.005 \\
\hline
Shots from Danger Zone & 0.786 & 0.279 & 0.005\\
\hline
Successful Key Passes & 0.489 & 0.237 & 0.039\\
\hline
Touches in Box & 2.253 & 1.043 & 0.031 \\
\hline
XG & 0.232 & 0.092 & 0.011 \\
\hline
Yellow Cards & -0.026 & 0.126 & 0.834 \\
\bottomrule
\end{tabular}
\end{table}
Eight of the eleven statistics we chose to analyze show a significant net increase (or decrease) in favor of the home team.
We notice that offense-based statistics, such as attacks with shot and reaching the opponent box, are significant at $\alpha=0.05$, while defense-based statistics, such as shots blocked and defence interceptions, are not, nor is the referee-based statistic, yellow cards. This could be an indication that teams retain an offensive advantage when they compete at their home field. Returning to the causes of the home field advantage and examining the statistics, it appears that familiarity with the field affects the outcomes of the offensive statistics. For example, attacks with shot, reaching the opponent's half or box, shots from the box or danger zone, and successful key passes are statistics based on the players' ability to get into scoring position, which increases their likelihood of scoring. The goal statistic itself did not show a significant home field advantage, which suggests that the home team may not score more goals than the away team, but that the opportunities created by the causal factors behind the home field advantage are significantly greater. That is, the quality of play is better for the home team than for the away team.
\section{Discussion}\label{sec:disc}
Our research found significant evidence of the home field advantage in various soccer statistics. The home field advantage resides more heavily in offense-based statistics than in defense- or referee-based statistics. This does not speak to the importance of defense relative to offense, but rather indicates that the home field advantage phenomenon is not as prominent on the defensive side of soccer. We found no home field advantage exhibited by the officials of the game, a positive indication of unbiasedness in the sport. We elected to analyze in depth the eleven chosen statistics based on their $\widehat{\Delta}$ and $\widehat{\bm{\beta}}$ values. The statistics that showed significance had less to do with the stand-out statistics of soccer (goals, free kicks, fouls, etc.) and more to do with the quality of play. For example, successful key passes and attacks with shots are statistics based on the quality of the action. What we can derive from this is that the home team is not necessarily benefiting in terms of the obvious, but in the details: the home team is familiar enough with its field that it gets more shots off of attacks and passes, and reaches the opponent's box more often. While this may not show up in the box score, it can nevertheless give the home team an edge.
Further, we discovered that teams that performed poorly in the season and had lower rankings in the English Premier League retained higher home field advantage. Less successful teams would likely be more confident and comfortable playing in their home environment, while the highly successful teams would be confident playing anywhere.
There are several possible directions for further investigation. First, studying the specific factors that make up the home field advantage, such as crowd involvement, familiarity with the facilities, and travel fatigue, is an interesting future direction. Furthermore, developing multiple comparison procedures and multivariate response models would merit future research from both methodological and applied perspectives. In addition, our current model depends on the normality assumption for the responses; proposing distribution-free estimators and inference procedures would broaden the applications to different areas. From an applied point of view, comparing the in-game statistics of the Covid-19 season with those of previous years would help to quantify the contribution of crowd involvement to the home field advantage.
\bibliographystyle{agsm}
\section{Introduction}
All graphs in this paper
are (finite or infinite) undirected graphs
with no loops and no multiple edges.
A \emph{groupoid} is the pair $(V,*)$
of a nonempty set $V$ and a binary operation $*$ on $V$.
The notion of travel groupoids was introduced
by L. Nebesk{\'y} \cite{TG2006} in 2006.
First, let us recall the definition of travel groupoids.
A \emph{travel groupoid} is a groupoid $(V,*)$
satisfying the following axioms (t1) and (t2):
\begin{itemize}
\item[(t1)]
$(u * v) * u = u$ (for all $u,v \in V$),
\item[(t2)]
if $(u * v) * v = u$, then $u = v$ (for all $u,v \in V$).
\end{itemize}
A travel groupoid is said to be \emph{simple}
if the following condition holds.
\begin{itemize}
\item[{\rm (t3)}]
If $v*u \neq u$, then $u*(v*u)=u*v$ (for any $u,v \in V$).
\end{itemize}
A travel groupoid is said to be \emph{smooth}
if the following condition holds.
\begin{itemize}
\item[{\rm (t4)}]
If $u*v=u*w$, then $u*(v*w)=u*v$ (for any $u,v,w \in V$).
\end{itemize}
A travel groupoid is said to be \emph{semi-smooth}
if the following condition holds.
\begin{itemize}
\item[{\rm (t5)}]
If $u*v=u*w$, then $u*(v*w)=u*v$ or $u*((v*w)*w)=u*v$
(for any $u,v,w \in V$).
\end{itemize}
Let $(V,*)$ be a travel groupoid,
and let $G$ be a graph.
We say that \emph{$(V,*)$ is on $G$}
or that \emph{$G$ has $(V,*)$} if $V(G) = V$ and
$E(G) = \{\{u,v \} \mid u, v \in V, u \neq v,
\text{ and } u * v = v \}$.
Note that every travel groupoid is on exactly one graph.
In this paper, we introduce the notion of
T-partition systems on a graph
and give a characterization of travel groupoids
on a graph.
This paper is organized as follows:
In Section 2, we define the right translation system of a groupoid
and characterize travel groupoids
in terms of the right translation systems of groupoids.
Section 3 is the main part of this paper.
We introduce T-partition systems on a graph
and give a characterization of travel groupoids on a graph
in terms of T-partition systems.
In Section 4, we consider
simple, smooth, and semi-smooth travel groupoids,
and give characterizations for them.
\section{The right translation system of a travel groupoid}
\begin{Defi}
Let $(V,*)$ be a groupoid.
For $u,v \in V$, let
$V^R_{u,v}$ be
the set of elements whose right translations send the element $u$
to the element $v$, i.e.,
\[
V_{u,v} = V^R_{u,v} := \{ w \in V \mid u * w = v \}.
\]
Then we call the system $(V_{u,v} \mid (u,v) \in V \times V)$
the \emph{right translation system} of the groupoid $(V,*)$.
\qed
\end{Defi}
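For a finite groupoid, the right translation system can be computed mechanically from the operation table. The following Python sketch is a minimal illustration, in which the operation is represented by a dictionary \texttt{op} with \texttt{op[(u, v)]} $= u * v$.
\begin{verbatim}
def right_translation_system(V, op):
    """The system (V_{u,v}), where V_{u,v} = { w in V : u * w = v }."""
    return {(u, v): {w for w in V if op[(u, w)] == v}
            for u in V for v in V}
\end{verbatim}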
\begin{Rem}
For each element $w$ in a groupoid $(V,*)$,
the \emph{right translation map} $R_w:V \to V$
is defined by $R_w(u):=u*w$ for $u \in V$.
Then the set $V^R_{u,v}$ is given by $V^R_{u,v} = \{ w \in V \mid R_w(u)=v \}$.
We can also consider the set $V^R_{u,v}$ defined above
as the inverse image of $\{v\}$
through the left translation map $L_u:V \to V$ of the groupoid $(V,*)$
defined by $L_u(w):=u*w$ for $w \in V$, i.e.,
$V^R_{u,v} = L_u^{-1}(\{v\})$.
\end{Rem}
\begin{Lem}\label{lem:s1}
Let $(V,*)$ be a groupoid.
Then,
$(V,*)$ satisfies the condition {\rm (t1)}
if and only if
the right translation system $(V_{u,v} \mid (u,v) \in V \times V)$
of $(V,*)$
satisfies the following property:
\begin{itemize}
\item[{\rm (R1)}]
For any $u, v \in V$,
if $V_{u,v} \neq \emptyset$, then $u \in V_{v,u}$.
\end{itemize}
\end{Lem}
\begin{proof}
Let $(V,*)$ be a groupoid satisfying the condition {\rm (t1)}.
Take any $u,v \in V$ such that $V_{u,v} \neq \emptyset$.
Take an element $x \in V_{u,v}$.
Then $u*x=v$.
By (t1), we have $(u*x)*u=u$.
Therefore, we have $v*u=u$ and so $u \in V_{v,u}$.
Thus the property (R1) holds.
Suppose that
the right translation system $(V_{u,v} \mid (u,v) \in V \times V)$
of a groupoid $(V,*)$ satisfies the property (R1).
Take any $u,v \in V$.
Then there exists a unique element $x \in V$ such that $v \in V_{u,x}$.
Since $V_{u,x} \neq \emptyset$,
it follows from the property (R1) that $u \in V_{x,u}$ and so $x*u=u$.
Since $v \in V_{u,x}$, we have $u*v=x$.
Therefore, we have $(u*v)*u=u$.
Thus $(V,*)$ satisfies the condition (t1).
\end{proof}
\begin{Lem}\label{lem:s2}
Let $(V,*)$ be a groupoid.
Then,
$(V,*)$ satisfies the condition {\rm (t2)}
if and only if
the right translation system $(V_{u,v} \mid (u,v) \in V \times V)$
of $(V,*)$
satisfies the following property:
\begin{itemize}
\item[{\rm (R2)}]
For any $u, v \in V$ with $u \neq v$,
$V_{u,v} \cap V_{v,u} = \emptyset$.
\end{itemize}
\end{Lem}
\begin{proof}
Let $(V,*)$ be a groupoid satisfying the condition {\rm (t2)}.
Suppose that $V_{u,v} \cap V_{v,u} \neq \emptyset$
for some $u,v \in V$ with $u \neq v$.
Take an element $x \in V_{u,v} \cap V_{v,u}$.
Then $u*x=v$ and $v*x=u$.
Therefore, we have $(u*x)*x=u$,
which is a contradiction to the property (t2).
Thus the property (R2) holds.
Suppose that
the right translation system $(V_{u,v} \mid (u,v) \in V \times V)$
of a groupoid $(V,*)$ satisfies the property (R2).
Suppose that
there exist $u,v \in V$ with $u \neq v$
such that $(u*v)*v=u$.
Let $x:=u*v$. Then $v \in V_{u,x}$.
Since $(u*v)*v=u$, we have $x * v = u$ and so $v \in V_{x,u}$.
Therefore $v \in V_{u,x} \cap V_{x,u}$,
which is a contradiction to the property (R2).
Thus $(V,*)$ satisfies the condition (t2).
\end{proof}
The following gives a characterization of travel groupoids
in terms of the right translation systems of groupoids.
\begin{Thm}\label{thm:main0}
Let $(V,*)$ be a groupoid.
Then,
$(V,*)$ is a travel groupoid
if and only if
the right translation system of $(V,*)$
satisfies the properties {\rm (R1)} and {\rm (R2)}.
\end{Thm}
\begin{proof}
The theorem follows from Lemmas \ref{lem:s1} and \ref{lem:s2}.
\end{proof}
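For a finite set $V$, the equivalence in Theorem \ref{thm:main0} can be checked mechanically. The following Python sketch tests the axioms (t1) and (t2) directly, and the properties (R1) and (R2) on a right translation system computed as in the previous sketch; by Theorem \ref{thm:main0}, the two checks agree on every finite groupoid.
\begin{verbatim}
def satisfies_t1_t2(V, op):
    """Axioms (t1) and (t2) of a travel groupoid."""
    t1 = all(op[(op[(u, v)], u)] == u for u in V for v in V)
    t2 = all(not (op[(op[(u, v)], v)] == u and u != v)
             for u in V for v in V)
    return t1 and t2

def satisfies_r1_r2(V, rts):
    """Properties (R1) and (R2) of a right translation system
    rts[(u, v)] = V_{u,v}."""
    r1 = all((not rts[(u, v)]) or (u in rts[(v, u)])
             for u in V for v in V)
    r2 = all(rts[(u, v)].isdisjoint(rts[(v, u)])
             for u in V for v in V if u != v)
    return r1 and r2
\end{verbatim}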
\section{T-partition systems on a graph}
We introduce the notion of T-partition systems on a graph
to characterize travel groupoids on a graph.
For a vertex $u$ in a graph $G$,
let $N_G[u]$ denote the closed neighborhood of $u$ in $G$, i.e.,
$N_G[u] := \{u\} \cup \{v \in V \mid \{u,v\}\in E \}$.
\begin{Defi}
Let $G=(V,E)$ be a graph.
A \emph{T-partition system} on $G$
is a system
$$\mathcal{P} = (V_{u,v} \subseteq V \mid (u,v) \in V \times V)$$
satisfying the following conditions:
\begin{itemize}
\item[{\rm (P0)}]
$\mathcal{P}_u := \{V_{u,v} \mid v \in N_G[u] \}$ is a partition of $V$
\quad
(for any $u \in V$);
\item[{\rm (P1a)}]
$V_{u,u} = \{u\}$
\quad
(for any $u \in V$);
\item[{\rm (P1b)}]
$v \in V_{u,v}$
$\iff$
$\{u, v\} \in E$
\quad
(for any $u, v \in V$ with $u \neq v$);
\item[{\rm (P1c)}]
$V_{u,v} = \emptyset$
$\iff$
$\{u, v\} \not\in E$
\quad
(for any $u, v \in V$ with $u \neq v$);
\item[{\rm (P2)}]
$V_{u,v} \cap V_{v,u} = \emptyset$
\quad
(for any $u, v \in V$ with $u \neq v$).
\qed
\end{itemize}
\end{Defi}
\begin{Rem}
Let $\mathcal{P}=(V_{u,v} \mid (u,v) \in V \times V)$
be a T-partition system on a graph $G$.
It follows from the conditions (P0) and (P1c) that,
for any $u, v \in V$,
there exists the unique vertex $w$ such that $v \in V_{u,w}$.
\end{Rem}
\begin{Defi}
Let $\mathcal{P}=(V_{u,v} \mid (u,v) \in V \times V)$
be a T-partition system on a graph $G$.
For $u,v \in V$,
let $f_u(v)$ be the unique vertex $w$ such that $v \in V_{u,w}$.
We define a binary operation $*$ on $V$ by
$u*v:=f_u(v)$.
We call $(V,*)$ the \emph{groupoid associated with} $\mathcal{P}$.
\qed
\end{Defi}
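For a finite graph, the groupoid associated with a T-partition system can also be computed mechanically. In the following Python sketch, a T-partition system is represented by a dictionary \texttt{P} with \texttt{P[(u, w)]} $= V_{u,w}$; the uniqueness of the vertex $w$ below is guaranteed by the conditions (P0) and (P1c).
\begin{verbatim}
def associated_groupoid(V, P):
    """u * v is the unique w with v in V_{u,w} (see (P0) and (P1c))."""
    op = {}
    for u in V:
        for v in V:
            (w,) = [x for x in V if v in P[(u, x)]]  # unique such vertex
            op[(u, v)] = w
    return op
\end{verbatim}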
\begin{Rem}
It follows from definitions that the right translation system of the
groupoid associated with a T-partition system $\mathcal{P}$ on a graph $G$
is the same as $\mathcal{P}$.
\end{Rem}
\begin{Lem}\label{lem:VPS2TG}
Let $G$ be a graph, and
let $\mathcal{P}$
be a T-partition system on $G$.
Then, the groupoid associated with $\mathcal{P}$
is a travel groupoid on $G$.
\end{Lem}
\begin{proof}
It follows from the properties (P1a), (P1b), and (P1c) in the definition of a T-partition system on a graph that a T-partition system $\mathcal{P}=(V_{u,v} \mid (u,v) \in V \times V)$ satisfies the property (R1).
Moreover, the condition (P2) is the same as the property (R2).
By Theorem \ref{thm:main0},
$(V,*)$ is a travel groupoid on $G$.
Now we show that $(V,*)$ is on the graph $G$.
Let $G_{(V,*)}$ be the graph which has $(V,*)$.
We show that $G_{(V,*)}=G$.
Take any edge $\{u,v\}$ in $G_{(V,*)}$.
Then, we have $u*v=v$. Therefore $v \in V_{u,v}$.
Thus $\{u,v\}$ is an edge in $G$, and so $E(G_{(V,*)}) \subseteq E(G)$.
Take any edge $\{u,v\}$ in $G$.
Then, we have $v \in V_{u,v}$.
Therefore $u*v=v$. Thus $\{u,v\}$ is an edge in $G_{(V,*)}$,
and so $E(G) \subseteq E(G_{(V,*)})$.
Hence we have $G_{(V,*)}=G$.
\end{proof}
\begin{Lem}\label{lem:TG2VPS}
Let $G$ be a graph, and
let $(V,*)$ be a travel groupoid on $G$.
Then, the right translation system
of $(V,*)$
is a T-partition system on $G$.
\end{Lem}
\begin{proof}
Let $(V_{u,v} \mid (u,v) \in V \times V)$ be
the right translation system
of a travel groupoid $(V,*)$ on a graph $G$.
Fix any $u \in V$.
Since $u*w$ is in $N_G[u]$ for any $w \in V$,
we have $\bigcup_{v \in N_G[u]} V_{u,v} = V$.
Suppose that $V_{u,x} \cap V_{u,y} \neq \emptyset$
for some $x, y \in N_G[u]$ with $x \neq y$.
Take $z \in V_{u,x} \cap V_{u,y}$.
Then $u*z=x$ and $u*z=y$.
Therefore we have $x=y$, which is a contradiction to
the assumption $x \neq y$.
Thus $V_{u,x} \cap V_{u,y} = \emptyset$
for any $x, y \in N_G[u]$ with $x \neq y$.
Therefore
$\{ V_{u,v} \mid v \in N_G[u] \}$ is a partition of $V$.
Thus the condition (P0) holds.
By \cite[Proposition 2 (2)]{TG2006},
$u*v=u$ if and only if $u=v$.
Therefore $V_{u,u} = \{u\}$ and thus the condition (P1a) holds.
If $u$ and $v$ are adjacent in $G$, then we have $u*v=v$,
and so $v \in V_{u,v}$.
If $v \in V_{u,v}$, then we have $u*v=v$,
and so $\{u, v\}$ is an edge of $G$.
Thus the condition (P1b) holds.
Since $u * w$ is a neighbor of $u$ in $G$ for any $w \in V \setminus \{u\}$,
if $u$ and $v$ are not adjacent in $G$,
then $V_{u,v} = \emptyset$.
If $V_{u,v} = \emptyset$, then we have $u*v \neq v$,
and so $u$ and $v$ are not adjacent in $G$.
Thus the condition (P1c) holds.
Since $(V,*)$ satisfies the condition (t2),
it follows from Lemma \ref{lem:s2} that
the condition (P2) holds.
Hence the right translation system of $(V,*)$
is a T-partition system on $G$.
\end{proof}
The following gives a characterization of travel groupoids on a graph
in terms of T-partition systems on the graph.
\begin{Thm}\label{thm:main1}
Let $G$ be a graph.
Then, there exists a one-to-one correspondence between
the set of all travel groupoids on $G$
and the set of all T-partition systems on $G$.
\end{Thm}
\begin{proof}
Let $V:=V(G)$.
Let $\mathrm{TG}(G)$ denote the set of all travel groupoids on $G$
and let $\mathrm{TPS}(G)$
denote the set of all T-partition systems on $G$.
We define a map $\Phi:\mathrm{TPS}(G) \to \mathrm{TG}(G)$ as follows:
For
$\mathcal{P}
\in \mathrm{TPS}(G)$,
let $\Phi(\mathcal{P})$ be the groupoid
associated with $\mathcal{P}$.
By Lemma \ref{lem:VPS2TG},
$\Phi(\mathcal{P})$
is a travel groupoid on $G$.
We define a map $\Psi:\mathrm{TG}(G) \to \mathrm{TPS}(G)$ as follows:
For
$(V,*) \in \mathrm{TG}(G)$,
let $\Psi((V,*))$ be the right translation system of $(V,*)$.
By Lemma \ref{lem:TG2VPS},
$\Psi((V,*))$ is a T-partition system on $G$.
Then, we can check that $\Psi(\Phi(\mathcal{P})) = \mathcal{P}$ holds for any
$\mathcal{P} \in \mathrm{TPS}(G)$
and that $\Phi(\Psi((V,*))) = (V,*)$ holds for any $(V,*) \in\mathrm{TG}(G)$.
Hence the map $\Phi$ is a one-to-one correspondence between
the sets $\mathrm{TPS}(G)$ and $\mathrm{TG}(G)$.
\end{proof}
\begin{Exa}\label{ex:TGon4cycle}
Let $G=(V,E)$ be the graph defined by
$V=\{a,b,c,d\}$ and
$E=\{\{a, b\},\{b, c\},\{c, d\},\{a, d\} \}$.
Let $(V,*)$ be the groupoid defined by
\[
\begin{array}{llll}
a * a = a, & a * b = b, & a * c = d, & a * d = d, \\
b * a = a, & b * b = b, & b * c = c, & b * d = a, \\
c * a = b, & c * b = b, & c * c = c, & c * d = d, \\
d * a = a, & d * b = c, & d * c = c, & d * d = d. \\
\end{array}
\]
Then $(V,*)$ is a travel groupoid on the graph $G$.
Let $\mathcal{P}=(V_{u,v} \mid (u,v) \in V \times V)$
be the system of vertex subsets defined by
\[
\begin{array}{llll}
V_{a,a} = \{a\}, & V_{a,b} = \{b\}, & V_{a,c} = \emptyset, & V_{a,d} = \{c,d\}, \\
V_{b,a} = \{a,d\}, & V_{b,b} = \{b\}, & V_{b,c} = \{c\}, & V_{b,d} = \emptyset, \\
V_{c,a} = \emptyset, & V_{c,b} = \{a,b\}, & V_{c,c} = \{c\}, & V_{c,d} = \{d\}, \\
V_{d,a} = \{a\}, & V_{d,b} = \emptyset, & V_{d,c} = \{b,c\}, & V_{d,d} = \{d\}. \\
\end{array}
\]
Then $\mathcal{P}$ is a T-partition system on $G$.
Now we can see that the right translation system of $(V,*)$ is $\mathcal{P}$
and that the groupoid associated with $\mathcal{P}$ is $(V,*)$.
\qed
\end{Exa}
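The computations in Example \ref{ex:TGon4cycle} can be verified with the Python sketches given above; the snippet below assumes that the functions \texttt{right\_translation\_system} and \texttt{associated\_groupoid} defined there are in scope.
\begin{verbatim}
V = {"a", "b", "c", "d"}
op = {("a","a"):"a", ("a","b"):"b", ("a","c"):"d", ("a","d"):"d",
      ("b","a"):"a", ("b","b"):"b", ("b","c"):"c", ("b","d"):"a",
      ("c","a"):"b", ("c","b"):"b", ("c","c"):"c", ("c","d"):"d",
      ("d","a"):"a", ("d","b"):"c", ("d","c"):"c", ("d","d"):"d"}

# Edges of the graph the groupoid is on: u != v with u * v = v.
edges = {frozenset((u, v)) for u in V for v in V
         if u != v and op[(u, v)] == v}
assert edges == {frozenset(e) for e in
                 [("a","b"), ("b","c"), ("c","d"), ("a","d")]}

P = right_translation_system(V, op)
assert P[("a","d")] == {"c", "d"} and P[("a","c")] == set()
assert associated_groupoid(V, P) == op
\end{verbatim}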
\section{Simple, smooth, and semi-smooth systems}
In this section, we give characterizations
of simple, smooth, and semi-smooth travel groupoids on a graph
in terms of T-partition systems on the graph.
\subsection{Simple T-partition systems}
\begin{Prop}\label{prop:simple-chara}
Let $(V,*)$ be a travel groupoid.
For $u,v \in V$, let
\[
V_{u,v} := \{ w \in V \mid u * w = v \}.
\]
Then, the following conditions are equivalent:
\begin{itemize}
\item[{\rm (a)}]
$(V,*)$ is simple;
\item[{\rm (b)}]
For $u,v,x,y \in V$, if
$u \in V_{v,x}$, $v \in V_{u,y}$, $u \neq x$, and $v \neq y$,
then
$x \in V_{u,y}$ and $y \in V_{v,x}$.
\end{itemize}
\end{Prop}
\begin{proof}
First, we show that (a) implies (b).
Take any $u,v,x,y \in V$
such that
$u \in V_{v,x}$, $v \in V_{u,y}$, $u \neq x$, and $v \neq y$.
Then $v*u = x \neq u$ and $u*v = y \neq v$.
Since $(V,*)$ is simple by (a),
$v*u \neq u$ implies $u * (v * u) = u * v$
and
$u*v \neq v$ implies $v * (u * v) = v * u$.
Therefore, we have
$u*x=y$ and $v*y=x$.
Thus
$x \in V_{u,y}$ and $y \in V_{v,x}$.
Second, we show that (b) implies (a).
Take any $u,v \in V$ such that $v*u \neq u$.
Note that $v*u \neq u$ implies $u*v \neq v$ (cf. \cite[Proposition 2 (1)]{TG2006}).
Let $x:=v*u$. Then $u \in V_{v,x}$ and $u \neq x$.
Let $y:=u*v$. Then $v \in V_{u,y}$ and $v \neq y$.
By (b), we have $x \in V_{u,y}$ and $y \in V_{v,x}$.
Therefore $u*x=y$, that is, $u*(v*u)=u*v$.
Thus $(V,*)$ is simple.
\end{proof}
\begin{Defi}
A T-partition system $(V_{u,v} \mid (u,v) \in V \times V)$ on a graph $G$
is said to be \emph{simple}
if the following condition holds:
\begin{itemize}
\item[{\rm (R3)}]
For $u,v,x,y \in V$,
if $u \in V_{v,x}$, $v \in V_{u,y}$, $u \neq x$, and $v \neq y$,
then $x \in V_{u,y}$ and $y \in V_{v,x}$.
\qed
\end{itemize}
\end{Defi}
\begin{Lem}\label{lem:SiVPS2SiTG}
Let $G$ be a graph, and
let $\mathcal{P}$
be a simple T-partition system on $G$.
Then, the groupoid associated with $\mathcal{P}$
is a simple travel groupoid on $G$.
\end{Lem}
\begin{proof}
The lemma follows from Lemma \ref{lem:VPS2TG} and
Proposition \ref{prop:simple-chara}.
\end{proof}
\begin{Lem}\label{lem:SiTG2SiVPS}
Let $G$ be a graph, and
let $(V,*)$ be a simple travel groupoid on $G$.
Then, the right translation system
of $(V,*)$
is a simple T-partition system on $G$.
\end{Lem}
\begin{proof}
The lemma follows from Lemma \ref{lem:TG2VPS}
and Proposition \ref{prop:simple-chara}.
\end{proof}
\begin{Thm}\label{thm:chara-simple}
Let $G$ be a graph.
Then, there exists a one-to-one correspondence between
the set of all simple travel groupoids on $G$
and the set of all simple T-partition systems on $G$.
\end{Thm}
\begin{proof}
The theorem follows from Theorem \ref{thm:main1}
and Lemmas \ref{lem:SiVPS2SiTG} and \ref{lem:SiTG2SiVPS}.
\end{proof}
\subsection{Smooth T-partition systems}
\begin{Prop}\label{prop:smooth-chara}
Let $(V,*)$ be a travel groupoid.
For $u,v \in V$, let
\[
V_{u,v} := \{ w \in V \mid u * w = v \}.
\]
Then, the following conditions are equivalent:
\begin{itemize}
\item[{\rm (a)}]
$(V,*)$ is smooth;
\item[{\rm (b)}]
For $u,v,x,y \in V$,
if $x,y \in V_{u,v}$, then $x*y \in V_{u,v}$;
\item[{\rm (c)}]
For $u,v,x,y,z \in V$,
if $x, y \in V_{u,v}$ and $y \in V_{x,z}$, then $z \in V_{u,v}$.
\end{itemize}
\end{Prop}
\begin{proof}
First, we show that (a) implies (b).
Take any $u,v,x,y \in V$
such that $x,y \in V_{u,v}$.
Then $u*x=v$ and $u*y=v$, so $u*x=u*y$.
Since $(V,*)$ is smooth by (a),
$u*x=u*y$ implies $u*(x*y)=u*x=v$.
Thus $x*y \in V_{u,v}$.
Second, we show that (b) implies (c).
Take any $u,v,x,y,z \in V$ such that
$x,y \in V_{u,v}$ and $y \in V_{x,z}$.
Then $x*y=z$.
By (b), we have $x*y \in V_{u,v}$.
Thus $z \in V_{u,v}$.
Third, we show that (c) implies (a).
Take any $u,x,y \in V$
such that $u*x=u*y$.
Let $v:=u*x=u*y$. Then $x,y \in V_{u,v}$.
Let $z:=x*y$. Then $y \in V_{x,z}$.
By (c), we have $z \in V_{u,v}$.
Therefore $u*z=v$, that is, $u*(x*y)=u*x$.
Thus $(V,*)$ is smooth.
\end{proof}
\begin{Defi}
A T-partition system $(V_{u,v} \mid (u,v) \in V \times V)$ on a graph $G$
is said to be \emph{smooth}
if the following condition holds:
\begin{itemize}
\item[{\rm (R4)}]
For $u,v,x,y,z \in V$,
if $x, y \in V_{u,v}$ and $y \in V_{x,z}$, then $z \in V_{u,v}$.
\qed
\end{itemize}
\end{Defi}
\begin{Lem}\label{lem:SmVPS2SmTG}
Let $G$ be a graph, and
let $\mathcal{P}$
be a smooth T-partition system on $G$.
Then, the groupoid associated with $\mathcal{P}$
is a smooth travel groupoid on $G$.
\end{Lem}
\begin{proof}
The lemma follows from Lemma \ref{lem:VPS2TG}
and Proposition \ref{prop:smooth-chara}.
\end{proof}
\begin{Lem}\label{lem:SmTG2SmVPS}
Let $G$ be a graph, and
let $(V,*)$ be a smooth travel groupoid on $G$.
Then, the right translation system
of $(V,*)$
is a smooth T-partition system on $G$.
\end{Lem}
\begin{proof}
The lemma follows from Lemma \ref{lem:TG2VPS}
and Proposition \ref{prop:smooth-chara}.
\end{proof}
\begin{Thm}\label{thm:chara-smooth}
Let $G$ be a graph.
Then, there exists a one-to-one correspondence between
the set of all smooth travel groupoids on $G$
and the set of all smooth T-partition systems on $G$.
\end{Thm}
\begin{proof}
The theorem follows from Theorem \ref{thm:main1}
and Lemmas \ref{lem:SmVPS2SmTG} and \ref{lem:SmTG2SmVPS}.
\end{proof}
\begin{Rem}
Matsumoto and Mizusawa \cite{MM} gave an algorithmic way to construct
a smooth travel groupoid on a finite connected graph.
\qed
\end{Rem}
\subsection{Semi-smooth T-partition systems}
\begin{Prop}\label{prop:semismooth-chara}
Let $(V,*)$ be a travel groupoid.
For $u,v \in V$, let
\[
V_{u,v} := \{ w \in V \mid u * w = v \}.
\]
Then, the following conditions are equivalent:
\begin{itemize}
\item[{\rm (a)}]
$(V,*)$ is semi-smooth;
\item[{\rm (b)}]
For $u,v,x,y \in V$,
if $x,y \in V_{u,v}$, then $x*y \in V_{u,v}$ or $x*^2y \in V_{u,v}$;
\item[{\rm (c)}]
For $u,v,x,y,z,w \in V$,
if $x, y \in V_{u,v}$ and $y \in V_{x,z} \cap V_{z,w}$,
then $z \in V_{u,v}$ or $w \in V_{u,v}$,
\end{itemize}
where $x*^2y := (x*y)*y$.
\end{Prop}
\begin{proof}
First, we show that (a) implies (b).
Take any $u,v,x,y \in V$
such that $x,y \in V_{u,v}$.
Then $u*x=v$ and $u*y=v$, so $u*x=u*y$.
Since $(V,*)$ is semi-smooth by (a),
$u*x=u*y$ implies $u*(x*y)=u*x=v$
or $u*((x*y)*y)=u*x=v$ .
Thus $x*y \in V_{u,v}$
or $x *^2 y = (x*y)*y \in V_{u,v}$.
Second, we show that (b) implies (c).
Take any $u,v,x,y,z,w \in V$ such that
$x,y \in V_{u,v}$ and $y \in V_{x,z} \cap V_{z,w}$.
Then $x*y=z$ and $z*y=w$. Therefore $w=(x*y)*y=x *^2 y$.
By (b), we have $x*y \in V_{u,v}$ or $x *^2 y \in V_{u,v}$.
Thus $z \in V_{u,v}$ or $w \in V_{u,v}$.
Third, we show that (c) implies (a).
Take any $u,x,y \in V$
such that $u*x=u*y$.
Let $v:=u*x=u*y$. Then $x,y \in V_{u,v}$.
Let $z:=x*y$ and $w:=z*y=(x*y)*y$. Then $y \in V_{x,z} \cap V_{z,w}$.
By (c), we have $z \in V_{u,v}$ or $w \in V_{u,v}$.
Therefore $u*z=v$ or $u*w=v$,
that is, $u*(x*y)=u*x$ or $u*((x*y)*y)=u*x$.
Thus $(V,*)$ is semi-smooth.
\end{proof}
\begin{Defi}
A T-partition system $(V_{u,v} \mid (u,v) \in V \times V)$ on a graph $G$
is said to be \emph{semi-smooth}
if the following condition holds:
\begin{itemize}
\item[{\rm (R5)}]
For $u,v,x,y,z,w \in V$,
if $x, y \in V_{u,v}$ and $y \in V_{x,z} \cap V_{z,w}$,
then $z \in V_{u,v}$ or $w \in V_{u,v}$.
\qed
\end{itemize}
\end{Defi}
\begin{Lem}\label{lem:SeSmVPS2SeSmTG}
Let $G$ be a graph, and
let $\mathcal{P}$
be a semi-smooth T-partition system on $G$.
Then, the groupoid associated with $\mathcal{P}$
is a semi-smooth travel groupoid on $G$.
\end{Lem}
\begin{proof}
The lemma follows from Lemma \ref{lem:VPS2TG}
and Proposition \ref{prop:semismooth-chara}.
\end{proof}
\begin{Lem}\label{lem:SeSmTG2SeSmVPS}
Let $G$ be a graph, and
let $(V,*)$ be a semi-smooth travel groupoid on $G$.
Then, the right translation system
of $(V,*)$
is a semi-smooth T-partition system on $G$.
\end{Lem}
\begin{proof}
The lemma follows from Lemma \ref{lem:TG2VPS}
and Proposition \ref{prop:semismooth-chara}.
\end{proof}
\begin{Thm}\label{thm:chara-semismooth}
Let $G$ be a graph.
Then, there exists a one-to-one correspondence between
the set of all semi-smooth travel groupoids on $G$
and the set of all semi-smooth T-partition systems on $G$.
\end{Thm}
\begin{proof}
The theorem follows from Theorem \ref{thm:main1}
and Lemmas \ref{lem:SeSmVPS2SeSmTG} and \ref{lem:SeSmTG2SeSmVPS}.
\end{proof}
\section{Introduction}
Although deep learning (DL) has been an active field of research over the last decade, its application on top of sports data has had a slow start. The lack of universal sports datasets made it an impossible challenge for a lot of researchers, professional clubs were not aware of the unlocked potential of data-driven tools, and companies were highly focused on manual video analysis.\\
However, during this last lustrum, the whole paradigm shifted, and for instance, in the case of football, complete datasets like SoccerNet have been publicly shared
\cite{giancola2018soccernet,deliege2020soccernet}, hence providing researchers with valid resources to work with \cite{cioppa2019arthus,cioppa2020context}. At the same time, top European clubs created their own departments of data scientists while publishing their findings \cite{fernandez2018wide,llana2020right}, and companies also shifted to data-driven products based on trained large-scale models. Companies such as SciSports \cite{decroos2019actions,bransen2020player}, Sport Logiq \cite{sanford2020group}, Stats Perform \cite{sha2020end,power21} or Genius Sports \cite{quiroga2020seen} made a huge investment in research groups (in some cases, in collaboration with academia), and other companies are also sharing valuable open data \cite{metricasports,statsbomb,skillcorner}. All these facts prove that DL is currently both trendy and useful within the context of sports analytics, thus creating a need for \textit{plug-and-play} models that could be exploited either by researchers, clubs, or companies. \\
Recently, expected possession value (EPV) and expected goals models proved to produce realistic outcomes \cite{spearman2017physics,spearman2018beyond,fernandez2019decomposing}, which can be directly used by coaches to optimize their tactics. Furthermore, in this same field, Arbués-Sangüesa \textit{et al.} spotted a specific literature gap regarding the presented models: player body-orientation. The authors claimed that by merging existing methods with player orientation, the precision of existing models would improve \cite{arbues2020using}, especially in pass events. By defining orientation as the projected normal vector right in the middle of the upper-torso, the authors propose a sequential computer vision pipeline to obtain orientation data \cite{arbues2020always}. Their model stems from pose estimation, which is obtained with existing models \cite{ramakrishna2014pose,wei2016convolutional,cao2017realtime}, and achieves an absolute median error of 28 degrees, which indicates that, despite being a solid baseline, there is still room for improvement. \\
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{ReferencesThin2HD-min.jpg}
\caption{Several domains are merged in this research: (left) sensor-, (middle) field-, and (right) image-domain. By using corners and intersection points of field lines, the corresponding homographies are used to map data across domains into one same reference system.}
\label{fig:ref1}
\end{center}
\end{figure*}
Therefore, in this article, a novel deep learning
model to obtain orientation out of players' bounding boxes is presented. By (1) using sensor-based orientation data as ground truth, (2) turning this estimation into a classification problem, (3) compensating angles with respect to the camera's viewing direction, and (4) introducing a cyclic loss function based on soft labels, the network is able to estimate orientation with a median error of less than 12 degrees per player. \\
The rest of the paper is organized as follows: in Section \ref{sec:data} the main data structures and types of datasets are detailed; the proposed fine-tuning process is explained in Section \ref{sec:prop} together with the appropriate details about the loss function and angle compensation. Results are shown in Section \ref{sec:res}, and finally, conclusions are drawn in Section \ref{sec:conc}.
\section{Data Sources} \label{sec:data}
Before introducing the proposed method, a detailed description of the required materials to train this model is given. Similarly, since we are going to mix data from different sources, their corresponding domains should be listed as well:
\begin{itemize}
\item \textbf{Image-domain}, which includes all kinds of data related to the associated video footage. That is: (\textit{i1}) the video footage itself, (\textit{i2}) player tracking and (\textit{i3}) corners' position. Note that the result of player tracking in the image-domain consists of a set of bounding boxes, expressed in pixels; similarly, corners' location is also expressed in pixels. In this research, full HD resolution (1920 x 1080) is considered, together with a temporal resolution of 25 frames per second.
\item \textbf{Sensor-domain}, which gathers all pieces of data generated by wearable EPTS devices. In particular, data include: (\textit{s4}) player tracking, and (\textit{s5}) orientation data. In this case, players are tracked according to the universal latitude and longitude coordinates, and orientation data are captured with a gyroscope in all XYZ Euler angles. In this work, sensor data were gathered with RealTrack Wimu wearable devices \cite{realtrack}, which generate GPS/Orientation data at 100/10 samples per second respectively.
\item \textbf{Field-domain}, which expresses all variables in terms of a fixed two-dimensional football field, where the top-left corner is the origin.
\end{itemize}
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\linewidth]{PipelineMixedHD-min.jpg}
\caption{Proposed pipeline to match sensor orientation data with bounding boxes. Different input sources are merged: (top, image-domain) video footage, which is used for player detection and jersey filtering; the resulting bounding boxes are later mapped into the field-domain. (middle, image-domain) Corner's location, which is used for building the corresponding mapping homographies, and (bottom, sensor-domain) ground-truth data, which are also mapped into the field-domain. Finally, players in the 2D-domain are matched through pairwise distances.}
\label{fig:pip1}
\end{center}
\end{figure*}
Once data are gathered and synchronized from different sources, two possible scenarios are faced:
\begin{itemize}
\item The complete case, in which all variables (\textit{i1}, \textit{i2}, \textit{i3}, \textit{s4}, \textit{s5}) are available. Note that both image- and sensor-data include unique identifiers, which are easy to match by inspecting a small subset of frames.
\item The semi-complete case, where only part of the information is available (\textit{i1},\textit{s4},\textit{s5}). In order to estimate the missing pieces (\textit{i2, i3}) and match data across domains, a sequential pipeline is proposed in Section \ref{sec:Pip1}.
\end{itemize}
In this article, both a complete and a semi-complete dataset are used, each containing data from a single game. In particular, the complete dataset contains a full game of F.C. Barcelona's Youth team, recorded with a tactical camera with almost no panning and without zoom; this dataset will be named $\text{FCB}_{DS}$. The semi-complete dataset contains a full preseason match of CSKA Moscow's professional team, recorded in a practice facility (without fans) with a single static camera that zooms quite often and has severe panning. Similarly, this second dataset will be named $\text{CSKA}_{DS}$. Furthermore, intersection and corner image-coordinates were manually identified and labelled in more than 4000 frames of $\text{CSKA}_{DS}$ (1 frame every 37, \textit{i.e.}, every 1.5 seconds), with a mean of 8.3 ground-truth field-spots per frame (34000 annotations).\\
\subsection{Homography Estimation}
Since the reference system of the image- and the sensor-domain is not the same, corners' positions (or line intersections) are used to translate all coordinates into the field-domain. On the one hand, obtaining field locations in the sensor-domain is straightforward: since the gathered coordinates are expressed with respect to the universal latitude/longitude system, the corners' locations are fixed. By using online tools such as the \textit{Satellite View} of Google Maps, and by accurately picking field intersections, the corners' latitude and longitude coordinates are obtained. On the other hand, corners' positions in the image-domain (in pixels) depend on the current camera shot and change across frames; although several literature methods \cite{citraro2020real} could be used to obtain the location of these field spots or the camera pose, our proposal leverages homographies computed from manual annotations.
From now on, the homography that maps latitude/longitude coordinates into the field will be named $H_{SF}$, whereas the one that converts pixels in the image into field coordinates will be named $H_{IF}$. The complete homography-mapping process is illustrated in Figure \ref{fig:ref1}.\\
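As an illustration, the following Python sketch (using OpenCV) shows how $H_{IF}$ can be estimated from manually annotated correspondences and how an image point is mapped into the field-domain. The numeric coordinates are purely illustrative, and a $105 \times 68$ m pitch with the origin at its top-left corner is assumed.
\begin{verbatim}
import numpy as np
import cv2

# Illustrative correspondences only: the four pitch corners annotated in a
# frame (pixels) and their positions in the field-domain (metres).
pts_image = np.array([[250., 150.], [1700., 160.], [1850., 980.], [60., 950.]])
pts_field = np.array([[0., 0.], [105., 0.], [105., 68.], [0., 68.]])

H_IF, _ = cv2.findHomography(pts_image, pts_field)   # image -> field

def to_field(H, p):
    """Map a 2D point with a homography (homogeneous normalization)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
\end{verbatim}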
\subsection{Automatic Dataset Completion} \label{sec:Pip1}
In this Subsection, the complete process to convert a semi-complete dataset into a complete one is described. It has to be remarked that the aim is to detect players in the image-domain and to match them with sensor data, hence pairing orientation and identified bounding boxes. Note that this procedure has been applied to $\text{CSKA}_{DS}$, which did not contain ground-truth data in the image-domain. The proposed pipeline is also displayed in Figure \ref{fig:pip1}. \\
\textbf{Player Detection}: the first step is to locate players' positions in the image. In order to do so, literature detection models can be used, such as OpenPose \cite{cao2017realtime} (used in this research) or Mask R-CNN \cite{maskrcnn}. Once all targets in the scene have been identified, detections are converted into bounding boxes. Note that this step does not exploit any temporal information across frames.\\
\textbf{Jersey Filtering}: since sensor data are only acquired for one specific team, approximately half of the detected bounding boxes (opponents) are filtered out. Given that the home/away teams of football matches are required to wear distinguishable colored jerseys, a simple clustering model can be trained. Specifically, by computing and by concatenating quantized versions of the HSV / LAB histograms, a single 48-feature vector is obtained per player. Having trained a $K$-Means model, with $K=3$, boxes with three different types of content are obtained: (1) home team, (2) away team, and (3) outliers.\\
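A possible implementation of this filtering step is sketched below; the choice of 8 bins per channel (two color spaces, three channels each) is our assumption to reach the 48 features, and the full 8-bit range is used for every channel only for simplicity (the OpenCV hue channel actually spans $[0,180)$).
\begin{verbatim}
import cv2
import numpy as np
from sklearn.cluster import KMeans

def jersey_feature(box_bgr, bins=8):
    """Concatenated quantized HSV and LAB histograms (48 values) of a box."""
    feats = []
    for code in (cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2LAB):
        img = cv2.cvtColor(box_bgr, code)
        for c in range(3):
            h = cv2.calcHist([img], [c], None, [bins], [0, 256]).ravel()
            feats.append(h / (h.sum() + 1e-8))
    return np.concatenate(feats)       # 2 spaces x 3 channels x 8 bins = 48

def cluster_jerseys(boxes_bgr):
    """Cluster boxes into home team, away team and outliers (K = 3)."""
    X = np.stack([jersey_feature(b) for b in boxes_bgr])
    return KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
\end{verbatim}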
\textbf{Mapping}: in order to establish the same reference system for both sensor and image data, all tracking coordinates are mapped into the field domain. More specifically, corner-based homographies $H_{SF}$ and $H_{IF}$ are used; in the latter, since we are dealing with bounding boxes, the only point being mapped for each box is the middle point of the bottom boundary of the box.\\
\textbf{Matching}: once all points are mapped into the field-domain, a customized version of the Hungarian method \cite{kuhn1955hungarian} is implemented, thus matching sensor and image data in terms of pairwise field-distances. \\
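A minimal sketch of this matching step is given below using SciPy; it performs the basic assignment on the pairwise field-distances and omits the customizations of the Hungarian method mentioned above (e.g., gating of unrealistic distances).
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_players(field_from_image, field_from_sensor):
    """Match image detections to sensor tracks by minimizing the total
    pairwise distance in the field-domain; inputs are (N, 2) and (M, 2)
    arrays of field coordinates."""
    cost = cdist(field_from_image, field_from_sensor)  # pairwise distances
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols]
\end{verbatim}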
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{AngleRefsideCHD-min.jpg}
\caption{Orientation references in the field-domain.}
\label{fig:ref2}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\linewidth]{OrientationMixedHD-min.jpg}
\caption{(Top) Three players oriented towards 0º can look really different depending on the camera pose and orientation. (Bottom) Proposed technique for angle compensation: (left) detected player together with his orientation \{red\} and \textit{apparent zero-vector} \{cyan\}; (middle-left) mapped \textit{apparent zero-vector} in the field-domain \{dashed axes: apparent reference system, continuous axes: absolute reference system\}; (middle-right) applied compensation on the original orientation \{purple\}; (right) resulting compensated absolute orientation \{purple\}. }
\label{fig:angCompAx}
\end{center}
\end{figure*}
\section{Proposed Method} \label{sec:prop}
In this Section, the complete adaptation and fine-tuning procedure of a state-of-the-art convolutional neural network is detailed, hence resulting in a model capable of estimating body orientation directly from bounding boxes containing players. By default, all orientations are expressed in the field-domain reference system. In this bi-dimensional field, it can be assumed that 0º / 90º / 180º / 270º are the corresponding orientations of players facing towards the right / top / left / bottom sides of the field, respectively, as shown in Figure \ref{fig:ref2}.
\subsection{Angle Compensation}
The apparent orientation of each player is influenced by the current image content, which is drastically affected by the camera pose and its orientation. This means that, if the bounding box of a particular player is cropped without taking into account any field reference around him/her, it is not possible to obtain an absolute orientation estimation. As displayed in Figure \ref{fig:angCompAx} (top), the appearance of three players oriented towards the same direction (0 degrees) can differ a lot. Since the presented classification model only takes a bounding box as input, we propose to compensate angles \textit{a priori}, thus assuming that all orientations have been obtained under the same camera pose, \textit{i.e.}, the pink camera displayed in Figure \ref{fig:ref2}. For instance, if the full chest of a player is spotted in a particular frame, his/her orientation must be approximately 270º, no matter what the overall image context is. \\
In order to conduct this compensation, as seen in the bottom row of Figure \ref{fig:angCompAx}, the orientation vector of the player is first mapped into the field-domain. Then, the \textit{apparent zero-vector} is considered in the image-domain; for the reference camera, \textit{i.e.}, the pink one in Figure \ref{fig:ref2}, this vector would point to the right side of the field whilst being parallel to the sidelines. By using $H_{IF}$, the \textit{apparent zero-vector} is mapped into the field-domain, and the corresponding compensation is then found by computing the angular difference between the mapped \textit{apparent zero-vector} and the reference zero-vector in the field-domain. According to Figure \ref{fig:ref2}, this difference indicates how much the orientation vector differs from the \textit{apparent zero-vector}. \\
Formally, for a player $i$ with non-compensated orientation $\alpha_{i}'$ at position $P_{i} = (P_{i,x}, P_{i,y})$, and with the (unitary) \textit{apparent zero-vector} $Z$ given by $(1,0)$, another point is defined towards the zero direction:
\begin{equation}
P_{i}^0 = P_{i} + Z = (P_{i,x} + 1, P_{i,y})
\end{equation}
Both points $P_{i}$ and $P_{i}^0$ are mapped into the field domain by using $H_{IF}$, thus obtaining their 2D position $F_{i}$ and $F_{i}^0$, respectively.
The final compensated angle is then found as:
\begin{equation}
\alpha_{i} = \alpha_{i}'-\angle(\overrightarrow{F_{i}F_{i}^0}),
\end{equation}
where $\angle$ expresses the angle of the vector $\overrightarrow{F_{i}F_{i}^0}$ with respect to the reference zero-vector.
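The compensation described above can be implemented as in the following Python sketch, which assumes angles expressed in degrees and measured counter-clockwise with respect to the field-domain zero-vector.
\begin{verbatim}
import numpy as np

def to_field(H, p):
    """Map an image point into the field-domain with the homography H_IF."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def compensate_angle(alpha_prime_deg, P_i, H_IF):
    """Subtract the angle of the mapped apparent zero-vector from the
    player's non-compensated orientation."""
    F_i = to_field(H_IF, P_i)
    F_i0 = to_field(H_IF, (P_i[0] + 1.0, P_i[1]))  # P_i + Z, with Z = (1, 0)
    v = F_i0 - F_i                                 # mapped apparent zero-vector
    zero_angle = np.degrees(np.arctan2(v[1], v[0]))
    return (alpha_prime_deg - zero_angle) % 360.0
\end{verbatim}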
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{VGGStructHD-min.jpg}
\caption{Proposed architecture for fine-tuning a VGG according to the main blocks of the original network.}
\label{fig:vgg}
\end{center}
\end{figure*}
\subsection{Network}
Once all bounding boxes have an associated compensated body-orientation value, the model is set to be trained. In this work, orientation estimation has been approached as a classification task, where each bounding box is classified within a certain number of orientation bins.
As detailed in Section \ref{sec:res}, orientation data are grouped into 12 bins, each one covering an orientation range of 30 degrees (\textit{e.g.}, bin 1 goes from 0º to 30º, bin 2 from 30º to 60º, and so on until bin 12, which goes from 330º to 360º). Consequently, the above-mentioned bounding boxes in the image-domain were automatically labeled with their corresponding class according to their compensated orientation. Another reason for grouping similar angles into the same class is the noise in the raw orientation signals generated by the EPTS devices. \\
In particular, the chosen network to be fine-tuned in this research is a VGG-19 \cite{simonyan2014very}; this type of network has also been used as a backbone in existing literature methods such as OpenPose \cite{cao2017realtime}. However, in order to further analyze and justify our choice, alternative results are shown in Section \ref{sec:res} when using DenseNet \cite{huang2017densely}. The original VGG-19 architecture is composed of 5 convolutional blocks (each one containing either 2 or 4 convolutional layers) and a final set of fully connected layers with a probability output vector of 1000 classes. For the presented experiments, as seen in Figure \ref{fig:vgg}, the architecture adaptation and the proposed training procedure consist of: (1) changing the dimensions of the final fully-connected layer, thus obtaining an output with a length equal to 12, the desired number of classes, (2) freezing the weights of the first two convolutional blocks, (3) re-training the convolutional layers of the third block and the fully connected layers of the classifier, and (4) omitting both the fourth and fifth convolutional blocks.
By visualizing the final network weights with Score-CAM \cite{wang2020score} (Figure \ref{fig:weightViz}), it can be spotted how the body parts that matter most for orientation (the upper-torso) are already driving the classification after the third block; in fact, the responses of the fourth block do not provide useful information in terms of orientation. Therefore, omitting blocks 4 and 5 is a safe choice for obtaining an accurate model whilst decreasing the total number of parameters to be trained.
Let us finally remark that bounding boxes are converted into grayscale, thus improving the overall generalization capability, since the model will not learn the specific jersey colors, which seemed to be one of the drawbacks in \cite{arbues2020always}. In terms of data augmentation, random brightness and contrast changes are performed for all boxes in the training set.
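A possible PyTorch implementation of this adaptation is sketched below. The truncation indices correspond to the standard torchvision layout of VGG-19, whereas the dimensions of the new classification head and the use of adaptive pooling are our own choices, since they are not fully specified above.
\begin{verbatim}
import torch.nn as nn
from torchvision import models

def build_orientation_net(num_classes=12):
    """VGG-19 truncated after the third convolutional block, with the first
    two blocks frozen and a new 12-way classification head."""
    vgg = models.vgg19(weights="IMAGENET1K_V1")  # pretrained=True on old versions
    features = vgg.features[:19]                 # blocks 1-3 (up to the 3rd pooling)
    for p in features[:10].parameters():         # freeze blocks 1 and 2
        p.requires_grad = False
    head = nn.Sequential(
        nn.AdaptiveAvgPool2d((7, 7)),
        nn.Flatten(),
        nn.Linear(256 * 7 * 7, 1024), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(1024, num_classes),
    )
    return nn.Sequential(features, head)
\end{verbatim}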
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{newConvLayersHD-min.jpg}
\caption{Obtained Score-CAM responses. While the 1st block detects mainly edges and shapes, the 3rd one has a high response over the upper-torso of players. The last row shows how the 4th block learns specific features that have little to do with orientation.}
\label{fig:weightViz}
\end{center}
\end{figure}
\begin{comment}
\begin{figure*}
\begin{center}
\includegraphics[width=0.65\linewidth]{lossesHD-min.jpg}
\caption{Losses.}
\label{fig:losses}
\end{center}
\end{figure*}
\end{comment}
\subsection{Cyclic Loss}
An important aspect of the training process is the definition of the loss function.
\textit{A priori}, state-of-the-art loss functions such as binary cross-entropy could be a valid resource, but in a generic classification scenario, neither the order of the classes nor the distance between them is taken into account. Nonetheless, in this particular scenario, we have 12 ordered cyclic classes with a well-defined distance between them. Besides, in this classification problem, since similar orientations have been grouped into bins, enforcing a one-hot encoding is not the best solution. For example, imagine a player $P_{1}$ oriented towards 31º and another $P_{2}$ oriented towards 59º; both players are included in the second bin, which encompasses all orientations between 30º and 60º. With one-hot encoding, it would be assumed that, since both $P_{1}$ and $P_{2}$ are in the second bin, both of them have the same orientation (45º). However, alternatives such as soft labels \cite{diaz2019soft} can describe each player's class as a mixture; in the given example, the soft labels of $P_{1}$/$P_{2}$ would indicate that these players are right between the first-second/second-third bins, respectively. The other challenge to be solved is the need for this loss function to be cyclic, as the first bin (number 1, 0º-30º) and the last one (number 12, 330º-360º) are actually really close. \\
Let $\{b_{1}, b_{2}, ... , b_{12}\}$ be the set of orientation classes and let $\chi = \{r_{1}, r_{2},\dots, r_{12}\}$ be the set such that each $r_j$ denotes the central angle of bin $b_{j}$, for all $j\in\{ 1,\dots,12\}$. Then, for a player $i$ with compensated ground-truth orientation $\alpha_{i}$, the soft labels representing the ground-truth probability distribution is defined as the vector with coordinates:
\begin{equation}
y_{ij} = \frac{exp(-\phi (\alpha_{i}, r_{j}))}{\sum_{k=1}^{K}{exp(-\phi (\alpha_{i},r_{k}))}},\; \text{for}\, j=1,\dots,12
\end{equation}
where $\phi$ is cyclic distance between the ground-truth player's orientation $\alpha_i$ and the angle corresponding to the $j$th bin, $r_{j}$:
\begin{equation}
\phi(\alpha_{i},r_{j}) = \frac{\text{min}(|\alpha_{i}-r_{j}|,360-|\alpha_{i}-r_{j}|)^{2}}{90}.
\end{equation}
Let us denote as $x_i$ the estimated probability distribution of orientation of player $i$ obtained by applying the softmax function to the last layer of the network. Finally, our loss is the cross-entropy between $x_i$ and the ground-truth soft labels $y_i$.
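A possible PyTorch implementation of these soft labels and of the resulting cyclic cross-entropy is sketched below, assuming bin centers at 15º, 45º, \ldots, 345º and ground-truth angles expressed in degrees.
\begin{verbatim}
import torch
import torch.nn.functional as F

BIN_CENTERS = torch.arange(15.0, 360.0, 30.0)     # r_j: 15, 45, ..., 345

def cyclic_distance(alpha, r):
    """phi(alpha, r) = min(|alpha - r|, 360 - |alpha - r|)^2 / 90."""
    d = torch.abs(alpha - r)
    d = torch.minimum(d, 360.0 - d)
    return d ** 2 / 90.0

def soft_labels(alpha):
    """Ground-truth distribution y_i over the 12 bins for angles alpha."""
    phi = cyclic_distance(alpha.unsqueeze(1), BIN_CENTERS.unsqueeze(0))
    return F.softmax(-phi, dim=1)

def cyclic_loss(logits, alpha):
    """Cross-entropy between the predicted distribution and the soft labels."""
    y = soft_labels(alpha)
    log_x = F.log_softmax(logits, dim=1)
    return -(y * log_x).sum(dim=1).mean()
\end{verbatim}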
\subsection{Training Setting}
As mentioned in Section \ref{sec:data}, bounding boxes were gathered from the only two games at our disposal, each recorded under different camera shot conditions.
Consequently, as seen in Figure \ref{fig:diffPl}, the content inside the bounding boxes differs a lot between datasets: while in $FCB_{DS}$ players are seen from a tactical camera and have small dimensions, players in $CSKA_{DS}$ are spotted from a camera that is at almost the same height as the playing field, thus resulting in big bounding boxes. Although all bounding boxes are resized in a preprocessing stage of the network, the raw datasets suffer from concept drift \cite{webb2016characterizing}.
\begin{figure}
\begin{center}
\includegraphics[width=0.65\linewidth]{DiffPlayersHD-min.jpg}
\caption{Resized bounding boxes of both datasets; several artifacts can be spotted in $FCB_{DS}$ (\textit{e.g.} JPEG, ringing, aliasing).}
\label{fig:diffPl}
\end{center}
\end{figure}
The proposed solution in this article is to build an unbalanced mixed training set, that is, to merge bounding boxes from both datasets with an unbalanced distribution in the training set, whilst using the remaining instances from $FCB_{DS}$ and $CSKA_{DS}$ on their own to build the validation and the test set, respectively. In particular, the presented experiments have been carried out with an approximately 90-10 distribution in the training set: for each class, a total of 4500 bounding boxes (corresponding to the first half of each game) are included, where 4000 of them are obtained from $FCB_{DS}$ and the 500 remaining ones are gathered from $CSKA_{DS}$. Both the validation and the test set include 500 samples per class, all of them belonging to data from the second half.
\section{Results} \label{sec:res}
The obtained classification results are shown in terms of angular difference and confusion matrices. Nonetheless, it has to be remarked that, when grouping orientations into bins, an intrinsic error is introduced: assuming that each bin spans $d$ degrees and that a classified player's orientation is taken to be the central value of the bin, correctly classified players may still have an associated absolute error of up to $d/2$.
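The error measures reported below can be computed as in the following sketch, where predictions are bin indices evaluated at their central angle; the code is illustrative only and the helper names are ours.
\begin{verbatim}
import numpy as np

def angular_error(alpha_true, alpha_pred):
    # Cyclic absolute difference between two angles, in [0, 180] degrees.
    d = np.abs(np.asarray(alpha_true) - np.asarray(alpha_pred)) % 360.0
    return np.minimum(d, 360.0 - d)

def mean_median_errors(alpha_true, pred_bins, bin_width=30.0):
    # Mean/median absolute angular error when each prediction is a 0-based bin
    # index evaluated at its central value (hence the intrinsic d/2 error).
    centers = np.asarray(pred_bins) * bin_width + bin_width / 2.0
    err = angular_error(alpha_true, centers)
    return float(err.mean()), float(np.median(err))
\end{verbatim}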
The results of six different experiments are shown in Table \ref{tab:res}: (1) $t_{12}$ and (2) $t_{24}$ use a VGG architecture that classifies into 12 and 24 orientation bins, respectively, both trained with compensated angles; (3) $t_{12nC}$ uses the same network as $t_{12}$ but trained without angle compensation; (4) $t_{12den}$ uses a DenseNet architecture (fine-tuning the fourth dense block) that performs a 12-bin classification; (5) $t_{12CE}$ uses binary cross-entropy instead of the proposed cyclic loss; and (6) $t_{CV}$ shows the performance of the existing \textit{state-of-the-art} method \cite{arbues2020always} (12 bins as well).
Table \ref{tab:res} contains the mean absolute error (MEAE) and the median absolute error (MDAE) of the estimated angles in each experiment. As can be seen, the 12-class model provides the most reliable test results in terms of generalization; in particular, classifying orientation into 24 classes produces better results in the validation set, but the model appears to overfit, learning specific features that do not generalize properly. Moreover, the model benefits from the cyclic loss implementation, as binary cross-entropy introduces errors both in the validation and in the test set, since it ignores the distance between classes and the cyclic behavior of angles. The improvement obtained with the cyclic loss is also visible in the confusion matrices of Figure \ref{fig:cmTests}. The addition of angle compensation also proves to be vital, especially in the test set, whose video footage contained a lot of panning and zooming. Besides, the DenseNet model does not generalize well either; however, it is likely that, with an exhaustive trial-and-error procedure of freezing the weights of particular layers and making small changes to the original DenseNet structure, this architecture would be able to generalize as well. Finally, the existing computer-vision-based method, implemented without the model in charge of the coarse corroboration, performs the worst, yielding more than 32 degrees in both absolute error measures.
\begin{table}[]
\begin{center}
\resizebox{0.35\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|}
\hline
& $\text{MEAE}_{v}$ & $\text{MDAE}_{v}$ & $\text{MEAE}_{t}$ & $\text{MDAE}_{t}$ \\ \hline
$t_{12}$ & 17.37 & 9.90 & \textbf{18.92} & \textbf{11.60} \\ \hline
$t_{24}$ & \textbf{13.13} & \textbf{7.70} & 24.34 & 13.01 \\ \hline
$t_{12CE}$ & 22.34 & 17.00 & 28.98 & 23.00 \\ \hline
$t_{12nC}$ & 21.47 & 14.16 & 31.75 & 24.54 \\ \hline
$t_{12den}$ & 15.22 & 10.46 & 25.27 & 17.29 \\ \hline
$t_{CV}$ & - & - & 38.23 & 32.09 \\ \hline
\end{tabular}}
\caption{Obtained results in all experiments, expressed in terms of the mean/median absolute error, both in the validation and test set.}
\label{tab:res}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.75\linewidth]{ConfMats-min.jpg}
\caption{First and last rows of the obtained confusion matrix (test set) when using the (top) proposed cyclic and (bottom) binary cross-entropy as a loss function ($t_{12}$ and $t_{12CE}$ respectively).}
\label{fig:cmTests}
\end{center}
\end{figure}
\section{Conclusions} \label{sec:conc}
In this article, a novel DL model to estimate the orientation of football players is presented. More concretely, the fine-tuned model learns how to classify players' crops into orientation bins. The core of this method combines a VGG structure with frozen and re-trained layers, an angle compensation strategy to remove the effect of camera motion, and a cyclic loss function based on soft labels that take the distance between classes into account. The obtained results outperform the existing state-of-the-art computer-vision method by a large margin (more than 20 degrees), achieving an MDAE of 11.60 degrees in the test set. Moreover, since complete datasets are difficult to gather, a sequential-based pipeline has also been proposed, which fuses data from different domains in order to establish the ground-truth orientation of the player (sensor domain) in each bounding box (image domain).
The main limitation of the presented model is that only two different games were used in the given dataset, as ground-truth sensor data (together with high-quality frames) are difficult to obtain. This research shows that even with unbalanced training sets it is possible to train a model with certain generalization capabilities; hence, promising results should be obtained with a more varied and balanced dataset in terms of different games. As future work, apart from (a) analyzing the extension of the model to other football datasets and (b) testing the same approach in a regression-based fashion, its potential intra-sport generalization will be studied as well. Besides, the inclusion of fine-grained orientation into existing EPV models could lead to better performance. \\
\section*{Acknowledgements}
The authors acknowledge partial support by MICINN/FEDER UE project, ref. PGC2018-098625-B-I00, H2020-MSCA-RISE-2017 project, ref. 777826 NoMADS, EU H-2020 grant 952027 (project AdMiRe), and RED2018-102511-T. Besides, we also acknowledge RealTrack Systems' (in particular, Carlos Padilla's) and F.C. Barcelona's data support.
\section{Introduction}
Planning the future career of young runners is a relevant aspect of the work of coaches, whose role is to guide them during training so that they can perform at their best in competitions.
Identifying runners' capabilities and future possibilities is important for multiple reasons.
It allows the training load to be appropriately allocated over the years, improving performance and reducing the risk of injury.
Good planning, along with support during injuries, has been identified as one of the relevant factors that help avoid runner drop-out \citep{dropout_iaaf}.
Moreover, good planning is important also from a psychological and emotional point of view, as it allows runners to strive for achievable goals and collect successes over the years.
Pleasant emotions, including satisfaction, have been associated with positive outcomes in, e.g. mental health, performance and engagement
\citep[][]{CECE2019128}.
In this context, the identification of possible careers for a runner, in terms of observed personal performance trajectories over time, is of paramount importance.
For example, identifying the period in which runners reach their peak can help prepare them for the most important events in their career.
Similarly, knowledge of the expected progress of different runners over the years provides an indication of whether the training process has been carried out correctly. The analysis of runners' trajectories is carried out in various sports.
\cite{leroy2018functional} have studied young swimmers' progression using a functional clustering approach, while
\cite{boccia_career_2017} focused on the individual careers of Italian long and high jumpers to determine which characteristics of young athletes are predictive of good-level results during their careers. \newline
\indent
We focus on the analysis of performances of Italian male middle distance runners, born in $1988$, in a period ranging from $2006$ to $2019$.
Previous studies on middle distance runners are few or limited to samples with a small number of runners \citep[see, e.g.,][]{middle_distances}.
We use a combination of latent class and matrix-variate state space models. Latent class models for time-dependent data have been extensively studied in the literature \citep[see, among others,][]{fruhwirth_schnatter_panel_2011, maharaj2019time, BartolucciMurphy}.
They allow one to capture the heterogeneity in the careers of runners, thereby describing various possible observable scenarios.
Combining them with state space models offers additional advantages, including the possibility of building models for multivariate time series in an intuitive manner and of leveraging well-known inferential tools, including the treatment of missing data \citep{durbinkoopman}.
Unlike other types of runners and sports, middle distance runners have the major feature of competing in different distances, i.e. in the $800$, $1500$ and $5000$ meters distances, as well as in other, less standard ones (e.g. the mile, $3000$ meters, etc.).
The choice of discipline in which to compete is subjective and typically associated with personal attitudes \citep{mooses2013anthropometric}.
A runner capable of developing greater speed and power typically competes in shorter distances, whereas runners with greater endurance compete in longer distances.
As a consequence, observations in different distances are available for each runner over time, while the absence of a particular discipline can be informative about the runner's attitude.
Beyond the variability among subjects related to the type of discipline performed, there is also variability in the development of runners' careers related to both their abilities and histories.
If a runner begins their career late in life, it is less likely that they will reach high levels; similarly, runners with unsatisfactory careers are likely to end their careers earlier than those satisfied with their performances \citep{hernandez2011age}.
These aspects are related to drop-in and drop-out phenomena, defined as the events where runners enter and exit the observed sample, respectively.
\newline
\indent
A key question that naturally emerges is whether the absence of data is really associated with observed performances. We attempt to shed light on this question by proposing a matrix state space model in which multivariate time series are clustered together on the basis of their observed trend.
Matrix-variate state space models have found application in finance and engineering in past years \citep[see, e.g.,][]{choukroun2006kalman,wangwest2009}, but have recently gained additional interest in the statistical literature to analyze problems in which observations over time are matrices \citep[see, e.g.,][]{Hsu_MAR_spatio2021, Chen_constr_factor2020, chen_MAR2021}.
In this part of the model specification, clustering is achieved via a latent selection matrix which is involved in the measurement equation.
We propose to include temporal dynamics that aim to describe missing data patterns using two different processes.
First, a runner's personal history is described by a three-state process, which captures their entry into (drop-in) and exit from (drop-out) the sample.
Second, different propensities to compete in different distances are considered to describe a runner's personal attitude.
The probabilities of both processes are assumed to depend on the latent classes stored in the aforementioned selection matrix, allowing us to account for the possible relation between missing data patterns and observed performances.
In this way, clustering is not only achieved on the basis of runners' performances, but the presence and absence of data are considered informative as well.
Works on latent class and clustering models for longitudinal data where the presence of missing values may be informative about the latent structure behind the data are few or limited to domain-specific applications \citep[see, e.g.,][]{BartolucciMurphy, MIKALSEN2018569}.
\newline
\indent
Since the seminal work on missing values by \cite{rubin76}, researchers have wondered if and when it is possible to ignore the presence of missing values in their datasets.
In this work, we consider this problem in a purely predictive framework, in which missing values are considered informative if knowledge of their presence and distribution over time helps in predicting runners' performances over the years.
If so, one could think of a causal relationship, in the Granger sense, between missing values and observed performances.
Although coherent with our model's construction, we avoid defining the informativeness of missing data in a causal sense.
Indeed, while correlation between missing data and performances is typically expected in sports performance analysis, direct cause-effect relationships between them and their directions are not clearly defined in the sports science literature.
\newline
\indent
The rest of the paper is organized as follows: Section \ref{subsec:data} presents a new publicly available dataset on middle distance runners; Section \ref{sec:model} describes the proposed model; Section \ref{sec:prior_specification} discusses the likelihood and the prior specification; Section \ref{sec:posterior_inference} presents the evaluation strategy of the model; Section \ref{sec:examples} shows the results with the real data. Additional details on the data, the algorithms, and the results are reported in the supplementary material accompanying the paper.
\section{Data and exploratory analysis}
\label{subsec:data}
Our data refer to annual seasonal best performances of male Italian runners, born in $1988$, on $5000$, $1500$ and $800$ meters distances in a period between $2006$ and $2019$.
They were collected from the annual rankings accessible on the website of the Italian athletics federation (www.fidal.it), which has stored competition results and rankings since $2005$.
All runners with at least two observations were selected and the data are illustrated in Figure \ref{fig:collected_data} with the help of a local regression fit that allows us to perceive
a U-shaped curve describing the distribution of sample trajectories across ages.
U-shapes are typically observed in the evolution of runners' careers \citep{performance_quadratic}.
However, their shape can be biased by the presence of missing data, related for example to early exit from or late entry into the sample.
\begin{figure}
\centering
\includegraphics[scale=0.35]{./images/performanes_full_line.pdf}
\caption{Seasonal best performances of $369$ Italian male middle distance runners in $5000$, $1500$, and $800$ meters distances.
Points represent the observed performances.
Lines connect the performances in consecutive years of the same runner. The black lines are obtained using local regression.}
\label{fig:collected_data}
\end{figure}
Indeed, unlike other types of data, the presence of missing values is predominant in the careers of these runners.
Out of $15498$ observable seasonal best performances of $Q=369$ runners in $P=3$ distances and $T = 14$ years, only $2411$ seasonal best results are observed.
A missing value occurs for a runner if the runner does not conclude (and, hence, record) any official competition in a specific discipline during one year.
The reasons for not observing any performance can be multiple.
The runner can be \emph{not in career} in a specific year, and hence no races are performed during that year.
Alternatively, a runner in career can decide not to compete in a specific discipline for several reasons, such as lack of preparation, attitude, technical choices, etc.
To understand how missing data patterns differ between runners, Figure \ref{fig:missing_data_patterns} shows the observed patterns of missing data for nine distinct runners present in the sample.
The patterns shown differ in various features.
Some runners have few observations, such as runners $241$ and $119$.
Other runners are characterized by careers with many observed performances, such as runners $129$ and $256$.
These two runners are interesting because they differ in the type of discipline they run: while runner $129$ competes only in the $1500$ meters discipline, runner $256$ competes in all the distances, recording a different number of observations for each distance.
The observed differences are typically associated both with technical choices and with different attitudes of the runners, leading them to compete in races of different lengths, according for example to their endurance and speed abilities.
\begin{figure}
\centering
\includegraphics[scale=0.5]{./images/ueueue2.pdf}
\caption{Missing data patterns for nine runners describing their actual participation in different distances across their ages, shown on top. Yellow squares indicate that the performance is present, blue squares the absence of the observations.}
\label{fig:missing_data_patterns}
\end{figure}
We define drop-in and drop-out as the runner's age at which the first observation in at least one discipline is present and the age after the last performance is observed, respectively.
Naturally, the careers of the runners differ both in length and in the age at which they start racing.
Based on this definition, runner $129$ drops in at age $19$ and drops out at age $31$, while runner $233$ drops in at age $18$ and has not dropped out in the period under examination.
The empirical distribution of runners' career length, shown in panel (a) of Figure \ref{fig:two_classes}, is right-skewed, with an average length of $5.04$ years. The increased frequency at $14$ years is due to data censoring.
Panel (b) illustrates that around $60\%$ of runners in the sample compete when they are $18$ years old, but about $20\%$ of them have already left competition by the age of $20$.
A visual exploration of whether these aspects are associated with observed performances is presented in panels (c) and (d): panel (c) shows that performance at drop-in tends to be worse (longer times) for runners who start competing later in life, while panel (d) shows the distributions of the observed performances, distinguishing between runners with careers longer or shorter than $7$ years.
In our data, runners with longer careers typically perform better than those with shorter careers.
The reason for this behavior can be either that runners with unsatisfactory careers leave competitions earlier, or that competing (and, hence, training for it) for a long period is a prerequisite for improving. We refer to the supplementary material for similar plots for the other distances.
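To fix ideas, the following Python sketch shows how drop-in age, drop-out age and career length can be derived from a binary participation matrix; the function is an illustrative reconstruction, assuming that ages $18,\dots,31$ correspond to the columns of the matrix, and is not part of the analysis code.
\begin{verbatim}
import numpy as np

def career_summary(D, first_age=18):
    # D has shape (P, T): D[p, t] = 1 if the runner recorded a seasonal best
    # in discipline p during year t (ages first_age, ..., first_age + T - 1).
    in_career = D.any(axis=0)
    if not in_career.any():
        return None
    t_first = int(np.argmax(in_career))
    t_last = len(in_career) - 1 - int(np.argmax(in_career[::-1]))
    drop_in_age = first_age + t_first
    drop_out_age = first_age + t_last + 1   # age after the last observed performance
    career_length = t_last - t_first + 1
    return drop_in_age, drop_out_age, career_length
\end{verbatim}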
\begin{figure}
\centering
\subfloat[]{\includegraphics[scale=0.35]{./images/length_career.pdf}}
\subfloat[]{\includegraphics[scale=0.35]{./images/entry_exit_percentage.pdf}}
\subfloat[]{\includegraphics[scale=0.35]{./images/performanes_drop_in.pdf}} \\ \subfloat[]{\includegraphics[scale=0.35]{./images/history_matters_1.pdf}}
\caption{
Panel (a) shows the distribution of runners' career length, panel (b) the percentage of runners that are in career (or not) at the different ages considered, and panel (c) the performances at drop-in in the $5000$ meters.
Panel (d) shows the distributions of seasonal best performances over ages in the $1500$ meters discipline, grouped by the career length of the runners. Blue boxplots represent the performances of runners with a career of at most $7$ years, red boxplots the performances of runners with a career longer than $7$ years.
}
\label{fig:two_classes}
\end{figure}
\section{The model}
\label{sec:model}
\subsection{Clustering longitudinal data with matrix state space model}
\noindent
Let the scalar element $y_{pq,t}$ denote the observation of the performance in discipline $p$ for runner $q$ during year $t$, for $p=1,\ldots,P$, $q=1,\ldots,Q$, and $t=1,\ldots, T$.
To facilitate the exposition, in this Section we assume that the complete set of observations is available in the sense that runners participate in all $P$ distances during the years and that no drop-ins or drop-outs are observed.
We assume that runners are divided into $G$ different unobserved groups according to the evolutionary trajectories during their careers.
Suppose that runner $q$ belongs to group $g$, and that their observations over time are described by the following dynamic linear model
\begin{align}
\label{meas_clust_scalar}
y_{pq,t} & = \mathbf{z}_{p}^\top \boldsymbol{\alpha}_{p,t}^{(g)} + \varepsilon_{pq,t},
\\
\label{state_clust_scalar}
\boldsymbol{\alpha}_{p,t+1}^{(g)} & = \boldsymbol{T}_p \boldsymbol{\alpha}_{p,t}^{(g)} + \boldsymbol{\xi}_{p,t}^{(g)},
\end{align}
in which $\boldsymbol{\alpha}_{p,1}^{(g)} \sim \text{N}_{F_p}(\widehat{\boldsymbol{\alpha}}_{p,1\vert0}^{(g)}, \boldsymbol{P}_{p, 1\vert0}^{(g)})$, for $p=1,\ldots,P$, $t=1,\ldots, T$, and $\widehat{\boldsymbol{\alpha}}_{p,1\vert0}^{(g)}$, $\boldsymbol{P}_{p, 1\vert0}^{(g)}$ are fixed mean and variance for the initial state, respectively. The row vector $\mathbf{z}_{p}^\top$, which has a known structure, links the observation $y_{pq,t}$ to the column vector $\boldsymbol{\alpha}_{p,t}^{(g)}$, which describes the group-specific dynamics of the $p$--th discipline for all the runners that belong to group $g$.
These dynamics are determined by the state transition equation that describes a first-order autoregressive process with transition matrix $\boldsymbol{T}_p$, which is discipline-specific, known, and shared across all the groups.
In this way, for a generic discipline $p$, we require that the latent states of the different groups are different from each other, but are characterized by the same Markovian dependence induced by $\boldsymbol{T}_p$.
Moreover, this dependence is not required to be common across different distances, as $\boldsymbol{T}_p$ may differ from $\boldsymbol{T}_{p^\prime}$ for any $p\neq p^\prime$.
The error terms $\varepsilon_{pq,1},\ldots, \varepsilon_{pq,T}$ are assumed to be Gaussian with zero-mean and variances that can be discipline and subject-specific. They are assumed to be serially independent and independent of both the states $\boldsymbol{\alpha}_{p,1}^{(g)}, \ldots,\boldsymbol{\alpha}_{p,T}^{(g)}$ and the disturbances $\boldsymbol{\xi}_{p,1}^{(g)}, \ldots,\boldsymbol{\xi}_{p,T}^{(g)}$, whose covariance is $\boldsymbol{\Psi}_{pg}$, for $p = 1,\ldots, P$ and $g=1,\ldots,G$.
Let $\mathbf{y}_{\cdot q, t} = (y_{1q,t}, \ldots, y_{Pq,t})^\top$, $\boldsymbol{\alpha}_{t}^{(g)} = (\boldsymbol{\alpha}_{1,t}^{(g)\top},\ldots,\boldsymbol{\alpha}_{P, t}^{(g)\top})^\top$,
$\boldsymbol{\varepsilon}_{\cdot q, t} = (\varepsilon_{1q,t}, \ldots, \varepsilon_{Pq,t})^\top$, $\boldsymbol{\xi}_{t}^{(g)} = (\boldsymbol{\xi}_{1,t}^{(g)\top},\ldots,\boldsymbol{\xi}_{P, t}^{(g)\top})^\top$, $\widehat{\boldsymbol{\alpha}}_{1 \vert 0}^{(g)} = (\widehat{\boldsymbol{\alpha}}_{1,{1 \vert 0}}^{(g)\top},\ldots,\widehat{\boldsymbol{\alpha}}_{P, {1 \vert 0}}^{(g)\top})^\top$, and $F = \sum_{p=1}^{P}F_p$. Moreover, let
$\boldsymbol{T} = \text{blkdiag}(\boldsymbol{T}_1, \ldots,\boldsymbol{T}_P)$, $\boldsymbol{P}_{1\vert0}^{(g)} = \text{blkdiag}(\boldsymbol{P}_{1,1\vert 0}^{(g)}, \ldots,\boldsymbol{P}_{P,1\vert 0}^{(g)})$, as well as the covariance matrix $\boldsymbol{P}_{1\vert0} = \text{blkdiag}(\boldsymbol{P}_{1\vert0}^{(1)},\ldots, \boldsymbol{P}_{1\vert0}^{(G)})$ where $\text{blkdiag}(\boldsymbol{X}_a,\ldots,\boldsymbol{X}_z)$ is the block-diagonal operator, creating a block-diagonal matrix with arguments $\boldsymbol{X}_a,\ldots, \boldsymbol{X}_z$ stacked in the main diagonal. Finally, let $\boldsymbol{Z}$ be the $P \times F$ matrix storing, in its $p$--th row, the row-vector $\mathbf{z}_p^\top$ starting from column $1-F_p+\sum_{j=1}^{p} F_j$, and zeros otherwise, and define also the following matrices:
\begin{equation*}
\boldsymbol{Y}_t = \begin{bmatrix} \mathbf{y}_{\cdot 1,t} & \ldots & \mathbf{y}_{\cdot Q,t} \end{bmatrix}, \quad \boldsymbol{A}_t = \begin{bmatrix}\boldsymbol{\alpha}_t^{(1)}&\ldots &\boldsymbol{\alpha}_t^{(G)}\end{bmatrix}, \quad \boldsymbol{S}^\top = \begin{bmatrix} \mathbf{s}_{1\cdot}^\top & \ldots & \mathbf{s}_{Q\cdot}^\top \end{bmatrix},
\end{equation*}
\begin{equation*}
\boldsymbol{E}_t = \begin{bmatrix} \boldsymbol{\varepsilon}_{\cdot 1,t} & \ldots & \boldsymbol{\varepsilon}_{\cdot Q,t} \end{bmatrix}, \quad \boldsymbol{\Xi}_t = \begin{bmatrix}\boldsymbol{\xi}_{t}^{(1)} & \ldots & \boldsymbol{\xi}_{t}^{(G)}\end{bmatrix},\quad \widehat{\boldsymbol{A}}_{1\vert0} = \begin{bmatrix} \widehat{\boldsymbol{\alpha}}_{1 \vert 0}^{(1)}&\ldots &\widehat{\boldsymbol{\alpha}}_{1 \vert 0}^{(G)}\end{bmatrix},
\end{equation*}
where $\mathbf{s}_{q\cdot}^\top = (\mathbbm{1}(S_{q}=1),\ldots,\mathbbm{1}(S_{q}=G))^\top$ is an allocation vector such that $\mathbbm{1}(S_q=g) =1$ if runner $q$ belongs to group $g$, and $0$ otherwise.
Leveraging the previous notation, the model admits a matrix-variate state space representation, in which
\begin{align}
\label{clust_meas_matr}
\boldsymbol{Y}_t & = \boldsymbol{Z}\boldsymbol{A}_t \boldsymbol{S}^\top + \boldsymbol{E}_t, \quad \boldsymbol{E}_t \sim \text{MN}_{P,Q}(\mathbf{0}, \boldsymbol{\Sigma}^C \otimes \boldsymbol{\Sigma}^R),\\
\label{clust_state_matr}
\boldsymbol{A}_{t+1} & = \boldsymbol{T} \boldsymbol{A}_t + \boldsymbol{\Xi}_t, \quad \quad \boldsymbol{\Xi}_t \sim \text{MN}_{F,G}(\mathbf{0}, \boldsymbol{\Psi}^C \otimes \boldsymbol{\Psi}^R),
\end{align}
with $\boldsymbol{A}_1 \sim \text{MN}_{F,G}(\widehat{\boldsymbol{A}}_{1\vert0}, \boldsymbol{P}_{1\vert 0})$.
The matrix $\boldsymbol{S}$ in Equation \eqref{clust_meas_matr} is a selection matrix, with the role of selecting, for each runner, the columns of states associated with the group the runner belongs to, and silencing the others.
The matrices of errors and disturbances are assumed to follow a matrix-variate Normal distribution with covariance matrix decomposed by a Kronecker product \citep{guptanagar}, which is a typical assumption in models for matrix-variate time series \citep[see, e.g.,][]{wangwest2009, Chen_constr_factor2020}.
Here, $\boldsymbol{\Sigma}^R$ and $\boldsymbol{\Psi}^R$ are row-covariance matrices with dimensions $P \times P$ and $F \times F$, and measure row-wise dependence of errors and disturbances, respectively.
Conversely, the matrices $\boldsymbol{\Sigma}^C$ and $\boldsymbol{\Psi}^C$ are column-covariance matrices with dimensions $Q \times Q$ and $G \times G$ that measure column-wise dependence of errors and disturbances, respectively.
Dependent rows or columns are characterized by full covariance matrices, while independent rows or columns are characterized by diagonal matrices \citep{guptanagar}. Thus, the model is general enough to encompass various forms of dependence, while keeping the number of parameters low with respect to alternative full specifications of the covariance matrices.
Since we deal with annual-based data describing the careers of different runners, we assume $\boldsymbol{Z} = \boldsymbol{I}_P$,
$\boldsymbol{T} = \boldsymbol{I}_P$, $\boldsymbol{\Sigma} = \boldsymbol{I}_Q \otimes \boldsymbol{\Sigma}^R$ and $\boldsymbol{\Psi} = \boldsymbol{I}_G \otimes \boldsymbol{\Psi}^R$.
These assumptions imply that the states of different groups describing runners' performance across years are independent of each other and characterized by the same temporal dependence structures, which are those implied by local level models in which the trends of each discipline are discrete random walks \citep{durbinkoopman}.
Assuming a priori that performance on discipline $p$ at time $t+1$ is a deviation from that at year $t$ seems reasonable, as a runner is not expected to progress or regress excessively from year to year.
Further, setting $\boldsymbol{\Psi}^C$ to be diagonal implies that groups are independent of each other, a typical assumption in clustering. In this framework, the role of the states (i.e. $\boldsymbol{\alpha}_{p,t}^{(g)}$) is to describe the various possible evolutions of the runners' performances.
How these states evolve can be considered as the combination of many factors (e.g. individual traits, training, motivation, etc.) that lead to unpredictable prior behaviors.
Further, conditional on the states and $\boldsymbol{S}$, there is no reason to assume the runners to be dependent on each other.
We note also that $\boldsymbol{\Sigma}^C =\boldsymbol{I}_Q$ and $\boldsymbol{\Psi}^C =\boldsymbol{I}_G$ are even stronger restrictions than required, but they help stabilize the estimation of the other components, given the large number of missing observations present in the data.
Removing these restrictions is a delicate aspect in the predictive context in which we fit our model (see Section \ref{sec:posterior_inference}). Indeed, the aim of increasing flexibility conflicts with the goal of reducing the variability of the estimates and predictions, due both to the large dimension of the state space we are considering and to the need of adopting a diffuse prior specification and a large number of groups to adequately capture the variability of the phenomena under study (see Section \ref{sec:prior_specification}).
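To illustrate the data generating process implied by these restrictions, a minimal simulation sketch is reported below; dimensions, variances and initial levels are arbitrary values chosen only for illustration and do not correspond to estimates from our data.
\begin{verbatim}
import numpy as np

def simulate_performances(T_len=14, Q=369, G=10, seed=1):
    # Sketch of the matrix state space model with Z = T = I_P,
    # Sigma^C = I_Q and Psi^C = I_G, for P = 3 distances.
    rng = np.random.default_rng(seed)
    P = 3
    Sigma_R = np.diag([20.0, 8.0, 4.0]) ** 2   # measurement error (illustrative)
    Psi_R = np.diag([10.0, 4.0, 2.0]) ** 2     # state disturbance (illustrative)
    pi = rng.dirichlet(np.full(G, 1.0 / G))    # mixture weights
    S = rng.choice(G, size=Q, p=pi)            # group membership of each runner
    A = np.empty((T_len, P, G))                # group-specific latent trends
    A[0] = (np.array([[1000.0], [260.0], [125.0]])
            + rng.normal(scale=[[40.0], [12.0], [6.0]], size=(P, G)))
    Y = np.empty((T_len, P, Q))
    for t in range(T_len):
        mean_t = A[t][:, S]                    # Z A_t S^T with Z = I_P
        Y[t] = mean_t + rng.multivariate_normal(np.zeros(P), Sigma_R, size=Q).T
        if t + 1 < T_len:                      # random-walk state equation
            A[t + 1] = A[t] + rng.multivariate_normal(np.zeros(P), Psi_R, size=G).T
    return Y, A, S
\end{verbatim}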
\subsection{Missing data inform on clustering structure}
The previous section was developed conditional on all data being observed, i.e., when the runners run all $P$ distances during the entire period of observation.
However, this is not the case for data that describe the career trajectories of runners, since the lack of data is part of the career itself.
To include these factors as informative aspects of runners' careers, we consider two additional variables in the model.
First, we consider
\begin{align*}
d_{pq,t} = \begin{cases}
1, \quad \text{if discipline $p$ for runner $q$ is observed at time $t$},
\\
0, \quad \text{otherwise},
\end{cases}
\end{align*}
to describe the presence or absence of an observation in each discipline for each runner.
Second, we consider the variable $d_{q,t}^\star$, which describes the career status of runner $q$ during year $t$:
\begin{align*}
d_{q,t}^\star =
\begin{cases} 0, \quad \text{if runner $q$ has not yet started their career at year $t$,} \\
1, \quad \text{if runner $q$ is in career during year $t$}, \\
2, \quad \text{if runner $q$ has already ended their career by year $t$.}
\end{cases}
\end{align*}
The variable $d_{q,t}^\star$ is non-decreasing in $t$ and describes the three possible states of a runner's career.
Moreover, if $d_{q,t}^\star \in \{0,2\}$, then $d_{pq,t} = 0$ with probability $1$, for $ p =1,\ldots,P$, meaning that no distances are observed since the runner is not competing.
On the contrary, there might be runners with $d_{pq,t} = 0$, for $p = 1,\ldots, P$, even if $d_{q,t}^\star=1$. This is typical of runners who, despite being in career, decide not to compete during one specific year, but compete in the following years.
The division into three non-concurrent states allows temporal dynamics to be introduced into the model of missing data patterns in a simple way.
In particular, let $\mathbf{d}_{q}^\star = (d_{q,1}^\star,\ldots,d_{q,T}^\star)$, $\mathbf{d}_{\cdot q,t} = (d_{1q,t},\ldots,d_{Pq,t})^\top$, and $\boldsymbol{D}_{q} = \begin{bmatrix}\mathbf{d}_{\cdot q,1} & \ldots & \mathbf{d}_{\cdot q,T}\end{bmatrix}$, $\mathcal{D} = \{\boldsymbol{D}_1,\ldots, \boldsymbol{D}_Q\}$, and $\mathcal{D}^\star = \{\mathbf{d}_{1}^\star, \ldots, \mathbf{d}_{Q}^\star\}$.
First, we make the following independence assumption among different subjects
\begin{align}
\label{indip2}
p_{\boldsymbol{\theta}}(\mathcal{D}, \mathcal{D}^\star |\boldsymbol{S}) = \prod_{q=1}^Q p_{\boldsymbol{\theta}} ( \boldsymbol{D}_{q}, \mathbf{d}_{q}^\star \vert S_q).
\end{align}
As a second step, we let $\mathbf{d}_{q}^\star$ and $\boldsymbol{D}_{q}$ be dependent on the group $S_q=g$ to which the runner $q$ belongs, and make the following conditional independence assumption
\begin{align}
\nonumber
p_{\boldsymbol{\theta}} ( \boldsymbol{D}_{q}, \mathbf{d}_{q}^\star \vert S_q) &= p_{\boldsymbol{\theta}} ( \boldsymbol{D}_{q} \vert \mathbf{d}_{q}^\star, S_q) p_{\boldsymbol{\theta}} ( \mathbf{d}_{q}^\star \vert S_q) \\
\label{eq:conditioning_missing}&=
\prod_{t=1}^T \bigg\{ \prod_{p=1}^P p_{\boldsymbol{\theta}}( d_{pq,t} \vert d_{q,t}^\star, S_q ) \bigg\} p_{\boldsymbol{\theta}}(d_{q,t}^\star \vert d_{q,t-1}^\star, S_q),
\end{align}
where $p_{\boldsymbol{\theta}}(d_{q,1}^\star \vert d_{q,0}^\star, S_q) = \lambda_{1g}^\star$ if $d_{q,1}^\star =1$, and $p_{\boldsymbol{\theta}}(d_{q,1}^\star \vert d_{q,0}^\star, S_q) =1-\lambda_{1g}^\star$ if $d_{q,1}^\star =0$.
Note that, in Equations \eqref{indip2} and \eqref{eq:conditioning_missing}, the subscript ${\boldsymbol{\theta}}$ in $p_{\boldsymbol{\theta}}( A \vert B) $ denotes, with a slight abuse of notation, conditional dependence of the form $p( A \vert B, \boldsymbol{\theta})$, where $\boldsymbol{\theta}$ denotes a set of unknown parameters of finite dimension (specified later).
In Equation \eqref{eq:conditioning_missing} we consider the following assumptions: for runner $q$, the conditional probability at time $t$ of a transition from state $0$ to state $1$ is $p_{\boldsymbol{\theta}}(d_{q,t}^\star=1 \vert d_{q,t-1}^\star=0, S_q=g) = \lambda_{1g}^\star$ and that from state $1$ to state $2$ is $p_{\boldsymbol{\theta}}(d_{q,t}^\star=2 \vert d_{q,t-1}^\star=1, S_q=g) = \lambda_{2g}^\star$. Both probabilities are group-dependent but constant over time.
By construction, the transitions from state $1$ to state $0$ or from state $2$ to states $0$ or $1$ are impossible events.
Further, for runner $q$, the conditional probability at time $t$ of observing a value for the generic discipline $p$ is $p_{\boldsymbol{\theta}}( d_{pq,t}=1 \vert d_{q,t}^\star=1, S_q =g)=\delta_{pg}$, which is group-dependent, but fixed over time.
Although transitions in the prevalent type of discipline over a long career are possible for some runners (e.g. from shorter to longer distances), these transitions are difficult to detect with annual-based data, which summarize entire years, since competing in only one race of a given discipline is enough to be included in the corresponding discipline-specific ranking list.
Similarly, the assumption of probabilities constant over the years does not contemplate the possibility that runners get seriously injured and thus do not compete in any discipline for more than a year.
Although there is no clear indication in the literature about the average duration and severity of an injury in middle distance runners \citep[see, e.g.,][]{van_gent_incidence_2007}, we assume here that severe injuries (injuries that stop competitions for more than one year) are present only in small proportions, leaving possible investigations of this aspect to future work.
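A minimal Python sketch of this group-specific missing-data mechanism for a single runner is given below; the parameter values in the final line are arbitrary and serve only to illustrate the roles of $\lambda_{1g}^\star$, $\lambda_{2g}^\star$ and $\delta_{pg}$.
\begin{verbatim}
import numpy as np

def simulate_missing_patterns(T_len, lam1, lam2, delta, seed=2):
    # lam1 = P(drop-in | not yet in career), lam2 = P(drop-out | in career),
    # delta[p] = P(discipline p observed | in career), all constant over time.
    rng = np.random.default_rng(seed)
    delta = np.asarray(delta)
    d_star = np.zeros(T_len, dtype=int)
    D = np.zeros((delta.size, T_len), dtype=int)
    state = 0
    for t in range(T_len):
        if state == 0 and rng.random() < lam1:
            state = 1                                 # drop-in
        elif state == 1 and rng.random() < lam2:
            state = 2                                 # drop-out (absorbing)
        d_star[t] = state
        if state == 1:
            D[:, t] = rng.random(delta.size) < delta  # discipline participation
    return d_star, D

d_star, D = simulate_missing_patterns(14, lam1=0.6, lam2=0.2, delta=[0.3, 0.8, 0.4])
\end{verbatim}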
\section{Likelihood and posterior distributions}
\label{sec:prior_specification}
\subsection{Likelihood and prior for the proposed model}
\noindent
Let $\mathcal{Y}$ denote the set of observations as if they were fully observed, $\mathcal{Y}^\star$ the set of variables which are effectively observed, and $\widetilde{\mathcal{Y}}$ the completion of $\mathcal{Y}^\star$, i.e. such that $\mathcal{Y} = \mathcal{Y}^\star \cup \widetilde{\mathcal{Y}}$ and $ \mathcal{Y}^\star \cap \widetilde{\mathcal{Y}} = \emptyset$.
Let also $\mathcal{A}=\{\boldsymbol{A}_1, \ldots, \boldsymbol{A}_T\}$ be the set storing the latent states of the state space model.
In order to derive the posterior distribution of the parameters, we first present the likelihood of the observed process, augmented with the states $\mathcal{A}$, the missing observations $\widetilde{\mathcal{Y}}$, and $\boldsymbol{S}$.
The augmented likelihood is characterized by the following conditional independence structure
\begin{align}
\label{aug_likelihood_clus}
p_{\boldsymbol{\theta}}(\mathcal{Y}, \mathcal{D}, \mathcal{D}^\star, \mathcal{A}, \boldsymbol{S}) = p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S})
p_{\boldsymbol{\theta}}(\mathcal{D} \vert \mathcal{D}^\star, \boldsymbol{S}) p_{\boldsymbol{\theta}}( \mathcal{D}^\star \vert \boldsymbol{S}) p_{\boldsymbol{\theta}}(\boldsymbol{S}) p_{\boldsymbol{\theta}} (\mathcal{A}).
\end{align}
In Equation \eqref{aug_likelihood_clus}, $p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{D}^\star, \mathcal{A}, \boldsymbol{S}) = p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S})$, and is determined by the measurement Equation \eqref{clust_meas_matr}, for which all observations are assumed to be available, and the prior on $\mathcal{A}$ is implicitly determined by the form of the state equation of the state space in Equation \eqref{clust_state_matr}.
However, only $\mathcal{Y}^\star =\{\mathcal{Y}_1^\star,\ldots, \mathcal{Y}_T^\star\}$ is observed; nevertheless, $p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S})$ can be obtained by conditioning, noting that
\begin{align*}
p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S}) = p_{\boldsymbol{\theta}} (\mathcal{Y}^\star \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S}) p_{\boldsymbol{\theta}} ( \widetilde{\mathcal{E}} \vert \mathcal{Y}^\star, \mathcal{D}, \boldsymbol{S}),
\end{align*}
where $\widetilde{\mathcal{E}}$ stores all those entries in $\mathcal{E}= \{ \boldsymbol{E}_1,\ldots,\boldsymbol{E}_T\}$ associated with the missing values.
To characterize $\boldsymbol{S}$, we make the following independence assumption \begin{align}
\label{indipdendence_assumption}
p_{\boldsymbol{\theta}}(\boldsymbol{S}) = \prod_{q=1}^Q p_{\boldsymbol{\theta}}(\mathbf{s}_{q\cdot}) = \prod_{q=1}^Q \prod_{g=1}^G \pi_g^{\mathbbm{1}(S_q=g)},
\end{align}
where $\boldsymbol{\pi} = (\pi_1,\ldots,\pi_G)$ is such that $\pi_g \in (0,1)$, for $g=1,\ldots, G$, and $\sum_{g=1}^G \pi_g=1$.
As concerns model parameters, we assume that $\boldsymbol{\theta}$ factorizes as follows
\begin{align}
\label{prior_distribution}
p(\boldsymbol{\theta}) = p(\widehat{\boldsymbol{A}}_{1\vert 0}) p(\boldsymbol{\Sigma}^R) p(\boldsymbol{\Psi}^R) p(\boldsymbol{\pi}) \prod_{g=1}^{G} \bigg\{p(\boldsymbol{\lambda}_g^\star)\prod_{p=1}^P p(\delta_{pg})\bigg\},
\end{align}
where $\boldsymbol{\lambda}_g^\star = (\lambda_{1g}^\star,\lambda_{2g}^\star)$.
We further assume the probabilities driving missing data patterns to follow uninformative Beta distributions, such that $\lambda_{1g}^\star \sim \text{Be}(1, 1)$, $\lambda_{2g}^\star \sim \text{Be}(1, 1)$, and $\delta_{pg} \sim \text{Be}(1, 1)$, for any $p=1,\ldots,P$ and $g=1,\ldots,G$. Covariance matrices are assumed to follow Inverse Wishart distributions, such that $\boldsymbol{\Sigma}^R\sim \text{IW}_P(P+1, \boldsymbol{I}_P)$ and $\boldsymbol{\Psi}^R\sim \text{IW}_P(P+1, \boldsymbol{I}_P)$.
Regarding the state space model, $\widehat{\boldsymbol{A}}_{1\vert 0}$ is assumed to follow a matrix-variate Normal distribution with mean $\bar{\mathbf{y}}_1 \mathbf{1}_G^\top$ and covariance $\boldsymbol{P}_{1\vert 0}= \boldsymbol{I}_G \otimes \boldsymbol{P}_{1\vert 0}^0$, with $\boldsymbol{P}_{1\vert 0}^0=\text{diag}(p^2_1,\ldots,p^2_P)$, where $\bar{\mathbf{y}}_1$ is the vector storing the sample averages of the observed distances at the first time instant and $p^2_p$ is twice the sample variance of the $p$--th observed discipline at the first time instant.
It is interesting to observe, however, that the number of parameters depends on the number of groups $G$, which is fixed. We consider an overfitting finite mixture specification of the model \citep[see, e.g.,][]{walli_sparse2016, walli_label}, in which $G$ is set to be large and $\boldsymbol{\pi} = (\pi_1,\ldots,\pi_G) \sim \text{Dir}_G(e_1,\ldots, e_G)$ with hyper-parameters $e_1 = \ldots = e_G = 1/G$. With this choice, the prior on the mixture weights favours emptying the extra components, while leaving complete symmetry between the different components included in the model.
This assumption implies that, during the estimation procedure, the number of filled components may be lower than $G$, leading to the classical distinction between the number of clusters $G^{+}$ (i.e. the number of filled components) and the number of components $G$ included in the model, with $G^{+}\leq G$ \citep[see,][for an extensive discussion on the topic]{walli_sparse2016,walli_label,telescoping_bayesian}.
Under this prior specification, it is possible to derive a Gibbs sampling algorithm in which all full conditionals are conditionally conjugate. Its derivation is reported in the supplementary material. Draws of the states $\mathcal{A}$ are obtained using the simulation smoothing technique by \cite{sismo_koopman}, applied to a reduced form of the model derived using the reduction by transformation technique \citep[see][]{jungbacker2008likelihood}.
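The emptying behaviour induced by the sparse Dirichlet prior can be illustrated with the short sketch below, which draws a prior allocation of the runners and counts the filled components; the numbers are purely illustrative.
\begin{verbatim}
import numpy as np

def prior_cluster_draw(G=20, Q=369, seed=3):
    # One draw from the overfitting-mixture prior with e_g = 1 / G: the weights
    # concentrate on few components, so G_plus is typically much smaller than G.
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.full(G, 1.0 / G))   # sparse mixture weights
    S = rng.choice(G, size=Q, p=pi)           # prior allocation of the Q runners
    G_plus = np.unique(S).size                # number of filled components
    return pi, S, G_plus
\end{verbatim}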
\subsection{Posterior distribution and alternative specifications}
The goal of our inferential procedure is to derive quantity of interests (e.g. predictive distributions) from a sample of the posterior distribution
\begin{align}
\label{posterior_distr}
p^{\text{C}}(\boldsymbol{\theta}, \mathcal{A}, \boldsymbol{S}, \widetilde{\mathcal{E}} \vert \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star) \propto p(\boldsymbol{\theta}) p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S})
p_{\boldsymbol{\theta}}(\mathcal{D} \vert \mathcal{D}^\star, \boldsymbol{S}) p_{\boldsymbol{\theta}}( \mathcal{D}^\star \vert \boldsymbol{S}) p_{\boldsymbol{\theta}}(\boldsymbol{S}) p_{\boldsymbol{\theta}} (\mathcal{A}).
\end{align}
A sample from the posterior distribution can be obtained using a Gibbs sampling scheme, as discussed in the supplementary material. In Equation \eqref{posterior_distr}, the superscript ${}^\text{C}$ indicates that the posterior refers to the \emph{complete} model (or model 1), which assumes that both attitude and history matter.
By restricting the complete model in Equation \eqref{posterior_distr} it is possible to obtain a set of alternative reduced specifications, in which different missing data pattern schemes have a different influence on clustering.
More specifically, a set of alternative specifications can be derived by dropping the dependence on the selection matrix $\boldsymbol{S}$ in some parts of the model. We consider the following set of alternative specifications:
\begin{description}
\item[Model 2:] Missing data do not matter:
\begin{align}
\label{model2}
p^{\text{NM}}(\boldsymbol{\theta}, \mathcal{A}, \boldsymbol{S}, \widetilde{\mathcal{E}} \vert \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star) \propto p(\boldsymbol{\theta}) p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S})
p_{\boldsymbol{\theta}}(\boldsymbol{S}) p_{\boldsymbol{\theta}} (\mathcal{A});
\end{align}
\item[Model 3:] Only attitude matters:
\begin{align}
\label{model3} p^{\text{A}}(\boldsymbol{\theta}, \mathcal{A}, \boldsymbol{S}, \widetilde{\mathcal{E}} \vert \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star) \propto p(\boldsymbol{\theta}) p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S})
p_{\boldsymbol{\theta}}(\mathcal{D} \vert \mathcal{D}^\star, \boldsymbol{S}) p_{\boldsymbol{\theta}}(\boldsymbol{S}) p_{\boldsymbol{\theta}} (\mathcal{A});\end{align}
\item[Model 4:] Only history matters:
\begin{align}\label{model4}
p^{\text{H}}(\boldsymbol{\theta}, \mathcal{A}, \boldsymbol{S}, \widetilde{\mathcal{E}} \vert \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star) \propto p(\boldsymbol{\theta}) p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S})
p_{\boldsymbol{\theta}}( \mathcal{D}^\star \vert \boldsymbol{S}) p_{\boldsymbol{\theta}}(\boldsymbol{S}) p_{\boldsymbol{\theta}} (\mathcal{A}).\end{align}
\end{description}
Alternative model specifications in Equations \eqref{model2}--\eqref{model4} have meaningful structural interpretations when compared with the complete model in Equation \eqref{posterior_distr}.
In model $2$, both $p_{\boldsymbol{\theta}}(\mathcal{D} \vert \mathcal{D}^\star, \boldsymbol{S}) = p(\mathcal{D} \vert \mathcal{D}^\star)$ and $p_{\boldsymbol{\theta}}( \mathcal{D}^\star \vert \boldsymbol{S})=p( \mathcal{D}^\star)$, meaning that neither attitude nor history matters for clustering, and therefore they are not correlated with the evolution of performances.
In this case, missing data are still considered in the estimation procedure for obtaining $\widetilde{\mathcal{E}}$ and other elements related to the set of completed observations $\mathcal{Y}$ (e.g., $\boldsymbol{\Sigma}^R$), but the part of the likelihood describing the evolution of $\mathcal{D}$ and $\mathcal{D}^\star$ no longer depends on $\boldsymbol{\theta}$ and $\boldsymbol{S}$.
In model $3$, instead, attitude matters but history does not.
This is obtained by requiring $p_{\boldsymbol{\theta}}( \mathcal{D}^\star \vert \boldsymbol{S})= p( \mathcal{D}^\star)$.
Note that the dependence of $p_{\boldsymbol{\theta}}(\mathcal{D} \vert \mathcal{D}^\star, \boldsymbol{S})$ on $\mathcal{D}^\star$ is preserved, an important aspect because it is involved in the estimation of the parameters $\delta_{pg}$ related to runners' attitudes.
Finally, in model $4$, $p_{\boldsymbol{\theta}}(\mathcal{D} \vert \mathcal{D}^\star, \boldsymbol{S}) = p(\mathcal{D} \vert \mathcal{D}^\star)$, leading to a model in which runners' attitudes do not matter for the evolution of the performances.
For simplicity, we do not distinguish the notation $p(\boldsymbol{\theta})$ across the different models, letting the elements included in $\boldsymbol{\theta}$ differ according to the specification being considered (e.g. $\boldsymbol{\theta}=\{\widehat{\boldsymbol{A}}_{1\vert 0}, \boldsymbol{\Sigma}^R, \boldsymbol{\Psi}^R, \boldsymbol{\pi}\}$ in model $2$).
We can provide an interpretation of our model construction from a two-step Bayesian learning perspective.
First, different structured priors on $\boldsymbol{S}$ are obtained, which simply reflect different clustering structures that we believe to be relevant for clustering the performances.
For example, for the complete model, this structured prior is
\begin{align*}
p^{\text{C}}(\boldsymbol{\theta}, \boldsymbol{S} \vert \mathcal{D}, \mathcal{D}^\star) \propto
p(\boldsymbol{\theta})
p_{\boldsymbol{\theta}}(\mathcal{D} \vert \mathcal{D}^\star, \boldsymbol{S}) p_{\boldsymbol{\theta}}( \mathcal{D}^\star \vert \boldsymbol{S}) p_{\boldsymbol{\theta}}(\boldsymbol{S}).
\end{align*}
Second, the knowledge of the clustering structure is updated by considering the likelihood related to the performances, which is $p_{\boldsymbol{\theta}} (\mathcal{Y} \vert \mathcal{D}, \mathcal{A}, \boldsymbol{S}) p_{\boldsymbol{\theta}} (\mathcal{A})$.
Comparing different models allows us to determine which of the four alternative specifications is most credible in explaining the observed variability in runners' performances.
Comparisons between models are obtained by assessing the models' abilities to predict the performance of out-of-sample runners, as explained in Section \ref{sec:competitors}.
We note here that, in our model construction, performances depend directly on missing data patterns, and that
the opposite direction, in which performances have a direct influence on whether runners remain (or not) in the sample, is not considered.
In general, it is easy to imagine that runners with unsatisfactory careers are more likely to leave competitions.
Our conjecture is that the decision to compete is a determinant factor for improving performances; this is motivated by both physiological and psychological considerations, according to which defining goals and training to achieve them can help improve performances as well.
Further, although the prior on the clustering structure does not depend on the performances, the posterior does.
The choice of a sufficiently large number of groups $G$ allows us to account also for variability present in the data that may effectively be caused by cases in which performances have a direct impact on the decision to leave competitions.
\section{Posterior inference and out of sample predictions}
\label{sec:posterior_inference}
\subsection{Predictive inference}
Let $\mathcal{Y}_{[n]}^\star,$ $\boldsymbol{D}_{[n]}$, and $\mathbf{d}_{[n]}^\star$ denote
the random variables describing, respectively, the performances, the participation in the distances and the history of a new runner $n$, not included in the sample. Let also $\mathbf{\Theta} = (\mathcal{A}, \boldsymbol{\theta})$ be the unknown elements which are shared across different runners, characterized by a posterior distribution $p^j(\mathbf{\Theta} \vert \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star)$ that can be obtained by means of an MCMC algorithm under model $j$.
We consider the following predictive density:
\begin{align}
\label{predictive2}
p^j(\mathcal{Y}_{[n]}^\star \vert \boldsymbol{D}_{[n]}, \mathbf{d}_{[n]}^\star, \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star) = \int p^j(\mathbf{\Theta} \vert \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star) p^j_{\boldsymbol{\Theta}}(\mathcal{Y}^\star_{[n]} \vert \boldsymbol{D}_{[n]}, \mathbf{d}^\star_{[n]}) \text{d}\mathbf{\Theta},
\end{align}
for $j \in \{\text{C}, \text{NM}, \text{A}, \text{H}\}$.
In Equation \eqref{predictive2}, missing data patterns are assumed to be known and are treated as control variables that potentially have an influence on the predicted performances $\mathcal{Y}_{[n]}^\star$.
While the posterior distribution $p^j(\mathbf{\Theta} \vert \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star)$ is an output of the MCMC algorithms (see supplementary material),
the likelihood $p^j_{\boldsymbol{\Theta}}(\mathcal{Y}^\star_{[n]} \vert \boldsymbol{D}_{[n]}, \mathbf{d}^\star_{[n]})$ for the new individual $n$ can be obtained by marginalizing over groups as follows:
\begin{align*}
p^j_{\boldsymbol{\Theta}}(\mathcal{Y}^\star_{[n]} \vert \boldsymbol{D}_{[n]}, \mathbf{d}^\star_{[n]}) = \sum_{g=1}^{G} p^j_{\boldsymbol{\Theta}}(S_{[n]}=g \vert \boldsymbol{D}_{[n]}, \mathbf{d}^\star_{[n]}) p^j_{\boldsymbol{\Theta}}(\mathcal{Y}^\star_{[n]} \vert \boldsymbol{D}_{[n]}, \mathbf{d}^\star_{[n]}, S_{[n]} = g).
\end{align*}
In the equation, the cluster allocation follows a Multinomial distribution characterized by weights
\begin{align*}
p^j_{\boldsymbol{\Theta}}(S_{[n]}=g \vert \boldsymbol{D}_{[n]}, \mathbf{d}^\star_{[n]}) \propto p^j_{\boldsymbol{\Theta}}(S_{[n]}=g) p^j_{\boldsymbol{\Theta}}( \boldsymbol{D}_{[n]}, \mathbf{d}^\star_{[n]}\vert S_{[n]}=g),
\end{align*}
that depend on the prior allocation probabilities $p^j_{\boldsymbol{\Theta}}(S_{[n]}=g)$ but also on the observed missing data patterns, which reweight the cluster allocation by means of $p^j_{\boldsymbol{\Theta}}( \boldsymbol{D}_{[n]}, \mathbf{d}^\star_{[n]}\vert S_{[n]}=g)$.
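For illustration, a draw from the predictive density in Equation \eqref{predictive2} can be obtained from the MCMC output by first sampling the latent allocation from the weights above and then sampling the performances from the corresponding component. The following Python sketch assumes placeholder functions (\texttt{log\_weight} and \texttt{sample\_y\_given\_cluster}) for the model-specific terms; it is not the code used in our implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def predictive_draw(theta, D_new, d_new, log_weight, sample_y_given_cluster):
    """One draw of Y* for a new runner, given one posterior draw of Theta.

    log_weight(theta, g, D_new, d_new): log of p(S=g) * p(D, d | S=g)   (placeholder)
    sample_y_given_cluster(theta, g, D_new, d_new, rng): draw from
        p(Y* | D, d, S=g)                                               (placeholder)
    """
    G = theta["G"]                                   # number of mixture components
    logw = np.array([log_weight(theta, g, D_new, d_new) for g in range(G)])
    w = np.exp(logw - logw.max())
    w /= w.sum()                                     # normalised allocation weights
    g = rng.choice(G, p=w)                           # sample the latent allocation S
    return sample_y_given_cluster(theta, g, D_new, d_new, rng)

# repeating predictive_draw over the stored posterior draws of Theta yields a
# sample from the predictive density above
\end{verbatim}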
The next section details the use of the predictive distribution developed here for evaluating the informativeness of missing data patterns under different model specifications.
\subsection{Informativeness of missing data: out of sample comparison of alternative specifications}
\label{sec:competitors}
Let $\mathcal{Y}_{[1:N]}$ be a test set, storing the performances in different distances and years of $N$ runners not included in the training sample for model estimation.
Let also $y_{p[n],t}$ be the generic scalar element denoting the performance of runner $n$ in discipline $p$ during year $t$.
The quantity
\begin{align*}
\mathbb{Q}^j(y_{p[n],t}) = p^j(y_{p[n],t} \vert \boldsymbol{D}_{[n]}, \mathbf{d}_{[n]}^\star, \mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star),
\end{align*}
represents the predictive distribution obtained under model $j$, conditional on missing data patterns $(\boldsymbol{D}_{[n]}, \mathbf{d}_{[n]}^\star)$ and the set of available information $(\mathcal{Y}^\star, \mathcal{D}, \mathcal{D}^\star)$.
Let also $\widetilde{\mathbb{Q}}^j(y_{p[n],t})$ denote an approximation of $\mathbb{Q}^j(y_{p[n],t})$, given by a set of $B$ particles $\{y_{p[n],t}^{1},\ldots, y_{p[n],t}^{B}\}$.
It is possible to use the samples $\widetilde{\mathbb{Q}}^j(y_{p[n],t})$ to evaluate and compare the ability of our proposals in predicting the performances over different distances, for fixed missing data patterns described by $\boldsymbol{D}_{[n]}$, and $\mathbf{d}_{[n]}^\star$.
We base our evaluations on the empirical counterpart of the \emph{continuous ranked probability score} (CRPS) and the \emph{interval score} \citep[see,][]{raftery2007,Kruger2021}, preferring models that minimize these scoring rules and that provide adequate prediction interval estimates in terms of coverage and interval width.
The CRPS is defined as
\begin{align}
S_1(\mathbb{Q}^j(y_{p[n],t})) = \int_{-\infty}^\infty\big\{Q^j_{p[n],t}(z)- \mathbbm{1}(y_{p[n],t}\leq z)\big\}^2 \text{d}z,
\end{align}
where $Q^j_{p[n],t}(z)$ denotes the cumulative distribution function associated with $\mathbb{Q}^j(y_{p[n],t})$,
and the interval score is defined as
\begin{align*}
S_2(\mathbb{Q}^j(y_{p[n],t})) = (u_\alpha^j-l_\alpha^j) + \frac{2}{\alpha} \big\{ (l_\alpha^j-y_{p[n],t})\mathbbm{1}(y_{p[n],t}<l_\alpha^j) + (y_{p[n],t}-u_\alpha^j)\mathbbm{1}(y_{p[n],t}>u_\alpha^j)\big\},
\end{align*}
where $l_\alpha^j$ and $u_\alpha^j$ are the $\alpha/2$ and $1-\alpha/2$ quantiles for the distribution $\mathbb{Q}^j(y_{p[n],t})$, respectively, and $\alpha \in (0,1/2)$ is a fixed tolerance.
The interval score rewards narrow prediction intervals, and penalizes prediction intervals that do not include the observations. Details on these scores and their computation are reported in \cite{Kruger2021} (see, e.g., Equation 9).
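As an illustration, the empirical counterparts of the two scores can be computed from the $B$ predictive particles as in the following Python sketch, which uses the standard sample-based form of the CRPS and the empirical predictive quantiles for the interval score; it is a minimal sketch rather than the exact implementation used in our analysis.
\begin{verbatim}
import numpy as np

def crps_sample(y, draws):
    """Empirical CRPS of a scalar observation y given B predictive draws
    (lower is better)."""
    draws = np.asarray(draws, dtype=float)
    term1 = np.mean(np.abs(draws - y))
    term2 = 0.5 * np.mean(np.abs(draws[:, None] - draws[None, :]))
    return term1 - term2

def interval_score(y, draws, alpha=0.05):
    """Empirical interval score based on the (alpha/2, 1-alpha/2)
    predictive quantiles."""
    l, u = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    return (u - l) + (2 / alpha) * ((l - y) * (y < l) + (y - u) * (y > u))
\end{verbatim}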
It is relevant to highlight, however, that these scores are scale sensitive, and the scale of the predictions might depend on the discipline $p$ and the age $t$, as well as on the specific missing data patterns of runner $n$ we are considering.
For this reason, we propose to evaluate the predictions of a reference model $j$ and the prediction of an alternative model $j^\prime$ by means of the following score
\begin{align*}
S_s^{j j^\prime}(\mathcal{Y}_{[1:N]}) = \frac{1}{\vert \mathcal{Y}_{[1:N]} \vert}\sum_{n=1}^{N} \sum_{y_{p[n],t} \in \mathcal{Y}_{[n]}}\mathbbm{1}\big(S_s(\mathbb{Q}^j(y_{p[n],t})) < S_s(\mathbb{Q}^{j^\prime}(y_{p[n],t}))\big),
\end{align*}
for $s \in\{1,2\}$.
In the equation, $\vert \mathcal{Y}_{[1:N]} \vert$ denotes the number of distinct observations present in the test set $\mathcal{Y}_{[1:N]}$.
The scores $S_s^{j j^\prime}(\mathcal{Y}_{[1:N]})$ range in $(0,1)$, and suggest that model $j$ is overall better than model $j^\prime$ if $S_s^{j j^\prime}(\mathcal{Y}_{[1:N]})>0.5$.
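The comparison score $S_s^{j j^\prime}$ then reduces to the fraction of test observations on which model $j$ attains the strictly smaller score; a minimal sketch, assuming the per-observation scores of the two models are stored in aligned arrays, is the following.
\begin{verbatim}
import numpy as np

def pairwise_score(scores_j, scores_jprime):
    """Fraction of test observations on which model j attains the strictly
    smaller score.

    scores_j, scores_jprime: per-observation scores (e.g. CRPS or interval
    score) of the two models, aligned over the same test observations.
    """
    return np.mean(np.asarray(scores_j) < np.asarray(scores_jprime))
\end{verbatim}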
\section{Real data analysis}
\label{sec:examples}
\subsection{Informativeness of missing data}
The models were estimated by randomly splitting the runners into a training set, composed of $70\%$ of the runners, and a test set with the remaining $30\%$.
Samples from the posterior distributions were obtained on the training set, using the last $2000$ of $10000$ iterations of the Gibbs sampler for each model.
The number of components $G$ was fixed at $50$.
From these posterior draws, samples of size $2000$ from the predictive distributions in Equation \eqref{predictive2} were obtained, conditional on knowing the missing data patterns of the runners in the test set (i.e. $\boldsymbol{D}_{[n]}$ and $\mathbf{d}_{[n]}^\star$, for $n = 1,\ldots, N$).
The comparison over different models' predictions was done both graphically and using the scores described in Section \ref{sec:competitors}.
Results are summarized in Figure \ref{fig:results1} and Table \ref{tab:crps}.
In the figures, the real data are represented by black solid lines, while the colored lines delimit $95\%$ quantile-based prediction bands.
A model is preferred if: (a) the real data lie within the prediction bands, and (b) the prediction bands are narrower.
Looking at the results, we can claim that model 1 (both attitude and history considered as informative) and model 3 (only attitude considered as informative) provide better results in terms of band widths, while still including the real data within the bands.
As we note, different missing data patterns are represented in the figures.
We note the tendency of model 1 and model 3 to have lower upper limits of the bands, an aspect that is even more evident for runners who participated in many competitions in different years (e.g. runners $100$, $63$, and $56$).
Knowing that a runner has competed for many years reduces the uncertainty in the predictions by reducing the probability of observing bad performances.
This highlights an effective association between the runners' ability to record better performances over long histories and their participation in many competitions over the years.
Further, by comparing runner $89$ with runner $56$, for example, it is possible to grasp how entering competitions later leads to more uncertainty in the performance predictions, also increasing the probability of observing worse performances.
While the effect of knowing the missing data patterns is clear when we compare the upper limits of the prediction bands, the movements of the lower limits appear to be limited and less pronounced.
This aspect is interesting because it points out that there are runners who, despite being characterized by short histories, are still able to perform satisfactorily when compared with those with longer histories.
These considerations are, of course, conditional on the graphs shown, which are a selection of the runners in the test set.
The complete set of plots is reported in the supplementary material, showing a large number of runners with different missing data patterns and, as a consequence, different behaviors of the bands.
\begin{figure}
\centering
\includegraphics[scale = 0.32]{./images/at_100_ra2_70_comparison.pdf}
\includegraphics[scale = 0.32]{./images/at_63_ra2_70_comparison.pdf}
\includegraphics[scale = 0.32]{./images/at_56_ra2_70_comparison.pdf}
\includegraphics[scale = 0.32]{./images/at_89_ra2_70_comparison.pdf}
\caption{Quantile-based $95\%$ prediction intervals for the observed distances obtained conditional on knowing the missing data patterns of runners included in the test set.
The red-dotted lines represent the intervals for the model that treats missing data as informative, while the blue-dashed lines represent the respective intervals for the model that does not treat them as such.
The black lines represent the observed performance of the runners.}
\label{fig:results1}
\end{figure}
\begin{table}
\caption{\label{tab:crps}Comparison between models given by $S_s^{j j^\prime}$ with the different scores.
In this table, model $j$ (row) is compared to model $j^\prime$ (column) with respect to the two scores.
Above the diagonal we report the scores for the CRPS; below the diagonal we report the scores for the IS, computed with $\alpha=0.05$.
A value above $0.5$ indicates a preference for model $j$ over model $j^\prime$. Recall that $S_s^{j j^\prime}=1-S_s^{j^\prime j}$, for any score considered.}
\centering
\fbox{\begin{tabular}{l|rrrr}
Model & Complete (1) & No missing (2) & Attitude (3) & History (4) \\ \hline
Complete (1) & -- & 0.574 & 0.537 & 0.520 \\
No missing (2) & 0.178 & -- & 0.402 & 0.395 \\
Attitude (3) & 0.364 & 0.757 & -- & 0.511 \\
History (4) & 0.233 & 0.680 & 0.306 & --
\end{tabular}}
\end{table}
The predictive scores computed on the test set suggest that the complete model is better than the others according to both scores.
The interval score, however, suggests this more markedly, highlighting the ability of the complete model to outperform the model that does not treat missing values as informative for around $80\%$ of the observations in the test set.
For both scores, the complete model is better than model $3$ (only attitude), which is better than model $4$ (only history), which is itself better than model $2$ (uninformative missing data patterns).
These results highlight how attitude and history (encompassed in the term missing data) seem to be effectively related to performances, supporting the hypothesis that considering these aspects in the analysis of runners' careers is definitely relevant and that, in this context, missing data have to be considered as informative.
Note that both estimates of the predictive scores are based on $741$ scalar observations characterized by different levels of dependence, so that a proper evaluation of the uncertainty of these estimates is difficult.
In the supplementary material, a three-fold cross-validation scheme shows that the results are stable.
\subsection{Application}
We illustrate how the complete model can be meaningful for sports scientists and coaches by answering specific questions, based on a sample of size $2000$ from the predictive distributions described in Section \ref{sec:posterior_inference}.
The first question is: how are late entry into competitions and early exit from competitions related to performances?
To answer this question, we consider the conditional predictive distribution in Equation \eqref{predictive2}, in which we vary the age at which the runner enters or exits the sample, letting the runner participate in both the $800$ and $1500$ meters disciplines for all the years of his career.
Comparing the resulting distributions of the performance allows us to see how the uncertainty related to the predicted performance changes according to the different histories considered.
Figure \ref{fig:results_analysis} shows the results of our procedure for the $1500$ and $800$ meters distances.
Solid central lines represent the median of the predictive distributions over different ages, while quantile-based $95\%$ prediction bands are represented with dashed lines, denoting the respective lower and upper limits.
Regarding drop-in, we note from the left panels that late entry into competition is associated with worse median performances over the years, with the lower limits of the predictive confidence bands worsening only for runners who enter competition at ages $26$ and $30$.
The upper limits, on the contrary, seem to rise with later drop-in.
Based on these results, we can say that, for the ideal runners we are considering, entering competitions later in a career can still permit reaching good levels, but it is much more likely that their performances will be worse with respect to runners with a longer career (earlier drop-in).
A similar reasoning applies for drop-out, in the right panels.
The median of the predictive distributions indeed seems to be worse for runners that drop out earlier in their lives, with the upper limits showing more variation than the lower ones.
In this case, it is not unlikely for runners to drop out early even though their performances seem good, but it is more unlikely that runners who compete for more years record worse results.
Similar results for different scenarios can be obtained with an analogous approach and are reported in the supplementary material.
\begin{figure}
\centering
{\includegraphics[scale=0.37]{./images/Drop_in1500.pdf}}
{\includegraphics[scale=0.37]{./images/Drop_out1500.pdf}}\\
\includegraphics[scale=0.37]{./images/Drop_in800.pdf}
\includegraphics[scale=0.37]{./images/Drop_out800.pdf}
\caption{The association between drop-in and drop-out with performances in the $1500$ meters discipline, for an ideal runner that competes regularly in $1500$ (top) and $800$ meters (bottom).
On the left, the runner drops in at ages $18$, $22$, $26$, $30$, respectively. On the right, the runner drops out at age $20$, $24$, $28$ or after $32$ years old. Central solid lines indicate the median of the predictive distributions. External lines indicate the $95\%$ confidence bands based on symmetric quantiles of the predictive distributions.}
\label{fig:results_analysis}
\end{figure}
\par
\indent
The second question is: how does competing in different disciplines impact the performances?
In this case, we consider runners with a completely observed career who compete, every year, in the distances with different strategies:
the first runner competes only in the $1500$ meters discipline; the second runner competes in $1500$m and $5000$m distances; the third one competes in $1500$m and $800$m distances; the fourth runner competes in all distances.
Results are shown in Figure \ref{fig:attitudes}, in terms of the respective predictive distributions in the $1500$ meters discipline. By comparing the results, we clearly see how, in the training sample, worse performances appear to be associated with the choice of competing in only one discipline over the years.
On the contrary, competing in more than one discipline seems to be associated with better performances, with less evident differences between the respective predictive distributions. More specifically, the greatest differences can be seen when comparing runners that compete in two distances with the one that competes in all three.
Indeed, both the limits of the bands and the median of the predictive distribution appear to be shifted up for the runner that competes in all distances, implying slightly worse performance for this kind of runner.
Based on our results, competing in all distances does not seem to be an optimal choice for achieving better results in the $1500$ meters discipline. On the contrary, runners that specialize in the $1500$ and $5000$ meters or in the $1500$ and $800$ meters distances seem to have better performances overall, especially the latter type of runners.
Further analyses with other distances are reported in the supplementary material.
\begin{figure}
\centering
{\includegraphics[scale=0.37]{./images/Attitude1500_0.pdf}}
{\includegraphics[scale=0.37]{./images/Attitude1500_1.pdf}}\\
{\includegraphics[scale=0.37]{./images/Attitude1500_2.pdf}}
{\includegraphics[scale=0.37]{./images/Attitude1500_3.pdf}}
\caption{Predictive distributions of performances in the $1500$ meters discipline, obtained for four different runners with different ways of participation in other distances during the years. }
\label{fig:attitudes}
\end{figure}
\section{Conclusions}
We have investigated whether prediction of the runners' performance is improved by an accurate assessment of the presence of missing data patterns.
Our analysis has provided strong evidence that for our data, missing data patterns are informative in predicting performances and they constitute a structural part of the signal explaining the observed variability of the runners' performances.
The statistical analysis took place via a matrix-variate state space model, in which the observed trends were clustered by employing a selection matrix involved in the measurement equation, and by storing the unknown cluster allocations of the runners. To include observed missing data patterns as informative on the clustering structure, two distinct processes were included in the model.
The first included the runner's history as potentially informative, by considering when a runner starts or stops competing. The second aimed to include as potentially informative the runner's attitude, by considering in which distances the runner mostly participates.
Our results, based on out-of-sample comparisons, suggest that it is important to consider both these processes when describing runners' performances, whereas considering only attitude is better than considering only history.
There is evidence of a deterioration in performance when a runner starts competing later or finishes earlier,
and of an improvement for runners that compete regularly in more distances compared to runners that compete in only one distance. Finally, competing in all three distances does not seem to be associated with better performances with respect to competing in two adjacent distances, such as $800$ and $1500$ meters or $1500$ and $5000$ meters.
Our key message is to illustrate the usefulness of considering missing data when describing runners' performances.
Our modelling framework can be immediately applied to data from alternative disciplines, such as sprinting, throwing, or multidisciplinary events such as the heptathlon and decathlon. It would also be interesting to investigate whether these findings differ in female runners or in countries with possibly different coaching methodologies.
\section*{Acknowledgment}This research was supported by funding from the University of Padova Research Grant 2019-2020, under grant agreement BIRD203991.
\section*{Supplementary material} Please write to the authors for supplementary material, including code, data, proofs and derivations.
\bibliographystyle{rss}
\section{Introduction}
\label{intro}
Shot put is a track and field event involving throwing (``putting") the shot, a metal ball (7.26kg/16lb for men, 4kg/8.8lb for women), with one hand as far as possible from a seven-foot diameter (2.135m) circle. In order for each put to be considered valid, the shot must not drop below the line of the athlete's shoulders and must land inside a designated 35-degree sector. Athletes commonly put four to six times per competition, and their best performance is recorded. Figure \ref{data} displays the results of elite shot put competitions for four athletes with careers of different lengths.
\begin{figure*}
\includegraphics[width = 1.0\textwidth]{./figures/Fig1}
\caption{Each panel displays the performance results (points) of a professional shot put athlete throughout his/her career at elite competitions. Performance is measured in meters (length of the throw) and plotted against the days elapsed since January 1st of each athlete's career starting year. The dark vertical lines represent seasons changing points, namely new year days.}
\label{data}
\end{figure*}
Furthermore, several athlete-specific variables (e.g., age, gender, doping history, nationality, etc.) that might be of interest from a sports analytics perspective are naturally gathered during elite competitions or can be easily retrieved retrospectively. Shot put competitions have been part of the modern Olympics since their revival, hence there exist complete and long-established data sets for shot put performances that can be exploited for performance analysis. \\
\indent Sportive competitions might display some sort of seasonality in the results. We underline that here the term ``seasonality" is not used to indicate a cyclical behaviour of the results over time as in the literature of time series but, rather, a time dependent gathering of observations. On one hand competitions are traditionally concentrated in some months of the year, and on the other hand weather and environmental conditions may affect the performances or even the practicability of the sport itself. Straightforward examples are sports leagues that run on a different schedule and host their season openers, playoffs and championships at different times of the year or, even more dramatically, winter sports as opposed to outdoor water sports. Shot put events range over the whole year, with indoor competitions held during Winter months and major tournaments, like the Olympics, Diamond League and World Championship organised during Summer. Therefore, it is reasonable to say that seasons coincides with calendar years. In Figure \ref{data}, vertical lines represent new years' days: the time point at which seasons change. To provide an accurate representation of the data, seasonality effects need to be taken into account. Note that, from a modelling perspective, the amplitude of seasons is arbitrary, and ideally any model should be easily adapted to any sport with any type of seasonality. \\
\indent There exists a well-established literature of both frequentist and Bayesian contributions to performance data analysis in various sports. We mention, among others, \cite{Tri} and \cite{Intro1} in triathlon, \cite{swim} and \cite{swim2} in swimming, and \cite{cricket} in cricket. \cite{batting} describe batting performance in baseball, \cite{deca} performance evolution in decathlon, \cite{golf} in golf, and \cite{Orani} in tennis. Further, \cite{aging} rely on a Bayesian latent variable model to investigate age-related changes across the complete lifespan of basketball athletes on the basis of exponential functions.\\
\indent In this work, we are interested in describing the evolution of performances of professional shot put athletes throughout their careers. To this end, we propose a Bayesian hierarchical model where results of each athlete are represented as error prone measurements of some underlying unknown function. The distinctive contribution of our approach is the form of such function, which has an additive structure with three components. First, we consider a smooth functional contribution for capturing the overall variability in athletes' performances. For this component, we follow the approach in \cite{M1}, who propose a Bayesian latent factor regression model for detecting the doping status of athletes given their shot put performance results and other covariates. However, the authors limit the analysis to data collected from 2012, whereas we aim at describing the trajectories in performance over the whole time span available for our data (1996 to 2016). For this time span, a global smoothness assumption for the trajectories could be too restrictive. Indeed, data may exhibit jumps localised at fixed and shared time points among all athletes. See, for example, athlete 303 in Figure \ref{data}, whose results show a jump between the second and fourth season (calendar year) of his/her career. The presence of jumps between seasons is even more striking when yearly average performances are considered. We highlight this behaviour by showing yearly averages via thick horizontal lines in Figure \ref{data}. Accordingly, the description of this yearly dependent contribution is delegated to a mixed effect model, that quantifies the seasonal mean for each athlete as a deviation from a grand mean. This component captures the inter-seasonal variability of the data set, whereas the smooth functional component describes the intra-seasonal evolution of performances. Finally, we complete our model specification accounting for the effect of a selection of time dependent covariates through a regressive component. We embed our model in a Bayesian framework by proposing suitable prior distributions for all parameters of interest. We believe the presented model represents a flexible tool to analyse evolution of performances in measurable sports, namely, all those disciplines for which results can be summarised by a unique measure (e.g., distance, time or weight). \\
\indent The rest of the paper is organised as follows. In Section 2, we describe the motivating case study. In Section 3, we present the proposed model and briefly discuss possible alternative settings. Moreover, we elicit priors for our Bayesian approach. In Section 4, we outline the algorithm for posterior computation. In Section 5, we discuss posterior estimates, argue on the performance of the model and interpret the model's parameters form a sports analytic perspective. Conclusions are presented in Section 6. Finally, the Appendix includes the complete description of the MCMC algorithm.
\section{The World Athletics shot put data set}
\label{dataset}
World Athletics (WA) is the world governing body for track and field athletic sports. It provides standardized rules, competition programs, regulated technical equipment, a list of official world records and verified measurements. The data at our disposal was obtained with permission from an open results database (\url{www.tilastopaja.eu}) following institutional ethical approval (Prop\_72\_2017\_18). The data set comprises 56,000 measurements of WA recognized elite shot put competitions for 1,115 athletes from 1976 to 2016. For each athlete, the data set reports the date of the event, the best result in meters, the finishing position, an indication of any doping violation during the athlete's career as well as demographic information (athlete's name, WA ID number, date of birth, sex and country of birth). \\
\indent In this work, we restrict our analysis to results for athletes performing after 1996. Indeed, we pursue consistency of measurement accuracy, and 1996 represents a turning point in anti-doping regulation and fraud detection procedures. The resulting data set is still sufficiently broad for our purposes. It contains 41,033 observations for 653 athletes (309 males and 344 females). The outcome of interest is the shot distance, which ranges from a minimum of $10.6$ up to a maximum of $22.56$ meters, with a mean of $17.30$ meters. \\
\indent As shown in Figure \ref{data} for a selection of athletes, data are collected over time. Hereafter we will denote as $t_{ij}$ the time at which the $j$-th observation for athlete $i$ is recorded. $t_{ij}$ corresponds to the time elapsed from January 1st of each athlete's career starting year to the date of the competition. Accordingly, equal time values for different athletes are likely to specify distinct years, but the same moment in those athletes' careers. Moreover, different athletes will have observations ranging over a large time span, according to the length of their careers. Having described seasons as calendar years, athletes will also compete in a different number of seasons. Figure \ref{seasons} shows the number of athletes per season as well as boxplots of the distribution of their mean performances across the various seasons. Athletes with the longest careers have been playing for 19 years. A general increasing trend in performance can be observed as a function of career length (right panel in Figure \ref{seasons}) or, equivalently, the age of the athlete. In the following, we will discuss two different modelling choices for age, respectively, accounting for its time dependence and considering age as a fixed quantity, namely the age of the athlete at the beginning of his/her career. \\
\indent Table 1 reports descriptive statistics suggesting how sex, environment and doping have an effect on the average value of the result. We point out that in our dataset we only have 18 athletes who tested positive for doping at some point in their career. Information on the date the test was taken (or if multiple tests were taken) is not available in the data. As expected, performances for men are, on average, higher than for women. Similar effects, despite being less evident in magnitude, also hold true for the variable environment, which takes values indoor and outdoor. Regarding environment, a further remark is in order. We already pointed out that major WA events take place outdoor (27,800 observations) during Summer months, whereas less competitive events are held indoors between November and March (13,200 observations). Figure \ref{Environment} displays results in gray when recorded outdoor, and in black otherwise. We can clearly see that outdoor events gather in Summer months, whereas indoor events take place during Winter (solid lines).
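Summaries such as those in Table \ref{result} can be reproduced with a few lines of code; the sketch below assumes the data are stored in a pandas data frame with one row per measurement and illustrative column names (\texttt{result}, \texttt{sex}, \texttt{doped}, \texttt{environment}), which are not necessarily those of the original database.
\begin{verbatim}
import pandas as pd

# df is assumed to hold one row per measurement, with illustrative column names:
# "result" (metres), "sex", "doped" and "environment".
def describe_by(df, column):
    return (df.groupby(column)["result"]
              .agg(Mean="mean", Sd="std", Max="max", Min="min")
              .round(2))

# e.g. describe_by(df, "sex"); describe_by(df, "environment"); describe_by(df, "doped")
\end{verbatim}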
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/Fig2}
\caption{Left: total number of athletes per season. Right: each boxplot shows the distribution of the athletes' mean performances within each season.}
\label{seasons}
\end{figure*}
\begin{table}
\centering
\caption{Performance results conditioned on covariates.}
\label{result}
\begin{tabular}{lllll}
\toprule
& Mean & Sd & Max & Min \\
\midrule
Total & 17.30 & 1.78 & 22.56 & 10.6 \\
Women & 16.09 & 1.35 & 21.70 & 10.6 \\
Men & 18.55 & 1.21 & 22.56 & 12.93 \\
Not Doped & 17.30 & 1.79 & 22.56 & 10.6 \\
Doped & 17.77 & 1.35 & 20.88 & 13.55 \\
Indoor & 17.17 & 1.70 & 22.23 & 12.15 \\
Outdoor & 17.38 & 1.81 & 22.56 & 10.6 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/Fig3}
\caption{Performance results displayed according to the variable environment, which takes values indoor (black) and outdoor (gray). Vertical lines corresponding to the label ticks represent season changes, whereas the two enclosing it indicate the boundaries of winter months.}
\label{Environment}
\end{figure}
\section{The model}
\label{baymodel}
Let $n$ denote the total number of athletes in the study. We assume that shot put performances for athlete $i$ are given by noisy measurements of an underlying function $g_i(t_{ij})$:
\begin{equation}
\label{full}
y_{ij}=g_i(t_{ij}) + \epsilon_{ij}
\end{equation}
with $\epsilon_{ij} \stackrel{iid}{\sim} N(0, \psi^2)$ independent errors. Recall $t_{ij}$ is the time at which the $j$-th observation for athlete $i$ is collected, for $j = 1, \ldots, n_i$, where $n_i$ is the total number of measurements available on athlete $i$.\\
We further suggest an explicit functional form for $g_i(t_{ij})$:
\begin{equation}
g_i(t_{ij}) = f_i(t_{ij}) + \mu_{is} + \boldsymbol{x}_{i}(t_{ij}) \boldsymbol{\beta}
\label{model}
\end{equation}
where $f_i(t)$ is a smooth functional component for intra-seasonal variability, $\mu_{is}$ a season-specific intercept, and $\boldsymbol{x}_{i}(t) \boldsymbol{\beta}$ is an additional multiple regression component. Here $s \in \{ 1,2, \ldots, S_i \}$ indicates the season in which the shot was recorded. Specifically, $\mu_{is} \equiv \mu_i(t_{ij}) = \sum_{s=1}^{S_i} \mu_{is} \ \mathbb{I}_{(t_i^s, t_i^{s+1})}(t_{ij})$ is an athlete-specific step function taking value $\mu_{is}$ for all time points in season $s$, delimited by $t_i^s$ and $t_i^{s+1}$. For a complete treatment of the notation used so far, please refer to Table \ref{notation}. \\
\indent We will now discuss each of the three terms in Eq.~\eqref{model} in more detail.
\begin{table}
\centering
\caption{Mathematical notation}
\label{notation}
\begin{tabular}{ll}
\toprule
Symbol & Meaning \\
\midrule
\textit{i} & Index identifying the athlete \\
\textit{j} & Index identifying a specific observation \\
\textit{n} & Total number of athletes \\
\textit{N} & Total number of observations \\
$t_{ij}$ & Time point at which the \textit{j}'th observation of athlete \textit{i} is recorded\\
$S_i$ & Total number of seasons for athlete $i$\\
\textit{s} & The currently considered season\\
$g_{is}$ & Number of observations in season $s$ for athlete $i$ \\
\textit{r} & The number of covariates to be considered\\
$y_{ij}$ & Response variable at time $j$ for athlete $i$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{The functional component}
\label{funpart}
The functional component $f_i(t)$ is meant to capture the subject-specific global evolution of the response variable. It explains the global dependence of the data from time. We require that these functions display a smooth behaviour: the latter is assured by assuming $\{ f_i (t)\}_{i=1}^n $ are linear combinations of smooth basis functions, $\{b_m (t)\}_{m=1}^p$. Note that, both the nature and the number $p$ of these bases are to be determined according to some properties we wish them to satisfy. In particular, we assume:
\begin{equation}
f_i(t) = \sum_{{m}=1}^p \theta_{im} b_m(t)
\label{fmodel}
\end{equation}
where $\{b_m(t) \}_{m=1}^p$ represent the B-spline basis \citep{boor} and $\{ \theta_{im} \}_{m=1}^p$ are subject-specific coefficients.\\
\indent We briefly recall that the B-spline basis of degree $k$ on $[L, U]$ is a collection of $p$ polynomials defined recursively on a sequence of points, known as knots, and indicated with $L \equiv t_1 \leq \ldots \leq t_{p+k+1} \equiv U$.
We follow the common approach of choosing $k = 3$, leading to cubic splines \citep[see, for instance,][]{cubic}. Moreover, we assume the knot sequence to be equispaced and $(k+1)$-open. That is, the first and last $k+1$ knots are identified with the extremes of the definition interval, whereas the remaining $p-k-1$ knots divide said interval into sets of the same length. Under these assumptions, each basis function $b_j(t)$ has compact support over $k+1$ knots, precisely $[t_j, t_{j+k+1}]$. Moreover, together they span the space of piecewise polynomial functions of degree $k$ on $[L, U]$ with breakpoints $\{ t_n \}_{n=1}^{p+k+1}$. Finally, such functions are twice continuously differentiable at the breakpoints, de facto eliminating any visible type of discontinuity and providing a smooth result.\\
\indent The number of basis functions is chosen to be large enough for sufficiently many basis functions to have a completely enclosed support in a given season. In particular, rescaling time to $[0,1]$ (for computational purposes and easier prior definition), seasons have amplitude $0.054$ and, if we want at least one local basis to be completely supported in one of them, we need $4$ knots to fall in it. Accordingly, we require about 75 internal knots and 80 degrees of freedom. \\
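As an illustration of this construction, the following Python sketch builds the cubic B-spline design matrix on an equispaced, $(k+1)$-open knot sequence over $[0,1]$ with $p=80$ degrees of freedom; it is a minimal sketch and not the code used for the analysis.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, p=80, k=3):
    """Evaluate a p-dimensional degree-k B-spline basis, built on an equispaced,
    (k+1)-open knot sequence over [0, 1], at the points x."""
    n_inner = p - k - 1                                  # internal knots (76 if p=80, k=3)
    inner = np.linspace(0.0, 1.0, n_inner + 2)[1:-1]
    knots = np.concatenate(([0.0] * (k + 1), inner, [1.0] * (k + 1)))
    basis = [BSpline(knots, np.eye(p)[m], k, extrapolate=False) for m in range(p)]
    B = np.column_stack([b(x) for b in basis])
    return np.nan_to_num(B)                              # zero outside [0, 1]

grid = np.linspace(0.0, 1.0, 500)
B = bspline_basis(grid)                                  # 500 x 80 design matrix
\end{verbatim}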
\indent For the sake of tractability, a low dimensional representation of the individual curves is of interest. Following the approach by \cite{M1}, we exploit a sparse latent factor model on the basis coefficients:
\begin{equation}
\theta_{im} = \sum_{l=1}^k \lambda_{ml} \eta_{il} + \xi_{im}
\label{factors}
\end{equation}
where $\lambda_{ml}$ are the entries of a $(p \times k)$ factor loading matrix $\boldsymbol{\Lambda}$, and $\boldsymbol{\eta_{i}}$ is a vector of $k$ latent factors for subject $i$. Finally, $\boldsymbol{\xi_i} = ( \xi_{i1}, \ldots , \xi_{ip} )$ is a residual vector, independent of all other variables in the model. We assume:
\begin{equation}
\boldsymbol{\eta_i} \stackrel{iid}{\sim} N_k(\boldsymbol{0}, \boldsymbol{I})
\end{equation}
and the error terms $\boldsymbol{\xi_i}$ are assumed to have normal distribution with diagonal covariance matrix, $\boldsymbol{\xi_i} \stackrel{iid}{\sim} N_p(\boldsymbol{0}, diag(\sigma_1^{-2}, \ldots, \sigma_p^{-2}))$, with $\sigma_j^{-2} \stackrel{iid}{\sim} Ga(a_{\sigma}, b_{\sigma})$.\\
\indent For the modelling of the factor loading matrix $\boldsymbol{\Lambda}$, we follow the approach in \cite{shrink}. Specifically, we adopt a multiplicative gamma process shrinkage prior which favours an unknown but small number of factors $k$. The prior is specified as follows:
\begin{align}
\label{functional}
& \lambda_{ml} | \phi_{ml}^{-1}, \tau_l^{-1} \stackrel{iid}{\sim} N\left(0, \phi_{ml}^{-1} \tau_l^{-1}\right) \qquad \text{with} \\
& \phi_{ml} \sim Ga \biggl( \frac{\nu_{\phi}}{2},\frac{\nu_{\phi}}{2} \biggr) \qquad \tau_l=\prod_{v=1}^{l} \delta_v \nonumber \\
& \delta_1 \sim Ga(a_1,1) \qquad
\delta_v \sim Ga(a_v,1), \quad v \geq 2 \nonumber
\end{align}
Here $\tau_l$ is a global stochastically increasing shrinkage parameter for the $l$-th column which favors more shrinkage as the column index increases. Similarly, $\phi_{ml}$ are local shrinkage parameters for the elements in the $l$-th column.
\cite{shrink} describe a procedure to select the number of factors $k \ll p$ adaptively. We follow their lead, thus $k$ is not set a priori but automatically tuned as the Gibbs sampler progresses. Refer to \cite{shrink} for details. Note that, by combining Eq. \eqref{fmodel} and Eq. \eqref{factors} one gets:
\begin{equation*}
f_i(t) = \sum_{l=1}^k \eta_{il} \Phi_{l}(t) + r_i(t)
\end{equation*}
where $\Phi_{l}(t) = \sum_{m=1}^p \lambda_{ml} b_m(t)$ is a new, unknown non-local basis function learnt from the data and $r_i(t)$ a functional error term. Although the number of pre-specified basis functions $p$ is potentially large, the number of ``operative'' bases $k$ is $k \ll p$ and learnt from the data. \\
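In matrix form, the induced basis functions $\Phi_l(t)$ and the smooth component $f_i(t)$ are obtained directly from the spline design matrix and the loadings, as in the following minimal sketch (which reuses a design matrix such as the one built in the previous sketch).
\begin{verbatim}
import numpy as np

# B: (T x p) spline design matrix on a time grid; Lambda: (p x k) draw of the
# loadings; eta_i: k-vector of latent factors for athlete i.
def induced_basis(B, Lambda):
    return B @ Lambda                        # column l holds Phi_l(t) on the grid

def smooth_component(B, Lambda, eta_i):
    return induced_basis(B, Lambda) @ eta_i  # f_i(t) up to the residual term r_i(t)
\end{verbatim}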
\subsection{The seasonal component}
\label{seaspart}
Early graphical displays and straightforward exploratory analysis suggest a significant variability of the average response across seasons as displayed in Figure \ref{data}. Namely, performances prove to be gathered over predetermined time intervals, the seasons (calendar years). However, it is reasonable to expect some degree of dependence for the average performance across seasons. To model such dependence, an autoregressive model for seasonal intercepts can be proposed. The idea behind this choice is to allow for borrowing of information across seasons, in the sense that the seasonal intercept $\mu_{is}$ at season $s$ is influenced by the intercept at season $s-1$ through the autoregressive coefficient $\rho_i$. Namely,
\begin{equation}
\mu_{is} \ | \ \rho_i, \sigma_{\mu}^2 \stackrel{iid}{\sim} N\left(\rho_i \mu_{i(s-1)}, \sigma_{\mu}^2\right)
\label{autoregress}
\end{equation}
However, when we first implemented this model, we noted how residuals presented a pattern that we would like to capture with a finer model. Therefore, we consider a random intercept model with Normal Generalized Autoregressive Conditional Heteroskedastic (GARCH) errors \citep{GARCH}. Specifically,
\begin{gather}
\mu_{is} = m + \zeta_{is}, \qquad \mu_{is} \ | \ m, h_{is} \sim N(m, h_{is})\\
h_{is} = \alpha_0 + \alpha_1 \zeta_{i,s-1}^2 + \varpi h_{i,s-1} \label{garch}
\end{gather}
where $\alpha_0 > 0, \alpha_1 \geq 0$ and $\varpi \geq 0$ to ensure a positive conditional variance and $\zeta_{is} = \mu_{is} - m$ with $ h_{i0} = \zeta_{i0} := 0 $ for convenience. The additional assumption of wide-sense stationarity with
\begin{gather*}
\mathbb{E}(\zeta_t)=0\\
\mathbb{V}ar(\zeta _{t}) = \alpha_{0} (1- \alpha_{1} - \varpi)^{-1} \\
\mathbb{C}ov(\zeta _{t}, \zeta _{s}) = 0 \text{ for } t \neq s
\end{gather*}
is guaranteed by requiring $\alpha_{1} + \varpi < 1$, as proven by \cite{GARCH}.\\
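For intuition on this specification, the recursion in Equation \eqref{garch} can be simulated forward in a few lines; in the following sketch the parameter values are purely illustrative and chosen so that $\alpha_1 + \varpi < 1$.
\begin{verbatim}
import numpy as np

def simulate_seasonal_intercepts(S, m, alpha0, alpha1, varpi, rng=None):
    """Simulate mu_{i1}, ..., mu_{iS} from the GARCH(1,1) random-intercept model,
    with h_{i0} = zeta_{i0} = 0 as in the text."""
    rng = rng or np.random.default_rng()
    mu = np.empty(S)
    h, zeta = 0.0, 0.0
    for s in range(S):
        h = alpha0 + alpha1 * zeta**2 + varpi * h   # conditional variance h_{is}
        zeta = rng.normal(0.0, np.sqrt(h))          # zeta_{is} = mu_{is} - m
        mu[s] = m + zeta
    return mu

# illustrative values; wide-sense stationarity requires alpha1 + varpi < 1
mu = simulate_seasonal_intercepts(S=10, m=0.0, alpha0=0.05, alpha1=0.2, varpi=0.6)
\end{verbatim}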
\indent Three parameters of the seasonal component require prior specification: the overall mean $m$ and the conditional variance parameters, $\varpi$ and $\boldsymbol{\alpha}=(\alpha_0, \alpha_1)^\top$. For the autoregressive and heteroskedastic parameters of the GARCH model, we propose non-informative priors satisfying the positivity constraint. For the overall mean parameter, we rely on a more informative Normal prior centered around the mean suggested by posterior analysis of preliminary versions of the model. In particular:
\begin{gather}
\nonumber m \sim N(\mu_{m_0}, \Sigma_{m_0}) \\
\boldsymbol{\alpha} \sim N_2(\mu_{\alpha}, \Sigma_{\alpha}) \ \mathbb{I} \{ \boldsymbol{\alpha} > 0 \} \label{GARCHprior} \\
\nonumber \varpi \sim N(\mu_{\varpi}, \Sigma_{\varpi}) \ \mathbb{I} \{ \varpi \geq 0 \}
\end{gather}
where $\boldsymbol{\alpha} = (\alpha_0,\alpha_1)$ is a bidimensional vector. We complete the model specification by assuming that the parameters are statistically independent and by noticing that the hypotheses needed for wide-sense stationarity do not translate into actual prior conditions on the parameters. Hence, one of the objectives of our analysis becomes to test whether the constraint $\alpha_{1} + \varpi < 1$ holds true.
\subsection{Covariates}
\label{covpart}
We consider the effect of three covariates, gender, age and environment, and assume conjugate prior choices for the covariates coefficients:
\begin{gather}
\nonumber \boldsymbol{\beta} \stackrel{iid}{\sim} N(\boldsymbol{\boldsymbol{\beta}_0}, \sigma_{\beta}^{2} \boldsymbol{\mathbb{I}}) \\
\sigma_{\beta}^{-2} \sim Ga \biggl( \frac{\nu_{\beta}}{2},\frac{\nu_{\beta} \sigma_{\beta}^2}{2} \biggr)
\label{reg}
\end{gather}
\section{The Bayesian update}
\label{Update}
Because of the additive nature of the overall sampling model \eqref{full} - \eqref{model}, we are able to exploit a blocked Gibbs sampler grouping together the parameters of the three modelling components described in Section \ref{baymodel}.
Note first that, because of the high dimensionality of the problem, it is computationally convenient to choose conditionally conjugate prior distributions for the parameters. Indeed, conjugacy guarantees analytical tractability of posterior distributions. In some cases, specifically for the conditional variances of GARCH errors, no conjugate model exists and updates rely on an adaptive version of the Metropolis Hastings algorithm for posterior sampling. \\
\indent Algorithm \ref{Gibbs} outlines our sampling scheme, while details are presented in Appendix~\ref{appendix}. As far as the parameters of the functional component $\boldsymbol{\theta}_i$ are concerned, we follow \cite{M1} by choosing conditionally conjugate prior distributions so that the update proceeds via simple Gibbs sampling steps. Analogously, the update of the regression coefficients $\boldsymbol{\beta}$ and the error term $\psi$ proceeds straightforwardly by sampling from their full conditional posterior distributions. Conjugate priors for the GARCH parameters $m, \varpi$ and $\boldsymbol{\alpha}$ are not available, therefore we resort to adaptive Metropolis schemes to draw values from their full conditionals. Specifically, we build an adaptive scale Metropolis such that the covariance matrix of the proposal density adapts at each iteration to achieve an \emph{optimal} acceptance rate \citep[see][]{HAARIO}. \\
\indent Further details about the algorithm can be found in the Appendix \ref{appendix}, whereas code is available at \url{https://github.com/PatricDolmeta/Bayesian-GARCH-Modeling-of-Functional-Sports-Data.}
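To fix ideas, one possible form of such an adaptive move is sketched below, where a random-walk proposal scale is tuned towards a target acceptance rate; this is a simplified stand-in, with a placeholder log-posterior, and not the exact scheme of our implementation.
\begin{verbatim}
import numpy as np

def adaptive_metropolis(logpost, x0, n_iter=5000, target_acc=0.234, rng=None):
    """Random-walk Metropolis whose proposal scale is tuned towards a target
    acceptance rate (a simplified stand-in for the adaptive step in the text)."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    lp = logpost(x)
    log_s = 0.0                                        # log proposal scale
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        prop = x + np.exp(log_s) * rng.standard_normal(x.size)
        lp_prop = logpost(prop)                        # may be -inf outside constraints
        acc = np.exp(min(0.0, lp_prop - lp))           # acceptance probability
        if rng.uniform() < acc:
            x, lp = prop, lp_prop
        log_s += (acc - target_acc) / np.sqrt(t + 1)   # Robbins-Monro tuning of the scale
        chain[t] = x
    return chain
\end{verbatim}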
\IncMargin{5mm}
\begin{algorithm}[t]
\label{Gibbs}
\vspace{2mm}
\hspace{-5mm} \KwData{$y_{ij} = (y_{11},\ldots,y_{nn_n})$}
\hspace{-5mm} Set the required MCMC sample size $G$, the burn-in period $g_0$ and the thinning parameter $g_s$. \\[0mm]
\SetKwBlock{Begin}{Initialise}{end}
\Begin{ $\boldsymbol{\theta}_i^{(0)}, \boldsymbol{\mu}_i^{(0)}, m^{(0)},\varpi^{(0)},\boldsymbol{\alpha}^{(0)}, \boldsymbol{\beta}_i^{(0)}, \psi_i^{(0)} $\\
}
\SetKwBlock{Begin}{For $g=0,\ldots,G$}{end}
\Begin{
\SetKwBlock{Begin}{Update functional component}{end}
\Begin{
Set partial residuals $y_{ij}^{(1)(g)} = y_{ij} - \mu_{i,s}^{(g)} - \boldsymbol{x_i}(t_{ij}) \boldsymbol{\beta}^{(g)}$ \\
Update $\boldsymbol{\theta}_i^{(g+1)}$ as described in Appendix A.1
}
\SetKwBlock{Begin}{Update seasonal component}{end}
\Begin{
Set partial residuals $y_{ij}^{(2)(g)} = y_{ij} - f_i(t_{ij})^{(g)} - \boldsymbol{x_i}(t_{ij}) \boldsymbol{\beta}^{(g)}$ \\
Update $\boldsymbol{\mu}_i^{(g+1)}, m^{(g+1)},\varpi^{(g+1)},\boldsymbol{\alpha}^{(g+1)} $ as described in Appendix A.2
}
\SetKwBlock{Begin}{Update regressive component}{end}
\Begin{
Set partial residuals $y_{ij}^{(3)(g)} = y_{ij} - f_i(t_{ij})^{(g)} - \mu_{i,s}^{(g)} $ \\
Update $\boldsymbol{\beta}_i^{(g+1)}$ as described in Appendix A.3
}
\SetKwBlock{Begin}{Update error term}{end}
\Begin{
Set partial residuals $\epsilon_{ij}^{(g)} = y_{ij} - f_i(t_{ij})^{(g)} - \mu_{i,s}^{(g)} - \boldsymbol{x_i}(t_{ij}) \boldsymbol{\beta}^{(g)}$ \\
Update $\psi^{(g+1)}$ as described in Appendix A.4
}
\textbf{Return} $\boldsymbol{\theta}_i^{(g)}, \boldsymbol{\mu}_i^{(g)}, m^{(g)},\varpi^{(g)},\boldsymbol{\alpha}^{(g)}, \boldsymbol{\beta}_i^{(g)}, \psi_i^{(g)} $ for $g = g_0, g_0+g_s, g_0 + 2g_s, \ldots,G$
}
\caption{Gibbs Sampler}
\end{algorithm}
\section{Posterior analysis}
\label{estimation}
The idea of estimating trajectories for athletes' performances is a natural pursuit for the model specification we adopted. Indeed, describing observations as error prone measurements of an unknown underlying function suggests evaluating such function, once retrieved, on any number of points of interest. In practice, we will generate a fine grid of $T$ equispaced time points: $\{ t_k \}_{k=1}^T$ between $0 \equiv t_1$ and $1 \equiv t_T$ and evaluate the function on this grid. \\
\indent In particular, we start by evaluating the athlete-specific functional component by exploiting the basis function representation. Being:
\[ \boldsymbol{\Theta}_i = \begin{bmatrix}
\theta_{i1}^{(1)} &\theta_{i2}^{(1)} & \ldots & \theta_{ip}^{(1)} \\
\theta_{i1}^{(2)} &\theta_{i2}^{(2)} & \ldots & \theta_{ip}^{(2)} \\
\vdots & \vdots & \ddots & \vdots \\
\theta_{i1}^{(G)} &\theta_{i2}^{(G)} & \ldots & \theta_{ip}^{(G)}
\end{bmatrix}
\]
the matrix of individual-specific spline basis coefficients for all iterations $g = 1, \ldots G$ and \[ \boldsymbol{b}^\top = \begin{bmatrix}
b_{1} (t_1) &b_{1} (t_2) & \ldots & b_{1} (t_k) & \ldots & b_{1}(t_T) \\
b_{2} (t_1) &b_{2} (t_2) & \ldots & b_{2} (t_k) & \ldots & b_{2}(t_T) \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
b_{p} (t_1) &b_{p} (t_2) & \ldots & b_{p} (t_k) & \ldots & b_{p}(t_T)
\end{bmatrix}
\]
the values of the $p$-dimensional, degree-$3$ spline basis evaluated at the $T$ equispaced grid points in the unit interval, the estimated contribution of the functional component to the overall trajectory is, at each iteration:
\[ {f}_i^{(g)}(t) = \sum_{m=1}^p {\theta}_{im}^{(g)} b_m(t) = {\Theta}_i^{(g)} \boldsymbol{b}_{t}^\top \quad \text{ for } t = {t_1, \ldots, t_T},
\]
where ${\Theta}_i^{(g)}$ corresponds to the $i$-th row of matrix $\boldsymbol{\Theta}_i$. \\
\indent As for the seasonal linear mixed effect, we modelled it as a piecewise continuous function taking individual- and season-specific values. Hence, when retrieving its estimated effect on any point in the time grid, we need to determine which season it belongs to. As discussed in Section 3, time is rescaled so that equal values across individuals indicate the same day of the year, possibly in different years. Therefore, season changes, which occur on new year's days, can easily be computed by straightforward proportions. At this point, the season to which $t_k$ belongs is obtained by comparison with the season thresholds. In the following equation, the indicator variable $ \chi_{(t \in s)}$ determines the season to which each time point belongs. Accordingly, the estimated contribution of the seasonal component to the overall trajectory is, at each iteration:
\[ {\mu}_i^{(g)}(t) = \sum_{s=1}^{S_i} {\mu}_{is}^{(g)} \chi_{(t \in s)} \quad \text{ for } t = {t_1, \ldots, t_T}.
\]
\indent Lastly, the regressive component has to be taken into account. The estimated contribution of the regressive component to the overall trajectory is, at each iteration:
\[ \sum_{l=1}^{r} x_{il} (t) {\beta}_{l}^{(g)} = \boldsymbol{x}_{i}(t) {\boldsymbol{\beta}}^{(g)} \quad \text{ for } t = {t_1, \ldots, t_T}.
\]
\indent Given the three components, the overall estimate of the underlying function is obtained by adding them together. In particular, the estimated mean trajectory can be written as:
\begin{equation}
\widehat{y_i(t)} = \frac{1}{G}\sum_{g=1}^G {f}_i^{(g)}(t) + {\mu}_i^{(g)}(t) + \boldsymbol{x}_{i}(t) {\boldsymbol{\beta}}^{(g)} \quad \text{ for } t = {t_1, \ldots, t_T} \label{compll}
\end{equation}
\indent Similarly, $95 \%$ credible intervals can be computed to quantify uncertainty around our point estimate.
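Operationally, Equation \eqref{compll} and the associated bands amount to averaging and taking pointwise quantiles of the per-iteration trajectories, as in the following minimal sketch, which assumes that the three components have already been evaluated on the grid and stored as $(G \times T)$ arrays.
\begin{verbatim}
import numpy as np

def trajectory_summary(f_draws, mu_draws, reg_draws, level=0.95):
    """Pointwise posterior mean and credible band of g_i(t) on the time grid.

    f_draws, mu_draws, reg_draws: (G x T) arrays with the functional, seasonal
    and regressive components evaluated on the grid at each retained iteration.
    """
    g_draws = f_draws + mu_draws + reg_draws
    lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
    return (g_draws.mean(axis=0),
            np.quantile(g_draws, lo, axis=0),
            np.quantile(g_draws, hi, axis=0))
\end{verbatim}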
\subsection{Model application}
In this Section, we fit different specifications of our model to the data described in Section \ref{dataset}.\\
\indent In general, we consider the additive structure of the sampling model illustrated in Equation \ref{model}. Table \ref{modelsummary} reports an overview of the six models we compare. In Model $M_1$, the B-spline basis functions have 80 degrees of freedom, the seasonal component has GARCH errors and three regressors are taken into account: \textit{sex, age and environment}. $M_2$ represents a slight modification of $M_1$ given by the fixed-age implementation. Here we consider the covariate \textit{age} not as a time dependent variable, but as a fixed value given by the age at the beginning of each athlete's career. In model $M_3$ a simpler dependence structure among the seasonal effects is used. Namely, we assume an autoregressive model for $\mu_{is}$ (see Eq. \ref{autoregress}). For model $M_4$, we simply consider a larger number of basis functions, i.e. 120, accounting for up to three splines having support in a season and hence meant to better capture the intra-seasonal variability. Finally, models $M_5$ and $M_6$ allow for \textit{doping} as an additional covariate, both in the case of the time-dependent and time-independent specification of \textit{age}.\\
\indent Priors were chosen as discussed in Section \ref{baymodel}, with hyperparameter choices summarized in Table \ref{hyper}. To motivate the choice of the informative prior for the overall mean parameter $m$, in Table \ref{comparison} we also report the results under a slight modification of model $M_1$, which we denote $M_1^{(2)}$ and which employs a vague prior for $m$.
\begin{table}
\centering
\caption{Models name and description.}
\label{modelsummary}
\begin{tabular}{ll}
\toprule
Symbol & Meaning \\
\midrule
$M_1$ & 80 df B-splines, GARCH, covariates: sex, age (time dependent), env. \\
$M_2$ & 80 df B-splines, GARCH, covariates: sex, age (time constant), env. \\
$M_3$ & 80 df B-splines, AR, covariates: sex, age (t. dep.), env. \\
$M_4$ & 120 df B-splines, GARCH, covariates: sex, age (t. dep.), env.\\
$M_5$ & 80 df B-splines, GARCH, covariates: sex, age (t. dep.), env., doping\\
$M_6$ & 80 df B-splines, GARCH, covariates: sex, age (t. const.), env., doping \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Hyperparameter choices. In the first column, we refer to the Equation where the hyperparameter first appears.}
\label{hyper}
\begin{tabular}{llll}
\toprule
Ref. & Hyp. & Value & Description \\
\midrule
(\ref{factors}) & $a_{\sigma}$ & 1.0 & $1^{st}$ Gamma coeff. of error term in the factor exp. \\
(\ref{factors}) & $b_{\sigma}$ & 0.3 & $2^{nd}$ Gamma coeff. of error term in the factor exp. \\
(\ref{functional}) & $\nu_{\phi}$ & 9 & Gamma coeff.s of local shrink. param. $\phi_{ml}$ \\
(\ref{functional}) & $a_{1}$ & 2.1 & $1^{st}$ Gamma coeff. of the $1^{st}$ global shrink. factor $\delta_1$ \\
(\ref{functional}) & $b_{1}$ & 1.0 & $2^{nd}$ Gamma coeff. of the $1^{st}$ global shrink. factor $\delta_1$ \\
(\ref{functional}) & $a_{l}$ & 2.1 & $1^{st}$ Gamma coeff. of the $l$-th global shrink. factor $\delta_l$ \\
(\ref{functional}) & $b_{l}$ & 1.0 & $2^{nd}$ Gamma coeff. of the $l$-th global shrink. factor $\delta_l$ \\
(\ref{GARCHprior}) & $\mu_{m_0}$ & -0.2 & Mean of the overall mean $m$ \\
(\ref{GARCHprior}) & $\Sigma_{m_0}$ & 0.0001 & Variance of the overall mean $m$ \\
(\ref{GARCHprior}) & $ \mu_{\alpha}$ & (0.0, 0.0) & Mean vector of the $\boldsymbol{\alpha}$ GARCH coeff. \\
(\ref{GARCHprior}) & $ \Sigma_{\alpha} $ & $\mathbb{I}_2$ & Covariance matrix of the $\boldsymbol{\alpha}$ GARCH coeff. \\
(\ref{GARCHprior}) & $ \mu_{\varpi}$ & 0.0 & Mean of the $\varpi$ GARCH coeff. \\
(\ref{GARCHprior}) & $ \Sigma_{\varpi} $ & 1 & Variance of the $\varpi$ GARCH coeff. \\
(\ref{reg}) & $\nu_{\beta}$ & 0.5 & $1^{st}$ Gamma coeff.s regression param. \\
(\ref{reg}) & $\sigma_{\beta}$ & 0.5 & $2^{nd}$ Gamma coeff.s of regression param. \\
(\ref{full}) & $\mu_{\psi}$ & 1.0 & Mean of the error variance $\psi$ \\
(\ref{full}) & $\sigma_{\psi}$ & 1.0 & Variance of the error variance $\psi$ \\
\bottomrule
\end{tabular}
\end{table}
For all experiments, inference is obtained via posterior samples drawn by the Gibbs sampler introduced in Section \ref{Update}. In particular, we ran $20,000$ iterations with a burn-in period of $60 \%$ and a thinning of $5$. Performances are compared by means of the logarithm of the pseudo marginal likelihood (LPML) index \citep{GEI}. This estimator for the log marginal likelihood is based on conditional predictive densities and provides an overall comparison of model fit, with higher values denoting better performing models.
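For completeness, the LPML can be obtained from the MCMC output through the conditional predictive ordinates (CPO), each being the harmonic mean of the per-observation likelihoods across the retained draws; a minimal sketch, assuming a $(G \times N)$ matrix of per-observation log-likelihoods, is the following.
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def lpml(loglik):
    """LPML from a (G x N) matrix of per-observation log-likelihoods over draws.

    CPO_i is the harmonic mean of p(y_i | theta^(g)) over the draws, so
    log CPO_i = log G - logsumexp_g( -loglik[g, i] ), and LPML = sum_i log CPO_i.
    """
    G = loglik.shape[0]
    return float(np.sum(np.log(G) - logsumexp(-loglik, axis=0)))
\end{verbatim}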
\begin{table}
\centering
\caption{Model and hyperparameter comparison for the models in Table \ref{modelsummary}.}
\begin{tabular}{ll}
\toprule
Model & LPML \\ [0.5ex]
\midrule
$M_1$ & $\boldsymbol{-45943}$ \\
$M_1^{(2)}$ & -46573 \\
$M_2$ & $\boldsymbol{-45472}$ \\
$M_3$ & -46544 \\
$M_4$ & -46314 \\
$M_5$ & -48565 \\
$M_6$ & -48122 \\
\bottomrule
\end{tabular}
\label{comparison}
\end{table}
Performances of the different models are fairly similar; indeed, the model specifications do not differ substantially. Despite its slightly lower LPML than the best performing model, $M_2$, we prefer to focus on model $M_1$, with 80 degrees of freedom splines, GARCH errors and three regressors with the time-dependent \textit{age} definition, because the regression parameters prove to be significant in this setting. As far as the estimation of trajectories describing the evolution of athletes' performances is concerned, we use the method discussed in Section \ref{estimation}. Figure \ref{Athl} displays the estimate (with $95\%$ credible bounds) for a random selection of athletes (black) together with one-season-ahead performance prediction (grey). The results suggest a good model fit, but some comments are in order. First, we acknowledge that the seasonal random intercept captures the majority of the variability in the data. Second, the functional component, which is meant to capture the overall variability in the data set, reduces to capturing the intra-seasonal variability. Interestingly, the number of non-local bases selected by the adaptive procedure in \cite{shrink} is exactly equal to the number of seasons in the data set. This effect seems to be consistent with the choice of degrees of freedom, which limits the support of each spline to a single season. Finally, the effect of covariates is very small in magnitude. Note that, because \textit{sex} is time-constant, in the estimated trajectory we expect to recognise the contribution of \textit{age} as a linear trend and the environmental effect as a differentiation between summer and winter performances. \\
\begin{figure}
\centering
\includegraphics[width = 1.0\textwidth]{figures/Fig4}
\caption{Performance trajectory estimates for a random selection of athletes. The $x$-axis denotes time measured in days from January 1st of the first season of the athlete's career, whereas the $y$-axis shows the length of the throw in meters. Vertical lines represent calendar years (seasons in our notation). The final part of each trajectory (grey), for which no observations are available, represents the one-season-ahead performance prediction.}
\label{Athl}
\end{figure}
\indent In Figure \ref{comp} we highlight the effect of the three additive components of our functional model. The first panel (top-left) displays the observed data and the estimated trajectory for athlete 226. The top-right panel shows the seasonal contribution (i.e., an estimate of $\mu_{is}; \ i = 226; \ s = 1,\ldots,S_{226}$). The third panel (bottom-left) reports the functional contribution (i.e., an estimate of $f_{226}(t)$). Finally, the bottom-right panel shows the effect of covariates. As anticipated, within the estimated trajectory of a single athlete we only recognise, in the bottom-right panel, a predicted performance drop during winter months (the negative effect of the indoor environment on performances). The tight credible intervals for this component ensure that the estimates are significant, despite being small, as we discuss in further detail in the next section.\\
\begin{figure}
\centering
\includegraphics[width = 1.0\textwidth]{figures/Fig5}
\caption{Single contributions to the whole additive model as in Equation \ref{compll}. The first panel is the complete additive model, whereas the second (top-right) displays the estimate of the seasonal random intercept. The third panel (bottom-left) represents the functional contribution, while the bottom-right panel displays the regressive component.}
\label{comp}
\end{figure}
\subsection{Parameter interpretation}
The regression parameters can be easily interpreted from a sports analytics perspective. It is important to stress that, to improve convergence of the MCMC algorithm, we fitted our model after centering each athlete's data around his or her average. Accordingly, in our experiments the raw data $y_{ij}$ were replaced by the centered observations:
\[ \tilde{y}_{ij} = y_{ij} - \frac{1}{n_i}\sum_{k=1}^{n_i} y_{ik} = y_{ij} - \overline{y}_i \quad \text{for } i=1,\ldots,n \text{ and } \ j=1,\ldots,n_i\]
as input for the model. This transformation has to be taken into account when interpreting the regression parameters, especially when dealing with dummy variables.\\
\indent We report the posterior mean estimate of the regression coefficients, their standard deviation, the effective sample size (ESS) and the $95 \%$ posterior credible bounds for the most interesting models. Table \ref{resM1} displays the results under model $M_1$, where covariates are \textit{sex} ($x_1$), \textit{age} ($x_2$) and \textit{environment} ($x_3$).
Even though the covariate effects are small in size, we observe that the $95\%$ credible intervals do not contain zero, indicating significant effects. In particular, $\beta_2$ is positive, suggesting that athletes, on average, improve their performance throughout their careers. Further, $\beta_3$ is positive, meaning that an athlete is likely to perform better outdoors than indoors. Finally, $\beta_1$ is negative. Since \textit{sex} is a time-constant dummy variable, its coefficient quantifies the difference in variability of an athlete's performance around his/her average $\overline{y}_i$. We conclude that women's trajectories exhibit less variability around their average than men's. Parameter estimates under the other models considered in the paper are similar in sign to the ones just discussed. We report complete results in Appendix \ref{tables}. \\
\indent A final comment is required on the results obtained using doping as an additional regressor. We stressed that the LPML performance of models $M_5$ and $M_6$ is quite low; however, this may be due to the fact that the data set is imbalanced, i.e., there are very few doped athletes (18 out of 653). In fact, the ESS of the parameter corresponding to \textit{doping} is very low. Nevertheless, it is interesting to observe that the estimates are similar for all common parameters and that the coefficient of the doping regressor is negative (even if the credible intervals contain zero). We conclude that the use of performance-enhancing drugs seems to have a negative effect on the variability of athletes' performances.
\begin{table}
\centering
\caption{Posterior mean estimates of the regression coefficients for model $M_1$ (Table \ref{modelsummary}), together with their standard deviations, effective sample sizes (ESS) with respect to the 1600 retained samples, and $95\%$ posterior credible bounds.}
\begin{tabular}{llllll}
\toprule
Coeff & Mean & Sd & ESS & $2.5 \% $ & $97.5 \% $ \\ [0.5ex]
\midrule
$\beta_1$ & -0.120 & 0.0270 & 190 & -0.175 & -0.0675 \\
$\beta_2$ & 6.22 e-03 & 9.95 e-04 & 170 & 4.20 e-03 & 8.20 e-03 \\
$\beta_3$ & 0.0453 & 9.55 e-03 & 1600 & 0.0269 & 0.0643 \\
\bottomrule
\end{tabular}
\label{resM1}
\end{table}
\section{Discussion}
We proposed an additive hierarchical Bayesian model for the analysis of athletes' performances in a longitudinal context. Following \cite{MH}, we introduced a smooth functional contribution to explain the overall variability in the data set. The functions are represented by means of a high-dimensional set of pre-specified basis functions, and a factor model on the basis coefficients ensures dimensionality reduction. We enriched the model by allowing time-dependent covariates to affect estimates through a regressive component. Finally, we addressed the seasonal clustering of sports data by introducing a mixed effects model with GARCH errors, which provides evolving random intercepts over the different time intervals in the data set. While the motivation of our work comes from the analysis of shot put performance data, the methodology presented here is applicable to performance data collected in any measurable sport. \\
\indent The Bayesian latent factor methodology was originally developed for very sparse longitudinal data, with the purpose of capturing a global trend in subject-specific trajectories. We balanced the model with the requirement of smoothness by using a B-spline basis system and adding a seasonal random intercept. However, it is evident that the latter explains the majority of the variability in the data set. Therefore, it might be worth considering a functional basis that better captures the intra-seasonal variability. Further, we observed that the contribution of the regressive component is consistent across the various modelling choices. \\
\indent Finally, we looked at the effect of \textit{doping} on results. Despite not being significant, the negative effect seems to suggest that the performances of doped athletes are less variable. This is in line with previous literature suggesting that doping is more likely used to enhance performances in periods of decreasing fitness than to consolidate already good performances, which are generally exposed to strict controls. We think this aspect deserves further investigation, considering, for instance, more specific modelling techniques and a less imbalanced data set.\\
\indent In conclusion, the attempt to extend existing tools of functional statistics to the modelling of (shot put) performance data seems promising because of the adaptability of these methodologies to longitudinal performance data in all measurable sports.
\section*{Acknowledgements}
We would like to thank Prof.\ James Hopker (University of Kent, School of Sport and Exercise Sciences) for sharing the shot put data set and for his helpful comments.
\section*{Conflict of interest}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Availability of data and material}
The raw data set is available at \url{www.tilastopaja.eu}, whereas the data prepared for our analysis are available at \url{https://github.com/PatricDolmeta/Bayesian-GARCH-Modeling-of-Functional-Sports-Data}.
\section*{Code availability}
The code for the Bayesian analysis and for producing the output (numerical and graphical) is available at \url{https://github.com/PatricDolmeta/Bayesian-GARCH-Modeling-of-Functional-Sports-Data}.
\bibliographystyle{apa}
\begin{abstract}
We derive the equations of motion for the planar somersault, which consist of two additive terms. The first is the dynamic phase that is proportional to the angular momentum, and the second is the geometric phase that is independent of angular momentum and depends solely on the details of the shape change.
Next, we import digitised footage of an elite athlete performing 3.5 forward somersaults off the 3m springboard, and use the data to validate our model. We show that reversing and reordering certain sections of the digitised dive can maximise the geometric phase without affecting the dynamic phase, thereby increasing the overall rotation achieved. Finally, we propose a theoretical planar somersault consisting of four shape changing states, where the optimisation lies in finding the shape change strategy that maximises the overall rotation of the dive. This is achieved by balancing the rotational contributions from the dynamic and geometric phases, in which we show the geometric phase plays a small but important role in the optimisation process.
\end{abstract}
\maketitle
\section{Introduction}
The somersault is an acrobatic manoeuvre that is essential in Olympic sports such as diving, trampolining, gymnastics and aerial skiing. Today, athletes seek to better understand the scientific theory behind somersaults in the hope of gaining an edge in competition. Within diving alone there is extensive literature, ranging from books aimed at athletes and coaches (e.g.\! \cite{batterman, Fairbanks, obrien}) to journal articles targeting the scientific community. Yeadon has provided great insight into the biomechanics of the twisting somersault in a series of classical papers \cite{yeadon93a, yeadon93b, yeadon93c, yeadon93d}. In this paper we focus on optimising somersaults without twist, which we will refer to as the planar somersault. There are several studies focusing on the planar somersault, e.g.\! \cite{Blajer2003, King2004471, MurthyK93, schuler2011optimal}, but here we take a different approach that utilises the geometric phase in order to maximise somersault rotation.
The geometric phase is also important in the twisting somersault, see \cite{TwistSom}, and has been used to generate a new dive in \cite{513XD}.
The main focus for the twisting somersault is the generation of twist, while in the planar case the particulars of the shape change can instead be used to generate additional somersault.
Our initial formulas are a special case of those derived in \cite{TwistSom}, but since the resulting differential equation for overall rotation is only
a single first order equation, a much more thorough analysis is possible.
The splitting into the dynamic and geometric phases is a well known modern concept in geometric mechanics,
see e.g.\! \cite{MarsdenRatiu13,Holm08,Holm09}. To our knowledge this is the first application of
geometric phase to diving. When modelling the rotation of a non-rigid
(and hence a shape-changing) body, the contributions to the overall rotation
can be split into two terms.
The more familiar term is the dynamic phase, which is proportional to the angular momentum of the body and thus expresses the obvious fact that the body rotates faster when it has larger angular momentum.
The less familiar term is {\em not} proportional to the angular momentum, and is only present
when a shape change occurs. This term is called the geometric phase, and it gives a contribution
to the overall rotation of the body even when the angular momentum vanishes. This is the
reason a cat can change orientation by changing its body shape even when it has no angular momentum.
In diving angular momentum is non-zero, but the geometric phase is still present, and both
together determine the overall rotation.
The structure of the paper is as follows. In section \ref{sec:model} we take the mathematical model of an athlete introduced in \cite{WTongthesis}, and simplify it to analyse planar somersaults. In section \ref{sec:eqofmotion} we then present the generalised equations of motion for coupled rigid bodies in space, and perform planar reduction to reduce the 3-dimensional vector equations into the 2-dimensional scalar variants. Next in section \ref{sec:realworld} we analyse a real world dive and demonstrate how the geometric phase can be used to improve overall rotation obtained by the athlete. Finally, in section \ref{sec:theoreticalplanar} we propose a new theoretical planar somersault using realistic assumptions to find optimal shape change trajectories that maximise overall rotation. In these instances the dynamic and geometric phases are accessed for different values of angular momentum, and the role of the geometric phase in optimising overall rotation is demonstrated.
A preliminary version of this study was presented at the 1st Symposium for Researchers in Diving at Leipzig, Germany. The conference proceedings can be found in \cite{divingsym}. However, here the model presented in section 2 has been tweaked, the analysis of the real world dive in section 4 has been extended to show how the geometric phase can be utilised to increase the amount of somersault produced, and we present a new optimisation procedure in section 5 that maximises the theoretical planar somersault using the geometric phase.
\section{Model}\label{sec:model}
We begin with the 10-body model proposed in \cite{WTongthesis}, which is a slight modification of Frohlich's \cite{Frohlich} 12-body model that uses simple geometric solids connected at joints to represent the athlete. The components of the 10-body model consist of the torso, head, $2\times$ upper arm, $2\times$ forearm with hand attached, $2\times$ thigh and $2\times$ lower leg with foot attached. Although more sophisticated models exist (like those proposed by Jensen \cite{Jensen76} and Hatze \cite{Hatze}) that use the elliptical zone method to estimate segment parameters, this sophistication only affects the tensor of inertia $I_i$, centre of mass $\bo{C}_i$ and joint location $\bo{E}_i^j$ for each body $B_i$ connected to body $B_j$. As the dynamics are driven only by $I_i$, $\bo{C}_i$ and $\bo{E}_i^j$, we can use the 10-body model since it provides similar estimates for these quantities.
\begin{figure}[b]
\centering
\subfloat[Body segments of the model.]{\includegraphics[width=7.25cm]{schematicB.pdf}}
\hspace*{3mm}\subfloat[Joint vectors of the model.]{\includegraphics[width=7.25cm]{schematicE.pdf}}
\caption{The schematic diagrams illustrate the collection of body segments $B_i$ and joint vectors $\bo{E}_i^j$. Each joint vector $\bo{E}_i^j$ shows the connection between $B_i$ and $B_j$, and is a constant vector when measured in the $B_i$-frame.}
\label{fig:schematic}
\end{figure}
In the case of the planar somersault we enforce the shape change to be strictly about the somersault axis and require that both left and right limbs move together, so that the normalised angular velocity vector is constant. By combining the corresponding left and right limb segments we obtain the 6-body planar model shown in Figure \ref{fig:schematic}, whose parameters are listed in Table \ref{tab:model}.
\begin{table}[t]\centering\begin{tabular}{|l|l|l|l|}
\hline
body $B_i$ & mass $m_i$ & moi $I_i$ & joint position $\bo{E}_i^j$ \\\hline
$B_1=$ torso & $m_1=32.400$ & $I_1=1.059$ & $\bo{E}_1^2=(0,0.25)^t$\\
& & & $\bo{E}_1^4=(0.08,-0.3)^t$\\
& & & $\bo{E}_1^6=(0,0.3)^t$\\
$B_2=$ upper arms & $m_2=4.712$ & $I_2=0.038$ & $\bo{E}_2^1=(0,0.2)^t$\\
& & & $\bo{E}_2^3=(0,-0.15)^t$\\
$B_3=$ forearms \& hands & $m_3=4.608$ & $I_3=0.055$ & $\bo{E}_3^2=(0,0.183)^t$\\
$B_4=$ thighs & $m_4=17.300$ & $I_4=0.294$ & $\bo{E}_4^1=(0.08,0.215)^t$\\
& & & $\bo{E}_4^5=(0,-0.215)^t$\\
$B_5=$ lower legs \& feet& $m_5=11.044$ & $I_5=0.310$ & $\bo{E}_5^4=(0.08,0.289)^t$\\
$B_6=$ head & $m_6=5.575$ & $I_6=0.027$ & $\bo{E}_6^1=(0,-0.11)^t$\\
\hline
\end{tabular}
\caption{The parameters of the 6-body planar model obtained by reducing the 10-body model proposed in \cite{WTongthesis}. We abbreviate moment of inertia as moi in the table above.}
\label{tab:model}
\end{table}
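For readers wishing to reproduce the computations, the parameters of Table \ref{tab:model} can be transcribed into a small data structure; the Python sketch below is only an illustrative encoding (the names and layout are chosen here for exposition, not taken from the original model code).
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    mass: float                 # kg
    moi: float                  # kg m^2 about the somersault axis
    joints: dict = field(default_factory=dict)  # joint offsets (x, z) in metres,
                                                # keyed by the index of the connected body

BODY = {
    1: Segment("torso", 32.400, 1.059,
               {2: (0.00, 0.25), 4: (0.08, -0.30), 6: (0.00, 0.30)}),
    2: Segment("upper arms", 4.712, 0.038, {1: (0.00, 0.20), 3: (0.00, -0.15)}),
    3: Segment("forearms and hands", 4.608, 0.055, {2: (0.00, 0.183)}),
    4: Segment("thighs", 17.300, 0.294, {1: (0.08, 0.215), 5: (0.00, -0.215)}),
    5: Segment("lower legs and feet", 11.044, 0.310, {4: (0.08, 0.289)}),
    6: Segment("head", 5.575, 0.027, {1: (0.00, -0.11)}),
}

total_mass = sum(s.mass for s in BODY.values())   # approximately 75.6 kg
\end{verbatim}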
\section{Equations of motion}\label{sec:eqofmotion}
The equation of motion for a rigid body in 3-dimensional space is
\begin{equation}
\bo{\dot{L}}=\bo{L}\times \bo{\Omega},\label{eq:eom}
\end{equation}
where $\bo{L}$ is the angular momentum and $\bo{\Omega}$ is the angular velocity. Now if the rigid body is replaced by a system of coupled rigid bodies, then \eqref{eq:eom} holds if
\begin{equation}
\bo{\Omega} = I^{-1}(\bo{L}-\bo{A}).\label{eq:omega}
\end{equation}
Here, $I$ is the overall tensor of inertia and $\bo{A}$ is the total momentum shift generated by the shape change. Thus in the absence of shape change $\bo{A}=\bo{0}$, which gives the classical result $\bo{\Omega} = I^{-1}\bo{L}$. The proof of \eqref{eq:omega} is provided in Theorem 1 in \cite{TwistSom}, along with $I$ and $\bo{A}$, which are
\begin{align}
I&=\sum_{i=1}^6\Big(R_{\alpha_i} I_i R_{\alpha_i}^t+m_i\left[|\bo{C}_i|^2\mathbb{1}-\bo{C}_i\bo{C}_i^t\right]\Big)\label{eq:I}\\
\bo{A}&=\sum_{i=1}^6\Big(m_i \bo{C}_i\times \bo{\dot{C}}_i +R_{\alpha_i} I_i\bo{\Omega}_{\alpha_i}\Big).\label{eq:A}
\end{align}
In \eqref{eq:I} and \eqref{eq:A} for each body $B_i$ we have: the mass $m_i$, the tensor of inertia $I_i$, the centre of mass $\bo{C}_i$, the relative orientation $R_{\alpha_i}$ to the reference body (chosen to be $B_1$), and the relative angular velocity $\bo{\Omega}_{\alpha_i}$, such that the angular velocity tensor is $\hat{\Omega}_{\alpha_i} = R_{\alpha_i}^t \dot{R}_{\alpha_i}$.
It is clear that when there is no shape change $\dot{\bo{C}}_i$ and $\bo{\Omega}_{\alpha_i}$
both vanish, and hence $\bo{A}$ vanishes.
Manipulating \eqref{eq:A} by using the definition of $\bo{C}_i$ found in \cite{WTongthesis} and the vector triple product formula, we can factorise out the relative angular velocity $\bo{\Omega}_{\alpha_i}$ to obtain
\begin{equation}
\bo{A}=\sum_{i=1}^6\Big(R_{\alpha_i}\Big[\sum_{j=1}^6 m_j [R_{\alpha_i}^t \bo{C}_j \cdot \bo{\tilde{D}}_i^j \mathbb{1} - \bo{\tilde{D}}_i^j \bo{C}_j^t R_{\alpha_i}] + I_i \Big]\bo{\Omega}_{\alpha_i}\Big),\label{eq:A2}
\end{equation}
where $\bo{\tilde{D}}_i^j$ is a linear combination of $\bo{E}_i^j$'s defined in Appendix \ref{app:D}.
For planar somersaults the angular momentum and angular velocity vector are only non-zero about the somersault axis, so we write $\bo{L}=(0,L,0)^t$ and $\bo{\Omega}=(0,\dot{\theta},0)^t$ to be consistent with \cite{WTongthesis}. Substituting $\bo{L}$ and $\bo{\Omega}$ in \eqref{eq:eom} shows that $\bo{\dot{L}}=\bo{0}$, meaning the angular momentum is constant. As the athlete is symmetric about the median plane, the centre of mass for each $B_i$ takes the form $\bo{C}_i = (C_{i_x},0,C_{i_z})^t$ and the relative angular velocity $\bo{\Omega}_{\alpha_i}=(0,\dot{\alpha}_i,0)^t$. Substituting these results in \eqref{eq:I} and \eqref{eq:A2}, we can then simplify to obtain the 2-dimensional analogue of $I$ and $\bo{A}$ giving
\begin{align}
I^{(2D)} &= \sum_{i=1}^6 I_{i_y}+m_i(C_{i_x}^2+C_{i_z}^2)\\
A^{(2D)} &= \sum_{i=1}^6\Big( \sum_{j=1}^6 m_j\big[\bo{C}_j\cdot \bo{\tilde{D}}_i^j \cos{\alpha_i} + P\bo{C}_j\cdot \bo{\tilde{D}}_i^j \sin{\alpha_i}\big]+I_{i_y}\Big)\dot{\alpha}_i,
\end{align}
where $I_{i_y}$ is the (2,2) entry of $I_i$ and $P = \text{antidiag}(-1,1,1)$ is an anti-diagonal matrix used to permute the components of $\bo{C}_j$.
The 2-dimensional analogue of \eqref{eq:omega} with the arguments $\bo{\alpha}=(\alpha_2,\dots,\alpha_6)^t$ (note $\alpha_1=0$ because $B_1$ is the reference segment) explicitly written and $(2D)$ superscripts suppressed is then
\begin{equation}
\dot{\theta} = I^{-1}(\bo{\alpha})L+\bo{F}(\bo{\alpha})\cdot\bo{\dot{\alpha}},\label{eq:theta}
\end{equation}
where we write $\bo{F}(\bo{\alpha})\cdot\bo{\dot{\alpha}} = -I^{-1}(\bo{\alpha})A(\bo{\alpha},\bo{\dot{\alpha}})$ to match the differential equation found in \cite{pentagon}. In that paper $L=0$, and the study focused on maximising the geometric phase. However, here $L\neq 0$ and the differential equation \eqref{eq:theta} is composed of two parts: the dynamic phase $I^{-1}(\bo{\alpha})L$, which is proportional to $L$, and the geometric phase $\bo{F}(\bo{\alpha})\cdot\bo{\dot{\alpha}}$, which is independent of $L$. Solving \eqref{eq:theta} with the initial condition $\theta(0)=\theta_0$ gives the orientation of the athlete as a function of time.
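In practice we integrate \eqref{eq:theta} numerically along a sampled shape trajectory. The sketch below (Python) accumulates the dynamic and geometric contributions separately with the trapezoidal rule; the callables \texttt{I\_of} and \texttt{F\_of}, standing in for $I(\bo{\alpha})$ and $\bo{F}(\bo{\alpha})$, are placeholders for the model-specific expressions.
\begin{verbatim}
import numpy as np

def integrate_theta(t, alpha, I_of, F_of, L, theta0=0.0):
    """Integrate d(theta)/dt = L / I(alpha) + F(alpha) . d(alpha)/dt.

    t     : (n,) sample times over the dive
    alpha : (n, d) shape angles at those times
    I_of  : callable returning the scalar moment of inertia for a shape
    F_of  : callable returning the length-d vector F for a shape
    Returns (theta_final, theta_dyn, theta_geo).
    """
    inv_I = np.array([1.0 / I_of(a) for a in alpha])
    dt = np.diff(t)
    theta_dyn = L * float(np.sum(0.5 * (inv_I[1:] + inv_I[:-1]) * dt))

    dalpha = np.diff(alpha, axis=0)                       # shape increments
    F_mid = np.array([F_of(0.5 * (alpha[i] + alpha[i + 1]))
                      for i in range(len(t) - 1)])
    theta_geo = float((F_mid * dalpha).sum())             # line integral of F . dalpha

    return theta0 + theta_dyn + theta_geo, theta_dyn, theta_geo
\end{verbatim}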
The sequence of shape changes an athlete goes through while performing a dive can be represented by a curve in shape space, which closes into a loop provided the athlete's take-off and final shapes are the same. The dynamic and geometric phases both depend on the path of the loop; however, the dynamic phase also depends on the velocity with which the loop is traversed, while the geometric phase is independent of the velocity. Traversing the same loop with different velocities therefore contributes different amounts to the dynamic phase, while the contribution to the geometric phase is unchanged. For planar somersaults, we generally expect the dominating term to be the dynamic phase as it is proportional to $L$ (which is large), and the geometric phase to play a lesser role.
The main idea behind our optimisation is that we assume $L$ is already as large as possible
and cannot be increased further. Also, we assume that the athlete holds the tuck or pike position as tightly as possible in the middle of the dive, so again no further improvement is possible. Both increasing $L$ and decreasing $I$ increase the dynamic phase, which increases the overall rotation and thus the
number of somersaults. We will show that the contribution from the second term in \eqref{eq:theta},
the geometric phase, can be used in principle to increase the overall rotation.
\section{A real world dive}\label{sec:realworld}
Footage of a professional male athlete performing a 107B dive (forward 3.5 somersaults in pike) off the 3m springboard was captured at the New South Wales Institute of Sport (NSWIS) using a 120 FPS camera. SkillSpector \cite{skillspector} was used to manually digitise the footage, which commenced from the moment of take-off and ended once the athlete's hand first made contact with the water upon entry. The total airborne time spanned 1.55 seconds, creating a total of 187 frames, and the digitisation of the dive is shown in Figure \ref{fig:digitise}. For convenience we shall refer to the initial frame as the zeroth frame, and write $\bo{\alpha}[j]=\{\alpha_2[j],\dots\}$ to denote the collection of shape angles of the $j$th frame, where $0 \leq j \leq 186$.
\begin{figure}[t]
\centering
\includegraphics[width=14cm]{planarsketch.pdf}
\caption{Illustration of the digitised dive for frames 0, 15, 30, 45, 60, 75, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180 and 186, from left to right. To avoid clutter, each illustrated frame has been shifted right by a small constant amount to provide better visualisation of the dive sequence.}\label{fig:digitise}
\end{figure}
In each frame we locate the joint positions of the ankle, knee, hip, shoulder, elbow, wrist, and ear (which serves as a decent approximation for the centre of mass of the head). To reduce digitisation errors, a discrete Fourier cosine transform is applied to the data; keeping only the first fifteen Fourier coefficients smooths the data when the transformation is inverted. The spatial orientation $\theta_i$ is the angle between an orientation vector constructed from appropriate joint positions (e.g.\! hip and shoulder positions for the torso) and the reference vector given by the anatomical neutral position when standing upright. From $\theta_i$ the relative orientation $\alpha_i$ can be obtained by using
$\alpha_i = \theta_i - \theta_\mathit{obs}$ for $i\in\{2,\dots,6\}$, where $\theta_\mathit{obs}=\theta_1$ is the observed spatial orientation of the torso. The spatial orientation $\theta_\mathit{obs}$ and relative orientations $\alpha_2,\dots,\alpha_6$ for the digitised dive are shown in Figure \ref{fig:angles}.
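The smoothing step above can be reproduced with a standard type-II discrete cosine transform; the sketch below keeps the first fifteen coefficients, as in our processing, but the synthetic example and array names are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.fft import dct, idct

def smooth_dct(x, n_keep=15):
    """Low-pass smooth a digitised coordinate series by truncating its DCT."""
    c = dct(x, type=2, norm="ortho")
    c[n_keep:] = 0.0                       # discard high-frequency coefficients
    return idct(c, type=2, norm="ortho")

# illustrative use on a noisy synthetic trace of 187 frames at 120 FPS
t = np.arange(187) / 120.0
raw = np.sin(2 * np.pi * t / 1.55) + 0.05 * np.random.default_rng(1).normal(size=t.size)
smoothed = smooth_dct(raw)
\end{verbatim}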
\begin{figure}[p]
\centering
\subfloat[Orientation of the athlete.]{\includegraphics[width=7.25cm]{angu.pdf}}
\hspace*{2mm}\subfloat[Relative angle of the upper arms.]{\includegraphics[width=7.25cm]{angup.pdf}}\\
\subfloat[Relative angle of the forearms and hands.]{\includegraphics[width=7.25cm]{angud.pdf}}
\hspace*{2mm}\subfloat[Relative angle of the thighs.]{\includegraphics[width=7.25cm]{anglp.pdf}}\\
\subfloat[Relative angle of the lower legs and feet.]{\includegraphics[width=7.25cm]{angld.pdf}}
\hspace*{2mm}\subfloat[Relative angle of the head.]{\includegraphics[width=7.25cm]{anghd.pdf}}
\caption{The orientation of the athlete is specified by $\theta_\mathit{obs}$ and the collection of $\alpha_i$ specifies the shape.}
\label{fig:angles}
\end{figure}
To validate the 6-body planar model we compute $\theta$ using \eqref{eq:theta} and compare the theoretical result to the observed $\theta_\mathit{obs}$. Small variations between the two curves are expected as the segment parameters are taken from Table \ref{tab:model} and not from the particular athlete used in the data. Comparison is made by using the initial orientation
\begin{equation}
\theta_0=\theta_\mathit{obs}[0]=0.6478\label{eq:theta0}
\end{equation}
and collection of shape angles $\bo{\alpha}[\mathit{fr}]$, where square brackets denote frame $\mathit{fr}$ and round brackets indicate continuous time $t$. A cubic interpolation is used to obtain $\bo{\alpha}(t)$, which is then substituted in \eqref{eq:theta} to obtain $\theta$. The angular momentum constant $L$ is found by a least-squares fit, which gives $L=122.756$, and the difference $\Delta\theta=\theta-\theta_\mathit{obs}$ is shown in Figure \ref{fig:deltatheta}. From the moment of take-off until the diver hits the water the discrepancy remains small, with the maximum difference being $0.331$ radians at time $t=0.650$. When the dive is completed the difference is only $0.045$ radians, which gives us confidence in using the planar model.
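Because \eqref{eq:theta} is linear in $L$, the least-squares estimate of $L$ has a closed form once the accumulated integrals $D(t)=\int_0^t I^{-1}(\bo{\alpha})\,\mathrm{d}t$ and $G(t)=\int_0^t \bo{F}(\bo{\alpha})\cdot\bo{\dot{\alpha}}\,\mathrm{d}t$ have been evaluated frame by frame. A minimal sketch, with assumed array inputs, is:
\begin{verbatim}
import numpy as np

def fit_L(theta_obs, D, G, theta0):
    """Closed-form least-squares fit of the angular momentum constant L.

    theta_obs : observed orientation at each frame
    D         : accumulated integral of 1/I(alpha) up to each frame
    G         : accumulated geometric-phase integral up to each frame
    Model: theta(t; L) = theta0 + L * D(t) + G(t), linear in L.
    """
    resid = theta_obs - theta0 - G
    return float(np.dot(D, resid) / np.dot(D, D))
\end{verbatim}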
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{shape5.pdf}
\caption{The difference between $\theta$ computed with \eqref{eq:theta} and the observed $\theta_\mathit{obs}$ from the data.}\label{fig:deltatheta}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=5cm]{weirdshape.pdf}
\caption{An impossible shape configuration of the athlete with $\theta = 0.648$ and $\bo{\alpha}=\{1.5,-2.5,2,-2,1\}$.}\label{fig:weird}
\end{figure}
Looking at the graphs in Figure \ref{fig:angles} it appears the upper arms and forearms move together relative to the torso, and similarly the thighs and lower legs. Therefore it is reasonable to assume that the elbows and knees remain straight throughout the dive, in particular the knees during pike somersaults. The head can be included
in the torso to remove another degree of freedom. These assumptions further reduce the segment count to three, so that the shape space is some subset of a 2-dimensional torus. We say subset as there are shapes deemed impossible or unrealistic for diving, e.g.\! the shape shown in Figure \ref{fig:weird}. This motivates us to constrain the shape space to $\{\alpha_2,\alpha_4\}\in[-\pi,0]^2$ for the set of all possible shapes obtainable by the athlete. An important point to note is that the general theory does not depend on this segment count reduction; these assumptions merely make it easier to understand the main principles behind the theory. Using $L=120$ for comparison purposes, we found that the difference in overall rotation between the reduced and the full model is less than $6.53^\circ$, which is small considering the athlete completes 3.5 somersaults in the dive.
\begin{figure}[t]
\centering
\hspace*{-10mm}\subfloat[The full trajectory.]{\includegraphics[width=7.8cm]{Bcurve.png}}
\hspace{2mm}\subfloat[Close-up of the left pane.]{\includegraphics[width=8.1cm]{Bcurvezoom.png}}
\caption{The interpolated shape change trajectory plotted with constant $B$ contours. The 10 pieces $C_1,\dots, C_{10}$ of $C$ are also identified here.}\label{fig:shapespace}
\end{figure}
\begin{figure}[b]
\centering
\hspace*{-0mm}\subfloat[The shape trajectories of the original dive where the blue regions denote sub-loops that require their directions reversed to maximise the geometric phase.]{\includegraphics[width=16cm]{acurvesv1.pdf}}\\
\hspace{0mm}\subfloat[The shape trajectories after orientating all sub-loops to provide a positive contribution to the geometric phase. The vertical dashed lines indicate where the pieces are joined.]{\includegraphics[width=16cm]{acurvesv2.pdf}}
\caption{The shape trajectories before and after orientating the sub-loops.}\label{fig:loops}
\end{figure}
The athlete's 2-dimensional shape change trajectory is shown in Figure \ref{fig:shapespace}, where the loop $C$ is constructed by letting $\bo{\alpha}(t)$ run from zero to the airborne time $T_\mathit{air}$. As the take-off and entry shapes are not identical, the overall rotation obtained is gauge dependent; see \cite{LittlejohnReinsch97} for details. To eliminate this ambiguity we add one additional frame with $\bo{\alpha}[187] = \bo{\alpha}[0]$, so that $\bo{\alpha}(t)$ closes and the overall rotation is well-defined. As we now have a total of 188 frames captured at 120 FPS, the additional frame adds $1/120$th of a second to the airborne time, thus making $T_\mathit{air} = 187/120$. To find the change in orientation of the dive we solve \eqref{eq:theta} with \eqref{eq:theta0}, giving
\begin{equation}
\theta(T_\mathit{air})= \theta_\mathit{dyn} + \theta_\mathit{geo} + \theta_0,
\end{equation}
where the dynamic phase contribution is
\begin{equation}
\theta_\mathit{dyn} = L \int_0^{T_\mathit{air}} I^{-1}\big(\bo{\alpha}(t)\big)\,\mathrm{d}t= 0.1705L\label{eq:dyn}
\end{equation}
and the geometric phase contribution is
\begin{equation}
\theta_\mathit{geo} = \int_0^{T_\mathit{air}}\bo{F}\big(\bo{\alpha}(t)\big)\cdot\bo{\dot{\alpha}}(t)\,\mathrm{d}t= 0.0721.\label{eq:geo}
\end{equation}
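As a rough numerical check of \eqref{eq:dyn} and \eqref{eq:geo}, with the fitted value $L=122.756$,
\begin{equation*}
\theta_\mathit{dyn} + \theta_\mathit{geo} \approx 0.1705 \times 122.756 + 0.0721 \approx 21.0 \textrm{ rad} \approx 3.3 \textrm{ revolutions}
\end{equation*}
over the $1.55$ seconds of flight, of which the geometric phase accounts for only about $0.3\%$.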
As the angular momentum $L\approx 120$ is large, this confirms our hypothesis that the dynamic phase is the dominant term and the geometric phase only plays a minor role. For this particular dive the geometric phase amounts to only about $4^\circ$ of rotation, and hence is negligible. However, we will show that this contribution can be increased to something tangible by changing the approach into and out of pike in the next section.
Instead of directly integrating the second term in the differential equation \eqref{eq:theta}, the geometric phase can be obtained from certain areas in shape space.
If the loop in shape space is closed the integral can be rewritten in terms of a certain function over the enclosed area.
The reformulation is obtained by applying Green's theorem, which gives
\begin{equation}
\int_{C} F(\bo{\alpha})\cdot\dot{\bo{\alpha}}\,dt = \iint_A B(\bo{\alpha})\,d\tilde{A},\label{eq:green}
\end{equation}
where $A$ is the region enclosed by $C$ and $B(\bo{\alpha})$ is interpreted as the magnetic field with constant contours shown in Figure \ref{fig:shapespace}.
The advantage of the area formulation is that it is easy to visualise the function $B(\alpha)$ in shape
space, and wherever this function is large the potential contribution to the geometric phase is large as well.
By writing out $\bo{F}(\bo{\alpha})=F_1(\bo{\alpha})\bo{i}+F_2(\bo{\alpha})\bo{j}$ explicitly, we have
\begin{align}
F(\bo{\alpha})\cdot\dot{\bo{\alpha}} &= F_1(\bo{\alpha})\,\frac{d\alpha_2}{dt}+F_2(\bo{\alpha})\,\frac{d\alpha_4}{dt} &\text{ and }&&
B(\bo{\alpha}) &= \frac{\partial F_2(\bo{\alpha})}{\partial \alpha_2}-\frac{\partial F_1(\bo{\alpha})}{\partial \alpha_4}\nonumber
\end{align}
appearing in \eqref{eq:green}. As $C$ is self-intersecting, the geometric phase can be computed by partitioning the loop into 10 pieces labelled $C_1,\dots, C_{10}$ (as shown in Figure \ref{fig:shapespace}) and then taking the appropriate pieces that make up the sub-loops. The total geometric phase is therefore the sum of the individual geometric phase contributions from each sub-loop. Specifically, the total geometric phase is composed of
\begin{align}
(C_2,C_4,C_6,C_8) &= 0.0953 & C_3 &= -0.0149 & C_5 &= -0.0012\nonumber\\
(C_1,C_9,C_{10}) &= -0.0023 & C_7 &= -0.0048, &\nonumber
\end{align}
where summing the contributions from these five sub-loops yields $\theta_\mathit{geo}$ found in \eqref{eq:geo}.
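Numerically, the contribution of any closed sub-loop can also be evaluated directly as the discrete line integral $\oint F(\bo{\alpha})\cdot\mathrm{d}\bo{\alpha}$, without invoking the area formulation; the sketch below assumes the sub-loop is supplied as an ordered array of $(\alpha_2,\alpha_4)$ points and that \texttt{F\_of} evaluates $\bo{F}(\bo{\alpha})$.
\begin{verbatim}
import numpy as np

def loop_geometric_phase(loop, F_of):
    """Discrete line integral of F around a closed loop in shape space.

    loop : (n, 2) ordered array of (alpha_2, alpha_4) points; the loop is
           closed by joining the last point back to the first.
    Reversing the traversal order flips the sign of the contribution.
    """
    closed = np.vstack([loop, loop[:1]])
    dalpha = np.diff(closed, axis=0)
    F_mid = np.array([F_of(0.5 * (closed[i] + closed[i + 1]))
                      for i in range(len(loop))])
    return float((F_mid * dalpha).sum())
\end{verbatim}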
We observe that most contributions towards the geometric phase are negative, meaning improvement can be made by reversing the direction of travel along the loop. As $B < 0$ throughout the region of shape space visited during the dive, loops orientated clockwise provide a positive contribution to the geometric phase, while loops orientated counterclockwise provide a negative contribution.
We will now show that the geometric phase can be increased without changing the dynamic phase.
Originally the pieces were ordered from $C_1$ to $C_{10}$, but after orientating the sub-loops the order becomes $-C_{10} \rightarrow -C_9 \rightarrow C_2\rightarrow -C_3\rightarrow C_4\rightarrow -C_5\rightarrow C_6\rightarrow -C_7\rightarrow C_8\rightarrow -C_1$, where a negative means we traverse along the piece in the opposite direction. The before and after shape trajectories are shown in Figure \ref{fig:loops}, where the improved geometric phase is
\begin{equation}
\tilde{\theta}_\mathit{geo} = 0.1186.
\end{equation}
While the effect from the geometric phase is small, it is still an improvement of $64\%$ compared to
the original geometric phase \eqref{eq:geo}.
Clearly these adjustments are not practical in an actual dive. However, we want to emphasise that we can increase the overall rotation achieved simply by reordering and reversing certain parts of the loop $C$. This additional rotation is obtained by maximising the geometric phase while leaving the dynamic phase unchanged.
\section{Optimising Planar Somersaults}\label{sec:theoreticalplanar}
\begin{figure}[t]
\centering
\includegraphics[width=14cm]{moi.pdf}
\caption{The moment of inertia $I$ plotted frame by frame.}
\label{fig:moi}
\end{figure}
The following observations can be made by analysing Figure \ref{fig:digitise} and Figure \ref{fig:moi}:
\begin{enumerate}
\item[1.] The athlete takes off with a large moment of inertia.
\item[2.] The athlete transitions quickly into pike position.
\item[3.] The athlete maintains pike position during which the moment of inertia is small.
\item[4.] The athlete completes the dive with (roughly) the same shape as during take-off.
\end{enumerate}
\begin{figure}[t]
\centering
\subfloat[Shape changing velocity of the arms.]{\includegraphics[width=15cm]{velau.pdf}}\\
\subfloat[Shape changing velocity of the legs.]{\includegraphics[width=15cm]{velal.pdf}}
\caption{When entering pike position after take-off, the athlete's arms and legs reach their desired positions at frames 32 and 34, respectively. When leaving pike, the arms move first during frames 147 to 176, followed by the legs between frames 156 and 187.}
\label{fig:shapevelocity}
\end{figure}
Figure \ref{fig:shapevelocity} illustrates the relative velocities of the athlete's arms and legs, where the black boxes indicate transitions into and out of pike position that take at least a quarter second to complete. The interval between the two transition stages reveals small oscillations in the velocities, which are the result of the athlete making micro-adjustments to maintain pike position under heavy rotational forces.
Let $\bo{\alpha}_\mathit{max}\in[-\pi,0]^2$ correspond to the shape with maximum moment of inertia $I_\mathit{max}$, and let $\bo{\alpha}_\mathit{min}\in[-\pi,0]^2$ correspond to the shape with minimum moment of inertia $I_\mathit{min}$. We then find
\begin{align}
\bo{\alpha}_\mathit{max}&=(-\pi, -0.3608) & I_\mathit{max} &= 21.0647\nonumber\\
\bo{\alpha}_\mathit{min}&=(-0.3867,-\pi) & I_\mathit{min} &= 5.2888\nonumber
\end{align}
and illustrate these shape configurations in Figure \ref{fig:Iextreme}. The figure reveals hip flexion in the shape corresponding to $\bo{\alpha}_\mathit{max}$, which is due to the positioning of the hip joint in the model. This resembles reality as athletes exhibit some hip flexion during take-off and entry into the water, as seen in Figure \ref{fig:digitise}.
\begin{figure}[t]
\centering
\subfloat[Shape $\bo{\alpha}_\mathit{max}$.]{\includegraphics[width=7.25cm]{Imax.pdf}}
\subfloat[Shape $\bo{\alpha}_\mathit{min}$.]{\includegraphics[width=7.25cm]{Imin.pdf}}
\caption{The shape configurations with extremum moments of inertia.}
\label{fig:Iextreme}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{fastest.png}
\caption{The black curve illustrates the quickest way to move into and out of pike position so that the transition time is a quarter second. Constant $I(\bo{\alpha})$ contours have been plotted, which show that at $\bo{\alpha}_\mathit{max}$ and $\bo{\alpha}_\mathit{min}$ the moments of inertia are $I_\mathit{max}$ and $I_\mathit{min}$, respectively.}\label{fig:fastest}
\end{figure}
We now propose a theoretical dive by using the structure observed in the digitised dive as a guideline. In the idealised dive the athlete takes off with shape $\bo{\alpha}_\mathit{max}$ and immediately transitions into pike position specified by shape $\bo{\alpha}_\mathit{min}$. The athlete maintains pike without oscillations, then reverts back into the original shape $\bo{\alpha}_\mathit{max}$ to complete the dive.
It appears obvious that this process will yield the maximal amount of somersault. However, as we will show, this is only true when the transition from $\bo{\alpha}_\mathit{max}$ to $\bo{\alpha}_\mathit{min}$ is instantaneous, which is of course unrealistic. When a maximum speed of shape change is imposed we will show that the amount of
somersault can be increased slightly by using a different manoeuvre. The explanation behind this surprising observation is the geometric phase.
We cap $|\dot{\alpha}_2| = 11.0194$ (arms) and $|\dot{\alpha}_4| = 11.1230$ (legs)
when the limbs move at maximum speed, so that the transition from $\bo{\alpha}_\mathit{max}$ to $\bo{\alpha}_\mathit{min}$ (and vice versa) takes precisely a quarter second. Moving the limbs at maximum speed into and out of pike position maximises the time spent in pike, but there is no contribution to the amount of somersault from the geometric phase. We know the geometric phase is zero because there is no area enclosed by the loop $C$, as illustrated in Figure \ref{fig:fastest}.
The loop $C$ appears to be a line (and hence has no enclosed area) because the motions into pike and back into the layout position are happening in exactly the same way.
The essential idea is to break this symmetry by making the shape change into pike and the shape change back into the layout position different.
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{slowest.png}
\caption{Here $\{s_\mathit{in}, s_\mathit{out}\}=\{0.9818, 0.3158\}$, so the transition from $\bo{\alpha}_f= (-0.3867, -3.0910)$ to $\bo{\alpha}_\mathit{min}$ takes $0.0046$ seconds, and from $\bo{\alpha}_\mathit{min}$ to $\bo{\alpha}_b=(-2.2717, -\pi)$ takes $0.1711$ seconds. The loop is shown with constant $B(\bo{\alpha})$ contours, so the enclosed region gives an idea of the expected geometric phase contribution.}\label{fig:slowest}
\end{figure}
This idea is illustrated in Figure \ref{fig:slowest}, which has a slower leg movement when moving into pike, so that after a quarter second the arms are in place while the legs are not. This is indicated by the shape $\bo{\alpha}_f$, and the black vertical curve shows the additional leg movement required to reach $\bo{\alpha}_\mathit{min}$. When leaving pike position the arms move first to reach shape $\bo{\alpha}_b$, before both pairs of limbs move concurrently for a quarter second to complete the dive with shape $\bo{\alpha}_\mathit{max}$.
In this part of the shape change the legs move at maximal speed.
While this results in a rotational contribution from the geometric phase, the contribution from the dynamic phase is smaller due to the reduced time spent in pike position.
As the geometric phase given by \eqref{eq:green} involves integrating $B(\bo{\alpha})$ over the region enclosed by the loop $C$, having the absolute maximum of $B(\bo{\alpha})$ and its neighbouring large absolute values contained in this region will provide a more efficient (in terms of geometric phase per arc length) contribution towards the geometric phase. Maximising the overall rotation obtained therefore involves finding the balance between the dynamic and geometric phase contributions. We will now perform optimisation to determine the speed at which the arms and legs should move to achieve this.
\begin{figure}[p]
\centering
\hspace*{-10mm}\subfloat[$L=10$; $\{s_\mathit{in}, s_\mathit{out}\}=\{0.6093,0\}$]{\includegraphics[width=7.95cm]{s10.png}}
\hspace{2mm}\subfloat[$L=30$; $\{s_\mathit{in}, s_\mathit{out}\}=\{0.9818,0.3158\}$.]{\includegraphics[width=7.95cm]{s30.png}}\\
\hspace*{-10mm}\subfloat[$L=120$; $\{s_\mathit{in}, s_\mathit{out}\}=\{1,0.8593\}$.]{\includegraphics[width=7.95cm]{s120.png}\label{subfig:s120}}
\hspace{2mm}\subfloat[$L=240$; $\{s_\mathit{in}, s_\mathit{out}\}=\{1,1\}$.]{\includegraphics[width=7.95cm]{s240.png}}\\
\caption{Contour plots of constant $\theta$ for four different $L$ values shown. The optimal $\{s_\mathit{in}, s_\mathit{out}\}$ in each case that maximises the overall rotation $\theta$ is specified and indicated by the black point.}\label{fig:s}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{optimal.png}
\caption{Loop with $\{s_\mathit{in}, s_\mathit{out}\}=\{1, 0.8593\}$ shown with constant $B(\bo{\alpha})$ contours, and the point $\bo{\alpha}_b=(-0.7743, -\pi)$.}\label{fig:optimal}
\end{figure}
We define $s_\mathit{in}\in[0,1]$ to be the fraction of maximum speed for the leg movement into pike position, and $s_\mathit{out}\in[0,1]$ to be the fraction of maximum speed for the arm movement out of pike. These reduced speeds (for either arms or legs) will only be used when moving from $\bo{\alpha}_\mathit{max}$ to $\bo{\alpha}_f$ and $\bo{\alpha}_b$ to $\bo{\alpha}_\mathit{max}$, as the transitions from $\bo{\alpha}_f$ to $\bo{\alpha}_\mathit{min}$ and $\bo{\alpha}_\mathit{min}$ to $\bo{\alpha}_b$ will involve the appropriate limb moving at maximum speed to minimise the extra time spent in shape change. With this construction, the extra time required to move into and out of pike is $\tau_E(s_\mathit{in})$ and $\tau_E(s_\mathit{out})$, where
\begin{equation}
\tau_E(s) = (1-s)/4.\label{eq:extT}
\end{equation}
For the dive illustrated in Figure \ref{fig:fastest} we have $\{s_\mathit{in}, s_\mathit{out}\}=\{1,1\}$, thus both $\tau_E(s_\mathit{in}) = \tau_E(s_\mathit{out}) =0$. However, in general when $\tau_E(s_\mathit{in}) \neq 0$ and $\tau_E(s_\mathit{out}) \neq 0$ the dive is composed of four pieces: the transitions from $\bo{\alpha}_\mathit{max} \rightarrow \bo{\alpha}_f$, $\bo{\alpha}_f \rightarrow \bo{\alpha}_\mathit{min}$, $\bo{\alpha}_\mathit{min} \rightarrow \bo{\alpha}_b$ and $\bo{\alpha}_b \rightarrow \bo{\alpha}_\mathit{max}$, which we denote as $\bo{\alpha}_i(t)$ for $i\in\{1,2,3,4\}$, respectively.
Let the transition time for each $\bo{\alpha}_i(t)$ be from zero to $\tau_i$, then $\tau_1=\tau_4=1/4$, $\tau_2=\tau_E(s_\mathit{in})$, $\tau_3=\tau_E(s_\mathit{out})$, giving the cumulative shape change time
\begin{equation}
T_\Sigma = 1/2 + \tau_E(s_\mathit{in}) + \tau_E(s_\mathit{out}).
\end{equation}
Combined with $T_\mathit{pike}$, which is the duration the athlete maintains pike position, we obtain the total airborne time
\begin{equation}
T_\mathit{air} = T_\Sigma + T_\mathit{pike}.\label{eq:Tair}
\end{equation}
Defining the shape change velocities in vector form to be
\begin{equation}
\bo{v}(s_\mathit{in},s_\mathit{out}) = 4\diag{(s_\mathit{out}, s_\mathit{in})}(\bo{\alpha}_\mathit{min}-\bo{\alpha}_\mathit{max}),
\end{equation}
the shape changing transitions can be written as
\begin{align}
\bo{\alpha}_1(t) &= \bo{\alpha}_\mathit{max}+t\bo{v}(s_\mathit{in}, 1) & \bo{\alpha}_2(t) &= \bo{\alpha}_f+t\bo{v}(1,0) \nonumber\\
\bo{\alpha}_3(t) &= \bo{\alpha}_\mathit{min}-t\bo{v}(0,1) & \bo{\alpha}_4(t) &= \bo{\alpha}_b-t\bo{v}(1, s_\mathit{out}),\nonumber
\end{align}
where $\bo{\alpha}_f = \bo{\alpha}_1(\tau_1)$ and $\bo{\alpha}_b = \bo{\alpha}_3(\tau_3)$.
Phases $2$ and $3$ disappear when $s_\mathit{in} = s_\mathit{out} = 1$ because the corresponding time spent in these phases is $\tau_E(1) = 0$.
Solving for the overall rotation obtained via \eqref{eq:theta} we get
\begin{equation}
\theta(s_\mathit{in},s_\mathit{out})=\theta_\mathit{dyn}(s_\mathit{in},s_\mathit{out})+\theta_\mathit{geo}(s_\mathit{in},s_\mathit{out}),\label{eq:maxtheta}
\end{equation}
where the components are
\begin{align}
\theta_\mathit{dyn}(s_\mathit{in},s_\mathit{out}) &= L\left[\sum_{i=1}^4 \int_0^{\tau_i}I^{-1}\big(\bo{\alpha}_i(t;s_\mathit{in},s_\mathit{out})\big)\,\mathrm{d}t + I^{-1}(\bo{\alpha}_\mathit{min})T_\mathit{pike}\right]\nonumber\\
\theta_\mathit{geo}(s_\mathit{in},s_\mathit{out}) &= \sum_{i=1}^4 \int_0^{\tau_i}\bo{F}\big(\bo{\alpha}_i(t;s_\mathit{in},s_\mathit{out})\big)\cdot\bo{\dot{\alpha}}_i(t;s_\mathit{in},s_\mathit{out})\,\mathrm{d}t.\nonumber
\end{align}
The above components can be further simplified by substituting in \eqref{eq:Tair} to eliminate $T_\mathit{pike}$, and combining the shape change segments $\bo{\alpha}(t) = \bigcup_{i=1}^4 \bo{\alpha}_i(t)$ so that $t$ runs from zero to $T_\mathit{air}$ for the complete dive. This then gives
\begin{align}
\theta_\mathit{dyn} &= L\left[\int_0^{T_\mathit{air}}I^{-1}(\bo{\alpha})\,\mathrm{d}t -I^{-1}(\bo{\alpha}_\mathit{min})T_\Sigma + I^{-1}(\bo{\alpha}_\mathit{min})T_\mathit{air}\right]\label{eq:thetadyn}\\
\theta_\mathit{geo} &= \int_0^{T_\mathit{air}} F(\bo{\alpha})\cdot\dot{\bo{\alpha}}\,dt = \iint_A B(\bo{\alpha})\,d\tilde{A},
\end{align}
where the arguments $t, s_\mathit{in}$ and $s_\mathit{out}$ have been suppressed to avoid clutter. We see in \eqref{eq:thetadyn} that the constant $T_\mathit{air}$ provides an overall increase in $\theta_\mathit{dyn}$, but otherwise plays no role in the optimisation strategy.
To obtain maximal rotation there is competition between enclosing a region of large $B$ and keeping $I$ small for as long as possible.
The angular momentum $L$ is an important parameter:
decreasing $L$ reduces the contribution from $\theta_\mathit{dyn}$ while $\theta_\mathit{geo}$ is unaffected, thus the geometric phase contribution has a greater impact on $\theta$ when $L$ is small. The essential competition in the optimisation of this model comes from the extra time $T_\Sigma$ spent enlarging the loop around regions of large $B$ (which increases the geometric phase), time which is taken away from the pike position (which decreases the dynamic phase).
Our main result is that for a fixed total time $T_\mathit{air}$ and a moderate $L$ the total amount of somersault can be improved by using a shape change that has a different motion into pike than out of pike, and hence generates a geometric phase. The detailed results are as follows.
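Before presenting them, we note that the optimisation over $(s_\mathit{in}, s_\mathit{out})$ is only two-dimensional and can be carried out with a simple grid search, as in the sketch below; the function \texttt{total\_rotation}, which would implement \eqref{eq:maxtheta} for the four-piece trajectory described above, is a placeholder.
\begin{verbatim}
import numpy as np

def optimise_speeds(total_rotation, L, n=101):
    """Brute-force search for the (s_in, s_out) pair maximising total rotation."""
    s_grid = np.linspace(0.0, 1.0, n)
    best_theta, best_pair = -np.inf, None
    for s_in in s_grid:
        for s_out in s_grid:
            theta = total_rotation(s_in, s_out, L)
            if theta > best_theta:
                best_theta, best_pair = theta, (s_in, s_out)
    return best_theta, best_pair
\end{verbatim}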
Figure \ref{fig:s} shows the optimal $\{s_\mathit{in}, s_\mathit{out}\}$ that maximises $\theta$ for different values of $L$. When $L=0$ this implies $\theta_\mathit{dyn}=0$, meaning the rotation is purely governed by $\theta_\mathit{geo}$. So by choosing $\{s_\mathit{in}, s_\mathit{out}\} = \{0,0\}$, the geometric phase is maximised and so too is the overall rotation.
The case $L=0$ corresponds to the problem of the falling cat reorienting itself: rotation can be generated without angular momentum via a shape change. As $L$ increases the dynamic phase becomes proportionally larger, making it more important to enter pike position earlier, and hence $\{s_\mathit{in}, s_\mathit{out}\}\rightarrow\{1,1\}$ as $L$ gets larger. The limiting point $\{s_\mathit{in}, s_\mathit{out}\} = \{1,1\}$ occurs when $L=193.65$, where beyond this point maximum overall rotation is achieved by transitioning to pike position as fast as possible, as demonstrated in Figure \ref{fig:fastest}. For comparison, Figure \ref{fig:slowest} shows the optimal shape change trajectory when $L=30$.
When optimising $\theta$ in \eqref{eq:maxtheta} for a typical planar somersault with $L=120$, we see in Figure \ref{subfig:s120} that $\{s_\mathit{in}, s_\mathit{out}\}=\{1, 0.8593\}$ yields the maximal rotation, whose shape change trajectory is illustrated in Figure \ref{fig:optimal}. When compared to the dive with $\{s_\mathit{in}, s_\mathit{out}\}=\{1, 1\}$, the gain in overall rotation is $0.0189$ radians or $1.0836^\circ$. We see that in the optimal dive the limbs move at maximum speed into pike position, but when leaving the pike the arms move slower, thus providing a geometric phase contribution that exceeds the dynamic phase contribution lost by $1.0836^\circ$. This behaviour is due to the take-off and pike shapes being $\bo{\alpha}_\mathit{max}$ and $\bo{\alpha}_\mathit{min}$, but had we chosen different shapes then the observed result may differ. The dynamic phase benefits from being at (or close to) the minimum moment of inertia $I_\mathit{min}$ located at $\bo{\alpha}_\mathit{min}$, whereas the geometric phase favours enclosing a region with large magnitudes of the magnetic field $B$, where the absolute maximum is $0.3058$ occurring at $(-1.3148, -3.0043)$.
Repeating the same computation with $L=30$ yields the optimal shape change trajectory shown in Figure \ref{fig:slowest}, and comparing this with the $\{s_\mathit{in}, s_\mathit{out}\}=\{1, 1\}$ dive yields an additional $0.2327$ radians or $13.3354^\circ$ in rotation, which is more significant than the $1.0836^\circ$ found for $L=120$. Although the addition of $1^\circ$ is irrelevant in practice, the model used is extremely simple and these results should only be considered as a proof of principle. We hope that by using more realistic models with asymmetric movements, the additional rotation obtained via geometric phase optimisation will yield a small but important contribution that transforms a failed dive into a successful one. The principles here are not limited to planar somersaults, but can also be utilised in other fields such as robotics and space aeronautics,
where the benefit of the geometric phase will be large whenever the angular momentum is small.
\section*{Acknowledgement}
This research was supported in part by the Australian Research Council through the Linkage Grant LP100200245 "Bodies in Space" in collaboration with the New South Wales Institute of Sport.
\section{A null model for competition dynamics}
We first introduce the limiting case of an \textit{ideal competition}, which provides a useful tool by which to identify and quantify interesting deviations within real data, and to generate hypotheses as to what underlying processes might produce them. Although we describe this model in terms of two teams accumulating points, it can in principle be generalized to other forms of competition.
In an ideal competition, events unfold on a perfectly neutral or ``level'' playing field, in which there are no environmental features that could give one side a competitive advantage over the other~\cite{merritt2013environmental}. Furthermore, each side is perfectly skilled, i.e., they possess complete information both about the state of the game, e.g., the position of the ball, the location of the players, etc.\ and the set of possible strategies, their optimum responses, and their likelihood of being employed. This is an unrealistic assumption, as real competitors are imperfectly skilled, and possess both imperfect information and incomplete strategic knowledge of the game. However, increased skill generally implies improved performance on these characteristics, and the limiting case would be perfect skill. Finally, each side exhibits a slightly imperfect ability to execute any particular chosen strategy, which captures the fact that no side can control all variables on the field. In other words, two perfectly skilled teams competing on a level playing field will produce scoring events by chance alone, e.g., a slight miscalculation of velocity, a fumbled pass, shifting environmental variables like wind or heat, etc.
\begin{table*}[t!]
\begin{center}
\begin{tabular}{ll|cc|cc}
sport & abbrv. & seasons & teams & competitions & scoring events \\ \hline
Football (college) & CFB & 10, 2000--2009 & 486 & 14,588 & \hspace{0.7em}120,827 \\
Football (pro) & NFL & 10, 2000--2009 & \hspace{0.5em}31 & \hspace{0.5em}2,654 & \hspace{1.2em}19,476 \\
Hockey (pro) & NHL & 10, 2000--2009 & \hspace{0.5em}29 & 11,813 & \hspace{1.2em}44,989 \\
Basketball (pro) & NBA & \hspace{0.5em}9, 2002--2010 & \hspace{0.5em}31 & 11,744 & 1,080,285 \\
\end{tabular}
\end{center}
\caption{Summary of data for each sport, including total number of seasons, teams, competitions, and scoring events.}
\label{tab:data}
\end{table*}
An ideal competition thus eliminates all of the environmental, player, and strategic heterogeneities that normally distinguish and limit a team. The result, particularly from the spectator's point of view, is a competition whose dynamics are fundamentally unpredictable. Such a competition would be equivalent to a simple stochastic process, in which scoring events arrive randomly, via a Poisson process with rate $\lambda$, points are awarded to each team with equal probability, as in a fair Bernoulli process with parameter $c=1/2$, and the number of those points is an iid random variable from some sport-specific distribution.
Mathematically, let $S_r(t)$ and $S_b(t)$ denote the cumulative scores of teams $r$ and $b$ at time $t$, where $ 0 \le t \le T$ represents the game clock. (For simplicity, we do not treat overtime and instead let the game end at $t=T$.) The probability that $S_r$ increases by $k$ points at time $t$ is equal to the joint probability of observing an event worth $k$ points, scored by team $r$ at time $t$. Assuming independence, this probability is
\begin{equation}
\Pr(\Delta S_r(t) = k) = \Pr(\textrm{event at $t$})\Pr(\textrm{$r$ scores})\Pr(\textrm{points $=k$}) \enspace .
\end{equation}
The evolution of the difference in these scores thus follows a finite-length unbiased random walk on the integers, moving left or right with equal probability, starting at $\Delta S=0$ at $t=0$.
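A direct simulation of this null model is straightforward; the sketch below draws the number of events from a Poisson distribution with mean $\lambda T$, assigns each event to a team by a fair coin flip, and draws point values iid from a user-supplied sport-specific distribution (the point-value probabilities in the example are placeholders, not fitted quantities).
\begin{verbatim}
import numpy as np

def simulate_ideal_game(lam, T, point_values, point_probs, rng=None):
    """One realisation of the ideal-competition null model.

    lam          : scoring-event rate (events per second)
    T            : length of regulation play in seconds
    point_values : possible point values of a single scoring event
    point_probs  : their probabilities (sport-specific)
    Returns event times and the score difference S_r - S_b after each event.
    """
    rng = rng or np.random.default_rng()
    n_events = rng.poisson(lam * T)
    times = np.sort(rng.uniform(0.0, T, size=n_events))
    team_r = rng.random(n_events) < 0.5                 # fair Bernoulli, c = 1/2
    points = rng.choice(point_values, size=n_events, p=point_probs)
    score_diff = np.cumsum(np.where(team_r, points, -points))
    return times, score_diff

# e.g. a basketball-like game (placeholder point-value distribution)
times, diff = simulate_ideal_game(0.032, 2880, [1, 2, 3], [0.2, 0.6, 0.2])
\end{verbatim}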
Real competitions will deviate from this ideal because they possess various non-ideal features. The type and size of such deviations are evidence for competitive mechanisms that drive the scoring dynamics away from the ideal.
\section{Scoring event data}
Throughout our analyses, we utilize a comprehensive data set of all points scored in league games of consecutive seasons of college-level American football (NCAA Divisions 1--3, 10 seasons; 2000--2009), professional American football (NFL, 10 seasons; 2000--2009), professional hockey (NHL, 10 seasons; 2000--2009), and professional basketball (NBA, 9 seasons, 2002--2010).%
\footnote{Data provided by STATS LLC, copyright 2014.}
Each scoring event includes the time at which the event occurred, the player and corresponding team that won the event, and the number of points it was worth. From these, we extract all scoring events that occurred during regulation time (i.e., we exclude all overtime events), which account for 99\% or more of scoring events in each sport, and we combine events that occur at the same second of game time. Table~\ref{tab:data} summarizes these data, which encompass more than 1.25 million scoring events across more than 40,000 games.
A brief overview of each sport's primary game mechanics is provided in Additional File~1 as Appendix~A. In general, games in these sports are competitions between two teams of fixed size, and points are accumulated each time one team places the ball or puck in the opposing team's goal. Playing fields are flat, featureless surfaces. Gameplay is divided into three or four scoring periods within a maximum of 48 or 60 minutes (not including potential overtime). The team with the greatest score at the end of this time is declared the winner.
\section{Game tempo}
A game's ``tempo'' is the speed at which scoring events occur over the course of play. Past work on the timing of scoring events has largely focused on hockey, soccer and basketball~\cite{thomas2007inter,heuer:etal:2010,gabel:redner:2012}, with little work examining football or contrasting patterns across sports. However, these studies show strong evidence that game tempo is well approximated by a homogeneous Poisson process, in which scoring events occur at each moment in time independently with some small and roughly constant probability.
Analyzing the timing of scoring events across all four of our sports, we find that the Poisson process is a remarkably good model of game tempo, yielding predictions that are in good or excellent agreement with a variety of statistical measures of game play. Furthermore, these results confirm and extend previous work~\cite{ayton2004hot,gabel:redner:2012}, while contrasting with others~\cite{yaari:david:2012,yaari:eisenman:2011}, showing little or no evidence for the popular belief in ``momentum'' or ``hot hands,'' in which scoring once increases the probability of scoring again very soon. However, we do find some evidence for modest non-Poissonian patterns in tempo, some of which are common to all four sports.
\begin{table}[b!]
\begin{center}
\begin{tabular}{l|cc|cc}
sport & $\hat{\lambda}$ & $T$ & $\hat{\lambda} T$ & $1/\hat{\lambda}$ \\
& [events / s] & [s] & [events / game] & [s / event] \\ \hline
NFL & 0.00204(1) & 3600 & \hspace{0.5em}7.34 & 490.2 \\
CFB & 0.00230(1) & 3600 & \hspace{0.5em}8.28 & 434.8 \\
NHL & 0.00106(1) & 3600 & \hspace{0.5em}3.81 & 943.4 \\
NBA & 0.03194(5) & 2880 & 91.99 & \hspace{0.5em}31.3
\end{tabular}
\end{center}
\caption{Tempo summary statistics for each sport, along with simple derived values for the expected number of events per game and seconds between events. Parenthetical values indicate standard uncertainty in the final digit.}
\label{table:tempo:balance}
\end{table}
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.99]{Figure1.pdf}
\caption{\textbf{Scoring events per game.} Empirical distributions for the number of scoring events per game, along with the estimated Poisson model with rate $\lambda T$ (dashed).}
\label{fig:timing:events}
\end{figure*}
\subsection{The Poisson model of tempo}
A Poisson process is fully characterized by a single parameter $\lambda$, representing the probability that an event occurs, or the expected number of events, per unit time. In each sport, game time is divided into seconds and there are $T$ seconds per game (see Table~\ref{table:tempo:balance}). For each sport, we test this model in several ways: we compare the empirical and predicted distributions for the number of events per game and for the time between consecutive scoring events, and we examine the two-point correlation function for these inter-event times.
Under a Poisson model~\cite{boas:2006}, the number of scoring events per game follows a Poisson distribution with parameter $\lambda T$, and the maximum likelihood estimate of $\lambda$ is the average number of events observed per game divided by the game duration $T$ in seconds (which varies by sport). Furthermore, the time between consecutive events follows a simple geometric (discrete exponential) distribution, with mean $1/\lambda$, and the two-point correlation between these delays is zero at all time scales.
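A minimal sketch of these model checks, assuming each game is represented as a list of event times in seconds; the numerical values are NFL-like figures from Table~\ref{table:tempo:balance} and are used purely for illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import geom, poisson

def fit_tempo(event_times_per_game, T):
    # MLE of the per-second scoring rate: total events / total seconds observed.
    n_events = sum(len(g) for g in event_times_per_game)
    return n_events / (len(event_times_per_game) * T)

# Predicted distributions under the Poisson model:
lam, T = 0.00204, 3600                                      # NFL-like values
p_events_per_game = poisson.pmf(np.arange(0, 25), lam * T)  # events per game
p_gap = geom.pmf(np.arange(1, 2001), lam)                   # seconds between events
\end{verbatim}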
For the number of events per game, we find generally excellent agreement between the Poisson model and the data for every sport (Figure~\ref{fig:timing:events}). However, there are some small deviations, which suggest second-order, non-Poissonian processes, which we investigate below. Deviations are greatest in NHL games, whose distribution is slightly broader than predicted, underproducing games with 3 events and overproducing games with 0 or with 8 or more events. Similarly, CFB games have a slight excess of games with 9 events, and NBA games show slightly more variation than expected in the number of events per game, with fewer games close to the average (92.0 events) than predicted. In contrast, NFL games exhibit slightly less variance than expected, with more games close to the average (7.3 events) than predicted.
For the time between consecutive scoring events within a game, or the inter-arrival time distribution, we again find excellent agreement between the Poisson model and the data in all sports (Figure~\ref{fig:timing:iat}). That being said, in CFB, NFL and NBA games, there are slightly fewer gaps of the minimum size than predicted by the model. This indicates a slight dispersive effect in the timing of events, perhaps caused by the time required to transport the ball some distance before a new event may be generated. In contrast, NHL games produce roughly as many short gaps as expected, but more intermediate gaps and fewer very long gaps than would occur were events purely Poissonian.
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.99]{Figure2.pdf}
\caption{\textbf{Time between scoring events.} Empirical distribution of time between consecutive scoring events, shown as the complementary cdf, along with the estimated distribution from the Poisson model (dashed). Insets show the correlation function for inter-event times.}
\label{fig:timing:iat}
\end{figure*}
Finally, we calculate the two-point correlation function on the times between scoring events~\cite{box2011time},
\begin{equation}
C(n) = \left( \sum_k (t_k - \langle t \rangle)(t_{k+n}-\langle t \rangle) \right) \left/ \sum_k (t_k - \langle t\rangle)^2 \right. \enspace ,
\end{equation}
where $t_{k}$ is the $k$th inter-arrival time, $n$ indicates the gap between it and a subsequent event, and $\langle t \rangle$ is the mean time between events. If $C(n)$ is positive, short intervals tend to be followed by other short intervals (or, large intervals by large intervals), while a negative value implies alternation, with short intervals followed by long, or vice versa. Across all four sports, the correlation function is close or very close to zero for all values of $n$ (Figure~\ref{fig:timing:iat} insets), in excellent agreement with the Poisson process, which predicts $C(n)=0$ for all $n>0$, representing no correlation in the timing of events (a result also found by~\cite{gabel:redner:2012} in basketball). However, in CFB, NFL and NHL games, we find a slight negative correlation for very small values of $n$, suggesting a slight tendency for short intervals to be closely followed by longer ones, and vice versa.
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.99]{Figure3b.pdf}
\caption{\textbf{Game tempo.} Empirical probability of scoring events as a function of game time, for each sport, along with the mean within-sport probability (dashed line). Each distinct game period, demarcated by vertical lines, shows a common three-phase pattern in tempo.}
\label{fig:timing:rate}
\end{figure*}
\subsection{Common patterns in game tempo}
\label{sec:tempo}
Our results above provide strong support for a common Poisson-like process for modeling game tempo across all four sports. We also find some evidence for mild non-Poissonian processes, which we now investigate by directly examining the scoring rate as a function of clock time. Within each sport, we tabulate the fraction of games in which a scoring event (associated with any number of points) occurred in the $t$th second of gameplay.
Across all sports, we find that the tempo of events follows a common three-phase pattern within each distinct period of play (Figure~\ref{fig:timing:rate}). This pattern, which resembles an inverse sigmoid, is characterized by (i) an early phase of non-linearly increasing tempo, (ii) a middle phase of stable (Poissonian) or slightly increasing tempo, and (iii) an end phase of sharply increasing tempo. This pattern is also observed in certain online games~\cite{merritt2013environmental}, which have substantially different rules and are played in highly heterogeneous environments, suggesting a possibly fundamental generating mechanism for team-competitive systems.
\paragraph*{Early phase:\ non-linear increase in tempo.} When a period begins, players are in specific and fixed locations on the field, and the ball or puck is far from any team's goal. Thus, without regard to other aspects of the game, it must take some time for players to move out of these initial positions and to establish scoring opportunities. This would reduce the probability of scoring relative to the game average by limiting access to certain player-ball configurations that require time to set up. Furthermore, and potentially most strongly in the first of these phases (beginning at $t=0$), players and teams may still be ``warming up,'' in the sense of learning~\cite{thompson:2010} the capabilities and tendencies of the opposing team and players, and which tactics to deploy against the opposing team's choices. These behaviors would also reduce the probability of scoring by encouraging risk averse behavior in establishing and taking scoring opportunities.
We find evidence for both mechanisms in our data. Both CFB and NFL games exhibit short and modest-sized dips in scoring rates in periods 2 and 4, reflecting the fact that player and ball positions are not reset when the preceding quarters end, but rather gameplay in the new quarter resumes from its previous configuration. In contrast, CFB and NFL periods 1 and 3 show significant drops in scoring rates, and both of these quarters begin with a kickoff from fixed positions on the field. Similarly, NBA and NHL games exhibit strong but short-duration dips in scoring rate at the beginning of each of their periods, reflecting the fact that each period begins with a tossup or face-off, in which players are located in fixed positions on the court or rink. NBA and football games also exhibit some evidence of the ``warming up'' process, with the overall scoring rate being slightly lower in period 1 than in other equivalent periods. In contrast, NHL games exhibit a prolonged warmup period, lasting well past the end of the first period. This pattern may indicate more gradual within-game learning in hockey, perhaps as a result of the large diversity of on-ice player combinations caused by teams rotating their four ``lines'' of players every few minutes.
\paragraph*{Middle phase:\ constant tempo.} Once players have moved away from their initial locations and/or warmed up, gameplay proceeds fluidly, with scoring events occurring without any systematic dependence on the game clock. This produces a flat, stable or stationary pattern in the probability of scoring events. A slight but steady increase in tempo over the course of this phase is consistent with learning, perhaps as continued play sheds more light on the opposing team's capabilities and weaknesses, causing a progressive increase in scoring rate as that knowledge is accumulated and put into practice.
A stable scoring rate pattern appears in every period in NFL, CFB and NBA games, with slight increases observed in periods 1 and 2 in football, and in periods 2--4 in basketball. NHL games exhibit stable scoring rates in the second half of period 2 and throughout period 3. Within a given game, but across scoring periods, scoring rates are remarkably similar, suggesting little or no variation in overall strategies across the periods of gameplay.
\paragraph*{End phase:\ sharply increased tempo.} The end of a scoring period often requires players to reset their positions, and any effort spent establishing an advantageous player configuration is lost unless that play produces a scoring event. This impending loss-of-position will tend to encourage more risky actions, which serve to dramatically increase the scoring rate just before the period ends. The increase in scoring rate should be largest in the final period, when no additional scoring opportunities lie in the future. In some sports, teams may effectively slow the rate at which time progresses through game clock management (e.g., using timeouts) or through continuing play (at the end of quarters in football). This effectively compresses more actions than normal into a short period of time, which may also increase the rate, without necessarily adding more risk.
We find evidence mainly for the loss-of-position mechanism, but the rules of these games suggest that clock management likely also plays a role. Relative to the mean tempo, we find a sharply increased rate at the end of each sport's games, in agreement with a strong incentive to score before a period ends. (This increase indicates that a ``lolly-gag strategy,'' in which a leading team in possession intentionally runs down the clock to prevent the trailing team from gaining possession, is a relatively rare occurrence.) Intermediate periods in NFL, CFB and NBA games also exhibit increased scoring rates in their final seconds. In football, this increase is greatest at the end of period 2, rather than period 4. The increased rate at the ends of periods 1 and 3 in football is also interesting, as here the period's end does not reset the player configuration on the field, but rather teams switch goals. This likely creates a mild incentive to initiate some play before the period ends (which is allowed to finish, even if the game clock runs out). NHL games exhibit no discernible end-phase pattern in their intermediate periods (1 and 2), but show an enormous end-game effect, with the scoring rate growing to more than three times its game mean. This strong pattern may be related to the strategy in hockey of the losing team ``pulling the goalie,'' in which the goalie leaves their defensive position in order to increase the chances of scoring. Regardless of the particular mechanism, the end-phase pattern is ubiquitous.
In general, we find a common set of modest non-Poissonian deviations in game tempo across all four sports, although the vast majority of tempo dynamics continue to agree with a simple Poisson model.
\section{Game balance}
A game's ``balance'' is the relative distribution of scoring events (not points) among the teams. Perfectly balanced games, however, do not always result in a tie. In our model of competition, each scoring event is awarded to one team or the other by a Bernoulli process, and in the case of perfect balance, the probability is equal, at $c=1/2$. The expected fraction of scoring events won by a team is also $c=1/2$, and its distribution depends on the number of scoring events in the game. We estimate this null distribution by simulating perfectly balanced games for each sport, given the empirical distribution of scoring events per game (see Fig.~\ref{fig:timing:events}). Comparing the simulated distribution against the empirical distribution of $\hat{c}$ provides a measure of the true imbalance among teams, while controlling for the stochastic effects of events within games.
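The null distribution can be estimated with a short Monte Carlo routine such as the following sketch, where \texttt{events\_per\_game} is the empirical list of event counts per game for a given sport.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def null_balance_distribution(events_per_game, n_sim=100000):
    # c-hat under perfect balance (c = 1/2), conditioned on the empirical
    # distribution of scoring events per game.
    counts = rng.choice(np.asarray(events_per_game), size=n_sim)
    counts = counts[counts > 0]        # c-hat is undefined for scoreless games
    wins = rng.binomial(counts, 0.5)   # events won by one team under a fair coin
    return wins / counts
\end{verbatim}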
Across all four sports, we find significant deviations in this fraction relative to perfect balance. NFL, CFB and NHL games exhibited more variance than expected, while NBA games exhibited less. Within a game, scoring balance exhibits unexpected patterns. In particular, NBA games exhibit an unusual ``restoring force'' pattern, in which the probability of winning the next scoring event \textit{decreases} with the size of a team's lead (a pattern first observed by~\cite{gabel:redner:2012}). In contrast, NFL, CFB and NHL games exhibit the opposite effect, in which the probability of winning the next scoring event appears to increase with the size of the lead---a pattern consistent with a heterogeneous distribution of team skill.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.99]{Figure4.pdf}
\caption{\textbf{Game balance.} Smoothed distributions for the empirical fraction $\hat{c}$ of events won by a team, for each sport, and the predicted fraction under perfectly balanced scoring, given the empirical distribution of events per game (Fig.~\ref{fig:timing:events}). Modes at 1 and 0 indicate a non-trivial probability of one team winning or losing every event, which is more common when only a few events occur.}
\label{fig:scoring:balance}
\end{figure*}
\subsection{Quantifying balance}
The fraction of all events in the game that were won by a randomly selected team provides a simple
measure of the overall balance of a particular game in a sport. Let $r$ and $b$ index the two teams and let $E_{r}$ ($E_{b}$) denote the total number of events won by team $r$ (respectively $b$) in their game. The maximum likelihood estimator for a game's bias is simply the fraction $\hat{c} = E_r\left/ (E_r+E_b) \right.$ of all scoring events in the game won by $r$.
Tabulating the empirical distributions of $\hat{c}$ within each sport, we find that the most common outcome, in all sports, is $\hat{c}=1/2$, in agreement with the Bernoulli model. However, the distributions around this value deviate substantially from the form expected for perfect balance (Figure~\ref{fig:scoring:balance}), but not always in the same direction.
In CFB and NFL, the distributions of scoring balances are similar, but the shape for CFB is broader than for NFL, suggesting that CFB competitions are less balanced than NFL competitions. This is likely a result of the broader range of skill differences among teams at the college level, as compared to the professionals. Like CFB and NFL, NHL games also exhibit substantially more blowouts and fewer ties than expected, which is consistent with a heterogeneous distribution of team skills. Surprisingly, however, NBA games exhibit less variance in the final relative lead size than we expect for perfectly balanced games, a pattern we will revisit in the following section.
\subsection{Scoring while in the lead}
Although many non-Bernoulli processes may occur within professional team sports, here we examine only one: whether the size of a lead $L$, the difference in team scores or point totals, provides information about the probability of a team winning the next event. \cite{gabel:redner:2012} previously considered this question for scoring events and lead sizes within NBA games, but not other sports. Across all four of our sports, we tabulated the fraction of times the leading team won the next scoring event, given it held a lead of size $L$. This function is symmetric about $L=0$, where it passes through probability $p=1/2$, the point at which the identity of the leading team may change.
Examining the empirical scoring functions (Figure~\ref{fig:scoring:lead}), we find that the probability of scoring next varies systematically with lead size $L$. In particular, for CFB, NFL and NHL games, the probability appears to increase with lead size, while it decreases in NBA games. The effect of the negative relationship in NBA games is a kind of ``restoring force,'' such that leads of any size tend to shrink back toward a tied score. This produces a narrower distribution of final lead sizes than we would expect under Bernoulli-style competition, precisely as shown in Figure~\ref{fig:scoring:balance} for NBA games.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.99]{Figure5.pdf}
\caption{\textbf{Lead-size dynamics.} The probability of scoring as a function of a team's lead size, for each sport (football, hockey, and basketball), along with a linear least-squares fit ($p\le0.1$), indicating positive or negative correlations between scoring and a competition's score difference.}
\label{fig:scoring:lead}
\end{figure*}
Although the positive function for CFB, NFL and NHL games may superficially support a kind of ``hot hands'' or cumulative advantage-type mechanism, in which lead size tends to grow superlinearly over time, we do not believe this explains the observed pattern. A more plausible mechanism is a simple heterogeneous skill model, in which each team has a latent skill value $\pi_{r}$, and the probability that team $r$ wins a scoring event against $b$ is determined by a Bernoulli process with $c=\pi_{r}\left/ (\pi_{r}+\pi_{b})\right.$. (This model is identical to the popular Bradley-Terry model of win-loss records of teams~\cite{bradley1952rank}, except here we apply it to each scoring event within a game.)
For a broad class of team-skill distributions, this model produces a scoring function with the same sigmoidal shape seen here, and the linear pattern at $L=0$ is the result of averaging over the distribution of biases $c$ induced by the team skill distribution. The function flattens out at large $|L|$, approaching the value corresponding to the largest skill difference possible among the league's teams. This explanation is supported by the stronger correlation in CFB games ($+0.005$ probability per point in the lead) versus NFL games ($+0.002$ probability per point), as CFB teams are known to exhibit much broader skill differences than NFL teams, in agreement with our results above in Figure~\ref{fig:scoring:balance}.
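The following sketch illustrates how averaging over a heterogeneous skill distribution produces such a scoring function; the gamma-distributed skills and unit point values are assumptions chosen for illustration, not quantities fitted to our data.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def scoring_vs_lead(skills, n_games=10000, events_per_game=90):
    # Each event is a Bernoulli contest with c = pi_r / (pi_r + pi_b);
    # tabulate Pr(r scores next | r's current lead L), averaged over games.
    won, seen = {}, {}
    for _ in range(n_games):
        pi_r, pi_b = rng.choice(skills, size=2, replace=False)
        c = pi_r / (pi_r + pi_b)
        lead = 0
        for _ in range(events_per_game):
            r_scores = rng.random() < c
            seen[lead] = seen.get(lead, 0) + 1
            won[lead] = won.get(lead, 0) + int(r_scores)
            lead += 1 if r_scores else -1     # unit point values, for simplicity
    return {L: won[L] / seen[L] for L in sorted(seen)}

curve = scoring_vs_lead(rng.gamma(shape=2.0, scale=1.0, size=500))
\end{verbatim}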
NBA games, however, present a puzzle, because no distribution of skill differences can produce a negative correlation under this latent-skill model. \cite{gabel:redner:2012} suggested this negative pattern could be produced by possession of the ball changing after each scoring event, or by the leading team ``coasting'' and thereby playing below their true skill level. However, the change-of-possession rule also exists in CFB and NFL games (play resumes with a faceoff in NHL games), but only NBA games exhibit the negative correlation. Coasting could occur for psychological reasons, in which losing teams play harder, and leading teams less hard, as suggested by~\cite{berger:2011}. Again, however, the absence of this pattern in other sports suggests that the mechanism is not psychological.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.99]{Figure6b.pdf}
\caption{\textbf{Modeling lead-size dynamics.} Comparison of empirical lead-size variation as a function of clock time with those produced by Bernoulli (B) or Markov (M) tempo or balance models, for each sport.}
\label{fig:sim:lead}
\end{figure*}
A plausible alternative explanation is that NBA teams employ various strategies that serve to change the ratio $c=\pi_{r}\left/ (\pi_{r}+\pi_{b})\right.$ as a function of lead size. For instance, when a team is in the lead, they often substitute out their stronger and more offensive players, e.g., to allow them to rest or avoid injury, or to manage floor spacing or skill combinations. When a team is down by an amount that likely varies across teams, these players are put back on the court. If both teams pursue such strategies, then the effective ratio $c$ will vary inversely with lead size such that the leading team becomes effectively weaker compared to the non-leading team. In contrast to NBA teams, teams in CFB, NFL and NHL seem less able to pursue such a strategy. In football, substitutions are relatively uncommon, implying that $\pi_{r}$ should not vary much over the course of a game. In hockey, each team rotates through most of its players every few minutes, which limits the ability for high- or low-skilled players to effectively change $\pi_{r}$ over the course of a game.
\section{Modeling lead-size dynamics}
\label{sec:lead:size:dynamics}
The previous insights identify several basic patterns in scoring tempo and balance across sports. However, we still lack a clear understanding of the degree to which any of these patterns is necessary to produce realistic scoring dynamics. Here, we investigate this question by combining the identified patterns within a generative model of scoring over time, and test which combinations produce realistic dynamics in lead sizes. In particular, we consider two models of tempo and two models of balance. For each of the four pairs of tempo and balance models for each sport, we generate via Monte Carlo a large number of games and measure the resulting variation in lead size as a function of the game clock, which we then compare to the empirical pattern.
Our two scoring tempo models are as follows. In the first (Bernoulli) model, each second of time produces an event with the empirical probability observed for that second across all games (shown in Figure~\ref{fig:timing:rate}). In the second (Markov), we draw an inter-arrival time from the empirical distribution of such gaps (shown in Figure~\ref{fig:timing:iat}), advance the game clock by that amount, and generate a scoring event at that clock time.
Our two balance models are as follows. In the first (Bernoulli) model, for each match we draw a uniformly random value $c$ from the empirical distribution of scoring balances (shown in Figure~\ref{fig:scoring:balance}) and for each scoring event, the points are won by team $r$ with that probability and by team $b$ otherwise. In the second (Markov), a scoring event is awarded to the leading team with the empirically estimated probability for the current lead size $L$ (shown in Figure~\ref{fig:scoring:lead}). Once a scoring event is generated and assigned, that team's score is incremented by a point value drawn iid from the empirical distribution of point values per scoring event for the sport (see Appendix~\ref{appendix:points:per:event}).
The four combinations of tempo and balance models thus cover our empirical findings for patterns in the scoring dynamics of these sports. The simpler models (called Bernoulli) represent dynamics with no memory, in which each event is an iid random variable, albeit drawn from a data-driven distribution. The more complicated models (called Markov) represent dynamics with some memory, allowing past events to influence the ongoing gameplay dynamics. In particular, these are first-order Markov models, in which only the events of the most recent past state influence the outcome of the random variable at the current state.
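As a concrete example, a single synthetic game under the Bernoulli tempo model combined with the Markov balance model can be generated as in the sketch below; \texttt{p\_event\_at} and \texttt{p\_r\_scores\_given\_lead} stand in for the empirical per-second event probabilities and the empirical scoring-versus-lead function, respectively.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_game(p_event_at, p_r_scores_given_lead, point_values, point_probs):
    # Bernoulli tempo: one trial per second, with the empirical per-second rate.
    # Markov balance: scoring probability conditioned on the current lead.
    lead, trajectory = 0, []
    for p_t in p_event_at:                    # p_event_at has length T
        if rng.random() < p_t:
            c = p_r_scores_given_lead(lead)   # stand-in for the empirical curve
            k = rng.choice(point_values, p=point_probs)
            lead += k if rng.random() < c else -k
        trajectory.append(lead)
    return trajectory                         # lead size versus clock time
\end{verbatim}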
Generating 100,000 competitions under each combination of models for each sport, we find a consistent pattern across sports (Figure~\ref{fig:sim:lead}): the Markov model of game tempo provides little improvement over the Bernoulli model in capturing the empirical pattern of lead-size variation, while the Markov model for balance provides a significant improvement over the Bernoulli model. In particular, the Markov balance model generates gameplay dynamics in very good agreement with the empirical patterns.
That being said, some small deviations remain. For instance, the Markov model slightly overestimates the lead-size variation in the first half, and slightly underestimates it in the second half of CFB games. In NFL games, it provides a slight overestimate in the first half, but then converges on the empirical pattern in the second half. NHL games exhibit the largest and most systematic deviation, with the Markov model producing more variation than observed, particularly in the game's second half. However, it should be noted that the low-scoring nature of NHL games means that what appears to be a visually large overestimate here (Fig.~\ref{fig:sim:lead}) is small when compared to the deviations seen in the other sports. NBA games exhibit a similar pattern to CFB games, but the crossover point occurs at the end of period 3, rather than at period 2. These modest deviations suggest the presence of still other non-ideal processes governing the scoring dynamics, particularly in NHL games.
We emphasize that the Markov model's accuracy for CFB, NFL and NHL games does not imply that individual matches follow this pattern of favoring the leader. Instead, the pattern provides a compact and efficient summary of scoring dynamics conditioned on unobserved characteristics like team skill. Our model generates competition between two featureless teams, and the Markov model provides a data-driven mechanism by which some pairs of teams may behave as if they have small or large differences in latent skill. It remains an interesting direction for future work to investigate precisely how player and team characteristics determine team skill, and how team skill impacts scoring dynamics.
\section{Predicting outcomes from gameplay}
The accuracy of our generative model in the previous section suggests that it may also produce accurate predictions of the game's overall outcome, after observing only the events in the first $t$ seconds of the game. In this section, we study the predictability of game outcome using the Markov model for scoring balance, and compare its accuracy to the simple heuristic of guessing the winner to be the team currently in the lead at time $t$. Thus, we convert our Markov model into an explicit Markov chain on the lead size $L$, which allows us to simulate the remaining $T-t$ seconds conditioned on the lead size at time $t$. For concreteness, we define the lead size $L$ relative to team $r$, such that $L<0$ implies that $b$ is in the lead.
The Markov chain's state space is the set of all possible lead sizes (score differences between teams $r$ and $b$), and its transition matrix $P$ gives the probability that a scoring event changes a lead of size $L$ to one of size $L'$. If $r$ wins the event, then $L'=L+k$, where $k$ is the event's point value, while if $b$ wins the event, then $L'=L-k$. Assuming the value and winner of the event are independent, the transition probabilities are given by
\begin{align}
P_{L,L+k} & = \Pr(\textrm{$r$ scores $ |\, L$}) \Pr(\textrm{point value $= k$}) \nonumber \\
P_{L,L-k} & = (1-\Pr(\textrm{$r$ scores $ |\, L$})) \Pr(\textrm{point value $= k$}) \enspace , \nonumber
\end{align}
where, for the particular sport, we use the empirical probability function for scoring as a function of lead size (Figure~\ref{fig:scoring:lead}), from $r$'s perspective, and the empirical distribution (Appendix~\ref{appendix:points:per:event}) for the point value.
The probability that team $r$ is the predicted winner depends on the probability distribution over lead sizes at time $T$. Because scoring events are conditionally independent, this distribution is given by $P^{n}$, where $n$ is the expected number of scoring events in the remaining clock time $T-t$, multiplied by a vector $S_{t}$ representing the lead size observed at time $t$. Given a choice of time $t$, we estimate $n = \sum_{w=t}^T \Pr(\textrm{event}\,|\,w)$, which is the expected number of events given the empirical tempo function (Fig.~\ref{fig:timing:rate}, also the Bernoulli tempo model in Section~\ref{sec:lead:size:dynamics}) and the remaining clock time.
We then convert this distribution, which we calculate numerically, into a prediction by summing probabilities for each of three outcomes: $r$ wins (states $L>0$), $r$ ties $b$ (state $L=0$), and $b$ wins (states $L<0$). In this way, we capture the information contained in the magnitude of the current lead, which is lost when we simply predict that the current leader will win, regardless of lead size.
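A sketch of this construction is given below; \texttt{p\_r\_scores} and \texttt{point\_pmf} stand in for the empirical scoring-versus-lead function and point-value distribution, the lead is truncated at an arbitrary bound \texttt{L\_max}, and the expected number of remaining events is rounded to the nearest integer.
\begin{verbatim}
import numpy as np

def transition_matrix(p_r_scores, point_pmf, L_max=60):
    # States are lead sizes L in [-L_max, L_max] (leads clipped at the edges).
    size = 2 * L_max + 1
    P = np.zeros((size, size))
    for i, L in enumerate(range(-L_max, L_max + 1)):
        for k, pk in point_pmf.items():
            up = int(np.clip(L + k, -L_max, L_max)) + L_max
            dn = int(np.clip(L - k, -L_max, L_max)) + L_max
            P[i, up] += p_r_scores(L) * pk
            P[i, dn] += (1.0 - p_r_scores(L)) * pk
    return P

def outcome_probabilities(P, lead_now, n_events, L_max=60):
    # Distribution over final leads after ~n_events more events,
    # starting from the lead observed at time t.
    state = np.zeros(P.shape[0])
    state[lead_now + L_max] = 1.0
    final = state @ np.linalg.matrix_power(P, int(round(n_events)))
    L = np.arange(-L_max, L_max + 1)
    return final[L > 0].sum(), final[L == 0].sum(), final[L < 0].sum()
\end{verbatim}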
\begin{figure*}[t]
\centering
\includegraphics[scale=0.99]{Figure7b.pdf}
\caption{\textbf{Predicting game outcome from dynamics.} Probability of correctly predicting the game winner (AUC) using the Markov chain model and the ``leader wins'' model for football, hockey, and basketball. The vertical line shows the average number of scoring events per game, and the highlighted region shows the middle 50\% of games around this value.}
\label{fig:pred}
\end{figure*}
We test the accuracy of the Markov chain using an out-of-sample prediction scheme, in which we repeatedly divide each sport's game data into a training set of a randomly selected 3/4 of all games and a test set of the remaining 1/4. From each training set, we estimate the empirical functions used in the model and compute the Markov chain's transition matrix. Then, across the games in each test set, we measure the mean fraction of times the Markov chain's prediction is correct. This fraction is equivalent to the popular AUC statistic~\cite{bradley:1997}, where AUC$\,=0.5$ denotes an accuracy no better than guessing.
Instead of evaluating the model at some arbitrarily selected time, we investigate how outcome predictability evolves over time. Specifically, we compute the AUC as a function of the cumulative number of scoring events in the game, using the empirically observed times and lead sizes in each test-set game to parameterize the model's predictions. When the number of cumulative events is small, game outcomes should be relatively unpredictable, and as the clock runs down, predictability should increase. To provide a reference point for the quality of these results, we also measure the AUC over time for a simple heuristic of predicting the winner as the team in the lead after the most recent scoring event.
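The evaluation loop itself is straightforward; the sketch below assumes each game is a record carrying its true winner, and \texttt{fit\_model} / \texttt{predict\_winner} are placeholders for estimating the empirical functions and querying the Markov chain (or the leader-wins heuristic).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def out_of_sample_accuracy(games, fit_model, predict_winner, n_splits=10):
    # Repeated random 3/4 train / 1/4 test splits; mean fraction of test games
    # whose winner is predicted correctly (the AUC statistic reported here).
    accs = []
    for _ in range(n_splits):
        idx = rng.permutation(len(games))
        cut = (3 * len(games)) // 4
        model = fit_model([games[i] for i in idx[:cut]])
        test = [games[i] for i in idx[cut:]]
        accs.append(np.mean([predict_winner(model, g) == g["winner"]
                             for g in test]))
    return float(np.mean(accs))
\end{verbatim}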
Across all sports, we find that game outcome is highly predictable, even after only a small number of scoring events (Figure~\ref{fig:pred}). For instance, the winner of CFB and NFL games can be accurately chosen more than 60\% of the time after only a single scoring event, and this rate increases to more than 80\% by three events. NHL games are even more predictable, in part because they are very low-scoring games, and the winner may be accurately chosen roughly 80\% of the time after the first event. The fast rise of the AUC curve as a function of continued scoring in these sports likely reflects the role played by differences in latent team skill in producing large leads, which make outcomes more predictable (Figure~\ref{fig:scoring:lead}).
In contrast, NBA games are the least predictable, requiring more than 40 events before the AUC exceeds 80\%. This likely reflects the role of the ``restoring force'' (Figure~\ref{fig:scoring:lead}), which tends to make NBA games more unpredictable than we would expect from a simple model of scoring, and significantly more unpredictable than CFB, NFL or NHL games.
In all cases, the Markov chain substantially outperforms the ``leader wins'' heuristic, even in the low-scoring NHL games. This occurs in part because small leads are less informative than large leads for guessing the winner, and the heuristic does not distinguish between these.
\section{Discussion}
Although there is increasing interest in quantitative analysis and modeling in sports~\cite{arkes2011finally,bourbousson:seve:mcgarry:2012,de2011basketball,everson2008composite,neiman2011reinforcement}, many questions remain about what patterns or principles, if any, cut across different sports, what basic dynamical processes provide good models of within-game events, and the degree to which the outcomes of games may be predicted from within-game events alone.
The comprehensive database of scoring events we use here to investigate such questions is unusual for its scope (every league game over 9--10 seasons), its breadth (covering four sports), and its depth (timing and attribution information on every point in every game). As such, it offers a number of new opportunities to study competition in general, and sports in particular.
Across college (American) football (CFB), professional (American) football (NFL), professional hockey (NHL) and professional basketball (NBA) games, we find a number of common patterns in both the tempo and balance of scoring events. First, the timing of events in all four sports is remarkably well-approximated by a simple Poisson process (Figures~\ref{fig:timing:events} and~\ref{fig:timing:iat}), in which each second of gameplay produces a scoring event independently, with a probability that varies only modestly over the course of a game (Figure~\ref{fig:timing:rate}). These variations, however, follow a common three-phase pattern, in which a relatively constant rate is depressed at the beginning of a scoring period, and increases dramatically in the final few seconds of the period. The excellent agreement with a Poisson process implies that teams employ very few strategically-chosen chains of events or time-sensitive strategies in these games, except in a period's end-phase, when the incentive to score is elevated. These results provide further support to some past analyses~\cite{ayton2004hot,gabel:redner:2012}, while contrasting with others~\cite{yaari:david:2012,yaari:eisenman:2011}, showing no evidence for the popular notion of ``hot hands,'' in which scoring once increases the chance of scoring again soon.
Second, we find a common pattern of imbalanced scoring between teams in CFB, NFL and NHL games, relative to an ideal model in which teams are equally likely to win each scoring event (Figure~\ref{fig:scoring:balance}). CFB games are much less balanced than NFL games, suggesting that the transition from college to professional tends to reduce the team skill differences that generate lopsided scoring. This reduction in variance is likely related both to the fact that only the stronger college-level players successfully move up into the professional teams, and to the way the NFL Draft tends to distribute the stronger new players to the weaker teams.
Furthermore, we find that all three of these sports exhibit a pattern in which lead sizes tend to increase over time. That is, the probability of scoring while in the lead tends to be larger the greater the lead size (Figure~\ref{fig:scoring:lead}), in contrast to the ideal model in which lead sizes increase or decrease with equal probability. As with overall scoring balance, the size of this effect in CFB games is much larger (about 2.5 times larger) than in NFL games, and is consistent with a reduction in the variance of the distribution of skill across teams. That is, NFL teams are generally closer in team skill than CFB teams, and this produces gameplay that is much less predictable. Both of these patterns are consistent with a kind of Bradley-Terry-type model in which each scoring event is a contest between the teams.
NBA games, however, present the opposite pattern: team scores are much closer than we would expect from the ideal model, and the probability of scoring while in the lead effectively \textit{decreases} as the lead size grows (Figure~\ref{fig:scoring:lead}; a pattern originally identified by~\cite{gabel:redner:2012}). This pattern produces a kind of ``restoring force'' such that leads tend to shrink until they turn into ties, producing games that are substantially more unpredictable. Unlike the pattern in CFB, NFL and NHL, no distribution of latent team skills, under a Bradley-Terry-type model, can produce this kind of negative correlation between the probability of scoring and lead size.
Recently, \cite{berger:2011} analyzed similar NBA game data and argued that increased psychological motivation drives teams that are slightly behind (e.g., by one point at halftime) to win the game more often than not. That is, losing slightly is good for winning. Our analysis places this claim in a broader, more nuanced context. The effective restoring force is superficially consistent with the belief that losing in NBA games is ``good'' for the team, as losing does indeed empirically increase the probability of scoring. However, we find no such effect in CFB, NFL or NHL games (Figure~\ref{fig:scoring:lead}), suggesting either that NBA players are more poorly motivated than players in other team sports or that some other mechanism explains the pattern.
One such mechanism is for NBA teams to employ strategies associated with substituting weaker players for stronger ones when they hold various leads, e.g., to allow their best players to rest or avoid injury, manage floor spacing and offensive/defensive combinations, etc., and then reverse the process when the other team leads. In this way, a team will play more weakly when it leads, and more strongly when it is losing, because of personnel changes alone rather than changes in morale or effort. If teams have different thresholds for making such substitutions, and differently skilled best players, the averaging across these differences would produce the smooth pattern observed in the data. Such substitutions are indeed common in basketball games, while football and hockey teams are inherently less able to alter their effective team skill through such player management, which may explain the restoring force's presence in NBA games and its absence in CFB, NFL or NHL games. It would be interesting to determine whether college basketball games exhibit the same restoring force, and the personnel management hypothesis could be tested by estimating the on-court team's skill as a function of lead size.
The observed patterns we find in the probability of scoring while in the lead are surprisingly accurate at reproducing the observed variation in lead-size dynamics in these sports (Figure~\ref{fig:sim:lead}), and suggest that this one pattern provides a compact and mostly accurate summary of the within-game scoring dynamics of a sport. However, we do not believe these patterns indicate the presence of any feedbacks, e.g., ``momentum'' or cumulative advantage~\cite{price1976general}. Instead, for CFB, NFL and NHL games, this pattern represents the distribution of latent team skills, while for NBA games, it represents strategic decisions about which players are on the court as a function of lead size.
This pattern also makes remarkably good predictions about the overall outcome of games, even when given information about only the first $\ell$ scoring events. Under a controlled out-of-sample test, we found that CFB, NFL and NHL games are highly predictable, even after only a few events. In contrast, NBA games were significantly less predictable, although reasonable predictions here can still be made, despite the impact of the restoring force.
Given the popularity of betting on sports, it is an interesting question as to whether our model produces better or worse predictions than those of established odds-makers. To explore this question, we compared our model against two such systems, the online live-betting website Bovada%
\footnote{See {\tt https://live.bovada.lv}. Only data on NBA games were available.}
and the odds-maker website Sports Book Review (SBR).%
\footnote{See {\tt http://www.sbrforum.com/betting-odds}}
Neither site provided comprehensive coverage or systematic access, and so our comparison was necessarily limited to a small sample of games. Among these, however, our predictions were very close to those of Bovada, and, after 20\% of each game's events had occurred, were roughly 10\% more accurate than SBR's money lines across all sports. Although the precise details are unknown for how these commercial odds were set, it seems likely that they rely on many details omitted by our model, such as player statistics, team histories, team strategies and strengths, etc. In contrast, our model uses only information relating to the basic scoring dynamics within a sport, and knows nothing about individual teams or game strategies. In that light, its accuracy is impressive.
These results suggest several interesting directions for future work. For instance, further elucidating the connection between team skill and the observed scoring patterns would provide an important connection between within-game dynamics and team-specific characteristics. These, in turn, could be estimated from player-level characteristics to provide a coherent understanding of how individuals cooperate to produce a team and how teams compete to produce dynamics. Another missing piece of the dynamics puzzle is the role played by the environment and the control of space for creating scoring opportunities. Recent work on online games with heterogeneous environments suggests that these spatial factors can have large impact on scoring tempo and balance~\cite{merritt2013environmental}, but time series data on player positions on the field would further improve our understanding. Finally, our data omit many aspects of gameplay, including referee calls, timeouts, fouls, etc., which may provide for interesting strategic choices by teams, e.g., near the end of the game, as with clock management in football games. Progress on these and other questions would shed more light on the fundamental question of how much of gameplay may be attributed to skill versus luck.
Finally, our results demonstrate that common patterns and processes do indeed cut across seemingly distinct sports, and these patterns provide remarkably accurate descriptions of the events within these games and predictions of their outcomes. However, many questions remain unanswered, particularly as to what specific mechanisms generate the modest deviations from the basic patterns that we observe in each sport, and how exactly teams exerting such great efforts against each other can conspire to produce gameplay so reminiscent of simple stochastic processes. We look forward to future work that further investigates these questions, which we hope will continue to leverage the powerful tools and models of dynamical systems, statistical physics, and machine learning with increasingly detailed data on competition.
\begin{acknowledgments}
We thank Dan Larremore, Christopher Aicher, Joel Warner, Mason Porter, Peter Mucha, Pete McGraw, Dave Feldman, Sid Redner, Alan Gabel, Owen Newkirk, Oskar Burger, Rajiv Maheswaran and Chris Meyer for helpful conversations. This work was supported in part by the James S. McDonnell Foundation.
\end{acknowledgments}
\section{\uppercase{Introduction}}
\label{sec:introduction}
\noindent
Accurate and efficient ball and player detection is a key element of any solution intended to
automate analysis of video recordings of soccer games.
The method proposed in this paper allows effective and efficient ball and player detection in long shot, high definition video recordings.
It is intended as a key component of a computer system developed for football academies and clubs to automate the analysis of soccer video recordings.
Detecting the ball from long-shot video footage of a soccer game is a challenging problem.
\cite{deepball} lists the following factors that make the problem of ball localization difficult.
First, the ball is very small compared to other objects visible in the observed scene. Its size varies significantly depending on its position. In long shot recordings of soccer games, the ball can appear as small as 8 pixels, when it is on the far side of the pitch, opposite from the camera, and as big as 20 pixels, when it is on the near side of the field. At such a small size, the ball can appear indistinguishable from parts of a player's body (e.g. head or white socks) or from background clutter, such as small litter on the pitch or parts of stadium advertisements.
The shape of the ball can vary. When it is kicked and moves at high velocity, it becomes blurry and elliptical rather than circular. Perceived colour changes due to shadows and lighting variation. Situations when the ball is in a player's possession or partially occluded are especially difficult. Simple ball detection methods based on motion-based background subtraction fail in such cases.
Top row of Fig.~\ref{jk:fig:ball_examples} shows exemplary image patches illustrating variance in the ball appearance and difficulty of the ball detection task.
Players are larger than the ball and usually easier to detect. But in some situations their detection can be problematic.
Players are sometimes in close contact with each other and partially occluded. They can have an unusual pose due to stumbling and falling on the pitch. See the bottom row of Fig.~\ref{jk:fig:ball_examples} for exemplary images showing the difficulty of the player detection task.
\begin{figure}
\centering
\includegraphics[trim={0 0.5cm 1.1cm 0},clip, width=0.9\columnwidth]{ball_examples.png} \\
\includegraphics[height=1.8cm]{p1.png}
\includegraphics[height=1.8cm]{p3.png}
\includegraphics[height=1.8cm]{p5.png}
\includegraphics[height=1.8cm]{p6.png}
\caption{Exemplary patches illustrating the difficulty of the ball and player detection task.
Images of the ball exhibit high variance due to motion blur or can be occluded by a player. Players can be in close contact and occluded.}
\label{jk:fig:ball_examples}
\end{figure}
In this paper we present a ball and players detection method inspired by recent progress in deep neural network-based object detection methods.
Our method operates on a single video frame and is intended as the first stage in the soccer video analysis pipeline.
The detection network is designed with performance in mind to allow efficient processing of high definition video.
Compared to generic deep neural-network based object detectors, it has two orders of magnitude fewer parameters (e.g. 120 thousand parameters in our model versus 24 million parameters in SSD300~\cite{Liu16}).
Evaluation results in Section \ref{jk:ev_results} prove that our method can efficiently process high definition video input in real time. It achieves a throughput of 37 frames per second for high resolution (1920x1080) videos using a low-end GeForce GTX 1060 GPU.
In contrast, the Faster R-CNN~\cite{Ren15} generic object detector runs at only 8 FPS on the same machine.
\section{\uppercase{Related Work}}
\label{sec:related_work}
\noindent
The first step in traditional ball or player detection methods is usually the segmentation of the foreground from the background.
The most commonly used approaches are based on chromatic features~\cite{Gong95,Yoon02}
or on motion, such as background subtraction~\cite{DOr02,Mazz12}.
Segmentation methods based on chromatic features use domain knowledge about the visible scene: the football pitch is mostly green and the ball is mostly white.
The colour of the pitch is usually modelled using a Gaussian Mixture Model, either hardcoded in the system or learned. When the video comes from a static camera, motion-based segmentation is often used.
For computational performance reasons, a simple approach is usually applied based on an absolute difference between consecutive frames or the difference between the current frame and the mean or median image obtained from a few previously processed frames \cite{High16}.
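A minimal sketch of such median-image background subtraction, using OpenCV in Python (the threshold value and the number of frames in the buffer are illustrative choices):
\begin{verbatim}
import cv2
import numpy as np

def moving_foreground(frames, thresh=25):
    # Compare the current (last) frame against the per-pixel median
    # of the preceding frames; large differences are marked as moving.
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    background = np.median(np.stack(grays[:-1]), axis=0).astype(np.uint8)
    diff = cv2.absdiff(grays[-1], background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
\end{verbatim}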
Traditional ball detection methods often use heuristic criteria based on chromatic or morphological features of connected components obtained from the background segmentation process.
These criteria include blob size, colour and shape (circularity, eccentricity). Variants of the Circle Hough Transform, modified to detect elliptical rather than circular objects, are used to verify whether a blob contains the ball~\cite{DOr02,Popp10,Halb15}.
To achieve real-time performance and high detection accuracy,
a two-stage approach may be employed~\cite{DOr02}. First, regions that probably contain the ball are found (\emph{ball candidate extraction}), then candidates are validated (\emph{ball candidate validation}).
Ball detection methods using morphological features to analyze the shape of blobs produced by background segmentation fail if the ball is touching a player.
See the three rightmost examples in the top row of Fig.~\ref{jk:fig:ball_examples} for situations where these methods are likely to fail.
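To make the two-stage candidate extraction and validation described above concrete, the sketch below uses connected components of the foreground mask as candidates and a standard circular Hough transform for validation (the cited works use elliptical variants); all size and Hough parameters are illustrative.
\begin{verbatim}
import cv2

def ball_candidates(mask, min_area=20, max_area=400):
    # Stage 1: size-filtered connected components of the foreground mask.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(map(int, centroids[i])) for i in range(1, n)
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]

def validate_candidate(gray, cx, cy, win=24):
    # Stage 2: accept the candidate if a circle is found in its local window
    # (gray must be a single-channel 8-bit image).
    patch = gray[max(cy - win, 0):cy + win, max(cx - win, 0):cx + win]
    circles = cv2.HoughCircles(patch, cv2.HOUGH_GRADIENT, dp=1, minDist=win,
                               param1=80, param2=12, minRadius=3, maxRadius=12)
    return circles is not None
\end{verbatim}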
Earlier player detection methods are based on connected component analysis~\cite{Yoon02,Dor07} or use techniques such as variants of the Viola and Jones detector with Haar features~\cite{Lehu07,Liu09}.
\cite{Lehu07} present a football player detection method based on convolutional neural networks. The network has a relatively shallow architecture and produces feature maps indicating probable player positions.
\cite{Mack10} uses a combination of Histogram of Oriented Gradients (HOG) and Support Vector Machines (SVM) for player detection. First, background segmentation, using domain knowledge of the football pitch colour, is performed. HOG descriptors are then classified by a linear SVM trained on a player template database.
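A sketch of this HOG-plus-linear-SVM pipeline (the patch size, HOG parameters and SVM regularization constant are illustrative, not the values used in the cited work):
\begin{verbatim}
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def train_player_classifier(player_patches, background_patches):
    # Linear SVM on HOG descriptors of fixed-size grayscale patches
    # (e.g. 128x64 pixels).
    def descriptor(patch):
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
    X = np.array([descriptor(p) for p in player_patches + background_patches])
    y = np.array([1] * len(player_patches) + [0] * len(background_patches))
    return LinearSVC(C=0.01).fit(X, y)
\end{verbatim}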
\cite{Lu13} uses the Deformable Part Model (DPM)~\cite{Felz13} to detect sports players in video frames.
The DPM consists of 6 parts and 3 aspect ratios.
The weakness of this method is that it may fail to detect partially occluded players due to the non-maximum suppression operator applied after detection.
A comprehensive review of player detection and tracking methods in~\cite{Mana17} lists the following weaknesses present in reviewed algorithms.
One of the most frequent is the problem with correctly detecting partially occluded players.
Motion-based techniques using background subtraction fail if a player is standing still or moving very slowly for a prolonged time.
Discrimination of players wearing white jerseys from the pitch white lines can be challenging using colour cues.
Pixel-based player detection methods often fragment a player into multiple separate regions (e.g. parts of legs) due to variations in the colour of legs, jerseys, shorts and socks.
Template-based detectors have problems with handling occlusions or situations when a player has an unusual pose (e.g. due to stumbling and falling on the pitch).
See bottom row of Fig.~\ref{jk:fig:ball_examples} for exemplary difficult situations for the player detection task.
In recent years spectacular progress has been made in the area of neural-network based object detection methods.
The deep neural-network based YOLO detector~\cite{Redm16} achieves 63\% mean Average Precision (mAP) on the PASCAL VOC 2007 dataset, whereas the traditional Deformable Parts Model (DPM) detector~\cite{Felz10} scores only 30\%. Current state-of-the-art object detectors can be categorized as one-stage or two-stage. In a two-stage detector, such as Fast R-CNN~\cite{Girs15} or Faster R-CNN~\cite{Ren15},
the first stage generates a sparse set of candidate object locations (region proposals). The second stage uses a deep convolutional neural network to classify each candidate location as one of the foreground classes or as background. One-stage detectors, such as SSD~\cite{Liu16} or YOLO~\cite{Redm16}, do not include a separate region-proposal generation step. A single detector based on a deep convolutional neural network is applied instead.
However, general-purpose neural network-based object detectors are relatively large-scale networks with tens of millions of trainable parameters (e.g. 24 million parameters in the SSD300~\cite{Liu16} detector). Another drawback is their limited performance on small object detection. The authors of the SSD method report a significant drop in performance when detecting small objects: on the COCO test-dev2015 dataset, average precision drops from 41.9\% for large objects to only 9.0\% for small objects.
This limits the usefulness of generic object detectors for recognizing small objects, such as the ball.
Following the successful applications of convolutional neural networks to solve many computer vision problems, a few neural network-based ball and player detection methods were recently proposed.
\cite{Spec17} uses convolutional neural networks (CNN) to localize the ball under varying environmental conditions. The first part of the network consists of multiple convolution and max-pooling layers which are trained on the standard object classification task. The output of this part is processed by fully connected layers regressing the ball location as a probability distribution along the x- and y-axes. The network is trained on a large dataset of images with annotated ground truth ball positions.
The limitation of this method is that it fails if more than one ball, or an object very similar to the ball, is present in the image.
\cite{Reno18} presents a
deep neural network classifier, consisting of convolutional feature extraction layers followed by a fully connected classification layer. It is trained to classify small, rectangular image patches as ball or no-ball.
The classifier is used in a sliding window manner to generate a probability map of the ball occurrence.
The method has two drawbacks.
First, the set of negative training examples (patches without the ball) must be carefully chosen to include sufficiently hard examples.
Second, the rectangular patch size must be manually selected to account for all the possible ways the ball can appear in the scene: big or small due to the perspective, sharp or blurred due to its speed. The method is also not optimal from the performance perspective.
Each rectangular image patch is separately processed by the neural network using a sliding-window approach. Then, individual results are combined to produce a final ball probability map.
\cite{gabel2018jetson} uses off-the-shelf deep neural network-based classifier architectures (AlexNet and Inception) fine tuned to detect the ball in rectangular patches cropped from the input image.
Authors reported 99\% ball detection accuracy on the custom dataset from RoboCup competition.
However, their dataset contains much closer, zoomed-in views of the pitch, with the ball considerably larger than in the long-shot videos used as input to our method. Also, off-the-shelf architectures, such as Inception, are powerful but require much more computational resources for training and inference.
\cite{kamble2019deep} describes a deep learning approach for 2D ball and player detection and tracking in soccer videos.
First, median filtering-based background subtraction is used to detect moving objects.
Then, the extracted patches are classified into three categories (ball, player and background) using a VGG-based classifier.
Such an approach fails if the ball is touching, or partially occluded by, a player.
\cite{csah2018evaluation} uses a similar approach for player detection, where fixed-size image patches are extracted from an input image using a sliding window approach. The patches are initially filtered using hand-crafted rules and then fed into a classification CNN with a relatively shallow, feed-forward architecture.
In contrast to the above patch-based methods, our solution requires a single pass of the entire image through the network. It can be efficiently implemented using modern deep learning frameworks and makes full use of GPU hardware.
It does not require extraction, resizing and processing of multiple separate patches.
\cite{Lu17} presents a cascaded convolutional neural network (CNN) for player detection. The network is lightweight and, thanks to the cascaded architecture, inference is efficient. The training consists of two phases, branch-level training and whole-network training, and the cascade thresholds are found using grid search.
Our method is end-to-end trainable in a single phase and requires fewer hyper-parameters, as multiple cascade thresholds are not needed. Training is also more efficient, as our network processes entire images in a single pass, without the need to extract multiple patches.
\cite{zhang2018rc} is a player detection method based on the SSD~\cite{Liu16} object detector. The authors
highlight the importance of integrating low level and high level (semantic) features to improve detection accuracy. To this end, they introduce reverse connected modules which integrate features derived from multiple layers into multi-scale features. In principle, the presented design modification is similar to Feature Pyramid Network~\cite{Lin17} concept, where higher level, semantically richer features are integrated with lower level features.
DeepBall~\cite{deepball} method is an efficient ball detection method based on fully convolutional architecture. It takes an input image and produces a ball confidence map indicating the most probable ball locations.
The method presented in this paper offers a unified solution to efficiently detect both players and the ball in input images. It is designed to effectively detect objects of different scales, such as the ball and players, by using the Feature Pyramid Network~\cite{Lin17} design pattern.
\section{\uppercase{Player and ball detection method}}
\label{sec:method}
\noindent
The method presented in this paper, dubbed \emph{FootAndBall}, is inspired by recent developments of neural network-based generic object detectors, such as SSD~\cite{Liu16} and \emph{DeepBall}~\cite{deepball} ball detection method.
The typical architecture of a single-stage, neural network-based object detector is modified to make it more appropriate for player and ball detection in long-shot soccer videos.
Modifications aim at improving the processing frame rate by reducing the complexity of the underlying feature extraction network.
For comparison, the SSD300~\cite{Liu16} model has about 24 million parameters, while our model has only 199 thousand, over one hundred times fewer.
Multiple anchor boxes, with different sizes and aspect ratios, are not needed as we detect objects from two classes with a limited shape and size variance.
The feature extraction network is designed to improve the detection accuracy of objects at different scales: small objects such as the ball and larger objects such as players.
This is achieved by using the Feature Pyramid Network~\cite{Lin17} design pattern, which efficiently combines higher-resolution, low-level feature maps with lower-resolution maps that encode higher-level features and have a larger receptive field.
This allows both precise ball localization and improved detection accuracy in difficult situations, as the larger visual context helps differentiate the ball from parts of a player's body and background clutter (e.g. parts of stadium advertisements).
Generic single-shot object detection methods usually divide an image into a relatively coarse grid (e.g. a 7x7 grid in YOLO~\cite{Redm16}) and detect no more than one object of interest with a particular aspect ratio in each grid cell.
This prevents correct detections when two objects, such as players, are close to each other. Our method uses denser grids.
For an input image of 1920x1080 pixels, we use a 480x270 grid (input image size scaled down by 4) for ball detection and a 120x68 grid (input image size scaled down by 16) for player detection. This allows two nearby players to be detected as separate objects.
Our method takes an input video frame and produces three outputs:
\emph{ball confidence map} encoding probability of ball presence at each grid cell,
\emph{player confidence map} encoding probability of player presence at each grid cell
and \emph{player bounding box tensor} encoding coordinates of a player bounding box at each cell of the player confidence map.
See Fig.~\ref{jk:fig:network-diagram} for visualization of network outputs.
For the input image with \(w \times h\) resolution,
the size of the output \emph{ball confidence map} is \(w/k_B \times h/k_B\), where \(k_B \) is the scaling factor (\(k_B=4\) in our implementation).
Position in the \emph{ball confidence map} with coordinates \((i, j)\) corresponds to the position \((\lfloor k_B(i-0.5) \rfloor, \lfloor k_B(j-0.5) \rfloor)\) in the input image.
The ball is located by finding the location in the ball confidence map with the highest confidence.
If the highest confidence is lower than a threshold \(\Theta_{B}\), no ball is detected. This can happen when the ball is occluded by a player or outside the pitch.
The pixel coordinates of the ball in the input frame are computed using the following formula:
\((x, y) = (\lfloor k_B(i-0.5) \rfloor, \lfloor k_B(j-0.5) \rfloor)\),
where \((i, j)\) are coordinates in the \emph{ball confidence map}.
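To make this decoding step concrete, the following minimal PyTorch sketch (for illustration only; the tensor layout, function name and threshold value are illustrative) finds the most confident cell of a ball confidence map and maps it back to pixel coordinates. The formula above assumes 1-based cell indices; with 0-based tensor indices the offset becomes \((i+0.5)\).
\begin{verbatim}
import torch

def decode_ball(ball_conf_map, k_b=4, theta_b=0.7):
    # ball_conf_map: tensor of shape (H/k_b, W/k_b) with values in [0, 1].
    conf, flat_idx = ball_conf_map.flatten().max(dim=0)
    if conf.item() < theta_b:
        return None  # ball occluded or outside the pitch
    n_cols = ball_conf_map.shape[1]
    j, i = divmod(flat_idx.item(), n_cols)  # j = row index, i = column index
    # Map the grid cell (i, j) back to input-image pixel coordinates
    # (0-based indices, hence the +0.5 offset).
    x = int(k_b * (i + 0.5))
    y = int(k_b * (j + 0.5))
    return x, y, conf.item()
\end{verbatim}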
The size of the \emph{player confidence map} is \(w/k_P \times h/k_P\), where \(k_P\) is the scaling factor (\(k_P=16\) in our implementation).
The size of the \emph{player bounding box tensor} is \(w/k_P \times h/k_P \times 4\).
The tensor encodes four bounding box coordinates for each location in the player confidence map where the player is detected.
Position in the \emph{player confidence map} with coordinates \((i, j)\) corresponds to the position \((\lfloor k_P(i-0.5) \rfloor, \lfloor k_P(j-0.5) \rfloor)\) in the input image.
Player positions are found by locating local maxima in the player confidence map with confidence above the threshold \(\Theta_{P}\). This is achieved by applying non-maximum suppression to the player confidence map and taking all locations with confidence above the threshold \(\Theta_{P}\).
Let \(P = \left\{ \left( i,j \right) \right\}\) be the set of such local maxima.
For each identified player location \((i,j)\) in the player confidence map, we retrieve bounding box coordinates from the bounding box tensor.
Similar to SSD~\cite{Liu16}, player bounding boxes are encoded as \((x_{bbox}, y_{bbox}, w_{bbox}, h_{bbox}) \in \mathbb{R}^4\) vectors, where \((x_{bbox}, y_{bbox})\) is the relative position of the centre of the bounding box with respect to the center of the corresponding grid cell in the player confidence map and \(w_{bbox}\), \(h_{bbox}\) are its width and height. The coordinates are normalized to the \([0, 1]\) range.
The bounding box centre in pixel coordinates is calculated as:
\((x_{bbox}', y_{bbox}') = (\lfloor k_P(i-0.5) + x_{bbox} w \rfloor, \lfloor k_P(j-0.5) + y_{bbox} h \rfloor)\), where \( (w, h) \) is an input image resolution.
Its width and height in pixel coordinates are \((\lfloor w_{bbox} w\rfloor, \lfloor h_{bbox} h \rfloor)\).
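The player decoding step can be sketched in a similar way. The snippet below is an illustration only, with hypothetical variable names; it follows the decoding formulas above and, for simplicity, implements the non-maximum suppression as a comparison with a 3x3 max-pooled version of the confidence map.
\begin{verbatim}
import torch
import torch.nn.functional as F

def decode_players(player_conf, bbox_tensor, img_w, img_h, k_p=16, theta_p=0.7):
    # player_conf: (H/k_p, W/k_p) confidence map.
    # bbox_tensor: (4, H/k_p, W/k_p) with relative (x, y, w, h) encodings.
    # Non-maximum suppression: keep cells that are local maxima of the map.
    pooled = F.max_pool2d(player_conf[None, None], 3, stride=1, padding=1)[0, 0]
    is_peak = (player_conf == pooled) & (player_conf > theta_p)
    detections = []
    for j, i in is_peak.nonzero(as_tuple=False).tolist():  # j = row, i = column
        x_rel, y_rel, w_rel, h_rel = bbox_tensor[:, j, i].tolist()
        # Cell centre in pixels (0-based indices) plus the regressed offset.
        cx = k_p * (i + 0.5) + x_rel * img_w
        cy = k_p * (j + 0.5) + y_rel * img_h
        bw, bh = w_rel * img_w, h_rel * img_h
        detections.append((cx - bw / 2, cy - bh / 2,
                           cx + bw / 2, cy + bh / 2,
                           player_conf[j, i].item()))
    return detections
\end{verbatim}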
Detection results are visualized in Fig.~\ref{jk:fig:det_results}. Numbers above the player bounding boxes show detection confidence. Detected ball position is indicated by the red circle.
Fig.~\ref{jk:fig:det_overlap} shows player detection performance in difficult situations, when players are touching each other or occluded.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{1.png}
\includegraphics[width=0.48\textwidth]{4.png}
\caption{Player and ball detection results. Numbers above bounding boxes show detection confidence from the player classifier.}
\label{jk:fig:det_results}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=2cm]{o1.png}
\includegraphics[height=2cm]{o2.png}
\includegraphics[height=2cm]{o3.png}
\includegraphics[height=2cm]{o4.png}
\includegraphics[height=2cm]{o7.png}
\caption{Detection results in difficult situations (player occlusion).}
\label{jk:fig:det_overlap}
\end{figure}
\paragraph{Network architecture}
\begin{figure*}
\centering
\includegraphics[clip, trim=1cm 18.0cm 1.9cm 1.0cm, width=0.95\textwidth]{footandball_diagram2.pdf}
\caption{High-level architecture of \emph{FootAndBall} detector.
The input image is processed bottom-up by five convolutional blocks (Conv1, Conv2, Conv3, Conv4 and Conv5) producing feature maps with decreasing spatial resolution and increasing number of channels.
The feature maps are then processed in the top-down direction. Upsampled feature map from the higher pyramid level is added to the feature map from the lower level. 1x1 convolution blocks decrease the number of channels to the same value (32 in our implementation).
Resultant feature maps are processed by three heads: ball classification head, player classification head and player bounding box regression head.
Numbers in brackets denote size of feature maps (width, height, number of channels) produced by each block, where \(w, h\) is the input image width and height.
}
\label{jk:fig:network-diagram}
\end{figure*}
\begin{table}
\centering
\begin{tabular}{l@{\quad}l@{\quad}l@{\quad}l}
\hline
Conv1 & Conv: 16 3x3 filters & \\
& Max pool: 2x2 filter & (w/2, h/2, 16)\\
Conv2 & Conv: 32 3x3 filters & \\
& Conv: 32 3x3 filters & \\
& Max pool: 2x2 filter & (w/4, h/4, 32)\\
Conv3 & Conv: 32 3x3 filters & \\
& Conv: 32 3x3 filters & \\
& Max pool: 2x2 filter & (w/8, h/8, 32)\\
Conv4 & Conv: 64 3x3 filters & \\
& Conv: 64 3x3 filters & \\
& Max pool: 2x2 filter & (w/16, h/16, 64) \\
Conv5 & Conv: 64 3x3 filters & \\
& Conv: 64 3x3 filters & \\
& Max pool: 2x2 filter & (w/32, h/32, 32) \\
\hline
1x1Conv1 & Conv: 32 1x1 filters & (w/4, h/4, 32)\\
1x1Conv2 & Conv: 32 1x1 filters & (w/8, h/8, 32)\\
1x1Conv3 & Conv: 32 1x1 filters & (w/16, h/16, 32)\\
\hline
Ball & Conv: 32 3x3 filters & \\
classifier & Conv: 2 3x3 filters & \\
& Sigmoid & (w/4, h/4, 1) & \\
Player & Conv: 32 3x3 filters & \\
classifier & Conv: 2 3x3 filters & & \\
& Sigmoid & (w/16, h/16, 1) & \\
BBox & Conv: 32 3x3 filters & \\
regressor & Conv: 4 3x3 filters & (w/16, h/16, 4) \\
\hline
\end{tabular}
\caption{Details of \emph{FootAndBall} detector architecture. Third column lists size of the output from each block as (width, height, number of channels) for an input image with \(w \times h\) resolution.
All convolutional layers, except for 1x1 convolutions, are followed by BatchNorm and ReLU non-linearity (not listed in the table for brevity). All convolutions use 'same' padding and stride one.
}
\label{jk:tab-details}
\end{table}
Figure~\ref{jk:fig:network-diagram} shows the high-level architecture of our \emph{FootAndBall} network. It is based on the Feature Pyramid Network~\cite{Lin17} design pattern.
The input image is processed in the bottom-up direction by five convolutional blocks (Conv1, Conv2, Conv3, Conv4, Conv5) producing feature maps with decreasing spatial resolution and increasing number of channels.
Numbers in brackets denote the width, height and number of channels \((w, h, c)\) of feature maps produced by each block.
The feature maps are then processed in the top-down direction. Upsampled feature maps from the higher pyramid level are added to feature maps from the lower level.
1x1 convolution blocks decrease the number of channels to the same value (32 in our implementation), so the two feature maps can be summed.
Resultant feature maps are processed by three heads: ball classification head, player classification head and player bounding box regression head.
Ball classification head takes a feature map with spatial resolution \(w/4 \times h/4\) as an input and produces one channel \(w/4 \times h/4\) \emph{ball confidence map} denoting probable ball locations. Each location in the ball confidence map corresponds to \(4 \times 4\) pixel block in the input image.
Player classification head takes a feature map with spatial resolution \(w/16 \times h/16\) as an input and produces one channel \(w/16 \times h/16\) \emph{player confidence map} denoting detected player locations.
Each location in the player confidence map corresponds to \(16 \times 16\) pixel block in the input image. Player bounding box regression head takes the same input as the player classification head.
It produces a four-channel \(w/16 \times h/16 \times 4\) \emph{player bounding box tensor} encoding the coordinates of a player bounding box at each location of the player confidence map.
Details of each block are listed in Table~\ref{jk:tab-details}.
The network has a fully convolutional architecture and can operate on images of any size.
Using the Feature Pyramid Network~\cite{Lin17} architecture allows using both low-level features from the first convolutional layers and high-level features computed by higher convolutional layers.
Information from the first convolutional layers is necessary for precise spatial localization of the objects of interest.
Higher convolutional layers operate on feature maps with lower spatial resolution, thus they cannot provide exact spatial locations.
But they have larger receptive fields, and their output provides additional context that improves classification accuracy.
This is especially important for the ball detection task. The ball is very small, and other objects, such as parts of a player's body or stadium advertisements, may have a similar appearance.
Using a larger receptive field and higher-level features improves the discriminative power of the ball detector.
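The top-down merging step can be illustrated with the short PyTorch sketch below. This is an illustration only; the module name and channel numbers are indicative (32 output channels, following Table~\ref{jk:tab-details}), and, for simplicity, a 1x1 convolution is applied to both inputs here.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class TopDownMerge(nn.Module):
    # Merge a high-resolution, low-level map with an upsampled,
    # low-resolution, high-level map (Feature Pyramid Network style).
    def __init__(self, low_level_ch, high_level_ch, out_ch=32):
        super().__init__()
        self.lateral = nn.Conv2d(low_level_ch, out_ch, kernel_size=1)
        self.top = nn.Conv2d(high_level_ch, out_ch, kernel_size=1)

    def forward(self, low_level_map, high_level_map):
        lateral = self.lateral(low_level_map)
        top = F.interpolate(self.top(high_level_map),
                            size=lateral.shape[-2:], mode="nearest")
        return lateral + top  # element-wise sum of the two feature maps
\end{verbatim}
The merged map at \(w/4\) resolution feeds the ball classification head, whereas the map at \(w/16\) resolution feeds the player classification and bounding box regression heads.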
\paragraph{Loss function} The loss function is a modified version of the loss used in the SSD~\cite{Liu16} detector.
The loss minimized during the neural network training consists of three components: ball classification loss, player classification loss and player bounding box loss.
The proposed network does not regress a bounding box for the ball.
The ball classification loss (\(\mathcal{L}_{B}\)) is a binary cross-entropy loss over the predicted ball confidence and the ground truth:
\begin{equation}
\mathcal{L}_{B} =
-\sum_{\left(i,j\right)\in Pos^{B}} \log c_{ij}^{B}
-\sum_{\left(i,j\right)\in Neg^{B}} \log \left( 1 - c_{ij}^{B} \right) ,
\end{equation}
where \(c_{ij}^{B}\) is the value of the ball confidence map at the spatial location \((i,j)\).
\(Pos^{B}\) is the set of positive ball examples, that is, the set of locations in the ball confidence map corresponding to the ground truth ball position (usually only one location per input image).
\(Neg^{B}\) is the set of negative examples, that is, the set of locations that do not correspond to the ground truth ball position (all locations not containing the ball).
The player classification loss (\(\mathcal{L}_{P}\)) is a binary cross-entropy loss over the predicted confidence in each cell of the player confidence map and the ground truth.
\begin{equation}
\mathcal{L}_{P} =
-\sum_{\left(i,j\right)\in Pos^{P}} \log c_{ij}^{P}
-\sum_{\left(i,j\right)\in Neg^{P}} \log \left( 1 - c_{ij}^{P} \right) ,
\end{equation}
where \(c_{ij}^{P}\) is the value of the player confidence map at the spatial location \((i,j)\).
\(Pos^{P}\) is the set of positive player examples, that is, the set of locations in the player confidence map corresponding to ground truth player positions.
\(Neg^{P}\) is the set of negative examples, that is, the set of locations that do not correspond to any ground truth player position.
The player bounding box loss (\(\mathcal{L}_{bbox}\)) is the smooth L1 loss~\cite{girshick2015fast} between the predicted and ground truth bounding boxes.
As in the SSD~\cite{Liu16} detector, we regress the offset of the bounding box with respect to the cell center and its width and height.
\begin{equation}
\mathcal{L}_{bbox} =
\sum_{\left(i,j\right)\in Pos^{P}}
\mathrm{smooth_{L1}}
\left(
l_{(i,j)} - g_{(i,j)}
\right) ,
\end{equation}
where \(l_{(i,j)} \in \mathbb{R}^4\) denotes the predicted bounding box coordinates at location \((i,j)\)
and \(g_{(i,j)} \in \mathbb{R}^4\) denotes the ground truth bounding box coordinates at location \((i,j)\).
Similar to SSD, we regress the relative position of the center \((cx, cy)\) of the bounding box with respect to the cell centre, and its relative width (\(w\)) and height (\(h\)), normalized by the image width and height as described above.
The total loss is the sum of the three above components, averaged by the number of training examples \(N\):
\begin{equation}
\mathcal{L} = \frac{1}{N}
\left(
\alpha_B \mathcal{L}_{B} + \alpha_P \mathcal{L}_{P} + \mathcal{L}_{bbox}
\right) ,
\end{equation}
where \(\alpha_B\) and \(\alpha_P\) are weights for \(\mathcal{L}_{B}\)
and \(\mathcal{L}_{P}\), chosen experimentally.
Sets of positive ball \( Pos^{B} \) and player \( Pos^{P} \) examples are constructed as follows.
If \((x,y)\) is the ground truth ball position in pixel coordinates, then the corresponding confidence map location \((i,j) = (\lfloor x/k_B \rfloor, \lfloor y/k_B \rfloor)\) and all neighbouring locations are added to \(Pos^{B}\).
If \((x,y)\) is the ground truth position of a player's bounding box center in pixel coordinates, then the corresponding confidence map location \((i,j) = (\lfloor x/k_P \rfloor, \lfloor y/k_P \rfloor)\) is added to \(Pos^{P}\).
Due to the smaller size of the player confidence map, we mark only a single cell as a positive example.
The sets of negative ball \( Neg^{B} \) and negative player \( Neg^{P} \) examples correspond to locations in the confidence maps where the objects are not present.
The number of negative examples is orders of magnitude higher than the number of positive examples, which would create a highly imbalanced training set.
To mitigate this, we employ a hard negative mining strategy as in~\cite{Liu16}. For both the ball and the players, we choose a limited number of negative examples with the highest confidence loss, so that the ratio of negative to positive examples is at most 3:1.
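The classification part of the loss, together with hard negative mining, can be sketched as follows. This is an illustrative PyTorch fragment, not our reference implementation; it assumes a binary target map with ones at positive locations.
\begin{verbatim}
import torch.nn.functional as F

def confidence_loss(conf_map, target_map, neg_pos_ratio=3):
    # Per-cell binary cross-entropy (confidence maps are sigmoid outputs).
    per_cell = F.binary_cross_entropy(conf_map, target_map, reduction="none")
    pos_mask = target_map > 0.5
    num_pos = int(pos_mask.sum().item())
    pos_loss = per_cell[pos_mask].sum()
    # Hard negative mining: keep only the highest-loss negative cells,
    # at most neg_pos_ratio negatives per positive.
    neg_losses = per_cell[~pos_mask]
    num_neg = min(neg_pos_ratio * max(num_pos, 1), neg_losses.numel())
    hard_neg_loss = neg_losses.topk(num_neg).values.sum()
    return pos_loss + hard_neg_loss
\end{verbatim}
The same routine can be applied to both the ball and player confidence maps; the resulting terms are then combined with the smooth L1 bounding box loss and the weights \(\alpha_B\), \(\alpha_P\) as in the total loss above.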
\paragraph{Network training}
The network is trained using a combination of two datasets: the ISSIA-CNR Soccer~\cite{DOr09} and the Soccer Player Detection~\cite{Lu17} datasets.
The ISSIA-CNR Soccer dataset contains six synchronized, long-shot views of the football pitch acquired by six Full-HD DALSA 25-2M30 cameras. Three cameras are designated for each side of the playing field, recording at 25 fps. The videos were acquired during matches of the Italian Serie A. The dataset contains 20,000 frames annotated with the ball position and player bounding boxes. Fig.~\ref{jk:fig:det_results} shows exemplary frames from the ISSIA-CNR Soccer dataset.
The Soccer Player Detection dataset was created from two professional football matches. Each match was recorded by three broadcast cameras at 30 fps with 1280x720 resolution. It contains 2019 images with 22,586 annotated player locations. However, the ball position is not annotated.
For training we select 80\% of the images and use the remaining 20\% for evaluation. We use data augmentation to increase the variety of training examples and reduce overfitting. The following transformations are randomly applied to the training images: random scaling (with a scale factor between 0.8 and 1.2), random cropping, horizontal flip and random photometric distortions (random changes of brightness, contrast, saturation or hue). The ground truth ball position and player bounding boxes are modified accordingly to align with the transformed image.
The network is trained using a standard gradient descent approach with the Adam~\cite{King14} optimizer. The initial learning rate is set to \(0.001\) and decreased by a factor of 10 after 75 epochs. The training runs for 100 epochs in total. The batch size is set to 16.
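In PyTorch terms, this training setup corresponds roughly to the configuration sketched below; \texttt{model}, \texttt{criterion} and \texttt{train\_loader} are placeholders for the detection network, the loss described above and a data loader yielding augmented frames with their ground truth.
\begin{verbatim}
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Decrease the learning rate by a factor of 10 after 75 epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[75], gamma=0.1)
for epoch in range(100):
    for frames, targets in train_loader:   # batches of 16 augmented frames
        optimizer.zero_grad()
        loss = criterion(model(frames), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
\end{verbatim}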
\section{\uppercase{Experimental results}}
\label{jk:section-experimental-results}
\noindent
\paragraph{Evaluation dataset}
We evaluate our method on publicly available ISSIA-CNR Soccer~\cite{DOr09} and Soccer Player Detection~\cite{Lu17} datasets.
For evaluation we use 20\% of the images, whereas the remaining 80\% are used for training.
Both datasets are quite challenging: there is noticeable motion blur, many occlusions, cluttered background and varying player size.
In the ISSIA-CNR dataset the height of players is between 63 and 144 pixels. In the Soccer Player Detection dataset it varies even more, from 20 to 250 pixels.
In the ISSIA-CNR dataset one team wears white jerseys, which makes it difficult to distinguish the ball when it is close to a player.
\paragraph{Evaluation metrics}
\label{sec:metrics}
We report Average Precision (AP), a standard metric used in the assessment of object detection methods. We follow the Average Precision definition from the Pascal VOC 2007 Challenge \cite{Ever10}.
The precision/recall curve is computed from a method's ranked output.
For the ball detection task, the set of positive detections is the set of ball detections (locations in the \emph{ball confidence map} with the confidence above the threshold \(\Theta_B\)) corresponding to the ground truth ball position. Our method does not regress bounding boxes for the ball.
For the player detection task, the set of positive detections is the set of player detections (estimated player bounding boxes) with Intersection over Union (IOU) with the ground truth players' bounding boxes above 0.5.
Recall is defined as the proportion of positive detections with confidence above a given threshold to all positive examples in the ground truth.
Precision is the proportion of positive detections with confidence above that threshold to all detections above that threshold. Average Precision (AP) summarizes the shape of the precision/recall curve, and is defined as the mean precision at a set of eleven equally spaced recall levels:
\begin{equation}
\mathrm{AP} = \frac{1}{11} \sum_{r \in \left\{ 0, 0.1, \ldots 1 \right\}} p(r) \; ,\end{equation}
where \(p(r)\) is the precision at recall level \(r\).
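For reference, the 11-point interpolated AP can be computed with the short NumPy function below (a sketch; following the VOC 2007 convention, the precision at recall level \(r\) is taken as the maximum precision over recalls greater than or equal to \(r\)).
\begin{verbatim}
import numpy as np

def average_precision_11pt(recall, precision):
    # recall, precision: arrays describing the precision/recall curve
    # computed from the ranked detector output.
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recall >= r
        p = precision[mask].max() if mask.any() else 0.0
        ap += p / 11.0
    return ap
\end{verbatim}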
\paragraph{Evaluation results}
\label{jk:ev_results}
Evaluation results are summarized in Table~\ref{jk:table1}. The results show Average Precision (AP) for ball and player detection as defined in the previous section.
The table also lists the number of parameters of each evaluated model and the inference speed, expressed in frames per second (FPS), achieved when processing Full HD (1920x1080 resolution) video.
All methods are implemented in PyTorch~\cite{Pasz17} 1.2 and run on a low-end nVidia GeForce GTX 1060 GPU.
Our method yields the best results on the ball detection task on ISSIA-CNR dataset. It achieves 0.909 Average Precision.
For comparison we evaluated three recent, neural network-based, detection methods.
Two are dedicated ball detection methods trained and evaluated on the same datasets: \cite{Reno18} scores 0.834 and \cite{deepball} 0.877 Average Precision.
The other is the general-purpose Faster R-CNN~\cite{Ren15} object detector, fine-tuned for ball and player detection using our training datasets. Despite having almost two orders of magnitude more parameters, it has a lower ball detection average precision (0.866).
On the player detection task we compared our model to the general-purpose Faster R-CNN~\cite{Ren15} object detector. Comparison with other recently published player detection methods was not made for reasons such as unavailability of the source code, usage of proprietary datasets, lack of details on the train/test split, or evaluation metrics different from those used in our paper.
On the ISSIA-CNR dataset our method achieves higher ball and player detection average precision (0.909 and 0.921, respectively) than Faster R-CNN (0.866 and 0.874), despite having two orders of magnitude fewer parameters.
On the Soccer Player dataset Faster R-CNN gets a higher score (0.928 compared to 0.885 scored by our method).
On average, these two methods have similar accuracy. But due to its specialized design, our method can process high-definition video (1920x1080 frames) in real time at 37 FPS. Faster R-CNN throughput on the same video is only 8 FPS\footnote{We benchmarked the Faster R-CNN implementation based on a ResNet-50 backbone, included in the PyTorch 1.2 distribution. \url{https://pytorch.org}}.
The DeepBall network has the highest throughput (87 FPS), but it is a tiny network built for ball detection only.
\begin{table*}[t]
\caption{Average precision (AP) of ball and player detection methods on the ISSIA-CNR and Soccer Player datasets. The last two columns show the number of trainable parameters and the inference speed in frames per second for high-resolution (1920x1080) video.}
\begin{center}
\begin{tabular}{l@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}r@{\quad}r@{\quad}l}
\hline
Dataset & \multicolumn{3}{c}{ISSIA-CNR} & Soccer Player \\
& \begin{tabular}{@{}c@{}}Ball AP \end{tabular}
& \begin{tabular}{@{}c@{}}Player AP \end{tabular}
& \begin{tabular}{@{}c@{}}mAP \end{tabular}
& \begin{tabular}{@{}c@{}}Player AP \end{tabular}
&\begin{tabular}{@{}c@{}}\#params\end{tabular}
&\begin{tabular}{@{}c@{}}FPS\end{tabular}
\\
\hline
\cite{Reno18} & 0.834 & - & - & - & 313k & 32 \\
DeepBall & 0.877 & - & - & - & 49k & 87 \\
\hline
Faster R-CNN & 0.866 & 0.874 & 0.870 & \textbf{0.928} & 25 600k & 8 \\
\hline
FootAndBall (no top-down) & 0.853 & 0.889 & 0.872 & 0.834 & 137k & 39 \\
\textbf{FootAndBall} & \textbf{0.909} & \textbf{0.921} & \textbf{0.915} & 0.885 & 199k & 37 \\
[2pt]
\hline
\end{tabular}
\end{center}
\label{jk:table1}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.15\textwidth]{ball_not_4.png}
\includegraphics[width=0.15\textwidth]{ball_not_6.png}
\includegraphics[width=0.15\textwidth]{ball_not_8.png} \\
\includegraphics[width=0.15\textwidth]{wrong_ball_2.png}
\includegraphics[width=0.15\textwidth]{wrong_ball_4.png}
\includegraphics[width=0.15\textwidth]{ball_not_10.png}
\caption{Visualization of incorrect ball detection results. The top row shows image patches where the ball is not detected (false negatives). The bottom row shows patches with an incorrectly detected ball (false positives).}
\label{jk:fig:misclassifications}
\end{figure}
Using the Feature Pyramid Network design pattern, where higher-level features with a larger receptive field are combined with lower-level features with higher spatial resolution, is an important element of our design.
Evaluating a similar network, but without top-down connections and without combining multiple feature maps with different receptive fields, produces worse results. Such an architecture (FootAndBall -- no top-down) achieves 4--5 percentage points lower Average Precision in all categories.
Fig.~\ref{jk:fig:misclassifications} shows examples of incorrect ball detections. The top row shows image patches where our method fails to detect the ball (false negatives). It can be noticed that misclassification is caused by severe occlusion, where only a small part of the ball is visible, or by blending of the ball with white parts of a player's kit or with white background objects outside the play field, such as stadium advertisements. The bottom row shows examples of patches where a ball is incorrectly detected (false positives). The detector is sometimes confused by players' white socks or by background clutter outside the play field.
\section{\uppercase{Conclusions}}
\noindent
The article proposes an efficient deep neural network-based player and ball detection method.
The proposed network has a fully convolutional architecture and processes the entire image in a single pass through the network. This is much more computationally efficient than the sliding-window approach used in some recent methods, e.g.~\cite{Reno18}. Additionally, the network can operate on images of any size, which can differ from the size of the images used during training. It outputs ball locations and player bounding boxes.
The method performs very well on the challenging ISSIA-CNR Soccer~\cite{DOr09} and Soccer Player Detection~\cite{Lu17} datasets (0.915 and 0.885 mean Average Precision, respectively).
In the ball detection task it outperforms two other recently proposed neural network-based ball detection methods: \cite{Reno18} and \cite{deepball}.
In the player detection task it is on par with a fine-tuned general-purpose Faster R-CNN object detector, but due to its specialized design it is almost five times faster (37 versus 8 FPS for high-definition 1920x1080 video). This allows real-time processing of high-definition video recordings.
In the future we plan to use temporal information to increase the accuracy. Combining convolutional feature maps from subsequent frames may help to discriminate between objects of interest and static distractors (e.g. parts of stadium advertisements or circular marks on the pitch).
Another research direction is to investigate model compression techniques, such as using half-precision arithmetic, to improve the computational efficiency. The proposed method can process high-definition videos in real time on a relatively low-end GPU platform. However, real-time processing of recordings from multiple cameras poses a challenging problem.
\section*{Acknowledgements}
This work was co-financed by the European Union within the European Regional Development Fund.
\bibliographystyle{apalike}
\section{Introduction}
\label{sec:intro}
In many competitive sports and games (such as tennis, basketball, chess and electronic sports), the most useful definition of a competitor's skill is the propensity of that competitor to win against an opponent.
It is often difficult to measure this skill \emph{explicitly}:
take basketball, for example: a team's skill depends on the abilities of its players in terms of shooting accuracy, physical fitness, mental preparation, but also on the team's cohesion and coordination, on its strategy, on the enthusiasm of its fans, and a number of other intangible factors.
However, it is easy to observe this skill \emph{implicitly} through the outcomes of matches.
In this setting, probabilistic models of pairwise-comparison outcomes provide an elegant and practical approach to quantifying skill and to predicting future match outcomes given past data.
These models, pioneered by \citet{zermelo1928berechnung} in the context of chess (and by \citet{thurstone1927law} in the context of psychophysics), have been studied for almost a century.
They posit that each competitor $i$ (i.e., a team or player) is characterized by a latent score $s_i \in \mathbf{R}$ and that the outcome probabilities of a match between $i$ and $j$ are a function of the difference $s_i - s_j$ between their scores.
By estimating the scores $\{ s_i \}$ from data, we obtain an interpretable proxy for skill that is predictive of future match outcomes.
If a competitor's skill is expected to remain stable over time, these models are very effective.
But what if it varies over time?
A number of methods have been proposed to adapt comparison models to the case where scores change over time.
Perhaps the best known such method is the Elo rating system \citep{elo1978rating}, used by the World Chess Federation for their official rankings.
In this case, the time dynamics are captured essentially as a by-product of the learning rule (c.f. Section~\ref{sec:relwork}).
Other approaches attempt to model these dynamics explicitly \citep[e.g.,][]{fahrmeir1994dynamic, glickman1999parameter, dangauthier2007trueskill, coulom2008whole}.
These methods greatly improve upon the static case when considering historical data, but they all assume the simplest model of time dynamics (that is, Brownian motion).
Hence, they fail to capture more nuanced patterns such as variations at different timescales, linear trends, regression to the mean, discontinuities, and more.
In this work, we propose a new model of pairwise-comparison outcomes with expressive time-dynamics: it generalizes and extends previous approaches.
We achieve this by treating the score of an opponent $i$ as a time-varying Gaussian process $s_i(t)$ that can be endowed with flexible priors \citep{rasmussen2006gaussian}.
We also present an algorithm that, in spite of this increased flexibility, performs approximate Bayesian inference over the score processes in linear time in the number of observations so that our approach scales seamlessly to datasets with millions of observations.
This inference algorithm addresses several shortcomings of previous methods: it can be parallelized effortlessly and accommodates different variational objectives.
The highlights of our method are as follows.
\begin{description}
\item[Flexible Dynamics]
As scores are modeled by continuous-time Gaussian processes, complex (yet interpretable) dynamics can be expressed by composing covariance functions.
\item[Generality]
The score of an opponent for a given match is expressed as a (sparse) linear combination of features.
This enables, e.g., the representation of a home advantage or any other contextual effect.
Furthermore, the model encompasses a variety of observation likelihoods beyond win / lose, based, e.g., on the number of points a competitor scores.
\item[Bayesian Inference]
Our inference algorithm returns a posterior \emph{distribution} over score processes.
This leads to better predictive performance and enables a principled way to learn the dynamics (and any other model hyperparameters) by optimizing the log-marginal likelihood of the data.
\item[Ease of Intepretation]
By plotting the score processes $\{ s_i(t) \}$ over time, it is easy to visualize the probability of any comparison outcome under the model.
As the time dynamics are described through the composition of simple covariance functions, their interpretation is straightforward as well.
\end{description}
Concretely, our contributions are threefold.
First, we develop a probabilistic model of pairwise-comparison outcomes with flexible time-dynamics (Section~\ref{sec:model}).
The model covers a wide range of use cases, as it enables
\begin{enuminline}
\item opponents to be represented by a sparse linear combination of features, and
\item observations to follow various likelihood functions.
\end{enuminline}
In fact, it unifies and extends a large body of prior work.
Second, we derive an efficient algorithm for approximate Bayesian inference (Section~\ref{sec:inference}).
This algorithm adapts to two different variational objectives;
in conjunction with the ``reverse-KL'' objective, it provably converges to the optimal posterior approximation.
It can be parallelized easily, and the most computationally intensive step can be offloaded to optimized off-the-shelf numerical software.
Third, we apply our method on several sports datasets and show that it achieves state-of-the-art predictive performance (Section~\ref{sec:eval}).
Our results highlight that different sports are best modeled with different time-dynamics.
We also demonstrate how domain-specific and contextual information can improve performance even further;
in particular, we show that our model outperforms competing ones even when there are strong intransitivities in the data.
In addition to prediction tasks, our model can also be used to generate compelling visualizations of the temporal evolution of skills.
All in all, we believe that our method will be useful to data-mining practitioners interested in understanding comparison time-series and in building predictive systems for games and sports.
\paragraph{A Note on Extensions}
In this paper, we focus on \emph{pairwise} comparisons for conciseness.
However, the model and inference algorithm could be extended to multiway comparisons or partial rankings over small sets of opponents without any major conceptual change, similarly to \citet{herbrich2006trueskill}.
Furthermore, and even though we develop our model in the context of sports, it is relevant to all applications of ranking from comparisons, e.g., to those where comparison outcomes reflect human preferences or opinions \citep{thurstone1927law, mcfadden1973conditional, salganik2015wiki}.
\section{Model}
\label{sec:model}
In this section, we formally introduce our probabilistic model.
For clarity, we take a clean-slate approach and develop the model from scratch.
We discuss in more detail how it relates to prior work in Section~\ref{sec:relwork}.
The basic building blocks of our model are \emph{features}\footnote{%
In the simplest case, there is a one-to-one mapping between competitors (e.g., teams) and features, but decoupling them offers increased modeling power.}.
Let $M$ be the number of features; each feature $m \in [M]$ is characterized by a latent, continuous-time Gaussian process
\begin{align}
\label{eq:score}
s_m(t) \sim \mathrm{GP}[0, k_m(t, t')].
\end{align}
We call $s_m(t)$ the \emph{score process} of $m$, or simply its \emph{score}.
The \emph{covariance function} of the process, $k_m(t, t') \doteq \Exp{s_m(t) s_m(t')}$, is used to encode time dynamics.
A brief introduction to Gaussian processes as well as a discussion of useful covariance functions is given in Section~\ref{sec:covariances}.
The $M$ scores $s_1(t), \dots, s_M(t)$ are assumed to be (a priori) jointly independent, and we collect them into the \emph{score vector}
\begin{align*}
\bm{s}(t) = \begin{bmatrix}s_1(t) & \cdots & s_M(t) \end{bmatrix}^\Tr.
\end{align*}
For a given match, each opponent $i$ is described by a sparse linear combination of the features, with coefficients $\bm{x}_i \in \mathbf{R}^M$.
That is, the score of an opponent $i$ at time $t^*$ is given by
\begin{align}
\label{eq:compscore}
s_i = \bm{x}_i^\Tr \bm{s}(t^*).
\end{align}
In the case of a one-to-one mapping between competitors and features, $\bm{x}_i$ is simply the one-hot encoding of opponent $i$.
More complex setups are possible: For example, in the case of team sports and if the player lineup is available for each match, it could also be used to encode the players taking part in the match \citep{maystre2016player}.
Note that $\bm{x}_i$ can also depend contextually on the match.
For instance, it can be used to encode the fact that a team plays at home \citep{agresti2012categorical}.
Each observation consists of a tuple $(\bm{x}_i, \bm{x}_j, t^*, y)$, where $\bm{x}_i, \bm{x}_j$ are the opponents' feature vectors, $t^* \in \mathbf{R}$ is the time, and $y \in \mathcal{Y}$ is the match outcome.
We posit that this outcome is a random variable that depends on the opponents through their latent score difference:
\begin{align*}
y \mid \bm{x}_i, \bm{x}_j, t^* \sim p( y \mid s_i - s_j ),
\end{align*}
where $p$ is a known probability density (or mass) function and $s_i, s_j$ are given by~\eqref{eq:compscore}.
The idea of modeling outcome probabilities through score differences dates back to \citet{thurstone1927law} and \citet{zermelo1928berechnung}.
The likelihood $p$ is chosen such that positive values of $s_i - s_j$ lead to successful outcomes for opponent $i$ and vice-versa.
A graphical representation of the model is provided in Figure~\ref{fig:pgms}.
For perspective, we also include the representation of a static model, such as that of \citet{thurstone1927law}.
Our model can be interpreted as ``conditionally parametric'': conditioned on a particular time, it falls back to a (static) pairwise-comparison model parametrized by real-valued scores.
\begin{figure}[t]
\subcaptionbox{
Static model
}[2cm]{
\input{fig/pgm-static}
}
\hfill
\subcaptionbox{
Our dynamic model \label{fig:model}
}{
\input{fig/pgm-dynamic}
}
\caption{
Graphical representation of a static model (left) and of the dynamic model presented in this paper (right).
The observed variables are shaded.
For conciseness, we let $\bm{x}_n \doteq \bm{x}_{n,i} - \bm{x}_{n,j}$.
Right: the latent score variables are mutually dependent across time, as indicated by the thick line.}
\label{fig:pgms}
\end{figure}
\paragraph{Observation Models}
Choosing an appropriate likelihood function $p(y \mid s_i - s_j)$ is an important modeling decision and depends on the information contained in the outcome $y$.
The most widely applicable likelihoods require only \emph{ordinal} observations, i.e., whether a match resulted in a win or a loss (or a tie, if applicable).
In some cases, we might additionally observe points (e.g., in association football, the number of goals scored by each team).
To make use of this extra information, we can model
\begin{enuminline}
\item the number of points of opponent $i$ with a Poisson distribution whose rate is a function of $s_i - s_j$, or
\item the points difference with a Gaussian distribution centered at $s_i - s_j$.
\end{enuminline}
A non-exhaustive list of likelihoods is given in Table~\ref{tab:likelihoods}.
\begin{table}[t]
\caption{
Examples of observation likelihoods.
The score difference is denoted by $d \doteq s_i - s_j$ and the Gaussian cumulative density function is denoted by $\Phi$.
}
\label{tab:likelihoods}
\centering
\begin{tabular}{l lll}
\toprule
Name & $\mathcal{Y}$ & $p(y \mid d)$ & References \\
\midrule
Probit & $\{\pm 1 \}$ & $\Phi(yd)$ & \citep{thurstone1927law, herbrich2006trueskill} \\
Logit & $\{\pm 1 \}$ & $[1 + \exp(-yd)]^{-1}$ & \citep{zermelo1928berechnung, bradley1952rank} \\
Ordinal probit & $\{\pm 1, 0 \}$ & $\Phi(yd - \alpha), \ldots$ & \citep{glenn1960ties} \\
Poisson-exp & $\mathbf{N}_{\ge 0}$ & $\exp(yd - e^d) / y!$ & \citep{maher1982modelling} \\
Gaussian & $\mathbf{R}$ & $\propto \exp[-(y - d)^2 / (2\sigma^2)]$ & \citep{guo2012score} \\
\bottomrule
\end{tabular}
\end{table}
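As a small illustration of the first two rows of Table~\ref{tab:likelihoods}, the snippet below (using SciPy for \(\Phi\); the function name is ours) evaluates the win probability implied by a given score difference under the probit and logit likelihoods.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def win_probability(d, model="probit"):
    # Probability that opponent i beats j given the score difference d = s_i - s_j.
    if model == "probit":
        return norm.cdf(d)                 # Thurstone-style likelihood
    if model == "logit":
        return 1.0 / (1.0 + np.exp(-d))    # Bradley-Terry / Elo-style likelihood
    raise ValueError(model)

# A score advantage of 0.5 translates into these win probabilities.
print(win_probability(0.5, "probit"), win_probability(0.5, "logit"))
\end{verbatim}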
\subsection{Covariance Functions}
\label{sec:covariances}
A Gaussian process $s(t) \sim \mathrm{GP}[0, k(t, t')]$ can be thought of as an infinite collection of random variables indexed by time, such that the joint distribution of any finite vector of $N$ samples $\bm{s} = [s(t_1) \cdots s(t_N)]$ is given by $\bm{s} \sim \mathcal{N}(\bm{0}, \bm{K})$, where $\bm{K} = [k(t_i, t_j)]$.
That is, $\bm{s}$ is jointly Gaussian with mean $\bm{0}$ and covariance matrix $\bm{K}$.
We refer the reader to \citet{rasmussen2006gaussian} for an excellent introduction to Gaussian processes.
Hence, by specifying the covariance function appropriately, we can express prior expectations about the time dynamics of a feature's score, such as smooth or non-smooth variations at different timescales, regression to the mean, discontinuities, linear trends and more.
Here, we describe a few functions that we find useful in the context of modeling temporal variations.
Figure~\ref{fig:covariances} illustrates these functions through random realizations of the corresponding Gaussian processes.
\begin{figure*}[t]
\includegraphics{fig/covariances}
\caption{Random realizations of a zero-mean Gaussian process with six different covariance functions.}
\label{fig:covariances}
\end{figure*}
\begin{description}
\item[Constant] This covariance captures processes that remain constant over time.
It is useful in composite covariances to model a constant offset (i.e., a mean score value).
\item[Piecewise Constant]
Given a partition of $\mathbf{R}$ into disjoint intervals, this covariance is constant inside a partition and zero between partitions.
It can, for instance, capture discontinuities across seasons in professional sports leagues.
\item[Wiener] This covariance reflects Brownian motion dynamics (c.f. Section~\ref{sec:relwork}).
It is non-stationary: the corresponding process drifts away from $0$ as $t$ grows.
\item[Matérn] This family of stationary covariance functions can represent smooth and non-smooth variations at various timescales.
It is parametrized by a variance, a characteristic timescale and a smoothness parameter $\nu$.
When $\nu = 1/2$, it corresponds to a mean-reverting version of Brownian motion.
\item[Linear] This covariance captures linear dynamics.
\end{description}
Finally, note that composite functions can be created by adding or multiplying covariance functions together.
For example, let $k_a$ and $k_b$ be constant and Matérn covariance functions, respectively.
Then, the composite covariance $k(t, t') \doteq k_a(t, t') + k_b(t, t')$ captures dynamics that fluctuate around a (non-zero) mean value.
\citet[Sec. 2.3]{duvenaud2014automatic} provides a good introduction to building expressive covariance functions by composing simple ones.
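The sketch below shows how such realizations can be generated numerically (illustrative NumPy code; the variance and timescale values are arbitrary). It draws samples from a zero-mean Gaussian process whose covariance is the sum of a constant kernel and a Matérn-1/2 kernel, i.e., fluctuations around a non-zero mean value.
\begin{verbatim}
import numpy as np

def k_constant(t, tp, var=1.0):
    return var * np.ones((len(t), len(tp)))

def k_matern12(t, tp, var=1.0, lscale=1.0):
    # Matern covariance with smoothness nu = 1/2 (mean-reverting dynamics).
    return var * np.exp(-np.abs(t[:, None] - tp[None, :]) / lscale)

def sample_gp(kernel, t, n_samples=3, jitter=1e-9):
    cov = kernel(t, t) + jitter * np.eye(len(t))
    return np.random.multivariate_normal(np.zeros(len(t)), cov, size=n_samples)

t = np.linspace(0.0, 10.0, 200)
composite = lambda a, b: k_constant(a, b, var=0.5) + k_matern12(a, b, lscale=2.0)
samples = sample_gp(composite, t)  # three realizations of the score process
\end{verbatim}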
\section{Inference Algorithm}
\label{sec:inference}
In this section, we derive an efficient inference algorithm for our model.
For brevity, we focus on explaining the main ideas behind the algorithm.
A reference software implementation, available online at \url{https://github.com/lucasmaystre/kickscore}, complements the description provided here.
We begin by introducing some notation.
Let $\mathcal{D} = \{ (\bm{x}_n, t_n, y_n) : n \in [N] \}$ be a dataset of $N$ independent observations, where for conciseness we fold the two opponents $\bm{x}_{n,i}$ and $\bm{x}_{n,j}$ into $\bm{x}_n \doteq \bm{x}_{n,i} - \bm{x}_{n,j}$, for each observation\footnote{%
This enables us to write the score difference more compactly.
Given an observation at time $t^*$ and letting $\bm{x} \doteq \bm{x}_i - \bm{x}_j$, we have $s_i - s_j = \bm{x}_i^\Tr \bm{s}(t^*) - \bm{x}_j^\Tr \bm{s}(t^*) = \bm{x}^\Tr \bm{s}(t^*)$.
}.
Let $\mathcal{D}_m \subseteq [N]$ be the subset of observations involving feature $m$, i.e., those observations for which $x_{nm} \ne 0$, and let $N_m = \lvert \mathcal{D}_m \rvert$.
Finally, denote by $\bm{s}_m \in \mathbf{R}^{N_m}$ the samples of the latent score process at times corresponding to the observations in $\mathcal{D}_m$.
The joint prior distribution of these samples is $p(\bm{s}_m) = \mathcal{N}(\bm{0}, \bm{K}_m)$, where $\bm{K}_m$ is formed by evaluating the covariance function $k_m(t, t')$ at the relevant times.
We take a Bayesian approach and seek to compute the posterior distribution
\begin{align}
\label{eq:truepost}
p(\bm{s}_1, \ldots, \bm{s}_M \mid \mathcal{D}) \propto \prod_{m = 1}^M p(\bm{s}_m) \prod_{n = 1}^N p[y_n \mid \bm{x}_n^\Tr \bm{s}(t_n)].
\end{align}
As the scores are coupled through the observations, the posterior no longer factorizes over $\{ \bm{s}_m \}$.
Furthermore, computing the posterior is intractable if the likelihood is non-Gaussian.
To overcome these challenges, we consider a mean-field variational approximation \citep{wainwright2008graphical}.
In particular, we assume that the posterior can be well-approximated by a multivariate Gaussian distribution that factorizes over the features:
\begin{align}
\label{eq:approxpost}
p(\bm{s}_1, \ldots, \bm{s}_M \mid \mathcal{D})
\approx q(\bm{s}_1, \ldots, \bm{s}_M)
\doteq \prod_{m = 1}^M \mathcal{N}(\bm{s}_m \mid \bm{\mu}_m, \bm{\Sigma}_m).
\end{align}
Computing this approximate posterior amounts to finding the variational parameters $\{\bm{\mu}_m, \bm{\Sigma}_m \}$ that best approximate the true posterior.
More formally, the inference problem reduces to the optimization problem
\begin{align}
\label{eq:varopt}
\min_{\{\bm{\mu}_m, \bm{\Sigma}_m \}} \mathrm{div} \left[ p(\bm{s}_1, \ldots, \bm{s}_M \mid \mathcal{D}) \;\Vert\; q(\bm{s}_1, \ldots, \bm{s}_M) \right],
\end{align}
for some divergence measure $\mathrm{div}(p \Vert q) \ge 0$.
We will consider two different such measures in Section~\ref{sec:inf-pseudo-obs}.
A different viewpoint on the approximate posterior is as follows.
For both of the variational objectives that we consider, it is possible to rewrite the optimal distribution $q(\bm{s}_m)$ as
\begin{align*}
q(\bm{s}_m) \propto p(\bm{s}_m) \prod_{n \in \mathcal{D}_m} \mathcal{N}[s_{mn} \mid \tilde{\mu}_{mn}, \tilde{\sigma}^2_{mn}].
\end{align*}
Letting $\mathcal{X}_n \subseteq [M]$ be the subset of features such that $x_{nm} \ne 0$, we can now reinterpret the variational approximation as transforming every observation $(\bm{x}_n, t_n, y_n)$ into several independent \emph{pseudo-observations} with Gaussian likelihood, one for each feature $m \in \mathcal{X}_n$.
Instead of optimizing directly $\{ \bm{\mu}_m, \bm{\Sigma}_m \}$ in~\eqref{eq:varopt}, we can alternatively choose to optimize the parameters $\{ \tilde{\mu}_{mn}, \tilde{\sigma}^2_{mn} \}$.
For any feature $m$, given the pseudo-observations' parameters $\tilde{\bm{\mu}}_m$ and $\tilde{\bm{\sigma}}_m^2$, computing $q(\bm{s}_m)$ becomes tractable (c.f. Section~\ref{sec:inf-posterior}).
An outline of our iterative inference procedure is given in Algorithm~\ref{alg:inference}.
Every iteration consists of two steps:
\begin{enumerate}
\item updating the pseudo-observations' parameters given the true observations and the current approximate posterior (lines \ref{li:begin-param}--\ref{li:end-param}), and
\item recomputing the approximate posterior given the current pseudo-observation (lines \ref{li:begin-post} and \ref{li:end-post}).
\end{enumerate}
Convergence is declared when the difference between two successive iterates of $\{ \tilde{\mu}_{mn} \}$ and $\{ \tilde{\sigma}_{mn}^2 \}$ falls below a threshold.
Note that, as a by-product of the computations performed by the algorithm, we can also estimate the log-marginal likelihood of the data, $\log p(\mathcal{D})$.
\algtext*{EndFor}
\begin{algorithm}[t]
\caption{Model inference.}
\label{alg:inference}
\begin{algorithmic}[1]
\Require $\mathcal{D} = \{ (\bm{x}_n, t_n, y_n) : n \in [N] \}$
\State $\tilde{\bm{\mu}}_m, \tilde{\bm{\sigma}}^2_m \gets \bm{0}, \bm{\infty} \quad \forall m$
\State $q(\bm{s}_m) \gets p(\bm{s}_m) \quad \forall m$
\Repeat
\For{$n = 1, \ldots, N$} \label{li:begin-param}
\State $\bm{\delta} \gets \textsc{Derivatives}(\bm{x}_n, y_n)$ \label{li:derivatives}
\For{$m \in \mathcal{X}_n$}
\State $\tilde{\mu}_{mn}, \tilde{\sigma}^2_{mn} \gets \textsc{UpdateParams}(x_{nm}, \bm{\delta})$ \label{li:updateparams}
\EndFor
\EndFor \label{li:end-param}
\For{$m = 1, \ldots, M$} \label{li:begin-post}
\State $q(\bm{s}_m) \gets \textsc{UpdatePosterior}(\tilde{\bm{\mu}}_m, \tilde{\bm{\sigma}}^2_m)$ \label{li:updateposterior}
\EndFor \label{li:end-post}
\Until convergence
\end{algorithmic}
\end{algorithm}
\paragraph{Running Time}
In Appendix~\ref{app:inference}, we show that \textsc{Derivatives} and \textsc{UpdateParams} run in constant time.
In Section~\ref{sec:inf-posterior}, we show that \textsc{UpdatePosterior} runs in time $O(N_m)$.
Therefore, if we assume that the vectors $\{ \bm{x}_n \}$ are sparse, the total running time per iteration of Algorithm~\ref{alg:inference} is $O(N)$.
Furthermore, each of the two outer \emph{for} loops (lines \ref{li:begin-param} and \ref{li:begin-post}) can be parallelized easily, leading in most cases to a linear acceleration with the number of available processors.
\subsection{Updating the Pseudo-Observations}
\label{sec:inf-pseudo-obs}
The exact computations performed during the first step of the inference algorithm---updating the pseudo-observations---depend on the specific variational method used.
We consider two: expectation propagation \citep{minka2001family}, and reverse-KL variational inference~\citep{blei2017variational}.
The ability of Algorithm~\ref{alg:inference} to seamlessly adapt to either of the two methods is valuable, as it enables practitioners to use the most advantageous method for a given likelihood function.
Detailed formulae for \textsc{Derivatives} and \textsc{UpdateParams} can be found in Appendix~\ref{app:inference}.
\subsubsection{Expectation Propagation}
We begin by defining two distributions.
The \emph{cavity} distribution $q_{-n}$ is the approximate posterior without the pseudo-observations associated with the $n$th datum, that is,
\begin{align*}
q_{-n}(\bm{s}_1, \ldots, \bm{s}_M) \propto \frac{q(\bm{s}_1, \ldots, \bm{s}_M)}{
\prod_{m \in \mathcal{X}_n} \mathcal{N}[s_{mn} \mid \tilde{\mu}_{mn}, \tilde{\sigma}^2_{mn}]}.
\end{align*}
The \emph{hybrid} distribution $\hat{q}_n$ is given by the cavity distribution multiplied by the $n$th likelihood factor, i.e.,
\begin{align*}
\hat{q}_n(\bm{s}_1, \ldots, \bm{s}_M) \propto q_{-n}(\bm{s}_1, \ldots, \bm{s}_M) p[y_n \mid \bm{x}_n^\Tr \bm{s}(t_n)].
\end{align*}
Informally, the hybrid distribution $\hat{q}_n$ is ``closer'' to the true distribution than $q$.
Expectation propagation (EP) works as follows. At each iteration and for each $n$, we update the parameters $\{ \tilde{\mu}_{mn}, \tilde{\sigma}_{mn} : m \in \mathcal{X}_n \}$ such that $\mathrm{KL}( \hat{q}_n \Vert q )$ is minimized.
To this end, the function \textsc{Derivatives} (on line~\ref{li:derivatives} of Algorithm~\ref{alg:inference}) computes the first and second derivatives of the log-partition function
\begin{align}
\label{eq:logpart}
\log \mathbf{E}_{q_{-n}} \left\{ p[y_n \mid \bm{x}_n^\Tr \bm{s}(t_n)] \right\}
\end{align}
with respect to ${\mu}_{-n} \doteq \mathbf{E}_{q_{-n}}[\bm{x}_n^\Tr \bm{s}(t_n)]$.
These computations can be done in closed form for the widely-used probit likelihood, and they involve one-dimensional numerical integration for most other likelihoods.
EP has been reported to result in more accurate posterior approximations on certain classification tasks \citep{nickisch2008approximations}.
\subsubsection{Reverse KL Divergence}
This method (often referred to simply as \emph{variational inference} in the literature) seeks to minimize $\mathrm{KL}(q \Vert p)$, i.e., the KL divergence from the approximate posterior $q$ to the true posterior $p$.
To optimize this objective, we adopt the approach of \citet{khan2017conjugate}.
In this case, the function \textsc{Derivatives} computes the first and second derivatives of the expected log-likelihood
\begin{align}
\label{eq:exp-ll}
\mathbf{E}_q \left\{ \log p[y_n \mid \bm{x}_n^\Tr \bm{s}(t_n)] \right\}
\end{align}
with respect to $\mu \doteq \mathbf{E}_q[\bm{x}_n^\Tr \bm{s}(t_n)]$.
These computations involve numerically solving two one-dimensional integrals.
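As an illustration (not our reference implementation), the expected log-likelihood and its derivatives with respect to \(\mu\) can be approximated as follows for the logit likelihood with \(y \in \{\pm 1\}\), using Gauss-Hermite quadrature for the expectation and central finite differences for the derivatives.
\begin{verbatim}
import numpy as np

NODES, WEIGHTS = np.polynomial.hermite.hermgauss(32)

def expected_log_lik(mu, var, y):
    # E_{d ~ N(mu, var)}[log p(y | d)] for the logit likelihood,
    # approximated with 32-point Gauss-Hermite quadrature.
    d = mu + np.sqrt(2.0 * var) * NODES
    log_lik = -np.logaddexp(0.0, -y * d)
    return np.dot(WEIGHTS, log_lik) / np.sqrt(np.pi)

def derivatives(mu, var, y, eps=1e-4):
    # First and second derivatives of the expected log-likelihood w.r.t. mu.
    f = lambda m: expected_log_lik(m, var, y)
    d1 = (f(mu + eps) - f(mu - eps)) / (2.0 * eps)
    d2 = (f(mu + eps) - 2.0 * f(mu) + f(mu - eps)) / eps ** 2
    return d1, d2
\end{verbatim}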
In comparison to EP, this method has two advantages.
The first is theoretical:
If the likelihood $p(y \mid d)$ is log-concave in $d$, then the variational objective has a unique global minimum, and we can guarantee that Algorithm~\ref{alg:inference} converges to this minimum \citep{khan2017conjugate}.
The second is numerical:
Excepted for the probit likelihood, computing~\eqref{eq:exp-ll} is numerically more stable than computing~\eqref{eq:logpart}.
\subsection{Updating the Approximate Posterior}
\label{sec:inf-posterior}
The second step of Algorithm~\ref{alg:inference} (lines~\ref{li:begin-post} and~\ref{li:end-post}) solves the following problem, for every feature $m$.
Given Gaussian pseudo-observations $\{ \tilde{\mu}_{mn}, \tilde{\sigma}_{mn} : n \in \mathcal{D}_m \}$ and a Gaussian prior $p(\bm{s}_m) = \mathcal{N}(\bm{0}, \bm{K}_m)$, compute the posterior
\begin{align*}
q(\bm{s}_m) \propto p(\bm{s}_m) \prod_{n \in \mathcal{D}_m} \mathcal{N}[s_{mn} \mid \tilde{\mu}_{mn}, \tilde{\sigma}^2_{mn}].
\end{align*}
This computation can be done independently and in parallel for each feature $m \in [M]$.
A naive approach is to use the self-conjugacy properties of the Gaussian distribution directly.
Collecting the parameters of the pseudo-observations into a vector $\tilde{\bm{\mu}}_m$ and a diagonal matrix $\tilde{\bm{\Sigma}}_m$, the parameters of the posterior $q(\bm{s}_m)$ are given by
\begin{align}
\label{eq:batch}
\bm{\Sigma}_m = (\bm{K}_m^{-1} + \tilde{\bm{\Sigma}}_m^{-1})^{-1}, \qquad
\bm{\mu}_m = \bm{\Sigma}_m \tilde{\bm{\Sigma}}_m^{-1} \tilde{\bm{\mu}}_m.
\end{align}
Unfortunately, this computation runs in time $O(N_m^3)$, a cost that becomes prohibitive if some features appear in many observations.
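For concreteness, a minimal NumPy sketch of this naive batch update is given below (illustrative only; the variable names are ours and do not correspond to any released code).
\begin{verbatim}
import numpy as np

def batch_posterior(K, mu_tilde, var_tilde):
    # Naive O(N^3) Gaussian posterior update for one feature.
    # K: (N, N) prior covariance matrix K_m.
    # mu_tilde, var_tilde: (N,) pseudo-observation means and variances.
    Sigma_tilde_inv = np.diag(1.0 / var_tilde)
    Sigma = np.linalg.inv(np.linalg.inv(K) + Sigma_tilde_inv)
    mu = Sigma @ Sigma_tilde_inv @ mu_tilde
    return mu, Sigma
\end{verbatim}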
Instead, we use an alternative approach that exploits a link between temporal Gaussian processes and state-space models \citep{hartikainen2010kalman, reece2010introduction}.
Without loss of generality, we now assume that the $N$ observations are ordered chronologically, and, for conciseness, we drop the feature's index and consider a single process $s(t)$.
The key idea is to augment $s(t)$ into a $K$-dimensional vector-valued Gauss-Markov process $\bar{\bm{s}}(t)$, such that
\begin{align*}
\bar{\bm{s}}(t_{n+1}) = \bm{A}_n \bar{\bm{s}}(t_n) + \bm{\varepsilon}_n,
\qquad \bm{\varepsilon}_n \sim \mathcal{N}(\bm{0}, \bm{Q}_n)
\end{align*}
where $K \in \mathbf{N}_{>0}$ and $\bm{A}_n, \bm{Q}_n \in \mathbf{R}^{K \times K}$ depend on the time interval $\Abs{t_{n+1} - t_n}$ and on the covariance function $k(t, t')$ of the original process $s(t)$.
The original (scalar-valued) and the augmented (vector-valued) processes are related through the equation
\begin{align*}
s(t) = \bm{h}^\Tr \bar{\bm{s}}(t),
\end{align*}
where $\bm{h} \in \mathbf{R}^K$ is called the \emph{measurement vector}.
Figure~\ref{fig:ssm} illustrates our model from a state-space viewpoint.
It is important to note that the mutual time dependencies of Figure~\ref{fig:model} have been replaced by Markovian dependencies.
In this state-space formulation, posterior inference can be done in time $O(K^3 N)$ by using the Rauch--Tung--Striebel smoother \citep{sarkka2013bayesian}.
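For illustration, a generic NumPy sketch of this filtering--smoothing pass is shown below. It assumes that the matrices $\bm{A}_n$, $\bm{Q}_n$, the measurement vector $\bm{h}$, the initial state covariance, and the Gaussian pseudo-observations $(\tilde{\mu}_n, \tilde{\sigma}^2_n)$ have already been constructed; it is a simplified sketch rather than the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np

def rts_smoother(A, Q, h, mu_tilde, var_tilde, P0):
    # Kalman filter + Rauch-Tung-Striebel smoother, O(K^3 N) overall.
    # A, Q: lists of (K, K) transition / process-noise matrices (length N - 1).
    # h: (K,) measurement vector, so that s(t_n) = h @ state_n.
    # mu_tilde, var_tilde: (N,) pseudo-observation means and variances.
    # P0: (K, K) prior (stationary) state covariance; the prior mean is zero.
    N, K = len(mu_tilde), len(h)
    m_f, P_f = [], []                       # filtered means and covariances
    m, P = np.zeros(K), P0
    for n in range(N):
        if n > 0:                           # predict
            m = A[n - 1] @ m
            P = A[n - 1] @ P @ A[n - 1].T + Q[n - 1]
        S = h @ P @ h + var_tilde[n]        # innovation variance
        gain = P @ h / S                    # Kalman gain
        m = m + gain * (mu_tilde[n] - h @ m)
        P = P - np.outer(gain, h @ P)
        m_f.append(m); P_f.append(P)
    m_s, P_s = list(m_f), list(P_f)         # smoothed estimates (backward pass)
    for n in range(N - 2, -1, -1):
        m_pred = A[n] @ m_f[n]
        P_pred = A[n] @ P_f[n] @ A[n].T + Q[n]
        G = P_f[n] @ A[n].T @ np.linalg.inv(P_pred)
        m_s[n] = m_f[n] + G @ (m_s[n + 1] - m_pred)
        P_s[n] = P_f[n] + G @ (P_s[n + 1] - P_pred) @ G.T
    return m_s, P_s   # marginal of s(t_n): mean h @ m_s[n], variance h @ P_s[n] @ h
\end{verbatim}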
\begin{figure}[t]
\centering
\input{fig/pgm-ssm}
\caption{State-space reformulation of our model.
With respect to the representation in Figure~\ref{fig:model}, the number of latent variables has increased, but they now form a Markov chain.}
\label{fig:ssm}
\end{figure}
\paragraph{From Covariance Functions to State-Space Models}
A method for converting a process $s(t) \sim \mathrm{GP}[\bm{0}, k(t, t')]$ into an equivalent Gauss-Markov process $\bar{\bm{s}}(t)$ by explicit construction of $\bm{h}$, $\{\bm{A}_n\}$ and $\{\bm{Q}_n\}$ is given in \citet{solin2016stochastic}.
All the covariance functions described in Section~\ref{sec:covariances} lead to exact state-space reformulations of order $K \leq 3$.
The composition of covariance functions through addition or multiplication can also be treated exactly and automatically.
Some other covariance functions, such as the squared-exponential function or periodic functions \citep{rasmussen2006gaussian}, cannot be transformed exactly but can be approximated effectively and to arbitrary accuracy \citep{hartikainen2010kalman, solin2014explicit}.
Finally, we stress that the state-space viewpoint is useful because it leads to a faster inference procedure; but defining the time dynamics of the score processes in terms of covariance functions is much more intuitive.
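As a concrete example of this construction (a standard result, recalled here only for illustration), the Matérn~1/2 covariance $k(t, t') = \sigma_{\text{dyn}}^2 \exp(-\Abs{t - t'} / \ell)$ corresponds to a scalar Ornstein--Uhlenbeck process, i.e., $K = 1$ with
\begin{align*}
\bm{h} = 1, \qquad
\bm{A}_n = \exp(-\Delta_n / \ell), \qquad
\bm{Q}_n = \sigma_{\text{dyn}}^2 \left[ 1 - \exp(-2 \Delta_n / \ell) \right],
\end{align*}
where $\Delta_n \doteq \Abs{t_{n+1} - t_n}$.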
\subsection{Predicting at a New Time}
Given the approximate posterior $q(\bm{s}_1, \ldots, \bm{s}_M)$, the probability of observing outcome $y$ at a new time $t^*$ given the feature vector $\bm{x}$ is given by
\begin{align*}
p(y \mid \bm{x}, t^*) = \int_\mathbf{R} p(y \mid z) p(z) dz,
\end{align*}
where $z = \bm{x}^\Tr \bm{s}(t^*)$ and the distribution of $s_m(t^*)$ is derived from the posterior $q(\bm{s}_m)$.
By using the state-space formulation of the model, the prediction can be done in constant time \citep{saatci2012scalable}.
\section{Experimental Evaluation}
\label{sec:eval}
\begin{figure*}[t]
\includegraphics{fig/scores}
\caption{
Temporal evolution of the score processes ($\mu \pm \sigma$) corresponding to selected basketball teams (top) and tennis players (bottom).
The basketball teams are the Los Angeles Lakers (LAL), the Chicago Bulls (CHI) and the Boston Celtics (BOS).}
\label{fig:scores}
\end{figure*}
In this section, we evaluate our model and inference algorithm on real data.
Our experiments cover three aspects.
First, in Section~\ref{sec:evaldyn}, we compare the predictive performance of our model against competing approaches, focusing on the impact of flexible time-dynamics.
Second, in Section~\ref{sec:evalgen}, we show that by carefully choosing features and observation likelihoods, predictive performance can be improved significantly.
Finally, in Section~\ref{sec:evalinf}, we study various facets of our inference algorithm.
We measure the impact of the mean-field assumption and of the choice of variational objective, and we demonstrate the scalability of the algorithm.
\paragraph{Datasets}
We consider six datasets of pairwise-comparison outcomes of various sports and games.
Four of them contain timestamped outcomes; they relate to tennis, basketball, association football and chess.
Due to the large size of the chess dataset\footnote{%
This dataset consists of all the match outcomes contained in \emph{ChessBase Big Database 2018}, available at \url{https://shop.chessbase.com/en/products/big_database_2018}.}, we also consider a subset of the data spanning 30 years.
The two remaining datasets contain match outcomes of the StarCraft computer game and do not have timestamps.
Table~\ref{tab:datasets} provides summary statistics for all the datasets.
Except for chess, all data are publicly available online\footnote{%
Tennis: \url{https://github.com/JeffSackmann/tennis_atp},
basketball: \url{https://projects.fivethirtyeight.com/nba-model/nba_elo.csv},
football: \url{https://int.soccerway.com/},
StarCraft: \url{https://github.com/csinpi/blade_chest}.
}.
\begin{table}[t]
\caption{
Summary statistics of the sports datasets.}
\label{tab:datasets}
\centering
\input{tab/datasets}
\end{table}
\paragraph{Performance Metrics}
Let $(\bm{x}, t^*, y)$ be an observation.
We measure performance by using
the logarithmic loss: $-\log p(y \mid \bm{x}, t^*)$ and
the accuracy: $\textbf{1}\{y = \Argmax_{y'} p(y' \mid \bm{x}, t^*)\}$.
We report their average values on the test set.
\paragraph{Methodology}
Unless specified otherwise, we partition every dataset into a training set containing the first 70\% of the observations and a test set containing the remaining 30\%, in chronological order.
The various hyperparameters (such as covariance functions and their parameters, learning rates, etc.) are selected based on the training data only, by maximizing the log-marginal likelihood of Bayesian models and by minimizing the average leave-one-out log loss otherwise.
The final hyperparameter configuration of all models can be found in Appendix~\ref{app:eval}.
In order to predict the outcome of an observation at time $t^*$, we use \emph{all} the data (in both training and test sets) up to the day preceding $t^*$.
This closely mimics the setting where a predictor must guess the outcome of an event in the near future based on all past data.
Unless specified otherwise, we use Algorithm~\ref{alg:inference} with the EP variational objective, and we declare convergence when the improvement in log-marginal likelihood falls below $10^{-3}$.
Typically, the algorithm converges in less than a hundred iterations.
\subsection{Flexible Time-Dynamics}
\label{sec:evaldyn}
In this experiment, we compare the predictive performance of our model against competing approaches on four timestamped datasets.
In order to better isolate and understand the impact of accurately modeling \emph{time dynamics} on predictive performance, we keep the remaining modeling choices simple: we treat all outcomes as ordinal-valued (i.e., \emph{win}, \emph{loss} and possibly \emph{tie}) with a probit likelihood and use a one-to-one mapping between competitors and features.
In Table~\ref{tab:predperf}, we report results for the following models:
\begin{itemize}
\item \emph{Random}. This baseline assigns equal probability to every outcome.
\item \emph{Constant}. The model of Section~\ref{sec:model} with a constant covariance function.
This model assumes that the scores do not vary over time.
\item \emph{Elo}. The system used by the World Chess Federation \citep{elo1978rating}.
Time dynamics are a by-product of the update rule (cf.\ Section~\ref{sec:relwork}).
\item \emph{TrueSkill}. The Bayesian model of \citet{herbrich2006trueskill}.
Time dynamics are assumed to follow Brownian motion (akin to our Wiener kernel) and inference is done in a single pass over the data.
\item \emph{Ours}. The model of Section~\ref{sec:model}.
We try multiple covariance functions and report the one that maximizes the log-marginal likelihood.
\end{itemize}
\begin{table*}[t]
\caption{
Predictive performance of our model and of competing approaches on four datasets, in terms of average log loss and average accuracy.
The best result is indicated in bold.}
\label{tab:predperf}
\centering
\input{tab/predperf}
\end{table*}
Our model matches or outperforms other approaches in almost all cases, both in terms of log loss and in terms of accuracy.
Interestingly, different datasets are best modeled by using different covariance functions, perhaps capturing underlying skill dynamics specific to each sport.
\paragraph{Visualizing and Interpreting Scores}
Figure~\ref{fig:scores} displays the temporal evolution of the score of selected basketball teams and tennis players.
In the basketball case, we can recognize the dominance of the Boston Celtics in the early 1960's and the Chicago Bulls' strong 1995-96 season.
In the tennis case, we can see the progression of a new generation of tennis champions at the turn of the 21\textsuperscript{st} century.
Plotting scores over time provides an effective way to compactly represent the history of a given sport.
Analyzing the optimal hyperparameters (cf.\ Table~\ref{tab:hpvals} in Appendix~\ref{app:eval}) is also insightful: the characteristic timescale of the dynamic covariance component is \num{1.75} and \num{7.47} years for basketball and tennis, respectively.
The scores of basketball teams thus appear to be much more volatile.
\subsection{Generality of the Model}
\label{sec:evalgen}
In this section, we demonstrate how we can take advantage of additional modeling options to further improve predictive performance.
In particular, we show that
choosing an appropriate likelihood and
parametrizing opponents with match-dependent combinations of features
can bring substantial gains.
\subsubsection{Observation Models}
Basketball and football match outcomes actually consist of \emph{points} (respectively, goals) scored by each team during the match.
We can make use of this additional information to improve predictions \citep{maher1982modelling}.
For each of the basketball and football datasets, we compare the best model obtained in Section~\ref{sec:evaldyn} to alternative models.
These alternative models keep the same time dynamics but use either
\begin{enumerate}
\item a logit likelihood on the ordinal outcome,
\item a Gaussian likelihood on the points difference, or
\item a Poisson-exp likelihood on the points scored by each team.
\end{enumerate}
The results are presented in Table~\ref{tab:lklperf}.
The logit likelihood performs similarly to the probit one \citep{stern1992all}, but likelihoods that take points into account can indeed lead to better predictions.
\begin{table}[ht]
\caption{
Average predictive log loss of models with different observation likelihoods.
The best result is indicated in bold.}
\label{tab:lklperf}
\centering
\input{tab/lklperf}
\end{table}
\subsubsection{Match-Dependent Parametrization}
For a given match, we can represent opponents by using (non-trivial) linear combinations of features.
This makes it possible, for example, to represent context-specific information that might influence the outcome probabilities.
In the case of football, for example, it is well-known that a team playing at home has an advantage.
Similarly, in the case of chess, playing White results in a slight advantage.
Table~\ref{tab:homeadv} displays the predictive performance achieved by our model when the score of the home team (respectively, that of the opponent playing White) is modeled by a linear combination of two features: the identity of the team or player and an \emph{advantage} feature.
Including this additional feature improves performance significantly, and we conclude that representing opponents in terms of match-dependent combinations of features can be very useful in practice.
\begin{table}[ht]
\caption{
Predictive performance of models with a home or first-mover advantage in comparison to models without.}
\label{tab:homeadv}
\centering
\input{tab/homeadv}
\end{table}
\subsubsection{Capturing Intransitivity} \label{sec:eval-intrans}
Score-based models such as ours are sometimes believed to be unable to capture meaningful intransitivities, such as those that arise in the ``rock-paper-scissors'' game~\citep{chen2016modeling}.
This is incorrect: if an opponent's score can be modeled by using match-dependent features, we can simply add an \emph{interaction} feature for every pair of opponents.
In the next experiment, we model the score difference between two opponents $i, j$ as $d \doteq s_i - s_j + s_{ij}$.
Informally, the model learns to explain the transitive effects through the usual player scores $s_i$ and $s_j$ and the remaining intransitive effects are captured by the interaction score $s_{ij}$.
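In terms of the feature vectors of our model, this simply amounts to a match-dependent vector $\bm{x}$ with entries $+1$ for player $i$, $-1$ for player $j$, and $+1$ for the interaction feature of the pair $(i, j)$ (zeros elsewhere), so that $\bm{x}^\Tr \bm{s} = s_i - s_j + s_{ij}$.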
We compare this model to the Blade-Chest model of \citet{chen2016modeling} on the two StarCraft datasets, known to contain strong intransitivities.
The Blade-Chest model is specifically designed to handle intransitivities in comparison data.
We also include two baselines, a simple Bradley--Terry model without the interaction features (\emph{logit}) and a non-parametric estimator (\emph{naive}) that estimates probabilities based on match outcomes between each pair---without attempting to capture transitive effects.
As shown in Figure~\ref{fig:starcraft}, our model outperforms all other approaches, including the Blade-Chest model.
More details on this experiment can be found in Appendix~\ref{app:intrans}.
\begin{figure}[t]
\includegraphics{fig/starcraft}
\caption{
Average log loss of four models (Bradley--Terry, naive, blade-chest and ours) on the StarCraft datasets.}
\label{fig:starcraft}
\end{figure}
\subsection{Inference Algorithm}
\label{sec:evalinf}
We turn our attention to the inference algorithm and study the impact of several implementation choices.
We start by quantifying the impact of the mean-field assumption~\eqref{eq:approxpost} and of the choice of variational objective on predictive performance.
Then, we demonstrate the scalability of the algorithm on the ChessBase dataset and measure the acceleration obtained by parallelizing the algorithm.
\subsubsection{Mean-Field Approximation}
In order to gain understanding on the impact of the factorization assumption in~\eqref{eq:approxpost}, we devise the following experiment.
We consider a small subset of the basketball data containing all matches between 2000 and 2005 ($N = 6382$, $M = 32$).
We evaluate the predictive performance on each week of the last season by using all the matches prior to the test week as training data.
Our model uses a one-to-one mapping between teams and features, a constant + Matérn 1/2 covariance function, and a Gaussian likelihood on the points difference.
We compare the predictive performance resulting from two inference variants,
\begin{enuminline}
\item mean-field approximate inference, i.e., Algorithm~\ref{alg:inference}, and
\item \emph{exact} posterior inference\footnote{%
This is possible for this particular choice of likelihood thanks to the self-conjugacy of the Gaussian distribution, but at a computational cost $O(N^3)$.}.
\end{enuminline}
Both approaches lead to an average log loss of \num{0.634} and an average accuracy of \num{0.664}.
Strikingly, both values are equal up to four decimal places, suggesting that the mean-field assumption is benign in practice \citep{birlutiu2007expectation}.
\subsubsection{Variational Objective}
Next, we study the influence of the variational method.
We re-run the experiments of Section~\ref{sec:evaldyn}, this time by using the reverse-KL objective instead of EP.
The predictive performance in terms of average log loss and average accuracy is equal to the EP case (Table~\ref{tab:predperf}, last three columns) up to three decimal places, for all four datasets.
Hence, the variational objective seems to have little practical impact on predictive performance.
As such, we recommend using the reverse-KL objective for likelihoods whose log-partition function~\eqref{eq:logpart} cannot be computed in closed form, as the numerical integration of the expected log-likelihood~\eqref{eq:exp-ll} is generally more stable.
\subsubsection{Scalability}
Finally, we demonstrate the scalability of our inference algorithm by training a model on the full ChessBase dataset, containing over \num{7} million observations.
We implement a multithreaded version of Algorithm~\ref{alg:inference} in the Go programming language\footnote{%
The code is available at \url{https://github.com/lucasmaystre/gokick}.}
and run the inference computation on a machine containing two 12-core Intel Xeon E5-2680 v3 (Haswell generation) processors clocked at 2.5 GHz.
Figure~\ref{fig:scalability} displays the running time per iteration as function of the number of worker threads.
By using \num{16} threads, we need only slightly over \num{5} seconds per iteration.
\begin{figure}[t]
\includegraphics{fig/scalability}
\caption{
Running time per iteration of a multithreaded implementation of Algorithm~\ref{alg:inference} on the ChessBase full dataset, containing over \num{7} million observations.}
\label{fig:scalability}
\end{figure}
\section{Related Work}
\label{sec:relwork}
Probabilistic models for pairwise comparisons have been studied for almost a century.
\citet{thurstone1927law} proposed his seminal \emph{law of comparative judgment} in the context of psychology.
Almost concurrently, \citet{zermelo1928berechnung} developed a method to rank chess players from match outcomes.
Both rely on the same idea: objects are characterized by a latent score (e.g., the intrinsic quality of a perceptual variable, or a chess player's skill) and the outcomes of comparisons between objects depend on the difference between the corresponding latent scores.
Zermelo's model was later rediscovered by \citet{bradley1952rank} and is currently usually referred to as the Bradley--Terry model.
\citet{stern1992all} provides a unifying framework and shows that, in practice, Thurstone's and Zermelo's models result in similar fits to the data.
In the context of sports, some authors suggest going beyond ordinal outcomes and investigate pairwise-comparison models with Gaussian \citep{guo2012score}, Poisson \citep{maher1982modelling, guo2012score}, or Skellam \citep{karlis2009bayesian} likelihoods.
In many applications of practical interest, comparison outcomes tend to vary over time.
In chess, for example, this is due to the skill of players changing over time.
The World Chess Federation, which uses a variant of the Bradley--Terry model to rank players, updates player scores after each match by using a stochastic gradient update:
\begin{align*}
s_i \gets s_i + \lambda \frac{\partial}{\partial s_i} \log p(y \mid s_i - s_j),
\end{align*}
where $\lambda \in \mathbf{R}$ is a learning rate.
It is interesting that this simple online update scheme (known as the Elo rating system \citep{elo1978rating}) enables a basic form of ``tracking'': the sequence of scores gives an indication of a player's evolution over time.
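For concreteness, a minimal sketch of this update for a logit (Bradley--Terry) likelihood and binary outcomes is shown below; draws and the Elo system's particular scaling conventions are deliberately ignored.
\begin{verbatim}
import math

def elo_sgd_update(s_i, s_j, y, lr=0.1):
    # y = 1 if player i won, y = 0 if player i lost.
    p = 1.0 / (1.0 + math.exp(-(s_i - s_j)))  # P(i beats j) under Bradley-Terry
    grad = y - p                              # d/ds_i of log p(y | s_i - s_j)
    return s_i + lr * grad, s_j - lr * grad
\end{verbatim}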
Whereas, in this case, score dynamics occur as a by-product of the learning rule, several attempts have been made to model time dynamics explicitly.
Usually, these models assume a variant of Brownian motion:
\begin{align}
\label{eq:brownian}
s(t_{n+1}) = s(t_n) + \varepsilon_{n},
\qquad \varepsilon_{n} \sim \mathcal{N}(0, \sigma^2 \Abs{t_{n+1} - t_n}).
\end{align}
\citet{glickman1993paired} and \citet{fahrmeir1994dynamic} are, to the best of our knowledge, the first to consider such a model.
\citet{glickman1999parameter} derives a computationally-efficient Bayesian inference method by using closed-form approximations of intractable integrals.
\citet{herbrich2006trueskill} and \citet{dangauthier2007trueskill} propose a similar method based on Gaussian filtering and expectation propagation, respectively.
\citet{coulom2008whole} proposes a method based on the Laplace approximation.
Our model strictly subsumes these approaches; Brownian motion is simply a special case of our model obtained by using the Wiener kernel.
One of the key contributions of our work is to show that it is not necessary to restrict the dynamics to Brownian motion in order to get linear-time inference.
Finally, we briefly review literature on the link between Gaussian processes (GPs) with scalar inputs and state-space models (SSMs), as this forms a crucial component of our fast inference procedure.
Excellent introductions to this link can be found in the theses of \citet{saatci2012scalable} and \citet{solin2016stochastic}.
The connection has been known since the seminal paper of \citet{ohagan1978curve}, which introduced Gaussian processes as a method to tackle general regression problems.
It was recently revisited by \citet{hartikainen2010kalman}, who provide formulae for going back-and-forth between GP covariance and state-space forms.
Extensions of this link to non-Gaussian likelihood models are discussed in \citet{saatci2012scalable} and \citet{nickisch2018state}.
To the best of our knowledge, we are the first to describe how the link between GPs and SSMs can be used in the context of observation models that combine \emph{multiple} processes, by using a mean-field variational approximation.
\section{Conclusions}
\label{sec:conclusions}
We have presented a probabilistic model of pairwise comparison outcomes that can capture a wide range of temporal dynamics.
This model reaches state-of-the-art predictive performance on several sports datasets, and it enables generating visualizations that help in understanding comparison time-series.
To fit our model, we have derived a computationally efficient approximate Bayesian inference algorithm.
To the best of our knowledge, our algorithm is the first linear-time Bayesian inference algorithm for dynamic pairwise comparison models that minimizes the reverse-KL divergence.
One of the strengths of our approach is that it makes it possible to discover the structure of the time dynamics by comparing the log-marginal likelihood of the data under various choices of covariance functions.
In the future, we would like to fully automate this discovery process, in the spirit of the \emph{automatic statistician} \citep{duvenaud2014automatic}.
Ideally, given only the comparison data, we should be able to systematically discover the time dynamics that best explain the data, and generate an interpretable description of the corresponding covariance functions.
\begin{acks}
We thank Holly Cogliati-Bauereis, Ksenia Konyushkova, and the anonymous reviewers for careful proofreading and constructive feedback.
\end{acks}
\balance
\appendix
\section{Inference Algorithm}
\label{app:inference}
For conciseness, we drop the index $n$ and consider a single observation $(\bm{x}, t^*, y) \in \mathcal{D}$.
Let $\bm{s} \doteq \bm{s}(t^*)$ be the vector containing the score of all features at the time of the observation.
Instead of optimizing the ``standard'' parameters $\tilde{\mu}_m, \tilde{\sigma}^2_m$, we will optimize the corresponding \emph{natural} parameters $\tilde{\alpha}_m, \tilde{\beta}_m$.
They are related through the following equations.
\begin{align*}
\tilde{\alpha}_m = \tilde{\mu}_m / \tilde{\sigma}^2_m, \qquad
\tilde{\beta}_m = 1 / \tilde{\sigma}^2_m
\end{align*}
\paragraph{Expectation Propagation}
Let $q_{-}(\bm{s}) = \mathcal{N}(\bm{\mu}, \bm{\Sigma})$ be the cavity distribution.
The log-partition function can be rewritten as a one-dimensional integral:
\begin{align*}
\log Z
&\doteq \log \Exp[q_{-}]{ p(y \mid \bm{x}^\Tr \bm{s}) }
= \log \int_{\bm{s}} p(y \mid \bm{x}^\Tr \bm{s}) \mathcal{N}(\bm{s} \mid \bm{\mu}, \bm{\Sigma}) d\bm{s} \\
&= \log \int_{u} p(y \mid u) \mathcal{N}(u \mid \mu, \sigma^2) du,
\end{align*}
where $\mu = \bm{x}^\Tr \bm{\mu}$ and $\sigma^2 = \bm{x}^\Tr \bm{\Sigma} \bm{x}$.
The function \textsc{Derivatives} computes the first and second derivatives with respect to this mean, i.e.,
\begin{align*}
\delta_1 = \frac{\partial}{\partial \mu} \log Z, \qquad
\delta_2 = \frac{\partial^2}{\partial \mu^2} \log Z.
\end{align*}
Given these quantities, the function \textsc{UpdateParams} updates the pseudo-observations' parameters for each $m \in \mathcal{X}$:
\begin{align*}
\tilde{\alpha}_m &\gets (1 - \lambda) \tilde{\alpha}_m
+ \lambda \left[ \frac{x_m \delta_1 - \mu_m x_m^2 \delta_2}{1 + \Sigma_{mm} x_m^2 \delta_2} \right], \\
\tilde{\beta}_m &\gets (1 - \lambda) \tilde{\beta}_m
+ \lambda \left[ \frac{-x_m^2 \delta_2}{1 + \Sigma_{mm} x_m^2 \delta_2} \right],
\end{align*}
where $\lambda \in (0, 1]$ is a learning rate.
A formal derivation of these update equations can be found in \citet{seeger2007bayesian, rasmussen2006gaussian, minka2001family}.
\paragraph{Reverse KL Divergence}
Let $q(\bm{s}) = \mathcal{N}(\bm{\mu}, \bm{\Sigma})$ be the current posterior.
Similarly to the log-partition function, the expected log-likelihood can be rewritten as a one-dimensional integral:
\begin{align*}
L
&\doteq \Exp[q]{ \log p(y \mid \bm{x}^\Tr \bm{s}) }
= \int_{\bm{s}} \log p(y \mid \bm{x}^\Tr \bm{s}) \mathcal{N}(\bm{s} \mid \bm{\mu}, \bm{\Sigma}) d\bm{s} \\
&= \int_{u} \log p(y \mid u) \mathcal{N}(u \mid \mu, \sigma^2) du,
\end{align*}
where $\mu = \bm{x}^\Tr \bm{\mu}$ and $\sigma^2 = \bm{x}^\Tr \bm{\Sigma} \bm{x}$.
The function \textsc{Derivatives} computes the first and second derivatives with respect to this mean, i.e.,
\begin{align*}
\delta_1 = \frac{\partial}{\partial \mu} L, \qquad
\delta_2 = \frac{\partial^2}{\partial \mu^2} L.
\end{align*}
Given these quantities, the function \textsc{UpdateParams} updates the pseudo-observations' parameters for each $m \in \mathcal{X}$:
\begin{align*}
\tilde{\alpha}_m &\gets (1 - \lambda) \tilde{\alpha}_m
+ \lambda \left[ x_m \delta_1 - \mu_m x_m^2 \delta_2 \right], \\
\tilde{\beta}_m &\gets (1 - \lambda) \tilde{\beta}_m
+ \lambda \left[ -x_m^2 \delta_2 \right],
\end{align*}
where $\lambda \in (0, 1]$ is the learning rate.
A formal derivation of these update equations can be found in \citet{khan2017conjugate}.
\section{Experimental Evaluation}
\label{app:eval}
The code used to produce the experiments presented in this paper is publicly available online.
It consists of two software libraries.
\begin{itemize}
\item A library written in the Python programming language, available at \url{https://github.com/lucasmaystre/kickscore}.
This library provides a reference implementation of Algorithm~\ref{alg:inference} with a user-friendly API; a short, purely illustrative usage sketch is given after this list.
\item A library written in the Go programming language, available at \url{https://github.com/lucasmaystre/gokick}.
This library provides a multithreaded implementation of Algorithm~\ref{alg:inference}, focused on computational performance.
\end{itemize}
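As a purely illustrative usage sketch of the Python library: the class, kernel, and method names below are assumptions on our part and may not match the library's actual API; please refer to the repository's documentation for the authoritative interface.
\begin{verbatim}
import kickscore as ks  # hypothetical usage; all names below are assumptions

model = ks.BinaryModel()
kernel = ks.kernel.Constant(var=0.03) + ks.kernel.Matern32(var=0.14, lscale=1.75)
for team in ("BOS", "LAL"):
    model.add_item(team, kernel=kernel)

model.observe(winners=["BOS"], losers=["LAL"], t=0.0)  # one outcome at time t
model.fit()                                            # run approximate inference
p_win, p_lose = model.probabilities(["BOS"], ["LAL"], t=1.0)  # predict at a new time
\end{verbatim}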
Additionally, the scripts and computational notebooks used to produce the experiments and figures presented in this paper are available at \url{https://github.com/lucasmaystre/kickscore-kdd19}.
\subsection{Hyperparameters}
Generally speaking, we choose hyperparameters based on a search over \num{1000} configurations sampled randomly in a range of sensible values (we always make sure that the best hyperparameters are not too close to the ranges' boundaries).
In the case of our models, we choose the configuration that maximizes the log-marginal likelihood of the training data.
In the case of TrueSkill and Elo, we choose the configuration that minimizes the leave-one-out log loss on the training data.
A list of all the hyperparameters is provided in Table~\ref{tab:hpdefs}.
A formal definition of the covariance functions we use is given in Table~\ref{tab:covfuncs}.
Finally, Table~\ref{tab:hpvals} lists the hyperparameter values used in most of the experiments described in Section~\ref{sec:eval}.
\begin{table}[ht]
\caption{
Hyperparameters and their description.}
\label{tab:hpdefs}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c l}
\toprule
Symbol & Description \\
\midrule
$\lambda$ & Learning rate \\
$\alpha$ & Draw margin \\
$\sigma^2_n$ & Observation noise (Gaussian likelihood) \\
$\sigma_{\text{cst}}^2$ & Variance (constant covariance) \\
$\sigma_{\text{lin}}^2$ & Variance (linear covariance) \\
$\sigma_{\text{W}}^2$ & Variance (Wiener covariance) \\
$\nu$ & Smoothness (Matérn covariance) \\
$\sigma_{\text{dyn}}^2$ & Variance (Matérn covariance) \\
$\ell$ & Timescale, in years (Matérn covariance) \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{
Covariance functions.}
\label{tab:covfuncs}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l l}
\toprule
Name & $k(t, t')$ \\
\midrule
Constant & $\sigma_{\text{cst}}^2$ \\
Linear & $\sigma_{\text{lin}}^2 t t'$ \\
Wiener & $\sigma_{\text{W}}^2 \min \{ t, t' \}$ \\
Matérn, $\nu = 1/2$ & $\sigma_{\text{dyn}}^2 \exp (-\Abs{t - t'} / \ell)$ \\
Matérn, $\nu = 3/2$ & $\sigma_{\text{dyn}}^2 (1 +\sqrt{3} \Abs{t - t'} / \ell) \exp (-\sqrt{3} \Abs{t - t'} / \ell)$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table*}[t]
\caption{
Hyperparameter values used for the models of Section~\ref{sec:eval}.}
\label{tab:hpvals}
\centering
\input{tab/hpvals}
\end{table*}
\subsection{Capturing Intransitivity}
\label{app:intrans}
We closely follow the experimental procedure of \citet{chen2016modeling} for the experiment of Section~\ref{sec:eval-intrans}.
In particular, we randomly partition each dataset into three splits: a training set (\num{50}\% of the data), a validation set (\num{20}\%), and a test set (\num{30}\%).
We train the model on the training set, choose hyperparameters based on the log loss measured on the validation set, and finally report the average log loss on the test set.
For the Blade-Chest model, we were not able to reproduce the exact results presented in \citet{chen2016modeling} using the open-source implementation available at \url{https://github.com/csinpi/blade_chest}.
Instead, we just report the best values in Figures 3 and 4 of the paper (\emph{blade-chest inner} model, with $d = 50$).
For the Bradley--Terry model, we use the Python library \emph{choix} and its function \texttt{opt\_pairwise}.
The only hyperparameter to set is the regularization strength $\alpha$.
For our model, since there are no timestamps in the StarCraft datasets, we simply use constant covariance functions.
The two hyperparameters are $\sigma^2_{\text{cst}}$ and $\sigma^2_{\times}$, the variance of the player features and of the interaction features, respectively.
The hyperparameter values that we used are given in Table~\ref{tab:hpintrans}.
\begin{table}[ht]
\caption{
Hyperparameter values for the experiment of Section~\ref{sec:eval-intrans}.}
\label{tab:hpintrans}
\centering
\sisetup{table-format=1.3}
\begin{tabular}{l S SS}
\toprule
& \text{Bradley--Terry} & \multicolumn{2}{c}{Ours} \\
\cmidrule(r){2-2} \cmidrule{3-4}
Dataset & {$\alpha$} & {$\sigma^2_{\text{cst}}$} & {$\sigma^2_{\times}$} \\
\midrule
StarCraft WoL & 0.077 & 4.821 & 3.734 \\
StarCraft HotS & 0.129 & 4.996 & 4.342 \\
\bottomrule
\end{tabular}
\end{table}
\balance
\section{Introduction}
\label{sec:intro}
American football is one of the most popular sports worldwide. The 2022 Super Bowl had 99.18 million television viewers in the US and 11.2 million streaming viewers around the world\cite{sportsmediawatch_2022}.
With more sophisticated developments in data analysis and storage, there has been significant growth in the demand for data-intensive analytic applications in sports.
Player identification is a standard way for interpreting sports videos and for helping commentators explain games on television. It also facilitates the comprehensive indexing of game video based on player participation per play.
However, in order to meet this demand, data analysis methods need to improve substantially over traditional manual methods, which are time-consuming and impractical for handling the large number of football games recorded today.
\begin{figure}[t]
\centering
\includegraphics[width= 0.45\textwidth]{images/intro1.png}
\caption{An example of the output of proposed system: player detection, jersey number recognition and game log.}
\label{fig:exampleintro}
\vspace{-15pt}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width= 1\textwidth]{images/0310.png}
\caption{Flowchart of the proposed work
}
\label{fig:flow}
\vspace{-15pt}
\end{figure*}
Although different player tracking strategies have been explored in various sports, fast and accurate player tracking in American football remains elusive. A key element of a successful analysis system is detection accuracy; however, camera blur, player motion, and small targets pose many challenges to achieving satisfactory accuracy for data analysis.
In comparison with most sports analysis tasks, players participating in football games are usually crowded together and prone to occlusion. The size of players also varies significantly depending on their positions relative to the camera. The small size of the targets increases the difficulty of identifying players. In addition, the instantaneous and rapid movement of the players can blur the image and distort their shapes. Different poses during falling and stumbling make detection even more challenging.
Motion-based background subtraction methods perform poorly on blurred and occluded objects. Classical object detection algorithms give unsatisfactory results in crowded settings, because overlapping objects can lead to false negative predictions. When such useful information is ignored, sports analysis systems produce incomplete results that may mislead viewers.
In this work, we developed an innovative football analysis system that automatically identifies players on a per-play basis and generates logs for play indexing. The proposed system operates directly on the video footage and simultaneously extracts player information and game clock information on a frame-by-frame basis. Fig.\ref{fig:exampleintro} shows the typical output of the proposed system.
We utilize a detection transformer-based network to detect players in a crowded setting and employ a focal loss-based network as a digit recognition stage to index each player given an imbalanced dataset. We then capture game clock data per play.
The football analysis system outputs a database containing an index of each play in the game, as well as a list of quarters, game clock start/end times, and participating players. We summarize our contributions as:
\begin{enumerate}
\item We propose, to our best knowledge, the first deep learning-based football analysis system for jersey number identification and logging.
\item We devise a two-stage neural network to address player detection and jersey number recognition, respectively. The two-stage design highlights the player-related region for precise jersey number recognition. The first stage, a transformer design, addresses the issue of identifying a player in a crowded setting. The second stage further addresses the issue of data imbalance in jersey number identification.
\item We build a database for play indexing. The system automatically extracts game clock information along with the identified jersey numbers, making it possible to generate a game log for player indexing.
\item We conduct experiments on American football videos and demonstrate that our proposed system outperforms Faster R-CNN by $37.7\%$ in player detection and achieves $91.7\%$ mean average precision in jersey number recognition.
\end{enumerate}
\section{Related Work}
\label{sec:formatting}
Recent advances in image processing techniques have opened the door for many interesting and effective solutions to this problem. Player identification can be classified into two main categories: Optical Character Recognition (OCR) based and Convolution Neural Networks (CNN) based approaches \cite{nady2021player}.
Before deep learning became popular, OCR-based approaches segmented images in HSV color space and classified numbers based on the segmentation results \cite{vsari2008player}. In another example, \cite{lu2013identification} extracted bounding boxes of players with a deformable part model detector, then performed OCR and classification. However, deep learning models have been shown to provide greater capacity in object detection tasks and to produce more accurate results \cite{zhao2019object}. The CNN approach in \cite{gerke2015soccer} detected players using a histogram of oriented gradients and a linear support vector machine (SVM) and classified player numbers within each bounding box by training a deep convolutional neural network. \cite{gerke2015soccer,li2018jersey,vats2021multi,nag2019crnn,ahammed2018basketball} explored jersey number recognition by adopting various deep learning models.
\cite{langmann2010multi} utilized Gaussian Mixture Model (GMM) for background subtraction to locate moving foreground objects from static background and to detect moving players from the field. In \cite{buric2018object}, Mask R-CNN and YOLOv2 were compared for player detection using the pre-trained models because of a lack of annotated data.
\cite{senocak2018part,liu2019pose,arbues2019single,suda2019prediction} utilized pose information to guide deep learning networks for player detection. \cite{liu2019pose} explored human body part cues and proposed a faster region-based convolutional neural network (Faster R-CNN) approach for player detection and number recognition, which works well when the player and number are clearly visible to the camera; however, this is often not the case in broadcast video. To address player identification in broadcast video, \cite{senocak2018part} utilized body part features to design a system that recognizes body representations at multiple scales with enhanced accuracy. \cite{cioppa2019arthus} proposed an adaptive real-time player segmentation approach that trains a student network with data annotated by a teacher network, which reduces manual annotation. However, segmentation requires more computational resources, and the use of Mask R-CNN degrades performance in complex scenarios.
Most existing sports analysis studies have focused on player detection in soccer, hockey, and basketball; they lack in-depth analysis of football player tracking. Football player identification is more challenging due to the clustering pattern of crowded players. Beyond player identification itself, dedicated player detection systems for football games have not been extensively studied. To date, no database systems for automatic jersey number detection and player information indexing have been reported.
\section{Methodology}
\subsection{Overall Framework}
We propose an automated football analysis system to index each play in a game by quarter, game clock start/end time, and a list of participating players, as shown in Fig.\ref{fig:flow}.
\begin{figure}[t]
\centering
\includegraphics[width= 0.45\textwidth]{images/out2112.jpg}
\caption{Image processed with one-stage YOLOv2, shows limited performance if a one-stage model is directly applied.}
\label{fig:out2112}
\vspace{-15pt}
\end{figure}
\textbf{Two-stage design.} We choose a two-stage object detection network to enhance the capability of small-object detection in high-definition videos. In sports broadcasting and player identification, the input image is usually in high definition. Conventional object detection networks usually downsize an input image to a specific size, e.g., YOLO\cite{redmon2018yolov3} resizes the input to $416\times416$ and
SSD\cite{liu2016ssd} resizes the input to $512\times512$, which makes the player region unrecognizable. As shown in Fig.\ref{fig:out2112}, a one-stage YOLOv2 \cite{redmon2017yolo9000} leads to unsatisfactory performance in two respects. First, multiple players are identified within a single bounding box, lacking the capability to differentiate crowded players. Second, jersey number information is eliminated in the downsized image, as only a limited number of pixels are associated with each jersey number, making the recognition prone to errors. To address this issue, we propose a two-stage design that first highlights a player region and then zooms into that region for jersey number recognition.
\textbf{Three subsystems.} The overall flowchart consists of three pattern recognition subsystems: player detection, jersey number recognition, and clock identification. For the first subsystem, we employ the Detection Transformer\cite{carion2020end}, which can detect players in extremely crowded scenes, addressing the technical challenge caused by highly overlapping targets. The second subsystem processes the detected player proposals and identifies jersey numbers accordingly. The digit recognition subsystem is implemented using RetinaNet50 \cite{retinanet-implementation}. The third subsystem extracts game clock information for play indexing. It operates on a frame-by-frame basis to extract the game clock time at the start of a play, using Tesseract for optical character recognition. The clock information is filtered into plays, which provides the estimated time of each play within a game. After the three subsystems complete, the identified jersey numbers are assembled into a readable and comprehensive database indexed by game clock timestamp.
\subsection{Player Detection Subsystem}
\label{subsubsection:player}
\begin{figure}[t]
\centering
\includegraphics[width= 0.45\textwidth]{images/augment.JPG}
\caption{ Example of data augmentation: (a) Original frame; (b) Original frame with blurry effect; (c) Scaled frame; (d) Scaled frame with blurry effect.}
\label{fig:augment}
\vspace{-15pt}
\end{figure}
Player detection presents two main challenges: motion effects, such as blurred and distorted objects, and crowded scenes. First, we use data augmentation to alleviate the data variation caused by motion effects. Second, we employ the DEtection TRansformer (DETR) \cite{carion2020end} to address player detection against crowded background settings.
To ensure our method is robust to frames that are motion-corrupted, we augment our training dataset with more instances having motion effects. Specifically, we apply a Gaussian filter \cite{singh2019advanced} to a portion of training images to create realistic motion-corrupted images. The training images are also scaled for data augmentation, as shown in Fig.\ref{fig:augment}.
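A minimal sketch of this augmentation step is shown below (assuming OpenCV; the kernel size, blur strength, and scale factor are illustrative placeholders rather than the exact values used in our experiments).
\begin{verbatim}
import cv2

def augment(frame, scale=1.5, ksize=5, sigma=2.0):
    # Create a scaled copy and a Gaussian-blurred copy of a frame.
    h, w = frame.shape[:2]
    scaled = cv2.resize(frame, (int(w * scale), int(h * scale)))
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), sigma)
    return scaled, blurred
\end{verbatim}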
Inspired by the transformer model in natural language processing, the player detection model is capable of identifying overlapping bounding boxes by reasoning about their spatial relationships.
By combining a transformer with a set-based Hungarian loss that enables unique matching between predictions and ground truth, DETR solves detection as a set prediction problem. Compared to conventional object detection methods, such as Faster R-CNN \cite{ren2015faster}, DETR benefits from the bipartite matching and the attention mechanism of the transformer encoder-decoder architecture \cite{vaswani2017attention}, and therefore excels at the detection task in the crowded setting of a football game.
The DETR framework in our implementation stems from ResNet50 \cite{he2016deep}, a conventional CNN backbone. The feature map is extracted from the input image, augmented with a positional encoding, and then fed into the transformer encoder. Each encoder layer has a multi-head self-attention module to explore the correlations within the input of that layer. For the transformer decoder, the learnable object queries pass through each decoder layer with a self-attention module to explore the relations among themselves.
In the decoder, another cross-attention module takes the key elements of the encoder's output feature map as side input, letting the object queries cross-learn features from the feature map. At the last step, a feed-forward network (FFN) is connected to each prediction from the decoder to predict the final class label, center coordinates, and dimensions of the bounding box. As in \cite{carion2020end}, the loss function combines a Hungarian matching loss and a bounding box loss.
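As an illustration, the COCO-pretrained DETR model released by the original authors can be loaded through \texttt{torch.hub}; the snippet below is a generic inference sketch (the fine-tuning on our player class and the exact confidence threshold are omitted, and the returned boxes are in DETR's normalized center-size format).
\begin{verbatim}
import torch

# Official COCO-pretrained DETR with a ResNet-50 backbone.
model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
model.eval()

def detect(images, threshold=0.9):
    # images: float tensor (B, 3, H, W), normalized with ImageNet statistics.
    with torch.no_grad():
        out = model(images)
    probs = out["pred_logits"].softmax(-1)[..., :-1]  # drop the "no object" class
    keep = probs.max(-1).values > threshold
    return out["pred_boxes"][keep], probs[keep]
\end{verbatim}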
\begin{figure}[t]
\centering
\includegraphics[width= 0.45\textwidth]{images/example.JPG}
\caption{ Example of players' proposals; example of digit on jersey.}
\label{fig:example}
\vspace{-15pt}
\end{figure}
\subsection{Jersey Number Recognition Subsystem}
\label{subsubsection:number}
Jersey number recognition severely suffers from a data imbalance issue: obtaining a balanced dataset covering jersey numbers from 1 to 99 would require a massive amount of training data. We propose to address this problem in two ways. First, rather than recognizing two-digit numbers, we strategically target single-digit recognition, thereby dramatically reducing the need for training data. Second, we employ RetinaNet \cite{retinanet-implementation,retinanet} with a focal loss design as the digit recognition step to alleviate the imbalance issue in the jersey number dataset.
With our single digit problem formulation, the number of classes is reduced from 100 to 10. With this significant reduction in the number of classes, it is then possible to achieve accurate results with a training dataset at the thousand level.
Although the distribution of the 10 classes may still be uneven, the data imbalance issue is further addressed by the focal loss.
The jersey number recognition subsystem is applied to the detected players from the previous subsystem. Fig.\ref{fig:example} shows examples of detected player proposals and the jersey numbers that are expected to be recognized. RetinaNet is a one-stage detector with fast performance, since each input image is processed in a single pass.
Two essential components of the network are the Feature Pyramid Network (FPN) and the focal loss. The FPN is built on top of ResNet-50 for feature extraction; it can take images of arbitrary size as input and outputs proportionally sized feature maps at multiple levels of the feature pyramid \cite{lin2017feature}. RetinaNet handles imbalanced training classes better through the use of focal loss. Traditionally, class imbalance limits the performance of a classification model because most regions of an image can be classified as background and contribute little towards learning \cite{leevy2018survey}.
As a consequence, the gradient is overwhelmed and leads to degenerate models. Focal loss introduces a focusing parameter into the cross-entropy loss, as defined by \cite{liu2018age}:
\begin{equation}
\mathrm{Focal\ Loss} = -\sum_{i=1}^{N} y_i \log(p_i) (1 - p_i)^{\gamma}
\end{equation}
where $N$ denotes the number of classes; $y_i$ is binary and equals $1$ when $i$ is the true label and $0$ otherwise; $p_i$ is the predicted probability of the $i$-th class; and $\gamma$ is a focusing parameter that down-weights the loss of easily classified regions, thereby differentiating the majority and minority regions.
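A minimal PyTorch sketch of this loss for multi-class classification is given below (a simplified illustration of the focal loss above; RetinaNet's actual implementation additionally uses an $\alpha$-balancing term and operates on dense anchor-level predictions).
\begin{verbatim}
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # logits: (B, C) raw class scores; targets: (B,) integer class labels.
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
    pt = log_pt.exp()
    # Down-weight well-classified examples by (1 - p_t)^gamma.
    return -((1.0 - pt) ** gamma * log_pt).sum()
\end{verbatim}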
\subsection{Game Clock Subsystem}
Game clock identification first extracts the game clock from the game video. The subsystem then filters through the game clock data to derive individual play windows, which provide the estimated start and end time of each play. The play windows give the absolute range of frames for each play and are associated with the player detection subsystem to index the players present in each play.
\textbf{Detection of Play Clock Information.}
We choose a simple and fast solution, Tesseract Optical Character Recognition (OCR) \cite{patel2012optical}, as the main software for the game clock subsystem, because the clock region in broadcast video has a distinctive background and a uniform font, as shown in the lower left of Fig.\ref{fig:flow}. The program first breaks the video down into individual frames. Each frame is then cropped to isolate the region containing the game clock information. The cropped image is then altered in three ways to reach the desired OCR input format. First, the image is grayscaled to enhance the contrast between the text and the background. Second, binary thresholding is applied to make the image black and white. Finally, the image is color-inverted to reach the desired OCR format, which is black text on a white background. After these preprocessing steps, the image is processed by the Tesseract OCR software for character detection, and the result is output together with the absolute frame number. Zero is designated as the output for scenes in which clock information is absent. As shown in Fig.\ref{fig:flow}, the information in the output text file is, from left to right, the absolute frame number, the game clock information, and the play clock information. Based on this setup, the absolute frame index allows for simple synchronization with the player detection subsystem.
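A condensed sketch of this preprocessing chain is given below (assuming OpenCV and the \texttt{pytesseract} wrapper; the crop coordinates are placeholders for the broadcast-specific clock region, not the actual values used).
\begin{verbatim}
import cv2
import pytesseract

def read_clock(frame, region=(20, 650, 300, 700)):
    # region = (x1, y1, x2, y2) is a placeholder for the clock area of the frame.
    x1, y1, x2, y2 = region
    crop = frame[y1:y2, x1:x2]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    inverted = cv2.bitwise_not(bw)  # black text on a white background
    return pytesseract.image_to_string(inverted)
\end{verbatim}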
\textbf{Play Window Detection.} A play designation filter program is designed to sift through the text file and output play windows that contain the range of frames for each play. As shown in Fig.\ref{fig:flow}, the output of the play designation filter from left to right is play number, quarter number, frame at play start, frame at play end, play start time, and play end time.
\begin{figure}[t]
\centering
\includegraphics[width= 0.45\textwidth]{images/histogram.JPG}
\caption{Color filtering demonstration: (a) player wears red jersey, (b) histogram of center region of red jersey, (c) player wears white jersey; (d) histogram of center region of white jersey.}
\label{fig:colorfilter}
\vspace{-15pt}
\end{figure}
\subsection{Database Subsystem}
The database subsystem synchronizes the results from the game clock and player identification subsystems, and writes the results into a readable format. In addition to the input from the two subsystems, there are two additional inputs that we consider as prior knowledge: team jersey color and team roster.
Home and away team players are categorized by color filtering, using features of the color histogram to differentiate the teams. As shown in the examples of Fig.\ref{fig:colorfilter}, in practice the most representative region, a small strip in the middle of each player proposal, is extracted to isolate the jersey. The $x$ axis of the color histograms is the intensity from $0$ to $255$, and the $y$ axis is the number of occurrences of that intensity. For a red-team player, the red channel intensity is much higher than the blue and green channels on average; validation on a large number of red jersey images analyzed in the same manner confirms that the red intensity is consistently much higher. In contrast, the histogram of a white jersey from the other team shows no dominant color channel, as expected. The color filtering method can easily be generalized to other jersey colors. Finally, jersey numbers are mapped to player names using the team roster.
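A simplified sketch of this color filtering step is shown below (assuming OpenCV's BGR channel order; the strip proportions and the decision margin are illustrative choices, not tuned values).
\begin{verbatim}
import numpy as np

def classify_team(player_crop, margin=20.0):
    # Return "red" or "white" based on the mean color of a central jersey strip.
    h, w = player_crop.shape[:2]
    strip = player_crop[int(0.3 * h):int(0.5 * h), int(0.3 * w):int(0.7 * w)]
    b, g, r = (float(np.mean(strip[:, :, c])) for c in range(3))  # BGR order
    # A red jersey shows a dominant red channel; a white jersey shows none.
    return "red" if r - max(b, g) > margin else "white"
\end{verbatim}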
\begin{figure*}[t]
\centering
\includegraphics[width= 0.9\textwidth]{images/result_vis.JPG}
\caption{Representative testing results. (a-c) original frames; (d-f) Faster R-CNN with $IoU = 0.75$; (g-i) Faster R-CNN with $IoU = 0.5$; (j-l) DETR with $IoU = 0.75$.}
\label{fig:qualitative}
\vspace{-15pt}
\end{figure*}
\section{Experiments}
\subsection{Implementation Details}
\label{subsubsection:imple}
In our experiments, we compare the performance of our player detection subsystem with a baseline model: Faster R-CNN with a ResNet-50 backbone \cite{ren2015faster}. With each model, we empirically set the IoU threshold to 0.75 to extract the detected bounding boxes. We compute the Average Precision (AP) at different Intersection over Union (IoU) thresholds and for different object sizes. Small objects range from $32\times32$ to $96\times96$ pixels; large objects are above $96\times96$. We exclude objects smaller than $32\times32$ because the jersey number is generally invisible when the whole player fits in such a small region. We transfer the weights learned by \cite{carion2020end} and further train DETR with a ResNet-50 backbone for 500 epochs until it converges on our football player dataset.
For comparison, the pretrained Faster R-CNN with a ResNet-50 backbone was trained on our dataset for 10,000 epochs until it converged. For each training epoch, we used a batch size of 2 and a learning rate of $10^{-4}$.
To evaluate the jersey number recognition subsystem, we trained the RetinaNet convolutional neural network with pretrained weights from the Model Zoo \cite{retinanet-implementation}. All images are padded with zeros to avoid distortion. The model was trained for 20,000 epochs with a learning rate of $10^{-4}$ and a batch size of 81. We adopt the focal loss with focusing parameter $\gamma = 2$. The performance of this model is measured by its mAP of detection and classification over all single digits. The optimal IoU threshold is found to be 0.55, and the optimal confidence threshold is 0.97. Note that this approach considers the two digits of a two-digit number independently; for example, the number ``51'' is labeled as a ``5'' and a ``1'' separately. Combining the digit recognition results in a left-to-right manner is done as a post-processing step.
For the game clock subsystem, after clock information is extracted by the OCR, we set the condition for play switching as a 40-second period of the game clock or a 5-second period of the snap clock. We test the subsystem with two game videos \cite{LSU,Southcarolina} and three evaluation criteria: the accuracy of the OCR, the quarter transitions, and the start/end time of each play. All experiments were carried out in parallel on four Quadro RTX 6000 GPUs.
\begin{figure*}[ht]
\centering
\includegraphics[width= 0.88\textwidth]{images/number.png}
\caption{(a) Digit counts grouped by jersey color.
(b) Confusion Matrix of digit recognition result, with $IoU$=$0.55$ and Confidence=$0.97$. }
\label{fig:digit count}
\vspace{-15pt}
\end{figure*}
\subsection{Dataset}
We validated our player detection subsystem using a $22$-minute highlight video of broadcast game footage from the 2015 Cotton Bowl \cite{ironbowl}. As a proof of concept, we choose highlight footage, which is free of distractions caused by graphics and other media found in full-length footage. It is in standard high-definition (HD) resolution of $1280\times720$ pixels. Full-length footage of the same game is available at \cite{fulllength}. Frames captured from the video at 30 frames per second totaled 1196, of which 221 are selected for training and testing. To make sure the training and testing data are independent of each other, 162 frames from the first quarter through the third quarter are used for training, and 59 frames from the fourth quarter are used for testing. For data augmentation, 50 randomly selected frames from the training set are enlarged for a zoom-in effect. One copy of each image is given a blurry effect. Therefore, the training set contains 424 frames after augmentation, with examples shown in Fig.\ref{fig:augment}.
For the digit recognition subsystem, 2401 player proposals with visible jersey numbers extracted from \cite{Southcarolina} are manually labeled. In addition, we identify and label 2464 more images from the 2015 Cotton Bowl \cite{ironbowl} to add to this dataset. Lastly, two copies of each image are created: one copy is a scaled version and the other is a motion-blurred version, which brings the final image count to 14595, with 9730 of the images being augmented copies.
\subsection{Results and Discussion}
\begin{table}[t]
\centering
\caption{Player detection performance. $AP_{0.05:0.95}$ denotes average $AP$ over $IoU$ from $0.05$ to $0.95$ with step size of $0.05$. $AP_{0.75}$ denotes the AP of $IoU=0.75$. $AP_{s}$ and $AP_{l}$ denote $AP_{0.05:0.95}$ of small and large objects, respectively.}
\begin{tabular}{l|llll}
& \textbf{$AP_{0.05:0.95}$} & \textbf{$AP_{0.75}$} & \textbf{$AP_{s}$} & \textbf{$AP_{l}$} \\ \hline
\textbf{Faster R-CNN} & 24.5 & 5.7 & 21.2 & 26.7 \\
\textbf{DETR} & 38.6 & 43.4 & 31.8 & 38.9
\end{tabular}
\label{table:res}
\vspace{-10pt}
\end{table}
\subsubsection{Player Detection Subsystem}
We quantitatively analyze the performance of the player detection subsystem, as shown in Table \ref{table:res}, comparing DETR against Faster R-CNN in terms of AP.
We compute the AP separately for small and large objects. The results show that DETR delivers better performance in an extremely crowded setting. With an IoU of $0.75$, its average precision is $43.4\%$, which outperforms Faster R-CNN by $37.7\%$. For small and large objects, the AP of the proposed model surpasses Faster R-CNN by $10.6\%$ and $12.2\%$, respectively.
\begin{table}[]
\centering
\caption{Evaluation of game clock subsystem}
\scalebox{0.7}{\renewcommand{\arraystretch}{1.3}
\begin{tabular}{l|c|c}
\multicolumn{1}{c|}{Evaluation} & Criteria & Result \\ \hline
Clock Detection & Times are read correctly & 95.6\% \\
Quarter Number & \begin{tabular}[c]{@{}c@{}}Quarter transitions are correct \\
\end{tabular} & 100\% \\
\begin{tabular}[c]{@{}l@{}}Start/ End Time \\ of each play\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Play Windows begin before the ball is snapped; \\ end before the next snap of the ball\end{tabular} &
62\%
\end{tabular}
}
\label{table:clc}
\vspace{-10pt}
\end{table}
\begin{figure*}[ht]
\centering
\includegraphics[width= 0.95\textwidth]{images/result_csv.png}
\caption{Part of the output game log, including the player tracking information, play number, quarter, start/end time, home/away team.}
\label{fig:csv}
\vspace{-15pt}
\end{figure*}
To analyze the performance qualitatively, we show three representative images and their test results using different models. Fig. \ref{fig:qualitative} (a-c) are original frames from a sparse scenario, a moderate scenario, and a crowded scenario, respectively. In the second row, Faster R-CNN fails to detect most players with an IoU of $0.75$. In the sparse and moderate scenarios, Faster R-CNN with an IoU of $0.5$ (the $3^{rd}$ row) is able to detect some of the players; however, a detection may cover only part of a player and therefore hurt the subsequent digit recognition subsystem. DETR, in the last row, outputs the desired result, with the majority of players detected with high confidence scores. In the crowded setting, Faster R-CNN fails to detect relatively small objects and, in particular, misses many players in the crowded region. As crowded scenes are very common and are the focus of our work, we favor the result delivered by DETR, with almost every player fully detected.
\subsubsection{Jersey Number Recognition Subsystem}
The final breakdown and distribution of the jersey number dataset is shown in Fig.\ref{fig:digit count}a. The horizontal axis shows the individual digits, and the vertical axis shows the number of instances. All columns are colored to show the split between home and away jerseys. The most frequent digit, 2, occurs twice as often as the least frequent digit, 6, indicating the data imbalance issue that generally exists in jersey number datasets, even at the single-digit level. However, this imbalance is not dominant, and we achieve a high mAP of 91.7\% in evaluation.
In Fig.\ref{fig:digit count}b, we show a confusion matrix which is normalized to the most popular class, digit 2.
Digits 2, 4, 7 and 9 achieve higher accuracy than digits 6 and 8, which corresponds to the class distribution shown in Fig.\ref{fig:digit count}a. Even though the focal loss alleviates the data imbalance issue to some extent,
we notice that most of the incorrect classifications come from digits that are partially obscured or located on a player angled substantially with respect to the camera.
\subsubsection{Game Clock Subsystem}
Evaluation results are shown in Table \ref{table:clc}. OCR performs well on recognizing the clock information with an accuracy of $95.6\%$. Play window detection achieves $100\%$ accuracy on quarter transition, and an accuracy of $62\%$ for start/end time designation.
In full-game footage, the reset of the play clock can be used to track the end and subsequent start of each play. Although there is no standard definition of the end and start of a play in accelerated footage, it is possible to detect the start of a new play based on jumps in the play clock.
\subsubsection{Database Subsystem}
We present a screenshot of the data format in the integrated database in Fig. \ref{fig:csv}. Each play is associated with game information, such as the quarter number, start and end time, home and away team, and participating players of the home team. It is worth noting that duplicated jersey numbers correspond to offensive and defensive players sharing the same number; this does not happen in the NFL, where duplicated numbers are not allowed. A viewer could retrieve information and perform analysis either by play index or by the jersey number of a player. Integrating player information with clock information can mitigate the impact of players who are partially occluded or invisible in some frames.
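As an illustration of how the game log can be queried, the snippet below assumes a CSV export with columns similar to those shown in Fig. \ref{fig:csv}; the column names are assumptions.
\begin{verbatim}
import pandas as pd

# Illustrative queries on the exported game log; the column names
# (play, quarter, start, end, home_players) are assumptions.
log = pd.read_csv("game_log.csv")

# All information for a given play index.
play_12 = log[log["play"] == 12]

# Every play in which home-team jersey number 7 was detected, assuming the
# participating players are stored as a comma-separated string.
has_7 = log["home_players"].str.split(",").apply(lambda nums: "7" in nums)
plays_with_7 = log[has_7]
\end{verbatim}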
\section{Conclusion}
In this work, we present a football game analysis system based on deep learning. The proposed system enables player identification and time tracking. We propose a two-stage network design that first highlights player regions and then identifies jersey numbers in zoomed-in sub-regions. The experimental results demonstrate robustness in handling crowded scenarios and the capability to alleviate data imbalance issues. The proposed end-to-end system delivers complete tracking information using a single pass over the football game footage. Our player detection and jersey number recognition subsystems can be directly generalized to other football game footage.
Qualitative and quantitative analyses have been performed thoroughly over the three subsystems separately and over the entire system. The system achieves accurate and efficient performance on the separate tasks. The experimental results illustrate its reliability in handling the challenges of football game analysis.
In future work, we plan to enhance player tracking by integrating information across consecutive frames and recovering players whose jersey numbers are not visible in some frames due to movement. Very small objects could be detected by using an ultra-high-resolution camera or in combination with a super-resolution method. Ball tracking could be fused into the current design and serve as complementary information for game clock tracking,
thereby helping to divide the plays. The interface that outputs the database will be further improved by developing a more robust graphical user interface.
\section*{Acknowledgement(s)}
The authors would like to thank Nguyen Hung Nguyen, Jeff Reidy, and Alex Ramey for their preliminary study, data annotation, and supporting experiments.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Given a live commentary document, the goal of sports game summarization is to generate the corresponding sports news~\cite{zhang-etal-2016-towards}. As shown in Figure~\ref{fig:example}, the commentary document records the commentaries of a whole game, while the sports news briefly introduces the core events of the game. Both the lengthy commentaries and the different text styles between commentaries and news make the task challenging~\cite{huang-etal-2020-generating,sportssum2,ksportssum}.
\citet{zhang-etal-2016-towards} propose sports game summarization task and construct the first dataset which contains 150 samples. Later, another dataset with 900 samples is presented by~\citet{Wan2016OverviewOT}.
To construct large-scale sports game summarization data, \citet{huang-etal-2020-generating} propose SportsSum with 5,428 samples; they are also the first to adopt deep learning technologies for this task.
Further, \citet{sportssum2} find the quality of SportsSum is limited due to the original rule-based data cleaning process. Thus, they manually clean the SportsSum dataset and obtain SportsSum2.0 dataset with 5,402 samples.
\citet{ksportssum} point out the hallucination phenomenon in sports game summarization. In other words, the sports news might contain additional knowledge which does not appear in the corresponding commentary documents.
To alleviate hallucination, they propose K-SportsSum dataset which includes 7,428 samples and a knowledge corpus recording the information of thousands of sports teams and players.
\begin{figure}[t]
\centerline{\includegraphics[width=0.45\textwidth]{figure/example.pdf}}
\caption{An \href{https://www.goal.com/en-us/match/internazionale-v-pescara/1gnkkzqjipx08k2ygwzelipyh}{example} of sports game summarization.}
\label{fig:example}
\end{figure}
Though great contributions have been made, all the above sports game summarization datasets are Chinese since the data is easy to collect. As a result, all relevant studies~\cite{zhang-etal-2016-towards,Yao2017ContentSF,Wan2016OverviewOT,Zhu2016ResearchOS,Liu2016SportsNG,lv2020generate,huang-etal-2020-generating,sportssum2} focus on Chinese, which might make it difficult for global researchers to understand and study this task.
To this end, in this paper, we present \texttt{GOAL}, the first English sports game summarization dataset which contains 103 samples, each of which includes a commentary document as well as the corresponding sports news.
Specifically, we collect data from an English sports website\footnote{\url{https://www.goal.com/}\label{url:goal}} that provides both commentaries and news. We study the dataset in a few-shot scenario because: (1) few-shot scenarios are more common in real applications; compared with summarization datasets in the news domain, which usually contain tens of thousands of samples, the largest Chinese sports game summarization dataset is still of limited scale; (2) English sports news is non-trivial to collect, since manually writing sports news is labor-intensive for professional editors~\cite{huang-etal-2020-generating,ksportssum}; we find that less than ten percent of the sports games on the English website are provided with sports news.
To further support the semi-supervised research on \texttt{GOAL}, we additionally provide 2,160 unlabeled commentary documents.
The unlabeled commentary documents, together with the labeled samples, could be used to train sports game summarization models in a self-training manner~\cite{lee2013pseudo}.
Based on our \texttt{GOAL}, we build and evaluate several baselines of different paradigms, i.e., extractive and abstractive baselines.
The experimental results show that the sports game summarization task is still challenging to deal with.
\section{GOAL}
\subsection{Data Collection}
\href{https://www.goal.com/}{Goal.com} is a sports website with plenty of football games information. We crawl both the commentary documents and sports news from the website.
Following~\citet{zhang-eickhoff-2021-soccer}, the football games are collected from four major soccer tournaments including the UEFA Champions League, UEFA Europa League, Premier League and Series A between 2016 and 2020.
As a result, we collect 2,263 football games in total. Among them, all games have commentary documents but only 103 games have sports news.
Next, the 103 commentary-news pairs form \texttt{GOAL} dataset with the splitting of 63/20/20 (training/validation/testing).
The remaining 2,160 (unlabeled) commentary documents are regarded as a supplement to our \texttt{GOAL}, supporting the semi-supervised setting in this dataset.
Figure~\ref{fig:example} gives an example from \texttt{GOAL}.
\begin{table}[t]
\centering
\resizebox{0.47\textwidth}{!}
{
\begin{tabular}{l|l|cc|cc}
\bottomrule[1pt]
\multicolumn{1}{c|}{\multirow{2}{*}{Dataset}} & \multicolumn{1}{c|}{\multirow{2}{*}{\# Num.}} & \multicolumn{2}{c|}{Commentary} & \multicolumn{2}{c}{News} \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & Avg. & 95th pctl & Avg. & 95th pctl \\ \toprule[1pt]
\multicolumn{6}{c}{Chinese} \\ \bottomrule[1pt]
\citet{zhang-etal-2016-towards} & 150 & - & - & - & - \\
\citet{Wan2016OverviewOT} & 900 & - & - & - & - \\
SportsSum\small~\cite{huang-etal-2020-generating} & 5428 & 1825.6 & 3133 & 428.0 & 924 \\
SportsSum2.0\small~\cite{sportssum2} & 5402 & 1828.6 & - & 406.8 & - \\
K-SportsSum\small~\cite{ksportssum} & 7854 & 1200.3 & 1915 & 351.3 & 845 \\ \toprule[1pt]
\multicolumn{6}{c}{English} \\ \bottomrule[1pt]
\texttt{GOAL} (supervised) & 103 & 2724.9 & 3699 & 476.3 & 614 \\
\texttt{GOAL} (semi-supervised) & 2160 & 2643.7 & 3610 & - & - \\ \toprule[1pt]
\end{tabular}
}
\caption{Statistics of \texttt{GOAL} and previous datasets. ``\# Num.'' indicates the number of samples in the datasets. ``Avg.'' and ``95th pctl'' denote the average and 95th percentile number of words, respectively.}
\label{table:statistics}
\end{table}
\subsection{Statistics}
As shown in Table~\ref{table:statistics}, \texttt{GOAL} is more challenging than previous Chinese datasets since: (1) the number of samples in \texttt{GOAL} is significantly smaller than that of the Chinese datasets; (2) the input commentaries in \texttt{GOAL} are much longer than their counterparts in the Chinese datasets.
In addition, unlike Chinese commentaries, English commentaries do not provide real-time scores. Specifically, each commentary in the Chinese datasets is formed as $(t,c,s)$, where $t$ is the timeline information, $c$ indicates the commentary sentence and $s$ denotes the current score at time $t$.
The commentaries in \texttt{GOAL} do not provide the $s$ element. Without this explicit information, models need to implicitly infer the real-time status of the game.
\subsection{Benchmark Settings}
Based on our \texttt{GOAL} and previous Chinese datasets, we introduce supervised, semi-supervised, multi-lingual and cross-lingual benchmark settings. The first three settings all evaluate models on the testing set of \texttt{GOAL}, but with different training samples:
(\romannumeral1) \textit{Supervised Setting} establishes models on the 63 labeled training samples of \texttt{GOAL}; (\romannumeral2) \textit{Semi-Supervised Setting} leverages 63 labeled and 2160 unlabeled samples to train the models; (\romannumeral3) \textit{Multi-Lingual Setting} trains the models only on other Chinese datasets.
Moreover, inspired by~\citet{feng-etal-2022-msamsum,Wang2022ClidSumAB,Chen2022TheCC}, we also provide a (\romannumeral4) \textit{Cross-Lingual Setting} on \texttt{GOAL}, that lets the models generate Chinese sports news based on the English commentary documents.
To this end, we employ four native Chinese speakers as volunteers to translate the original English sports news (in the \texttt{GOAL} dataset) into Chinese. Each volunteer majors in Computer Science and is proficient in English.
The translated results are checked by a data expert.
Eventually, the cross-lingual setting uses the 63 cross-lingual samples to train the models, and verify them on the cross-lingual testing set.
\subsection{Task Overview}
For a given live commentary document $C=\{(t_1,c_1), (t_2,c_2),...,(t_n,c_n)\}$, where $t_i$ is the timeline information, $c_i$ is the commentary sentence and $n$ indicates the total number of commentaries, sports game summarization aims to generate its sports news $R=\{r_1,r_2,...,r_m\}$. $r_i$ denotes the $i$-th news sentence. $m$ is the total number of news sentences.
\section{Experiments}
\subsection{Baselines and Metrics}
We only focus on the supervised setting, and reserve other benchmark settings for future work. The adopted baselines are listed as follows:
\begin{itemize}[leftmargin=*,topsep=0pt]
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Longest: the longest $k$ commentary sentences are selected as the summaries (i.e., sports news). It is a common baseline in summarization.
\item TextRank~\cite{mihalcea-tarau-2004-textrank} is a popular unsupervised algorithm for extractive summarization. It represents each sentence from a document as a node in an undirected graph, and then extracts sentences with high importance as the corresponding summary.
\item PacSum~\cite{zheng-lapata-2019-sentence} enhances the TextRank algorithm with directed graph structures, and more sophisticated sentence similarity technology.
\item PGN~\cite{see-etal-2017-get} is a LSTM-based abstractive summarization model with copy mechanism and coverage loss.
\item LED~\cite{Beltagy2020LongformerTL} is a transformer-based generative model with a modified self-attention operation that scales linearly with the sequence length.
\end{itemize}
Among them, Longest, TextRank and PacSum are extractive summarization models which directly select commentary sentences to form sports news. PGN and LED are abstractive models that generate sports news conditioned on the given commentaries.
It is worth noting that previous sports game summarization work typically adopts pipeline methods~\cite{huang-etal-2020-generating,sportssum2,ksportssum}, where a selector is used to select important commentary sentences, and a rewriter then converts each selected commentary sentence into a news sentence.
However, due to the extremely limited scale of our \texttt{GOAL}, we do not have enough data to train the selector. Therefore, all baselines in our experiments are end-to-end methods.
To evaluate baseline models, we use \textsc{Rouge-1/2/l} scores~\cite{lin-2004-rouge} as the automatic metrics.
The employed evaluation script is based on \texttt{py-rouge} toolkit\footnote{\url{https://github.com/Diego999/py-rouge}}.
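For reference, a minimal sketch of the evaluation call is shown below, assuming the standard \texttt{py-rouge} interface; the hypothesis and reference strings are placeholders.
\begin{verbatim}
import rouge  # py-rouge (https://github.com/Diego999/py-rouge)

evaluator = rouge.Rouge(metrics=["rouge-n", "rouge-l"], max_n=2)

hypotheses = ["inter beat pescara 2-1 at san siro on sunday ..."]   # output
references = ["internazionale defeated pescara 2-1 at san siro ..."]  # gold

scores = evaluator.get_scores(hypotheses, references)
print(scores["rouge-1"], scores["rouge-2"], scores["rouge-l"])
\end{verbatim}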
\subsection{Implementation Details}
For Longest, TextRank\footnote{\url{https://github.com/summanlp/textrank}} and PacSum\footnote{\url{https://github.com/mswellhao/PacSum}} baselines, we extract three commentary sentences to form the sports news.
$\beta$, $\lambda_1$ and $\lambda_2$ used in PacSum are set to 0.1, 0.9 and 0.1, respectively.
The sentence representation in PacSum is calculated based on TF-IDF value.
For LED baseline, we utilize \textit{led-base-16384}\footnote{\url{https://huggingface.co/allenai/led-base-16384}} with the default settings. We set the learning rate to 3e-5 and batch size to 4. We train the LED baselines with 10 epochs and 20 warmup steps.
During training, we truncate the input and output sequences to 4096 and 1024 tokens, respectively. In the test process, the beam size is 4,
minimum decoded length is 200 and maximum length is 1024.
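A condensed sketch of the decoding step with the above settings is shown below; it loads the pretrained \textit{led-base-16384} checkpoint (not our fine-tuned weights), and the single-token global attention pattern is a common default rather than a detail we tuned.
\begin{verbatim}
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

commentary = "1' Kick-off at San Siro. ... 90' Full time, Inter 2-1 Pescara."
inputs = tokenizer(commentary, max_length=4096, truncation=True,
                   return_tensors="pt")

# Global attention on the first token, a common pattern for LED-style models.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(inputs["input_ids"],
                             attention_mask=inputs["attention_mask"],
                             global_attention_mask=global_attention_mask,
                             num_beams=4, min_length=200, max_length=1024)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
\end{verbatim}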
\begin{table}[t]
\centering
\resizebox{0.45\textwidth}{!}
{
\begin{tabular}{l|ccc|ccc}
\bottomrule[1pt]
\multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{3}{c|}{Validation} & \multicolumn{3}{c}{Testing} \\
\multicolumn{1}{c|}{} & R-1 & R-2 & R-L & R-1 & R-2 & R-L \\ \toprule[1pt]
\multicolumn{7}{c}{Extractive Methods} \\ \bottomrule[1pt]
Longest & 31.2 & 4.6 & 20.0 & 30.3 & 4.2 & 19.5 \\
TextRank & 30.3 & 3.7 & 20.2 & 27.6 & 2.9 & 18.8 \\
PacSum & 31.8 & 4.9 & 20.6 & 31.0 & 5.3 & 19.6 \\ \toprule[1pt]
\multicolumn{7}{c}{Abstractive Methods} \\ \bottomrule[1pt]
PGN & 34.2 & 6.3 & 22.7 & 32.8 & 5.7 & 21.4 \\
LED & \textbf{37.8} & \textbf{9.5} & \textbf{25.2} & \textbf{34.7} & \textbf{7.8} & \textbf{24.3} \\ \toprule[1pt]
\end{tabular}
}
\caption{Experimental results on \texttt{GOAL}.}
\label{table:results}
\end{table}
\subsection{Main Results}
Table~\ref{table:results} shows experimental results on \texttt{GOAL}. The performance of the extractive baselines is limited due to the different text styles between commentaries and news (commentaries are more colloquial than news).
PGN outperforms the extractive methods since it can generate sports news not limited to original words or phrases.
However, such an LSTM-based method cannot process long documents efficiently.
LED, as an abstractive method, achieves the best performance among all baselines due to its abstractive nature and sparse attention.
\begin{figure}[t]
\centerline{\includegraphics[width=0.45\textwidth]{figure/probe.pdf}}
\caption{Predicting key verbs while generating sports news with the LED baseline. During prediction, the model is conditioned on the corresponding commentaries. The \textcolor{gray}{gray} text indicates where the model does not compute the self-attention mechanism.}
\label{fig:probe}
\end{figure}
\subsection{Discussion}
Moreover, we also manually check the generated results of the LED baseline, and find that the generated sports news contains many repeated phrases and sentences, and fails to capture some important events in the games.
We conjecture that this is because the few-shot training samples make it difficult for the baseline to learn the task effectively.
To further verify whether the model is familiar with sports texts, we let the model predict key verbs while generating the sports news, which is similar to~\citet{chen-etal-2022-probing}.
As shown in Figure~\ref{fig:probe}, when faced with a simple and common event (i.e., beating), the trained LED model could predict the right verb (i.e., beat). However, for a complex and uncommon event (i.e., blocking), the model cannot make correct predictions.
Therefore, there is an urgent need to utilize external resources to enhance the model's understanding of sports texts and its ability to deal with sports game summarization, for example by considering the semi-supervised and multi-lingual settings, where the models could make use of unlabeled commentaries and Chinese samples, respectively.
Inspired by~\citet{Wang2022ASO}, the external resources and vanilla few-shot English training samples could be used to jointly train sports game summarization models in the multi-task, knowledge-distillation or pre-training framework.
In addition, following~\citet{Feng2021ASO}, another promising way is to adopt other long document summarization resources to build multi-domain or cross-domain models with sports game summarization.
\section{Conclusion}
In this paper, we present \texttt{GOAL}, the first English sports game summarization dataset, which contains 103 commentary-news samples. Several extractive and abstractive baselines are built and evaluated on \texttt{GOAL} to benchmark the dataset in different settings.
We further analyze the model outputs and show the challenges still remain in sports game summarization.
In the future, we would like to (1) explore the semi-supervised and multi-lingual settings on \texttt{GOAL}; (2) leverage graph structure to model the commentary information, and generate sports news in a graph-to-text manner~\cite{ijcai2021-524}, or even consider the temporal information (i.e., $t_i$) in the graph structure~\cite{10.1007/978-3-031-00129-1_10}.
\section{Introduction}
Due to the low-scoring nature of soccer, the performances of individual soccer players are challenging to appropriately quantify. Most actions that players perform in a match do not impact the scoreline directly but they may affect the course of the match. Until the mid 2010s, soccer clubs primarily relied on count-based statistics for match analysis and player recruitment purposes. These statistics typically focus on events that are easy to count such as the number of completed passes, number of shots on target or number of tackles. However, they are typically not representative of the players' actual performances as they often fail to account for the relevant context. Most traditional statistics treat each action of a given type as equally valuable, which is never the case in practice.
The increased availability of detailed match event data has enabled more sophisticated approaches to quantify the performances of soccer players in matches. These approaches exploit the observation that the outcome of a particular action depends on many different factors, many of which are often beyond the power of the player who executes the action. For instance, a line-breaking pass in midfield is extremely valuable if the teammate at the receiving end of the pass manages to put the ball in the back of the net, or can be perceived as useless if the same teammate fumbles their first touch.
The majority of the performance metrics that are currently used in soccer rely on the expected value of the actions that the players perform in matches. Ignoring the actual impact of the actions, these approaches reward each action based on the impact that highly similar actions had in the past~\cite{decroos2017starss,decroos2019actions,vanroy2020valuing}. As a result, a particular type of action that led to a goal further down the line several times in the past will be deemed more valuable than a particular type of action that has never led to a goal before. The value of an action is typically expressed in terms of the action's impact on their team's likelihood of scoring and/or conceding a goal within a given time period or number of actions after the valued action. For example, the VAEP framework uses a ten-action look-ahead window to value an action in terms of likelihood of scoring or preventing a goal~\cite{decroos2019actions}.
Although many different approaches to valuing player actions have been proposed in recent years~\cite{rudd2011framework,decroos2017starss,vanroy2020valuing,liu2020deep}, the adoption of these approaches by soccer practitioners has been quite limited to date. Soccer practitioners such as performance analysts, recruitment analysts and scouts are often reluctant to base their decisions on performance metrics that provide little insights into how the numbers came about. The community's efforts have primarily gone to devising alternative methods and improving the accuracy of existing methods.
Although the \textit{explainability} of the expected value for a particular action has received little to no attention to date, there has been some work on improving the \textit{interpretability} of the underlying models that produce the expected values. \cite{decroos2019interpretable} introduce an alternative implementation of the VAEP framework. Their approach differs from the original approach in two ways. First, they replace the Gradient Boosted Decision Tree models with Explainable Boosting Machine models, which combine the concepts of Generalized Additive Models and boosting. Second, they reduce the number of features from over 150 to only 10. Their implementation yields similar performance to the original implementation, but the models are easier to interpret. However, the resulting expected values for specific actions often remain hard to explain to soccer practitioners. The features were designed to facilitate the learning process and not to be explained to soccer practitioners who are unfamiliar with mathematics.
In an attempt to improve the \textit{explainability} of the expected values that are produced by the \cite{decroos2019interpretable} approach, this paper proposes the following two changes:
\begin{enumerate}
\item The spatial location of each action is represented as a feature vector that reflects a fuzzy assignment to 16 pitch zones, which were inspired by the 12 pitch zones introduced in~\cite{caley2013shot}. The zones are meaningful to soccer practitioners as they are often used to convey tactical concepts and insights about the game~\cite{maric2014halfspaces}. In contrast, \cite{decroos2019interpretable} represent each spatial location using the distance to goal and the angle to goal like the original VAEP implementation.
\item The potential interactions between two features are manually defined such that the learning algorithm can only leverage feature interactions that are meaningful from a soccer perspective when learning the model. In contrast, \cite{decroos2019interpretable} allow the learning algorithm to learn at most three arbitrary pairwise interactions, which do not necessarily capture concepts that are meaningful to soccer practitioners.
\end{enumerate}
The remainder of this paper is organized as follows. Section~\ref{sec:problem-statement} formally introduces the task at hand and Section~\ref{sec:dataset} describes the dataset. Section~\ref{sec:approach} introduces our approach to learning an \textit{explainable} model to estimate the expected values for shots. Section~\ref{sec:experimental-evaluation} provides an empirical evaluation of our approach. Section~\ref{sec:related-work} discusses related work. Section~\ref{sec:conclusion-future-work} presents the conclusions and avenues for further research.
\section{Problem Statement}
\label{sec:problem-statement}
This paper addresses the task of estimating expected values for shots, which are often referred to as ``expected-goals values'' or ``xG values'' in the soccer analytics community. The goal is to learn a model whose estimates can conveniently be explained to practitioners using the jargon that they are familiar with. More formally, the task at hand is defined as follows:
\begin{description}
\item[Given:] characteristics of the shot (e.g., spatial location, body part used), and characteristics and outcomes of shots in earlier matches;
\item[Estimate:] the probability of the shot yielding a goal;
\item[Such that:] the estimate can be explained to soccer practitioners who are unfamiliar with machine learning.
\end{description}
Unlike this paper, \cite{decroos2019interpretable} addresses the broader task of estimating the likelihood of scoring and conceding a goal from a particular game state within the next ten actions. However, from a machine learning perspective, both tasks are very similar. In our case, the label depends on the outcome of the shot. In their case, the label depends on the outcome of potential shots within the following ten actions.
\section{Dataset}
\label{sec:dataset}
We use match event data that was collected from the Goal.com website for the top-5 European competitions: the English Premier League, German Bundesliga, Spanish LaLiga, Italian Serie A and French Ligue 1. Our dataset covers all matches in the 2017/2018 through 2019/2020 seasons as well as the 2020/2021 season until Sunday 2 May 2021.
Our dataset describes each on-the-ball action that happened in each covered match. For each action, our dataset includes, among others, the type of the action (e.g., pass, shot, tackle, interception), the spatial location of the action (i.e., the x and y coordinate), the time elapsed since the start of the match, the body part that the player used to perform the action, the identity of the player who performed the action and the team that the player who performed the action plays for.
Table~\ref{tbl:dataset} shows the number of shots in each season in each competition in our dataset.
\begin{table}[htb]
\centering
\begin{tabular}{lrrrr}
\toprule
\textbf{Competition} & \textbf{2017/18} & \textbf{2018/19} & \textbf{2019/20} & \textbf{2020/21} \\
\midrule
Premier League & 9,328 & 9,667 & 9,429 & 5,893 \\
Bundesliga & 7,760 & 8,273 & 8,152 & 4,849 \\
LaLiga & 9,191 & 9,273 & 8,615 & 5,054 \\
Serie A & 9,835 & 10,601 & 10,910 & 5,679 \\
Ligue 1 & 9,475 & 9,428 & 6,829 & 6,182 \\
\midrule
\textbf{Total} & 45,592 & 47,242 & 43,935 & 38,737 \\
\bottomrule
\end{tabular}
\caption{Number of shots in each season in each competition in our dataset. The 2020/2021 season is covered until Sunday 2 May 2021 when the last three to five rounds of the season still had to be played, depending on the competition.}
\label{tbl:dataset}
\end{table}
\section{Approach}
\label{sec:approach}
We train a probabilistic classifier that estimates the probability of a shot resulting in a goal based on the characteristics of the shot. We first define a feature vector that reflects the shot characteristics and then pick a suitable model class.
\subsection{Representing Shot Characteristics}
\label{subsec:approach-shot-characteristics}
The location of the shot and the body part used by the player who performs the shot are known to be predictive of the outcome of a shot~\cite{caley2013shot,ijtsma2013forget,ijtsma2013where,caley2015premier}. Since the precise spatial locations of the players are unavailable, most approaches attempt to define features that are a proxy for the relevant context. For example, some approaches use the speed of play (e.g., distance covered by the ball within the last X seconds) as a proxy for how well the opposition's defense was organized at the time of the shot. That is, a defense tends to be less-well organized after a fast counter-attack than after a slow build-up.
We leverage a subset of the features that are used by the VAEP framework~\cite{decroos2019actions}. We process the data using the \textsc{socceraction} Python package.\footnote{\url{https://github.com/ML-KULeuven/socceraction}} First, we convert our dataset into the SPADL representation, which is a unified representation for on-the-ball player actions in soccer matches that facilitates analysis. Second, we produce the features that are used by the VAEP framework to represent game states. For our task, we only consider the game states from which a shot is taken. However, many VAEP features contribute little to nothing to the predictive power of the model while increasing the complexity of the model and the likeliness of over-fitting, as shown by~\cite{decroos2019interpretable}. Therefore, we only use the following four VAEP features:
\begin{itemize}
\item \textbf{bodypart\_foot}: an indicator that is \textit{true} for footed shots and \textit{false} for all other shots;
\item \textbf{bodypart\_head}: an indicator that is \textit{true} for headed shots and \textit{false} for all other shots;
\item \textbf{bodypart\_other}: an indicator that is \textit{true} for non-footed and non-headed shots and \textit{false} for all other shots;
\item \textbf{type\_shot\_penalty}: an indicator that is \textit{true} for penalty shots and \textit{false} for non-penalty shots.
\end{itemize}
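A minimal sketch of how these four indicators can be derived from a SPADL-style table of shots is given below; the column names (\texttt{bodypart\_name}, \texttt{type\_name}) and category labels are assumptions about the schema rather than the exact \textsc{socceraction} output.
\begin{verbatim}
import pandas as pd

# Sketch: derive the four indicator features from a SPADL-style shot table.
# The column names and category labels are assumptions about the schema.
def shot_features(shots: pd.DataFrame) -> pd.DataFrame:
    feats = pd.DataFrame(index=shots.index)
    feats["bodypart_foot"] = shots["bodypart_name"] == "foot"
    feats["bodypart_head"] = shots["bodypart_name"] == "head"
    feats["bodypart_other"] = ~(feats["bodypart_foot"] | feats["bodypart_head"])
    feats["type_shot_penalty"] = shots["type_name"] == "shot_penalty"
    return feats
\end{verbatim}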
Unlike~\cite{decroos2019interpretable}, we do not use VAEP's \textbf{start\_dist\_to\_goal} and \textbf{start\_angle\_to\_goal} features that represent the location of a shot as the distance and angle to the center of the goal, respectively. Instead, we use a zone-based representation for pitch locations.
\subsection{Representing Pitch Locations}
\label{subsec:approach-pitch-locations}
The location of a shot is an important predictor of the outcome of the shot. Hence, several different approaches to representing shot locations have been explored in the soccer analytics community~\cite{robberechts2020interplay}. However, the focus has been on optimizing the accuracy of the resulting models rather than on improving the \textit{explainability} of the representation to soccer practitioners. Shot locations are often represented as a combination of the distance and angle of the shot to the center of the goal. Although this representation allows machine learning algorithms to learn more accurate models, the representation is challenging to explain to soccer practitioners who are unfamiliar with mathematics.
We propose an alternative representation for shot locations that allows to learn accurate predictive models that can be more easily explained to soccer practitioners. We exploit the observation that soccer practitioners often refer to designated zones on the pitch to convey tactical concepts and insights about the game~\cite{maric2014halfspaces}. Hence, shot locations could be represented by overlaying the pitch with a grid and assigning each shot to the corresponding grid cell. However, shots that were taken from roughly the same location near a cell boundary may end up in different grid cells. As a result, these shots may look more dissimilar than they are from the perspective of the machine learning algorithm due to the \textit{hard} zone boundaries~\cite{decroos2020soccermix}.
To overcome the limitations of \textit{hard} zone boundaries, we introduce an alternative representation that uses \textit{soft} zone boundaries. We perform a fuzzy assignment of shot locations to grid cells. That is, each shot location is assigned to a grid cell with a certain probability, where the probabilities sum to one across the grid. We perform c-means clustering~\cite{dunn1973fuzzy,bezdek1981pattern} using the \textsc{scikit-fuzzy} Python package.\footnote{\url{https://github.com/scikit-fuzzy/scikit-fuzzy}} First, we manually initialize the cluster centers, where each cluster center corresponds to the center of one of 16 zones, as shown in Figure~\ref{fig:zones}. The 12 green dots represent the zones identified by~\protect\cite{caley2013shot}. The additional 4 blue dots cover the goal line to improve the estimates for shots from very tight angles. Second, we run 1000 iterations of the c-means clustering algorithm to obtain the membership of each shot location to each of the clusters. We use the default value of 1000 iterations since running fewer or more iterations does not seem to affect the results. We also use the default exponent of 2 for the membership function. Smaller exponents lead to a \textit{too harsh} assignment of shot locations to clusters, whereas larger exponents lead to a \textit{too lenient} assignment such that valuable contextual information is lost.
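The resulting assignment can be illustrated with the standard c-means membership formula; the sketch below computes the membership of a single shot location to fixed zone centers, which is a simplification of the full clustering run described above, with illustrative coordinates standing in for the 16 centers of Figure~\ref{fig:zones}.
\begin{verbatim}
import numpy as np

# Sketch of the fuzzy (c-means style) assignment of one shot location to zones.
# The two centers below are illustrative placeholders for the 16 zone centers.
zone_centers = np.array([[105.0, 34.0], [94.0, 34.0]])  # pitch coordinates (m)

def zone_membership(location, centers, m=2.0):
    """Membership of a location to each zone (c-means formula, sums to one)."""
    d = np.maximum(np.linalg.norm(centers - location, axis=1), 1e-9)
    ratios = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratios.sum(axis=1)

print(zone_membership(np.array([100.0, 34.0]), zone_centers))  # ~[0.59, 0.41]
\end{verbatim}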
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{zones.png}
\caption{Definition of the zones. The circles show the centers of the 16 defined zones. The 12 green dots represent the zones identified by~\protect\cite{caley2013shot}. The additional 4 blue dots cover the goal line to improve the estimates for shots from a very tight angle.}
\label{fig:zones}
\end{figure}
\subsection{Training a Probabilistic Classifier}
\label{subsec:approach-classifier}
To estimate the probability of a shot yielding a goal, we train a probabilistic classifier. Like \cite{decroos2019interpretable}, we train an Explainable Boosting Machine. As the name suggests, the predictions made by Explainable Boosting Machine models can more easily be explained to soccer practitioners than the predictions made by traditional Gradient Boosted Decision Tree models, which are increasingly used in soccer analytics (e.g.,~\cite{decroos2019actions,bransen2019choke,bransen2020player}).
An Explainable Boosting Machine \cite{nori2019interpretml} is a Generalized Additive Model of the following form:
\begin{align*}
g(E[y]) = \beta_0 + \sum f_j(x_j) + \sum f_{ij}(x_i,x_j),
\end{align*}
where $g$ is the \textit{logit} link function in the case of classification, $y$ is the target, $\beta_0$ is the intercept, $f_j$ is a feature function that captures a feature $x_j$ and $f_{ij}$ is a feature function that captures the interaction between features $x_i$ and $x_j$. Each feature function is learned in turn by applying boosting and bagging on one feature or pair of features at a time. Due to the additive nature of the model, the contribution of each individual feature or feature interaction can be explained by inspecting or visualizing the feature functions $f_j$ and $f_{ij}$.
We train an Explainable Boosting Machine model using the \textsc{InterpretML} Python package.\footnote{\url{https://github.com/interpretml/interpret}} Unlike \cite{decroos2019interpretable}, who allow the algorithm to learn up to three arbitrary pairwise feature functions, we manually specify the pairwise feature functions that the algorithm is allowed to learn in order to guarantee the explainability of the feature interactions. We allow the algorithm to learn 25 pairwise feature functions: 12 feature functions that capture the relationship between each zone inside the penalty area and the footed-shot indicator, 12 feature functions that capture the relationship between each zone inside the penalty area and the headed-shot indicator, and a feature function that captures the relationship between the zone that contains the penalty spot and the penalty-shot indicator. Since headed shots from outside the penalty area are rare, the zones outside the penalty area are not considered in the pairwise feature functions.
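A condensed sketch of this setup is given below; the feature ordering and the exact list of allowed interaction pairs are illustrative placeholders rather than our full configuration.
\begin{verbatim}
from interpret.glassbox import ExplainableBoostingClassifier

# Sketch: restrict the EBM to hand-picked pairwise interactions. Suppose the
# columns of X are ordered as: 0-15 zone memberships (0-11 inside the penalty
# area, with index 5 the penalty-spot zone), 16-18 body-part indicators
# (foot, head, other) and 19 the penalty indicator -- an illustrative layout.
allowed_pairs = ([(z, 16) for z in range(12)]    # zone x footed shot
                 + [(z, 17) for z in range(12)]  # zone x headed shot
                 + [(5, 19)])                    # penalty-spot zone x penalty

ebm = ExplainableBoostingClassifier(interactions=allowed_pairs,
                                    max_bins=64, max_interaction_bins=32)
# ebm.fit(X_train, y_train)  # X_train: shot features, y_train: goal (0/1)
\end{verbatim}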
\section{Experimental Evaluation}
\label{sec:experimental-evaluation}
We first discuss our experimental setup, then compare our approach to three baseline approaches, and finally analyze the explanations of the estimates made by our approach.
\subsection{Methodology}
\label{subsec:experimental-evaluation-methodology}
We compare our proposed zone-based approach to estimating the probability of a shot yielding a goal to three baseline approaches. Concretely, we consider the following four approaches in our experimental evaluation:
\begin{description}
\item[Soft zones:] Our approach as introduced in Section~\ref{sec:approach};
\item[Hard zones:] An alternative version of our approach where the \textit{soft} assignment of shot locations to zones is replaced by a \textit{hard} assignment. To keep the implementation as close as possible to the implementation of our original approach, we use an exponent of 1.001 for the membership function in the c-means clustering step;
\item[Distance and angle:] An alternative version of our approach, inspired by \cite{decroos2019interpretable}, where the 16 features reflecting the \textit{soft} assignment of shot locations to zones is replaced by VAEP's \textbf{start\_dist\_to\_goal} and \textbf{start\_angle\_to\_goal} features that capture the shot's distance and angle to the center of the goal;
\item[Naive baseline:] A simple baseline that predicts the class distribution for each shot (i.e., the proportion of shots yielding a goal: 10.51\%)~\cite{decroos2019interpretable}.
\end{description}
We split the dataset into a training set and a test set. The training set spans the 2017/2018 through 2019/2020 seasons and includes 136,769 shots. The test set covers the 2020/2021 season and includes 38,737 shots.
We train Explainable Boosting Machine models for the first three approaches using the \textsc{InterpretML} Python package, which does not require extensive hyperparameter tuning. Based on limited experimentation on the training set, we let the learning algorithm use a maximum of 64 bins for the feature functions and 32 bins for the pairwise feature functions. We use the default values for the remaining hyperparameters. The learning algorithm uses 15\% of the available training data as a validation set to prevent over-fitting in the boosting step.
We compute seven performance metrics for probabilistic classification. We compute the widely-used AUC-ROC, Brier score (BS) and logarithmic loss (LL) performance metrics~\cite{ferri2009experimental}. Like~\cite{decroos2019interpretable}, we compute normalized variants of both Brier score (NBS) and logarithmic loss (NLL), where the score for each approach is divided by the score for the baseline approach. In addition, we compute the expected calibration error (ECE)\footnote{\url{https://github.com/ML-KULeuven/soccer\_xg}} and a normalized variant thereof (NECE)~\cite{guo2017calibration}. For AUC-ROC, a higher score is better. For all other performance metrics, a lower score is better.
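For completeness, the non-normalized metrics can be computed as sketched below; the expected calibration error is implemented here as a standard equal-width-binning variant, which may differ in detail from the toolkit referenced above.
\begin{verbatim}
import numpy as np
from sklearn.metrics import brier_score_loss, log_loss, roc_auc_score

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Equal-width-bin ECE: weighted gap between mean confidence and accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(y_prob, edges) - 1, 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(y_prob[mask].mean() - y_true[mask].mean())
    return ece

# Illustrative arrays standing in for test-set labels and model estimates.
y_true = np.array([0, 0, 1, 0, 1, 0])
y_prob = np.array([0.05, 0.20, 0.70, 0.10, 0.55, 0.30])
print(roc_auc_score(y_true, y_prob), brier_score_loss(y_true, y_prob),
      log_loss(y_true, y_prob), expected_calibration_error(y_true, y_prob))
\end{verbatim}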
\subsection{Performance Comparison}
\label{subsec:experimental-evaluation-comparison}
\begin{table*}[ht]
\centering
\begin{tabular}{l||r||rr||rr||rr}
\toprule
\textbf{Approach} & \textbf{AUC-ROC} & \textbf{BS} & \textbf{NBS} & \textbf{LL} & \textbf{NLL} & \textbf{ECE} & \textbf{NECE} \\
\midrule
Soft zones & 0.7922 & 0.0825 & 0.8120 & \textbf{0.2869} & \textbf{0.8043} & \textbf{0.0020} & \textbf{0.2105} \\
Hard zones & 0.7743 & 0.0838 & 0.8248 & 0.2930 & 0.8214 & 0.0027 & 0.2842 \\
Distance and angle & \textbf{0.7932} & \textbf{0.0823} & \textbf{0.8100} & 0.2878 & 0.8068 & 0.0025 & 0.2632 \\
Naive baseline & 0.5000 & 0.1016 & 1.0000 & 0.3567 & 1.0000 & 0.0095 & 1.0000 \\
\bottomrule
\end{tabular}
\caption{AUC-ROC, Brier score (BS), normalized Brier score (NBS), logarithmic loss (LL), normalized logarithmic loss (NLL), expected calibration error (ECE) and normalized expected calibration error (NECE) for the four considered approaches. For AUC-ROC, a higher score is better. For all other performance metrics, a lower score is better. The best result for each metric is in bold. The proposed \textit{Soft zones} approach yields comparable performance to the traditional \textit{Distance and angle} approach, while the estimates are easier to explain to practitioners.}
\label{tbl:results}
\end{table*}
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\linewidth]{analysis-summary.png}
\caption{Overall importance of each feature towards making predictions about shots. The vertical axis presents the different features, whereas the horizontal axis reports the overall importance of each feature, where a higher score corresponds to a higher importance.}
\label{fig:analysis-summary}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\linewidth]{analysis-zone1.png}
\caption{Impact of the feature value for the feature that corresponds to the zone right in front of goal (\textbf{zone\_1}) on the prediction. The score increases as the value for this feature increases, where the score is expressed in log odds.}
\label{fig:analysis-zone1}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\linewidth]{analysis-penalty-zone6.png}
\caption{Pairwise interaction between the zone that contains the penalty spot (\textbf{zone\_6}) and the indicator whether the shot is a penalty or not. Surprisingly, the score drops for shots close to the penalty spot (i.e., high value for \textbf{zone\_6}) in case the shot is a penalty (i.e., \textbf{type\_shot\_penalty\_a0} is 1).}
\label{fig:analysis-penalty-zone6}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\linewidth]{analysis-shot.png}
\caption{Explanation for a random shot in the test set. The vertical axis presents the different features, whereas the horizontal axis reports the overall importance of each feature, where a higher score corresponds to a higher importance.}
\label{fig:analysis-shot}
\end{subfigure}
\caption{Insights into the learned model (\ref{fig:analysis-summary},~\ref{fig:analysis-zone1} and~\ref{fig:analysis-penalty-zone6}) as well as the prediction for a random shot in the test set (\ref{fig:analysis-shot}).}
\label{fig:analysis}
\end{figure*}
Table~\ref{tbl:results} shows the AUC-ROC, Brier score (BS), normalized Brier score (NBS), logarithmic loss (LL), normalized logarithmic loss (NLL), expected calibration error (ECE) and normalized expected calibration error (NECE) for all four approaches on the test set. The best result for each of the performance metrics is highlighted in bold.
The proposed \textit{Soft zones} approach clearly outperforms the \textit{Hard zones} approach across all metrics and yields similar performance to the traditional \textit{Distance and angle} approach. Moreover, the performance of the proposed \textit{Soft zones} approach is comparable to the performance of other approaches on similar datasets~\cite{robberechts2020data}.
Since the probability estimates that the models produce should be explainable to soccer practitioners, the models should not only be accurate but also robust against adversarial examples. It would be challenging to explain to soccer practitioners why a model produces vastly different probability estimates for two shots performed under very similar circumstances. Research by~\cite{devos2020versatile} and~\cite{devos2021verifying} suggests that the Gradient Boosted Decision Tree models used by the reference implementation of the VAEP framework~\cite{decroos2019actions} in the \textsc{socceraction} Python package\footnote{\url{https://github.com/ML-KULeuven/socceraction}} are vulnerable to adversarial examples, which is undesirable for our use case. Early experiments suggest that the Explainable Boosting Machine models are less vulnerable to adversarial examples but further research is required.
\subsection{Explanation Analysis}
\label{subsec:experimental-evaluation-analysis}
Figure~\ref{fig:analysis} shows insights into the learned model (\ref{fig:analysis-summary},~\ref{fig:analysis-zone1} and~\ref{fig:analysis-penalty-zone6}) as well as the prediction for a random shot in the test set (\ref{fig:analysis-shot}).
Figure~\ref{fig:analysis-summary} shows the overall importance of each feature towards making predictions about shots. The vertical axis presents the different features, whereas the horizontal axis reports the overall importance of each feature, where a higher score corresponds to a higher importance. The first-ranked feature corresponds to the zone furthest away from the goal (\textbf{zone\_14}). Shots from this zone rarely result in a goal. In contrast, the second-ranked feature corresponds to the zone right in front of goal (\textbf{zone\_1}). Shots from this zone often result in a goal. Hence, both features are useful to make predictions about the outcome of a shot.
Figure~\ref{fig:analysis-zone1} shows the impact of the feature value for the feature that corresponds to the zone right in front of goal (\textbf{zone\_1}) on the prediction. The score increases as the value for this feature increases, where the score is expressed in log odds. Intuitively, the prediction for a shot increases as the shot is closer to the center of this zone.
Figure~\ref{fig:analysis-penalty-zone6} shows the pairwise interaction between the zone that contains the penalty spot (\textbf{zone\_6}) and the indicator whether the shot is a penalty or not. Surprisingly, the score drops for shots close to the penalty spot in case the shot is a penalty. This behavior is likely caused by the presence of a separate feature function that captures whether the shot is a penalty or not. To improve the explainability of the model, either this pairwise feature function or the feature function corresponding to the penalty indicator could be dropped.
Figure~\ref{fig:analysis-shot} shows the explanation for a random shot in the test set. The vertical axis presents the different features and the corresponding values for the shot, whereas the horizontal axis reports the overall importance of each feature, where a higher score corresponds to a higher importance. The shot is headed (i.e.,~\textbf{bodypart\_head\_a0} is 1) instead of footed (i.e., \textbf{bodypart\_foot\_a0} is 0), which has a negative impact on the prediction. In contrast, the shot is taken from close to the center of the zone that includes the penalty spot (i.e., \textbf{zone\_6} is 0.58), which has a positive impact on the prediction.
\section{Related Work}
\label{sec:related-work}
The task of estimating the probability of a shot resulting in a goal has received a lot of attention in the soccer analytics community, both inside and outside\footnote{\url{https://wikieducator.org/Sport_Informatics_and_Analytics/Performance_Monitoring/Expected_Goals}} the academic literature. The earliest mentions of ``expected-goals models'' date back to around 2013 when soccer analytics enthusiasts started describing their approaches in blog posts~\cite{caley2013shot,ijtsma2013forget,ijtsma2013where,eastwood2014expected2,eastwood2014expected,ijtsma2015close,eastwood2015expected,caley2015premier}. Most approaches first define a number of shot characteristics and then fit a Logistic Regression model. The research primarily focuses on identifying shot characteristics that are predictive of the outcome of a shot. While these blog posts carry many useful insights, the experimental evaluation is often rather limited~\cite{mackay2016how}. Unlike the non-academic literature, the limited academic literature in this area primarily focuses on approaches that use more sophisticated machine learning techniques and operate on tracking data~\cite{lucey2014quality,anzer2021goal}.
Due to the wider availability of finer-grained data, the broader task of valuing on-the-ball player actions in matches has received a lot of attention in recent years. The low-scoring nature of soccer makes the impact of a single action hard to quantify. Therefore, researchers have proposed many different approaches to distribute the reward for scoring a goal across the actions that led to the goal-scoring opportunity, including the shot itself. Some of these approaches operate on match event data~\cite{rudd2011framework,decroos2017starss,decroos2019actions,decroos2019interpretable,vanroy2020valuing,liu2020deep}, whereas other approaches operate on tracking data~\cite{link2016real,spearman2018beyond,dick2019learning,fernandez2019decomposing}.
The work by \cite{decroos2019interpretable} comes closest to our work. They replace the Gradient Boosted Decision Tree models in the reference implementation of the VAEP framework~\cite{decroos2019actions} with Explainable Boosting Machine models that are easier to interpret. Unlike our approach, their approach uses the original VAEP features, some of which are challenging to explain to soccer practitioners who are unfamiliar with mathematics and machine learning.
\section{Conclusion and Future Work}
\label{sec:conclusion-future-work}
This paper addresses the task of estimating the probability that a shot will result in a goal such that the estimates can be explained to soccer practitioners who are unfamiliar with mathematics and machine learning. Inspired by ~\cite{decroos2019interpretable}, we train an Explainable Boosting Machine model that can easily be inspected and interpreted due to its additive nature. We represent each shot location by a feature vector that reflects a fuzzy assignment of a shot to designated zones on the pitch that soccer practitioners use to convey tactical concepts and insights about the game. In contrast, existing approaches use more complicated location representations that are designed to facilitate the learning process but are less straightforward to explain to soccer practitioners. In addition to being easier to explain to practitioners, our approach exhibits similar performance to existing approaches.
In the future, we aim to improve and extend our approach. One research direction is to extend the approach from valuing shots to valuing other types of actions such as passes, crosses and dribbles. Another research direction is to refine the zones for representing shot locations in collaboration with soccer practitioners. Yet another research direction is to investigate whether the proposed location representation performs well for Gradient Boosted Decision Tree models, which are widely used in soccer analytics nowadays. Furthermore, we aim to perform more thorough comparative tests with soccer practitioners and to investigate whether the proposed location representation leads to more robust models that are less vulnerable to adversarial examples.
\balance
\bibliographystyle{named}
\section*{INTRODUCTION}
Traveling is becoming more accessible in the present era of progressive globalization. It has never been easier to travel, which has resulted in a significant increase in the volume of leisure trips and tourists around the world (see, for instance, the statistics of the last UNWTO reports \cite{unwto}).
Over the last fifty years, this increasing importance of the economic, social and environmental impact of tourism on a region and its residents has led to a considerable number of studies in the so-called geography of tourism \cite{Christaller1964}. In particular, geographers and economists have attempted to understand the contribution of tourism to global and regional economy \cite{Williams1991, Hazari1995, Durbarry2004, Proenca2008, Matias2009} and to assess the impact of tourism on local people \cite{Long1990, Madrigal1993, Jurowski1997, Lindberg1997, Andereck2000, Gursoy2002}.
These researches on tourism have traditionally relied on surveys and economic datasets, generally composed of small samples with a low spatio-temporal resolution. However, with the increasing availability of large databases generated by the use of geolocated information and communication technologies (ICT) devices such as mobile phones, credit or transport cards, the situation is now changing. Indeed, this flow of information has notably allowed researchers to study human mobility patterns at an unprecedented scale \cite{Brockmann2006, Gonzalez2008, Song2010, Noulas2012, Hawelka2014, Lenormand2015a}. In addition, once these data are recorded, they can be aggregated in order to analyze the city's spatial structure and function \cite{Reades2007, Soto2011, Frias2012, Pei2014, Louail2014, Grauwin2014, Lenormand2015c, Louail2015} and they have also been successfully tested against more traditional data sources \cite{Lenormand2014a, Deville2014, Tizzoni2014}. In the field of tourism geography, these new data sources have offered the possibility to study tourism behavior at a very high spatio-temporal resolution \cite{Asakura2007, Shoval2007, Girardin2008, Freytag2010, Poletto2012, Poletto2013, Hawelka2014, Bajardi2015}.
In this work, we propose a ranking of touristic sites worldwide based on their attractiveness, measured with geolocated data as a proxy for human mobility. Many different rankings of the most visited touristic sites exist, but they are often based on the number of visitors, which does not tell us much about their attractiveness at a global scale. Here we apply an alternative method proposed in \cite{Lenormand2015b} to measure the influence of cities. The purpose of this method is to analyze the influence and the attractiveness of a site based on the average radius traveled and the area covered by individuals visiting this site. More specifically, we select $20$ of the most popular touristic sites of the world and analyze their attractiveness using a dataset containing about $10$ million geolocated tweets, which have already demonstrated their value as a source of data to study mobility at a world scale \cite{Hawelka2014, Lenormand2015b}. In particular, we propose three rankings of the touristic sites' attractiveness based on the spatial distribution of the visitors' places of residence, and we show that the Taj Mahal, the Pisa Tower and the Eiffel Tower always appear in the top 5. Then, we study the touristic sites' visiting figures by country of residence, demonstrating that the Eiffel Tower, Times Square and the London Tower attract the majority of the visitors. To close the analysis, we focus on users detected in more than one site and explore the relationships between the $20$ touristic sites by building a network of undirected trips between them.
\begin{figure*}
\includegraphics[width=\textwidth]{Figure1.pdf}
\caption{\textbf{Density of geolocated tweets and positions of the touristic sites.}\label{Fig1}}
\end{figure*}
\section*{MATERIALS AND METHODS}
The purpose of this study is to measure the attractiveness of $20$ touristic sites taking into account the spatial distribution of their visitors' places of residence. To do so, we analyze a database containing $9.6$ million geolocated tweets worldwide posted in the period between September 10, 2010 and October 21, 2015. The dataset was built by selecting the geolocated tweets sent from the touristic places in the general streaming and requesting the time-lines of the users posting them. The touristic sites' boundaries have been identified manually. Collective accounts and users exhibiting non-human behavior have been removed from the data by identifying users tweeting too quickly from the same place (more than 9 tweets during the same minute) or tweeting from places separated in time and space by a distance larger than what can be covered by a commercial flight (with an average speed of 750 km/h). The spatial distribution of the tweets and the positions of the touristic sites can be seen in Figure \ref{Fig1}.
In order to measure a site's attractiveness, we need to identify the place of residence of every user who has been at least once in one of the touristic sites. First, we discretize the space by dividing the world into squares of equal area ($100 \times 100 \text{ km}^2$) using a cylindrical equal-area projection. Then, we identify the place most frequented by a user as the cell in which he or she has spent most of his/her time. To ensure that this most frequented location is the actual user's place of residence, we impose the constraint that at least one third of the tweets have been posted from this location. The resulting dataset contains about $59,000$ users' places of residence. The number of valid users is shown in Table \ref{Tab1} for each touristic site. In the same way, we identify the country of residence of every user who has posted a tweet from one of the touristic sites during the time period.
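To make the residence-assignment step concrete, the following minimal sketch maps tweets to grid cells and keeps each user's most frequented cell, using tweet counts as a proxy for time spent; the input format, function names and the use of a Lambert cylindrical equal-area projection are our own assumptions, while the $100 \times 100 \text{ km}^2$ resolution and the one-third threshold follow the description above.
\begin{verbatim}
import math
from collections import Counter, defaultdict

R_EARTH_KM = 6371.0
CELL_KM = 100.0  # 100 x 100 km^2 cells

def cell_id(lat, lon):
    """Map a (lat, lon) point to a cell of a cylindrical equal-area grid."""
    x = R_EARTH_KM * math.radians(lon)            # east-west coordinate (km)
    y = R_EARTH_KM * math.sin(math.radians(lat))  # north-south coordinate (km, equal-area)
    return (int(x // CELL_KM), int(y // CELL_KM))

def residence_cells(tweets, min_share=1.0 / 3.0):
    """tweets: iterable of (user, lat, lon). Returns {user: home_cell} for users
    whose most frequented cell holds at least `min_share` of their tweets."""
    per_user = defaultdict(Counter)
    for user, lat, lon in tweets:
        per_user[user][cell_id(lat, lon)] += 1
    homes = {}
    for user, counts in per_user.items():
        cell, n_top = counts.most_common(1)[0]
        if n_top >= min_share * sum(counts.values()):
            homes[user] = cell
    return homes
\end{verbatim}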
\begin{table}
\begin{center}
\small
\begin{tabular}{ | l | l |}
\hline
\textbf{Site} & \textbf{Users}\\
\hline
Alhambra (Granada, Spain) & 1,208 \\ \hline
Angkor Wat (Cambodia) & 947 \\ \hline
Corcovado (Rio, Brazil) & 1,708 \\ \hline
Eiffel Tower (Paris, France) & 11,613 \\ \hline
Forbidden City (Beijing, China) & 457 \\ \hline
Giza (Egypt) & 205 \\ \hline
Golden Pavilion (Kyoto, Japan) & 1,114 \\ \hline
Grand Canyon (US) & 1,451 \\ \hline
Hagia Sophia (Istanbul, Turkey) & 2,701 \\ \hline
Iguazu Falls (Argentina-Brazil) & 583 \\ \hline
Kukulcan (Chichen Itz\'a, Mexico) & 209 \\ \hline
London Tower (London, UK) & 3,361 \\ \hline
Machu Picchu (Peru) & 987 \\ \hline
Mount Fuji (Japan) & 2,241 \\ \hline
Niagara Falls (Canada-US) & 920 \\ \hline
Pisa Tower (Pisa, Italy) & 1,270 \\ \hline
Saint Basil's (Moscow, Russia) & 262 \\ \hline
Taj Mahal (Agra, India) & 378 \\ \hline
Times Square (NY, US) & 13,356 \\ \hline
Zocalo (Mexico City, Mexico) & 16,193 \\ \hline
\end{tabular}
\end{center}
\caption{\textbf{Number of valid Twitter users identified in each touristic site.}}
\label{Tab1}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{Figure2.pdf}
\caption{\textbf{Ranking of the touristic sites according to the radius and the coverage.} (a) Radius. (b) Coverage (cell). (c) Coverage (country). \label{Fig2}}
\end{center}
\end{figure*}
Two metrics have been considered to measure the attractiveness of a touristic site based on the spatial distribution of the places of residence of users who have visited this site:
\begin{itemize}
\item \textbf{Radius:} The average distance between the places of residence and the touristic site. The distances are computed using the Haversine formula between the latitude and longitude coordinates of the centroids of the cells of residence and the centroid of the touristic site. In order not to penalize isolated touristic sites, the distances have been normalized by the average distance of all the Twitter users' places of residence to the site. It has been checked that the results are consistent if the median is used instead of the average radius for the rankings.
\item \textbf{Coverage:} The area covered by the users' places of residence computed as the number of distinct cells (or countries) of residence.
\end{itemize}
To fairly compare the different touristic sites, which may have different numbers of visitors, the two metrics are computed with $200$ users' places of residence selected at random and averaged over $100$ independent extractions. Note that, unlike the coverage, the radius does not depend on the sample size; nevertheless, to be consistent, we use the same sampling procedure for both indicators.
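As an illustration, the sketch below computes the two indicators for a single site following the sampling procedure just described; the data structures and function names are hypothetical, and distances are taken between cell centroids as in the text.
\begin{verbatim}
import math
import random

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def site_attractiveness(site, visitor_homes, all_homes, n_users=200, n_draws=100, seed=0):
    """site: (lat, lon) of the touristic site; visitor_homes: residence-cell centroids
    of its visitors; all_homes: residence-cell centroids of all Twitter users (used to
    normalize the radius). Returns the normalized radius and the cell coverage,
    both averaged over `n_draws` random samples of `n_users` visitors."""
    rng = random.Random(seed)
    baseline = sum(haversine_km(*site, *h) for h in all_homes) / len(all_homes)
    radii, coverages = [], []
    for _ in range(n_draws):
        sample = rng.sample(visitor_homes, min(n_users, len(visitor_homes)))
        mean_dist = sum(haversine_km(*site, *h) for h in sample) / len(sample)
        radii.append(mean_dist / baseline)   # normalized radius of attraction
        coverages.append(len(set(sample)))   # number of distinct residence cells
    return sum(radii) / n_draws, sum(coverages) / n_draws
\end{verbatim}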
\section*{RESULTS}
\subsection*{Touristic sites' attractiveness}
We start by analyzing the spatial distribution of the users' places of residence to assess the attractiveness of the $20$ touristic sites. In Figure \ref{Fig2}a and Figure \ref{Fig2}b, the touristic sites are ranked according to the radius of attraction, based on the distance traveled by the users from their cell of residence to the touristic site, and according to the area covered by the users' cells of residence, respectively. In both cases, the results are averaged over $100$ random selections of $200$ users. The robustness of the results has been assessed with different sample sizes (50, 100 and 150 users), and we obtained globally the same rankings for the two metrics. Both measures are highly correlated and, for most of the sites, the absolute difference between the two rankings is at most $2$ positions. However, since the two metrics capture slightly different information, the rankings also display some dissimilarities. For example, the Grand Canyon and the Niagara Falls exhibit a high coverage, due to a large number of visitors from many distinct places in the US, but a low radius of attraction at the global scale.
To complete the previous results, we also consider the number of countries of origin averaged over $100$ random selections of $200$ users. This gives us new insights into the origin of the visitors. For example, as can be observed in Figure \ref{Fig3}, the visitors of the Grand Canyon mainly come from the US, whereas in the case of the Taj Mahal the visitors' countries of residence are more uniformly distributed. Also, it is interesting to note that in most of the cases nationals are the main source of visitors, the exception being Angkor Wat (Table \ref{Tab2}). Some touristic sites have a national attractiveness, such as Mount Fuji or Zocalo, which host about 84\% and 93\% of locals, respectively, whereas others have a more global attractiveness, as is the case for the Pisa Tower and Machu Picchu, which welcome only about 21\% of local visitors.
\begin{table*}
\begin{center}
\small
\begin{tabular}{ | l | l | l | l | }
\hline
\textbf{Site} & \textbf{Top 1} & \textbf{Top 2} & \textbf{Top 3}\\ \hline
Alhambra (Spain) & Spain 71.14$\%$ & US 6.06$\%$ & UK 2.61$\%$ \\ \hline
Angkor Wat (Cambodia) & Malaysia 19.64$\%$ & Philippines 17.4$\%$ & US 9.59$\%$ \\ \hline
Corcovado (Brazil) & Brazil 81.13$\%$ & US 4.92$\%$ & Chile 3.08$\%$ \\ \hline
Eiffel Tower (France) & France 26.75$\%$ & US 16.62$\%$ & UK 8.92$\%$ \\ \hline
Forbidden City (China) & China 26.48$\%$ & US 14.46$\%$ & Malaysia 10.95$\%$ \\ \hline
Giza (Egypt) & Egypt 30.65$\%$ & US 9.8$\%$ & Kuwait 5.85$\%$ \\ \hline
Golden Pavilion (Japan) & Japan 60.74$\%$ & Thailand 11.72$\%$ & US 4.84$\%$ \\ \hline
Grand Canyon (US) & US 75.79$\%$ & UK 2.87$\%$ & Spain 2.16$\%$ \\ \hline
Hagia Sophia (Turkey) & Turkey 71.26$\%$ & US 5.48$\%$ & Malaysia 1.67$\%$ \\ \hline
Iguazu Falls (Arg-Brazil-Para) & Argentina 48.26$\%$ & Brazil 26.61$\%$ & Paraguay 8.63$\%$ \\ \hline
Kukulcan (Mexico) & Mexico 73.78$\%$ & US 10.07$\%$ & Spain 2.83$\%$ \\ \hline
London Tower (UK) & UK 65.61$\%$ & US 10.24$\%$ & Spain 2.77$\%$ \\ \hline
Machu Picchu (Peru) & Peru 20.43$\%$ & US 19.95$\%$ & Chile 10.43$\%$ \\ \hline
Mount Fuji (Japan) & Japan 84.01$\%$ & Thailand 5.83$\%$ & Malaysia 2.66$\%$ \\ \hline
Niagara Falls (Canada-US) & US 60.5$\%$ & Canada 16.31$\%$ & Turkey 3.25$\%$ \\ \hline
Pisa Tower (Italy) & Italy 20.85$\%$ & US 13.56$\%$ & Turkey 10.95$\%$ \\ \hline
Saint Basil's (Russia) & Russia 66.71$\%$ & US 5.06$\%$ & Turkey 3.77$\%$ \\ \hline
Taj Mahal (India) & India 27.97$\%$ & US 15.59$\%$ & UK 7.61$\%$ \\ \hline
Times Square (US) & US 74.32$\%$ & Brazil 3.26$\%$ & UK 2.31$\%$ \\ \hline
Zocalo (Mexico) & Mexico 92.22$\%$ & US 3.1$\%$ & Colombia 0.77$\%$ \\ \hline
\end{tabular}
\end{center}
\caption{\textbf{The three countries hosting most of the visitors for each touristic site.} The countries are ranked by percentage of visitors.}
\label{Tab2}
\end{table*}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8.5cm]{Figure3.pdf}
\caption{\textbf{Heat map of the spatial distribution of the visitors' countries of origin for the Taj Mahal and the Grand Canyon.} The results have been averaged over $100$ extractions of $200$ randomly selected users. \label{Fig3}}
\end{center}
\end{figure}
More generally, we plot in Figure \ref{Fig2}c the ranking of touristic sites based on the country coverage. The results obtained are very different from the ones based on the cell coverage (Figure \ref{Fig2}b). Indeed, some touristic sites can have a low cell coverage but residence cells located in many different countries; this is the case for the Pyramids of Giza, which go up $7$ places and now appear in second position. On the contrary, other touristic sites have a high cell coverage but many cells concentrated in the same country, as in the previously mentioned cases of the Grand Canyon and the Niagara Falls. The ranks of the Taj Mahal, the Pisa Tower and the Eiffel Tower are consistent with the two previous rankings: these three sites are always in the top $5$. Finally, we compare the rankings quantitatively with Kendall's $\tau$ correlation coefficient, a rank-based measure of association between two quantities. In agreement with the qualitative observations, we obtain significant correlation coefficients between 0.66 and 0.77, confirming the consistency between the rankings obtained with the different metrics.
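For reference, such a rank comparison can be carried out with \texttt{scipy}; the rankings below are toy values rather than the actual ones shown in Figure \ref{Fig2}.
\begin{verbatim}
from scipy.stats import kendalltau

# Hypothetical positions of ten sites in two rankings
# (e.g., cell coverage vs. country coverage), listed in the same site order.
rank_cell_coverage    = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
rank_country_coverage = [2, 1, 3, 5, 4, 6, 8, 7, 10, 9]

tau, p_value = kendalltau(rank_cell_coverage, rank_country_coverage)
print("Kendall's tau = %.2f (p = %.3f)" % (tau, p_value))
\end{verbatim}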
\subsection*{Touristic site's visiting figures by country of residence}
We can also do the opposite and study the touristic preferences of the residents of each country. We extract the distribution of the number of visitors from each country to the touristic sites and normalize it by the total number of visitors in order to obtain, for each country of origin, a probability distribution over the touristic sites. This distribution can be averaged over the $70$ countries with the highest number of residents in our database (gray bars in Figure \ref{Fig4}). The Eiffel Tower, Times Square and the London Tower welcome on average $50$\% of the visitors of each country. It is important to note that these most visited touristic sites are not necessarily the ones with the highest attractiveness presented in the previous section. That is the advantage of the method proposed in \cite{Lenormand2015b}, which allows us to measure the influence and the power of attraction of regions of the world with different numbers of local and non-local visitors.
\begin{figure*}
\begin{center}
\includegraphics[width=14cm]{Figure4.pdf}
\caption{\textbf{Clustering analysis.} (a) Map of the spatial distribution of the country of residence according to the cluster. (b) Fraction of visitors according to the touristic site. \label{Fig4}}
\end{center}
\end{figure*}
We continue our analysis by performing a hierarchical cluster analysis to group together countries exhibiting similar distributions of visitors across the touristic sites. Countries are clustered using ascending hierarchical clustering with average linkage as the agglomeration method and the Euclidean distance as the similarity metric. To choose the number of clusters, we used the average silhouette index \cite{Rousseeuw1987}. The results of the clustering analysis are shown in Figure \ref{Fig4}. Two natural clusters emerge from the data; unsurprisingly, they are composed of countries which tend to visit predominantly touristic sites located in countries belonging to their own cluster. The first cluster gathers countries of America and Asia, whereas the second one is composed of countries from Europe and Oceania.
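A minimal sketch of this clustering step is given below; the synthetic visit matrix and the range of candidate cluster numbers are our own assumptions, while the average linkage, the Euclidean distance and the silhouette criterion follow the text.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from sklearn.metrics import silhouette_score

# Hypothetical (n_countries x n_sites) matrix of visitor counts:
# one row per country of residence, one column per touristic site.
visits = np.random.default_rng(0).poisson(5.0, size=(70, 20))
profiles = visits / visits.sum(axis=1, keepdims=True)  # per-country visiting probabilities

tree = linkage(pdist(profiles, metric="euclidean"), method="average")

# Pick the number of clusters maximizing the average silhouette index.
best_k, best_score = None, -1.0
for k in range(2, 11):
    labels = fcluster(tree, t=k, criterion="maxclust")
    score = silhouette_score(profiles, labels, metric="euclidean")
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 3))
\end{verbatim}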
\subsection*{Network of touristic sites}
In the final part of this work, we investigate the relationships between touristic sites based on the number of Twitter users who visited more than one site between September 2010 and October 2015. More specifically, we build an undirected spatial network in which every link between two touristic sites represents at least one user who has visited both sites. As in a co-occurrence network, the weight of a link between two sites is equal to the total number of users who visited both of them. The network is represented in Figure \ref{Fig5}, where the width and the brightness of a link are proportional to its weight and the size of a node is proportional to its weighted degree (strength). The Eiffel Tower, Times Square, Zocalo and the London Tower appear to be the most central sites, playing a key role in the global connectivity of the network (Table \ref{Tab3}). The Eiffel Tower alone accounts for 25\% of the total weighted degree. The three links exhibiting the highest weights connect the Eiffel Tower with Times Square, the London Tower and the Pisa Tower, representing 30\% of the total sum of weights. Zocalo is also well connected with the Eiffel Tower and Times Square, with links representing 11\% of the total sum of weights.
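The sketch below shows one way to build such a co-occurrence network from the users' visit sets; the input format is hypothetical, and normalizing node strength by the total weighted degree reflects our reading of Table \ref{Tab3}.
\begin{verbatim}
from collections import Counter, defaultdict
from itertools import combinations

def cooccurrence_network(user_sites):
    """user_sites: dict mapping each user to the set of touristic sites he or she
    visited. Returns (edge_weights, node_share): edge weights count the users who
    visited both endpoints; node_share is the weighted degree of each site divided
    by the total weighted degree of the network."""
    edges = Counter()
    for sites in user_sites.values():
        for a, b in combinations(sorted(sites), 2):
            edges[(a, b)] += 1
    strength = defaultdict(float)
    for (a, b), w in edges.items():
        strength[a] += w
        strength[b] += w
    total = 2 * sum(edges.values())
    node_share = {s: v / total for s, v in strength.items()} if total else {}
    return edges, node_share

# Toy usage with three hypothetical users.
edges, share = cooccurrence_network({
    "u1": {"Eiffel Tower", "Times Square"},
    "u2": {"Eiffel Tower", "London Tower", "Times Square"},
    "u3": {"Eiffel Tower", "Pisa Tower"},
})
\end{verbatim}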
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{Figure5.pdf}
\caption{\textbf{Network of undirected trips between touristic sites.} The width and the brightness of a link is proportional to its weight. The size of a node is proportional to its weighted degree. \label{Fig5}}
\end{center}
\end{figure*}
\begin{table}
\begin{center}
\small
\begin{tabular}{ | l | c | }
\hline
\textbf{Site} & \textbf{Node Total Weight}\\ \hline
Eiffel Tower (France) & $0.25$ \\ \hline
Times Square (US) & $0.17$ \\ \hline
Zocalo (Mexico) & $0.10$ \\ \hline
London Tower (UK) & $0.10$ \\ \hline
Pisa Tower (Italy) & $0.06$ \\ \hline
Hagia Sophia (Turkey) & $0.04$ \\ \hline
Niagara Falls (Canada-US) & $0.04$ \\ \hline
Corcovado (Brazil) & $0.03$ \\ \hline
Alhambra (Spain) & $0.03$ \\ \hline
Grand Canyon (US) & $0.03$ \\ \hline
\end{tabular}
\end{center}
\caption{\textbf{Ranking of nodes based on the fraction of the total weight.}}
\label{Tab3}
\end{table}
\section*{DISCUSSION}
We study the global attractiveness of $20$ touristic sites worldwide taking into account the spatial distribution of the visitors' places of residence as detected from Twitter. Instead of studying the most visited places, the focus of the analysis is set on the sites attracting visitors from the most diverse parts of the world. A first ranking of the sites is obtained based on the cells of residence of the users at a geographical scale of $100$ by $100$ kilometers. Both the radius of attraction and the coverage of the visitors' origins consistently point toward the Taj Mahal, the Eiffel Tower and the Pisa Tower as top rankers. When the users' places of residence are scaled up to the country level, these sites still appear on top, and we are also able to discover particular cases such as the Grand Canyon and the Niagara Falls, which are mostly visited by users residing in their host countries. At the country level, the top rankers are the Taj Mahal and the Pyramids of Giza, which exhibit a low cell coverage but residence cells distributed over many different countries.
Our method of using social media as a proxy for human mobility lays the foundation for even more involved analyses. For example, when we cluster the countries of residence according to the sites their residents visit, two main clusters emerge: one including the Americas and the Far East and the other including Europe, Oceania and South Africa. The relations between sites have also been investigated by considering users who visited more than one place. An undirected network was built connecting sites visited by the same users. The Eiffel Tower, Times Square, Zocalo and the London Tower are the most central sites of the network.
In summary, this manuscript serves to illustrate the power of geolocated data to provide worldwide information regarding leisure-related mobility. The data and the method are completely general and can be applied to a large range of geographical locations, travel purposes and scales. We thus hope that this work contributes toward a more agile and cost-efficient characterization of human mobility.
\vspace*{0.5cm}
\section*{ACKNOWLEDGEMENTS}
Partial financial support has been received from the Spanish Ministry of Economy (MINECO) and FEDER (EU) under project INTENSE@COSYP (FIS2012-30634), and from the EU Commission through project INSIGHT. The work of ML has been funded under the PD/004/2013 project, from the Conselleria de Educaci\'on, Cultura y Universidades of the Government of the Balearic Islands and from the European Social Fund through the Balearic Islands ESF operational program for 2013-2017. JJR acknowledges funding from the Ram\'on y Cajal program of MINECO. BG was partially supported by the French ANR project HarMS-flu (ANR-12-MONU-0018).
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
The rise of the digital economy and technology has dramatically changed the way we shop and make purchasing decisions. With an abundance of products and services available online, consumers are faced with an overwhelming amount of information to process when making a purchase. From detailed product descriptions and specifications to customer reviews and ratings, it can be challenging for individuals to determine which product is the best fit for their needs.
In an ideal scenario, a completely rational agent would make decisions by assigning values to the relevant aspects of each option, computing the utilities, and choosing the option with the highest utility, assuming they have unlimited time and resources for this calculation. However, in reality, individuals have physical and cognitive limitations that make it difficult to make a `perfect' decision and choose the best option among many alternatives \citep{tversky1972elimination}. Instead, people tend to use simple heuristics to make decisions quickly without the need for a vast amount of information \citep{hutchinson2005simple}.
A clear demonstration of this is the hotel booking process which has been transformed by digitalization, offering a plethora of options through online platforms. The current trend of booking hotels online presents a challenge for decision-makers as there is an abundance of options available with a vast amount of information for each hotel. When individuals visit a travel agency website, they can find comprehensive details such as the hotel's location, services, amenities, quality, view, ratings, photos, reviews, etc. Even for a single hotel, it is impractical to analyze all the information within a limited time frame.
Therefore, travelers may prefer to use simple shortcuts instead of carefully evaluating all available information when making hotel booking decisions. An illustration of the use of choice heuristics can be seen in Figure~\ref{fig:booking_hotel}, where a person who searches for a two-night stay in a double room in Santa Monica, CA first narrows down their options by focusing on hotels within a price range of \$150-\$200 per night, three-star quality, and an average guest rating of at least seven out of ten. Out of the options that meet these criteria, only three are left and then the traveler randomly picks the hotel from this preselected set.
The preselected set is determined in a manner that ensures any choice from this set is equally more desirable than the effort and resources required to identify the best alternative.
\begin{figure}
\centering
\includegraphics[scale=0.5]{numerics/booking-hotels.png}
\caption{An example of the hotel booking process.}
\label{fig:booking_hotel}
\end{figure}
In this study, we examine a general choice rule in which the preliminary selection of options is a crucial factor in the decision-making process. We introduce a novel model referred to as the stochastic set model (SSM) which is characterized by a probability distribution over preselected sets. In contrast to the widely used random utility maximization (RUM) models, which are founded on a probability distribution over preference rankings, our model incorporates two sources of choice randomness. The first source of randomness pertains to the distribution over preselected sets, while the second source pertains to the randomness of selecting an option from the preselected set itself. At its core, our model is connected to the well-established elimination-by-aspects model \citep{tversky1972elimination}, as preselected sets can be created by eliminating unwanted alternatives based on features, or other non-compensatory rules, such as conjunctive or disjunctive models \citep{pras1975comparison,brisoux1981evoked,laroche2003decision}. Instead of examining how preselected sets (also known as consideration sets) are formed through certain heuristics based on product features, our model assumes that the preselected set can be any set of products. This approach enables us to uncover the general properties of choice behavior influenced by consideration sets and move beyond its descriptive nature.
Notably, our paper is also aligned with the expanding literature on two-stage `consider-then-choose' choice models, where customers first sample a consideration set from a distribution and then select the alternative with the highest utility among the considered options \citep{manzini2014stochastic,brady2016menu, jagabathula2022demand}. Our model differs from this literature in the `choose' step. Rather than assuming that agents have the capacity to distinguish and rank all alternatives in the consideration set, our model assumes that agents are only boundedly rational due to cognitive constraints and therefore any option from the consideration set is equally viable.
\subsection{Contributions}
In this paper, we provide a full methodological
roadmap for the proposed stochastic set model (SSM). We discuss its identifiability as well as characterize the model by means of axioms. We then develop a framework to estimate the SSM model and show how this model contributes to the revenue management applications and demand prediction using a real-world dataset. In what follows, we provide an overview of our results and contributions.
\begin{itemize}
\item \emph{Novel choice model.} We propose a novel nonparametric choice-based demand model which is fully characterized by a distribution over subsets in the product universe. We represent customers' decision-making process as a two-stage procedure, where customers first preselect a set of alternatives that satisfy their criteria and then randomly pick the alternative from the preselected set. This model is motivated by the bounded rationality of customers who have limited cognitive abilities and information-processing capabilities, and thus, are unable to make fully informed decisions when purchasing the products.
\item \emph{Identification and axiomatic representation of the SSM model.} The proposed model has a rich representation through the distribution over subsets, which makes it a powerful tool to model the discrete choice of customers. However, the nonparametric nature of the model can lead to identifiability issues, as the model may have multiple solutions that fit the data equally well. Surprisingly, \YC{we show that the SSM model can be fully identified from the collection of choice probabilities. We further provide a closed-form expression on how one can compute the model parameters by means of the choice probabilities}. We also characterize the necessary and sufficient conditions for the choice probabilities to be consistent with our model.
This axiomatic representation of the model \YC{sheds light on how the aforementioned assumption of bounded rationality mitigates the common problem of overparameterization in nonparametric choice models.}
\item \emph{Model estimation framework.} We propose an efficient maximum likelihood estimation (MLE) procedure to calibrate the SSM model by first formulating the MLE problem as a concave maximization problem. We then show how to
solve this MLE problem by exploiting the EM methodology and the column generation algorithm which relies on solving a sequence of MILP problems that scale linearly in the number of products.
\item \emph{Assortment optimization problem under the SSM model.}
We analyze the problem of finding the optimal assortment under the SSM model as an application in revenue management where assortment optimization has become one of the most fundamental problems.
We first show that the problem is NP-hard and develop a fully polynomial-time approximation scheme (FPTAS) based on dynamic programming, in which the key insight is that the expected revenue of any assortment can be computed by recursively keeping track of the values of two functions that `summarize' the subset of products chosen thus far. %
\item \emph{Empirical study.} We estimate the SSM model using a real-world dataset and show its predictive power and computational tractability against state-of-the-art benchmarks such as the LC-MNL and ranking-based model. Specifically, we show that the SSM model is as competitive as the nonparametric ranking-based model.
Moreover, compared
to the ranking-based model, the SSM model is more tractable computationally as we show that it takes almost ten times more time for the ranking-based model to reach the near-optimal solution for the MLE problem than the SSM model. Finally, we discuss how the symmetric cannibalization property of the SSM model factors into its predictive performance.
\end{itemize}
\subsection{Structure of the paper}
The paper proceeds as follows. Section~\ref{sec:literature} discusses the related literature. In Section~\ref{sec:model}, we formally define the stochastic set model. Section~\ref{sec:model_characterization} provides our main results related to the SSM model identification, that is, we prove that the model is identifiable from the choice probabilities alone and then characterize the model through choice axioms. In Section~\ref{sec:model_estimation}, we build an estimation framework for the SSM model which relies on column generation and EM algorithms. We then study the assortment optimization problem in Section~\ref{sec:assortment_optimization}, where we prove the NP-hardness of this problem and propose a dynamic program as an FPTAS. Finally, we present our empirical results using a real-world dataset in Section~\ref{sec:numerics} and then conclude in Section~\ref{sec:conclusion}. We relegate most of the proofs to the ecompanion.
\section{Literature Review}
\label{sec:literature}
In this section, we highlight several related choice-based demand models, review the empirical and theoretical studies of consideration set heuristics, and then position our work within the body of the existing literature.
\subsection{Overview of the Discrete Choice Models}
Discrete choice models are popular tools for predicting consumers' purchase decisions in operations, marketing, and economics research.
We refer readers to the seminal papers by \cite{ben1985discrete} and \cite{train2009discrete} for more details.
The most well-known discrete choice model is the multinomial logit model (MNL) and it uses a logistic function to represent the probability of choosing a particular product from a set of alternatives.
Notably, the MNL model is identifiable, which means that the parameters of the model can be estimated uniquely from the set of choice probabilities. The simplicity of the MNL model structure makes it easy to represent model parameters through the choice probabilities in a closed form.
This is in contrast to other popular choice models in the literature (mixed logit, LC-MNL, nested logit, tree logit, mallows model, etc.) which can have multiple solutions in the estimation problem and thus are not identifiable from the data. To the best of our knowledge, the choice model proposed in this paper is one of the very few nonparametric models that are identifiable from the choice probabilities in a closed-form expression. Moreover, in contrast to the MNL, our model is nonparametric which makes it more powerful in modeling consumer behavior and in prediction tasks.
We show in the paper that our model belongs to the RUM class like most of the other models that are popular in the operations and marketing literature, such as the MNL, the LC-MNL, the ranking-based \citep{farias2013nonparametric}, and the marginal distribution models \citep{mishra2014theoretical}. In the RUM class, each product is associated with a random variable, which represents the stochastic utility that a consumer derives from the product. When a set of products is offered to a consumer, s/he chooses the product that maximizes his/her realized utility. A well-known result in the literature is the representation power theorem by \cite{mcfadden2000mixed} which shows that any RUM choice model can be approximated to arbitrary precision by a mixture of MNL models, also known as the latent class multinomial logit (LC-MNL) model. This theorem implies that the LC-MNL model is a flexible tool for approximating a wide range of RUM models with different structures, and it can be useful for modeling complex choice behavior with latent classes of consumers with different preferences and decision-making processes.
The RUM class can be equivalently described by a distribution over product preference lists which can be referred to as a ranking-based model (or stochastic preference model) as shown by \cite{block1959random}.
This model can capture more complex choice behavior than traditional models such as the MNL model, but it is also more computationally intensive to estimate.
One approach that has been proposed to estimate the ranking-based model from data is the column generation procedure. The column generation is an optimization technique that is used to solve large-scale linear or integer programming problems. The idea is to start with a subset of the possible variables (columns) and iteratively add new variables to the model until a satisfactory solution is found. This approach has been applied to estimate ranking-based models from data in \cite{van2014market}. Similarly, we also exploit the column generation approach while estimating our model (see Section~\ref{sec:model_estimation}). However, we show that the proposed column generation algorithm is less computationally intensive than the one applied to the ranking-based model as we have a smaller number of binary variables and constraints in the optimization subproblems.
Finally, discrete choice models have been a fundamental building block in many research topics in revenue management \citep{kok2007demand, bernstein2015dynamic}, and it continues to be an active area of research. In particular, the assortment optimization problem was addressed under the LC-MNL model \citep{mendez2014branch,rusmevichientong2014assortment,desir2022capacitated}, the ranking-based model \citep{aouad2018approximability,bertsimas2019exact,feldman2019assortment}, the Markov-chain model \citep{feldman2017revenue,desir2020constrained}, and the decision forest model \citep{chen2021assortment}.
In addition, \cite{jagabathula2022personalized} propose the nonparametric DAG-based choice model which is used to run personalized promotions in the grocery retail setting. In the same spirit, in this paper we also investigate the assortment optimization problem under the proposed stochastic set choice model (see Section~\ref{sec:assortment_optimization}) and then describe a fully polynomial-time approximation scheme (FPTAS). To this end, there are several papers in the OM field that provide FPTAS approximations to revenue management problems under different choice models, such as the LC-MNL model \citep{desir2022capacitated,akccakucs2021exact}, the nested logit model \citep{rusmevichientong2009ptas,desir2022capacitated}, exponomial choice model \citep{aouad2022exponomial}, among others.
\subsection{Consider-then-Choose Models}
The concept of consideration sets originated in the marketing and psychology fields and it has been around for several decades with many studies conducted on how consumers form and use consideration sets in their purchasing process. We refer readers to the survey papers by \cite{roberts1997consideration,hauser2014consideration} for more details. In particular, early research has shown that consumers usually make decisions in a two-stage process \citep{swait1987incorporating}. That is, consumers first consider a subset of products (i.e., form a consideration set) and then purchase one of the items in the set. Abundant empirical evidence supports the notion of the consideration set. For example, \cite{hauser1978testing} adapts an information-theoretic viewpoint and shows that a model which incorporates the consideration set concept can explain up to 78\% of the variation in the customer choice. \cite{hauser1990evaluation} empirically investigate the consideration set size for various product categories and report that the mean consideration set size, which is the number of brands that a customer considers before making a choice, is relatively small: 3.9 for deodorant, 4.0 for coffee, and 3.3 for frozen dinner.
Similarly, the choice-based demand model proposed in this paper also models customer choice in two stages where the first stage is used by a customer to form a preselected (or consideration) set. We also find that the size of these consideration sets is relatively small (see Section~\ref{subsec:IRI_performance}) for the set of various product categories based on the real-world grocery sales data \citep{bronnenberg2008database} which is consistent with the existing literature. As a result, it allows us to come up with an FPTAS approximation for the assortment optimization problem under the stochastic set model proposed in this paper.
Researchers have studied various theories and models to understand how consumers form their consideration sets. From the decision theory perspective, consumers will continue to search for new products as long as the gain in utility (or satisfaction) from finding a better product outweighs the cost of the search \citep{ratchford1982cost,roberts1991development}. This is known as the `bounded rationality' model, which assumes that consumers have limited cognitive resources and will make decisions based on a balance between the potential gains and the costs of searching.
However, there are other decision-making models that also try to explain the formation of consideration sets, such as the `satisficing' model, which suggests that consumers will stop searching as soon as they find a product that meets their minimum standards. Additionally, consumer psychology and behavioral economics research also suggest that consumers do not always make decisions based on rational cost-benefit analysis, and other factors such as emotions, habits, and social influences can also play a role in the formation of consideration sets.
Moreover, consumers might be constrained by their cognitive limits and form consideration sets based on simple heuristic rules, such as elimination by aspects \citep{tversky1972elimination}, conjunctive and disjunctive rules \citep{pras1975comparison,brisoux1981evoked,laroche2003decision, gilbride2004choice,jedidi2005probabilistic}, and compensatory rules \citep{keeney1993decisions,hogarth2005simple}.
In this paper, we are taking a more general and flexible perspective on the formation of consideration sets. Rather than focusing on specific rules or heuristics that consumers use to construct their consideration sets, our approach is to model the possibility that a consideration set sampled by a customer can be any subset of products. To this end, our paper is related to the work by \cite{jagabathula2022demand} where the authors model customer choice through the consider-then-choose (CTC) model which is parameterized by the joint distribution over the consideration sets and the rankings. Our model is different from the CTC model in the second step where a customer chooses a product among the considered ones. %
In this paper, we assume that customers are boundedly rational, and thus their physical and cognitive limitations do not allow them to rank the preselected (i.e., considered) products. At a high level, our model implies that once customers have filtered down to a set of alternatives that meet their criteria, they are unable to differentiate further among these products, leading them to make a final decision during the second step in a random way.
The concept of a consideration set has been gaining a lot of attention in the field of operations management.
\cite{feldman2019assortment} propose an algorithm that can be used to find the optimal assortment, taking into account the specific characteristics of the ranking-based model and the small consideration set assumption. \cite{aouad2021assortment} study the assortment optimization under the CTC choice model, where the customer chooses the product with the highest rank within the intersection of her consideration set and the assortment.
In the paper by \cite{wang2018impact}, the authors propose a mathematical model that captures the trade-off between the expected utility of a product and the search cost associated with it.
\cite{chitla2023customers} use the consideration set concept in order to build the structural model and study the multihoming behavior of users in the ride-hailing industry. \cite{gallego2017attention} study the model proposed by \cite{manzini2014stochastic} and show that the related
assortment optimization problem can be solved in polynomial time even with capacity constraints. Interestingly, it was shown by \cite{jagabathula2022demand} that the consideration set approach is particularly advantageous in the grocery retail setting, where we might expect noise in the sales transaction data because of stockouts, as well as in the online platform setting. To this end, there is abundant evidence in the literature that online platforms might not have real-time information on product availability in grocery retail stores, which could be a major cause of stockouts \citep{knight2022disclosing}, despite the AI-enabled technology developed to alleviate this problem \citep{knight2023impact}.
\section{Model Description}
\label{sec:model}
In this section, we formally define the Stochastic Set Model (SSM) and provide a simple illustrative example to better explain the structure of this model. We consider a universe $N$ of $n$ products in addition to the `no-purchase' or `default' option $0$ such that $N \equiv \{ 1,2,\ldots,n \}$. A \emph{menu} or an \emph{assortment} $S$ is a subset in $N$, i.e., $S \subseteq N$. When a set of products $S$ is offered, an \emph{agent} or \emph{a customer} chooses either a product in~$S$ or the `default' option~$0$. Thus, we implicitly assume that the default option is part of every offer set by convention.
Every choice model can be described by a mapping or function $\mathbb{P}$ that takes as input an assortment of products, represented as a subset $S$ in $N$, and outputs a probability distribution over the elements of $S \cup \{ 0 \}$. This probability distribution represents the likelihood of each product in the assortment being chosen by a consumer. Specifically, we let~$\mathbb{P}_j(S)$ denote the probability that an agent chooses product $j \in S$ and $\mathbb{P}_0(S)$ denote the probability that the agent chooses the `default' option. Note that when the assortment is empty, the default option is always chosen, i.e., $\mathbb{P}_0(\emptyset)=1$. More formally, the choice model specifies all the choice probabilities $\{\mathbb{P}_j(S) \colon j \in S^+, S \subseteq N\}$, where we use $S^+$ to denote the set $S \cup \{0\}$. As expected, the choice probabilities satisfy the standard probability laws: $\mathbb{P}_j(S) \geq 0$ for all $j \in S^+$ and $\sum_{j \in S^+} \mathbb{P}_j(S) = 1$ for all $S \subseteq N$. In the interest of exposition, we also let $N^+$ denote $N \cup \{ 0 \}$.
In what follows next, we introduce a novel choice rule, denoted as the \emph{Stochastic Set Model} (SSM), which can be parametrized by the distribution over subsets in $N$. Specifically, an SSM model is defined by a tuple $(\mathcal{C},\lambda)$, where $\mathcal{C}$ is a collection of subsets in $N$ and $\lambda$ is a probability distribution function over the elements in $\mathcal{C}$ such that $\lambda(C) \geq 0$ for all $C \in \mathcal{C}$ and $\sum_{C \in \mathcal{C}} \lambda(C) = 1$. Each set $C \in \mathcal{C}$ denotes a \emph{preselected (or consideration) set}. %
Interestingly, one may represent each preselected set $C$ through the set of preference relations between elements in $N^+$ as shown in the following definition.
\begin{definition}
\label{def:preselected_set_preferences}
Each preselected set $C \in \mathcal{C}$ of the SSM model $(\mathcal{C},\lambda)$ represents a set of preference relations between items in $N^+$ as follows:
\begin{align*}
i \sim j, \quad & \forall \quad i, j \in C, \\
i \succ 0, \quad & \forall \quad i \in C, \\
0 \succ j, \quad & \forall \quad j \notin C.
\end{align*}
\end{definition}
Note that we do not need to specify preference relations between the items which are outside of the preselected set $C$ since those items are dominated by the outside option which is assumed to be always available. However, without loss of generality, one can assume that $i \sim j$ for all $i, j \notin C$.
Before we show how one can compute the probability to choose an item from the assortment $S \subseteq N$ under the SSM model, let us first specify a conditional distribution function $\lambda(\cdot | S)$ which can provide an alternative parametrization of the SSM model and it is defined for each assortment $S$ in the following way.
\begin{definition}
\label{def:random_set_distribution}
A \emph{conditional stochastic set distribution} $\lambda(\cdot|S)$ is defined by a distribution $\lambda$ over subsets in $N$, for every $S \subseteq N$, as follows:
\begin{align*}
\lambda(C' \mid S) = \begin{cases}
\sum_{C \in \mathcal{C}} \lambda(C) \cdot \mathbb{I} \left[ C \cap S = C' \right], & \text{if $C' \subseteq S$},\\
0, & \text{if $C' \not \subseteq S$},
\end{cases}
\end{align*}
where $\mathbb{I} \left[ A \right]$ is an indicator function and it is equal to 1 if condition $A$ is satisfied and 0, otherwise. %
\end{definition}
\DM{Notably, a conditional stochastic set distribution provides an alternative method for modeling the way in which customers form preselected sets under the SSM model. To this end, when offered the set of alternatives $S$, we might either assume that customers sample a preselected set $C \subseteq N$ in accordance with the distribution $\lambda(\cdot)$ and then make a final choice from the set $C \cap S$, or alternatively, we could assume that customers sample a preselected set $C \subseteq S$ in accordance with the conditional distribution
$\lambda(\cdot \mid S)$ and then make a final choice from the set $C$. Both ways to represent the first stage in the customer decision-making process using the SSM model lead to the same choice probabilities and thus, those two representations of the SSM model are equivalent.}
It is straightforward to verify that the conditional stochastic set distribution $\lambda(\cdot|S)$ satisfies the standard conditions such as $\lambda(C'|S) \geq 0$ for all $C' \subseteq N$ and $\sum_{C' \subseteq N} \lambda(C'|S)=1$. In addition, $\lambda(\cdot|S)$ is consistent with the `monotonicity property' where for all $S_1 \subseteq N, S_2 \subseteq S_1$ we have that $\lambda(C'|S_2) \ge \lambda(C'|S_1)$ for all $C' \subseteq S_2$, which follows directly from the definition of the conditional stochastic set distribution $\lambda(\cdot|S)$. More specifically, note that if for any $C \subseteq N$ the condition $C \cap S_1 = C'$ is satisfied then condition $C \cap S_2 = C'$ is also satisfied. Next, we formally define the probability to choose an item $j$ from the assortment $S \subseteq N$ under the SSM model $(\mathcal{C},\lambda)$ by means of the conditional stochastic set distribution $\lambda(\cdot|S)$.
\begin{definition}
Given an SSM model $(\mathcal{C},\lambda)$, the choice probability of product $j$ from an assortment $S$ is
\begin{equation} \label{eq:choice_prob_via_conditional_distribution}
\mathbb{P}^{(\mathcal{C},\lambda)}_j(S)=\sum_{ C' \subseteq S: j \in C'} \frac{\lambda(C'|S)}{\abs{C'}},
\end{equation}
if $j \in S$ and 0, otherwise. %
\end{definition}
It follows from the aforementioned definition that consumers make a choice in two stages. They first sample a preselected subset in $S$ from the distribution function $\lambda(\cdot|S)$ and then randomly choose an item from the preselected set of alternatives. If the preselected set is an empty set then a customer chooses the `default' option. Alternatively, the probability to choose item $j$ from the offer set $S$ can be represented by means of the distribution function $\lambda(\cdot)$ in the following way:
\begin{equation}
\label{eq:choice_prob_via_original_distribution}
\mathbb{P}^{(\mathcal{C},\lambda)}_j(S) =\sum_{ C \subseteq N: j \in C} \frac{\lambda(C)}{\abs{C \cap S}},
\end{equation}
if $j \in S$, and 0, otherwise.
There are at least two ways how one can interpret the proposed SSM model $(\mathcal{C},\lambda)$. On the one hand, we might assume that customers are homogeneous and each of them samples a preselected set $C$ according to the distribution $\lambda$. Alternatively, one might assume that customers are heterogeneous, and thus each customer is represented by a specific preselected set $C$ such that $\lambda(C)$ is a probability mass function of customers of type $C$.
We next provide a simple example to illustrate how one can compute choice probabilities under the SSM model.
\begin{example}
Consider a universe of $n=5$ products, $N=\{ 1,2,3,4,5 \}$, and assume that customers make choices in accordance with a SSM model parametrized by the following distribution function $\lambda(\cdot)$ over the subsets:
\begin{align*}
\lambda \left( \{ 1,3,5 \} \right) & = 0.1, \\
\lambda \left( \{ 1,2,3,4\} \right) & = 0.6, \\ \lambda \left( \{ 3,4,5 \} \right) & = 0.3.
\end{align*}
In other words, the support of the distribution function $\lambda(\cdot)$ consists of three subsets $C_1$, $C_2$, and $C_3$ such that $C_1 = \{ 1,3,5 \}$, $C_2 = \{ 1,2,3,4\} $, and $C_3 = \{ 3,4,5 \}$. One interpretation of the model is that every customer, before making the final choice, samples preselected sets $C_1$, $C_2$, and $C_3$ with probabilities $0.1$, $0.6$, and $0.3$, respectively. Another interpretation implies that our customer population consists of customers of types $1$, $2$, and $3$ who are characterized by preselected sets $C_1$, $C_2$, and $C_3$, respectively. Moreover, the probability mass of customers of the types $1$,$2$, and $3$ is equal to $0.1$, $0.6$, and $0.3$, respectively.
Then we can easily compute the conditional stochastic set distribution $\lambda( \cdot \mid S )$ for the assortment $S = \{ 2,3\}$, \DM{using Definition~\ref{def:random_set_distribution}}, as follows:
\begin{align*}
\lambda \left( \{ 3 \} \mid S \right) = 0.4, \,\, \text{ and }\quad
\lambda \left( \{ 2,3 \} \mid S \right) = 0.6.
\end{align*}
Finally, we can compute the probability to choose items $2$ and $3$ from the offer set $S = \{ 2,3\}$ in the following way:
\begin{align*}
\mathbb{P}^{(\mathcal{C},\boldsymbol{\lambda})}_2(S) & = \frac{
\lambda(\{ 2,3 \} \mid S ) }{|\{ 2,3 \} |} = 0.3,\\
\mathbb{P}^{(\mathcal{C},\boldsymbol{\lambda})}_3(S) & = \frac{
\lambda(\{ 3 \}\mid S)}{|\{ 3 \} |} + \frac{
\lambda(\{ 2,3\mid S \}) }{|\{ 2,3 \} |} = 0.4 + 0.3 = 0.7.
\end{align*}
\end{example}
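To make the two-stage choice rule concrete, the following minimal sketch computes $\lambda(\cdot \mid S)$ and the resulting choice probabilities for the example above; the representation of preselected sets as Python \texttt{frozenset}s and the function names are our own.
\begin{verbatim}
def conditional_distribution(lam, S):
    """lam: dict mapping preselected sets (frozensets) to probabilities; S: offer set.
    Returns lambda(. | S) as a dict over subsets of S (Definition 2)."""
    cond = {}
    for C, p in lam.items():
        key = frozenset(C) & frozenset(S)
        cond[key] = cond.get(key, 0.0) + p
    return cond

def choice_probabilities(lam, S):
    """Choice probabilities of every product in S and of the default option 0
    under the SSM model, via the conditional-distribution formula in the text."""
    cond = conditional_distribution(lam, S)
    probs = {j: 0.0 for j in S}
    probs[0] = cond.get(frozenset(), 0.0)  # empty preselected set -> default option
    for C, p in cond.items():
        for j in C:
            probs[j] += p / len(C)
    return probs

# Reproduces the illustrative example: item 2 -> 0.3, item 3 -> 0.7.
lam = {frozenset({1, 3, 5}): 0.1, frozenset({1, 2, 3, 4}): 0.6, frozenset({3, 4, 5}): 0.3}
print(choice_probabilities(lam, {2, 3}))
\end{verbatim}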
\section{Model Characterization and Identification}
\label{sec:model_characterization}
In this section, we discuss the SSM model from a theoretical viewpoint. In particular, we first show that the model is identifiable from the \YC{collection of choice probabilities} and show how one can compute the model parameters accordingly (see Section~\ref{subsec:identifiability}). We also provide sufficient conditions that need to be imposed on the choice probabilities so that the data generation process is consistent with the SSM model (see Section~\ref{subsec:modeling_by_axioms}). We conclude this section by comparing the proposed model with other RUM class choice models (Section~\ref{subsec:compare_to_RUM_models}).
For ease of notation, we let $N_j \equiv \{ S \subseteq N: j \in S \}$ denote the collection of assortments that include product~$j$. \YC{Also, throughout this section, we let \emph{choice data} refer to the collection of the ground-truth choice probabilities $\{ \mathbb{P}_j(S) : j \in S^+, S \subseteq N \}$.}
\YC{For now, we assume that the exact value of choice probabilities is provided to us, that is, we ignore the finite sample issue.
This is a reasonable assumption if the number of transactions under each assortment is large.
We will relax this assumption in Section~\ref{sec:model_estimation} when estimating the model parameters from real-world grocery retail data.}
\subsection{Identification of the SSM model class}
\label{subsec:identifiability}
Since the SSM model is fully characterized by the distribution $\lambda$ over subsets, the identification of the SSM model reduces to the identification of the distribution~$\lambda$. In what follows, we present a collection of results that provide different ways of how the distribution $\lambda$ can be computed. The first result requires the specification of the probabilities to choose an item $j$ over the offer sets in $N_j$ in order to compute the $\lambda (C)$ for $C \in N_j$.
\begin{theorem}
\label{theorem:inference}
\YC{Suppose that the collection of probabilities to choose product $j$}, i.e., $\{{\mathbb{P}}_j(S) \colon S \in N_j\}$, is available and consistent with an underlying SSM model. Then we can reconstruct the model by uniquely computing $\lambda(C)$, for every $C \in N_j$, as follows
\begin{equation}
\lambda(C)=\sum_{X \in N_j} \mathbb{I} \bigg[ \abs{C \cup X} \ge n-1 \bigg] \cdot n^{\abs{C \cup X}-n+1} (-1)^{\abs{C}+\abs{X}-n+1} \mathbb{P}_j(X). \label{Equation_ConsiderationEqualInferFormula}
\end{equation}
\end{theorem}
\emph{Proof sketch:}
In order to simplify the notion, for each $C \in N_j$, we first define three functions as follows:
\begin{align*}
\chi_C(X)&=\frac{1}{\abs{C \cap X}}, \\
\psi_C(X)&=\mathbb{I}\bigg[\abs{C \cup X} = n-1 \bigg] \cdot (-1)^{\abs{C}+\abs{X}-n+1}, \\
\varphi_C(X)&=\mathbb{I}\bigg[\abs{C \cup X} = n \bigg] \cdot n \cdot (-1)^{\abs{C}+\abs{X}-n+1}.
\end{align*}
Then, we can rewrite the choice probability, defined in Equation~\eqref{eq:choice_prob_via_original_distribution}, by means of the function $\chi_C(X)$ as follows:
\begin{align}
\label{eq: prob_1_thm1}
\mathbb{P}_j(X) =\sum_{C \in N_j} \lambda(C) \chi_C(X).
\end{align}
We also notice that $\psi_C(X)$ and $\varphi_C(X)$ have a similar structure and their summation can be simplified as
\begin{align}
\label{eq: prob_2_thm1}
\psi_C(X) + \varphi_C(X) = \mathbb{I} \bigg[ \abs{C \cup X} \ge n-1 \bigg] \cdot n^{\abs{C \cup X}-n+1} (-1)^{\abs{C}+\abs{X}-n+1}.
\end{align}
Next, we claim that for all $C,C' \in N_j$,
\begin{equation} \label{eq: basis}
\sum_{X \in N_j} \chi_{C'}(X) ( \psi_{C} (X) +\varphi_{C}(X))=
\begin{cases}
1, \text{ if } C'=C, \\
0, \text{ otherwise}.
\end{cases}
\end{equation}
Invoking the claim, Equation~\eqref{Equation_ConsiderationEqualInferFormula} follows immediately, since
\begin{align}
& \sum_{X \in N_j} \mathbb{P}_j(X) \cdot \mathbb{I} \bigg[ \abs{C \cup X} \ge n-1 \bigg] \cdot n^{\abs{C \cup X}-n+1} (-1)^{\abs{C}+\abs{X}-n+1}
\nonumber
\\ = &\sum_{X \in N_j} \sum_{C' \in N_j} \lambda(C') \cdot \chi_{C'}(X) \cdot (\psi_{C}(X)+\varphi_{C}(X)) \qquad \Big [\text{by Equations~\eqref{eq: prob_1_thm1} and \eqref{eq: prob_2_thm1}} \Big] \nonumber \\
= & \sum_{C' \in N_j}\lambda(C') \cdot \sum_{X \in N_j} \chi_{C'}(X) \cdot (\psi_{C}(X)+\varphi_{C}(X)) \nonumber \\
= & \sum_{C' \in N_j}\lambda(C') \cdot \mathbb{I} \left[ C' = C \right] \qquad \Big [\text{by the claim stated above as Equation~\eqref{eq: basis}} \Big] \label{claim: 1} \nonumber \\
= & \lambda(C). \nonumber
\end{align}
Therefore, in order to complete the proof of the Equation~\eqref{Equation_ConsiderationEqualInferFormula}, it is sufficient to prove that claim. Notably, the proof of the identity~\eqref{eq: basis} is quite involved and we thus relegate its proof to the ecompanion (see Section~\ref{subsec:proof_of_theorem_inference}).
In what follows next, we complete the proof of Theorem~\ref{theorem:inference} by showing that the distribution $\lambda$ is unique. First, note that Equation~\eqref{eq: prob_1_thm1} relates probability distribution $\lambda$
over choice sets to the choice probability $\mathbb{P} _j\big( X\big)$ through the system of linear equations which can be represented as $\boldsymbol{y}=A \cdot \boldsymbol{ \lambda}$, where $\boldsymbol{y} = \left(\mathbb{P}_j(X) \right)_{X: j \in X}$ and $\boldsymbol{\lambda} = \left( \lambda(C) \right)_{C: C \in N_j}$ are two vectors of length $2^{n-1}$. Then, Equation~\eqref{Equation_ConsiderationEqualInferFormula} provides another relationship between choice frequencies $ \mathbb{P}_j(X)$ and the model parameters $\lambda$ in a linear form as $\boldsymbol{ \lambda}=B \cdot \boldsymbol{y}$. Given that $A$ is a $2^{n-1} \times 2^{n-1}$ dimensional matrix, proving the uniqueness of $\lambda$ distribution reduces to showing that $\det(A) \ne 0$ and we show it as follows:
\begin{align*}
\boldsymbol{ \lambda}&=B \cdot \boldsymbol{y}= B \cdot A \cdot \lambda \ \ \Rightarrow \ I=B \cdot A \ \ \Rightarrow \ 1 = \det(I)=\det(B) \cdot \det(A) \ \ \Rightarrow \ \det(A)\ne 0.
\end{align*}
We refer the readers to Section~\ref{subsec:proof_of_theorem_inference} for the complete proof of the theorem.\hfill $\square$\\
Theorem~\ref{theorem:inference} allows us to compute the probability mass of each consideration set $C \subseteq N$ such that $C \ne \emptyset$. Then, we can find $\lambda(\emptyset)$ directly by using the equation $\sum_{C \subseteq N} \lambda(C)=1$. It follows from Theorem~\ref{theorem:inference} that the identification of $\lambda(C)$ relies only on access to the probabilities of choosing an arbitrary item $j$ in $C$ under various assortments, i.e., $j$ can be any item in $C$. As a result, for each $C$, there are at least $\abs{C}$ ways to compute $\lambda(C)$. We can also formulate the corollary below, which specifies necessary conditions on the choice probabilities if the data generation process is consistent with the SSM model.
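As a numerical sanity check of Equation~\eqref{Equation_ConsiderationEqualInferFormula}, the sketch below generates choice probabilities from a small ground-truth SSM model via Equation~\eqref{eq:choice_prob_via_original_distribution} and then recovers the distribution $\lambda$ from them; the instance and the helper names are our own.
\begin{verbatim}
from itertools import combinations

def subsets(items):
    """All subsets of `items`, returned as frozensets."""
    items = list(items)
    return [frozenset(c) for r in range(len(items) + 1) for c in combinations(items, r)]

def choice_prob(lam, j, S):
    """P_j(S) under the SSM model `lam` (dict: frozenset -> prob), for j in S."""
    return sum(p / len(C & S) for C, p in lam.items() if j in C)

def recover_lambda(prob_j, j, N):
    """Recover lambda(C) for every C containing j from {P_j(X) : X in N_j},
    using the identification formula of Theorem 1."""
    n = len(N)
    N_j = [X for X in subsets(N) if j in X]
    lam_hat = {}
    for C in N_j:
        total = 0.0
        for X in N_j:
            u = len(C | X)
            if u >= n - 1:
                total += n ** (u - n + 1) * (-1) ** (len(C) + len(X) - n + 1) * prob_j[X]
        lam_hat[C] = total
    return lam_hat

# Small hypothetical ground-truth model on N = {1, 2, 3}.
N = {1, 2, 3}
lam = {frozenset({1, 3}): 0.5, frozenset({2}): 0.2, frozenset({1, 2, 3}): 0.3}
prob_1 = {X: choice_prob(lam, 1, X) for X in subsets(N) if 1 in X}
recovered = recover_lambda(prob_1, 1, N)
# recovered assigns ~0.5 to {1,3}, ~0.3 to {1,2,3}, and ~0 to the other sets containing 1.
\end{verbatim}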
\begin{corollary} \label{Corollary:1}
Consider two products $j$ and $k$. Suppose that the choice data for $j$ and $k$, $\{{\mathbb{P}}_j(S) \colon S \subseteq N_j\}$ and $\{{\mathbb{P}}_k(S) \colon S \subseteq N_k\}$, are consistent with an underlying SSM model. Then for every set $C \subseteq N$ such that $j,k \in C$, the following equation is satisfied:
\begin{align*}
&\sum_{X \in N_j} \mathbb{I} \bigg[ \abs{C \cup X} \ge n-1 \bigg] \cdot n^{\abs{C \cup X}-n+1} (-1)^{\abs{C}+\abs{X}-n+1} \mathbb{P}_j(X) \\
&=\sum_{X \in N_k} \mathbb{I} \bigg[ \abs{C \cup X} \ge n-1 \bigg] \cdot n^{\abs{C \cup X}-n+1} (-1)^{\abs{C}+\abs{X}-n+1} \mathbb{P}_k(X).
\end{align*}
\end{corollary}
Note that this corollary follows directly from Theorem~\ref{theorem:inference}. Alternatively, we can recover the underlying parameters of the SSM model from the
collection of choice probabilities $\{{\mathbb{P}}_0(S) \colon S \subseteq N\}$, i.e., we only need access to the probability of choosing the `outside' option under every assortment $S \subseteq N$. In particular, we have the following result:
\begin{lemma} \label{lemma: outside}
Suppose that the collection of probabilities of choosing the `default' option, i.e., $\{{\mathbb{P}}_0(S) \colon S \subseteq N\}$, is consistent with an underlying SSM model. Then, we have that
\begin{equation}
\lambda(C)=\sum_{X \subseteq C}(-1)^{\abs{C}-\abs{X}} \mathbb{P}_0(N \setminus X). \label{eq: outside option}
\end{equation}
\end{lemma}
Lemma~\ref{lemma: outside} follows from a particular form of the inclusion-exclusion principle stated in~\cite{graham1995handbook}. For any finite set $Z$, if $f \colon 2^Z \to \mathbb{R}$ and $g \colon 2^Z \to \mathbb{R}$ are two real-valued set functions defined on the subsets of $Z$ such that $g(X) = \sum_{Y \subseteq X} f(Y)$, then the inclusion-exclusion principle states that $f(Y) = \sum_{X \subseteq Y} (-1)^{\abs{Y} - \abs{X}} g(X)$.
Our result then follows by taking $f(Y) = \lambda(Y)$ and $g(X)=\mathbb{P}_0(N \setminus X) = \sum_{C \subseteq X} \lambda(C)$. Note that, outside of experimental studies, it might be challenging to observe the probability of choosing the `default' option, since real-world choice data typically do not include this information. For this reason, Theorem~\ref{theorem:inference} is of higher practical importance for identifying the parameters of the SSM model than Lemma~\ref{lemma: outside}, even though Equation~\eqref{eq: outside option} is less involved than Equation~\eqref{Equation_ConsiderationEqualInferFormula}.
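To illustrate Lemma~\ref{lemma: outside} numerically, the following Python sketch (a minimal illustration with hypothetical variable names, not part of our estimation framework) draws a random distribution $\lambda$ over preselected sets, computes the implied no-purchase probabilities $\mathbb{P}_0(S)$, and recovers $\lambda$ via Equation~\eqref{eq: outside option}.
\begin{verbatim}
import itertools
import random

n = 4
N = frozenset(range(n))

def powerset(ground):
    items = list(ground)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in itertools.combinations(items, r)]

# Random distribution lambda over preselected sets C (the empty set included).
sets = powerset(N)
weights = [random.random() for _ in sets]
lam = {C: w / sum(weights) for C, w in zip(sets, weights)}

# No-purchase probability: the default option is chosen iff the sampled C misses S.
def P0(S):
    return sum(p for C, p in lam.items() if not (C & S))

# Recover lambda(C) by inclusion-exclusion, as in the lemma.
def lam_recovered(C):
    return sum((-1) ** (len(C) - len(X)) * P0(N - X) for X in powerset(C))

assert all(abs(lam_recovered(C) - lam[C]) < 1e-9 for C in sets)
\end{verbatim}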
Finally, by combining Theorem~\ref{theorem:inference} and Lemma~\ref{lemma: outside}, we can formulate another set of necessary conditions on the choice frequencies that must be satisfied if the data generation process is consistent with the SSM model.
\begin{corollary} \label{Corollary:2}
Suppose that the choice data $\{{\mathbb{P}}_j(S) \colon S \subseteq N_j\}$ and $\{{\mathbb{P}}_0(S) \colon S \subseteq N\}$ are consistent with an underlying SSM model. Then for every set $C \subseteq N$ such that $j \in C$ the following equation is satisfied:
\begin{align*}
\sum_{X \subseteq C}(-1)^{\abs{C}-\abs{X}} \mathbb{P}_0(N \setminus X)=\sum_{X \in N_j} \mathbb{I} \bigg[ \abs{C \cup X} \ge n-1 \bigg] \cdot n^{\abs{C \cup X}-n+1} (-1)^{\abs{C}+\abs{X}-n+1} \mathbb{P}_j(X).
\end{align*}
\end{corollary}
\subsection{Characterization of the SSM Model through Axioms}
\label{subsec:modeling_by_axioms}
In all the results described above, we assumed that the choice data, i.e., the collection of choice probabilities, is consistent with an underlying SSM model.
To verify that this is indeed the case, we establish a set of
necessary and sufficient conditions that the choice probabilities must satisfy to ensure
consistency. In other words, we now characterize the SSM model through axioms imposed on the collection of choice probabilities. The first axiom relates to the `cannibalization' effect, i.e., the phenomenon whereby the sales or market share of one product is negatively impacted by the introduction or availability of a substitute product.
We denote this axiom as \emph{symmetric cannibalization} (S-cannibalization) and state it as follows.
\textsc{S-cannibalization:} For all offer sets $S \subseteq N$ and $j, k \in S$ such that $j \neq k$: $$\mathbb{P}_{j}(S \setminus \{ k\}) - \mathbb{P}_{j}(S)=\mathbb{P}_{k}(S \setminus \{ j\}) - \mathbb{P}_{k}(S).$$
The S-cannibalization axiom implies that, for all pairs of products $j,k \in N$, the `cannibalization' effect of product $k$ on product $j$ is the same as the `cannibalization' effect of product $j$ on product $k$ for every assortment $S\subseteq N$, i.e., the demand cannibalization effect is symmetric. It follows from the axiom that if product $j$ cannibalizes (resp. does not cannibalize) product $k$ in an assortment, then product $k$ cannibalizes (resp. does not cannibalize) product $j$ in the same assortment.
Our second axiom is rather technical; it imposes restrictions on the probabilities of choosing the `default' option.
\textsc{D-regularity:} For all offer sets $S \subseteq N$, we have that: $$\sum_{X \subseteq S} (-1)^{\abs{S} - \abs{X}} \mathbb{P}_0(N \setminus X) \geq 0.$$
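Both axioms can be checked mechanically on a given collection of choice probabilities. The Python sketch below is a minimal illustration with hypothetical data structures: \texttt{P} maps pairs of an item and an offer set (a \texttt{frozenset}) to $\mathbb{P}_j(S)$, and \texttt{P0} maps offer sets to $\mathbb{P}_0(S)$.
\begin{verbatim}
import itertools

def powerset(ground):
    items = list(ground)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in itertools.combinations(items, r)]

def satisfies_s_cannibalization(P, N, tol=1e-9):
    # P[(j, S)] = probability of choosing j from offer set S (S a frozenset, j in S)
    for S in powerset(N):
        for j, k in itertools.combinations(S, 2):
            lhs = P[(j, S - {k})] - P[(j, S)]
            rhs = P[(k, S - {j})] - P[(k, S)]
            if abs(lhs - rhs) > tol:
                return False
    return True

def satisfies_d_regularity(P0, N, tol=1e-9):
    # P0[S] = probability of choosing the default option under offer set S
    # (N and the keys of P0 are frozensets)
    for S in powerset(N):
        total = sum((-1) ** (len(S) - len(X)) * P0[N - X] for X in powerset(S))
        if total < -tol:
            return False
    return True
\end{verbatim}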
In what follows, we present our main theorem, which characterizes the SSM model through the two axioms defined above.
\begin{theorem} \label{theorem: characterization}
The collection of choice probabilities $\{\mathbb{P}_j(S) \colon j \in S^+, S \subseteq N\}$ is consistent with an SSM model with unique distribution $\lambda$ if and only if it satisfies S-Cannibalization and D-Regularity.
\end{theorem}
The proof is relegated to the e-companion (see Section~\ref{subsec:proof_axiom_theorem}). As can be seen therein, establishing necessity is rather straightforward, but establishing sufficiency is more involved. We prove sufficiency by constructing the distribution $\lambda$; the uniqueness of $\lambda$ then follows directly from Theorem~\ref{theorem:inference}.
Theorem~\ref{theorem: characterization} can be used to verify whether the choice generation process is indeed consistent with an SSM model. It can also be observed that the D-Regularity axiom, when combined with the S-Cannibalization axiom, ensures the well-known \emph{regularity}
property (or \emph{weak rationality}), which plays an important role in the economics literature \citep{rieskamp2006extending}: if $S_1 \subseteq S_2$, then $\mathbb{P}_j(S_1) \geq \mathbb{P}_j(S_2)$.
\begin{lemma} \label{lemma: regularity}
If a choice model $\mathbb{P}$ satisfies S-Cannibalization and D-Regularity then for all subsets $S_1, S_2 \subseteq N$ such that $S_1 \subseteq S_2$ we have that $\mathbb{P}_j(S_1) \geq \mathbb{P}_j(S_2)$ for any $j \in S_1$.
\end{lemma}
We also formulate the lemma below, which states that if the two aforementioned axioms are satisfied, then the `cannibalization' effect of item $k$ on item $j$ diminishes as the assortment is enlarged.
\begin{lemma} \label{lemma: monotonicity}
If a choice model $\mathbb{P}$ satisfies S-Cannibalization and D-Regularity then $\Delta_k \mathbb{P}_j(S_1) \geq \Delta_k \mathbb{P}_j(S_2)$ if $S_1 \subseteq S_2$, where $\Delta_k \mathbb{P}_j(S)=\mathbb{P}_j(S \setminus \{k\}) - \mathbb{P}_j(S)$.
\end{lemma}
In other words, it follows from the lemma that if item $k$ cannibalizes item $j$ in assortment $S_1$, then item $k$ also cannibalizes item $j$ in any assortment $S_2 \subseteq S_1$. Equivalently, if item $k$ does not cannibalize item $j$ in assortment $S_1$, then item $k$ also does not cannibalize item $j$ in any assortment $S_2 \supseteq S_1$.
\subsection{Comparison to the Existing Models}
\label{subsec:compare_to_RUM_models}
A widely used class of choice models in economics and marketing is the random utility maximization (RUM) class \citep{thurstone1927law,block1959random,mcfadden1973conditional}. In the RUM class, each alternative is associated with a random utility. Once the randomness is revealed, agents choose the alternative with the highest utility, i.e., agents maximize their utility. Alternatively, one can represent a RUM model as a probability distribution $\mu$ over the collection of all strict linear orders (i.e., rankings) on $N^+$ \citep{farias2013nonparametric}. Hence, any RUM model can be represented by a ranking-based model and, conversely, any ranking-based model is a RUM model. Many widely used choice models, such as the MNL and mixed-MNL models, also belong to the RUM class. The stochastic set model proposed in this paper is another member of the RUM class, as we show in the lemma below.
\begin{lemma} \label{LemmaRUMinGCCR}
The SSM model class is nested in the RUM class and the reverse does not hold.
\end{lemma}
We prove Lemma~\ref{LemmaRUMinGCCR} by first showing that any SSM model can be represented by a distribution $\mu$ over rankings and then by providing an instance of the RUM class that is not consistent with the SSM model (see Section~\ref{subsec:proof_SS_under_RUM} in the e-companion).
Interestingly, the MNL model is not nested in the SSM model, nor is the SSM model nested in the MNL model. It is straightforward to check that the MNL model satisfies the S-Cannibalization axiom only if its attraction parameters are the same for all items, i.e., the market shares of all items are the same. Moreover, in contrast with the SSM model, the MNL model cannot generate a choice data instance in which item $i$ has a non-zero market share and, at the same time, does not cannibalize item $j$. The symmetric cannibalization property also differentiates the SSM model from the broad class of choice models with a single preference order \citep{manzini2014stochastic,jagabathula2022demand}. Unlike many existing choice rules, the SSM model can explain choice data instances in which item $i$ cannibalizes item $j$ and item $j$ cannibalizes item $k$, but item $i$ does not cannibalize item $k$ (e.g., the MNL, LC-MNL, and Nested Logit models are not consistent with such a data generation process).
\subsection{Discussion of the Main Results}
We first highlight that most nonparametric choice models are not identifiable from the choice data $\{ \mathbb{P}_j (S) : j \in S^+, S \subseteq N \}$ alone. Examples include the ranking-based model \citep{farias2013nonparametric}, the Markov-chain model \citep{blanchet2016markov}, and the decision forest model \citep{chen2022decision}, among others. In particular, \cite{sher2011partial} show that the ranking-based model cannot be identified when $n\ge 4$. Intuitively, the nonidentifiability of most of the aforementioned nonparametric models can be explained by the fact that the parameter space grows much faster with $n$ than the number of available equations that match predicted and actual choice probabilities $\{ \mathbb{P}_j (S) \}_{j \in S^+, S \subseteq N}$ to identify model parameters. For instance, a ranking-based model requires the estimation of $n!$ parameters, which is the number of all possible rankings, while the
number of available equations to identify model parameters is upper bounded by $n \cdot 2^n$, where
$2^n$ is the total number of assortments $S \subseteq N$. In contrast, the number of parameters in the SSM model proposed in this paper is lower than the number of available equations that match predicted and actual purchase probabilities $\{ \mathbb{P}_j (S) \}_{j \in S^+, S \subseteq N}$, i.e., the SSM model requires the estimation of $2^n$ parameters, and we have at least $2^n$ equations to accomplish that. Thus, to the best of our knowledge, the SSM model is one of the most flexible choice rules (i.e., it has the largest number of parameters) that are still identifiable from the choice data alone.
Surprisingly, the parameters of our SSM model can be expressed in closed form via the choice probabilities $\{ \mathbb{P}_j (S) \}_{j \in S^+, S \subseteq N}$. This distinguishes the SSM model from other parametric and nonparametric choice rules. To the best of our knowledge, the MNL model is one of the very few models with a similar property.
Another example is the consider-then-choose (CTC) model studied by \cite{jagabathula2022demand}, where the authors show that the model is partially identified: the marginal distribution over consideration sets can be identified from the choice data. However, the unique identification of this marginal distribution is possible \emph{only} under the assumption that the `default' option is always the least preferred alternative in the preference lists of customers. In contrast, we do not impose any assumption on the choice data in Theorem~\ref{theorem:inference} to achieve identifiability of the SSM model. \cite{jagabathula2022demand} also investigate a special case of the CTC model, called the GCC model, in which all customers follow the same preference order $\sigma$ after sampling a consideration set, and they provide closed-form expressions to estimate its parameters. However, this model is quite restrictive, as it suffers from one-directional cannibalization.
Finally, we emphasize that characterizing choice models through axioms is a common approach in the economics literature, where most papers focus on \emph{parametric} models. Luce's independence of irrelevant alternatives (IIA) axiom \citep{luce2012individual,hausman1984specification} is one of the most popular examples and is frequently used to demonstrate the limitations of the MNL model in both the economics and operations literatures.
A more recent example is provided by \cite{echenique2019general} where the authors extend the Luce model by proposing a set of axioms that relax the IIA property. \cite{manzini2014stochastic} characterize the random choice rule through `asymmetry' and `menu independence' axioms.
Interestingly, in addition to the SSM model, the ranking-based model is one of the very few nonparametric models that are characterized by axioms. In particular, choice data are consistent with a ranking-based model if and only if they satisfy the regularity property \citep{barbera1986falmagne,mcfadden1990stochastic}, which imposes restrictions on the choice probabilities as in Lemma~\ref{lemma: regularity} and can be stated as an axiom.
As a result, our paper also contributes to the growing literature that focuses on axiomatic choice.
\section{Data Model and Estimation Methodology}
\label{sec:model_estimation}
We start this section with a description of the data that one typically faces in real-world applications and then propose our estimation framework. We consider a maximum likelihood estimation (MLE) approach and formulate the MLE problem as a concave maximization problem. We conclude the section by showing how to solve this MLE problem using the expectation-maximization (EM) methodology combined with a column generation algorithm.
\subsection{Data Model}
We assume we have the sales transactions denoted by $\{ (S_t,i_t) \}_{t =1,\ldots,T}$, which consist of purchase records over $T$ periods. A purchase record at time $t$ is characterized by a tuple $(S_t,i_t)$, where $S_t \subseteq N$ denotes the subset of products on offer in period $t$ and $i_t \in S_t \cup \{ 0 \}$ denotes the product purchased in period $t$. In addition, we let $\mathcal{S}$ denote the set of unique assortments observed in the data and let $m$ denote the cardinality of this set, i.e., $m = | \mathcal{S} |$. Then, for each assortment $S \in \mathcal{S}$ and $i \in S^+ \equiv S \cup \{ 0 \}$, we let $\tau(S,i)$ denote the number of times the tuple $(S,i)$ appears in the sales transactions $\{ (S_t,i_t) \}_{t =1,\ldots,T}$.
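For concreteness, the sufficient statistics $\tau(S,i)$ and the set $\mathcal{S}$ can be tabulated directly from the raw transactions; a minimal Python sketch with illustrative variable names:
\begin{verbatim}
from collections import Counter

# transactions: list of tuples (S_t, i_t), where S_t is a frozenset of product ids
# and i_t is a product in S_t or 0 (the 'outside' / no-purchase option).
transactions = [(frozenset({1, 2, 3}), 2),
                (frozenset({1, 2, 3}), 0),
                (frozenset({1, 3}), 3)]

tau = Counter(transactions)                 # tau[(S, i)]: # of times i was chosen under S
assortments = {S for S, _ in transactions}  # unique assortments; m = len(assortments)
\end{verbatim}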
\subsection{Maximum Likelihood Estimation (MLE)}
\label{subsec:MLE}
To estimate the model from data $\{ (S_t,i_t) \}_{t =1,\ldots,T}$, we first write down the following optimization problem with respect to a fixed collection $\bar{\mathcal{C}}$ of preselected sets:
\begin{subequations}
\label{problem:MLE_w_fixed_set}
\begin{alignat}{3}
P^{\text{MLE}}\left( \bar{\mathcal{C}} \right): \quad \underset{\boldsymbol{\lambda} \ge 0, \boldsymbol{Y}}{\text{maximize}} \quad & \mathcal{L}(\boldsymbol{Y}) \\ \text{such that} \quad & \mathbf{A}_S \boldsymbol{\lambda} = \boldsymbol{Y}_{*,S} \qquad \forall S \in \mathcal{S}, \label{constr:1}
\\ & \sum_{C \in \bar{\mathcal{C}}} \lambda_C = 1,
\end{alignat}
\end{subequations}
where $\lambda_C$ is an element of the vector $\boldsymbol{\lambda} \in \mathbb{R}^{|\bar{\mathcal{C}}|}$ and represents the probability of a customer type that preselects the set $C \in \bar{\mathcal{C}}$, $\boldsymbol{Y}_{j,S}$ is an element of the matrix $\boldsymbol{Y}$ and represents the aggregate likelihood of the customer types that would pick alternative $j$ from the offer set $S$, $\boldsymbol{Y}_{*,S} \in \mathbb{R}^{n+1}$ is a column of the matrix $\boldsymbol{Y}$, and the log-likelihood function is represented as follows:
$$\mathcal{L}(\boldsymbol{Y}) \equiv \sum_{S \in \mathcal{S}} \sum_{i \in S^+} \tau(S,i) \cdot \log \left(Y_{i,S}\right),$$
which is a concave function. It can be inferred from the first set of constraints~\eqref{constr:1} that $\mathbf{A}_S$ is a matrix of size $(n+1) \times |\bar{\mathcal{C}}|$. The element in row $j$ and column $C$ of this matrix is equal to $1/|S \cap C|$ if $j \in S \cap C$, and 0 otherwise.
First, it is worth emphasizing that when $\bar{\mathcal{C}}$ includes every subset of $N$ (i.e., $\bar{\mathcal{C}}$ is the power set of $N$), optimization problem~\eqref{problem:MLE_w_fixed_set} solves the MLE problem under the SSM model.
However, when $\bar{\mathcal{C}}$ does not include all subsets of $N$, we refer to the problem $P^{\text{MLE}}\left( \bar{\mathcal{C}} \right)$ specified above as a \emph{restricted} optimization problem. Second, we note that optimization problem~\eqref{problem:MLE_w_fixed_set} is concave, and thus a convex programming solver can be used to find the optimal solution. Alternatively, we exploit an expectation-maximization (EM) algorithm to obtain the optimal solution to the restricted problem $P^{\text{MLE}}\left( \bar{\mathcal{C}} \right)$. We find that this EM algorithm is more efficient than existing general-purpose convex programming solvers.
\subsection{Solving MLE with the EM Algorithm}
\label{subsec:EM}
The EM algorithm is a method for finding the maximum likelihood estimates of parameters in statistical models where the data are incomplete or there are unobserved latent variables. The algorithm proceeds in two steps: the expectation step (E-step), where the expectation of the complete-data log-likelihood is calculated given the current parameter estimates, and the maximization step (M-step), where the parameters are updated to maximize the expected complete-data log-likelihood.
In the context of our MLE problem specified above, the latent variable is the customer type (i.e., the preselected set) associated with each transaction, since customer types are not directly observed in the sales transaction data.
We start our EM algorithm with an arbitrary initial parameter estimate ${\lambda}^{\text{est}}$. Then, we repeatedly apply the ``E'' and ``M'' steps, which are described below, until convergence.
\underline{\emph{E-step:}} If the customer type associated with each transaction were known, we would be able to represent the complete-data log-likelihood function in the following way:
$$\mathcal{L}^{\text{complete}} = \sum_{C \in \bar{\mathcal{C}}} \tau_C \cdot \log \lambda(C) + \text{constant},$$
where $\tau_C$ is the number of transactions made by customers of type $C$. However, in the context of our problem, the customer types are not observed in the data, and we thus replace $\tau_C$ in the aforementioned complete-data log-likelihood function by its conditional expectation $\mathbf{E} \left[ \tau_C \mid {\lambda}^{\text{est}} \right]$, which allows us to obtain $\mathbf{E} \left[ \mathcal{L}^{\text{complete}} \mid {\lambda}^{\text{est}} \right]$. To compute this conditional expectation, we first apply Bayes' rule
\begin{align}
\label{sq:EM_E_step_Bayes}
\mathbb{P}\left( C \mid S,i,{\lambda}^{\text{est}} \right) = \frac{ \mathbb{P} \left( i \mid C,S,\lambda^{\text{est}} \right) \cdot \mathbb{P} \left( C \mid {\lambda}^{\text{est}} \right) }{\sum_{C' \in \bar{\mathcal{C}}} \mathbb{P} \left( i \mid C',S,\lambda^{\text{est}} \right) \cdot \mathbb{P} \left( C' \mid {\lambda}^{\text{est}} \right) } = \frac{ {\mathbb{P} \left( i \mid C,S \right)} \cdot {\lambda}^{\text{est}}(C) }{\sum_{C' \in \bar{\mathcal{C}}} {\mathbb{P} \left( i \mid C',S \right)} \cdot {\lambda}^{\text{est}} (C')},
\end{align}
where $\mathbb{P}\left( C \mid S,i,{\lambda}^{\text{est}} \right)$ is the posterior probability that a purchase of item $i$ from the offer set $S$ was made by a customer of type $C$, given that ${\lambda}^{\text{est}}$ represents the probability mass of the different customer types, and
$\mathbb{P} \left( i \mid C,S \right)$ is the probability that a customer of type $C$ chooses item $i$ from the offer set $S$. Consequently, Equation~\eqref{sq:EM_E_step_Bayes} allows us to compute $\hat{\tau}_C$, the expected number of transactions made by customers of type $C$, in the following way:
$$\hat{\tau}_C = \mathbf{E} \left[ \tau_C \mid {\lambda}^{\text{est}} \right]= \sum_{S \in \mathcal{S}} \sum_{i \in S^+} \tau(S,i) \cdot \mathbb{P}\left( C \mid S,i,{\lambda}^{\text{est}} \right),$$
and we thus compute the expectation of the complete-data log-likelihood function as follows:
\begin{align}
\label{eq:EM_E_step_expected_likelihood}
\mathbf{E} \left[ \mathcal{L}^{\text{complete}} \mid {\lambda}^{\text{est}} \right] = \sum_{C \in \bar{\mathcal{C}}} \hat{\tau}_C \cdot \log \lambda(C) + \text{constant}.
\end{align}
\underline{\emph{M-step:}} By maximizing the expected complete-data log-likelihood function~\eqref{eq:EM_E_step_expected_likelihood} over $\lambda$, we obtain the optimal solution
$\lambda^*(C) = \hat{\tau}_C \big\slash \sum_{C' \in \bar{\mathcal{C}}} \hat{\tau}_{C'}$,
which is used to update $\lambda^{\text{est}}$.
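Putting the two steps together, the following Python sketch implements one E/M update of the restricted problem (a minimal illustration with hypothetical function names; consistently with the definition of $\mathbf{A}_S$, a type-$C$ customer is assumed to pick uniformly at random from $C \cap S$ and to take the default option $0$ when $C \cap S = \emptyset$):
\begin{verbatim}
def choice_prob(i, C, S):
    # P(i | C, S): uniform choice within the intersection of C and S;
    # the default option 0 is chosen when the intersection is empty.
    inter = C & S
    if not inter:
        return 1.0 if i == 0 else 0.0
    return 1.0 / len(inter) if i in inter else 0.0

def em_step(lam_est, tau, types):
    # One E/M update for a fixed collection `types` of preselected sets.
    tau_hat = {C: 0.0 for C in types}
    for (S, i), count in tau.items():
        # E-step: posterior type probabilities via Bayes' rule.
        post = {C: choice_prob(i, C, S) * lam_est[C] for C in types}
        norm = sum(post.values())
        if norm > 0:
            for C in types:
                tau_hat[C] += count * post[C] / norm
    # M-step: renormalize the expected counts.
    total = sum(tau_hat.values())
    return {C: tau_hat[C] / total for C in types}
\end{verbatim}
Iterating \texttt{em\_step} from an arbitrary starting point until the change in the estimate falls below a tolerance yields a solution to the restricted problem $P^{\text{MLE}}\left( \bar{\mathcal{C}} \right)$.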
Finally, we highlight that the EM algorithm has been widely used in the OM field to estimate various choice models such as the mixed-MNL model \citep{train2008algorithms}, the ranking-based model \citep{van2017expectation}, the Markov chain choice model \citep{csimcsek2018expectation}, the consider-then-choose (CTC) model \citep{jagabathula2022demand}, the DAG-based choice models \citep{jagabathula2022personalized}, among others.
\subsection{Preselected Set Discovery Algorithm}
\label{subsec:CG_sub}
In theory, as mentioned above, one can solve the MLE problem by applying the EM algorithm to the optimization problem~$P^{\text{MLE}}\left( \bar{\mathcal{C}} \right)$ with the collection of customer segments equal to the power set of $N$, i.e., $\bar{\mathcal{C}} = 2^N$. However, this approach becomes highly intractable as the number of products $n$ increases, since the number of decision variables grows exponentially with $n$.
Therefore, we propose an alternative way to solve the MLE problem, based on the \emph{column generation} (CG) procedure \citep{bertsimas1997introduction}, which is widely used to solve large-scale optimization problems with linear constraints.
In accordance with the CG framework, instead of solving the full-scale optimization problem $P^{\text{MLE}}(2^N)$ directly, we repeatedly execute the following two steps: (i) find the optimal solution $(\boldsymbol{Y},\boldsymbol{\lambda})$ to the restricted problem $P^{\text{MLE}}(\bar{\mathcal{C}})$ by the EM algorithm described in Section~\ref{subsec:EM}; (ii) solve a subproblem (described below) to find a new column and expand the restricted problem $P^{\text{MLE}}(\bar{\mathcal{C}})$ by appending that column. Notice that each column in the complete problem $P^{\text{MLE}}(2^N)$ corresponds to a preselected set. Therefore, when we introduce a new column $C^*$ to the restricted problem $P^{\text{MLE}}(\bar{\mathcal{C}})$, we equivalently augment the `support' of the distribution $\lambda$ from $\bar{\mathcal{C}}$ to $\bar{\mathcal{C}} \cup \{ C^* \}$. We repeat these two steps until we reach optimality or meet a stopping criterion such as a runtime limit. In what follows, we provide the details on how we augment the support of the distribution $\lambda$ by solving a subproblem, which completes the description of our MLE framework for calibrating the SSM model.
Let $(\boldsymbol{Y},\boldsymbol{\lambda})$ be the optimal primal solution of the optimization problem $P^{\text{MLE}}\left( \bar{\mathcal{C}} \right)$, which is defined for a fixed collection $\bar{\mathcal{C}}$ of preselected sets. Then, let $(\boldsymbol{\alpha},\beta)$ be the dual solution of the optimization problem $P^{\text{MLE}}\left( \bar{\mathcal{C}} \right)$, where $\boldsymbol{\alpha} = \left( \boldsymbol{\alpha}_S \right)_{S \in \mathcal{S}}$ corresponds to the first set of constraints, such that $\boldsymbol{\alpha}$ is a matrix of size $(n+1) \times |\mathcal{S}|$ and $\boldsymbol{\alpha}_S$ is an $(n+1)$-dimensional vector, and $\beta$ corresponds to the normalization constraint. We then solve the following CG subproblem:
\begin{align}
\label{problem:CG_subproblem}
\max_{C \subseteq N} \left[ \sum_{S \in \mathcal{S}} \boldsymbol{\alpha}_S^T \mathbf{A}_{S,C} + \beta \right] = \max_{C \subseteq N} \left[ \sum_{S \in \mathcal{S}} \left( \alpha_{S,0} \cdot \mathbb{I}\left[ C \cap S = \emptyset \right] + \sum_{i \in S} \frac{ \alpha_{S,i} \cdot \mathbb{I}\left[ i \in C \right] }{| C \cap S |} \right) + \beta \right],
\end{align}
where $\mathbf{A}_{S,C}$ is the column (i.e., the $(n+1)$-dimensional vector) of the matrix $\mathbf{A}_{S}$ that corresponds to the preselected set $C$, and $\alpha_{S,i}$ is the $i$-th element of the vector $\boldsymbol{\alpha}_S$. We also use the convention `$0/0 = 1$' when formulating the aforementioned optimization problem. We then add the optimal solution $C^*$ of the subproblem~\eqref{problem:CG_subproblem} to the set $\bar{\mathcal{C}}$, which is the support of $\lambda$, if and only if the optimal objective value of that subproblem is positive, which indicates that adding $C^*$ to the set $\bar{\mathcal{C}}$ improves the objective value of $P^{\text{MLE}}\left( \bar{\mathcal{C}} \right)$. Consequently, if the optimal objective value of that subproblem is not positive, then we stop our algorithm, since this indicates that the current set $\bar{\mathcal{C}}$ is the support of the distribution $\lambda$ that optimizes the log-likelihood under the SSM model, i.e., $(\bar{\mathcal{C}},\boldsymbol{\lambda})$ is the optimal solution to the aforementioned MLE problem.
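For a small product universe, the subproblem can also be solved by brute-force enumeration of the candidate sets $C \subseteq N$; the Python sketch below (illustrative names; \texttt{alpha[S]} stores the dual vector $\boldsymbol{\alpha}_S$ indexed by $0$ and the products in $S$) evaluates the reduced cost in~\eqref{problem:CG_subproblem} directly, which is useful as a correctness check but does not scale.
\begin{verbatim}
import itertools

def reduced_cost(C, alpha, assortments, beta):
    # Objective of the CG subproblem for a candidate preselected set C.
    value = beta
    for S in assortments:
        inter = C & S
        if not inter:
            value += alpha[S][0]          # the default option is chosen under S
        else:
            value += sum(alpha[S][i] for i in inter) / len(inter)
    return value

def best_new_column(alpha, assortments, beta, N):
    # Exhaustive search over all 2^n candidate sets (small n only).
    candidates = [frozenset(c) for r in range(len(N) + 1)
                  for c in itertools.combinations(N, r)]
    return max(candidates, key=lambda C: reduced_cost(C, alpha, assortments, beta))
\end{verbatim}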
We now show how to solve subproblem~\eqref{problem:CG_subproblem} at scale, for which we exploit a mixed-integer linear programming (MILP) framework. To begin with, we introduce two sets of binary decision variables. For each $i \in N$, we let $x_i$ denote a binary decision variable that is equal to $1$ if product $i \in C$, and $0$ otherwise. For each $S \in \mathcal{S}$, we let $z_S$ be a binary variable that is equal to 1 if $| C \cap S | = 0$, and $0$ otherwise, i.e., $z_S$ is equal to 1 if and only if a customer of type $C$ chooses the `outside' option when offered the set of alternatives $S$. Then, we add continuous decision variables $u_S$, for all $S \in \mathcal{S}$, and $q_{S,i}$, for all $S \in \mathcal{S}$ and $i \in S$, which are designed to represent the expression $\mathbb{I}\left[i \in C \right] / | C \cap S |$ of the subproblem~\eqref{problem:CG_subproblem} in a linear way.
Next, we formulate the MILP to solve the CG subproblem~\eqref{problem:CG_subproblem}, in which we can ignore the constant $\beta$, as follows:
\begin{subequations}
\label{problem:CG_sub_by_IP}
\begin{alignat}{3}
P^{\text{CG-sub}}(\boldsymbol{\alpha}): \quad \underset{\mathbf{x},\mathbf{z},\mathbf{u},\mathbf{q}}{\text{maximize}} \quad & \sum_{S \in \mathcal{S}} \left[ \alpha_{S,0} \cdot z_S + \sum_{i \in S} \alpha_{S,i} \cdot q_{S,i} \right], \quad
\\ \text{such that} \quad & 0 \leq q_{S,i} \leq x_i, && \forall S \in \mathcal{S}, i \in S, \label{constraint_a_CG_sub_by_IP}\\
& 0 \leq q_{S,i} \leq u_S, && \forall S \in \mathcal{S}, i \in S,\\
& u_S + x_i \leq q_{S,i} + 1, && \forall S \in \mathcal{S}, i \in S,\\
& z_S + \sum_{i \in S} q_{S,i} = 1, && \forall S \in \mathcal{S}, \label{constraint:CG_sub_IP_4}\\
& u_S\in [1/n,1], \quad && \forall S \in \mathcal{S},\\
& x_i \in \{ 0,1 \}, \quad && \forall i \in N,\\
& z_S \in \{ 0,1 \}, \quad && \forall S \in \mathcal{S},\\
& q_{S,i} \in [0,1], \quad && \forall S \in \mathcal{S}, i \in S, \label{constraint_i_CG_sub_by_IP}
\end{alignat}
\end{subequations}
where the first three sets of constraints ensure that $q_{S,i}=x_i \cdot u_S$ for all $S \in \mathcal{S}$ and $i \in S$. The fourth and fifth sets of constraints then ensure that $z_S = \mathbb{I}\left[ C \cap S = \emptyset \right]$ and that $u_S = 1/|C \cap S|$ whenever $C \cap S \neq \emptyset$, so that $q_{S,i}$ indeed represents the expression $\mathbb{I}\left[i \in C \right] / | C \cap S |$ of the subproblem~\eqref{problem:CG_subproblem} in a linear way.
Overall, the proposed algorithm consists of solving a finite sequence of MILPs. The size of each MILP scales in $n+m$ in the number of variables and in $n \cdot m$ in the number of constraints, where $n$ is the number of items in a product universe and $m$ is the number of unique assortments in the sales data. Thus, our algorithm is more tractable than the market discovery algorithm proposed by \cite{van2014market} for the ranking-based model. The latter CG algorithm scales in $n^2 + m$ in the number of variables and in $n^3 + nm$ in the number of constraints. We also test empirically the efficiency of our estimation framework in comparison with the one proposed for the ranking-based model (see Section~\ref{subsec:IRI_performance}).
\section{The Assortment Problem, Hardness Result, and Dynamic Program}
\label{sec:assortment_optimization}
In this section, we consider the problem of finding the optimal assortment under the SSM model. We first formulate the assortment optimization problem and then show that this problem is NP-hard, which leads us to develop a fully polynomial-time approximation scheme (FPTAS) for it based on a dynamic program.
\subsection{Assortment Optimization Formulation and Hardness Result}
Throughout this section, we let $r_i > 0$ denote the revenue of a product $i \in N$. Without loss of generality, we assume $r_1 \geq r_2 \geq \ldots \geq r_n > 0$. For a given customer type associated with a preselected set $C$, we let $\text{Rev}_C(S)$ denote its expected revenue under assortment $S$. Thus, we have
\begin{align}
\label{def:revenue_of_a_customer_type}
\text{Rev}_C(S) = \begin{cases}
{ \sum_{i \in C \cap S} r_i}/{| C \cap S |}, & \text{if $|C \cap S| \geq 1$},\\
0, & \text{if $|C \cap S| = 0$}.
\end{cases}
\end{align}
Then, $\sum_{C \in \mathcal{C}} \lambda(C) \cdot \text{Rev}_C(S)$ represents the expected revenue under the SSM model $(\mathcal{C},\lambda)$. Formally,
the assortment optimization problem under the SSM model is formulated as follows:
\begin{align}
\label{problem:AO}
\max_{S \subseteq N} \left\lbrace \sum_{C \in \mathcal{C}} \lambda(C) \cdot \text{Rev}_C (S) \right\rbrace.
\end{align}
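As a quick illustration, the objective of~\eqref{problem:AO} can be evaluated directly for any candidate assortment; the Python sketch below (illustrative names) also includes an exhaustive search that is useful only as a correctness check for small instances.
\begin{verbatim}
import itertools

def expected_revenue(S, lam, r):
    # Objective of the assortment problem: lam maps preselected sets (frozensets)
    # to probabilities, r maps products to revenues.
    total = 0.0
    for C, prob in lam.items():
        inter = C & S
        if inter:                          # Rev_C(S) = 0 when C and S do not intersect
            total += prob * sum(r[i] for i in inter) / len(inter)
    return total

def brute_force_optimum(N, lam, r):
    # Exponential in |N|; meant only as a baseline for small instances.
    candidates = (frozenset(c) for k in range(len(N) + 1)
                  for c in itertools.combinations(N, k))
    return max(candidates, key=lambda S: expected_revenue(S, lam, r))
\end{verbatim}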
We first characterize the complexity of the assortment optimization problem in the following theorem.
\begin{theorem}
\label{thm:AO_NP_hardness}
The assortment optimization problem under the SSM model is NP-hard.
\end{theorem}
We prove the aforementioned theorem by constructing a reduction from the vertex cover problem, which is known to be NP-hard \citep{garey1979computers}. We relegate the details to the e-companion (see Section~\ref{subsec:proof_AO_NP_hardness}). We further characterize the complexity of the assortment problem by providing a fully polynomial-time approximation scheme (FPTAS) for the case in which the number of customer types is fixed.
\subsection{A Fully Polynomial-Time Approximation Scheme (FPTAS): Dynamic Program}
In this section, our main goal is to provide an FPTAS for the assortment optimization problem under the SSM model. In the context of our problem, an FPTAS is an algorithm that, for any $\epsilon > 0$, finds a solution to the assortment problem under the SSM model within a factor of $(1-\epsilon)$ of the optimal one, with a runtime that is polynomial in the input size of the problem and in $1/\epsilon$. In other words, for any value of $\epsilon>0$, the FPTAS can find a solution that is arbitrarily close to the optimal one in a reasonable amount of time.
Throughout this section, let us assume that the SSM model is characterized by $k$ customer types, i.e., the SSM model is represented by the parameters $(\mathcal{C},\boldsymbol{\lambda})$ where $\mathcal{C} = \{ C_1,\ldots,C_k \}$ and $\lambda(C_m) = \lambda_m > 0$ only for $m \in K \equiv \{ 1,2,\ldots,k \}$, and $\sum_{m \in K} \lambda_m = 1$.
Then, let $\mathbf{x} \in \{ 0,1 \}^n$ represent the assortment decision such that $x_i = \mathbb{I}\left[ i \in S \right]$ for all $i\in N$. For each $m \in K$, we also define the functions $g_m(\mathbf{x}) = \sum_{i=1}^n r_i \cdot \mathbb{I}\left[ i \in C_m \right] \cdot x_i$ and $h_m(\mathbf{x}) = \sum_{i=1}^n \mathbb{I}\left[ i \in C_m \right] \cdot x_i$ so that we can represent the assortment optimization problem under the stochastic set model $(\mathcal{C},\boldsymbol{\lambda})$ as follows:
\begin{align}
\label{problem:AO_by_DP}
\max_{\mathbf{x} \in \{ 0,1 \}^n} \quad R(\mathbf{x}) : = \sum_{m \in K} \lambda_m \cdot f(g_m(\mathbf{x}),h_m(\mathbf{x})),
\end{align}
where $f: \mathbb{N}_+ \times \mathbb{N}_+ \rightarrow \mathbb{R}_+$ is defined in the following way:
\begin{align*}
f(a,b) = \begin{cases}
{ a}/{ b}, & \text{if $b > 0$},\\
0, & \text{if $b = 0$}.
\end{cases}
\end{align*}
In what follows, we first show that if $\mathbf{r} = (r_i)_{i \in N}$ is integral then the optimization problem~\eqref{problem:AO_by_DP} can be solved in pseudo-polynomial time by a dynamic program (see Section~\ref{subsubsec:AO_FPTAS_integral_coefficient}). Next, we relax the assumption that $\mathbf{r}$ is integral, i.e., $r_i$
can now take any positive real number for all $i \in N$ (see Section~\ref{subsubsec:FPTAS_rounding}), and then quantify the approximation performance after we round the values of the vector $\mathbf{r}$ (see Section~\ref{subsubsec:FPTAS_approximation}). We conclude this section with a discussion (see Section~\ref{subsubsec:FPTAS_discussion}).
\subsubsection{A pseudo-polynomial-time approach when \textbf{r} is integral.} \label{subsubsec:AO_FPTAS_integral_coefficient}
We first make several observations with respect to the functions $g_m$ and $h_m$ defined above, and then formally present a dynamic program to solve the assortment problem when \textbf{r} is integral.
{\bfseries Preliminary Observations.} One way to solve the aforementioned assortment problem is to sequentially decide whether to add products $1,2,\ldots,n$ to the assortment. Suppose we have already decided, for each of the first $i-1$ products, whether to add it to the assortment, and we now need to decide whether to add product $i$. Notice that, for a given customer type $m$, if product $i$ is in the preselected set $C_m$ and we decide to add product $i$ to the assortment, then the function $g_m$ increases by $r_i$ units and the function $h_m$ increases by one unit. This argument implies that both $g_m$ and $h_m$ are non-decreasing as we make these sequential decisions. Notably, knowing the values of the functions $g_m$ and $h_m$ is sufficient to compute $\text{Rev}_{C_m}$.
In addition, these two functions jointly store the information needed to compute the revenue function $\text{Rev}_{C_m}$ after making the assortment decisions for items $1,...,i-1$, since
\begin{align*}
g_m = \underbrace{\sum_{j=1}^{i-1} r_j \cdot \mathbb{I} [j \in C_m] \cdot x_j}_{\text{for products $1,...,i-1$}} + \sum_{j=i}^{n} r_j \cdot \mathbb{I}[j \in C_m] \cdot x_j \quad \text{ and } \quad
h_m = \underbrace{\sum_{j=1}^{i-1} \mathbb{I} [j \in C_m] \cdot x_j}_{\text{for products $1,...,i-1$}} + \sum_{j=i}^{n} \mathbb{I}[j \in C_m] \cdot x_j.
\end{align*}
Therefore, we let the values of the functions $(g_m,h_m)_{m \in K}$ define the state space of the dynamic program, formulated below, to solve the optimization problem~\eqref{problem:AO_by_DP}.
{\bfseries The state space and value function.}
We let $A_m$ and $B_m$ denote the state space such that for any assortment decision $\mathbf{x}$ we have $g_m(\mathbf{x}) \in A_m$ and $h_m(\mathbf{x}) \in B_m$, i.e., $A_m = \{ 0,1,2,\ldots, \sum_{i=1}^{|C_m|} r_i \}$ and $B_m = \{ 0,1,2,\ldots,|C_m| \}$. In accordance with our preliminary observations above, we next define the value function (i.e., revenue-to-go function) and then describe the dynamic program that characterizes the sequential decision-making of adding products to the assortment.
\begin{align*}
\textit{Value function: }J_i \left( a_1,b_1,\ldots,a_k,b_k \right), \quad \forall \text{$(a_1,b_1,\ldots,a_k,b_k) \in \prod_{m=1}^k A_m \times B_m$, $i \in N \cup \{n+1\}$}.
\end{align*}
Note that the revenue-to-go function $J_i \left( a_1,b_1,\ldots,a_k,b_k \right)$ represents the maximum expected revenue that we can generate given that the decisions for products $\{ 1,...,i-1 \}$ have already been made and contribute the values $a_m$ and $b_m$ to the functions $g_m$ and $h_m$, respectively, for all $m \in K$.
{\bfseries Recursive equation.} First, for all $m \in K$ and $i \in N$, we let $\delta_{m,i}$ denote the binary parameter which is equal to 1 if $i \in C_m$ and 0, otherwise, i.e., $\delta_{m,i} = \mathbb{I}\left[i \in C_m\right]$. Next, for all $m \in K$ and $i \in N$, we introduce the parameter $r_{m,i}$ such that $r_{m,i} = \delta_{m,i} \cdot r_i$. Then, the revenue-to-go function satisfies the following recursive relation:
\begin{equation}
\label{DP:recursive}
J_i(a_1,b_1,\ldots, a_k,b_k) = \max \bigg[ \underbrace{J_{i+1}(a_1,b_1,\ldots,a_k,b_k)}_{\text{not to include product $i$}} , \underbrace{J_{i+1}(a_1 + r_{1,i} , b_1 + \delta_{1,i},\ldots,a_k + r_{k,i} ,b_k + \delta_{k,i})}_{\text{to include product $i$}} \bigg],
\end{equation}
where the first argument on the right-hand side corresponds to the revenue-to-go function when the decision is not to include product $i$ in the assortment. In this case, the functions $g_m$ and $h_m$ do not collect additional values beyond what they have already collected from products $\{1,\ldots,i-1 \}$. The second argument on the right-hand side of the aforementioned recursive relation corresponds to the revenue-to-go function when the decision is to include product $i$ in the assortment. In this case, as discussed previously, given that $g_m$ and $h_m$ have collected values $a_m$ and $b_m$ from products $\{1,\ldots,i-1 \}$, the value of $g_m$ increases by $r_i$ from $a_m$ to $a_m + r_i$ if and only if $i \in C_m$. Similarly, the value of $h_m$ increases by one unit from $b_m$ to $b_m + 1$ if and only if $i \in C_m$, for all $m \in K$.
{\bf Boundary conditions.}
The value of the revenue-to-go function at the terminal stage $n+1$ is simply the expected revenue in Equation~\eqref{problem:AO_by_DP}, which is fully determined by the values $a_m$ and $b_m$, for all $m \in K$, that enter the function $f$.
The revenue-to-go function at the terminal stage $n+1$ is therefore defined as follows:
\begin{align}
J_{n+1}(a_1,b_1,\ldots,a_k,b_k) = \sum_{m \in K} \lambda_m \cdot f(a_m,b_m), \qquad \text{for $(a_1,b_1,\ldots,a_k,b_k) \in \prod_{m=1}^k A_m \times B_m$}.
\end{align}
Furthermore, the states cannot take values outside the state space defined above, which leads to another boundary condition: for any $i \in \{1,2,\ldots,n+1\}$, $J_{i}(a_1,b_1,\ldots,a_k,b_k) = -\infty$ if $(a_1,b_1,\ldots,a_k,b_k) \notin \prod_{m=1}^k A_m \times B_m$.
{\bf Solving the dynamic program with polynomial runtime.} We solve the dynamic program specified above using backward induction. We first obtain the value of the revenue-to-go function $J_n(a_1,b_1,\ldots,a_k,b_k)$ for all $(a_1,b_1,\ldots,a_k,b_k) \in \prod_{m=1}^k A_m \times B_m$ when $i=n$ using the recursive equation specified above and the values of the revenue-to-go function at the terminal state $n+1$. We then obtain the value of the revenue-to-go function $J_{n-1}(a_1,b_1,\ldots,a_k,b_k)$ for all $(a_1,b_1,\ldots,a_k,b_k) \in \prod_{m=1}^k A_m \times B_m$ when $i = n-1$ based on the values of the revenue-to-go function computed during the previous iteration. We continue this backward induction until $i=1$ and, notably, $J_1(0,0,\ldots,0,0)$ is exactly the optimal expected revenue in the assortment optimization problem~\eqref{problem:AO_by_DP}. Finally, we represent the total runtime when solving the aforementioned dynamic program as follows:
\begin{align*}
\text{DP runtime} = n \cdot \prod_{m=1}^k |A_m| \cdot |B_m| = O \left( n \cdot \left( R_{\max} \cdot C_{\max} \right)^k \right),
\end{align*}
where $C_{\max} = \max_{m \in K} | C_m |$ and $R_{\max} = \sum_{i=1}^{C_{\max}} r_i$. Thus, the runtime is polynomial in $C_{\max}$, $R_{\max}$, and $n$, given that $k$ is fixed.
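A compact way to implement this backward induction is via memoized recursion over the state $(a_1,b_1,\ldots,a_k,b_k)$. The Python sketch below (illustrative names; integral revenues assumed, with products indexed $1,\ldots,n$) returns $J_1(0,\ldots,0)$.
\begin{verbatim}
from functools import lru_cache

def solve_ssm_assortment(r, types, lam):
    # r: list of integral revenues r[0..n-1] for products 1..n;
    # types: list of preselected sets C_1,...,C_k (sets of product indices 1..n);
    # lam: list of weights lambda_1,...,lambda_k. Returns J_1(0,...,0).
    n, k = len(r), len(types)

    def f(a, b):
        return a / b if b > 0 else 0.0

    @lru_cache(maxsize=None)
    def J(i, state):
        if i == n + 1:                      # boundary condition at the terminal stage
            return sum(lam[m] * f(state[2 * m], state[2 * m + 1]) for m in range(k))
        skip = J(i + 1, state)              # do not include product i
        new_state = list(state)             # include product i: update (a_m, b_m) if i in C_m
        for m in range(k):
            if i in types[m]:
                new_state[2 * m] += r[i - 1]
                new_state[2 * m + 1] += 1
        return max(skip, J(i + 1, tuple(new_state)))

    return J(1, (0,) * (2 * k))
\end{verbatim}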
\subsubsection{Relaxing the integrality assumption of \textbf{r}.}
\label{subsubsec:FPTAS_rounding} We next let $r_i$ be any real positive number, for all $i \in N$. Then, we let $R^*$ be the optimal objective value of the assortment optimization problem~\eqref{problem:AO_by_DP} under this assumption and emphasize again that our main goal when constructing the FPTAS is to find a solution $\mathbf{x}$ to the aforementioned problem such that $R(\mathbf{x}) \geq (1 - \epsilon) \cdot R^*$ given an arbitrary precision $\epsilon > 0$. In order to achieve this goal, we first discretize the value of $r_i$ as follows:
\begin{align*}
\tilde{r}_i = \bigg\lfloor \frac{r_i}{r_n} \cdot \frac{1}{\epsilon} \bigg\rfloor, \qquad \forall i \in N.
\end{align*}
Similarly to Section~\ref{subsubsec:AO_FPTAS_integral_coefficient} above, we introduce the following definitions: $\tilde{r}_{m,i} = \tilde{r}_i \cdot \delta_{m,i}$, $\tilde{A}_m = \{ 0,1,2,\ldots,\sum_{i=1}^{|C_m|} \tilde{r}_i \}$, and $\tilde{g}_m(\mathbf{x}) = \sum_{i=1}^n \tilde{r}_{m,i} \cdot x_i$. Thus, for any assortment decision $\mathbf{x}$, we have that $\tilde{g}_m(\mathbf{x}) \in \tilde{A}_m$. Note that we do not need to discretize $h_m(\mathbf{x})$, as this function is already integer-valued for all $m \in K$. Next, we introduce the discretized version of the assortment optimization problem under the SSM model:
\begin{align}
\label{problem:AO_by_DP_discretize}
\max_{\mathbf{x} \in \{ 0,1 \}^n} \quad \tilde{R}(\mathbf{x}) : = \sum_{m \in K} \lambda_m \cdot f(\tilde{g}_m(\mathbf{x}),h_m(\mathbf{x})),
\end{align}
which is similar to the assortment optimization problem~\eqref{problem:AO_by_DP} except that we replace the first argument in the function $f(\cdot,\cdot)$ by its discretized version $\tilde{g}_m$ for each $m \in K$. We can now solve the discretized assortment optimization problem by a dynamic program in the same spirit as in Section~\ref{subsubsec:AO_FPTAS_integral_coefficient} and we thus omit the details in the interest of space. Then, the state space is defined by $\prod_{m=1}^k \tilde{A}_m \times B_m$ and the revenue-to-go (i.e., value) function is denoted by $\tilde{J}_i(a_1,b_1,\ldots,a_k,b_k)$. Similarly to what we observe in Section~\ref{subsubsec:AO_FPTAS_integral_coefficient}, the revenue-to-go function $\tilde{J}_i(a_1,b_1,\ldots,a_k,b_k)$ can be represented in the recursive way:
\begin{equation}
\label{DP:recursive_discretized}
\tilde{J}_i(a_1,b_1,\ldots,a_k,b_k) = \max \bigg[ \tilde{J}_{i+1}(a_1,b_1,\ldots,a_k,b_k) , \tilde{J}_{i+1}(a_1 + \tilde{r}_{1,i} , b_1 + \delta_{1,i}, \ldots,a_k + \tilde{r}_{k,i} ,b_k + \delta_{k,i}) \bigg].
\end{equation}
As discussed, the first and the second arguments on the right-hand side of the aforementioned recursive equation correspond to the revenue-to-go function when the decision is not to include product $i$ in the assortment and to include product $i$ in the assortment, respectively. Then, the revenue-to-go function $\tilde{J}_i(a_1,b_1,\ldots,a_k,b_k)$ satisfies the following boundary conditions:
\begin{align*}
\tilde{J}_{n+1}(a_1,b_1,\ldots,a_k,b_k) = \sum_{m \in K} \lambda_m \cdot f(a_m,b_m), \qquad \forall (a_1,b_1,\ldots,a_k,b_k) \in \prod_{m=1}^k \tilde{A}_m \times B_m.
\end{align*}
Also, to exclude the states outside $\prod_{m=1}^k \tilde{A}_m \times B_m$ from consideration, we set $\tilde{J}_{i}(a_1,b_1,\ldots,a_k,b_k) = -\infty$ for $(a_1,b_1,\ldots,a_k,b_k) \notin \prod_{m=1}^k \tilde{A}_m \times B_m$, for any $i\in \{1,2,\ldots,n+1\}$. As in Section~\ref{subsubsec:AO_FPTAS_integral_coefficient}, we can solve this dynamic program using backward induction. Finally, the total runtime to solve the aforementioned dynamic program is $ n \cdot \prod_{m=1}^k | \tilde{A}_m | \cdot |B_m| = O \left( n \cdot \tilde{R}_{\max}^k \cdot C_{\max}^k \right)$, where $\tilde{R}_{\max} = \sum_{i=1}^{C_{\max}} \tilde{r}_i$.
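In practice, the discretized problem can be solved by re-using the \texttt{solve\_ssm\_assortment} sketch above on the rounded revenues; a minimal illustration (the maximizing assortment, recoverable by standard backtracking through the same recursion, is the solution $\tilde{\mathbf{x}}^*$ analyzed next):
\begin{verbatim}
import math

def fptas_value(r, types, lam, eps):
    # Round the revenues as in this subsection and solve the rounded instance with
    # the dynamic program sketched earlier; returns the optimal value of the
    # discretized problem.
    r_n = min(r)                            # r_n is the smallest revenue
    r_tilde = [math.floor(r_i / (r_n * eps)) for r_i in r]
    return solve_ssm_assortment(r_tilde, types, lam)
\end{verbatim}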
\subsubsection{Approximation results.}\label{subsubsec:FPTAS_approximation}
After solving the dynamic program described above, we obtain the vector $\tilde{\mathbf{x}}^*$, the optimal solution to the discretized assortment optimization problem~\eqref{problem:AO_by_DP_discretize}. Our goal now is to analyze how good $\tilde{\mathbf{x}}^*$ is as an approximation to $\mathbf{x}^*$, the optimal solution to the assortment optimization problem~\eqref{problem:AO_by_DP}. We begin by formulating a lemma that provides a uniform bound on the difference between the original revenue function and the scaled discretized revenue function.
\begin{lemma}
\label{lemma:FPTAS_uniform_bound}
For any assortment solution $\mathbf{x} \in \{ 0,1 \}^n$, we have $
0 \leq R(\mathbf{x}) - r_n \epsilon \cdot \tilde{R}(\mathbf{x}) \leq r_n \epsilon $, where $R(\mathbf{x})$ and $\tilde{R}(\mathbf{x})$ are defined as in the optimization problems~\eqref{problem:AO_by_DP} and \eqref{problem:AO_by_DP_discretize}, respectively.
\end{lemma}
{\it Proof:} We first represent $R(\mathbf{x}) - r_n \epsilon \cdot \tilde{R}(\mathbf{x})$ as follows
\begin{align*}
R(\mathbf{x}) - r_n \epsilon \cdot \tilde{R}(\mathbf{x}) = \sum_{m \in K} \lambda_m \cdot \bigg[f(g_m(\mathbf{x}),h_m(\mathbf{x})) - r_n \epsilon \cdot f(\tilde{g}_m(\mathbf{x}),h_m(\mathbf{x}))\bigg].
\end{align*}
Then, in order to prove this lemma it suffices to show that for every $m \in K$ we have that
\begin{align}
\label{eq:proof_PTAS_lemma_uniform_bound}
0 \leq f(g_m(\mathbf{x}),h_m(\mathbf{x})) - r_n \epsilon \cdot f(\tilde{g}_m(\mathbf{x}),h_m(\mathbf{x})) \leq r_n \epsilon,
\end{align}
since $\sum_{m \in K} \lambda_m =1$. In what follows next, we consider two cases for a specific $m \in K$:
\textit{Case 1: $h_m(\mathbf{x}) = \sum_{i \in C_m} x_i = 0$}. Then we have that
$$0 \le f(g_m(\mathbf{x}),h_m(\mathbf{x})) - r_n \epsilon \cdot f(\tilde{g}_m(\mathbf{x}),h_m(\mathbf{x})) = 0 - 0 = 0 \le r_n \epsilon, $$
\textit{Case 2: $h_m(\mathbf{x}) = \sum_{i \in C_m} x_i \ne 0$}.
In this case, we have that
\begin{align*}
0 & \le f(g_m(\mathbf{x}),h_m(\mathbf{x})) - r_n \epsilon \cdot f(\tilde{g}_m(\mathbf{x}),h_m(\mathbf{x}))
\\ & = \frac{ r_n \epsilon }{ \sum_{i \in C_m} x_i } \cdot \sum_{i \in C_m} \left( \frac{r_i}{r_n \epsilon} - \bigg\lfloor \frac{r_i}{r_n\epsilon} \bigg\rfloor \right) \cdot x_i
\le \frac{ r_n \epsilon }{ \sum_{i \in C_m} x_i } \cdot \sum_{i \in C_m} x_i = r_n \epsilon,
\end{align*}
where the first and the last inequalities follow since $0 \le t - \lfloor t \rfloor < 1$ for any real number $t$. Thus, in both Case 1 and Case 2, Equation~\eqref{eq:proof_PTAS_lemma_uniform_bound} is satisfied, which completes the proof. \hfill $\square$
Next, we show that the optimal solution of the discretized assortment problem has a $(1-\epsilon)$ approximation ratio.
\begin{lemma}
\label{lemma:FPTAS_approximation_ratio}
Let $\tilde{\mathbf{x}}^*$ be the optimal solution to the discretized assortment problem~\eqref{problem:AO_by_DP_discretize} obtained by the dynamic program~\eqref{DP:recursive_discretized}. Then $\tilde{\mathbf{x}}^*$ has a $(1-\epsilon)$ approximation ratio for the assortment problem~\eqref{problem:AO_by_DP}, i.e.,
\begin{align*}
\frac{ R^* - R( \tilde{\mathbf{x}}^* ) }{R^*} \leq \epsilon,
\end{align*}
where $R^*$ is the optimal objective value of the assortment optimization problem~\eqref{problem:AO_by_DP}.
\end{lemma}
{\it Proof:} We first claim that $R^* \geq r_n$. Indeed, let $\mathbf{x}_1$ be a vector of ones that corresponds to the assortment decision where all the products are offered. Then, we can obtain the following set of inequalities.
$$R^* \geq R(\mathbf{x}_1) = \sum_{m=1}^k \lambda_m \cdot \sum_{i \in C_m} r_i /{| C_m |} \ge \sum_{m=1}^k \lambda_m \cdot \sum_{i \in C_m} r_n /{| C_m |} = \sum_{m=1}^k \lambda_m \cdot r_n = r_n,$$
which proves the claim.
Next, we show that $R^* - R( \tilde{\mathbf{x}}^* ) \leq r_n \epsilon$, which completes the proof of the lemma since $r_n \leq R^*$:
\begin{align*}
R^* - R( \tilde{\mathbf{x}}^* ) &= R(\mathbf{x}^*) - R( \tilde{\mathbf{x}}^* ) = R(\mathbf{x}^*) - r_n\epsilon \cdot \tilde{R}(\mathbf{x}^*) + r_n\epsilon \cdot \tilde{R}(\mathbf{x}^*) - r_n\epsilon \cdot \tilde{R}( \tilde{\mathbf{x}}^* ) + r_n\epsilon \cdot \tilde{R}( \tilde{\mathbf{x}}^* ) - {R}( \tilde{\mathbf{x}}^* ) \\
& = \Big( R(\mathbf{x}^*) - r_n\epsilon \cdot \tilde{R}(\mathbf{x}^*) \Big) + r_n\epsilon \cdot \Big( \tilde{R}(\mathbf{x}^*) - \tilde{R}( \tilde{\mathbf{x}}^* ) \Big) + \Big( r_n\epsilon \cdot \tilde{R}( \tilde{\mathbf{x}}^* ) - {R}( \tilde{\mathbf{x}}^* ) \Big) \\
& \le \Big( R(\mathbf{x}^*) - r_n\epsilon \cdot \tilde{R}(\mathbf{x}^*) \Big) + \Big( r_n\epsilon \cdot \tilde{R}( \tilde{\mathbf{x}}^* ) - {R}( \tilde{\mathbf{x}}^* ) \Big) \ \ \ \ \ \ \Big[ \text{ since } \tilde{R}(\mathbf{x}^*) - \tilde{R}( \tilde{\mathbf{x}}^*) \le 0 \Big] \\
& \leq R(\mathbf{x}^*) - r_n\epsilon \cdot \tilde{R}(\mathbf{x}^*) \ \ \ \ \ \ \Big[ \text{ by invoking Lemma~\ref{lemma:FPTAS_uniform_bound}} \Big] \\ & \leq r_n\epsilon \ \ \ \ \ \ \Big[ \text{ by invoking Lemma~\ref{lemma:FPTAS_uniform_bound}} \Big].
\end{align*}
\hfill $\square$ \\
Finally, we formulate the main theorem where we provide the FPTAS for the assortment optimization problem~\eqref{problem:AO_by_DP}.
\begin{theorem}
\label{thm:FPTAS}
The dynamic program~\eqref{DP:recursive_discretized} provides an FPTAS for the assortment optimization problem~\eqref{problem:AO_by_DP} when $k$ is fixed.
\end{theorem}
{\it Proof:} First of all, it follows from Lemma~\ref{lemma:FPTAS_approximation_ratio} that the solution returned by the dynamic program~\eqref{DP:recursive_discretized} has an approximation ratio of $1-\epsilon$. Second, the runtime of the dynamic program~\eqref{DP:recursive_discretized} is $
O \left( n \cdot \tilde{R}_{\max}^k \cdot C_{\max}^k \right).$ Finally, we claim that the total runtime of the dynamic program~\eqref{DP:recursive_discretized} is $O \left( n \cdot C_{\max}^{2k} \cdot (r_1/r_n)^k \cdot \left( 1 / \epsilon \right)^k \right)$,
which is polynomial in the input size and $1/\epsilon$ because of the following set of inequalities:
\begin{align*}
\tilde{R}_{\max} = \sum_{i=1}^{C_{\max}} \tilde{r}_i \leq \sum_{i=1}^{C_{\max}} \frac{r_i}{r_n \epsilon} \leq C_{\max} \cdot \frac{r_1}{r_n} \cdot \frac{1}{\epsilon}.
\end{align*}\hfill $\square$
\subsubsection{Discussion.}
\label{subsubsec:FPTAS_discussion}
In this section, we proposed a dynamic program to solve the assortment optimization problem under the SSM model. The dynamic program has runtime $O \left( n \cdot C_{\max}^{2k} \cdot (r_1/r_n)^k \cdot \left( 1 / \epsilon \right)^k \right)$, which is polynomial in the input size (i.e., $n$, $C_{\max}$, $r_1/r_n$) and $1/\epsilon$. Throughout this section, we assumed that the number of customer types $k$ (i.e., the number of summands in the objective function) is fixed. Notably, this is a common and natural assumption in the existing literature when providing FPTAS results for the assortment optimization problem under various choice models \citep{desir2022capacitated,akccakucs2021exact} and for sum-of-ratios optimization \citep{rusmevichientong2009ptas,mittal2013general}. In particular, \cite{desir2022capacitated} and \cite{akccakucs2021exact} also consider a discretize-then-approximate approach, in which they discretize the utility parameters of the LC-MNL model.
In theory, $C_{\max}$ may scale as $O(n)$, which would lead to an $n^{2k+1}$ factor in the runtime of the FPTAS proposed in this section. In this case, the runtime of our FPTAS would be comparable to that of the FPTAS for the assortment optimization problem under the LC-MNL model \citep{desir2022capacitated}. However, there is abundant evidence in the marketing and psychology literature showing that customers usually sample relatively small consideration sets when making purchasing decisions \citep{hauser1990evaluation,hauser2014consideration}, i.e., $C_{\max}$ is known to be relatively small. This is also supported by the empirical results obtained in this paper (see Section~\ref{subsec:IRI_performance} for details). Therefore, there is evidence to treat $C_{\max}$ as a small constant, in which case the runtime of the proposed FPTAS scales linearly in $n$.
\section{Empirical Study}
\label{sec:numerics}
In this section, our main goal is to compare the predictive power and computational tractability of the SSM model against standard benchmarks, such as the latent-class MNL (LC-MNL) and the ranking-based model, using the IRI Academic dataset \citep{bronnenberg2008database}, which consists of
real-world purchase transactions from grocery and drug stores.
We then evaluate the impact of asymmetry in demand cannibalization, which is related to the distinctive property of the SSM model, on the predictive performance of the proposed model. The latter analysis highlights the real-world scenarios in which the SSM model can be especially competitive in demand forecasting relative to standard benchmarks.
\subsection{Data Preprocessing}
The IRI Academic Dataset consists of consumer packaged goods (CPG) purchase transaction data over a chain
of grocery stores in two large Behavior Scan markets in the USA.
In this dataset, each item is represented by its universal product code (UPC), and we aggregate all the items with the same vendor code (comprising digits 3 through 7 of the 13-digit UPC code) into a unique ``product''.
Next, to alleviate data sparsity, we first include in our analyses only the products with a relatively high market share (i.e., products with at least 1\% market share) and then aggregate the remaining products into the `outside' option.
In order to streamline our case study, we focus on the top fifteen product categories out of thirty-one that have the highest number of unique products (see Table~\ref{tb:IRI_and_SS_size}) and consider the first four weeks in the year 2007 for our analyses.
Then, we represent our sales transactions by the set of tuples $\{ (S_t,i_t) \}_{t \in \mathcal{T}}$, where $i_t$ is the purchased product, $S_t$ is the offered assortment, and $\mathcal{T}$ denotes the collection of all transactions. For every purchase instance in the
dataset, characterized by a tuple $(S_t,i_t)$, we observe the week and the store id of the purchase, which allows us to approximately reconstruct the offer set $S_t$ as the union of all products purchased in the same category as $i_t$,
in the same week, and in the same store.
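Concretely, the offer sets can be reconstructed with a single pass over the purchase records; a minimal Python sketch with hypothetical field names:
\begin{verbatim}
from collections import defaultdict

def build_transactions(rows):
    # rows: list of purchase records (week, store_id, product) within one category.
    offered = defaultdict(set)              # (week, store_id) -> union of purchased products
    for week, store_id, product in rows:
        offered[(week, store_id)].add(product)
    return [(frozenset(offered[(week, store_id)]), product)   # tuples (S_t, i_t)
            for week, store_id, product in rows]
\end{verbatim}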
\subsection{Prediction Performance Measures}
We split our sales transaction data into the training set, which consists of the first two weeks of our data and is used for the calibration of the choice models, and the test/hold-out set, which consists of the last two weeks of our data and is used to compute the prediction performance scores defined below.
Throughout this section, we use two metrics to measure the predictive performance of the choice models. The first metric is the KL divergence, which can be represented as follows:
\begin{align}
\text{KL} = - \frac{1}{\sum_{S \in \mathcal{S}} n_S} \cdot \sum_{S \in \mathcal{S}} n_S \sum_{i \in S^+} \bar{p}_{i,S} \log \left( \frac{ \hat{p}_{i,S} }{ \bar{p}_{i,S} } \right), %
\end{align}
where $\bar{p}_{i,S}$ is the empirical choice frequency computed directly from the sales transaction data, i.e., $\bar{p}_{i,S} = n_{i,S}\big/ \big(\sum_{i' \in S^+} n_{i',S}\big)$, and $n_{i,S}$ is the number of times product $i \in N^+$ was chosen under assortment $S$ in the test dataset, $\hat{p}_{i,S}$ is the probability of choosing item $i \in S^+$ from the offer set $S$ predicted by a given choice model, $\mathcal{S}$ is the set of unique assortments in the test dataset, and $n_S = \sum_{i \in S^+} n_{i,S} $ is the number of observed transactions under assortment $S$.
The second metric is the mean absolute percentage error (MAPE), which is defined as follows:
\begin{align}
\text{MAPE} = \frac{1}{\sum_{S \in \mathcal{S}} n_S} \cdot \sum_{S \in \mathcal{S} } n_S \sum_{i \in S^+} \bigg| \frac{ \hat{p}_{i,S} - \bar{p}_{i,S} }{ \bar{p}_{i,S} } \bigg|.
\end{align}
For both metrics, a lower score indicates better performance.
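For concreteness, both metrics can be computed directly from test-set counts as in the short sketch below. The sketch is illustrative only: \texttt{n\_counts[S][i]} denotes the number of times item $i$ was chosen under assortment $S$ in the test data, \texttt{p\_hat[S][i]} denotes the corresponding model-predicted choice probability, and items never chosen under an assortment are skipped, which is an assumption on how zero-frequency terms are handled.
\begin{verbatim}
import numpy as np

def kl_and_mape(n_counts, p_hat):
    """KL-divergence and MAPE between empirical and predicted choice frequencies."""
    total = sum(sum(counts.values()) for counts in n_counts.values())
    kl, mape = 0.0, 0.0
    for S, counts in n_counts.items():
        n_S = sum(counts.values())
        for i, n_iS in counts.items():
            p_bar = n_iS / n_S                 # empirical choice frequency
            kl += n_S * p_bar * np.log(p_bar / p_hat[S][i])
            mape += n_S * abs(p_hat[S][i] - p_bar) / p_bar
    return kl / total, mape / total
\end{verbatim}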
\begin{table}[]
\centering
\begin{tabular}{lcrrrc} \toprule
Product Category & \# products & \# assortments & \# transactions & \# types & SS size \\ \midrule
\textit{Beer} & 19 & 721 & 759,968 & 39 & 1.36 \\
\textit{Coffee} & 17 & 603 & 749,867 & 38 & 1.86 \\
\textit{Deodorant} & 13 & 181 & 539,761 & 108 & 2.40 \\
\textit{Frozen Dinners} & 18 & 330 & 1,963,025 & 40 & 1.29 \\
\textit{Frozen Pizza} & 12 & 138 & 584,406 & 54 & 2.21 \\
\textit{Household Cleaners} & 21 & 883 & 562,615 & 41 & 1.30 \\
\textit{Hotdogs} & 15 & 533 & 202,842 & 51 & 1.82 \\
\textit{Margarine/Butter} & 11 & 27 & 282,649 & 36 & 1.93 \\
\textit{Milk} & 18 & 347 & 476,899 & 80 & 2.84 \\
\textit{Mustard/Ketchup} & 16 & 644 & 266,291 & 38 & 1.87 \\
\textit{Salty Snacks} & 14 & 152 & 1,476,847 & 86 & 1.84 \\
\textit{Shampoo} & 15 & 423 & 574,711 & 45 & 1.81 \\
\textit{Soup} & 17 & 315 & 1,816,879 & 44 & 1.37 \\
\textit{Spaghetti/Italian Sauce} & 12 & 97 & 552,033 & 45 & 1.84 \\
\textit{Tooth Brush} & 15 & 699 & 392,079 & 37 & 2.07 \\ \bottomrule
\end{tabular}
\caption{
Descriptive statistics of the IRI dataset after preprocessing. The first four columns correspond to the product category name, \# of products, \# of unique observed assortments, and \# of transactions. The last two columns report the \# of customer types and the average size of the preselected (i.e., consideration) set under the estimated SSM model.
} \label{tb:IRI_and_SS_size}
\end{table}
\subsection{Benchmarks and Model Calibration}
We consider four different benchmark models. The first two models are the most basic: the \emph{independent model}, which does not capture the substitution effect but is very popular in practice, and the \emph{MNL model}, which, as discussed above, is one of the most widely used models in both academia and practice. These two models are calibrated within the MLE framework in a straightforward way.
The third benchmark model is the \emph{latent-class MNL model} (i.e., \emph{LC-MNL}) which we calibrate for $K=10$ classes and estimate by the EM algorithm \citep{train2009discrete}.
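As a concrete illustration of this calibration step, one EM iteration for the LC-MNL model can be sketched as follows. This is a generic reconstruction under our own conventions~(items indexed $0,\dots,n$ with index $n$ as the outside option, whose utility is pinned to zero), not the implementation used in our experiments.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# transactions: list of (S_plus, i), where S_plus is a list of item indices
# (including the outside option) and i is the chosen index.

def mnl_probs(u, S_plus):
    expu = np.exp(u[S_plus])
    return expu / expu.sum()

def e_step(transactions, pi, U):
    """Posterior class responsibilities for each transaction."""
    resp = np.zeros((len(transactions), len(pi)))
    for t, (S_plus, i) in enumerate(transactions):
        for k in range(len(pi)):
            p = mnl_probs(U[k], S_plus)
            resp[t, k] = pi[k] * p[S_plus.index(i)]
        resp[t] /= resp[t].sum()
    return resp

def m_step(transactions, resp, U):
    """Update class weights and per-class utilities via weighted MNL MLE."""
    pi = resp.mean(axis=0)
    newU = np.zeros_like(U)
    for k in range(U.shape[0]):
        def neg_ll(u_free):
            u = np.append(u_free, 0.0)   # outside-option utility fixed to 0
            ll = 0.0
            for t, (S_plus, i) in enumerate(transactions):
                p = mnl_probs(u, S_plus)
                ll += resp[t, k] * np.log(p[S_plus.index(i)])
            return -ll
        res = minimize(neg_ll, U[k][:-1], method="L-BFGS-B")
        newU[k] = np.append(res.x, 0.0)
    return pi, newU
\end{verbatim}
Alternating these two steps until the log-likelihood stabilizes yields the class weights and per-class utilities used by this benchmark.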
The fourth benchmark model is the \emph{ranking-based model} (i.e., \emph{RBM}) which is widely studied in the operations management literature \citep{farias2013nonparametric,van2014market}. We estimate the model by first formulating the MLE problem as a concave maximization problem and then solving the latter using the column generation algorithm (i.e., market discovery algorithm) proposed by \cite{van2014market}. Moreover, we exploit the EM algorithm proposed by \cite{van2017expectation} to efficiently learn the optimal distribution over the rankings at each iteration of the column generation procedure. Notably, both the latent-class MNL and the ranking-based models subsume the stochastic set model studied in this paper as they are equivalent to the RUM class, and thus both of them are very competitive benchmarks in this empirical study.
Finally, we compare the predictive performance of all four benchmarks with the SSM model which is calibrated based on the procedure proposed in Section~\ref{sec:model_estimation}.
For all five models, we set the runtime limit to one hour. Additionally, for the stochastic set and the ranking-based nonparametric models, we set a time limit of three minutes for solving each column generation subproblem so that the entire hour of runtime is not spent on a single subproblem.
\subsection{SSM Model Estimation Results}
Calibration of the SSM model yields several observations. First, in the fifth column of Table~\ref{tb:IRI_and_SS_size}, we report the number of unique customer types (i.e., $|\mathcal{C}|$) under the estimated SSM model. It follows from that column that the number of consideration sets is moderate, varying from 36 to 108 across all product categories, which indicates that there is sparsity in the number of customer types. Second, the last column of Table~\ref{tb:IRI_and_SS_size} reports the weighted average size of the preselected (i.e., consideration) sets under the estimated SSM model $(\mathcal{C},\lambda)$, where each weight corresponds to the probability $\lambda(C)$ of a preselected set $C \in \mathcal{C}$. It can be inferred from that column that the number of products a `typical' customer considers is relatively small, ranging between 1.3 and 2.8 across all product categories, even though some categories contain as many as 18-21 products. This result is consistent with numerous empirical studies in behavioral economics and marketing research showing that consumers tend to consider only a few alternatives before making the final choice \citep{hauser1990evaluation,hauser2014consideration}.
\subsection{Brand Choice Prediction Results}
\label{subsec:IRI_performance}
Table~\ref{tb:IRI_prediction_tasks} presents the out-of-sample predictive performance of five aforementioned choice-based demand models based on the KL-divergence and MAPE metrics defined above. In the table, we let ID, MNL, SSM, RBM, and LC-MNL denote the independent demand model, the MNL model, the stochastic set model, the ranking-based model, and the LC-MNL model, respectively.
As expected, the independent demand model and the MNL model both perform significantly worse than the SSM model. While the SSM model does not subsume the MNL model (see Section~\ref{subsec:compare_to_RUM_models}), it still dominates MNL in prediction performance because it is nonparametric and has many more degrees of freedom to explain customers' choice behavior.
Moreover, the prediction performance of the SSM model is comparable to that of the ranking-based and LC-MNL models, which are as general as the RUM class. In particular, there is no clear winner when we compare the SSM model with the ranking-based model: the SSM model has slightly better out-of-sample performance based on the MAPE metric and slightly worse performance based on the KL-divergence metric. Among the stochastic set, the ranking-based, and the LC-MNL models, the LC-MNL model has the best predictive performance in our case study in terms of both KL-divergence and MAPE.
However, the LC-MNL model only marginally dominates the SSM model, with around 2\% lower KL-divergence and around 1\% lower MAPE.
\begin{table}[]
\centering
\begin{tabular}{lrrccccccccccc}
\toprule
\multirow{2}{*}{Category} & & \multicolumn{5}{c}{KL-divergence} & & \multicolumn{5}{c}{MAPE} \\ \cmidrule{3-7} \cmidrule{9-13}
& & ID & MNL & SSM & RBM & LC-MNL & & ID & MNL & SSM & RBM & LC-MNL \\ \cmidrule{1-1} \cmidrule{3-7} \cmidrule{9-13}
\emph{Beer} & & 5.61 & 4.45 & 3.97 & 4.03 & 3.87 & & 2.26 & 2.03 & 1.90 & 1.96 & 1.88 \\
\emph{Coffee} & & 12.58 & 9.49 & 8.20 & 7.71 & 7.68 & & 3.61 & 3.23 & 2.96 & 2.94 & 2.80 \\
\emph{Deodorant} & & 1.82 & 1.00 & 0.93 & 0.99 & 0.94 & & 1.02 & 0.84 & 0.79 & 0.80 & 0.80 \\
\emph{Frozen Dinners} & & 3.54 & 2.94 & 2.46 & 2.41 & 2.51 & & 1.90 & 1.65 & 1.49 & 1.47 & 1.49 \\
\emph{Frozen Pizza} & & 6.26 & 3.84 & 2.84 & 2.63 & 2.82 & & 2.32 & 1.89 & 1.54 & 1.48 & 1.54 \\
\emph{Household Cleaners} & & 3.44 & 2.67 & 2.38 & 2.46 & 2.37 & & 1.74 & 1.52 & 1.46 & 1.49 & 1.45 \\
\emph{Hotdogs} & & 8.61 & 6.51 & 5.50 & 5.44 & 5.39 & & 3.08 & 2.81 & 2.57 & 2.56 & 2.54 \\
\emph{Margarine/Butter} & & 3.47 & 2.31 & 1.80 & 1.82 & 1.80 & & 1.63 & 1.44 & 1.22 & 1.21 & 1.21 \\
\emph{Milk} & & 16.77 & 8.85 & 6.40 & 6.37 & 6.19 & & 3.97 & 3.09 & 2.54 & 2.60 & 2.50 \\
\emph{Mustard/Ketchup} & & 7.20 & 3.87 & 3.33 & 3.48 & 3.27 & & 2.58 & 1.93 & 1.79 & 1.94 & 1.74 \\
\emph{Salty Snacks} & & 4.22 & 2.28 & 1.64 & 1.45 & 1.56 & & 2.17 & 1.51 & 1.25 & 1.18 & 1.22 \\
\emph{Shampoo} & & 2.78 & 1.52 & 1.32 & 1.53 & 1.27 & & 1.39 & 1.05 & 0.97 & 1.07 & 0.96 \\
\emph{Soup} & & 5.03 & 3.70 & 3.04 & 3.13 & 3.19 & & 2.14 & 1.85 & 1.71 & 1.78 & 1.74 \\
\emph{Spaghetti/Sauce} & & 6.98 & 4.19 & 3.42 & 3.25 & 3.35 & & 2.29 & 1.72 & 1.47 & 1.43 & 1.46 \\
\emph{Tooth Brush} & & 5.08 & 2.69 & 2.54 & 2.76 & 2.29 & & 1.79 & 1.40 & 1.29 & 1.33 & 1.24 \\ \cmidrule{1-1} \cmidrule{3-7} \cmidrule{9-13} \emph{Average} & & 6.23 & 4.02 & 3.32 & 3.29 & 3.25 & & 2.26 & 1.86 & 1.66 & 1.68 & 1.64 \\ \bottomrule
\end{tabular}
\caption{Out-of-sample prediction performance results measured either by KL-divergence (in unit of $10^{-2}$) or by MAPE (in unit of $10^{-1}$), for the five choice-based demand models.} \label{tb:IRI_prediction_tasks}
\end{table}
\subsection{Comparison of the Stochastic Set Model with the Ranking-Based Model}
In this subsection, our goal is to compare the SSM model with the ranking-based model, as the two models are very similar in structure. Both models are nonparametric: the SSM model is described by a distribution over subsets, while the ranking-based model is described by a distribution over rankings. First of all, as discussed above, neither model dominates the other in terms of prediction performance in our case study.
Second, scalability is a common challenge when calibrating nonparametric choice models. In particular, when using the column generation method to solve the likelihood maximization problem under the ranking-based model, the column generation subproblem can be very hard, as it needs to be solved as an integer program \citep{van2014market}. However, compared to the ranking-based model, our SSM model is much more tractable. As discussed in Section~\ref{sec:model_estimation}, the column generation subproblem of the SSM model can be solved as an integer program with $O(n+m)$ binary variables and $O(n^2 + nm)$ constraints, which is computationally much easier than the integer program with $O(n^2 + m)$ binary variables and $O(n^3 + nm)$ constraints that needs to be solved for the column generation subproblem of the ranking-based model. Therefore, when we limit the estimation time of the models, one can expect the predictive performance of the stochastic set model to be better than that of the ranking-based model. We indeed see in Table~\ref{tb:IRI_prediction_tasks} that the SSM model outperforms the ranking-based model on several product categories that have a larger number of products and thus are more computationally challenging, even though, in theory, the ranking-based model subsumes the SSM model.
We further illustrate the advantage of using the stochastic set model over the ranking-based model in terms of computational efficiency when calibrating these models. In Figure~\ref{fig:runtime_curves_milk}, we demonstrate the changes in the value of the log-likelihood function while calibrating the stochastic set and the ranking-based models for the `Milk' product category. We obtain qualitatively the same results if we pick some other product categories. More specifically, we show how fast the in-sample log-likelihood function of the two models improves when we run the model estimation algorithms over a one-hour time limit. %
It can be seen from Figure~\ref{fig:runtime_curves_milk} that the estimation algorithm for the proposed SSM model converges to its nearly optimal solution in a very short time (i.e., in around five minutes), while it takes around fifty minutes for the estimation algorithm for the ranking-based model to reach the same value of the likelihood function. On the other hand, it also follows from Figure~\ref{fig:runtime_curves_milk} that the estimation algorithm of the ranking-based model eventually achieves a higher value of the log-likelihood function than the estimation algorithm of the SSM model. This is not surprising as the ranking-based model subsumes the stochastic set model. Also, note that it takes as much as fifty minutes for that to happen, which is almost ten times more than what it takes for the SSM-based algorithm to reach the near-optimal point, and even in this case the log-likelihood obtained for the ranking-based model is only marginally higher than the log-likelihood obtained for the SSM model.
\begin{figure}
\centering
\includegraphics[scale=1.0]{numerics/Runtime_insample_LogLikelihood.pdf}
\caption{The change in the value of the log-likelihood function while running the MLE algorithm for the choice models over time. The y-axis is the in-sample log-likelihood and the x-axis is the runtime (in seconds) of the SSM model (solid line) and the ranking-based model (dashed line), computed for the `Milk' product category.} \label{fig:runtime_curves_milk}
\end{figure}
\subsection{The Symmetry of the Cannibalization Effect under the SSM Model}
As mentioned in Section~\ref{subsec:modeling_by_axioms}, symmetric cannibalization is a crucial property of the SSM model that, on the one hand, differentiates it from other models in the RUM class (e.g., the LC-MNL model) and makes it more tractable than the ranking-based model, but, on the other hand, might limit its ability to describe customers' purchase behavior.
In this section, we empirically investigate whether the symmetric cannibalization assumption of the SSM model might hurt its predictive performance in comparison with the state-of-the-art LC-MNL model.
First, recall that in Section~\ref{subsec:modeling_by_axioms} we say that a choice model is consistent with the symmetric cannibalization property if and only if the following equation is satisfied:
\begin{align}
\label{eq:sym_cannibalization_one_side}
\left[\mathbb{P}_{j}(S \setminus \{a_k\}) - \mathbb{P}_{j}(S) \right] - \left[ \mathbb{P}_{k}(S \setminus \{a_j\}) - \mathbb{P}_{k}(S) \right] = 0,
\end{align}
for all assortments $S \subseteq N$ such that $|S| \geq 2$ and all products $j,k \in S$.
Therefore, in order to measure how much the symmetric cannibalization property is violated, we propose a `cannibalization asymmetry' index which can be represented in the following way:
\begin{align}
\label{eq:asym_indx}
\frac{1}{ | \{ S \mid |S| \geq 2 \} | } \sum_{S: |S| \geq 2} \frac{1}{ {{|S|}\choose{2}} } \sum_{ j\neq k} \bigg| \frac{ \left[\mathbb{P}_{j}(S \setminus \{a_k\}) - \mathbb{P}_{j}(S) \right] - \left[ \mathbb{P}_{k}(S \setminus \{a_j\}) - \mathbb{P}_{k}(S) \right] }{ \mathbb{P}_{j}(S) + \mathbb{P}_{k}(S) } \bigg|.
\end{align}
Intuitively, this index measures the degree to which the symmetric cannibalization property, described by Equation~\eqref{eq:sym_cannibalization_one_side}, is violated across all assortments $S \subseteq N$ such that $|S| \geq 2$ and all pairs of products $j,k \in S$. Note that we normalize the difference from Equation~\eqref{eq:sym_cannibalization_one_side} by the term $\mathbb{P}_{j}(S) + \mathbb{P}_{k}(S)$ in the denominator in order to make violations of symmetric cannibalization comparable across offer sets of different sizes (e.g., when the offer set $S$ is large, both $\mathbb{P}_{j}(S)$ and $\mathbb{P}_{k}(S)$ tend to be significantly smaller than the purchase probabilities under relatively small offer sets). Unsurprisingly, a higher `cannibalization asymmetry' index indicates a greater extent to which symmetric cannibalization is violated in the choice data. Note that if the choice data are consistent with the SSM model, then the `cannibalization asymmetry' index is zero.
First, note that the `cannibalization asymmetry' index defined in Equation~\eqref{eq:asym_indx} requires access to the choice probabilities under almost all assortments of items in the product universe. As shown in Table~\ref{tb:IRI_and_SS_size}, for all product categories we only observe a limited number of unique assortments in our sales transactions, and thus we cannot compute the index from the data directly. Therefore, we assume that the choice data generation process is consistent with the LC-MNL model described above, so that we can compute the `cannibalization asymmetry' index for any collection of offer sets after calibrating the LC-MNL model. Note that the LC-MNL model is not restricted by the symmetric cannibalization property. Second, even when using the LC-MNL model as a `ground truth' choice model, the exact computation of the `cannibalization asymmetry' index is still challenging for product categories with a large number of items. Therefore, we use Monte Carlo simulation to obtain an approximate value of the index. In particular, for every instance, we randomly sample an assortment $S$ such that $|S| \geq 2$ and then randomly choose two indices $j$ and $k$ from the set $S$. In total, we generate $10,000$ instances and take the average to compute the index.
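A minimal sketch of this Monte Carlo approximation is given below. The function \texttt{choice\_prob(i, S)} is a hypothetical oracle returning $\mathbb{P}_i(S)$ under the calibrated LC-MNL model, and the uniform sampling scheme over assortment sizes is our own assumption.
\begin{verbatim}
import random

def asymmetry_index_mc(products, choice_prob, n_samples=10000, seed=0):
    """Monte Carlo estimate of the cannibalization asymmetry index."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        size = rng.randint(2, len(products))   # |S| >= 2
        S = rng.sample(products, size)         # random assortment
        j, k = rng.sample(S, 2)                # two distinct products in S
        S_minus_k = [a for a in S if a != k]
        S_minus_j = [a for a in S if a != j]
        num = (choice_prob(j, S_minus_k) - choice_prob(j, S)) \
            - (choice_prob(k, S_minus_j) - choice_prob(k, S))
        den = choice_prob(j, S) + choice_prob(k, S)
        total += abs(num / den)
    return total / n_samples
\end{verbatim}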
\begin{figure}
\centering
\includegraphics[scale=1.2]{numerics/scatter_cannibalization.pdf}
\caption{Scatter plot of the improvement of the LC-MNL model over SSM in the prediction task against the cannibalization asymmetry index, for the fifteen product categories. The improvement of the LC-MNL model over SSM is measured by the difference in their MAPE scores.} \label{fig:asym_vs_performance}
\end{figure}
Figure~\ref{fig:asym_vs_performance} represents a scatterplot of the improvement of the LC-MNL model over the SSM model, which is measured by the difference in the MAPE score, versus the cannibalization asymmetry index, across the fifteen product categories.
It follows from the figure that there is substantial variation across product categories in the improvement of the LC-MNL model over the SSM model in MAPE.
In order to better explain this variation, we fit a linear regression, shown as a red dashed line. We observe a correlation between the improvement of the LC-MNL model over the SSM model in prediction and the value of the cannibalization asymmetry index, which suggests that violation of the symmetric cannibalization assumption might lead to underperformance of the SSM model in prediction tasks. However, we also see that the SSM model still performs as well as the LC-MNL model in `Spaghetti/Italian Sauce' and `Frozen Pizza', which have high asymmetry indices, and outperforms the LC-MNL model in `Soup' and `Deodorant', which have intermediate asymmetry indices. What Figure~\ref{fig:asym_vs_performance} delivers is actually a positive message: when demand cannibalization is not highly asymmetric, the SSM model is competitive in its predictive power and relevant for use in practice.
\section{Concluding Remarks}
\label{sec:conclusion}
In this paper, we study a stochastic choice rule where agents first preselect a set of alternatives and then randomly choose an alternative within the set. This model is inspired by the fact that decision-makers are boundedly rational and have limited cognitive abilities to precisely evaluate utilities for all available alternatives. Instead, agents may form the preselected set by screening rules and heuristics and equally prefer the preselected alternatives. Interestingly, we prove that the model parameters, which may grow exponentially with the number of products, are identifiable from choice data via a closed-form formulation. We further show that the assumption of bounded rationality of consumers factors into the model in a way that implies symmetric demand cannibalization.%
We also study the assortment optimization problem under the SSM model. We first prove that the assortment optimization problem is NP-hard and then propose a practical solution. To this end, we build a dynamic program and show that it provides an FPTAS for the assortment problem. Finally, we examine the predictive performance of the proposed model on the real-world sales transaction data and show that the symmetric cannibalization, which is a fundamental building block of this model and might be considered as its limitation, does not make this model much less competitive in comparison with state-of-the-art benchmarks such as the ranking-based model or LC-MNL.
\bibliographystyle{plainnat}
\section{INTRODUCTION}
The typical mode of entry for a player into baseball is through the first-year player draft. Players usually enter the draft immediately after high school or college and then spend several years in the drafting team's minor league system. When deemed ready, the drafting team can promote the player to the Major Leagues. From that time forward, the team retains rights to the player for 6 years, with salaries determined by an independent arbiter. After six years, the player can elect to become a free agent and is free to negotiate a contract with any organization. It is typically at this point when the largest contracts are given out.
We looked at all players who appeared in at least 7 seasons since 1970. We used all the data from these players' first 6 seasons as features, both separated by season and aggregated together. We combined these data with additional exogenous information, such as age in the first Major League season, handedness, and fielding position, and then attempted to use machine learning to predict a player's career trajectory, as represented by yearly wins-above-replacement (WAR), the gold-standard statistic for player value. We then compared four machine learning models: linear regression, artificial neural networks, a random forest ensemble, and $\epsilon$-support vector regression. Each model was trained twice, once for batters and once for pitchers.
\section{RELATED WORK}
The available literature on baseball career progression provides scant help for examining the problem with machine learning. A simple, yet oft-cited, approach is the ``delta method,'' which appears to have originated from baseball statistician Tom Tango\cite{c2}. To estimate the change in some metric between two ages, the metric is calculated for all players active in a chosen time period who played at both of those ages. For each such player, the difference in the metric between the two ages is computed, and these differences are averaged. As a specific example, hitters who played in the major leagues at both age 30 and age 31 saw their WAR decrease on average by 0.28 from their age-30 season to their age-31 season. Thus, if a player had a WAR value of 2 in their age-30 season, the Delta Method predicts that they will have a WAR of 1.72 in their age-31 season. If their WAR value was 1 in their age-30 season, their predicted age-31 WAR is 0.72. When deltas are computed across all ages, a complete ``aging curve'' can be drawn. The Delta Method depends only on a player's age and WAR in their most recent season. Given the relatively simple structure of this model, we believed a more complex machine learning approach could improve upon its predictions.
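To make the computation concrete, the delta method can be sketched in a few lines of pandas. The snippet is illustrative only; the column names (\texttt{player}, \texttt{age}, \texttt{war}) are assumptions rather than fields of the dataset described later.
\begin{verbatim}
import pandas as pd

def aging_curve(df):
    """Delta-method aging curve from rows of (player, age, war)."""
    df = df.sort_values(["player", "age"]).copy()
    # WAR at the next age for the same player (only consecutive ages count)
    df["next_age"] = df.groupby("player")["age"].shift(-1)
    df["next_war"] = df.groupby("player")["war"].shift(-1)
    pairs = df[df["next_age"] == df["age"] + 1].copy()
    pairs["delta"] = pairs["next_war"] - pairs["war"]
    # Average change in the metric from age a to age a+1
    return pairs.groupby("age")["delta"].mean()
\end{verbatim}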
Others have similarly elaborated on the basic Delta Method model. Fair (2008)\cite{c2} used a nonlinear fixed-effects regression model to generate aging curves, where the periods of upslope and of decline were separate but constrained to merge at the peak age. Bradbury (2009)\cite{c3} used a multiple regression model, where average career performance was included as a variable. Lichtman (2009)\cite{c4} used the delta method but attempted to add corrections for the ``survivor bias,'' the phenomenon occurring when a player's performance degrades so much in a year that he does not play much the next. The Bradbury piece is the outlier in the literature, finding that players peak at age 29, two years later than most other analyses. That being said, all studies had different thresholds for including a player in the analysis, included data from different time periods, and utilized different approaches for normalizing between years, leagues, and ballparks. These factors make comparisons difficult.
We note that the above discussion only includes publicly published studies on the topic of career progression. It is possible and likely that some of the 30 professional baseball teams have performed proprietary research on the topic.
\section{DATASET AND FEATURES}
We obtained our dataset from Kaggle\cite{c6} and scraped WAR values from Baseball Reference\cite{c9}. The Kaggle format aggregates all data for each ``stint'', which consists of a player's contiguous activity with a team over a unique year. For example, if a batter played for team A for the first half of 2015, then team B for the second half of 2015 and all of 2016, they would have three stints -- one for team A, and two for team B.
The goal of our model is to allow teams to evaluate potential contracts with players who are free agents for the first time. With this in mind, we treated all the information we have about each player for the first six years as source data, and everything from the years following as outcomes or y values. To set a baseline, we aggregated all the data from years 1-6 and calculated simple metrics, then trained against various y values from the following years. In later models, we included separate features from each year in addition to the aggregates. We also decided to use WAR as the sole evaluation metric. A player was thus represented by a feature vector with yearly and aggregate information, which we used to predict their WAR in following years.
Some players lacked sufficient activity to justify a prediction, or did not receive a contract after the end of team control. To adjust for this, we excluded all players with careers spanning under seven years. For batting predictions, we also filtered out years where the player had 0 at bats, since our dataset includes a batting stint for all players with any pitching or fielding activity. We then excluded batters who did not have at least 1 active batting year both before and after the 6 year mark. Similarly, we also filtered out pitchers who didn't appear in at least 1 game both before and after the 6 year mark.
Batting performance for pitchers is potentially important, but difficult to predict because of quirks in the league structure. Additionally, many batters had intermittent pitching data that should not factor into any analysis. To address these issues, we excluded all batters with enough data to show up in our pitching analysis. This was fairly aggressive; future work could include a more careful approach.
Finally, we excluded all players with a start year before 1970. The league has changed over the years, and we found that earlier data struggled to help predict contemporary outcomes. Even though we eliminated a large proportion of contemporary players, we still included a large percentage of the overall data (see tables 1 and 2 below).
We utilized one-hot-encoding to structure features for categorical data such as decade, position, and handedness.
Using WAR as the prediction metric raised two issues. First, there are competing definitions and interpretations of WAR, each of which have their own dataset. We discuss this later. Second, once players drop out of the league, they no longer record a WAR value. For these ``missing'' years, we set WAR to 0 since they're presumably providing the same value as at-replacement-level players (who are by definition interchangeable with the best minor league players). This is not necessarily indicative of skill, since the player would likely produce a negative WAR if their career ended for performance reasons. Furthermore, consistent players with small, positive WAR could still be worth substantial medium term contracts, so our model represents the value of marginal players poorly. We tried setting missing WAR to arbitrary small negative values (like -0.5 and -1) and observed slight improvements to all models (including the baseline delta method), but felt that this was not worth the additional model complexity.
Because WAR measures player value in a contemporary context, we attempted to normalize our other features in a similar fashion. For example, a .270 batting average would be .02 above the mean in 1970, but average in 2000. However, applying this on a year-to-year basis substantially decreased model performance. We believe that this was due to large noise in mean variations, particularly for features that reflect rare events (like home runs). Alternatives continued to hurt model performance (locally weighted means) or had marginal effects (per-decade weighting).
Before use in models, we applied min-max normalization to all features with scikit-learn's MinMaxScaler.\cite{c5}
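A minimal sketch of this scaling step, assuming feature matrices \texttt{X\_train} and \texttt{X\_test} have already been assembled:
\begin{verbatim}
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
# Fit the feature ranges on the training split only, then apply to both splits
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
\end{verbatim}
Fitting the scaler on the training split alone keeps test-set statistics from leaking into the models.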
\begin{table}[!htbp]
\begin{center}
\caption{Cleaning Batter Data}
\begin{tabular}{|c|c|c|c|} \hline
& Contemporary & Included & Percent Included \\ \hline
Unique Players & 7956 & 1669 & 21.2 \\ \hline
Total ABs (K) & 6050 & 5167 & 85.4 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{Cleaning Pitcher Data}
\begin{tabular}{|c|c|c|c|} \hline
& Contemporary & Included & Percent Included \\ \hline
Unique Players & 4395 & 1390 & 31.6 \\ \hline
Total IPOUTs (K) & 4773 & 3831 & 80.3 \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{METHODS}
As noted, each training example contained over two hundred features, many of which were highly correlated. For example, the only difference between ``at-bats'' and ``plate appearances'' in a given year is that the former includes walks and a few other rare occurrences.
To account for these effects, scikit-learn's Recursive Feature Elimination (RFE) tool\cite{c5} was utilized, along with a basic linear ridge model with the regularization ``alpha'' parameter set to 2. This utility instantiates the model with all features and then eliminates, upon each iteration, a user-defined number of features with the lowest coefficients. With our relatively small dataset, we chose to eliminate only one feature per iteration.
The utility was run for each year we attempted to predict (7-11) and for both batters and pitchers. Moreover, for each of these cases, the RFE was run several times so as to gauge the performance of the model with varying numbers of features. Near-optimal model performance could be achieved with just a few features; however, the results plateaued after around 15-20 features. For all subsequent tests, 15-20 features were utilized.
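The feature-selection step can be sketched as follows, continuing the variable names from the scaling sketch above; the snippet is illustrative rather than our exact code.
\begin{verbatim}
from sklearn.feature_selection import RFE
from sklearn.linear_model import Ridge

# Rank features with a ridge model, dropping one feature per iteration
selector = RFE(estimator=Ridge(alpha=2), n_features_to_select=20, step=1)
selector.fit(X_train_scaled, y_train)

X_train_sel = selector.transform(X_train_scaled)
X_test_sel = selector.transform(X_test_scaled)
selected_mask = selector.support_   # boolean mask of retained features
\end{verbatim}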
After selecting the optimal features, we ran the data through four different models as described below.
\subsection{Linear Regression with L2 Regularization (Ridge Model)}
This is a generalized linear model of the normal distribution. A weight for each parameter is determined by maximum-likelihood estimation over the training set. The equation of the cost function is given below.
\[
J(\theta) = \frac{1}{2m} \bigg[ \sum_{i=1}^m \Big( h_\theta(x^{(i)}) - y^{(i)} \Big)^2 + \lambda \sum_{j=1}^n \theta_j^2\bigg]
\]
Because this model is prone to overfitting, a regularization term was also added to the objective. This puts downward pressure on the size of each feature's coefficient, balanced against finding the optimal value to fit the training set.
\subsection{Multi-Layer Perceptron Regression (Neural Network)}
A fully connected artificial neural network consists of layers of neurons, where every neuron is connected by a weight to all of the neurons in the previous layer. As discussed later, models with 1 or 2 hidden layers were examined. At each hidden neuron, the rectified linear unit (ReLU) activation function was used. Since the model was utilized for regression, the output node was not transformed. Due to the relatively small size of our dataset, the quasi-Newton `L-BFGS' solver was used. The loss function for this model is the sum of squared errors over the training examples, similar to that defined for Linear Regression.
\subsection{Random Forest Regression (Tree Bagging Model)}
The random forest model\cite{c7} utilized 100 separate decision trees. For each decision tree, samples were selected by bootstrapping the training set. Each decision tree was then formulated by splitting the data along the features that explain the most variation. Results were then averaged to produce the final model. Note that the model was also tested by restricting the number of features considered at each tree split; however, this generally produced worse results, so the final model did not contain the restriction. By convention, then, the final model was a bagging model and not a random forest.
\subsection{$\epsilon$-Support Vector Regression (SVR)}
$\epsilon$-SVM, or SVR, is an extension of the support vector machine concept to regression problems.\cite{c8} The algorithm attempts to find the response function that best fits the training data to the y-values without deviating by more than $\epsilon$. A regularization-like term called the Box coefficient is also included in the optimization in order to handle training examples that exist outside the $\epsilon$ region, similar to the purpose of the hinge loss equation in classification problems. The Box coefficient also reduces overfitting. The cost function is provided below, where $\xi$ and $\xi^*$ are slack variables to the $\epsilon$ constraints. In our experiments, the SVR model was used exclusively with a Gaussian kernel.
\[
J(\beta) = \frac{1}{2}\|\beta\|^2 + C \sum_{i=1}^{m} (\xi_i + \xi_i^*)
\]
\subsection{Hyperparameter Optimization}
For each of the four models above, scikit-learn's grid search utility was used to optimize hyperparameters. Recall that we included data for two types of players (batters and pitchers), five prediction years, and four models. As a result, grid-search was utilized 40 times, even though results tended to remain somewhat similar on each iteration. The parameters for each model and the range over which they were optimized are listed in the table below.
\begin{table}[!htbp]
\small
\begin{center}
\caption{Parameter Ranges}
\begin{tabular}{|c|c|} \hline
Model & Hyperparameters \\ \hline
Neural Net & \begin{math}\alpha\end{math}\begin{math}\in\end{math}[0.01, 100], lay-1\begin{math}\in \end{math}[4, 16], lay-2\begin{math}\in \end{math}[0, 5] \\ \hline
Random Forest & max depth \begin{math}\in \end{math} [2, 7], min leaf split \begin{math}\in \end{math} [1, 4] \\ \hline
\begin{math}\epsilon\end{math}-SVM (rbf) & \begin{math}\epsilon\end{math}\begin{math}\in \end{math}[1e-4, 1e2], C\begin{math}\in \end{math}[0, 10e5], \begin{math}\gamma\end{math}\begin{math}\in \end{math}[1e-5, 1e2]\\ \hline
Ridge\tablefootnote{Note that sci-kit learn uses $\alpha$ instead of $\lambda$ to refer to the regularization coefficient.} & \begin{math}\alpha\end{math} \begin{math}\in \end{math} [0.01, 100] \\ \hline
\end{tabular}
\end{center}
\end{table}
All feature selection was conducted on a training set consisting of 80\% of player data. After features were set, model hyperparameters were chosen using the training set and 3-fold cross validation. This approach minimized overfitting, while still allowing full use of the training set. Each trained model was then used to evaluate the unseen 20\% test set.
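A condensed sketch of this tuning pipeline is shown below, continuing the variable names from the sketches above. It is illustrative only: the grid values abbreviate the ranges in the parameter table, ``min leaf split'' is interpreted here as \texttt{min\_samples\_leaf}, and R$^2$ is used as the cross-validation score.
\begin{verbatim}
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

models = {
    "ridge": (Ridge(), {"alpha": [0.01, 0.1, 1, 10, 100]}),
    "mlp": (MLPRegressor(solver="lbfgs", activation="relu", max_iter=5000),
            {"alpha": [0.01, 1, 100],
             "hidden_layer_sizes": [(4,), (8,), (16,), (8, 2), (16, 5)]}),
    "forest": (RandomForestRegressor(n_estimators=100),
               {"max_depth": [2, 4, 7], "min_samples_leaf": [1, 2, 4]}),
    "svr": (SVR(kernel="rbf"),
            {"epsilon": [1e-4, 1e-2, 1], "C": [1, 1e3, 1e5],
             "gamma": [1e-5, 1e-3, 1e-1]}),
}

best = {}
for name, (estimator, grid) in models.items():
    search = GridSearchCV(estimator, grid, cv=3, scoring="r2")
    search.fit(X_train_sel, y_train)   # repeated for each target year
    best[name] = search.best_estimator_
    print(name, search.best_params_, round(search.best_score_, 3))
\end{verbatim}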
\section{RESULTS AND DISCUSSION}
The results on the test set, as measured by explained sum of squares (R$^2$), are displayed in the figure below. There was no clear winner between the machine learning models. For batters, the models were all typically able to explain approximately 60\% of variance. This varied moderately based on year. For example, year 9 -- the third year we attempted to predict -- had an R$^2$ closer to 0.55 for all models. We suspect this is indicative of random fluctuation.
\includegraphics[width=\linewidth]{year_vs_r2}
For pitchers, the models explained between 30\% and 40\% of variance in years 7-10, substantially lower than for batters. This was most likely caused by another form of variation for pitchers -- injuries. Indeed, many pitchers at some point in their career will tear their ulnar collateral ligament (UCL). This injury requires surgery, and the surgery requires at least a year of recovery. Even with this surgery, some pitchers never recover their previous level of performance. Although injuries also occur to batters, there is no equivalent that is as frequent or devastating; the typical injuries for batters are also injuries that can happen to pitchers.
We also note that model performance actually improved in year 11 for pitchers. The cause of this improvement is simple: by year 11, the models are projecting that certain pitchers will no longer be active. When this assumption is valid, all the variation is explained, whereas in previous active years there would still be some deviation.
For batters in years 7 and 8, two features -- cumulative WAR in first 6 seasons and WAR in 6th season -- fared essentially as well in predicting future performance as any combination of additional statistics. By year 9, age when entering the league was also needed to reach the maximal explained sum of squares. In years 10 and 11, age was more critical than sixth year WAR for predicting performance. This presumably has two causes: (1) players entering the league at more advanced ages will fare worse in their 10th season and (2) performance in year 6 says more about performance in year 7 than it does about year 10. The idea that a couple performance metrics are the best predictors of late career longevity lends some credence to Bradbury's approach, described in Related Work. Bradbury utilized a ``delta'' approach but controlled for baseline performance level.
Feature optimization for pitchers was similar to batters. In general, cumulative WAR in the first 6 seasons and individual WAR in the 6th season were sufficient for building good models. In year 8, strikeouts in the 6th season actually beat out 6th season WAR for second place in the recursive feature elimination; however, this was simply another indicator of value in the 6th year. More interestingly, age never showed up as critically in the model as it did for batters. Rather, additional explanatory power in later years was gained through statistics such as cumulative complete games in the first 6 seasons and some parameters of the first season (total batters faced, walks). The former, although seemingly an unimportant statistic, has some justification. Truly elite players, who tend to have long careers, also tend to throw more complete games. The complete game stat may also have been differentiating between starting pitchers and relief pitchers, who were both included in the pitching analysis. Relief pitchers, by role definition, will never be able to throw a complete game. The importance of first year performance is curious and warrants more study.
Interestingly, some features we thought would weigh heavily had minor impact. It had been our expectation that the decade in which a player's career began might be crucial, especially given that our dataset included the steroid era of the late 1990s. None of the one-hot encoded data corresponding to decade, however, appeared as a top feature. Similarly, we hypothesized that positional data might be crucial. Specifically, ``up-the-middle'' players (shortstop, second base, and centerfield) are widely regarded as the most athletic and might age better. Although the third base and left-field positions occasionally appeared as negative weights in the linear model, these were never more than the 30th most important feature. Similarly, we found that height and weight were never critical variables. Part of this could be due to the structure of our dataset; all players were assigned a single value for their entire careers, which is unrealistic for weight. Handedness (right, left, and switch) was also not deemed important. However, we had no reason to expect that it would be.
Even though only a few features were sufficient for reaching maximal explanatory power in our models, they still handily beat the naive ``delta method'' described earlier. To be fair, however, most authors who utilize the delta method (described in Related Work) believe it to be insufficient on its own and will perform corrections for aspects such as survivor bias, injuries, time period, and baseline talent level. We performed no such adjustments, with the intention to let the machine learning models infer these relationships. As such, the argument could be raised that our delta method calculation set up a proverbial ``straw man'' data point. Further analysis should attempt to directly compare our models with those of Bradbury, Fair, and Lichtman; however, this is not immediately possible due to different player inclusion criteria.
A comparison of the performance of the neural network to the delta method is provided in the plot below. The plot compares the actual results of the players in the test set with the values predicted by the models. Color is used to display frequency of observation -- hence, the lighter blue color represents the majority of observations which are heaviest around zero and extend upwards along the red reflection line. The darker blue points have low frequency of observation and tend to extend further away from the reflection line. Clearly, the neural network model has a spatially tighter distribution of values for both hitters and pitchers compared to the delta method. These results are representative of the other machine learning models. We also observe that the delta method has more predictions in negative territory. This is unrealistic, in the sense that these players would likely be benched; however, the naive assumptions in the model do not account for this. Note that all five years of data and predictions were included to generate these heat plots. The keen observer might also remark on the five points in the upper right of the neural network batter plot. We note, after the fact, that baseball's all-time home-run leader (and a player who experienced his best seasons in his late 30's), Barry Bonds, was in our test set, and these points correspond to him. It is interesting that the more complicated neural network model was able to maintain high predictions for Bonds for all 5 years in a way that the delta method was not.
\newpage
\includegraphics[width=\linewidth]{tiny_combined_hex}
\section{CONCLUSIONS AND FUTURE WORK}
In summary, we trained four machine learning models with data from the first 6 seasons of players careers, dating back to 1970. We then attempted to predict a player's value, measured by WAR, in years 7-11. For batters, we were typically able to forecast around 60\% of variation, while only 30\% - 40\% was possible for pitchers. This is perhaps the largest takeaway from the study: if deciding between giving a long-term contract to a batter or a pitcher, all else being equal, the team should prefer the batter. For both batters and pitchers, results handily beat the simplistic delta method calculation described in literature, although more work should be done to directly compare our results with the different variations other authors have attempted. Lastly, we found that cumulative WAR over the first 6 seasons and 6th season WAR are the best predictors of later career performance. For batters, rookie season age becomes critical starting in year 8.
Much time in this project was spent on feature selection; moving forward, more time should be spent on player inclusion criteria. While baseball in the early 20th century was quite different from baseball today, the year 1970 was somewhat arbitrarily chosen as a dividing line to ensure that players from baseball's early eras would not contaminate our predictions. A larger dataset (dating back to WWII or earlier) could be considered. Moreover, we chose relatively lenient criteria for player inclusion (at least 7 years in league) so as to maximize total league at-bats included in the study. The most popular literature criteria is 10 years, so this should also be attempted. Lastly, there is some debate over the optimal calculation for wins-above-replacement. In fact, three websites -- Baseball Reference, Fangraphs, and Baseball Prospectus -- tabulate similar, but competing, versions of the metric. We utilized the Baseball Reference version due to ease of access; however, it would be worthwhile to compare the Fangraphs and Baseball Prospectus versions to see if results differ.
\addtolength{\textheight}{-12cm}
\section{Introduction}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{Equal contribution.}
\footnotetext[2]{Corresponding author.}
\begin{table}[!t]
\scriptsize
\begin{tabular}{p{207pt}}
\toprule
\textsf{\textbf{Context:} $\cdots$ Artur Boruc, a Polish national pilot, was going to get the group infiltrated into the area and prepared for the attack. $\cdots$ Metal began to tear through the thin wings outside the small windows and pinged heavily off the underside of the plane, a quiet arrival in Poland wasn't going to be an option anymore.}\\
\midrule
\textsf{\textbf{Persona A:} [Boruc]~A \colorlet{c}{red!20}\sethlcolor{c}\hl{skilled} pilot, \colorlet{c}{red!20}\sethlcolor{c}\hl{trained} in operating flight controls on the most common planes.}\\
\textsf{\textbf{Generated Story by \textsc{ConPer} Conditioned on Persona A:} $\cdots$He had been doing this much before, almost a week ago. \colorlet{c}{red!20}\sethlcolor{c}\hl{He took a long pull of the airlock, checked his controls, and made a quick mental note of the exact sequence of instructions.} He knew that he couldn't be sure if this would be safe for much longer$\cdots$}\\
\midrule
\textsf{\textbf{Persona B:} [Boruc]~An \colorlet{c}{red!20}\sethlcolor{c}\hl{unskilled} pilot, and \colorlet{c}{red!20}\sethlcolor{c}\hl{never trained} in operating flight controls.}\\
\textsf{\textbf{Generated Story by \textsc{ConPer} Conditioned on Persona B:} $\cdots$
He cursed as the plane suffered a complete failure and in a way had caused it to come to a stop, $\cdots$
\colorlet{c}{red!20}\sethlcolor{c}\hl{He'd never flown before, so he didn't know how to pilot in this situation and his experience of the controls had not been good either}$\cdots$}\\
\bottomrule
\end{tabular}
\caption{An example for controlling the protagonist's persona in story generation. The \textbf{Context} and \textbf{Persona A} are sampled from the \textsc{storium} dataset~\cite{akoury2020storium}. The protagonist's name is shown in the square bracket. And we manually write \textbf{Persona B} based on \textbf{Persona A}. We highlight the sentences which embody the given personas in \colorlet{c}{red!20}\sethlcolor{c}\hl{red}.}
\label{tab:example_story}
\end{table}
Stories are important for entertainment. They are made engaging often by portraying animated and believable characters since a story plot unfolds as the characters interact with the object world created in the story~\cite{young2000creating}. Cognitive psychologists determined that the ability of an audience to comprehend a story is strongly correlated with the characters' believability~\cite{graesser1991question}. And the believability mostly depends on whether the characters' reaction to what has happened and their deliberate behavior accord with their personas~(e.g., weakness, abilities, occupations)~\cite{madsen2009exploring,riedl2010narrative}.
Furthermore, previous studies have also stressed the importance of personas in stories to maintain the interest of audience and instigate their sense of empathy and relatedness~\cite{cavazza2009emotional,chandu2019my}. However, despite the broad recognition of its importance, it has not yet been widely explored to endow characters with specified personalities in story generation.
In this paper, we present the first study to impose free-form controllable personas on story generation. Specifically, we require generation models to generate a coherent story in which the protagonist exhibits the desired personality. We focus on controlling the persona of only the protagonist of a story in this paper and leave the modeling of personas of multiple characters for future work. As exemplified in Table~\ref{tab:example_story}, given a context to present the story settings including characters, location, and problems~(e.g., {``Boruc''} was suffering from a plane crash) and a persona description, the model should generate a coherent story to exhibit the persona~(e.g., what happened when {``Boruc''} was {``skilled''} or {``unskilled''}). In particular, we require the model to embody the personality of the protagonist implicitly through his actions~(e.g., {``checked his controls''} for the personality {``skilled pilot''}). Therefore, the modeling of relations between persona and events is the \textbf{first challenge} of this problem. Then, we observe that only a small number of events in a human-written story relate to personas directly, and the rest serve to explain the cause and effect of these events to maintain the coherence of the whole story. Accordingly, the \textbf{second challenge} is learning to plan a coherent event sequence (e.g., first {``finding the plane shaking''}, then {``checking controls''}, and finally {``landing safely''}) to embody personas naturally.
In this paper, we propose a generation model named {\textsc{ConPer}} to deal with {\textit{Con}}trolling {\textit{Per}}sona of the protagonist in story generation. Due to the persona-sparsity issue that most events in a story do not embody the persona, directly fine-tuning on real-world stories may mislead the model to focus on persona-unrelated events and regard the persona-related events as noise~\cite{zheng2020pre}. Therefore, before generating the whole story, \textsc{ConPer} first plans persona-related events through predicting one target sentence, which should be motivated by the given personality following the leading context. To this end, we extract persona-related events that have a high semantic similarity with the persona description in the training stage. Then, \textsc{ConPer} plans the plot as a sequence of keywords to complete the cause and effect of the predicted persona-related events with the guidance of commonsense knowledge. Finally, \textsc{ConPer} generates the whole story conditioned on the planned plot. The stories are shown to have better coherence and persona-consistency than state-of-the-art baselines.
We summarize our contributions as follows:
\noindent\textbf{I.} We propose a new task of controlling the personality of the protagonist in story generation.
\noindent\textbf{II.} We propose a generation model named \textsc{ConPer} to impose specified persona into story generation by planning persona-related events and a keyword sequence as intermediate representations.
\noindent\textbf{III.} We empirically show that \textsc{ConPer} can achieve better controllability of persona and generate more coherent stories than strong baselines.
\section{Related Work}
\paragraph{Story Generation} There have been extensive explorations of various story generation tasks, such as story ending generation~\cite{guan2019story}, story completion~\cite{DBLP:conf/ijcai/Wang019b} and story generation from short prompts~\cite{fan2018hierarchical}, titles~\cite{yao2019plan} or beginnings~\cite{guan2020knowledge}. To improve the coherence of story generation, prior studies usually first predicted intermediate representations as plans and then generated stories conditioned on the plans. The plans could be a series of keywords~\cite{yao2019plan}, an action sequence~\cite{fan2019strategies, DBLP:conf/emnlp/Goldfarb-Tarrant20} or a keyword distribution~\cite{kang2020plan}. In terms of character modeling in stories, some studies focused on learning characters' personas as latent variables~\cite{bamman2013learning,bamman2014bayesian} or represented characters as learnable embeddings~\cite{ji2017dynamic,clark2018neural,liu2020character}.
\citet{chandu2019my} proposed five types of specific personas for visual story generation.
\citet{brahman2021let} formulated two new tasks including character description generation and character identification.
In contrast, we focus on story generation conditioned on personas in a free form of text to describe one's strengths, weaknesses, abilities, occupations and goals.
\paragraph{Controllable Generation}
\begin{figure*}[!ht]
\includegraphics[width=\linewidth]{graph/model.pdf
\caption{Model overview of \textsc{ConPer}. The training process is divided into the following three stages: (a) Target Planning: planning persona-related events (called ``target'' for short); (b) Plot Planning: planning a keyword sequence as an intermediate representation of the story with the guidance of the target and a dynamically growing local knowledge graph; And (c) Story Generation: generating the whole story conditioned on the input and plans.}
\label{tab:model}
\end{figure*}
Controllable text generation aims to generate texts with specified attributes. For example, \citet{DBLP:journals/corr/abs-1909-05858} pretrained a language model conditioned on control codes of different attributes~(e.g., domains, links). \citet{DBLP:conf/iclr/DathathriMLHFMY20}
proposed to combine a pretrained language model with trainable attribute classifiers to
increase the likelihood of the target attributes.
Recent studies in dialogue models focused on controlling through sentence functions~\cite{ke2018generating}, politeness~\cite{niu2018polite} and conversation targets~\cite{tang2019target}. For storytelling, \citet{brahman2020cue} incorporated additional phrases to guide the story generation. \citet{brahman2020modeling} proposed to control the emotional trajectory in a story by regularizing the generation process with reinforcement learning. \citet{rashkin2020plotmachines} generated stories from outlines of characters and events by tracking the dynamic plot states with a memory network.
A similar research to ours is \citet{zhang2018personalizing}, which introduced the PersonaChat dataset for endowing the chit-chat dialogue agents with a consistent persona. However, dialogues in PersonaChat tend to exhibit the given personas explicitly~(e.g., the agent says ``I am terrified of dogs'' for the persona ``I am afraid of dogs''). For quantitative analysis, we compute the ROUGE score~\cite{lin-2004-rouge} between the persona description and the dialogue or story. We find that the rouge-2 score is 0.1584 for {PersonaChat} and 0.018 for our dataset~(i.e., \textsc{storium}). The results indicate that exhibiting personas in stories requires a stronger ability to associate the action of a character and his implicit traits compared with exhibiting personas in dialogues.
\paragraph{Commonsense Knowledge} Recent studies have demonstrated that incorporating external commonsense knowledge significantly improved the coherence and informativeness for dialog generation~\cite{DBLP:conf/ijcai/ZhouYHZXZ18,DBLP:journals/corr/abs-2012-08383}, story ending generation~\cite{guan2019story}, essay generation~\cite{DBLP:conf/acl/YangLLLS19}, story generation~\cite{guan2020knowledge,DBLP:conf/emnlp/XuPSPFAC20,mao2019improving} and story completion~\cite{ammanabrolu2020automated}. These studies usually retrieved a static local knowledge graph which contains entities mentioned in the input, and their related entities. We propose to incorporate the knowledge dynamically during generation to better model the
keyword transitions in a long-form story.
\section{Methodology}
We define our task as follows: given a context ${X}=(x_1,x_2,\cdots,x_{|X|})$ with $|X|$ tokens, and a persona description for the protagonist ${P}=(p_1,p_2,\cdots,p_l)$ of length $l$, the model should generate a coherent story ${Y}=(y_1,y_2,\cdots,y_{|Y|})$ of length $|Y|$ to exhibit the persona. To tackle the problem, popular generation models such as GPT2 commonly employ a left-to-right decoder to minimize the negative log-likelihood $\mathcal{L}_{ST}$ of human-written stories:
\begin{align}
\mathcal{L}_{ST}&=-\sum_{t=1}^{|Y|}\text{log}P(y_t|y_{<t},S),\\
P(y_t|y_{<t},S)&=\text{softmax}(\textbf{s}_t\boldsymbol{W}+\boldsymbol{b}),\\
\textbf{s}_t&=\texttt{Decoder}(y_{<t}, S),
\end{align}
where $S$ is the concatenation of $X$ and $P$, $\textbf{s}_t$ is the decoder's hidden state at the $t$-th position of the story, $\boldsymbol{W}$ and $\boldsymbol{b}$ are trainable parameters. Based on this framework, we divide the training process of \textsc{ConPer} into three stages as shown in Figure~\ref{tab:model}.
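As a rough illustration of how this objective is typically implemented with a pretrained left-to-right decoder, the sketch below computes $\mathcal{L}_{ST}$ with Hugging Face's GPT-2 by masking the conditioning tokens out of the loss. It is a generic sketch under our own assumptions, not the authors' released code.
\begin{verbatim}
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def story_nll(context, persona, story):
    # S is the concatenation of the context X and the persona description P
    cond_ids = tokenizer.encode(context + " " + persona)
    story_ids = tokenizer.encode(" " + story)
    input_ids = torch.tensor([cond_ids + story_ids])
    # Label -100 masks the conditioning positions out of the LM loss
    labels = torch.tensor([[-100] * len(cond_ids) + story_ids])
    out = model(input_ids, labels=labels)
    return out.loss   # mean negative log-likelihood over story tokens
\end{verbatim}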
\subsection{Target Planning}
We observe that most sentences in a human-written story do not aim to exhibit any persona, but instead serve to maintain the coherence of the story. Fine-tuning on these stories directly may mislead the model to regard the input persona as noise and to focus on modeling the persona-unrelated events, which are in the majority. Therefore, we propose to first predict the persona-related events~(i.e., the target) before generating the whole story.
We use an automatic approach to extract the target from a story since no manual annotation is available. Specifically, we regard as the target the sentence that has the highest semantic similarity with the persona description. We consider only one sentence as the target in this work due to the persona-sparsity issue, and we present the result of experimenting with two target sentences in Appendix~\ref{appendix_target}. More exploration of using multiple target sentences is left as future work. We adopt NLTK~\cite{bird2009natural} for sentence tokenization, and we measure the similarity between sentences using BERTScore$_{\rm Recall}$~\cite{zhang2019bertscore} with RoBERTa$_{\rm Large}$~\cite{liu2019roberta} as the backbone model. Let $T=(\tau_1,\tau_2,\cdots,\tau_\iota)$ denote the target sentence of length $\iota$, which is a sub-sequence of $Y$. Formally, the loss function $\mathcal{L}_{TP}$ for this stage is derived as follows:
\begin{align}
\mathcal{L}_{TP}&=-\sum_{t=1}^{\iota}\text{log}P(\tau_t|\tau_{<t}, S).
\end{align}
In this way, we exert explicit supervision to encourage the model to condition on the input personas.
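A minimal sketch of this target-extraction heuristic is given below; the function name is ours, and we assume the \texttt{bert\_score} package with the RoBERTa$_{\rm Large}$ backbone described above.
\begin{verbatim}
import nltk
from bert_score import score as bert_score

def extract_target(story, persona):
    """Return the story sentence most similar to the persona."""
    sentences = nltk.sent_tokenize(story)
    # BERTScore recall of every sentence against the persona description.
    _, recall, _ = bert_score(sentences,
                              [persona] * len(sentences),
                              model_type="roberta-large")
    return sentences[int(recall.argmax())]
\end{verbatim}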
\subsection{Plot Planning}
At this stage, \textsc{ConPer} learns to plan a keyword sequence for subsequent story generation~\cite{yao2019plan}. Plot planning requires a strong ability to model the causal and temporal relationships in the context in order to expand a reasonable story plot~(e.g., associating \textit{``unskilled''} with \textit{``failure''} for the example in Table~\ref{tab:example_story}), which is extremely challenging without external guidance such as commonsense knowledge. In order to plan a coherent event sequence, we introduce a dynamically growing local knowledge graph, a subset of the external commonsense knowledge base ConceptNet~\cite{speer2017conceptnet}, which is initialized to contain triples related to the keywords mentioned in the input and the target. When planning the next keyword, \textsc{ConPer} combines the knowledge information from the local graph and the contextualized features captured by the language model with learnable weights. Then \textsc{ConPer} grows the local graph by adding the knowledge triples neighboring the predicted keyword. Formally, we denote the keyword sequence as ${W}=(w_{1}, w_2,\cdots, w_{k})$ of length $k$ and the local graph as $\mathcal{G}_t$ when predicting the keyword $w_{t}$. The loss function $\mathcal{L}_{KW}$ for generating the keyword sequence is as follows:
\begin{align}
\mathcal{L}_{KW}&=-\sum_{t=1}^{k}\text{log}P(w_t|w_{<t}, S, T, \mathcal{G}_{t}).
\end{align}
\paragraph{Keyword Extraction}
We extract words that relate to emotions and events from each sentence of a story as keywords for training, since they are important for modeling characters' evolving psychological states and their behavior. We measure the emotional tendency of each word using the sentiment analyzer in NLTK, which outputs scores for \textit{negative}, \textit{neutral}, \textit{positive}, and \textit{compound} sentiment. We regard a word as emotion-related if its score for \textit{negative} or \textit{positive} is larger than $0.5$. Second, we extract and lemmatize the nouns and verbs~(excluding stop-words) from a story as {event-related} keywords, using NLTK for POS-tagging and lemmatization. We then combine the two types of keywords in their original order as the keyword sequence for planning. We limit the number of keywords extracted from each story sentence to at most five, and we ensure that there is at least one keyword per sentence by randomly choosing one word if no keyword is extracted. We do not apply this limit when extracting keywords from the leading context and the persona description, since these keywords are only used to initialize the local knowledge graph.
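The snippet below sketches this keyword-extraction step with NLTK's sentiment analyzer, POS tagger and WordNet lemmatizer; the exact stop-word list and the fallback rule are assumptions following the description above.
\begin{verbatim}
import nltk
from nltk.corpus import stopwords
from nltk.sentiment import SentimentIntensityAnalyzer
from nltk.stem import WordNetLemmatizer

# Requires the usual NLTK downloads (punkt, stopwords, wordnet,
# vader_lexicon, averaged_perceptron_tagger).
sia = SentimentIntensityAnalyzer()
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def sentence_keywords(sentence, max_keywords=5):
    tokens = nltk.word_tokenize(sentence)
    keywords = []
    for word, tag in nltk.pos_tag(tokens):
        scores = sia.polarity_scores(word)
        # Emotion-related: strongly positive or negative words.
        if scores["pos"] > 0.5 or scores["neg"] > 0.5:
            keywords.append(word.lower())
        # Event-related: lemmatized nouns and verbs (non stop-words).
        elif tag.startswith(("NN", "VB")) and word.lower() not in stop_words:
            pos = "v" if tag.startswith("VB") else "n"
            keywords.append(lemmatizer.lemmatize(word.lower(), pos=pos))
    if not keywords and tokens:
        keywords = [tokens[0].lower()]  # fallback: keep one word
    return keywords[:max_keywords]
\end{verbatim}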
\paragraph{Incorporating Knowledge} We introduce a dynamically growing local knowledge graph for plot planning. For each example, we initialize the graph $\mathcal{G}_1$ as a set of knowledge triples in which the keywords in $S$ and $T$ are the head or tail entities, and then update $\mathcal{G}_{t}$ to $\mathcal{G}_{t+1}$ by adding the triples related to the keyword $w_t$ generated at the $t$-th step. The key problem at this stage is therefore how to represent and utilize the local graph for next-keyword prediction.
The local graph consists of multiple sub-graphs, each of which contains all the triples related to a keyword, denoted as $\varepsilon_i=\{(h^i_n,r^i_n,t^i_n)\mid h^i_n\in\mathcal{V}, r^i_n\in\mathcal{R}, t^i_n\in\mathcal{V}\}_{n=1}^N$,
where $\mathcal{R}$ and $\mathcal{V}$ are the relation set and entity set of ConceptNet, respectively. We derive the representation $\textbf{g}_i$ for $\varepsilon_i$ using graph attention~\cite{zhou2018commonsense} as follows:
\begin{align}
\textbf{g}_i&=\sum_{n=1}^{N}\alpha_n[\textbf{h}^i_n;\textbf{t}^i_n]\\
\alpha_n&=\frac{\text{exp}(\beta_n)}{\sum_{j=1}^{N}\text{exp}(\beta_j)},\\%\text{softmax}(\beta_n),\\
\beta_n&=(\boldsymbol{W}_r\textbf{r}^i_n)^T\text{tanh}(\boldsymbol{W}_h\textbf{h}^i_n+\boldsymbol{W}_t\textbf{t}^i_n),
\end{align}
where $\boldsymbol{W}_h,\boldsymbol{W}_r$ and $\boldsymbol{W}_t $ are trainable parameters, $\textbf{h}^i_n, \textbf{r}^i_n$ and $\textbf{t}^i_n$ are learnable embedding representations for $h^i_n, r^i_n$ and $t^i_n$, respectively.
We use the same BPE tokenizer~\cite{radford2019language} as the language model to tokenize the head and tail entities, which may split an entity into multiple sub-words.
Therefore, we derive $\textbf{h}^i_n$ and $\textbf{t}^i_n$ by summing the embeddings of all the sub-words, and we initialize the relation embeddings randomly.
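A minimal PyTorch sketch of this graph-attention pooling is shown below; the tensor shapes are our simplifying assumptions.
\begin{verbatim}
import torch

def subgraph_representation(h, r, t, W_h, W_r, W_t):
    """Attention-pool the N triples (h_n, r_n, t_n) of one sub-graph.

    h, r, t: (N, d) embeddings of heads, relations and tails.
    W_h, W_r, W_t: (d, d) trainable projection matrices.
    Returns g_i of size (2d,), the sub-graph representation.
    """
    beta = ((r @ W_r) * torch.tanh(h @ W_h + t @ W_t)).sum(dim=-1)
    alpha = torch.softmax(beta, dim=0)
    return (alpha.unsqueeze(-1) * torch.cat([h, t], dim=-1)).sum(dim=0)
\end{verbatim}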
After obtaining the graph representation, we predict the distribution of the next keyword by dynamically deciding whether to select the keyword from the local graph as follows:
\begin{align}
P(w_t|w_{<t}, S, T, \mathcal{G}_{t})&=\gamma_t P^t_{k} + (1-\gamma_t)P^t_{l},
\end{align}
where $\gamma_t\in\{0,1\}$ is a binary gate, $P^t_{k}$ is a distribution over the entities in $\mathcal{G}_t$, and $P^t_{l}$ is a distribution over the whole vocabulary.
We incorporate the knowledge information implicitly for computing both distributions:
\begin{align}
P^t_{k} &= \text{softmax}(\boldsymbol{W}_k[\textbf{s}_t; \textbf{c}_t]+\boldsymbol{b}_k),\label{ptk}\\
P^t_{l} &= \text{softmax}(\boldsymbol{W}_l[\textbf{s}_t; \textbf{c}_t]+\boldsymbol{b}_l),\label{ptl}
\end{align}
where $\boldsymbol{W}_k,\boldsymbol{b}_k,\boldsymbol{W}_l$ and $\boldsymbol{b}_l$ are trainable parameters, and $\textbf{c}_t$ is a summary vector of the knowledge information obtained by attending over the representations of all the sub-graphs in $\mathcal{G}_t$, formally as follows:
\begin{align}
\textbf{c}_t&=\sum_{n=1}^N\alpha_n\textbf{g}_n,\\
\alpha_n&=\text{softmax}(\textbf{s}_t^T\boldsymbol{W}_g\textbf{g}_n).
\end{align}
where $\boldsymbol{W}_g$ is a trainable parameter. During training, we set $\gamma_t$ to the ground-truth label $\hat{\gamma}_t$. During generation, we decide $\gamma_t$ by deriving the probability $p_t$ of selecting an entity from the local graph as the next keyword, and set $\gamma_t$ to $1$ if $p_t\geq0.5$ and to $0$ otherwise. We compute $p_t$ as follows:
\begin{align}
p_t &= \text{sigmoid}(\boldsymbol{W}_p[\textbf{s}_t; \textbf{c}_t]+\boldsymbol{b}_p),
\end{align}
where $\boldsymbol{W}_p$ and $\boldsymbol{b}_p$ are trainable parameters. We train the classifier with the standard cross entropy loss $\mathcal{L}_{C}$ derived as follows:
\begin{align}
\mathcal{L}_{C}=-\big(\hat{\gamma}_t\text{log}p_t+(1-\hat{\gamma}_t)\text{log}(1-{p}_t)\big),
\end{align}
where $\hat{\gamma}_t$ is the ground-truth label. In summary, the overall loss function $\mathcal{L}_{PP}$ for the plot planning stage is computed as follows:
\begin{align}
\mathcal{L}_{PP}=\mathcal{L}_{KW}+\mathcal{L}_{C}.
\end{align}
By incorporating commonsense knowledge for planning and dynamically updating the local graph, \textsc{ConPer} can better model the causal and temporal relationships between events in the context.
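A minimal sketch of the gated prediction step is given below; the layer shapes are our assumptions, and during training the hard gate is replaced by the ground-truth label $\hat{\gamma}_t$ as described above.
\begin{verbatim}
import torch
import torch.nn as nn

class KeywordGate(nn.Module):
    """Combine the graph distribution P_k and vocabulary distribution P_l."""

    def __init__(self, hidden_size, vocab_size, num_entities):
        super().__init__()
        self.to_graph = nn.Linear(2 * hidden_size, num_entities)  # W_k, b_k
        self.to_vocab = nn.Linear(2 * hidden_size, vocab_size)    # W_l, b_l
        self.gate = nn.Linear(2 * hidden_size, 1)                 # W_p, b_p

    def forward(self, s_t, c_t):
        feats = torch.cat([s_t, c_t], dim=-1)      # [s_t; c_t]
        p_k = torch.softmax(self.to_graph(feats), dim=-1)
        p_l = torch.softmax(self.to_vocab(feats), dim=-1)
        p_t = torch.sigmoid(self.gate(feats))      # prob. of copying from graph
        gamma = (p_t >= 0.5).float()               # hard gate at inference time
        return gamma * p_k, (1.0 - gamma) * p_l, p_t
\end{verbatim}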
\paragraph{Target Guidance} In order to further improve coherence and persona-consistency, we propose to exert explicit guidance of the predicted target on plot planning. Specifically, we expect \textsc{ConPer} to predict keywords that are semantically close to the target. Therefore, we add bias terms $\textbf{d}^t_k$ and $\textbf{d}^t_l$ to Equations~\ref{ptk} and \ref{ptl}, respectively, formally as follows:
\begin{align}
P^t_{k}&=\text{softmax}(\boldsymbol{W}_{k}[\textbf{s}_t;\textbf{c}_t]+\boldsymbol{b}_{k}+\textbf{d}^t_k),\\
\textbf{d}^t_k&=[\textbf{s}_\text{tar};\textbf{c}_t]^T\boldsymbol{W}_d\textbf{E}_k+\boldsymbol{b}_d,\label{bias_term}\\
\textbf{s}_\text{tar}&=\frac1\iota\sum_{t=1}^\iota\textbf{s}_{|\tau_t|},
\end{align}
where $\boldsymbol{W}_d$ and $\boldsymbol{b}_d$ are trainable parameters, $\textbf{s}_\text{tar}$ is the target representation computed by averaging the hidden states at each position of the predicted target, and $\textbf{E}_k$ is an embedding matrix, each row of which is the embedding for an entity in $\mathcal{G}_t$. The modification for Equation~\ref{ptl} is similar except that we compute the bias term $\textbf{d}_l^t$ with an embedding matrix $\textbf{E}_l$ for the whole vocabulary.
\subsection{Story Generation}
After planning the target $T$ and the keyword sequence $W$, we train \textsc{ConPer} to generate the whole story conditioned on the input and the plans with the standard language model loss $\mathcal{L}_{ST}$. Since we extract one sentence from each story as the target, we do not train \textsc{ConPer} to regenerate this sentence in the story generation stage. Instead, we insert a special token \texttt{Target} into the story to specify the position of the target during training. At inference time, \textsc{ConPer} first plans the target and the plot, then generates the whole story, and finally places the target at the position of \texttt{Target}.
\section{Experiments}
\subsection{Dataset}
We conduct the experiments on the \textsc{storium} dataset~\cite{akoury2020storium}. \textsc{storium} contains nearly 6k long-form stories, and each story unfolds through a series of scenes with several shared characters. A scene consists of multiple short scene entries, each of which is written either to portray one character with an annotation for their personality~(i.e., the \textit{``card''} in \textsc{storium}), or to introduce new story settings~(e.g., problems, locations) from the perspective of the narrator.
In this paper, we concatenate all entries from the same scene, since a scene can be seen as an independent story. We regard a scene entry written for a certain character as the target output, the personality of the character as the persona description, and the previous entries written for this character or from the perspective of the narrator in the same scene as the leading context. We split the processed examples for training, validation and testing based on the official split of \textsc{storium}. We retain about 1,000 words (truncated at a sentence boundary) for each example due to the length limit of the pretrained language model.
At the plot planning stage, we retrieve a set of triples from ConceptNet~\cite{speer2017conceptnet} for each keyword extracted from the input or generated by the model. We only retain triples whose head and tail entities are single words that occur in our dataset, and whose relation confidence score~(annotated by ConceptNet) is greater than 1.0. The average number of triples per keyword is 33. We show more statistics in Table~\ref{tab:stat}.
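This filtering step can be sketched as follows; we assume the ConceptNet triples are already available locally as (head, relation, tail, weight) tuples, since the retrieval code is not part of the model itself.
\begin{verbatim}
def filter_triples(triples, vocabulary, min_weight=1.0):
    """Keep single-word, in-vocabulary triples with enough confidence.

    triples: iterable of (head, relation, tail, weight) from ConceptNet.
    vocabulary: set of words occurring in the dataset.
    """
    kept = []
    for head, relation, tail, weight in triples:
        single_word = " " not in head and " " not in tail
        in_vocab = head in vocabulary and tail in vocabulary
        if single_word and in_vocab and weight > min_weight:
            kept.append((head, relation, tail, weight))
    return kept
\end{verbatim}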
\begin{table}[h]
\small
\centering
\begin{tabular}{lccc}
\toprule
& \textbf{Train} & \textbf{Valid} & \textbf{Test} \\
\midrule
\textbf{\# Examples} & 47,910 & 6,477 & 6,063 \\
\textbf{Avg. Context Length} & 332.7 & 324.8 & 325.7 \\
\textbf{Avg. Description Length} & 23.8 & 22.7 & 24.6 \\
\textbf{Avg. Story Length} & 230.5 & 225.7 & 234.3 \\
\midrule
\textbf{Avg. Target Length} & 21.8 & 21.7 & 22.2\\
\textbf{Avg. \# Keywords~(Input)}&101.1&99.2&99.6 \\
\textbf{Avg. \# Keywords~(Story)}&31.2&30.5&31.5 \\
\bottomrule
\end{tabular}
\caption{Dataset statistics. We compute the length by counting tokens using the BPE tokenizer of GPT2. Keywords are extracted either from the input to initialize the local graph, or from the story to train the model for plot planning.}
\label{tab:stat}
\end{table}
\subsection{Baselines}
We compare \textsc{ConPer} with following baselines.
\textbf{(1) {ConvS2S:}} It directly uses a convolutional seq2seq model to generate a story conditioned on the input~\cite{gehring2017convolutional}.
\textbf{(2) {Fusion:}} It generates a story by first training a convolutional seq2seq model, then fixing it and initializing another trainable convolutional seq2seq model with its parameters. The two models are then trained together through a fusion mechanism~\cite{fan2018hierarchical}.
\textbf{(3) {Plan\&Write:} }It first plans a keyword sequence conditioned on the input, and then generates a story based on the keywords~\cite{yao2019plan}. \textbf{(4) {GPT2$_{\rm Scr}$:}} It has the same network architecture as GPT2 but is trained on our dataset from scratch without any pretrained parameters.
\textbf{(5) {GPT2$_{\rm Ft}$:}} It is initialized using pretrained parameters, and then fine-tuned on our dataset with the standard language modeling objective. \textbf{(6) {PlanAhead:}} It first predicts a keyword distribution conditioned upon the input, and then generates a story by combining the language model prediction and the keyword distribution with a gate mechanism~\cite{kang2020plan}. We remove the sentence position embedding and the auxiliary training objective~(next sentence prediction) used in the original paper for fair comparison.
Furthermore, we evaluate the following ablated models to investigate the influence of each component: \textbf{\textsc{(1) ConPer} w/o KG:} removing the guidance of the commonsense knowledge in the plot planning stage. \textbf{\textsc{(2) ConPer} w/o TG:} removing target guidance in the plot planning stage. \textbf{\textsc{(3) ConPer} w/o PP:} removing the plot planning stage, which means the model first plans a target sentence and then directly generates the whole story. \textbf{\textsc{(4) ConPer} w/o TP:} removing the target planning stage, which also leads to the removal of target guidance in the plot planning stage.
\subsection{Experiment Settings}
We build \textsc{ConPer} on GPT2~\cite{radford2019language}, which is widely used for story generation~\cite{guan2020knowledge}. We concatenate the context and the persona description with a special token as the input for each example. For fair comparison, we also add special tokens at both ends of the target sentence in each training example for all baselines. We implement the non-pretrained models based on the scripts provided by the original papers, and the pretrained models based on the public checkpoints and code of HuggingFace's Transformers\footnote{\url{https://github.com/huggingface/transformers}}. We use the base version of all pretrained models due to limited computational resources. We set the batch size to 8, the initial learning rate of the AdamW optimizer to 5e-5, and the maximum number of training epochs to 5 with an early stopping mechanism. We generate stories using top-$p$ sampling with $p=0.9$~\cite{holtzman2019curious}. We apply these settings to all the GPT2-based models, including GPT2$_{\rm Scr}$, GPT2$_{\rm Ft}$, PlanAhead, \textsc{ConPer} and its ablated models. For ConvS2S, Fusion and Plan\&Write, we use the settings from their respective papers and codebases.
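For reference, generation with top-$p$ sampling can be sketched with HuggingFace's \texttt{generate} API as follows; the prompt format with a plain-text separator is a simplification of our actual input construction.
\begin{verbatim}
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

context = "The group has gathered on the rooftop garden ..."
persona = "You are above average in your computer skills."
prompt = context + " <sep> " + persona   # simplified input format

input_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(input_ids,
                            do_sample=True,   # nucleus (top-p) sampling
                            top_p=0.9,
                            max_length=1024,
                            pad_token_id=tokenizer.eos_token_id)
story = tokenizer.decode(output_ids[0, input_ids.size(1):],
                         skip_special_tokens=True)
\end{verbatim}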
\begin{table*}[h]
\small
\centering
\begin{tabular}{l|cccc|cccc}
\toprule
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Models}}} & \multicolumn{4}{c|}{\textbf{Coherence}}& \multicolumn{4}{c}{\textbf{Persona consistency}}\\
\multicolumn{1}{c|}{}& \textbf{Win(\%)} & \textbf{Lose(\%)} & \textbf{Tie(\%)} & \textbf{$\kappa$} & \textbf{Win(\%)} & \textbf{Lose(\%)} & \textbf{Tie(\%)} & \textbf{$\kappa$} \\ \midrule
\textbf{\textsc{ConPer} vs. ConvS2S}& 89.0* & 5.0 & 6.0 & 0.625 & 82.0* & 8.0 & 10.0 & 0.564\\
\textbf{\textsc{ConPer} vs. Fusion}&71.0*&23.0&6.0&0.213&61.0*&22.0&17.0&0.279\\
\textbf{\textsc{ConPer} vs. GPT2$_{\rm Ft}$}& 54.0* & 18.0 & 28.0 & 0.275 & 53.0* & 11.0 & 36.0 & 0.215 \\
\textbf{\textsc{ConPer} vs. PlanAhead}& 53.0* & 25.0 & 22.0 & 0.311 & 59.0* & 28.0 & 13.0 & 0.280\\
\bottomrule
\end{tabular}
\caption{Manual evaluation results. The scores indicate the percentage of \textit{win}, \textit{lose} or \textit{tie} when comparing our model with a baseline. $\kappa$ denotes Randolph's kappa to measure the inter-annotator agreement. * means \textsc{ConPer} outperforms the baseline model significantly with p-value$<0.01$ (Wilcoxon signed-rank test).}
\label{tab:human_eva}
\end{table*}
\begin{table}[!t]
\small
\centering
\begin{tabular}{lccccc}
\toprule
\textbf{Models}&\textbf{B-1}&\textbf{B-2}&\textbf{BS-t}&\textbf{BS-m}&\textbf{PC}\\
\midrule
\textbf{ConvS2S}&12.5&4.7&22.2&32.8&17.1\\
\textbf{Fusion}&13.3&5.0&22.7&33.3& 30.8\\
\textbf{Plan\&Write}&7.2&2.8&6.2&29.7&23.6\\
\midrule
\textbf{GPT2$_{\rm Scr}$}&13.3&4.8&24.7&38.0&26.6\\
\textbf{GPT2$_{\rm Ft}$}&13.5&4.7&26.7&37.8&39.5\\
\textbf{PlanAhead}&15.4&5.3&26.1&37.8&50.2\\
\midrule
\textbf{\textsc{ConPer}}&\textbf{19.1}&\textbf{6.9}&\textbf{32.1}&\textbf{41.4}&\textbf{59.7}\\
\textbf{~~w/o KG}&17.4&\underline{6.3}&31.6&39.7&53.4\\
\textbf{~~w/o TG}&\underline{17.7}&\underline{6.3}&31.9&\underline{40.2}&\underline{56.3}\\
\textbf{~~w/o PP}&14.9&5.3&\underline{32.0}&40.0&46.9\\
\textbf{~~w/o TP}&16.4&5.8&27.8&37.7&44.9\\
\midrule
\textbf{\textit{Ground Truth}}&\textit{N/A}&\textit{N/A}&\textit{42.6}&\textit{42.6}&\textit{75.2}\\
\bottomrule
\end{tabular}
\caption{Automatic evaluation results. The best performance is highlighted in \textbf{bold}, and the second best is \underline{underlined}. All results are multiplied by 100.}
\label{tab:auto_eval}
\end{table}
\subsection{Automatic Evaluation}
\paragraph{Metrics} We adopt the following automatic metrics for evaluation on the test set. \textbf{(1) BLEU~(B-n):} We use $n=1,2$ to evaluate the $n$-gram overlap between generated and ground-truth stories~\cite{papineni2002bleu}. \textbf{(2) BERTScore-target~(BS-t):} We use BERTScore$_{\rm Recall}$~\cite{zhang2019bertscore} to measure the semantic similarity between the generated target sentence and the persona description. A higher score indicates that the target embodies the persona better. \textbf{(3) BERTScore-max~(BS-m):} It computes the maximum BERTScore between each sentence in the generated story and the persona description. \textbf{(4) Persona-Consistency~(PC):} It is a learnable automatic metric~\cite{guan2020union}. We fine-tune RoBERTa$_{\textsc{base}}$ on the training set as a classifier to distinguish whether a story exhibits a persona consistent with a given persona description. We regard the ground-truth stories as positive examples, where the stories and the descriptions are consistent, and construct negative examples by replacing the story with a randomly sampled one. After fine-tuning, the classifier achieves 83.63\% accuracy on the automatically constructed test set. We then calculate the consistency score as the average classifier score over all the generated texts with respect to the corresponding inputs.
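A sketch of how the PC metric is applied at evaluation time is shown below, assuming a RoBERTa classifier fine-tuned on (description, story) pairs whose label 1 means ``consistent''; the checkpoint path is hypothetical.
\begin{verbatim}
import torch
from transformers import (RobertaForSequenceClassification,
                          RobertaTokenizer)

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# Hypothetical path to the fine-tuned persona-consistency classifier.
classifier = RobertaForSequenceClassification.from_pretrained(
    "./pc-classifier")

def persona_consistency(persona, story):
    inputs = tokenizer(persona, story, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        logits = classifier(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# The PC score of a model is the average of persona_consistency
# over all generated (persona, story) pairs.
\end{verbatim}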
\paragraph{Result}
Table \ref{tab:auto_eval} shows the automatic evaluation results. \textsc{ConPer} generates more word overlap with the ground-truth stories, as shown by the higher BLEU scores, and better embodies the specified persona in the target sentence and the whole story, as shown by the higher BS-t and BS-m scores. The higher PC score of \textsc{ConPer} further demonstrates the better exhibition of the given personas in the generated stories. As for the ablation tests, all the ablated models obtain lower scores than \textsc{ConPer} on all metrics, indicating the effectiveness of each component. Both \textsc{ConPer} w/o PP and \textsc{ConPer} w/o TP drop significantly in BLEU scores, suggesting that planning is important for generating long-form stories.
\textsc{ConPer} w/o TP also performs substantially worse on all metrics than \textsc{ConPer} w/o TG, indicating the necessity of explicitly modeling the relations between persona descriptions and story plots. We also show an analysis of target guidance in Appendix \ref{appendix:target_guide}.
\subsection{Manual Evaluation}
We conduct pairwise comparisons between our model and four strong baselines: PlanAhead, GPT2$_{{\rm Ft}}$, Fusion and ConvS2S. We randomly sample 100 examples from the test set and obtain 500 stories generated by \textsc{ConPer} and the four baseline models. For each pair of stories (one by \textsc{ConPer} and the other by a baseline, along with the input), we hire three annotators to give a preference (\textit{win}, \textit{lose} or \textit{tie}) in terms of \textit{coherence}~(inter-sentence relatedness, causal and temporal dependencies) and \textit{persona-consistency} with the input (exhibiting consistent personas). We adopt majority voting to make the final decision among the three annotators. Note that the two aspects are evaluated independently. We resort to Amazon Mechanical Turk~(AMT) for the annotation. As shown in Table \ref{tab:human_eva}, \textsc{ConPer} outperforms the baselines significantly in both coherence and persona consistency.
\begin{table}[!h]
\small
\centering
\begin{tabular}{l|ccc}
\toprule
\textbf{{Policies}}& \textbf{Yes~(\%)} & \textbf{No~(\%)} & \textbf{$\kappa$} \\
\midrule
\textbf{Random}& 22.0 & 78.0 & 0.25\\
\textbf{Ours}& 75.0 & 25.0 & 0.36\\
\bottomrule
\end{tabular}
\caption{Percentages of examples labeled with ``Yes'' or ``No'' for whether the identified sentence reflects the given persona. $\kappa$ denotes Randolph's kappa to measure the inter-annotator agreement.}
\label{tab:human_eva_target}
\end{table}
Furthermore, we used human annotation to evaluate whether the identified target sentence embodies the given persona. We randomly sampled 100 examples from the test set and identified the target for each example as the sentence with the maximum BERTScore with respect to the persona description. As a baseline, we used a random policy that samples a sentence from the original story as the target. We hired three annotators on AMT to annotate each example~(``Yes'' if the sentence embodies the given persona, and ``No'' otherwise), and adopted majority voting to make the final decision. Table \ref{tab:human_eva_target} shows that our method significantly outperforms the random policy in identifying persona-related sentences.
\subsection{Controllability Analysis}
To further investigate whether the models can be generalized to generate specific stories to exhibit different personas conditioned on the same context, we perform a quantitative study to observe how many generated stories are successfully controlled as the input persona descriptions change.
\paragraph{Automatic Evaluation} For each example in the test set, we use a model to generate ten stories conditioned on the context of this example and on ten persona descriptions randomly sampled from other examples, respectively.
We regard a generated story as successfully controlled if the pair of the story and its corresponding persona description~(along with the context) has the maximum persona-consistency score among all ten descriptions.
We define the controllability score of a model as the percentage of successfully controlled stories among the ten stories generated for each example, averaged over the whole test set. We show the results for \textsc{ConPer} and strong baselines in Table~\ref{tab:auto_controbility}. Furthermore, we also compute the superiority~(denoted as $\Delta$) of the persona-consistency score computed between a generated story and its corresponding description over that computed between the story and one of the other nine descriptions~\cite{sinha-etal-2020-learning}. A larger $\Delta$ means the model can generate stories that adhere more specifically to the given personas.
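The controllability score can be sketched as below, reusing the \texttt{persona\_consistency} function from the previous section; \texttt{generate\_story} is a hypothetical helper wrapping the model's generation call.
\begin{verbatim}
def controllability_score(examples, generate_story,
                          persona_consistency, n=10):
    """Fraction of stories whose own persona obtains the top PC score."""
    successes, total = 0, 0
    for context, personas in examples:   # personas: n sampled descriptions
        for i, persona in enumerate(personas[:n]):
            story = generate_story(context, persona)
            scores = [persona_consistency(p, story) for p in personas[:n]]
            successes += int(max(range(n), key=scores.__getitem__) == i)
            total += 1
    return successes / total
\end{verbatim}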
\begin{table}[!t]
\small
\centering
\begin{tabular}{lcc}
\toprule
\textbf{Models}&\textbf{Controllability Score}&\textbf{$\Delta$}\\
\midrule
\textbf{Plan\&Write}&10.6&0.01\\
\textbf{GPT2$_{\rm Ft}$}&24.2&11.2\\
\textbf{PlanAhead}&23.1&11.2\\
\midrule
\textbf{\textsc{ConPer}}&\textbf{29.5}&\textbf{15.1}\\
\bottomrule
\end{tabular}
\caption{Automatic evaluation results for controllability. All results are multiplied by 100.}
\label{tab:auto_controbility}
\end{table}
\begin{table}[!t]
\small
\centering
\begin{tabular}{lccc}
\toprule
\textbf{Models}&\textbf{Acco (\%)}&\textbf{Oppo (\%)}&\textbf{Irre (\%)}\\
\midrule
\textbf{GPT2$_{\rm Ft}$}&21&10&69\\
\textbf{PlanAhead}&44&12&44\\
\midrule
\textbf{\textsc{ConPer}}&\textbf{66}&9&25\\
\bottomrule
\end{tabular}
\caption{Manual evaluation results for the controllability. Acco/Oppo/Irre means the example exhibits an accordant/opposite/irrelevant persona with the input.}
\label{tab:manual_controbility}
\end{table}
As shown in Table \ref{tab:auto_controbility}, more stories are successfully controlled for \textsc{ConPer} than for the baselines. The larger $\Delta$ of \textsc{ConPer} suggests that it can generate stories that are more specific to the input personas. The results show the better generalization ability of \textsc{ConPer} in generating persona-controllable stories.
\paragraph{Manual Evaluation}
For manual evaluation, we randomly sampled 50 examples from the test set and manually revised each persona description to exhibit an opposite persona~(e.g., from ``skilled pilot'' to ``unskilled pilot''). We required each model to generate two stories conditioned on the original and the opposite persona description, respectively. Finally, we obtained 300 stories from three models: GPT2$_{\rm Ft}$, PlanAhead and \textsc{ConPer}. Then, we hired three graduate students to judge whether each story accords with the input persona. All annotators have good English proficiency and are well trained for this evaluation task.
Table~\ref{tab:manual_controbility} shows the evaluation results. We can see that 66\% of the stories generated by \textsc{ConPer} are accordant with the input persona, suggesting the better controllability of \textsc{ConPer}.
\begin{table*}[h]
\small
\begin{tabular}{p{443pt}}
\toprule
\textbf{Context:}
$\cdots$ the \underline{group} has \underline{gathered} on the rooftop \underline{garden} of Miyamoto Mansion $\cdots$ the TV \underline{set} out \underline{near} the long \underline{table} on the \underline{patio} is \underline{talking} about some \underline{spree} of \underline{thefts} at \underline{low volume} $\cdots$ the \underline{issue} of Chloe's \underline{disappearance} and the \underline{missing statue} still \underline{hang} over their heads. \\
\\
\textbf{Persona Description:} [Aito]~You are above \underline{average} in your \underline{computer skills}.
If \underline{information} is \underline{power}, then your \underline{ability} to \underline{use} the internet
\underline{makes} you one of the most \underline{powerful people} on the \underline{planet}.\\
\midrule
\textbf{GPT2$_{{\rm Ft}}$:}
{\textit{Aito looked at the others, still trying to help find a way out of the hotel. He wasn't sure what the rest of the group wanted to see if they were going to survive and all knew if he needed to be needed $\cdots$}}\\
\midrule
\textbf{PlanAhead:}
{Miyamoto Mansion $\cdots$ perhaps it's just a bit farther away. The music sounds bright enough but the line of visitors does not. \textit{Aito was once a pretty girl, he had always been quite witty when talking to people but she always found it annoying that a group of tourists looked like trash just to her} $\cdots$ }\\
\midrule
\textbf{\textsc{ConPer}:}
{$\cdots$ ``Oh, wait $\cdots$ wait $\cdots$ people are talking about Chloe?'' $\cdots$ ``\textbf{\colorlet{c}{red!20}\sethlcolor{c}\hl{I have a feeling the internet is probably our best chance to get through this}}'' $\cdots$ Aito looked around the table a moment before \colorlet{c}{red!20}\sethlcolor{c}\hl{pulling out her tablet and starting typing furiously into her computer}. She looked up at the tablet that had appeared, and she could see that it was working on a number of things$\cdots$}\\\\
\textbf{{Planned keywords}:} $\cdots$ people $\rightarrow$ look $\rightarrow$ around $\rightarrow$ tablet $\rightarrow$ see $\cdots$\\
\bottomrule
\end{tabular}
\caption{Generated stories by different models. \textit{Italic} words indicate entities or events that are inconsistent with the input. The \textbf{bold} sentence indicates the target generated by \textsc{ConPer}. \colorlet{c}{red!20}\sethlcolor{c}\hl{Red} words denote events consistent with the input, and the extracted keywords are \underline{underlined}.}
\label{tab:case_study}
\end{table*}
\subsection{Case Study}
We present some cases in Table \ref{tab:case_study}. The story generated by \textsc{ConPer} exhibits the specified persona with a coherent event sequence. The keywords planned by \textsc{ConPer} provide effective discourse-level guidance for the subsequent story generation, such as \texttt{tablet}, which has a commonsense connection with \texttt{computer skills} and \texttt{Internet} in the input. In contrast, the baselines tend not to generate any persona-related events. For example, the given persona description emphasizes the strong computer skills of the protagonist, while the stories generated by PlanAhead and GPT2 have nothing to do with computer skills. We further analyze some error cases generated by our model in Appendix \ref{appendix:error}.
\section{Conclusion}
We present \textsc{ConPer}, a planning-based model for a new task aiming at controlling the protagonist's persona in story generation. We propose target planning to explicitly model the relations between persona-related events and input personas, and plot planning to learn the keyword transition in a story with the guidance of predicted persona-related events and external commonsense knowledge. Extensive experiments show that \textsc{ConPer} can generate more coherent stories with better consistency with the input personas than strong baselines. Further analysis also indicates the better persona-controllability of \textsc{ConPer}.
\section*{Acknowledgement}
This work was supported by the National Science Foundation for Distinguished Young Scholars (with No. 62125604) and the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096). This work was also supported by the
Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005. This work was also sponsored by Tsinghua-Toyota Joint Research Fund. We would also like to thank the anonymous reviewers for their invaluable suggestions and feedback.
\section*{Ethics Statements}
We conduct the experiments by adapting a public story generation dataset \textsc{storium} to our task. Automatic and manual evaluation results show that our model \textsc{ConPer} outperforms existing state-of-the-art models in terms of coherence, consistency and controllability, suggesting the generalization ability of \textsc{ConPer} to different input personas.
And our approach can be easily extended to different syntactic levels~(e.g., phrase-level and paragraph-level events), different model architectures~(e.g., BART~\cite{bart}) and different generation tasks~(e.g., stylized long text generation).
In both \textsc{storium} and ConceptNet, we find some potentially offensive words. Therefore, our model may suffer from risks of generating offensive content, although we have not observed such content in the generated results. Furthermore, ConceptNet consists of commonsense triples of concepts, which may not be enough for modeling inter-event relations in long-form stories.
We resort to Amazon Mechanical Turk (AMT) for manual evaluation. We do not ask about personal privacy or collect personal information of annotators during the annotation process. We hire three annotators and pay each annotator \$0.1 for comparing each pair of stories. The payment is reasonable considering that it takes an annotator about one minute on average to finish a comparison.
\normalem
\bibliographystyle{acl_natbib}
\section{Introduction}
\subsection{Context}
Tourism is viewed as an information-intensive industry and a highly dynamic and changing domain where information plays an important role in decision and action making \cite{Inkpen1998}.
Tourism represents a significant part of the global economy, more than 9\% of the global GDP (US \$7 trillion) \cite{WTTC2014}.
It becomes increasingly difficult for policy makers, on the one hand, to guide choices in order to boost their territory and make it attractive, and on the other hand, to analyze users' comments and their vision of the territory.
As the World Wide Web has changed people's daily life, it has significantly influenced the way information is gathered and exchanged in the tourism area, with the intensive use of social networks and Web sites specialized in e-tourism (TripAdvisor, Booking.com, etc.).
Along with Web technologies, tourism actors such as tour operators need knowledge about travel, hotels, markets, etc.
Indeed, information like hotel prices, hotel locations, offers and market trends is used to improve their service quality by enhancing employees' knowledge about customers' preferences.
Other actors, such as experts from the tourism industry, also make use of the Web to promote their territory, and so to attract more visitors.\\
However, the gaps in existing Web technologies raise two main issues:\\
First, the tourism domain is characterized by a significant heterogeneity of information sources and by a high volume of on-line data.
Data related to tourism is produced by different experts (travel agents, tourist offices, etc.) and by visitors, constituting semantically heterogeneous data that is often incomplete and inconsistent.
This data can be composed of information related to tourism objects (hotels, concerts, restaurants, etc.), temporal information or opinions.
Different taxonomies and catalogues already exist, designed and used internally by tourism actors to help them manage heterogeneous tourism data.
Efforts are now being made to define standards to facilitate inter- and intra-tourism data exchange.\\
Second, the information contained in Web pages is originally designed to be human-readable, and so most of the information currently available on the Web is kept in large collections of textual documents.
As the Web grows in size and complexity, there is an increasing need for automating some of the time-consuming tasks such as information searching, extraction and interpretation.
Semantic Web technologies are an answer to this issue, enabling novel and sophisticated question answering systems. In this way, ontologies provide a formal framework to organize data and to browse, search or access this information \cite{Bruijn2008}.
\subsection{Tourinflux}
This work takes place within the Tourinflux\footnote{http://tourinflux.univ-lr.fr/} project, which aims at providing the tourism industry with a set of tools (1) allowing them to handle both their internal data and the information available on the Web, and (2) allowing them to improve the display of information available about their territory on the Web.
This should allow them to improve their decision process and, subsequently, their effectiveness.\\
This paper presents some preliminary results of the Tourinflux project.
It describes the development of an ontology-based model for the tourism domain.
It primarily presents some aspects of knowledge management:
Section 2 reviews the related ontologies developed in previous projects.
In particular, we describe how an ontology can be used to improve the quality of users' Web searches and data publication.
Section 2 also provides a description of current tourism standards and a brief background on a particular ontology: the Schema.org\footnote{http://schema.org/} model.
In an early stage of our project, a partial ontology for tourism called TIFSem (Semantic TourInFrance) was created, using Prot\'{e}g\'{e}\footnote{http://protege.stanford.edu/} and the Web Ontology Language\footnote{http://www.w3.org/2001/sw/wiki/OWL} (OWL).
A partial view of the TIFSem ontology will be presented in Section 3 with some example queries.
Note that this work is in progress; we are still gathering new concepts for its taxonomy and new axioms, in order to validate them with domain experts.
Finally, conclusions and outlook are provided in Section 4.
\section{State of art}
\subsection{Tourism ontologies}
In the domain of tourism, several researches have focused on the design of ontologies.
Several available tourism ontologies show the current status of the efforts:\\
The OTA (OpenTravelAlliance) specification defines XML Schema documents corresponding to events and activities in various travel sectors \cite{OTA2000}.\\
The Harmonise ontology is specialised to address interoperability problems focusing on data exchange \cite{Dell2002}.
Harmonise is based on mapping different tourism ontologies by using a mediating ontology.
This central Harmonise ontology is represented in RDF and focuses mainly on tourism events and accommodation types.\\
The Hi-Touch ontology models tourism destinations and their associated documentations \cite{Legrand2004}.
The ontology classifies tourist objects, which are linked together in a network by semantic relationships. The top-level classes of the Hi-Touch ontology are documents (any kind of documentation about a
tourism product), objects (the tourism objects themselves) and publication (a document created from the results of a query).\\
The QALL-ME ontology provides a model to describe tourism destinations, sites, events and transportation \cite{Ou2008} and aims at establishing a shared infrastructure for multilingual and multimodal question answering.
The Tourpedia catalogue describes places (points of interest, restaurants and accommodations) related to different locations (Amsterdam, London, Paris, Rome, etc.) \cite{Cresci2014}.\\
These models focus on different areas of the tourism domain, but no single ontology matches all the needs of the different tourism-related applications.
We therefore propose in Section 3 an ontology to globally describe tourism information mixing heterogeneous content.
The concepts included in the defined ontology will allow us to annotate information sources on tourism.
The enriched information can then be used: (1) on the user side, to match tailored package holidays to client preferences, and (2) on the tourism experts' side, to analyze and better manage on-line data about their territory.
\subsection{Tourism standards}
The interoperability of Tourism Information Systems (TIS) is a major challenge for the development of the tourism domain.
Several national, European and international institutional initiatives have proposed different standards to meet the specific needs of tourism professionals.
In France, professionals have adopted the TourInFrance standard (TIF).
Since its creation in 1999 by the tourism ministry, TourInFrance has been adopted by more than 3000 tourist offices in France, by Departmental Tourist Committees (DTC) and by different tour operators, to facilitate data exchange between these actors.
In 2004, the TourInFrance Technical Group (TIFTG) approved a new version of the standard, TIF V3.
In this version, the standard evolved towards XML technologies to facilitate publishing information on the Web and exchanging information between systems, and it is accompanied by several thesauri.
The last version of TIF can be downloaded from the following Web site: http://tourinfrance.free.fr/.
The standard has not evolved since 2005.
As a result, tourism professionals have adapted the standard to their own needs and proposed their own evolutions in an unorganized way, producing TIS that are not really interoperable among themselves and that cannot be directly shared using international standards.
Accordingly, the exploitation of tourism information is trapped in its own territory, and so it is impossible to aggregate this information.\\
To illustrate this evolution of TIF, we show in the following example how the standard was transformed.
Figures 1 and 2 show different TIF syntaxes for describing an address; the data in Figure 1 respects the TIF V3 standard, while the data in Figure 2 is structured under a derived version of TIF (different tag syntax, new tags added, etc.).\\
To overcome these limitations, we want to make the TIF standard evolve so as to share the knowledge it represents and to ensure data interoperability, by applying the concept of ontology to represent the standard terminology \cite{Bittner2005}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.35]{1.PNG}
\caption{\scriptsize{Data respecting the standard TIF V3}}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.38]{2.PNG}
\caption{\scriptsize{Data structured under a derived version of TIF}}
\end{figure}
\subsection{Semantic Web standards}
The TIF standard is unable to easily share data and interoperate with global Web standards.
In recent years, an initiative led by Bing, Google and Yahoo has tended to become a de-facto norm for easily sharing semantic content.
Launched in 2011, Schema.org aims to create and support a common set of schemas for structured data markup on Web pages.
The purpose of Schema.org is to provide a collection of schemas (HTML tags) that webmasters can use to mark up HTML pages in ways recognized by major search providers, and that can also be used for structured data interoperability (RDFa, JSON-LD, etc.).
When these tags are used in a website, search engines can better understand the intention behind the resources (text, images, videos) of that website, which in turn yields better search results when a user is looking for a specific resource through a search engine \cite{Toma2014}.
Schema.org is a very large vocabulary counting hundreds of terms from multiple domains (the full specification of Schema.org is available at https://schema.org/docs/full.html).\\
In our approach, we want to publish enriched semantic tourist data that can be easily indexed by search engines.
One solution would be to directly use the Schema.org ontology, which provides good indexing by search engines and is easy to implement.
The disadvantage of this solution is that we lose the precision of the tourist data in the TIF model; in addition, directly using Schema.org can cause economic problems because a large number of tourist offices already use TIF or derived versions.
A second solution would be to build an ontology by matching TIF terms with Schema.org terms using OWL relations, and to work with the Schema.org community to extend the schema, either formally by adding new terms or informally by defining how Schema.org can be combined with some additional vocabulary terms.
We can thus produce more accurate information than with the Schema.org terms alone; in addition, with a TIF/Schema.org correspondence, producing an interoperable model from XML TIF sources becomes possible.\\
This method is more complex to implement because it requires connecting each TIF term with a Schema.org term.
The difficulty is not the transformation of XML TIF into OWL, but finding correspondences between the terms of TIF and those of Schema.org, and identifying what is lacking in each model.
We suggest considering the second solution, which presents a better compromise.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.33]{8.PNG}
\caption{\scriptsize{Schema.org/TIF mapping}}
\end{figure}
We consider Schema.org as the extension core of the TIF standard to improve the on-line visibility of touristic service providers.
Schema.org includes many of the TIF terms.
Of course, not all of the Schema.org terms are relevant to the tourism domain.
The relevant terms are those that belong to the categories Hotels, Food Establishments, Events, Trips, Places of Interest and News.
These tourism terms are distributed in different trees of the schema.
For example:
\vspace{-2mm}
\begin{itemize}
\item Thing $\rightarrow$ Place $\rightarrow$ LocalBusiness $\rightarrow$ LodgingBusiness:
\begin{itemize}
\vspace{-2mm}
\item Hostel, Hotel, Motel, etc.
\vspace{-2mm}
\end{itemize}
\item Thing $\rightarrow$ Event:
\begin{itemize}
\vspace{-2mm}
\item MusicEvent, SocialEvent, SportsEvent, etc.
\vspace{-2mm}
\end{itemize}
\end{itemize}
Schema.org offers other interesting terms such as customer reviews and reservation:
\begin{itemize}
\vspace{-2mm}
\item Thing $\rightarrow$ Action $\rightarrow$ AssessAction $\rightarrow$ ReviewAction
\vspace{-2mm}
\item Thing $\rightarrow$ Intangible $\rightarrow$ Reservation:
\begin{itemize}
\vspace{-2mm}
\item EventReservation, FoodEstablishmentReservation, LodgingReservation, etc.
\vspace{-2mm}
\end{itemize}
\end{itemize}
In addition, several Linked Data initiatives have enabled the development of mappings between Schema.org and essential semantic resources such as DBPedia, Dublin Core, FOAF, GoodRelations, SIOC and WordNet (full mappings of Web data vocabularies to Schema.org terms are available at http://schema.rdfs.org/mappings.html).
We are interested in establishing the mappings at two levels.
The first is a mapping between concepts which have the same or an approximate name, and between concepts which have the same role.
Then, taking into account the mapped concepts (equivalentClass, subClassOf), we need to match each property of a Schema.org term with a property of a TIF concept (equivalentProperty, subPropertyOf).
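As an illustration of such a two-level mapping, the sketch below declares class- and property-level correspondences with the rdflib Python library; the TIF namespace URI and the chosen term names are assumptions made for the example.
\begin{verbatim}
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDFS

TIF = Namespace("http://flux.univ-lr.fr/tif#")   # assumed namespace
SCHEMA = Namespace("http://schema.org/")

g = Graph()
g.bind("tif", TIF)
g.bind("schema", SCHEMA)

# Concept-level mapping (equivalentClass / subClassOf).
g.add((TIF.Geolocation, OWL.equivalentClass, SCHEMA.Place))
g.add((TIF.ReservationModes, RDFS.subClassOf, SCHEMA.Reservation))

# Property-level mapping (equivalentProperty / subPropertyOf).
g.add((TIF.postalCode, OWL.equivalentProperty, SCHEMA.postalCode))

print(g.serialize(format="turtle"))
\end{verbatim}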
\section{Proposed approach}
Tourist information can be used in different areas and different contexts.
It is important that this information be modular, so that the elements describing the same entity can be used separately; hence the notion of modular information represented by the IO.
An \textit{InformationObject} (IO) is a modular entity that can be used and re-used to support tourism activities.
An IO can be a hotel, a restaurant, an event, etc., or any other resource that can be attached to a tourism activity.
The idea of the IO is to create an entity that is:
\begin{itemize}
\vspace{-2mm}
\item interoperable: can plug-and-play with any TIS,
\vspace{-2mm}
\item reusable: can be used or adapted for different use cases,
\vspace{-5mm}
\item accessible: can be stored in a way that allows for easy searchability,
\vspace{-2mm}
\item manageable: can be tracked and updated over time.
\end{itemize}
\renewcommand{\arraystretch}{1.3}
\begin{table*}
\centering
\caption{Core concepts of the TIFSem ontology}
\begin{tabular}{|l|p{12.4cm}|}
\hline
\textbf{CORE CONCEPTS} & \textbf{Description}\\
\hline
DUBLIN CORE & The Dublin Core is a set of vocabulary terms that can be used to describe Web resources (video, images, Web pages, etc.).\\
\hline
UPDATE & Information about the updates of a resource, and information used to represent aspects of the evolution of the resource.\\
\hline
MULTIMEDIA & Information about multimedia materials associated to the described resource.\\
\hline
CONTACTS & Information about ways to get in contact with a resource, or with the person responsible for this resource.\\
\hline
LEGAL INFORMATION & Information to formally identify an entity and its activities.\\
\hline
CLASSIFICATIONS & Information used to qualify the resource described and identify the quality of services associated.\\
\hline
RELATED SERVICES & Information to reference other resources.\\
\hline
GEOLOCATIONS & Information to locate and describe the environment a resource.\\
\hline
PERIODS & Information about an opening period, a closing period, a reservation period, etc.\\
\hline
CUSTOMERS & Customer information.\\
\hline
LANGUAGES & Languages spoken in the described resource.\\
\hline
RESERVATION MODES & Information about reservations for the described resource, namely whom customers should contact and whether a reservation is required.\\
\hline
PRICES & Services' price of the resource described, as well as the different means of payment.\\
\hline
CAPACITY & The capacity of the resource.\\
\hline
OFFERS SERVICES & Information about the services available within the described resource or nearby.\\
\hline
ADDITIONAL DESCRIPTION & Additional information about the described resource.\\
\hline
ITINERARIES & Activities associated to the described resource such as hiking.\\
\hline
SCHEDULES & Information about the availability status of services, over defined periods.\\
\hline
\end{tabular}
\end{table*}
In TIF, a typical IO is composed of 18 parts called granules (or semantic units). These granules form the main core of the TIFSem ontology.
We then introduce the TIFSem ontology for the tourism domain, which contains concepts and relations for tourist resources. It consists of 19 main concepts of tourist information, described in Table 1.
The main concepts of the ontology have sub-concepts pertinent to the local area, and all sub-concepts are linked to them by \textit{is\_a} relationships.
Those sub-concepts have properties, restrictions and semantic relations with other concepts.
The division of these main concepts into sub-concepts was also guided by different data sources.
Indeed, in order to elaborate the TIFSem ontology, we consulted different kinds of sources to enable the understanding and collection of concepts related to the specialized domain of tourism, and of the corresponding vocabulary.
Sources from the Departmental Tourism Committee of Charente-Maritime\footnote{http://www.charente-maritime.org/} (CDT17) and the Departmental Tourism Committee of Aube\footnote{http://www.aube-champagne.com/} (CDT10) were consulted.
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.45]{5.PNG}
\caption{\scriptsize{Development steps of the TIFSem ontology}}
\end{figure*}
After collecting the concepts and corresponding lexical items from the sources, we started structuring the first version of the ontology, selecting classes and subclasses (step (1), Figure 4). The ontology is currently available at this link: http://flux.univ-lr.fr/public/DataTourism.owl.
The mapping with Schema.org (step (2), Figure 4) complements the ontology to make it more complete, up to date and coherent.
Table 2 shows the mappings between the main TIFSem concepts and those of Schema.org.
\begin{table}[!h]
\centering
\caption{TIFSem main concepts/Schema.org}
\begin{tabular}{|p{3.7cm}|p{4.3cm}|}
\hline
\textbf{CONCEPT} & \textbf{Schema.org}\\
\hline
\textit{MULTIMEDIA} & schema.org:MediaObject\\
\hline
\textit{CLASSIFICATIONS} & schema.org:Rating\\
\hline
\textit{CONTACTS} & schema.org:ContactPoint\\
\hline
\textit{LEGAL INFORMATION} & schema.org:Organization\\
\hline
\textit{LANGUAGES} & schema.org:Language\\
\hline
\textit{GEOLOCATION} & schema.org:Place\\
\hline
\textit{RESERVATION MODES} & schema.org:Reservation, schema.org:LodgingReservation\\
\hline
\textit{PRICES} & schema.org:Offer, schema.org:PriceSpecification \\
\hline
\end{tabular}
\end{table}
Figures 5 and 6 show a simplified composition overview of the two granules \textit{Geolocation} and \textit{ReservationModes}, with the Schema.org mapping.
\begin{figure*}
\centering
\includegraphics[scale=0.38]{3.PNG}
\caption{\scriptsize{Simplified composition overview of the granule \textit{Geolocation} with Schema.org mapping}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.38]{4.PNG}
\caption{\scriptsize{Simplified composition overview of the granule \textit{ReservationModes} with Schema.org mapping}}
\end{figure*}
The meaning of concepts is mainly expressed through definitions in natural language.
We therefore used the OWL language, which offers rich expressive power, also to enrich Schema.org, which is specified in RDF.
The use of an upper ontology, especially when there is extensive use of multiple inheritance, has the advantage of allowing manual navigation of the ontology and therefore simplifies the retrieval of concepts that can serve as a description.
The last step in the development process of TIFSem is the integration into the ontology of instances coming from CDT17 and CDT10 (steps (3) and (4), Figure 4).\\
To illustrate our approach, we consider a first example that ranks different accommodation offers based on their proximity to restaurants, bars and events in La Rochelle.
A couple that wants to spend a week on vacation may choose between hotels not far from downtown.
The couple likes dining out, going to bars and enjoying events.
Therefore, an additional characteristic that quantifies the suitability of each hotel for those who like to go out would be of great help.
However, such qualitative information is rarely available on on-line tourism Web sites.
One possibility would be to compute such semantic annotations based on the geolocation of the hotels and of the objects related to restaurants, bars and events.
Figure 7 shows a simplified SPARQL query reflecting the above example.
A distance function filters out any potential result whose corresponding return value is not less than a constant threshold.\\
As a second example, the tourist office of La Rochelle wants to gain better visibility of its territory by extracting information about rural events occurring in it.
It needs to know the events' audience and information about the public attending these events.
Figure 8 shows a simplified SPARQL query reflecting this second example.
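In the same spirit as the query of Figure 7, the sketch below runs a simplified SPARQL query over the ontology with the rdflib Python library; the property names and the fixed distance threshold are illustrative assumptions rather than the exact TIFSem terms.
\begin{verbatim}
from rdflib import Graph

g = Graph()
g.parse("DataTourism.owl", format="xml")  # TIFSem ontology and instances

query = """
PREFIX tif: <http://flux.univ-lr.fr/tif#>
SELECT ?hotel ?restaurant
WHERE {
  ?hotel a tif:Hotel ;
         tif:latitude ?hlat ; tif:longitude ?hlon .
  ?restaurant a tif:Restaurant ;
         tif:latitude ?rlat ; tif:longitude ?rlon .
  # keep only restaurants close to the hotel
  FILTER (ABS(?hlat - ?rlat) < 0.01 && ABS(?hlon - ?rlon) < 0.01)
}
"""
for hotel, restaurant in g.query(query):
    print(hotel, restaurant)
\end{verbatim}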
\begin{figure}[!h]
\centering
\includegraphics[scale=0.58]{6.PNG}
\caption{\scriptsize{Simplified SPARQL query, example 1}}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.58]{7.PNG}
\caption{\scriptsize{Simplified SPARQL query, example 2}}
\end{figure}
\section{Conclusions}
In this paper, we presented some early-stage work of the Tourinflux project on building a tourism domain ontology.
Semantic technologies provide methods and concepts facilitating the effective integration of tourism information originating from various sources, representing basic notions and conceptual relations of the tourism domain.
The objective is to improve the display of search results, making it easier for tourism industry actors to find data in the right Web pages.
In particular, having annotations on websites that can be understood by search engines boosts on-line visibility and increases the chances that Web sites appear in search engine results for a relevant query.
As part of our current and future work, we are developing a Schema.org mapping to facilitate Web searching of tourism data.
We are also in the process of extending the TIFSem ontology by collecting content about more touristic service providers.
We also plan to expand the TIFSem model with other types of data (opinions, temporal data), to provide semantic and contextual answers to queries.
\bibliographystyle{abbrv}
\section{Introduction}
The development of mathematical models of sports faces many obstacles. Assessing the potential impact of unobservable variables and establishing the right relations among the observable ones are the main sources of difficulty for this task. Even so, the study of team sports data has become increasingly popular in recent years. Several models have been proposed for the estimation of the parameters (characteristics) that may lead to successful results for a team, ranging from machine learning methods to predict outcomes (Strumbelj and Vacar, 2012, Assif and McHale, 2016, Baboota and Kaur, 2019), fuzzy set representations (Hassanniakalager et al. 2020) and statistical models (Dyte and Clark 2000, Goddard 2005, Boshnackov, 2017) to Bayesian models (Baio and Blangiardo 2010, Constantinou et al., 2012, Wetzels et al., 2016, Santos-Fern\'andez and Mergensen, 2019).\\
One of the main issues in the study of sports is to disentangle the relative relevance of the possible determinants of outcomes. While {\em ability} and {\em luck} constitute, at least for both the press and the fan base, the main explanatory factors of the degree of success in competitions, the motivation of players is usually invoked only to explain epic outcomes or catastrophic failures. One possible reason for the neglect of motivation is that, unlike ability and luck, it is hard to assess. Nevertheless, it is known that interest in a sport induces a higher level of effort in the play (Yukhymenko-Lescroart, 2021). Accordingly, in this paper we define a particular notion of {\em effort} in Rugby games as a proxy for motivation and develop a Bayesian model of the final scores of teams in a tournament. These outcomes will be explained by several variables, among which we distinguish the ability of the teams and the effort exerted by them. We also include as explanatory variables other possible sources of psychological stimuli, as to capture a pure motivation to win, separated from those other factors.\\
Some advantages of using Bayesian techniques to model sports are that beliefs or expert information can be incorporated as priors to obtain posterior distributions of the parameters of interest, that these posteriors can easily be updated when new data becomes available, and that small datasets can be handled more effectively. In our case, we propose a Bayesian hierarchical model to explain score differences in a rugby match, i.e., the difference between the home team's points and the away team's points. The main parameters of the model are the ability of the teams, the effort exerted by them and the advantage (or disadvantage) of home teams.\\
There are many papers that use Bayesian methods to model the score of a rugby game. Stefani (2009) finds that past performance is a better predictor of the score difference than of the score total, and suggests that teams should focus strategy on score differences (to win or draw) rather than on score totals. Pledger and Morton (2011) use Bayesian methods to model the 2004 Super Rugby competition and explore how home advantage impacts the outcomes. Finally, Fry et al. (2020) propose a Variance Gamma model where analytical results are obtained for match outcomes, total scores and the awarding of bonus points. The main difference between these works and ours is that their primary goal is to predict outcomes, while ours is to explain them.\\
The \textit{ability} of a team can be conceived as its ``raw material''. The skills of its players, the expertise of its coaches and its human resources in general (medical staff, managers, etc.) constitute the team's basic assets. Their value can vary during a season due to injuries, temporary loss of skilled players called up to play for the national team, players leaving the team, etc. In this model we assume that the capabilities of teams do not change much from one season to the next. Accordingly, the ability of a team at the start of the season is assumed to be at a bounded distance from its performance in the previous season.\\
\textit{Luck} in games and sports has been widely studied, from philosophical perspectives (Simon, 2007, Morris, 2015) to statistical ones (Denrell and Liu, 2012, Pluchino et al., 2018). Mauboussin (2012) characterizes games that are high in luck as those that are highly unpredictable, in which great advantages cannot be achieved through repetition, and in which the `reversion to the mean' effect in performance is strong. Elias et al. (2012) and Gilbert and Wells (2019) define several types of luck, which we introduce later. In this model, following the latter authors, we consider that luck arises when the unexplained part of the outcome differs markedly from the mean value in the distribution of noises; that is, when a series of unobserved variables has a large impact on the outcome. \\
\textit{Effort}, in turn, can be conceived as the cost of performing at the same level over time and staying steadily engaged in a given task (Herlambang et al., 2021). Different measures of effort can be defined. In the context of decision making, effort can be the total number of elementary information processing operations involved (Kushniruk, 1995) or the use of cognitive resources required to complete a task (Russo and Dosher, 1983, Johnson and Payne, 1985). Another measure of effort (or the lack of it) can be defined in terms of the extent of {\em anchoring} in self-reported rating scales, that is, the tendency to select categories in close proximity to the rating category used for the immediately preceding item (Lyu and Bolt, 2021).\\
In order to define a measure of the effort exerted by a rugby team, we follow the lead of Lenten and Winchester (2015), Butler et al. (2020) and Fioravanti et al. (2021). These works analyze the effort exerted by rugby teams under the idiosyncratic incentives induced in this game. Besides gaining points for winning or drawing a game, teams may earn ``bonus'' points depending on the number of tries they score in a game. Accordingly, any appropriate effort measure should also be defined taking into account the number of tries. In our model the effort is measured as the ratio between the number of tries scored and the sum of tries and scoring kick attempts\footnote{By scoring kicks we refer to conversions, penalty kicks and drop kicks; by kick attempt we mean every occasion on which the team attempted a scoring kick, no matter the outcome.}. \\
Several studies have detected the relevance of \textit{home advantage}, i.e. the benefit the home team has over the away team. Schwartz and Barsky (1977) suggested that crowds exert an invigorating motivational influence, encouraging the home side to perform well. Still, a full explanation of this phenomenon requires taking into account the familiarity of the home team with the field, the travel fatigue of the away team, and the social pressure exerted by the local fans on the referees, among other factors. Many other researchers have investigated this advantage from different points of view, such as the physiological (Neave and Wolfson 2003), the psychological (Agnew and Carron 1994, Legaz-Arrese et al. 2013) and the economic one (Carmichael and Thomas 2005, Boudreaux et al. 2017, Ponzo and Scoppa 2018), even exploring the possibility that referees may be favorably biased towards home teams (Downward and Jones 2007, Page and Page 2010). Home advantage in Rugby Union and Rugby League has been studied and confirmed by Kerr and van Schaik (1995), Jones et al. (2007), Page and Page (2010), García et al. (2013) and Fioravanti et al. (2021). In our model, home advantage is explored depending on whether fans are allowed to attend the game\footnote{This tournament was played during the Covid-19 pandemic, during which several games were played without attendance.} and on what day it is played. An extra parameter intends to capture the influence of factors other than public attendance inducing home advantage.\\
The plan of the paper is as follows. In Section 2 we present the data of the English Premiership rugby championship, played in 2020/2021. Section 3 presents a Bayesian hierarchical model of the variables that explain the difference of scores in that championship. Section 4 presents a descriptive statistical analysis of the matches of the Premiership in the light of the variables defined in the Bayesian model. Section 5 presents the results of estimating our model with the data of this Rugby Union competition. Section 6 considers the outliers found in the previous section, treating them as the result of luck in those games. We assess the aspects that justify considering them as instances of luck. Finally, Section 7 discusses the results obtained.\\
\section{Data}
The Premiership Rugby championship is the top English professional Rugby Union competition. The 2020/2021 edition was played by 12 teams. The league season comprises 22 rounds of matches, with each club playing each other home and away. The top 4 teams qualify for the playoffs. Four points are awarded to the winning team, two to each team in case of a draw, and zero points to the losing team. However, a bonus point is given to the losing team if the score difference is less than eight points, and teams also receive a bonus point if they score four or more tries. During this season, if a game was canceled due to Covid-19, two points were awarded to the team responsible for the cancelation and four to the other team, while the match result was deemed to be $0-0$. The 2020/2021 season was won by the Harlequins, who claimed their second title after ending the league stage in fourth position.\\
The score, number of tries, converted tries, converted penalties, attempted penalties, converted drops, attempted drops and attendance at each of the 122 games of the 2020/21 Premiership season have been taken from the corresponding Wikipedia entry\footnote{Ten games were canceled because some players tested positive for COVID-19. We do not consider the playoff games, as we assume that the incentives at the elimination stage are not the same as in the classification games.}. We generate the priors of our Bayesian model based on the final ranking of the 2019/20 season as follows. An attack ranking and a defense ranking are built using the number of tries scored and received by each team: the team that scored the most tries and received the fewest is ranked first in the respective rankings. These rankings are then normalized, and their corresponding means are computed. \\
\section{Model}
We base our model on previous work by Kharratzadeh (2017), which models the difference in scores in the English Premier League (soccer). The score difference in game $g$ is denoted by $y_g$, and is assumed to follow a Student's $t$ distribution,
$$y_g\sim t_\nu (a_{diff}(g)+eff_{diff}(g)+ha(g),\sigma_y),$$
where $a_{diff}(g)$ is the difference in the ability of the teams, $eff_{diff}(g)$ is the difference in the effort exerted and $ha(g)$ is the home advantage in game $g$. The scale of the distribution of score differences is $\sigma_{y}$, to which we give a $N(0.5,1)$ prior. In turn, we assign a $Gamma(9,0.5)$ prior to the degrees of freedom $\nu$.\footnote{Kharratzadeh (2017) uses a $Gamma(2,0.5)$ prior based on the work of Ju\'arez and Steel (2010). We use a $Gamma$ with a higher mean than in that analysis due to the large differences in scoring between the Premier League in soccer and the Premiership Rugby championship.}\\
We model the difference in abilities as follows:
$$a_{diff}(g)=a_{hw(g),ht(g)}-a_{aw(g),at(g)}$$
where $a_{hw(g),ht(g)}$ is the ability of the home team in the week where the game $g$ is played (analogously for the away team). We assume that the ability may vary during the season,
$$a_{hw(g),ht(g)}=a_{hw(g)-1,ht(g)}+\sigma \cdot \eta_{hw(g),ht(g)}, \quad \mbox{for } hw(g)\geq 2,$$
where $\sigma$ and $\eta$ have weak informative priors $N(0,0.1)$ and $N(0,0.5)$, respectively. The model is analogous for the away team. The abilities for the first week depend on the previous performance of the teams,
$$a_{1,ht(g)}=\beta_{prev}\cdot prevperf (ht(g))+\eta_{1,ht(g)} $$
where $\beta_{prev}$ is given the weakly informative prior $N(0.5,1)$ and $prevperf(j)$ is the previous performance of team $j$. This value is obtained in the following way: first, two rankings of tries scored and received during the last season are built, where being at the top of both rankings means that the team scored the most and received the fewest tries in the last season. Then, the two rankings are normalized and averaged.\\
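To make this construction concrete, the following sketch (in Python, with hypothetical figures rather than data from the tournament) reproduces the computation of $prevperf$; the linear mapping of ranks to $[0,1]$ is our assumption of what ``normalized'' means here.
\begin{lstlisting}
# Illustrative sketch: previous-performance score from last season's tries.
tries_scored   = {"Exeter": 90, "Wasps": 75, "Sale": 70}   # hypothetical data
tries_conceded = {"Exeter": 40, "Wasps": 55, "Sale": 50}   # hypothetical data

def normalized_rank(values, best_is_high):
    # Rank the teams and rescale so that the best team gets 1 and the worst 0.
    order = sorted(values, key=values.get, reverse=best_is_high)
    n = len(order)
    return {team: 1 - i / (n - 1) for i, team in enumerate(order)}

attack  = normalized_rank(tries_scored,   best_is_high=True)   # most tries scored -> 1
defense = normalized_rank(tries_conceded, best_is_high=False)  # fewest conceded   -> 1
prev_perf = {t: (attack[t] + defense[t]) / 2 for t in tries_scored}
print(prev_perf)   # {'Exeter': 1.0, 'Wasps': 0.25, 'Sale': 0.25}
\end{lstlisting}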
The variable that captures the relative {\em motivations} of the teams in the game is the difference in efforts:
$$ eff_{diff}(g)=\beta_{effort}\cdot(effH(g)-effA(g))$$
where $\beta_{effort}$ has a $N(0.5,1)$ prior and the effort of the home team is defined as (analogously for the away team):
$$effH(g)=\dfrac{\mbox{number of home tries in game } g}{\mbox{number of home tries} + \mbox{attempted scoring kicks in game } g}.$$
Our intention with this is to capture the idea that scoring tries demands more effort than other means of scoring points, and motivated teams try to maximize this value.\\
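As a minimal illustration of this measure (a sketch with made-up match figures, not data from the championship), a team that scores 3 tries and attempts 3 conversions and 2 penalty kicks has an effort of $3/(3+5)=0.375$:
\begin{lstlisting}
# Illustrative sketch of the effort ratio with hypothetical match figures.
def effort(tries, att_conversions, att_penalties, att_drops=0):
    # tries / (tries + attempted scoring kicks), regardless of kick outcomes
    attempted_kicks = att_conversions + att_penalties + att_drops
    return tries / (tries + attempted_kicks)

print(effort(tries=3, att_conversions=3, att_penalties=2))  # 0.375
\end{lstlisting}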
Finally, to capture home advantage, we consider both attendance effects and non-attendance effects (such as the weather, long trips to play the game, etc.).
$$ha(g)=\beta_{home}+\beta_{atten}\cdot atten(g),$$
where $\beta_{home}$ and $\beta_{atten}$ have $N(0.5,1)$ priors and $atten(g)$ is $0$ if no fans were allowed and $1$ otherwise. A graphical representation of the model is depicted in Figure \ref{fig:graf}.\\
To ensure the robustness of our results we work with four different models. Model I does not include attendance as a variable of home advantage. Model II includes the attendance variable, while Model III incorporates a day variable $day(g)$, with a $N(0.5,1)$ prior, which has value $1$ if the game was played on Saturday or Sunday and $0$ otherwise. This day variable allows us to find out whether playing on a day on which almost all the fans can attend the game benefits either the home or the away team. Finally, Model IV includes a variation of $prevperf(j)$, where instead of tries we use the total points scored and received by each team.
\begin{figure}[h!]
\begin{tikzpicture}[node distance={25mm},main/.style = {draw, circle,minimum size=12mm}]
\node[main] (1) {$\beta_{prev}$};
\node[main](2) [right of =1]{$\eta$};
\node[main](3) [right of =2]{$\sigma$};
\node[main] (4) [right of =3]{$\beta_{effort}$};
\node[main](5) [right of =4]{$\beta_{home}$};
\node[main](6) [right of =5]{$\beta_{atten}$};
\node[main] (7) [below of =2]{$a_{diff}$};
\node[main](8) [below of =4]{$eff_{diff}$};
\node[main](9) [right of =8]{$ha$};
\node[main](10)[below of =8]{$y_g$};
\draw[->] (1) -- (7);
\draw[->] (2) -- (7);
\draw[->] (3) -- (7);
\draw[->] (4) -- (8);
\draw[->] (5) -- (9);
\draw[->] (6) -- (9);
\draw[->] (7) -- (10);
\draw[->] (8) -- (10);
\draw[->] (9) -- (10);
\end{tikzpicture}
\caption{Graph corresponding to Model II}
\label{fig:graf}
\end{figure}
\section{Descriptive Statistics}
In Table \ref{tab:home} and Table \ref{tab:away} we show the descriptive statistics for the home and away teams, respectively, where \textit{Score, T, C, P, D, AC, AP, AD} and \textit{eff} indicate, respectively, total points, tries scored, conversions scored, penalty kicks scored, drop kicks scored, attempted conversions, attempted penalty kicks, attempted drop kicks and effort exerted.
\begin{table}[h!]
\centering
\caption{Home team}
\label{tab:home}
\begin{tabular}{rlllllllll}
\hline
& ScoreH & TH & CH & PH & DH & ACH & APH & ADH &effH\\
\hline
Min. & 3.00 & 0.00 & 0.00 & 0.00 & 0 & 0.00 & 0.00 & 0 & 0.00\\
1st Qu & 17.00 & 2.00 & 1.00 &1.00 & 0 & 2.00 & 1.00 & 0 &0.28 \\
Median & 23.00 & 3.00 & 2.00 & 2.00 & 0 & 3.00 & 2.00 & 0 &0.40 \\
Mean & 26.22 & 3.248 & 2.407 & 1.708 & 0 & 3.186 & 2.027 & 0 &0.37 \\
3rd Qu & 34.00 & 4.00 & 3.00 & 3.00 & 0 & 4.00 & 3.00 & 0 &0.46 \\
Max. & 74.00 & 12.00 & 7.00 & 6.00 & 0 & 12.00 & 7.00 &0 & 0.55 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{Away team}
\label{tab:away}
\begin{tabular}{rlllllllll}
\hline
& ScoreA & TA & CA & PA & DA& ACA & APA & ADA& effA\\
\hline
Min & 3.00 & 0.00 & 0.00 & 0.00 & 0.00 &0.00 & 0.00 & 0.00&0.00 \\
1st Qu. & 15.00 & 2.00 & 1.00 & 0.00 & 0.00 &0.00 & 0.00 & 0.00 &0.28 \\
Median & 22.00 & 3.00 & 2.00 & 1.00 & 0.00&3.00 & 1.00 & 0.00&0.37 \\
Mean & 22.35 & 2.796 & 1.929 & 1.434 & 0.017 &2.717 & 1.655 & 0.026&0.36 \\
3rd Qu. & 28.00 & 4.00 & 3.00 & 2.00 & 0.00 &4.00 & 3.00 & 0.00&0.5 \\
Max. & 62.00 & 9.00 & 6.00 & 5.00 & 1.00 &8.00 & 5.00 & 2.00 &0.66\\
\hline
\end{tabular}
\end{table}
Figure \ref{fig:score_diff} shows that most of the score differences are around zero (even though very few draws occurred in the championship). This indicates that games usually end with small score differences. Figures \ref{fig:ratioH} and \ref{fig:ratioA} depict the effort histograms. We can see a great number of cases with $effort=\frac{1}{2}$. This is because teams almost always have the chance to go for a conversion after scoring a try.
\begin{figure}[hbt!]
\centering
\caption{Score Difference Histogram}
\includegraphics[width=0.75\textwidth]{score_diff}
\label{fig:score_diff}
\end{figure}
\begin{figure}[hbt!]
\centering
\caption{Effort Histograms}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{ratioH}
\caption{Effort Home Team. \label{fig:ratioH}}
\end{subfigure}
~
\begin{subfigure}[t]{0.47\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{ratioA}
\caption{Effort Away Team. \label{fig:ratioA}}
\end{subfigure}
\end{figure}
\section{Results}
We estimate the model with the R package \textit{rstan} (Stan Development Team (2020)). We use 4 cores, each running one chain of 2500 iterations, of which 1500 are warm-up. The Stan code of the estimated model can be found in the Appendix.\\
The results obtained are sound:
\begin{itemize}
\item The $Rhat$ (or Gelman-Rubin) statistic measures the discrepancy between the chains generated in simulations of Bayesian models; the further its value is from $1$, the worse the convergence. We can see that in all our results $Rhat$ is very close to $1$.
\item $n_{eff}$ is an estimate of the effective sample size of the posterior draws for each parameter. A large value indicates a low degree of error in the estimate of the expected value of the parameter. We can see that, indeed, this is the case for all the parameters of interest.
\end{itemize}
In Tables 3 and 4, we can see the results of Models I and II, respectively. The difference between them is the presence of $\beta_{atten}$ in the latter. The credible intervals of Model II, and the corresponding histograms shown in Figures 4 and 5, indicate that $\beta_{prev}$ and $\beta_{effort}$ are the parameters whose distributions lie above zero. On the other hand, $\beta_{home}$ has a wider interval that includes zero when $\beta_{atten}$ is included, as can be seen by comparing Model I and Model II.
\begin{table}[ht]
\centering
\caption{Posterior Summary Statistics, Model I}
\begin{tabular}{lrrrrrrr}
\toprule
Parameter & Rhat & n\_eff & mean & sd & 2.5\% & 50\% & 97.5\% \\
\midrule
b\_home & 1.000 & 7792 & 0.366 & 0.168 & 0.036 & 0.362 & 0.704 \\
b\_prev & 1.001 & 3919 & 1.760 & 0.722 & 0.360 & 1.757 & 3.194 \\
b\_effort & 1.000 & 7228 & 3.147 & 0.767 & 1.641 & 3.139 & 4.686 \\
nu & 1.000 & 3048 & 12.375 & 5.036 & 5.057 & 11.566 & 24.360 \\
sigma\_y & 1.000 & 4263 & 1.664 & 0.159 & 1.359 & 1.661 & 1.985 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\centering
\caption{Posterior Summary Statistics, Model II}
\begin{tabular}{lrrrrrrr}
\toprule
Parameter & Rhat & n\_eff & mean & sd & 2.5\% & 50\% & 97.5\% \\
\midrule
b\_home & 1.000 & 7025 & 0.324 & 0.175 & -0.028 & 0.327 & 0.657 \\
b\_prev & 1.000 & 4516 & 1.758 & 0.729 & 0.297 & 1.768 & 3.185 \\
b\_atten & 1.000 & 8830 & 0.376 & 0.496 & -0.588 & 0.370 & 1.354 \\
b\_effort & 1.000 & 9014 & 3.114 & 0.799 & 1.574 & 3.118 & 4.639 \\
nu & 1.000 & 3788 & 13.011 & 5.206 & 5.445 & 12.153 & 25.335 \\
sigma\_y & 1.000 & 5291 & 1.683 & 0.158 & 1.380 & 1.680 & 2.012 \\
\bottomrule
\end{tabular}
\end{table}
Table 5 shows how the estimates change when adding the $\beta_{day}$ parameter. The results seem to be stable, in the sense that the estimated values of $\beta_{prev}$ and $\beta_{effort}$ remain in similar intervals. We also show the result of changing the mean of the prior of $\nu$ to $1$.\\
Finally, Table 6 presents the results of Model IV. The ranking score is here based on the points (not the tries) of the previous season. We find in this case that the $\beta_{effort}$ parameter is similar to that found in the other models, while $\beta_{prev}$ has a lower mean.\\
\begin{table}[ht]
\centering
\caption{Posterior Summary Statistics, Model III}
\begin{tabular}{lrrrrrrr}
\toprule
Parameter & Rhat & n\_eff & mean & sd & 2.5\% & 50\% & 97.5\% \\
\midrule
b\_home & 1.001 & 4146 & 0.322 & 0.320 & -0.302 & 0.321 & 0.944 \\
b\_prev & 1.000 & 3782 & 1.721 & 0.727 & 0.295 & 1.717 & 3.147 \\
b\_atten & 1.000 & 5355 & 0.279 & 0.480 & -0.670 & 0.276 & 1.249 \\
b\_effort & 1.000 & 6561 & 3.064 & 0.770 & 1.571 & 3.066 & 4.538 \\
b\_day & 1.001 & 3683 & -0.003 & 0.365 & -0.732 & -0.004 & 0.715 \\
nu & 1.001 & 3424 & 6.876 & 2.301 & 3.355 & 6.498 & 12.389 \\
sigma\_y & 1.000 & 3915 & 1.556 & 0.165 & 1.242 & 1.551 & 1.896 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\centering
\caption{Posterior Summary Statistics, Model IV}
\begin{tabular}{lrrrrrrr}
\toprule
Parameter & Rhat & n\_eff & mean & sd & 2.5\% & 50\% & 97.5\% \\
\midrule
b\_home & 1.0 & 3328 & 0.3 & 0.3 & -0.3 & 0.3 & 0.9 \\
b\_prev & 1.0 & 3071 & 1.1 & 0.7 & -0.3 & 1.1 & 2.5 \\
b\_effort & 1.0 & 5604 & 3.1 & 0.8 & 1.6 & 3.1 & 4.6 \\
b\_atten & 1.0 & 5391 & 0.2 & 0.5 & -0.7 & 0.2 & 1.2 \\
b\_day & 1.0 & 3382 & 0.0 & 0.4 & -0.7 & 0.0 & 0.7 \\
nu & 1.0 & 2681 & 5.2 & 1.4 & 2.9 & 5.1 & 8.4 \\
sigma\_y & 1.0 & 2996 & 1.5 & 0.2 & 1.2 & 1.5 & 1.8 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[hbt!]
\centering
\caption{Histograms}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{b_home}
\caption{Beta Home. \label{fig:b_home}}
\end{subfigure}
~
\begin{subfigure}[t]{0.47\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{b_prev}
\caption{Beta Previous. \label{fig:b_prev}}
\end{subfigure}
\end{figure}
\begin{figure}[hbt!]
\centering
\caption{Histograms (Cont.)}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{b_att}
\caption{Beta Attendance. \label{fig:b_att}}
\end{subfigure}
~
\begin{subfigure}[t]{0.47\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{b_effort}
\caption{Beta Effort. \label{fig:beta_effort}}
\end{subfigure}
\end{figure}
\begin{figure}[hbt!]
\centering
\caption{Bivariate relations between parameters}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{effvshome}
\caption{Beta Home versus Beta effort. \label{fig:effvshome}}
\end{subfigure}
~
\begin{subfigure}[t]{0.47\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{effvsprev}
\caption{Beta Previous versus Beta effort. \label{fig:effvprev}}
\end{subfigure}
\end{figure}
Besides the histograms corresponding to the values of the parameters, we can see further results of our model in Figures 6 to 9. Figure 6 indicates that there is no apparent relation between the values obtained for $\beta_{prev}$ and $\beta_{home}$ and those of $\beta_{effort}$. This strongly suggests that effort captures an effect that differs from both the ability of the team and the potential support (or antagonism) received in the field. This is what we should expect from a representation of the intrinsic motivation of a team.\\
Figure 7 shows one of the 10,000 simulated histograms of score differences. It can be compared to the original histogram corresponding to the actual championship. This is further detailed in Figure 8, where the histograms of the distributions of means and standard deviations obtained in the replications are depicted with gray bars, while the blue bars correspond to the values of the statistics computed from the observed data.\\
Finally, Figure 9 shows the relation between the observed data and the estimated data. The blue line indicates the fitted linear relation. \\
\begin{figure}[hbt!]
\centering
\caption{Simulated Score Difference Histogram}
\includegraphics[width=0.75\textwidth]{score_diff_sim}
\label{fig:score_diffsim}
\end{figure}
\begin{figure}[hbt!]
\centering
\caption{Histogram of replicated Score Differences}
\includegraphics[width=1\textwidth]{hist_yrep}
\label{fig:hist_yrep}
\end{figure}
\begin{figure}[hbt!]
\centering
\caption{Observed versus Predicted}
\includegraphics[width=1\textwidth]{ave_yrep}
\label{fig:ave_yrep}
\end{figure}
\section{Luck in games}
We assume that {\em luck} plays a significant role in games. According to the definition of Tango et al. (2007), luck in a game can be understood as the difference between the actual performance observed and the ability of a team. In our case, we could identify it with the difference between the performance and both the effort and ability of a team. \\
Formally, for Tango and coauthors, the variability of luck is $\frac{p (1-p)}{g} $, where $ p = 0.5 $ and $ g = 22 $ is the number of games. Then $Var(Luck) = 0.01136364 $. On the other hand, performance variability is the deviation in the number of games won by the teams in the league, yielding $ 0.03798835 $. Finally, effort variability can be captured by the variance of the effort ratio, $ 0.01563645 $. Then:
$$Var(Performance) = Var(Luck) + Var(Effort) + Var(Ability)$$
$$ 0.03798835 = 0.01136364 + 0.01563645 + Var(Ability)$$
\noindent then, we find that:
$$ Var(Ability) = 0.01098826 $$
We can thus see that the variabilities in effort, ability and luck have roughly the same weight in the composition of the variability of performance.\\
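The decomposition above can be reproduced with a few lines of arithmetic (an illustrative sketch; the performance and effort variances are the values reported above, not recomputed from the raw data):
\begin{lstlisting}
# Illustrative check of the luck/effort/ability decomposition.
p, g = 0.5, 22
var_luck = p * (1 - p) / g        # 0.01136364
var_performance = 0.03798835      # reported variability of performance
var_effort = 0.01563645           # reported variance of the effort ratio
var_ability = var_performance - var_luck - var_effort
print(round(var_luck, 8), round(var_ability, 8))  # 0.01136364 0.01098826
\end{lstlisting}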
An alternative definition of {\em luck} is that it arises when the residual in the regression of $y_g$ on the explanatory variables defined in Section 3 differs markedly from the mean value in the distribution of noises. That is, the presence of luck is revealed by a large impact of unobserved variables (Mauboussin, 2012). Elias et al. (2012) and Gilbert and Wells (2019) consider four types of luck. The first one arises from physical randomization, through the use of dice, cards, etc. (I). The second kind of luck is due to simultaneous decision making (II). The third one is due to human performance fluctuating unpredictably (III), while the last one arises from matchmaking (IV).\footnote{A yellow card shown to a player (types II and III) can be seen as bad luck for that player's team and good luck for the rival, while having to play a game in the 15th round of the tournament between the top team and the bottom team means good luck for the top team and bad luck for the bottom team (type IV).} \\
In Figure 9, we can distinguish in our sample three games in which the residual deviates markedly from the mean value of the noises in all 10,000 replications. The types of luck that may have affected the outcomes in these cases are the following:
\begin{itemize}
\item Round 5 - London Wasps 34 vs 5 Exeter Chiefs: contrary to what happened in the previous week, the Wasps regained their captain and 3 players from the English national team, while Exeter missed 8 of its players who, after playing for the national team, were either injured or were forced to rest after the Autumn Nations Cup (type IV). Also, the Chiefs conceded 15 penalties (a high number at this level of competition) and were shown a yellow card, letting the Wasps score a try during the sin-bin time (types II and III).\footnote{A yellow card shown to a player means that he must leave the field for ten minutes (or is in the sin bin), while a red card means that he is expelled from the game.}
\item Round 15 - Worcester Warriors 14 vs 62 Northampton Saints: the Warriors changed 10 players from the previous week's game and had three injured players (the fullback and two tighthead props), while the Saints recovered two players from the national team. A victory would close the gap between the Saints and the top four (types II and IV). A Warriors player was shown a red card in the 49th minute, and the team conceded 6 tries after that (type III).
\item Round 20 - Exeter Chiefs 74 vs 3 Newcastle Falcons: the Chiefs, already qualified for the semifinals, needed a win to secure home advantage during the playoffs. It was also the first game in more than 5 months at which fans were allowed to attend (type IV). Besides that, it was a record performance by Exeter, since it achieved the largest score difference in a top competition (type III). For the Falcons it was the longest trip of the tournament (type IV), and they were shown a yellow card in minute 26, conceding two tries during the sin-bin time (types II and III).
\end{itemize}
\section{Discussion}
In our analysis we found that while the results of rugby matches can be explained by the ability of the teams, another highly significant variable is the motivation of the players, reflected by the effort exerted by them. By contrast, luck seems to have had an impact only in a few games in our sample. \\
We have followed here Lenten and Winchester (2015), Butler et al. (2020) and Fioravanti et al. (2021), representing {\em effort} as the ratio of tries over the sum of tries and scoring kick attempts. While the results obtained in our Bayesian analysis are sound, there exist many other ways of defining a proxy for motivation in rugby. We could argue that an increased number of tackles indicates that a team has exerted a lot of effort, revealing that it is highly motivated; on the other hand, such a team has not exerted a large effort in keeping the ball. One could also, with the help of GPS, track the {\em physical} effort of the players and identify it with their motivation. In any case, the definition of a measure of effort as a proxy for motivation, like that of ability or luck, has a degree of arbitrariness. \\
Future lines of research could explore and compare the impact of our proxy for motivation in other tournaments, and even compare appropriate definitions of ``effort'' in different sports. Another topic that is worth studying is the evolution of effort over time. The results of such an investigation could be useful to assess how the incentives given to players may have changed, affecting their motivation.
\section*{Funding}
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
\section*{References}
- Agnew, G. A., \& Carron, A. V. (1994). Crowd effects and the home advantage. \textit{International Journal of Sport Psychology}, 25(1), 53–62.\\
-Asif, M., \& McHale, I. G. (2016). In-play forecasting of win probability in One-Day International cricket: A dynamic logistic regression model. \textit{International Journal of Forecasting}, 32(1), 34-43.\\
- Baboota, R., \& Kaur, H. (2019). Predictive analysis and modelling football results using machine learning approach for English Premier League. \textit{International Journal of Forecasting}, 35(2), 741-755.\\
-Baio, G., \& Blangiardo, M. (2010). Bayesian hierarchical model for the prediction of football results. \textit{Journal of Applied Statistics}, 37(2), 253-264.\\
- Boudreaux, C. J., Sanders, S. D., \& Walia, B. (2017). A natural experiment to determine the crowd effect upon home court advantage. \textit{Journal of Sports Economics}, 18(7), 737-749.\\
-Boshnakov, G., Kharrat, T., \& McHale, I. G. (2017). A bivariate Weibull count model for forecasting association football scores. \textit{International Journal of Forecasting}, 33(2), 458-466.\\
-Butler, R., Lenten, L. J., \& Massey, P. (2020). Bonus incentives and team effort levels: Evidence from the ``Field''. \textit{Scottish Journal of Political Economy}, 67(5), 539-550.\\
- Carmichael, F., \& Thomas, D. (2005). Home-field effect and team performance: evidence from English premiership football. \textit{Journal of Sports Economics}, 6(3), 264-281.\\
- Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., ... \& Riddell, A. (2017). Stan: A probabilistic programming language. \textit{Journal of Statistical Software}, 76(1), 1-32. \\
-Constantinou, A. C., Fenton, N. E., \& Neil, M. (2012). pi-football: A Bayesian network model for forecasting Association Football match outcomes. \textit{Knowledge-Based Systems}, 36, 322-339.\\
-Denrell, J., \& Liu, C. (2012). Top performers are not the most impressive when extreme performance indicates unreliability. \textit{Proceedings of the National Academy of Sciences}, 109(24), 9331-9336.\\
- Downward, P., \& Jones, M. (2007). Effects of crowd size on referee decisions: Analysis of the FA Cup. \textit{Journal of Sports Sciences}, 25(14), 1541-1545.\\
-Dyte, D., \& Clarke, S. R. (2000). A ratings based Poisson model for World Cup soccer simulation. \textit{Journal of the Operational Research Society}, 51(8), 993-998.\\
-Elias, G. S., Garfield, R., \& Gutschera, K. R. (2012). \textbf{Characteristics of Games}. MIT Press.\\
-Fioravanti, F., Delbianco, F., \& Tohm\'e, F. (2021). Home advantage and crowd attendance: Evidence from rugby during the Covid 19 pandemic. {\tt arXiv:2105.01446}.\\
- Fioravanti, F., Tohm\'e, F., Delbianco, F., \& Neme, A. (2021). Effort of rugby teams according to the bonus point system: a theoretical and empirical analysis. \textit{International Journal of Game Theory}, 50, 447–474. \\
-Garc\'ia, M. S., Aguilar, Ó. G., Vázquez Lazo, C. J., Marques, P. S., \& Fernández Romero, J. J. (2013). Home advantage in Home Nations, Five Nations and Six Nations rugby tournaments (1883-2011). \textit{International Journal of Performance Analysis in Sport}, 13(1), 51-63\\
-Gilbert, D. E., \& Wells, M. T. (2019). Ludometrics: luck, and how to measure it. \textit{Journal of Quantitative Analysis in Sports}, 15(3), 225-237.\\
-Goddard, J. (2005). Regression models for forecasting goals and match results in association football. \textit{International Journal of Forecasting}, 21(2), 331-340.\\
-Hassanniakalager, A., Sermpinis, G., Stasinakis, C., \& Verousis, T. (2020). A conditional fuzzy inference approach in forecasting. \textit{European Journal of Operational Research}, 283(1), 196-216.\\
-Herlambang, M. B., Taatgen, N. A., \& Cnossen, F. (2021). Modeling motivation using goal competition in mental fatigue studies. \textit{Journal of Mathematical Psychology}, 102, 102540.\\
-Johnson, E. J., \& Payne, J. W. (1985). Effort and accuracy in choice. \textit{Management Science}, 31(4), 395-414.\\
-Jones, M. V., Bray, S. R., \& Olivier, S. (2005). Game location and aggression in rugby league. \textit{Journal of Sports Sciences}, 23(4), 387-393.\\
- Juárez, M. A., \& Steel, M. F. (2010). Model-based clustering of non-Gaussian panel data based on skew-t distributions. \textit{Journal of Business \& Economic Statistics}, 28(1), 52-66. \\
-Kerr, J. H., \& van Schaik, P. (1995). Effects of game venue and outcome on psychological mood states in rugby. \textit{Personality and Individual Differences}, 19(3), 407-410.\\
-Kharratzadeh, M. (2017). Hierarchical Bayesian Modeling of the English Premier League. \texttt{https://github.com/milkha/}\\{\tt EPLKDD} and \texttt{https://mc-stan.org/events/stancon2017-notebooks/stancon2017-kharratzadeh-epl.pdf.}\\
-Kushniruk, A. (1995). Trading Off Accuracy for Effort in Decision Making. Review of The Adaptive Decision Maker, by John Payne, James Bettman, and Eric Johnson.\\
-Lenten, L. J., \& Winchester, N. (2015). Secondary behavioural incentives: Bonus points and rugby professionals. \textit{Economic Record}, 91(294), 386-398.\\
- Legaz-Arrese, A., Moliner-Urdiales, D., \& Munguía-Izquierdo, D. (2013). Home advantage and sports performance: evidence, causes and psychological implications. \textit{Universitas Psychologica}, 12(3), 933-943.\\
-Lyu, W., \& Bolt, D. M. (2021). A psychometric model for respondent‐level anchoring on self‐report rating scale instruments. \textit{British Journal of Mathematical and Statistical Psychology}.\\
- Mauboussin, M. J. (2012). \textbf{The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing}. Harvard Business Press.\\
-Morris, S. P. (2015). Moral luck and the talent problem. \textit{Sport, Ethics and Philosophy}, 9(4), 363-374.\\
- Neave, N., \& Wolfson, S. (2003). Testosterone, territoriality, and the 'home advantage'. \textit{Physiology \& Behavior}, 78(2), 269-275.\\
-Page, L., \& Page, K. (2010). Evidence of referees' national favoritism in rugby. \textit{NCER Working Paper Series}, 62.\\
- Page, K., \& Page, L. (2010). Alone against the crowd: Individual differences in referees' ability to cope under pressure. \textit{Journal of Economic Psychology}, 31(2), 192-199.\\
-Pluchino, A., Biondo, A. E., \& Rapisarda, A. (2018). Talent versus luck: The role of randomness in success and failure. \textit{Advances in Complex Systems}, 21(03n04), 1850014.\\
- Ponzo, M., \& Scoppa, V. (2018). Does the home advantage depend on crowd support? Evidence from same-stadium derbies. \textit{Journal of Sports Economics}, 19(4), 562-582.\\
-Russo, J. E., \& Dosher, B. A. (1983). Strategies for multiattribute binary choice. \textit{Journal of Experimental Psychology: Learning, Memory, and Cognition}, 9(4), 676.\\
-Santos-Fernandez, E., Wu, P., \& Mengersen, K. L. (2019). Bayesian statistics meets sports: a comprehensive review. \textit{Journal of Quantitative Analysis in Sports}, 15(4), 289-312.\\
- Schwartz, B., \& Barsky, S. F. (1977). The home advantage. \textit{Social forces}, 55(3), 641-661.\\
-Simon, R. (2007). Deserving to be lucky: reflections on the role of luck and desert in sports. \textit{Journal of the Philosophy of Sport}, 34(1), 13-25.\\
- Stan Development Team (2020). RStan: the R interface to Stan.
R package version 2.21.2. {\tt http://mc-stan.org/ }\\
-Stefani, R. T. (2009). Predicting score difference versus score total in rugby and soccer. \textit{IMA Journal of Management Mathematics}, 20(2), 147-158.\\
- Štrumbelj, E., \& Vračar, P. (2012). Simulating a basketball match with a homogeneous Markov model and forecasting the outcome. \textit{International Journal of Forecasting}, 28(2), 532-542.\\
- Tango, T. M., Lichtman, M. G., \& Dolphin, A. E. (2007). \textbf{The Book: Playing the Percentages in Baseball}. Potomac Books, Inc. \\
- Wetzels, R., Tutschkow, D., Dolan, C., Van der Sluis, S., Dutilh, G., \& Wagenmakers, E. J. (2016). A Bayesian test for the hot hand phenomenon. \textit{Journal of Mathematical Psychology}, 72, 200-209.\\
- Yukhymenko-Lescroart, M. (2021). The role of passion for sport in college student-athletes' motivation and effort in academics and athletics. \textit{International Journal of Educational Research Open}, 2–2, 100055.
\newpage
\section*{Appendix: STAN code}
\begin{lstlisting}
data {
int nteams; int ngames; int nweeks;
int home_week[ngames]; int away_week[ngames];
int home_team[ngames]; int away_team[ngames];
vector[ngames] score_diff; row_vector[nteams] prev_perf;
vector[ngames] RatioH; vector[ngames] RatioA;
vector[ngames] Att; vector[ngames] Day;
}
parameters {
real b_home; real b_prev; real b_effort; real b_att; real b_day;
real nu; real sigma_y; row_vector[nteams] sigma_a_raw;
matrix[nweeks,nteams] eta_a;
}
transformed parameters {
matrix[nweeks, nteams] a;
a[1,] = b_prev * prev_perf + eta_a[1,];
for (w in 2:nweeks) {
a[w,] = a[w-1,] + sigma_a_raw .* eta_a[w,];
}
}
model {
vector[ngames] a_diff; vector[ngames] e_diff;
vector[ngames] home_adv;
nu ~ gamma(9,0.5); b_prev ~ normal(0.5,1);
sigma_y ~ normal(0.5,1); b_home ~ normal(0.5,1);
b_effort ~ normal(0.5,1); b_att ~ normal(0.5,1);
b_day ~ normal(0.5,1);
sigma_a_raw ~ normal(0,0.1); to_vector(eta_a) ~ normal(0,0.5);
for (g in 1:ngames) {
a_diff[g] = a[home_week[g],home_team[g]] - a[away_week[g],away_team[g]];
}
for (g in 1:ngames) {
e_diff[g] = b_effort * (RatioH[g] - RatioA[g] );
}
for (g in 1:ngames) {
home_adv[g] = b_home + b_att * Att[g] + b_day*Day[g];
}
score_diff ~ student_t(nu, a_diff + e_diff + home_adv, sigma_y);
}
generated quantities {
vector[ngames] score_diff_rep;
for (g in 1:ngames)
score_diff_rep[g] = student_t_rng(nu,
(a[home_week[g] ,home_team[g]] -
a[away_week[g],away_team[g]])
+ b_effort * (RatioH[g] - RatioA[g]) +
b_att * Att[g] + b_day*Day[g] + b_home, sigma_y);
}
}
\end{lstlisting}
\end{document}
\section{Introduction}
In golf, correct swing forms lead to excellent outcomes.
For golf beginners, a common way to understand and learn a correct swing motion naturally is to observe professional players' swings and imitate their forms.
However, it is hard for beginners to specify key frames that they should focus on and which part of the body they should correct due to the inconsistent timing and the lack of knowledge.
In this work, we propose a golf swing analysis tool using deep neural networks to help the user recognize the difference between the user's swing video and the expert's one from the Internet or broadcast media.
The tool visualizes the image frames and human motion where the discrepancy between the expert and the user's swing is large, and the user can easily grasp a motion that needs to be corrected.
\begin{figure*}[h]
\centering
\includegraphics[width=0.85\textwidth]{teaser2.jpg}
\caption{Comparison among the distance in the latent space, the two input images and the estimated 3D skeletons. The red line in the graph indicates the threshold for discrepancy detection. The colored skeleton and black skeleton indicate the user's pose and expert's pose, respectively. The intensity and radius of red circles indicate the degree of joint position difference of the two skeletons.}~\label{fig:teaser2}
\end{figure*}
\section{OUR APPROACH}
We create a prototype application that visualizes the distance in the latent space between two input videos, detects discrepant frames with an adaptive threshold, and compares the detected frames using 3D human poses (Figure~\ref{fig:teaser1}).
Since the phasing and timing of a golf swing can vary from person to person, we synchronize the input video clips by implementing the Temporal Cycle-Consistency (TCC) network~\cite{TCC}.
The self-supervised learning algorithm used in the TCC can be trained with unlabeled data, such as videos from the Internet and broadcast media, and can therefore generalize to many other types of motion analysis.
The inputs to the network are two video clips, and the outputs of the network are two compressed latent vectors.
The distance between two latent vectors indicates the similarity of the two videos.
This characteristic of the latent space can be used for synchronizing videos within the same category of motion. In this work, we focus on the distance between two videos in the latent space for detecting and retrieving the fine-grained discrepancies between two different swing forms, especially between beginners and experts.
For the application implementation, we first extract the latent vectors frame by frame as the output of the TCC network, and then synchronize the input videos by calculating the Euclidean distance between the latent vectors. At this point, similar motions should appear close in the latent space; however, repeated motions appear frequently during a golf swing, i.e., a particular posture can be observed both during the backswing and the downswing phase.
Therefore, we apply the Dynamic Time Warping algorithm to penalize the distance in the latent space to gain a better alignment.
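A minimal sketch of this alignment step is given below (Python/NumPy, with random placeholder embeddings standing in for the per-frame TCC latent vectors; the exact penalization used in our implementation may differ from this plain DTW).
\begin{verbatim}
import numpy as np

def dtw_align(emb_a, emb_b):
    # Classic dynamic-time-warping alignment of two embedding sequences.
    n, m = len(emb_a), len(emb_b)
    dist = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=-1)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack to recover the warping path (pairs of aligned frame indices).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], acc[n, m]

# Placeholder embeddings: 120 user frames and 150 expert frames, 128-d each.
user_emb, expert_emb = np.random.randn(120, 128), np.random.randn(150, 128)
warping_path, alignment_cost = dtw_align(user_emb, expert_emb)
\end{verbatim}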
To visualize the 3D poses of players, we use HRNet~\cite{Hrnet} and a simple linear network~\cite{3dposebaseline}, and then apply Procrustes analysis to align the two joint sets obtained from different camera positions. This feature allows users to see the motion difference between themselves and the expert and to intuitively recognize which part of the body they should correct to mimic the expert's motion.
\begin{table}[]
\caption{Pearson's correlation between the joint difference of each body part and the distance in the latent space.}
\label{tab:corr}
\begin{tabular}{lclc}
Joint & Correlation & Joint & Correlation \\ \hline
Whole body & \multicolumn{1}{c|}{0.523} & Wrist & 0.468 \\
Elbow & \multicolumn{1}{c|}{0.507} & Spine & 0.468 \\
Shoulder & \multicolumn{1}{c|}{0.477} & Knee & 0.460 \\
Neck & \multicolumn{1}{c|}{0.473} & Foot & 0.455 \\
Head & \multicolumn{1}{c|}{0.471} & Hip & 0.441
\end{tabular}
\end{table}
\section{EVALUATION}
As our video database, we use GolfDB~\cite{GolfDB}, which consists of 1400 high-quality video clips of experienced golfers' swings from online video platforms.
To verify the effectiveness of the application, we investigated whether the distance in the latent space can represent the discrepancy between two input video frames.
We calculated the Mean Per Joint Position Error (MPJPE) of the two estimated 3D skeletons and compared it with the corresponding distance in the latent space.
In the experiment, we observed the correlation between the distance in the latent space and the MPJPE.
This means that when the distance in the latent space is small, the difference between the two 3D poses is small (Figure~\ref{fig:teaser2}.c); when the distance is large, the 3D poses differ more (Figure~\ref{fig:teaser2}.b and d).
Furthermore, when digging into individual body parts, we found that particular body parts affect the distance in the latent space more than others.
As shown in Table~\ref{tab:corr}, upper body joints are more strongly correlated with the distance in the latent space than lower body joints.
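The quantities behind Table~\ref{tab:corr} can be computed along the following lines (an illustrative Python/NumPy sketch with random placeholder poses; the joint indices and grouping are hypothetical and do not correspond to the exact code of our application).
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

def mpjpe(pose_a, pose_b):
    # Mean per-joint position error between two (num_joints, 3) skeletons.
    return np.linalg.norm(pose_a - pose_b, axis=-1).mean()

# Placeholder data: per-frame latent distances and aligned 3D poses (17 joints).
num_frames = 200
latent_dist = np.random.rand(num_frames)
poses_user = np.random.randn(num_frames, 17, 3)
poses_expert = np.random.randn(num_frames, 17, 3)

errors = np.array([mpjpe(u, e) for u, e in zip(poses_user, poses_expert)])
r_whole_body, _ = pearsonr(latent_dist, errors)

# Per-body-part correlation, e.g. for a hypothetical pair of wrist joints.
wrist_idx = [9, 10]
wrist_err = np.linalg.norm(
    poses_user[:, wrist_idx] - poses_expert[:, wrist_idx], axis=-1).mean(axis=1)
r_wrist, _ = pearsonr(latent_dist, wrist_err)
\end{verbatim}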
Based on the results, we observed that the proposed tool can retrieve motion features that help the user identify the key moves that they should correct first.
As depicted in Figure~\ref{fig:teaser2}.a, we also observed that in certain circumstances the distance in the latent space remains large even when the two skeletons appear similar.
This may be explained by the fact that the network infers differences not only in the human motion but also in features outside the human body.
In this case, the pose of the golf club, which is also considered critical during the swing, might be the main factor causing the large distance in the latent space.
\section{CONCLUSION}
In this work, we created an application for analyzing and visualizing the discrepancy between two input golf swing motions. The user can easily grasp the difference between the swings of various experts and their own during self-training.
Through our early experiments, we confirmed that the proposed tool can detect discrepancies between two swing videos.
Furthermore, we provide an interpretation of the network that helps to understand the correlation between the learned latent space and the motion of the human body as well as other props.
In the future, we plan to implement a system that proposes optimal corrections based on the analyzed factors that directly affect the user's golf swing performance.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:introduction}
In 2019 the Dutch population travelled a total distance of well over 200 billion kilometres within the Netherlands \cite{vervoersprestatie}, or equivalently, 250,000 round trips to the moon \cite{afstandmaan}. To be able to construct and maintain the right infrastructure for all these travel movements, it is very important to have reliable data regarding the travel habits of the Dutch population. Statistics Netherlands (CBS, from the Dutch name: Centraal Bureau voor de Statistiek) aims to use travel surveys based on Mobile Device Location Tracking (MDLT), which includes GPS and other location tracking methods, to study the mobility of the Dutch population. Respondents use an app that tracks their location over a period of seven days. At the end of each day, they are asked to check whether the app has detected stops correctly and predicted the right mode of travel (for example: walking, cycling or taking the train). Using MDLT has many benefits over the more traditional way of conducting mobility studies. Traditionally, respondents were asked to write down all their travel movements at the end of the day. This leads to an underestimation of travel distance, number of trips and travel time, as respondents do not recollect every movement \cite{mccool2021app}. Writing down all travel movements is also more time consuming for respondents than verifying the data already collected by the app.
The usage of MDLT in travel surveys is a relatively new development and doing so at a national scale is unique \cite{smeets2019automatic}. However, ecologists have been using GPS trackers for a long time to study the territories or migrational routes of animals \cite{animalBBMM}. Compared to the MDLT travel surveys of humans, the GPS trackers used on animals have a low sample rate: the CBS app measures the location of a moving participant every second, while animal GPS trackers typically measure the location every few minutes or even every few hours.
A difficulty with the use of MDLT travel surveys is that there can be gaps of missing data points. These gaps are periods of time during which no location data is collected. The reasons for these gaps vary, ranging from loss of connection due to a tunnel or bridge, to the respondent's phone shutting down the app, or the phone running out of battery. As there are multiple reasons for missing data, the gaps can vary from relatively small, perhaps only a few data points, to gaps of thousands of consecutive data points. Filling in the missing data perfectly is not possible, though by using an approximation it is still possible to get a good idea of the travel habits of the population. However, the missing data cannot be ignored.
Hence, it is necessary to find a way to properly describe travel habits, in particular the distance travelled and the area in which an individual is travelling. To quantify these travel habits we look at the length of the path taken by the traveller to quantify the distance travelled, and the so-called Radius of Gyration (RoG) to quantify the area an individual is travelling through. The latter is a time-weighted measure of the spread of the location of a person during a certain period of time. A low RoG indicates that a person spends most of their time in more or less the same area or that their travel paths are ``bunched up'', while a high RoG indicates that a person travels great distances. We are therefore interested in making a good estimate for the missing data with respect to the above attributes. A very simple model would fill the gap with a straight line path, but this has some disadvantages. For example, it can significantly underestimate the distance travelled, as people probably did not travel in an exact straight line. A gap of many missing data points may also contain one or several stops. Conversely, in such a gap a small round trip, for example to walk your dog, can be missed completely.
Our aim is to contribute to methods that can better estimate relevant attributes of the missing data. To do this we will be using a stochastic model, called a Brownian bridge, to approximate the missing paths.
We will start this report by introducing our ideas, experiments, and models in Chapter \ref{sec:methods}. Then we will give the background theory to understand the specific models we are using in Chapter \ref{sec:theory}. A detailed explanation of the ideas, experiments, and models that were briefly discussed before, can be found in Chapter \ref{sec:model}. We then present the results of our experiments in Chapter \ref{sec:simulation}, which we discuss afterwards in Chapter \ref{sec:discussion}. Then in the end we formulate a conclusion based on those results in Chapter \ref{sec:conclusion}.
\clearpage
\section{Methods}\label{sec:methods}
To fill up gaps in the data, we generally want to assume that a person moves in accordance with some stochastic process. Suppose that we have a missing segment in some movement data; we then want to use the available data to predict the movement in the gap. We will assume that the movement in the gap is realised by the same stochastic process with the same parameters as in the available data. Therefore, we now want to characterise our process in case we know it starts at a specific point and ends at a specific point. A stochastic process conditioned on its endpoint is called a bridge.
\subsection{Approaches}
The first of the processes we will discuss is Brownian motion, the continuous limit of a random walk. We looked into this for two reasons. The first and main reason is that Brownian motion is easy to do calculations with. This means it is possible to get theoretical results and to obtain good estimators for model parameters.
The second reason is that on average Brownian motion usually captures the dynamics of a process very well. When given two coordinates and a time between them, one can then construct the so-called Brownian bridge.
This concept has already been used in ecology to model the movement of animals \cite{animalBBMM}. Now, whereas ecologists are interested in finding the habitat of the animal based on limited data points, we primarily want to know the path length, i.e. the distance travelled. However, as Brownian motion is nowhere differentiable, its path length is not well-defined \cite{kallenberg1997foundations}. To solve this problem, we discretise the Brownian bridge, so we get a random walk of which we can calculate the path length. We have found an expression for the expected value \eqref{eq:pathlengthexpectation} of the distance travelled for such a discretised Brownian bridge.
The second type of process we have briefly looked into is the Feller process, a special type of continuous-time process. For the interested reader, see \cite{kallenberg1997foundations}.
Specifically, we considered a type of the Feller process, which consists of a position and an internal state.
The movement of the process is determined by the internal state, which is independent of the actual position of the process.
The internal state can be used to model the velocity of the person or different modes of transportation.
In such a case, transition probabilities of the internal state represent changing one's modes of transportation, direction or speed.
For example, we say that someone who is stationary has a certain probability to remain stationary or to start moving again.
Similarly, for motion, we expect that the velocity of a person is dependent on their previous velocity, i.e. we do not expect someone to suddenly turn 180 degrees at high speed.
It turns out that calculating a bridge for such processes was too difficult for us.
So, for filling gaps in the location data, we have only considered the Brownian bridge model.
However, we have still simulated processes with an internal state numerically, to assess how well the Brownian bridge model performs on motion that does not behave precisely as Brownian motion. All the stochastic models that we are going to describe, besides the Brownian bridge model, are purely for data-generating purposes, which will become clear later on.
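As a simple example of such a data-generating process, the sketch below (in Python; the particular transition probabilities and the fixed speed are arbitrary illustrative choices, not the exact processes used in our experiments) alternates between a stationary state and a moving state with a slowly drifting heading.
\begin{verbatim}
import numpy as np

def internal_state_walk(n_steps, speed=1.0, p_stop=0.05, p_move=0.2,
                        turn_sd=0.3, rng=None):
    # Random walk whose movement is governed by an internal state:
    # either moving with a persistent heading, or being stationary.
    rng = np.random.default_rng(rng)
    pos = np.zeros((n_steps + 1, 2))
    heading, moving = rng.uniform(0, 2 * np.pi), True
    for t in range(n_steps):
        if moving:
            if rng.random() < p_stop:                # switch to a stop
                moving = False
            else:
                heading += rng.normal(0, turn_sd)    # small change of direction
        elif rng.random() < p_move:                  # resume moving
            moving = True
        step = speed * np.array([np.cos(heading), np.sin(heading)]) if moving else 0.0
        pos[t + 1] = pos[t] + step
    return pos
\end{verbatim}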
\subsection{Implementation}\label{subsec:implementation}
Note that we have briefly mentioned two types of models. For a formal definition of these models and their properties, the reader can look at Chapters \ref{sec:theory} and \ref{sec:model}. The aim of this section is to give a brief overview of our implementation of these models and how we have set up our experiments. Each model and experiment is implemented in Python. For the reader who is interested in our code, we refer to our \href{https://github.com/kerimdelic/MathForIndustry/blob/main/CBS.ipynb}{notebook}. The results of our experiments will be displayed in Chapter \ref{sec:simulation}.
First, we have implemented the Brownian bridge model. The parameters of our model are the following:
\begin{itemize}
\item The starting position of the bridge.
\item The ending position of the bridge.
\item The number of time steps.
\item The \emph{diffusion coefficient} $\sigma_m$.
\end{itemize}
With the above parameters, we can simulate a Brownian bridge and choose where it will be located. Furthermore, we can also choose what its deviation will be from the mean by fixing the diffusion coefficient $\sigma_m$. This coefficient will thoroughly be discussed in Section \ref{subsec:brownianbridge}. Another important parameter is the number of time steps, as this number is the same as the total number of location points that form the path.
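A minimal sketch of such a simulator is given below (Python/NumPy). It uses the standard sequential construction of a Brownian bridge, and we take $\sigma_m^2$ to be the variance of the motion per unit time step, which is our reading of the diffusion coefficient.
\begin{verbatim}
import numpy as np

def brownian_bridge(start, end, n_steps, sigma_m, rng=None):
    # 2D Brownian bridge from `start` to `end` in `n_steps` unit time steps,
    # with per-step variance sigma_m**2, simulated sequentially.
    rng = np.random.default_rng(rng)
    start, end = np.asarray(start, float), np.asarray(end, float)
    path = np.empty((n_steps + 1, 2))
    path[0] = start
    for i in range(n_steps):
        remaining = n_steps - i              # steps left before hitting `end`
        mean = path[i] + (end - path[i]) / remaining
        var = sigma_m**2 * (remaining - 1) / remaining
        path[i + 1] = rng.normal(mean, np.sqrt(var))
    return path

bridge = brownian_bridge(start=(0, 0), end=(-24.21, -41.12),
                         n_steps=150, sigma_m=2.0)
\end{verbatim}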
The model itself simulates a path that follows a Brownian motion. Of course, we can calculate certain properties of this path. The two properties that will recur throughout the paper are the path length and the RoG. The formal definition of the RoG can be found in Section \ref{subsec:rogtheory}. These properties can be extracted from a complete path, meaning that for a given path without gaps, we can numerically calculate the path length and the RoG. However, as we will discuss paths with a gap, we want to extract the same properties by filling the gap using a certain procedure.
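Continuing the sketch above, the two properties can be computed from any complete path as follows; the root-mean-square form of the RoG for equally spaced observations is our interpretation of the time-weighted definition of Section \ref{subsec:rogtheory}.
\begin{verbatim}
import numpy as np

def path_length(path):
    # Total distance travelled: sum of the lengths of consecutive segments.
    return np.linalg.norm(np.diff(path, axis=0), axis=1).sum()

def radius_of_gyration(path):
    # Root-mean-square distance to the mean location,
    # assuming equally spaced (hence equally weighted) observations.
    center = path.mean(axis=0)
    return np.sqrt(((path - center) ** 2).sum(axis=1).mean())
\end{verbatim}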
We followed the procedure as outlined in \cite{animalBBMM} to be able to estimate the diffusion coefficient. More precisely, it is of interest to estimate the mobility of a person during the travel. This description coincides with the definition of the diffusion coefficient. Thus, this procedure actually derives an estimate of this coefficient.
\subsubsection*{The Estimation Procedure}
Suppose the location data is of the form $(z_0, t_0), \dots, (z_{2n}, t_{2n})$. At each time $t_i$, there is a location $z_i$. The estimation procedure can be described by three steps.
\begin{enumerate}
\item Between every other point we construct independent Brownian bridges with the same unknown parameter $\sigma_m$, so there is a bridge between $z_0$ and $z_2$, then a bridge between $z_2$ and $z_4$, and so on.
\item We consider each skipped point $z_{2k+1}$ as a realisation at time $t_{2k+1}$ of the Brownian bridge between $z_{2k}$ and $z_{2k+2}$.
\item Then, we can use maximum likelihood estimation for these odd observations $(z_1, z_3, z_5,...)$ to estimate the unknown parameter $\sigma_m$. The explicit likelihood function can be found in \ref{eq:likelihood}.
\end{enumerate}
The difference between our procedure and the procedure in the paper \cite{animalBBMM} is that we maximise the log-likelihood function. Besides this, we also use a different algorithm for finding the maximum of the function, namely ternary search.
An illustration of the estimation procedure is given in Figure \ref{fig:graph1}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{figures/estimate_sigma.pdf}
\caption{We have created seven ``fake'' points. The green dots are the even location points $(z_0, z_2, z_4, z_6)$ and the blue dots are the odd location points $(z_1, z_3, z_5)$. Between each pair of green dots, a Brownian bridge is simulated. All three bridges are modelled independently and form a large path. Each blue dot is interpreted as a location that results from the corresponding Brownian bridge. This location does not have to be on the path, and is used for our estimation procedure that obtains the most likely $\sigma_m$ for the three Brownian bridges.}
\label{fig:graph1}
\end{figure}
Given some movement data with a gap, we can now estimate the diffusion coefficient. Moreover, we can also extract the starting and ending position of the gap and the number of time steps of this gap. This means we have all the parameters for the Brownian bridge model, and therefore we can calculate the expected path length during the gap. This expected value as a function of the parameters will be given in Section \ref{subsec:pathlength}.
For the RoG we were not able to find an analytical expression for the expected value. However, since we have all the parameters of the Brownian bridge model, we can simulate paths for the gap and then calculate the RoG. To estimate the RoG we repeat this simulation many times and take the average RoG over all simulations.
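A sketch of this Monte Carlo step is shown below, assuming the observed parts of the path before and after the gap are available as arrays; the helper for simulating one bridge realisation follows the construction of Chapter \ref{sec:theory}, and the number of repetitions is an arbitrary choice of ours.
\begin{verbatim}
import numpy as np

def simulate_bridge(start, end, steps, sigma_m, rng):
    """One discrete realisation of a Brownian bridge between start and end."""
    t = np.arange(steps + 1)[:, None]
    w = np.vstack([np.zeros((1, 2)), rng.normal(size=(steps, 2)).cumsum(axis=0)])
    bridge = w - (t / steps) * w[-1]                 # pin the motion down at both ends
    return start + (t / steps) * (end - start) + sigma_m * bridge

def mc_rog(observed_before, observed_after, steps, sigma_m, reps=1000, seed=0):
    """Average RoG of the completed path, filling the gap `reps` times."""
    rng = np.random.default_rng(seed)
    start, end = observed_before[-1], observed_after[0]
    rogs = []
    for _ in range(reps):
        gap = simulate_bridge(start, end, steps, sigma_m, rng)
        full = np.vstack([observed_before, gap[1:-1], observed_after])
        centre = full.mean(axis=0)
        rogs.append(np.sqrt(np.mean(np.sum((full - centre) ** 2, axis=1))))
    return float(np.mean(rogs))
\end{verbatim}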
\subsection{Experiments}\label{subsec:experiments}
We have introduced our estimation procedure for the diffusion coefficient, whose accuracy we want to quantify in terms of path length and RoG.
Therefore, we have constructed two main experiments. One experiment concentrates on the path length and the other experiment is based on the RoG.
The general setup of these experiments is as follows:
\begin{enumerate}
\item A path is created from some process.
\item The actual values of the path length and RoG are calculated.
\item A gap is created by deleting a certain number of consecutive location points.
\item We estimate $\sigma_m$ by feeding our estimation procedure the incomplete path.
\item With the estimated $\sigma_m$, we also estimate the path length and RoG.
\end{enumerate}
In Figure \ref{fig:procedure} an illustration of the above framework is given.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fixedvelocityrandomwalk.pdf}
\caption{A fixed velocity random walk from $(0,0)$ to $(-24.21,-41.12)$.}
\label{fig:path}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/gap.pdf}
\caption{In green, a gap from time step $200$ to $350$.}
\label{fig:gap}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fill_gap.pdf}
\caption{In red, the gap filled by a Brownian bridge with estimated $\sigma_m$.}
\label{fig:fillgap}
\end{subfigure}
\caption{In \ref{fig:path}, a path is created of $500$ time steps which follows from the fixed velocity random walk model. Then, in \ref{fig:gap}, a gap is created where we omit the location points of all time steps in $[200, 350]$. Thereafter, in \ref{fig:fillgap} the gap is filled by creating a Brownian bridge of $150$ time steps with an estimated $\sigma_m$ from the incomplete path of \ref{fig:gap}.}
\label{fig:procedure}
\end{figure}
With this setup we can measure how our estimation procedure compares to the actual value in terms of path length and RoG. Moreover, we will also compare our estimation procedure with linear interpolation, which fills the gap with a straight line traversed at constant velocity.
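Putting these pieces together, a single run of the path length experiment can be sketched as follows (for brevity we omit the analogous RoG computation). The sketch reuses the helpers \texttt{path\_length} and \texttt{estimate\_sigma} from the fragments above and a helper \texttt{expected\_path\_length} implementing Equation \ref{eq:pathlengthexpectation} (a version of which is sketched in Section \ref{subsec:bbresults}); as a simplification, $\sigma_m$ is here estimated from the part of the path before the gap only.
\begin{verbatim}
import numpy as np

def one_run(path, gap_start=200, gap_end=350):
    t = np.arange(len(path))
    truth = path_length(path[gap_start:gap_end + 1])        # step 2: actual path length in the gap
    # steps 3-4: treat the points strictly between gap_start and gap_end as missing and
    # estimate sigma_m from the observed part before the gap (a simplification).
    sigma_hat = estimate_sigma(path[:gap_start + 1], t[:gap_start + 1])
    d = path[gap_end] - path[gap_start]
    T = gap_end - gap_start
    bridge_est = expected_path_length(sigma_hat, T, d, n=T)  # step 5: Brownian bridge estimate
    straight_est = np.linalg.norm(d)                         # linear interpolation baseline
    return bridge_est / truth, straight_est / truth          # normalised by the actual length
\end{verbatim}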
In the next two sections we will discuss in more detail which specific choices we have made regarding the production of data. Because we had no access to empirical data, we have used different data-generating models and different choices of parameters for the experiments. For the interested reader, we refer to Appendix \ref{app:processes} for the technical details of these models. Note that comprehension of the generating models is not essential for understanding our experiments.
\subsubsection{Experiments Regarding the Path Length}
For the experiments regarding path length, we have used four different models. These models are the discrete Brownian motion model, angular random walk model, random walk with internal state model and run-and-tumble model. Each model has its own parameters (for the details we refer again to Appendix \ref{app:processes}). These parameters affect the amount of randomness in their realisations. We will now describe our choices of parameters for reproducibility of the experiments.
The discrete Brownian motion model has a clear parameter for the randomness, since its steps are realisations of a normal distribution with some standard deviation. We will see that this standard deviation $\sigma$ is indirectly controlled by the diffusion coefficient. In our experiments, we use a fixed travelling distance of $10$ in $200$ time steps and model this with different standard deviations $\sigma \in \{0.01, 0.1, 1, 10\}$ to give a wide spread in the amount of randomness in the system. As our estimation procedure is based on the assumption that the movement is a Brownian motion, we expect our procedure to perform well on this type of data. The result of this experiment should therefore mainly be seen as a sanity check that our procedure works in the best possible case.
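A possible reading of this model, in which each step is a constant drift towards the end point plus independent normal noise, is sketched below; the exact formulation is given in Appendix \ref{app:processes} and may differ in its details.
\begin{verbatim}
import numpy as np

def discrete_brownian_path(n_steps=200, distance=10.0, sigma=1.0, seed=0):
    """Steps = constant drift covering `distance` over the run + N(0, sigma^2) noise."""
    rng = np.random.default_rng(seed)
    drift = np.array([distance / n_steps, 0.0])     # drift along the x-axis, by convention
    steps = drift + rng.normal(scale=sigma, size=(n_steps, 2))
    return np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])
\end{verbatim}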
The angular random walk model gives quite a natural way of moving in an open space without any abrupt stops or tight corners; this kind of movement can correspond to a car on a highway or a person walking in a field. Here the amount of randomness is dictated by the variance of the angular acceleration. For a very large variance the direction in each time step becomes almost uniformly random (this limit is referred to as the fixed velocity random walk model), which gives more chaotic behaviour. We will consider standard deviations $\sigma \in \{0.1, 0.5, 1, 5\}$. The spread here is a bit smaller, since this model converges to nearly straight lines quite fast for small $\sigma$, which is not that interesting to consider.
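One plausible implementation of this description, with unit speed and a normally distributed angular acceleration, is sketched below; again, the precise model is specified in Appendix \ref{app:processes} and may differ in its details.
\begin{verbatim}
import numpy as np

def angular_random_walk(n_steps=500, sigma=0.5, speed=1.0, seed=0):
    """Unit-speed walk whose heading is driven by normally distributed angular acceleration."""
    rng = np.random.default_rng(seed)
    pos, angle, ang_vel = np.zeros(2), 0.0, 0.0
    path = [pos.copy()]
    for _ in range(n_steps):
        ang_vel += rng.normal(scale=sigma)          # random angular acceleration
        angle += ang_vel
        pos = pos + speed * np.array([np.cos(angle), np.sin(angle)])
        path.append(pos.copy())
    return np.array(path)
\end{verbatim}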
The run-and-tumble model has sharper corners and sudden changes in behaviour, which looks more like movement in urban areas. The parameter $l$ that we vary here is related to the probability that the direction changes. We will use $l \in \{0.1, 0.5, 1, 3\}$.
One similarity between the discrete Brownian motion, angular random walk and run-and-tumble models is that their behaviour looks the same at all times. In the random walk with internal state model, by contrast, there is an internal state in which the movement can stop for a while. The next move is chosen from a discrete set of options with certain probabilities. To control the amount of randomness, we have varied the probabilities that the state changes and that certain movements happen. In the most extreme case, all options are chosen uniformly at random.
For each such model, we have simulated paths of $200$ data points and created a large gap by removing $100$ data points in the middle of these paths. We have repeated this process $1000$ times for each choice of parameters and normalised each estimate by the actual path length. The results of the comparison with linear interpolation are displayed in Figure \ref{fig:subfigures}.
\subsubsection{Experiments Regarding the RoG}
To be as concise as possible, we have considered three models (fixed velocity random walk, fixed angular random walk and run-and-tumble). The main idea of this experiment is to compare the RoG of a path with the RoG of the same path after a part of it has been deleted and filled in by a Brownian bridge (according to the estimation procedure).
Each model generated $1000$ paths with the corresponding parameter fixed: the fixed velocity random walk with a velocity of $1$, the fixed angular random walk with $\sigma = 0.1$ and the run-and-tumble model with $l = 1$. We have chosen these settings because they create paths with both small and larger RoG values, which gives more diversity in our results. For each path, the first half of the location points is deleted, so as to consider an extreme case where the gap is rather large. The number of location points is $1000$, so the first $500$ points are deleted. These are all the choices that we have made for these particular experiments. The results can be found in Section \ref{subsec:rog}.
\clearpage
\section{Introduction to Brownian Bridges} \label{sec:theory}
In the CBS travel survey, we sometimes encounter gaps in the data. We want to fill in these gaps by modelling a path between the last data point before and the first data point after the gap. One of the most common methods for modelling this path is the Brownian bridge. In this chapter we will set up a two-dimensional Brownian bridge and derive its distribution.
In the first section, we will elaborate on the mathematical theory of Brownian motion, and in the second section we construct a Brownian bridge. If the reader is already familiar with the concept of a Brownian bridge, then this chapter may be skipped.
\subsection{Brownian Motion}
Before we introduce Brownian motion, we will first explain the notion of a stochastic process in an informal manner; for more details and background, see \cite{kallenberg1997foundations}. To do so, let us consider the Euclidean space $\mathbb{R}^n$ and the movement of a particle in such a space. For example, such a particle could model a molecule in a solution or a person walking around. However, in most cases we cannot predict such movement exactly. To capture such non-deterministic behaviour, we turn to probability theory and view the path itself as a collection of random vectors.
In general, a \emph{stochastic process} in continuous time \cite[pg. 189]{klenke} is a collection $X = \{X_t: t \in I\}$ of random vectors in $\mathbb{R}^n$, where $I$ is some continuous index set, like $(0, \infty)$ or $[0, T]$. A continuous-time process $X$ is called \emph{continuous} if $t \mapsto X_t$ is continuous almost surely.
In the case of movement through Euclidean space, $X_t$ represents the random position of the path at time $t$.
Now, there exist many types of continuous stochastic processes. Although these processes are useful, as they can capture quite general dynamics, it can be difficult to explicitly calculate their distribution. So, we will only consider \emph{Brownian motion}. For a concrete definition, see \Cref{def:brownianmotion}. This is a stochastic process where all time increments are independent, normally distributed random variables. Specifically, let $W_t$ be a Brownian motion process. Then, for times $t$ and $s$ with $t \geqslant s$ we have that
\begin{equation*}
W_t - W_s \sim W_{t-s} \sim N(0, t - s),
\end{equation*}
where $N(\mu, \sigma^2)$ denotes a normal distribution with mean $\mu$ and variance $\sigma^2$. Since we have such a concrete distribution for the increments, it is very feasible to do calculations with Brownian motion.
One particular property we wish to highlight is the scale invariance of the Brownian motion process. Notably, if we scale space by $\varepsilon$ and time by $\varepsilon^{-2}$, we get the process $\varepsilon W_{\varepsilon^{-2} t}$. In particular, we see that for $t \geqslant s$
\begin{equation*}
\varepsilon W_{\varepsilon^{-2} t} - \varepsilon W_{\varepsilon^{-2} s} \sim \varepsilon N(0, \varepsilon^{-2}(t -s)) \sim N(0, t - s).
\end{equation*}
As the above in a sense entirely characterises the Brownian motion, we see that $\varepsilon W_{\varepsilon^{-2} t} \sim W_t$. This scale invariance means that the movement of the process looks the same no matter at what scale one considers the process. In general, this scale invariance is not observed in the movement of people. On a scale of centimetres, a person will walk in a straight line. On a scale of hundreds of metres, they will not move in a straight line due to constraints of the local environment i.e. how one is forced to move due to obstructions such as buildings. However, on a scale of kilometres, they will move roughly in a straight line, since one generally wants to take the shortest path between two locations. Hence, when modelling any type of movement, one should account for the scale of the movement being modelled. We will see this again in Chapter \ref{sec:simulation}.
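This scale invariance is also easy to check numerically: simulating many Brownian motions on a fine grid, the increments of $\varepsilon W_{\varepsilon^{-2} t}$ have the same standard deviation $\sqrt{t - s}$ as those of $W_t$. The grid, $\varepsilon$ and the times in the sketch below are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
eps, T, reps = 0.5, 100, 20000
scale = int(eps ** -2)                            # = 4, so the rescaled times stay on the grid

increments = rng.normal(size=(reps, scale * T))   # unit-variance increments per unit time
W = np.concatenate([np.zeros((reps, 1)), increments.cumsum(axis=1)], axis=1)

s, t = 30, 70
plain = W[:, t] - W[:, s]                                   # W_t - W_s
rescaled = eps * (W[:, scale * t] - W[:, scale * s])        # eps * (W_{t/eps^2} - W_{s/eps^2})
print(plain.std(), rescaled.std(), np.sqrt(t - s))          # all approximately equal
\end{verbatim}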
\subsection{Brownian Bridge}\label{subsec:brownianbridge}
Now let us move on to a formal construction of the Brownian bridge. This section is inspired by \cite{animalBBMM}. Let $\{W_t\}_{t \geqslant 0}$ be a Brownian motion and fix a time $T \in (0, \infty)$. Then we can define a \emph{standard one-dimensional Brownian bridge} \cite[pg. 175]{shrevenfinancialmath} by $X = \{X_t: t \in [0, T]\}$, with
\begin{equation*}
X_t := W_t - \frac{t}{T} W_T.
\end{equation*}
This is a standard one-dimensional Brownian bridge in the sense that one can interpret it as a continuous walk on the real line that begins and ends in zero.
We can also write $X_t$ as
\begin{equation*}
X_t = \frac{T-t}{T} W_t - \frac{t}{T} (W_T - W_t).
\end{equation*}
Since $W$ is a Brownian motion, it follows that $W_T - W_t \sim W_{T-t}$. This implies that
\begin{equation*}
\frac{T-t}{T} W_t \sim N\left(0, \frac{t(T-t)^2}{T^2}\right)\;\;\; \text{ and }\;\;\; \frac{t}{T}(W_T - W_t) \sim N\left(0, \frac{t^2(T-t)}{T^2}\right).
\end{equation*}
A Brownian motion has independent increments, so
\begin{equation*}
X_t \sim N\left( 0, \frac{t(T-t)^2}{T^2} + \frac{t^2(T-t)}{T^2}\right) \sim N\left(0, \frac{t(T-t)}{T}\right)
\end{equation*}
as the sum of independent normals is again normal, see \Cref{prop:sumnormals}.
Note that the process is still purely one-dimensional. The application to location data is of course two-dimensional, so we need to extend this idea. Since we expect there to be no preference for a specific cardinal direction, we may assume that the coordinates will be independent. That way, if we let $Y$ be another standard Brownian bridge distributed identically to $X$, then the standard Brownian bridge in two dimensions is simply
\begin{equation*}
\begin{pmatrix} X_t \\ Y_t \end{pmatrix} \sim N_2\left(0, \frac{t(T-t)}{T} \mathbb{I}_2 \right),
\end{equation*}
as a vector of two i.i.d.\ normal random variables becomes a bivariate normal random vector, by \Cref{prop:distrnormals}.
When we look at this process, we see that when $t = 0$ and when $t = T$, the process has an $N_2(0, 0\mathbb{I}_2)$ distribution, which means it is a constant in the origin $O$, i.e.\ the process will always begin and end in $O$.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{figures/bridge_0.pdf}
\caption{A simulated realisation of a two-dimensional Brownian bridge starting and ending in $O$, with $T = 1000$.}
\label{fig:standard2DBB}
\end{figure}
As we can see in the figure above, this process simulates a \emph{round trip}: we begin and end in the origin. However, we will be using this process to fill in a gap between two different points. We can do this easily by adding a time-dependent mean path between these points. Suppose we want to model a path from the origin $O$ to some $d \in \mathbb{R}^2$; then, on average, we assume the path to be a straight line. This is parametrised by the function
\begin{equation}
\mu(t) = \frac{t}{T} d.
\end{equation}
This is a two dimensional vector, so we can simply say that our process becomes $\mu(t) + (X_t, Y_t)$. Recall here that we have made the assumption that the traveller travels in a straight line. With no further information about the traveller or their surroundings, this is a valid assumption. If one has reason to believe the traveller would behave differently, one can model that with a different function for $\mu$.
Since there are many different ways of travel, we want to include another parameter that accounts for this. In the literature it is often called the \emph{diffusion coefficient}, and we denote it by $\sigma_m$. It works as follows. If we consider $\mu(t) + \sigma_m(X_t, Y_t)$, then this is a normally distributed random vector with covariance matrix
\begin{equation*}
\sigma_m^2 \frac{t(T-t)}{T} \mathbb{I}_2.
\end{equation*}
We see that if $\sigma_m$ is small, the process is unlikely to move far from its expected value, in this case $\mu(t)$. If $\sigma_m$ is large, it becomes quite likely to do so. In this way we can model different modes of transport: a train is much more likely to move in a straight line than someone on a bicycle, so the train will have a lower $\sigma_m$ than the bicycle.
With this, we can finally define the process we will use to model what happens in a gap in the data. We define a \emph{general Brownian bridge} $Z = \{Z_t: t \in [0, T]\}$ by
\begin{equation*}
Z_t := \mu(t) + \sigma_m \begin{pmatrix} X_t \\ Y_t \end{pmatrix}.
\end{equation*}
If we let $\sigma^2(t) := \sigma_m^2 t (T-t) / T$, then $Z_t \sim N_2(\mu(t), \sigma^2(t)\mathbb{I}_2)$. Let us look at a simulated realisation of such a process.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{figures/bridge_d.pdf}
\caption{A simulated realisation of a general Brownian bridge with $\sigma_m = 2$, $T = 100$ and $d = (30, 15)$.}
\label{fig:general2DBB}
\end{figure}
We see that this process indeed simulates a path from $O$ to $d$. As is usual for Brownian movement, the path is fairly erratic. However, it does not stray too far from the expected straight-line path, because $\sigma_m$ is quite small.
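To make the construction concrete, the sketch below (an illustration of ours, not necessarily the code behind the figures) simulates a general Brownian bridge by discretising a Brownian motion $W$, forming $X_t = W_t - (t/T)W_T$ in each coordinate and adding $\mu(t)$; it also checks empirically that the variance of each coordinate at $t = T/2$ is close to $\sigma_m^2 T / 4$.
\begin{verbatim}
import numpy as np

def general_bridge(d, T, sigma_m, rng):
    """One realisation of Z_t = mu(t) + sigma_m * (X_t, Y_t) on the grid t = 0, 1, ..., T."""
    t = np.arange(T + 1)[:, None]
    W = np.vstack([np.zeros((1, 2)), rng.normal(size=(T, 2)).cumsum(axis=0)])
    X = W - (t / T) * W[-1]                       # standard bridge: starts and ends at the origin
    return (t / T) * np.asarray(d) + sigma_m * X

rng = np.random.default_rng(0)
d, T, sigma_m = np.array([30.0, 15.0]), 100, 2.0
mid = np.array([general_bridge(d, T, sigma_m, rng)[T // 2] for _ in range(5000)])
print(mid.var(axis=0), sigma_m ** 2 * T / 4)      # empirical variance vs sigma_m^2 * T / 4
\end{verbatim}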
\clearpage
\section{Model} \label{sec:model}
We are interested in modelling gaps in the location data of a traveller. A common way to model movement of a person or animal is with the Brownian Bridge Movement Model, or BBMM \cite{animalBBMM}. This can be used to fill the gaps in between each measurement, or to fill in a larger gap, where data is missing. We will be doing the latter. Recall that we are not interested in the exact path a traveller takes, as this can be very different with each realisation. Furthermore, to be realistic it should depend on the traveller's environment, which is something that we cannot take into consideration, as it would make the problem much too complex. Instead, we are interested in more general properties, like the expected length of the travellers path and the RoG. We will define these concepts in the following sections.
\subsection{Expectation of Discrete Brownian Bridge Path Length}\label{subsec:pathlength}
In this section we will derive an expression for the expectation of the length of a realisation of a discretised Brownian bridge. To do this, we first need to discretise the continuous time Brownian bridge we have already seen, and then we define a path using that discrete process. We will investigate the distribution of the length of that path, and look at some limiting properties of its expected value.
\subsubsection{Path Length of a Discretised Brownian Bridge}
The paths that are defined in Section \ref{subsec:brownianbridge} are all paths continuous over time. We discretise the general Brownian bridge process as follows.
Consider the process $Z = \{Z_t : t \in [0, T]\}$ with
\begin{equation} \label{eq:generalbridge}
Z_t = \mu(t) + \sigma_m \begin{pmatrix} X_t \\ Y_t \end{pmatrix} \sim N_2\left(\mu(t), \sigma^2(t) \mathbb{I}_{2}\right),
\end{equation}
where $\mu(t) = td/T$ and $\sigma^2(t) = \sigma_m^2t(T-t) / T$, as defined in Section \ref{subsec:brownianbridge}. Fix $n \in \mathbb{N}$ and let $0 = t_0 < t_1 < \dots < t_{n-1} < t_n = T$ be any discretisation of the interval $[0, T]$. A path can now be defined as taking straight lines between these $n+1$ points. That is, define the path
\begin{equation*}
t \mapsto \sum_{k =1}^n \mathbbm{1}_{\left(t_{k-1}, t_k \right]}(t) \left(Z_{t_{k-1}} + (Z_{t_k} - Z_{t_{k-1}}) \frac{t - t_{k-1}}{t_k - t_{k-1}} \right).
\end{equation*}
The length of this path is then given by
\begin{equation*}
\ell^n(Z) = \sum_{k=1}^n \norm{Z_{t_k} - Z_{t_{k-1}}}.
\end{equation*}
This path length is a random variable, and though it is a function of a process with a fairly simple distribution, the square root makes the situation a bit more complicated. In the next part we will find the distribution of this path length.
\subsubsection{Distribution and Results} \label{subsec:bbresults}
To find an explicit distribution for $\ell^n(Z)$, we need to look at the distribution of $Z_{t_k} - Z_{t_{k-1}}$. Recall from \eqref{eq:generalbridge} that $Z_t$ has a normal distribution with parameters $\mu(t)$ and $\sigma^2(t)$. Then
\begin{equation*}
Z_{t_k} - Z_{t_{k-1}} = \frac{t_k - t_{k-1}}{T}d + \sigma_m \begin{pmatrix} X_{t_k} - X_{t_{k-1}} \\ Y_{t_k} - Y_{t_{k-1}} \end{pmatrix},
\end{equation*}
so we first need to find the distributions of $X_{t_k} - X_{t_{k-1}}$ and $Y_{t_k} - Y_{t_{k-1}}$. This follows fairly straightforwardly from one of the properties of the Brownian motion, as seen in the proof of \Cref{prop:1dbridgedifference}. This proposition tells us that $X_{t_k} - X_{t_{k-1}} \sim X_{t_k - t_{k-1}}$ and $Y_{t_k} - Y_{t_{k-1}} \sim Y_{t_k - t_{k-1}}$, so
\begin{equation*}
Z_{t_k} - Z_{t_{k-1}} \sim \mu(t_k - t_{k-1}) + \sigma_m \begin{pmatrix} X_{t_k - t_{k-1}} \\ Y_{t_k - t_{k-1}} \end{pmatrix} = Z_{t_k - t_{k-1}}.
\end{equation*}
This implies that we can write
\begin{equation*}
\ell^n(Z) \sim \sum_{k=1}^n \norm{Z_{t_k - t_{k-1}}}.
\end{equation*}
This holds for any discretisation $0 = t_0 < \dots < t_n = T$ of the interval $[0, T]$. A reasonable assumption would be to split $[0, T]$ into $n$ intervals of equal length by setting $t_k = kT/n$. Then, $Z_{t_k - t_{k-1}} = Z_{T/n}$ for each $k \in \{1, \dots, n\}$. That means that
\begin{equation} \label{eq:discpathlength}
\ell^n(Z) \sim \sum_{k=1}^n \norm{Z_{T/n}} = n \norm{Z_{T/n}} = \norm{nZ_{T/n}}.
\end{equation}
As $\mu(T/n) = d/n$ and $\sigma^2(T/n) = \sigma_m^2T(n-1) / n^2$, we see that $nZ_{T/n}$ is distributed as
\begin{equation*}
nZ_{T/n} \sim N_2\left( d, \sigma_m^2T(n-1)\mathbb{I}_2 \right).
\end{equation*}
\Cref{def:riciandistribution} now tells us that $\ell^n(Z) \sim \norm{nZ_{T/n}}$ has the Rice distribution with parameters
\begin{equation*}
A = \norm{d}, \;\;\; B = \sigma_m^2T(n-1).
\end{equation*}
By that same definition we find that the expected value of the path length $\ell^n(Z)$ is given by
\begin{equation}\label{eq:pathlengthexpectation}
\mathbb{E}[\ell^n(Z)] = \sigma_m\sqrt{T(n-1)}\sqrt{\frac{\pi}{2}}L_{\frac{1}{2}}\left(-\frac{\norm{d}^{2}}{2\sigma_{m}^{2}T(n-1)}\right).
\end{equation}
Note here that $L_{\frac12}$ is a Laguerre function \cite{laguerre}. This expected value is the main result of this section. In the next part, we will look at limits of this expectation, but first we will see that this expected value has a lower bound of $\norm{d}$.
This is quite a logical lower bound: the expected length of a path from $O$ to $d$ cannot be smaller than the straight-line distance $\norm{d}$. Applying Jensen's inequality \cite[pg. 107]{IKSboek}
to the convex function $(x, y) \mapsto \norm{(x, y)}$, we indeed see that
\begin{equation*}
\mathbb{E} \left[\ell^n(Z)\right] = \mathbb{E} \norm{nZ_{T/n}} = n\mathbb{E} \norm{Z_{T/n}} \geqslant n \norm{\mathbb{E}\left[Z_{T/n}\right]} = \norm{d}.
\end{equation*}
\subsubsection{Limiting Properties of the Expected Path Length}\label{subsec:limits}
Let us look at some limits of the expected path length, i.e.\ the function
\begin{equation*}
f(\sigma_m, T, d, n) := \mathbb{E}[\ell^n(Z)] = \sigma_m \sqrt{T(n-1)}\sqrt{\frac{\pi}2} L_{\frac12}\left(-\frac{\norm{d}^2}{2\sigma_m^2 T(n-1)} \right),
\end{equation*}
parameter by parameter. We will use the following property of the Laguerre function to calculate limits of $f$,
\begin{equation} \label{eq:limit}
\lim_{x \to \infty} \sqrt{\frac{\pi}{2x}} L_{\frac12}\left( - \frac12 x \right) = 1.
\end{equation}
Although it is possible to show this analytically using Bessel functions, we have simply plugged this expression into graphing software.
First, let us look at limits of $\sigma_m$. If $\sigma_m \downarrow 0$, we know for certain that the traveller moves over the straight line from origin $O$ to endpoint $d$, without deviating from that path. However, if $\sigma_m$ is large, the traveller is very likely to deviate from the straight line. So likely in fact, that the length of the path they walk blows up, as they might go very far from the straight line before they finally end up in $d$. Using \eqref{eq:limit}, we indeed see that
\begin{equation*}
\lim_{\sigma_m \downarrow 0} f(\sigma_m, T, d, n) = \norm{d}, \;\;\; \lim_{\sigma_m \to \infty} f(\sigma_m, T, d, n) = \infty.
\end{equation*}
For $T$ the intuition is a bit different. We get the limits
\begin{equation*}
\lim_{T \downarrow 0} f(\sigma_m, T, d, n) = \norm{d}, \;\;\; \lim_{T \to \infty} f(\sigma_m, T, d, n) = \infty,
\end{equation*}
but if $T = 0$, our problem is undefined whenever $d \neq O$. When $d = O$, the Brownian bridge process is just a constant, namely $O$, so then the limit is reasonable, but useless. The limit $T \to \infty$ makes a bit more sense. Then the traveller wanders around so long, they will never arrive, which means that their path length must be infinite as well. Note that although $n$ is discrete, we might consider an analytic continuation. In this case, the behaviour of $n - 1$ is exactly identical to that of $T$. The limit $n \to 1$ then corresponds to estimating the path by a single line segment for which we get an expectation of $\norm{d}$. On the other hand, if $n$ goes to infinity, the path length goes to infinity. This is as expected, since as $n$ goes to infinity, the random walk becomes Brownian motion, which has an infinite path length.
Moving on to limits of $d$, it follows that since $\norm{d}$ is a lower bound for the expected path length, the expected path length blows up as $\norm{d} \to \infty$. The limit where $\norm{d} \downarrow 0$, i.e.\ $d \to O$, is a round trip. We find
\begin{equation*}
\lim_{\norm{d} \to \infty} f(\sigma_m, T, d, n) = \infty, \;\;\; \lim_{\norm{d} \downarrow 0} f(\sigma_m, T, d, n) = \sigma_m \sqrt{\frac\pi2 T(n-1)}.
\end{equation*}
Recall that $\ell^n(Z)$ has the Rice distribution. It turns out that when $\mu(t) = 0$, as it is when $d = O$, the distribution of $\ell^n(Z)$ is a special case of the Rice distribution, called the Rayleigh distribution, see \Cref{def:rayleighdistribution}. This distribution indeed has the expected value given above.
We see that all limits of this function are as we would expect them to be, so \eqref{eq:pathlengthexpectation} does seem to make sense. Since $d$ and $T$ follow directly from the data, and the choice of $n$ is up to us, the only thing we need to consider is how to choose $\sigma_m$. It turns out that we can estimate a value for $\sigma_m$ based on the data points we do have, i.e.\ the data points before and/or after the gap.
\subsection{Estimating the Diffusion Coefficient}
It is essential to estimate $\sigma_m$ correctly, since the modelled path depends heavily on it. We expect that this diffusion coefficient can be very different for two distinct travel movements. For example, a train moves in a very straight line compared to a pedestrian walking in a city centre, and their diffusion coefficients should reflect that. Note that we cannot determine a single diffusion coefficient per mode of transportation either, as there are many more factors that influence the value of $\sigma_m$. For instance, a car driving on a highway is expected to have a different diffusion coefficient than a car driving in the centre of a city, as a car on a highway has a very linear path while in a city centre there are a lot of opportunities to turn.
Hence, we want to estimate $\sigma_m$ for some travel movement based on the available location data for that movement. To do this, we consider location data in a single movement, with a single mode of transportation. The following is based on the section \emph{Parameter estimation} in \cite{animalBBMM}.
Let us assume we have $2n+1$ location measurements, denoted by $(z_0, t_0), (z_1, t_1), \dots, (z_{2n}, t_{2n})$, where $z \in \mathbb{R}^2$ is the travellers location at time $t$. On each time interval $[t_{2k}, t_{2k+2}]$ we will construct a Brownian bridge with diffusion coefficient $\sigma_m$ between $z_{2k}$ and $z_{2k+2}$, and then we will view $z_{2k+1}$ as a realisation of that Brownian bridge at time $t_{2k+1}$. Doing this for every increment, we can find a maximum likelihood estimator for $\sigma_m$.
Let us look at the distribution of the random variable of which we assume that $z_{2k+1}$ is a realisation. Define
\begin{align*}
T_k = t_{2k+2} - t_{2k},
\qquad d_k = z_{2k+2} - z_{2k},
\qquad m_k(t) = \frac{t}{T_k}d_k,
\qquad s_k^2(t) = \sigma_m^{2} \frac{t(T_k - t)}{T_k}
\end{align*}
for $t \in [0, T_k]$.\footnote{Note that compared to the discussion in \cite{animalBBMM}, we ensure that the time interval for each Brownian bridge $k$ starts at 0. Furthermore, in \cite{animalBBMM} there is a parameter $\delta$ which represents the location error in the data. We assume that our data contains no measurement error.} Now the Brownian bridge between $z_{2k}$ and $z_{2k+2}$ looks like $z_{2k} + Z^k$, where $Z^k = \{Z_{t - t_{2k}}^k: t \in [t_{2k}, t_{2k+2}]\}$ with
\begin{equation*}
Z_t^k \sim N_2(m_k(t), s_k^2(t)\mathbb{I}_2).
\end{equation*}
The observation $z_{2k+1}$ is then a realisation of the random vector $z_{2k} + Z_{t_{2k+1} - t_{2k}}^k$. This random vector has the probability density function
\begin{equation*}
f_k(z) = \frac{1}{2\pi s_k^2(t_{2k+1} - t_{2k})} \exp \left( -\frac12 \frac{\norm{z - z_{2k} - m_k(t_{2k+1} - t_{2k})}^2}{s_k^2(t_{2k+1} - t_{2k})} \right),
\end{equation*}
see \Cref{def:pdf2}. We assume that all these bridges are independent, so the likelihood function of $\sigma_m$ given the realisations $(z_0, t_0), \dots, (z_{2n}, t_{2n})$ is
\begin{equation}\label{eq:likelihood}
L(\sigma_m \mid (z_0, t_0), \dots, (z_{2n}, t_{2n})) = \prod_{k=0}^{n-1} f_k(z_{2k+1}).
\end{equation}
As all quantities in this equation are known except for $\sigma_m$, we can estimate it with maximum likelihood: we numerically optimise the likelihood function over values of $\sigma_m$, choosing the value for which the odd locations are most likely to have come from the Brownian bridges between the even locations.
Now that we have estimated $\sigma_m$, we can estimate the path length through Equation \ref{eq:pathlengthexpectation}. Here, we have fixed $n$ to be the number of missing data points. However, this might not be the best choice, as a choice of discretisation is essentially a choice of minimum straight path length. Increasing $n$ breaks a straight segment up into a few smaller segments, increasing the path length. So, for different types of processes, we might tune the number of points in our discretisation to model the path length behaviour of the considered process more accurately. One might use data for which the path length is known to get an estimate of which $n$ gives a good estimate of the path length. We will not consider this idea further because it goes beyond the scope of this report.
\subsection{Radius of Gyration} \label{subsec:rogtheory}
Consider a dataset of the form $(z_0, t_0), \dots, (z_n, t_n)$, so at time $t_i$ we observe the location $z_i$. We call this a path $P$.\footnote{This notation will be useful later.} Considering the fact that our location data is given as coordinates in a plane, we can model the RoG of $P$ as
\begin{equation}\label{eq:RoG}
\text{RoG} = \sqrt{\frac{1}{n}\sum_{i=1}^n \norm{z_i - z_c}^2},
\end{equation}
where $z_c = \bar{z}_n = \frac1n \sum_{i=1}^n z_i$ can be seen as a weighted centre, akin to a centre of gravity.
This definition for the RoG originates from equation (4) in \cite{9378374}. Note that we have simplified this formula due to our assumption that the location points are coordinates in a plane.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/rog.pdf}
\caption{A path generated by the fixed velocity random walk of $1000$ time steps with $\sigma =1$. A circle is plotted with the RoG as a radius. The blue dot is the centre. This circle can be seen as the ``territory'' of the person.}
\label{fig:rog}
\end{figure}
\clearpage
\section{Results of the Simulation}\label{sec:simulation}
In Section \ref{subsec:implementation} we have discussed the implementation of the Brownian bridge model. After that, we described the setup of our experiments.
One of the topics that we have discussed is the filling of a gap by a Brownian bridge. A question regarding this topic could for example be: \textit{If we use a Brownian bridge with an estimated diffusion coefficient to fill a gap, is the path length then similar to the original part of the path that is missing?}
In this section, we will show the results of the experiments. The main idea is to see how well our estimator for the path length performs compared to linear interpolation (drawing a straight line that connects the two sub-paths separated by the gap).
Also recall that in Section \ref{subsec:implementation} and Chapter \ref{sec:theory}, we have briefly described a procedure to estimate the diffusion coefficient under the assumption that the path follows a Brownian bridge. To assess the correctness of our estimator, we have simulated a thousand Brownian bridges with different variances. For each simulated path, the estimation procedure is used to obtain the most likely estimate of $\sigma_m$. Thereafter, we have used this $\sigma_m$ to derive the corresponding variance $\sigma$. In Figure \ref{fig:graph2}, a plot of the actual variance versus this estimated variance is displayed.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figures/estimated_sigma.pdf}
\caption{The actual variance $\sigma$ ranges from $0$ to $1$. On the $x$-axis, the actual variance is given. On the $y$-axis, the estimated variance is given. The orange line is drawn as a reference to the perfect estimation or ``true value''. So, if the actual variance is $1$, on the orange line the estimated variance is also $1$. For each actual variance, the blue dots are the estimated variances.}
\label{fig:graph2}
\end{figure}
We see that our estimation procedure comes quite close to the actual diffusion coefficient. As the diffusion coefficient becomes greater, the absolute error seems to increase.
\subsection{Path Length}
In Section \ref{subsec:experiments} we have fully described our experiment to quantify the accuracy of our estimation procedure. We have performed those simulations and obtained the results depicted in \Cref{fig:subfigures}. The error of our estimation is shown as a fraction of the actual path length: the estimated path length divided by the actual path length, so an error of $1$ corresponds to a perfect estimation.
In Figure \ref{fig:subfigures}, it can be seen how our estimation procedure and the straight-line estimate performed on different kinds of datasets. For all described choices of parameters, a box plot is shown with the estimated path length as a fraction of the actual path length.
\begin{figure}[H]
\centering
\begin{subfigure}[b]
{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ConclusionBB.pdf}
\caption{Results for discrete Brownian motion model}
\label{fig:brownianmotion}
\end{subfigure}
\begin{subfigure}[b]
{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ConclusionARW.pdf}
\caption{Results for angular random walk model}
\label{fig:angularrandomwalk}
\end{subfigure}
\begin{subfigure}[b]
{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ConclusionISM.pdf}
\caption{Results for random walk with internal state model.}
\label{fig:internalstate}
\end{subfigure}
\begin{subfigure}[b]
{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ConclusionRNT.pdf}
\caption{Results for run and tumble model}
\label{fig:runandtumble}
\end{subfigure}
\caption{In these figures it can be seen how our estimation procedure performed compared to the straight-line estimator. In each figure, the $x$-axis shows the choice of parameters, and for each choice there are two boxplots: one describing the estimates from our estimation procedure and one for linear interpolation. For the internal state model there are $19$ outliers not visible in the first boxplot; the greatest outlier is $5.89$.}
\label{fig:subfigures}
\end{figure}
In \Cref{fig:brownianmotion} we see that our estimator for the path length consistently performs a lot better than the straight-line estimator. This is not unexpected, as in our model we assume the travel movement to be a realisation of a Brownian bridge. \Cref{fig:angularrandomwalk} shows that especially for large values of $\sigma$, our estimator performs better. Moreover, we also see that our estimation procedure generally underestimates the path length. \Cref{fig:internalstate} looks a lot like \Cref{fig:brownianmotion}, but our estimator applied to random walks generated by the internal state model has more outliers, especially for low values of $S$. Lastly, \Cref{fig:runandtumble} shows a trend where our estimation procedure works better for higher values of $l$. Similarly to \Cref{fig:angularrandomwalk}, our estimation procedure seems to underestimate the path length.
We conclude that, as we expected, linear interpolation always underestimates the path length, and that this underestimation becomes greater for more random behaviour.
\subsection{Radius of Gyration} \label{subsec:rog}
Similar comparisons have been performed regarding the RoG. First, we will quantify the error of the procedure as
\begin{equation*}
\text{error} = \frac{\text{RoG}(P_{\text{after}})}{\text{RoG}(P_{\text{before}})},\end{equation*}
where $P_{\text{after}}$ is the path after filling the gap and $P_{\text{before}}$ is the simulated path before the gap was created. In Figure \ref{fig:rogprocedure}, the results are shown. Every error value has been rounded to 1 decimal place. The figure shows histograms for the different data-generating models, where for each rounded error the number of simulations (out of 1000) with that error can be found. An error of $1$ implies that the RoG has been perfectly estimated. If the error is less than $1$, the RoG is underestimated; if the error is greater than $1$, the RoG is overestimated.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/roghist.pdf}
\caption{Fixed velocity random walk.}
\label{fig:roghistofvrw}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/roghist_farw.pdf}
\caption{Fixed angular random walk.}
\label{fig:roghistofarw}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/roghist_runand.pdf}
\caption{Run-and-tumble.}
\label{fig:roghistorun}
\end{subfigure}
\caption{For every path generating model, a thousand paths have been simulated. For each path, we have omitted the first half of the location data and filled the gap by a Brownian bridge that follows from our estimation procedure. On the $x$-axis one can find the error and on the $y$-axis the number of paths with this error.}
\label{fig:rogprocedure}
\end{figure}
For each model, the majority of the errors are less than $1$, which implies that we tend to underestimate the RoG. In general, interpolating with the Brownian bridge gives a path that stays closer to the centre point. There are some outliers with a strong underestimation or overestimation; these are the cases where the missing part of the path behaves very differently from the part of the path that is available. Note that half of the path is deleted and that our data-generating models can indeed produce such behaviour, where the first half of the path is close to a straight line while the second half deviates strongly from it.
In \Cref{tab:rogresults}, we summarise these results. Again, one can see that on average the RoG is underestimated. The paths obtained from the fixed velocity random walk are approximated best in terms of RoG, although the errors show a large deviation from the mean. Again, this shows how difficult it is to capture a trend when half of the path is missing.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& Mean $\text{RoG}(P_{\text{before}})$ & Mean $\text{RoG}(P_{\text{after}})$ & Mean error & Std. dev.\\
\hline
Fixed velocity & 12.300 & 9.807 & 0.841 & 0.231 \\
\hline
Fixed angular & 194.250 & 112.265 & 0.600 & 0.135 \\
\hline
Run-and-tumble & 25.693 & 18.130 & 0.737 & 0.198 \\
\hline
\end{tabular}
\caption{The mean $\text{RoG}(P_{\text{before}})$, $\text{RoG}(P_{\text{after}})$, error and standard deviation of the errors over the $1000$ paths per model. Recall that the error is not a difference but a fraction that measures the $\text{RoG}(P_{\text{after}})$ relative to the actual RoG, i.e. $\text{RoG}(P_{\text{before}})$.}
\label{tab:rogresults}
\end{table}
In the same way, we have obtained results by using linear interpolation to fill the gap, instead of our estimation procedure. In \Cref{fig:rogstraightline} these results are shown in similar fashion as above.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rog_straight_fvrw.pdf}
\caption{Fixed velocity random walk.}
\label{fig:roghistofvrwstraight}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rog_straight_farw.pdf}
\caption{Fixed angular random walk.}
\label{fig:roghistofarwstraight}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rog_straight_runand.pdf}
\caption{Run-and-tumble.}
\label{fig:roghistorunstraight}
\end{subfigure}
\caption{For every path generating model, a thousand paths have been simulated. For each path, we have omitted the first half of the location data and filled the gap by a straight line from the origin to the end of the gap. On the $x$-axis one can find the error and on the $y$-axis the number of paths with this error.}
\label{fig:rogstraightline}
\end{figure}
For each model, linear interpolation also tends to underestimate the RoG. However, we can also see that there is less overestimation. Besides this, the overestimation is limited to an error of $1.2$.
In \Cref{tab:rogresultsstraight}, we summarise these results. Again, one can see that on average the RoG is underestimated. With linear interpolation, the paths obtained from the run-and-tumble model are approximated best in terms of RoG. However, for each model, the mean RoG after linear interpolation is smaller than the mean RoG after interpolating with a Brownian bridge. Also note that the standard deviation of the errors is smaller, so there is more stability in this case.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& Mean $\text{RoG}(P_{\text{before}})$ & Mean $\text{RoG}(P_{\text{after}})$ & Mean error & Std. dev.\\
\hline
Fixed velocity & 12.319 & 8.115 & 0.684 & 0.188 \\
\hline
Fixed angular & 196.910 & 113.531 & 0.594 & 0.130 \\
\hline
Run-and-tumble & 25.437 & 16.933 & 0.690 & 0.184 \\
\hline
\end{tabular}
\caption{The mean $\text{RoG}(P_{\text{before}})$, $\text{RoG}(P_{\text{after}})$, error and standard deviation of the errors over the $1000$ paths per model. Note that $P_{\text{after}}$ in this context means the path after filling the gap by linear interpolation.}
\label{tab:rogresultsstraight}
\end{table}
\clearpage
\section{Discussion}\label{sec:discussion}
\subsection{Interpretation of the Theoretical Results}
Suppose we have a dataset
\begin{equation*}
(z_0, t_0), \dots, (z_i, t_i), (z_{i+j+1}, t_{i+j+1}), \dots, (z_k, t_k),
\end{equation*}
so the data points $(z_{i+1}, t_{i+1}), \dots, (z_{i+j}, t_{i+j})$ are missing. Our first step would be to calculate an estimator for $\sigma_m$ based on the data that we do have, as explained in Section \ref{subsec:implementation}. Now, we have $T = t_{i+j+1} - t_i$ and $d = z_{i+j+1} - z_i$. For now, let us choose our discretisation such that we simulate exactly one point for each missing data point, so $n =T$. In this case, an estimator for the expected path length of the gap is given by
\begin{equation*}
\hat{L} = \hat\sigma_m\sqrt{T(T-1)}\sqrt{\frac{\pi}{2}}L_{\frac{1}{2}}\left(-\frac{\norm{d}^{2}}{2\hat\sigma_{m}^{2}T(T-1)}\right),
\end{equation*}
where $L_{\frac12}$ is a Laguerre function \cite{laguerre}. As we have seen in Section \ref{subsec:pathlength}, this estimator is bounded from below by $\norm{d}$. We think that $\hat{L}$ can be a more accurate estimator for the length of the path taken in the gap than $\norm{d}$.
Note that it can be quite hard to estimate $\sigma_m$. First, one needs to make sure that the properties of the travel movement stay the same for all the points used to estimate $\sigma_m$ as well as for the points that are missing. This also means that in principle it does not matter how large the gap is, as long as the properties of the travel movement stay the same.
It can also be quite computationally expensive to calculate $\sigma_m$ for every single gap one encounters. Therefore, one could do empirical research into what value of $\sigma_m$ belongs to which travel movement. A couple of parameters $\sigma_m$ could depend on are the following:
\begin{itemize}
\item Mode of transportation (walking, cycling, riding a car, taking the train, et cetera)
\item Demographic properties of the respondent: age, medical condition, marital status, owning a car, owning a dog, address, job (and therefore educational background)
\item Area of the travel movement (rural area, city centre, suburban, etc.)
\item Temporal factors (time of day and day of the week)
\item The weather
\item Motivation and mood of traveller.
\end{itemize}
\subsection{Interpretation of the Experimental Results}
As expected, the estimation procedure is very accurate on data generated from the discrete Brownian motion model. This should not be a surprise and mainly shows that our estimation procedure is correctly implemented. Nonetheless, this is a natural form of motion that can certainly occur, so this is already a good sign for our procedure. The loss of accuracy that the straight-line estimate suffers for more chaotic behaviour does not appear for our procedure.
For the angular random walk model we see that there is a local minimum in the accuracy of our procedure. This mainly seems to be because our procedure increases in precision when more chaotic behaviour appears; moreover, almost straight-line movement is also relatively easy to detect and estimate. There seem to be quite a few outliers for both our procedure and the straight-line estimate. An explanation for these outliers was not found, but as there were $1000$ simulations some outliers can be expected.
For the random walk with internal state model we again see that our procedure works a lot better than the straight-line estimate. It is interesting, however, that this seems to be the only model where our procedure makes big overestimations. In the most extreme of the $1000$ simulations it overestimated the distance travelled by a factor of $5.89$. It also seems that the straight-line estimate does well in these outlier cases. A complete explanation for these outliers cannot be given at the moment, but this does correspond with the idea that this model has different possible states. If the path was moving around before and after the gap but had a long pause in the middle, our estimation procedure does not account for this other state.
For the run-and-tumble model, our procedure does not work as well with this structure of behaviour; it did quite poorly for small variances and little chaos. As less of the run-and-tumble structure is visible, due to higher variances and more random behaviour, our procedure starts to perform better. In all cases it still outperformed the straight-line approach on average.
Besides the path length, we have also obtained results regarding the RoG. In these results, we have defined an error that quantifies how well the RoG is being estimated. Again, our estimation procedure was compared with linear interpolation. On average, the estimation procedure seems to underestimate the RoG as the error is on average less than $1$. For the fixed velocity random walk, we see that the mean error comes closest to $1$. Therefore, the RoG is estimated the best for the fixed velocity random walk. The reason for this could be that the movement is more similar to the Brownian bridge. Intuitively, it has the same deviation from its expected path. Another aspect is the size of the RoG. It could also be that a larger RoG is harder to estimate than a smaller RoG. In other words, a path that has more points far away from the centre point is more difficult to estimate. This is in line with our intuition, as paths with unexpected behaviour are in general more difficult to comprehend.
In conclusion, our estimates seem to work fine on paths that are modelled by some stochastic process. We have not looked at empirical data. However, we believe that our procedure can also be useful for such data. The gaps can be filled by the Brownian bridge with an estimated diffusion coefficient, and we can see that the path length of this Brownian bridge can be realistic: it comes close to the underlying path of the gap. For the radius of gyration, there are more dependencies which should be examined. One could also look at the RoG of the incomplete path and use this as a parameter for estimating the diffusion coefficient. This way, it could also be possible to tackle incomplete paths with a larger RoG.
\subsection{Possible Flaws in the Experiment}
An important difficulty in testing our estimation procedure is that we were not able to use real location data of actual travellers. All of our experimentation has been performed on artificial data, which we could not compare to real-life data, so we cannot say which kind of model depicts real life best. For example, we did not model a change in mode of transport, but we expect this to occur quite regularly in real-life travel movements. A person walking to their car and then driving yields a diffusion coefficient that changes over time; in our procedure this is averaged over all available data points, which gives a less complete picture. Moreover, there are many reasonable assumptions one can make for real-life movement, such as a velocity or turn frequency that depends on the mode of travel. Our procedure does not take any such assumptions into account.
Another, more theoretical, problem with our procedure is that Brownian motion is highly dependent on the discretisation of the model. If we were to take a finer discretisation, the expected value of the path length would increase. Our procedure is based on a discretisation with one point every second. This is an arbitrary choice, although it does match up with the actual measurements. One way to avoid this problem would be to use our estimation procedure to calculate the expected length for the intervals for which we do have complete data. Then we can use the error on these known intervals to tweak the expected value for the gap.
Our procedure works by estimating the variance as if the given movement were Brownian motion. The variance could also have been estimated differently, for example depending on the scale and the discretisation.
\clearpage
\section{Conclusion and Outlook}\label{sec:conclusion}
In summary, we have used the Brownian Bridge Movement Model to model a person's movement and to fill up gaps between measurements. Through a maximum likelihood estimator, the relevant parameter can be estimated. By discretising a Brownian bridge, we have found a distribution for the path length. This distribution follows from Equation \ref{eq:discpathlength}.
To test the Brownian bridge model, we have applied it to gaps in four numerically simulated processes meant to mimic travel data. Then, we estimated two properties of the simulated paths: the length of a part of the path and the radius of gyration.
From these numerical results we can conclude that we consistently estimate the path length better than a straight line does. This means that when we use the Brownian bridge to fill a gap, it gives a better estimate of the length of the deleted sub-path than a straight-line procedure. This is a meaningful result, as it could indicate that Brownian bridge interpolation is more realistic than linear interpolation.
We have also measured the estimation procedure in terms of RoG. Again, we have compared the estimation procedure with linear interpolation and the results show that the estimation procedure has a better estimate for the RoG than that of a straight line. However, we still strongly underestimate the actual RoG. Thus, there seems to be a small improvement, but there is still room for more.
\subsection{Outlook}
First of all, due to privacy regulations we could not access any data from actual people travelling, so it is natural to suggest as future work that CBS verifies our results with their data.
The CBS could also look into the sample rate the app uses.
Perhaps equally good results can be achieved by lowering the sample rate, or even making the rate dynamic.
For instance, a high sample rate could be used in cases where the diffusion coefficient is high, meaning the travel behaviour is less predictable, and a low rate in cases where the travel behaviour is very predictable and has a low diffusion coefficient.
A consequence of this could be that the app uses less battery power, making the amount of data per survey increase both because the internal software is less likely to shut the app to preserve battery power and because respondents are more likely to continue with the survey if it is less of a battery strain.
In particular, future work can be done on estimating the diffusion coefficient in various cases. For instance, empirical data can be used to decide on which variables the diffusion coefficient depends. Possible variables could be the mode of transportation, the demographic properties of the respondent, the area of the travel movement, temporal factors or the weather.
\clearpage
\bibliographystyle{plain}
\section{Introduction}
\label{intro}
How athletes perform as they age is a question that undermines models of player valuation and prediction across sport. The impact of age is typically measured using age curves, which reflect the expected average performance at each age among all players that participate.
Because players receive different opportunities at different ages, a selection bias exists, one that is linked to both entry into a league (more talented players typically start younger) and drop out (less talented players stop earlier). More talented players are more likely to be observed at both ends of the age curve, which makes it challenging to extrapolate the impact of age. For example, Figure \ref{fig:observedwithsplinewithoutimputation} shows standardized points per game for National Hockey League (NHL) forwards, along with a spline regression model, one that uses age as the only predictor, fit to that observed data. There seems to be a slight bump at 28 or 29, but then the curve starts \textit{increasing}, which is the opposite of what we would expect from an age curve. Because average and below average players tend drop out, the only players competing in their late 30s tend to be very good players.
In Figure \ref{fig:exampleplayers}, we highlight observed data for four example players who were in the league at least until their late 30s (left) and three players who were in the NHL through their late 20s and early 30s (right). These data for individual players initially exhibit the same general trend -- increasing through their mid-to-late 20s. Unlike Figure \ref{fig:observedwithsplinewithoutimputation}, performance for these players decreases in their 30s, which is closer to what we'd expect in an age curve.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{img/StandardizedPointsPerGameByAgeNHLForwards.jpg}
\caption{Standardized points per game by age for NHL forwards between the 1995-96 and 2018-19 seasons, along with a cubic spline model fit to the data. The spline model does not decrease as it should for older ages because of selection bias: the only players that are observed at older ages are very good players. }
\label{fig:observedwithsplinewithoutimputation}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{img/StandardizedPointsPerGameByAge4examples.jpg}
\includegraphics[width=.45\textwidth]{img/StandardizedPointsPerGameByAge3examples.jpg}
\caption{Standardized points per game for 4 examples of very good forwards who played in the NHL until at least their late 30s (left), and 3 examples of good NHL forwards who did not stay in the NHL through their late 30s (right). Individual player statistics over time tend to follow an inverted U-shaped trend, increasing from the mid to late 20s and decreasing thereafter. For the players on the right, once their production fell, they exited the league, and they have no observed data in their late 30s that would factor into the curve generated in Figure \ref{fig:observedwithsplinewithoutimputation}.}
\label{fig:exampleplayers}
\end{figure}
Along with selection bias, a second challenge when estimating age curves is lower sample sizes at higher ages. For some methods, a lack of data for the mid-to-late 30s can lead to curves with higher variance in those age ranges. In Figure \ref{fig:bootstrap}, we show age curves estimated using the Delta method (left; \cite{mlicht2009}) and a cubic spline using only observed data and no player effects (right) for 100 bootstrapped samples of the NHL forwards data. Both methods are described further in Section \ref{sec:methodologies}. Variance is low for some ages (e.g. mid 20s) but increases for older ages as fewer observed data are available.
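As an illustration of how such bootstrapped curves can be produced, the following \texttt{R} sketch resamples players with replacement and refits a natural spline to each resample. The data frame \texttt{nhl}, with columns \texttt{player}, \texttt{age} and \texttt{y} (standardized points per game), is a hypothetical stand-in for the NHL forwards data rather than the exact code used for Figure \ref{fig:bootstrap}.
\begin{verbatim}
# Sketch only: bootstrap age curves from observed data with a natural spline.
# `nhl` (columns player, age, y) is an assumed stand-in for the forwards data.
library(splines)
set.seed(1)
ages <- 18:40
boot_curves <- replicate(100, {
  ids  <- sample(unique(nhl$player), replace = TRUE)   # resample players
  samp <- do.call(rbind, lapply(ids, function(p) nhl[nhl$player == p, ]))
  fit  <- lm(y ~ ns(age, df = 6), data = samp)         # observed data only
  predict(fit, newdata = data.frame(age = ages))
})
matplot(ages, boot_curves, type = "l", lty = 1, col = "gray")
\end{verbatim}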
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{img/BootstrappedAgeCurvesforNHLForwardsDeltaMethod.jpg}
\includegraphics[width=.45\textwidth]{img/BootstrappedAgeCurvesforNHLForwardsSpline.jpg}
\caption{100 bootstrapped age curves using the Delta Method (left) and natural spline regression without player effects (right) with the NHL forwards data. Variation in the age curves increases for older ages where there are fewer observed data.}
\label{fig:bootstrap}
\end{figure}
Our goal in this paper is to assess the performance of various methods for age curve estimation under the selection bias of player entry and exit and issues of small samples at younger and older ages. We propose several methods for player age curve estimation, introduce a missing data framework, and compare these new methods to more familiar approaches, including both parametric and semi-parametric modeling. Using simulations derived from age patterns in NHL data, we compare the accuracy of these methods with respect to estimating a true, known age curve. We conclude by exploring these methods using actual NHL data.
\section{Methodologies} \label{sec:methodologies}
For consistent terminology, we begin with the following notation. Let $Y_{it}$ represent the performance value of player $i$ at age $t$. We assume discrete observations at each age (one per year), though it is possible to treat age in a more granular way.
A basic model is:
\begin{equation}
Y_{it} = g(t) + f(i, t) + \epsilon_{it}
\label{eq:model}
\end{equation}
where $g(t)$ is the average performance at age $t$ for all players, $f(i, t)$ represents a possible performance adjustment at age $t$ for player $i$, and $\epsilon_{it}$ is the model error at age $t$ for player $i$.
\subsection{Related literature}
Broadly, current approaches can be split based on assumptions for estimating $g(t)$ (parametric, semi-parametric, or non-parametric) and whether or not to model age specific curves, $f(i, t)$.
Table \ref{tab:table1} summarizes how several authors have modeled age effects, with columns for the paper, the sport or league studied, whether unobserved player-seasons are accounted for, and the modeling approach. Articles in Table \ref{tab:table1} are arranged by a rough categorization of the approach for modeling the age term.
\begin{table}[h!]
\begin{center}
\caption{Summary of work on age curves}
\label{tab:table1}
\begin{tabular}{l|c|c|c}
\textbf{Paper} & \textbf{Sport/League} & \textbf{Unobserved?} & \textbf{Model} \\
\hline \hline
Schulz et al\cite{schulz1994relationship} & MLB & No & Average \\
Lichtman\cite{mlicht2009} & MLB & No & Fixed effects (Delta Method) \\
Tulsky\cite{Tulsky2014} & NHL & No & Fixed effects (Delta Method) \\
Albert\cite{albert2002smoothing} & MLB & No& Fixed effects (Quadratic) \\
Bradbury\cite{bradbury2009peak} & MLB & No & Fixed effects (Quadratic) \\
Fair\cite{fair2008estimated} & MLB, others & No & Fixed effects (Quadratic) \\
Villareal et al\cite{villaroel2011elite}&Triathlon &No&Fixed effects \\
Brander et al\cite{brander2014estimating} & NHL & Yes & Fixed effects (Quadratic, Cubic) \\
Tutoro\cite{tutoro2019} & NHL & No & Semiparametric \\
Judge\cite{jjudge2020b} & MLB & Yes & Semiparametric \\
Wakim and Jin\cite{wakim2014functional} & MLB, NBA & No & Semiparametric\\
Vaci et al\cite{vaci2019large} & NBA & No & Fixed, random effects \\
Lailvaux et al\cite{lailvaux2014trait} & NBA & No & Random effects \\
Berry et al\cite{berry1999bridging} & MLB, NHL, Golf & No & Random effects \\
Kovalchik and Stefani\cite{kovalchik2013longitudinal} & Olympic & No & Random effects \\
\hline
\end{tabular}
\end{center}
\end{table}
The most naive approach assumes $f(i, t)= 0$ and $\hat{g}(t) = \sum_{i=1}^{n} Y_{it}/ n$ for all $t$, as in \cite{schulz1994relationship}, such that the average of the observed $Y_{it}$ is sufficient for estimating $g(t)$. Such an approach would only be valid if players were chosen to participate in sport completely at random, making it too unrealistic for professional sport. As Figure \ref{fig:observedwithsplinewithoutimputation} shows, this approach yields results that are not credible.
A common parametric assumption (\cite{albert2002smoothing}, \cite{brander2014estimating}, \cite{bradbury2009peak}, \cite{fair2008estimated}, \cite{villaroel2011elite}) is that $Y_{it}$ is quadratic in age. If $g(t)$ is quadratic, performance ``peaks'' at some age, with players improving until this peak and declining after it. Quadratic age curves are also symmetric about the peak performance age. \cite{brander2014estimating} and \cite{villaroel2011elite} also consider a cubic age effect, while \cite{albert2002smoothing} uses a Bayesian model and weighted least squares, with weights proportional to player opportunity. \cite{fair2008estimated} does not assume that the quadratic form of $f(i, t)$ is symmetric. The Delta Method (\cite{mlicht2009}), a modified fixed effects model that uses different subsamples at each consecutive pair of ages, is expanded upon in Section \ref{sect::Delta}.
A second suite of approaches estimates age curves semiparametrically, including spline regression techniques (\cite{wakim2014functional}) and the broader family of generalized additive models (\cite{jjudge2020b}, \cite{tutoro2019}). These approaches are more flexible in their ability to pick up non-linear patterns in age effects. A final set of approaches models age effects via individual age curves $f(i, t)$, as in \cite{berry1999bridging} and \cite{lailvaux2014trait}.
In all but two of the examples above, authors use observed data only to make inferences on the impact of age. Conditioning on players being observed at each age potentially undersells the impact of age; only players deemed good enough to play will record observations. The two exceptions to this assumption are \cite{brander2014estimating}, who extrapolate model fits to players not observed in the data, and \cite{jjudge2020b}, who uses a truncated normal distribution to estimate errors. Importantly, \cite{jjudge2020b} confirms that if the dropout rate is linked to performance, the age effect is effectively shifted downward relative to that of surviving players.
\subsection{Notation}
Our focus in this paper will be on the estimation of $g(t)$, the average aging curve for all players. Because not all players will be observed in a given year of their career (due to injury, lack of talent, etc.), we create $\psi_{it}$, an indicator of whether $Y_{it}$ was observed for player $i$ at age $t$, where $t = t_1, \ldots, t_K$. That is,
\begin{eqnarray}
\psi_{it}=\left\{
\begin{array}{ll}
1 & \mbox{if $Y_{it}$ is observed, and}\\
0 & \mbox{otherwise}.\\
\end{array}
\right.
\end{eqnarray}
We let $t_1$ be the youngest age considered and $t_K$ the oldest age considered.
Similarly, there are players whom we would like to include in our analysis but whose careers are not yet complete. To that end, we let $\phi_{it}$ be an indicator of whether $Y_{it}$ is observable. For our application to NHL data, $\phi_{it}$ indicates whether the performance could have occurred by October 2020, when the data for this project were collected.
\begin{eqnarray}
\phi_{it}=\left\{
\begin{array}{ll}
1 & \mbox{if $Y_{it}$ is observable, and }\\
0 & \mbox{otherwise}.\\
\end{array}
\right.
\end{eqnarray}
As an example, NHL star Sidney Crosby's age 40 season has $\phi_{it}=0$ because it is not yet observable as of this writing. See Table \ref{tab:observableexamples} for more examples of $\psi_{it}$ and $\phi_{it}$.
\begin{table}[]
\centering
\begin{tabular}{l|r|r|r|r|l}
\hline
Player $i$ & Age $t$ & Season & Observable ($\phi_{it}$) & Observed ($\psi_{it}$) & Note \\
\hline
Jagr & 38 & 2010-11 & 1 & 0 & Played overseas in KHL \\
Jagr & 39 & 2011-12 & 1 & 1 & Returned to NHL \\
Crosby & 40 & 2027-28 & 0 & 0 & Future season, not yet observable\\
\hline
\end{tabular}
\caption{Examples of player-ages that are observable but not observed, observable and observed, and not observable.}
\label{tab:observableexamples}
\end{table}
For the methods below, we also add the following definitions and notation. The data for a given player will be represented by the vector of values ${\mbox{\bf{Y}}}_i=(Y_{it_1},
\ldots,Y_{it_K})^T$ and the observed subset of those values will be denoted by the vector ${\mbox{\bf{Y}}}_i^{obs}=(Y_{it} \mid \psi_{it} =1)^T$.
Let ${\mbox{\bf{Y}}}^{obs} =((Y_{1}^{obs})^T, (Y_{2}^{obs})^T, \ldots, \linebreak[3]
(Y_{N}^{obs})^T)^T$ be the vector of all observed values, while ${\mbox{\bf{Y}}} = (Y_{1}^T, Y_{2}^T, \ldots, Y_{N}^T)^T$.
\subsection{Estimation Methods}
In this section we describe several current and novel approaches to estimation of the mean aging curve, $g(t)$. We begin by describing the de facto standard methodology in the sport analytics literature, the Delta method \cite{mlicht2009}, and an extension that we call Delta-plus. Below we
outline the facets of our proposed approaches to the problem of estimating $g(t)$. Roughly, these methods break down along three facets: the general approach to estimating $g(t)$, the data used for model fitting, and the additional fixed or random effects terms. To help the reader, we develop a notational shorthand for combining these facets: \textit{method:data:effects}.
\subsection{Mean aging curve estimation}
For this paper we consider four approaches to estimation of the mean player aging curve. The first is the non-parametric Delta method and a simple extension of this approach which we call \textit{delta-plus}. The Delta Method, which has been discussed by \cite{jjudge2020a}, \cite{tutoro2019} and \cite{mlicht2009}, is commonly used in practice. The second approach that we consider is natural spline regression, which we denote by \textit{spline}, and the third is a quadratic model, \textit{quad}. Finally, we propose a novel \textit{quantile} methodology that utilizes information about the ratio of observed players to observable players at a given age.
\subsubsection{Delta Method} \label{sect::Delta}
The Delta Method is an approach to estimation that focuses on the maxima of $g(t)$ over $t$. The basic idea is to estimate the year over year change in the average player response curve by averaging among only the players that are observed in both years. A benefit of this approach is that it implicitly adjusts for a player effect by only using those players who have appeared in both years $t_k$ and $t_k + 1$. Further, by simply calculating year over year averages, this approach is hyper-localized. However, the requirement that $Y_{it_k}$ and $Y_{i,t_k +1}$ are both observed (i.e. that $\psi_{i t_k}=\psi_{i t_k+1}=1$) can be limiting in sports where there is significant drop-in/drop-out across years or seasons.
To develop the delta method, let
\begin{equation}
\delta_{t_k} = \overline{Y}_{i*_k,t_{k}+1} - \overline{Y}_{i*_k,t_{k}}
\end{equation}
where $$\overline{Y}_{i*_k,t_{k}} = \frac{1}{n*_{t_k}} \sum_{i \in i*_k} Y_{i t_{k}},$$
$i*_k=\left\{ i \mid \psi_{it_k} = \psi_{it_{k}+1}=1\right\}$, and $n*_{t_k} = \vert i*_k \vert$ is the number of elements in $i*_k$. Thus, $i*_k$ is the set of players whose performance value was observed at both age $t_k$ and age $t_k +1$. Traditionally, the delta method is standardized as
\begin{equation}
\hat{g}(t) = \delta_{t} - \max_k \delta_{t_k}
\end{equation}
for $t = t_1, \ldots, t_K$ so that the largest value of $\hat{g}(t)$ is zero.
To date the Delta Method has proved effective at estimation of the maxima of $g(t)$.
See \cite{tutoro2019} and \cite{jjudge2020a} for evaluations of the performance of the Delta Method. Its good performance relative to other methods that use only observed data is likely due to its focus on local, year over year differences. These year over year differences are, however, susceptible to small sample size issues at older ages, which can lead to non-smooth age curves. Older ages are less of an issue for regression techniques (e.g. spline regression), which use information from nearby ages and result in a smooth age curve even when sample sizes are small.
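The core computation of the Delta Method, the year over year changes among players observed in consecutive seasons, can be sketched in \texttt{R} as follows. The long-format data frame \texttt{dat}, with columns \texttt{player}, \texttt{age} and \texttt{y} and containing only the observed player-seasons, is an assumed input for illustration.
\begin{verbatim}
# Sketch of the year-over-year deltas; `dat` holds only observed player-seasons.
ages  <- sort(unique(dat$age))
delta <- sapply(head(ages, -1), function(a) {
  y1   <- dat[dat$age == a,     c("player", "y")]
  y2   <- dat[dat$age == a + 1, c("player", "y")]
  both <- merge(y1, y2, by = "player")   # players observed at ages a and a + 1
  mean(both$y.y) - mean(both$y.x)        # average change from age a to a + 1
})
# The estimated curve is then standardized so that its largest value is zero,
# as in the displayed equation; delta-plus (next subsection) further adds the
# largest observed age-specific mean so the curve is on the scale of the data.
\end{verbatim}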
\subsubsection{Delta Plus Method}
One drawback of the Delta Method as described above is that the maximal value of $\hat{g}(t)$ is forced to take the value zero. In part, this has been the case because the focus of estimation was on the age of maximal performance rather than on estimation of the full aging curve, though in application the Delta Method has taken on the latter function. Using the notation from above, we define the estimate of $g(t)$ as follows:
\begin{equation}
\hat{g}(t) = \delta_{t} - \max_k \delta_{t_k} + \max_{k} \overline{Y}^{obs}_{\cdot t_k},
\end{equation}
where
$\overline{Y}^{obs}_{\cdot t_k} = \frac{1}{n_{t_k}} \sum_{i : \psi_{it_k}=1} Y^{obs}_{it_k}$ is the average of the $n_{t_k}$ observed values at age $t_k$. Below we will refer to this method as \textit{delta-plus}. We only use the Delta Plus Method in our evaluation below since it offers a more flexible approach than the Delta Method.
\subsubsection{Spline approach}
For this spline approach (\textit{spline}), we utilize flexible natural basis spline regression applied to the $Y_{it}$'s, the player performance data. Thus, our spline regression uses age, $t$, as a predictor and the performance metric $Y_{it}$ as the response. In particular, we use the \texttt{s()} smooth with 6 degrees of freedom in the \texttt{mgcv} package in \texttt{R}. In our shorthand for these methods we call this approach \textit{spline} and use $s(t)$ to denote a spline model for the mean aging curve.
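A minimal sketch of the corresponding fits on observed data is given below, again assuming a long-format data frame \texttt{dat} of observed player-seasons; the basis dimension \texttt{k = 6} is our assumption, meant only to mimic the 6 degrees of freedom mentioned above.
\begin{verbatim}
# Sketch of spline fits on observed data (spline:obs:none, spline:obs:fixed).
library(mgcv)
dat$player <- factor(dat$player)
fit_none  <- gam(y ~ s(age, k = 6), data = dat)            # mean curve only
fit_fixed <- gam(y ~ s(age, k = 6) + player, data = dat)   # + player intercepts
g_hat <- predict(fit_none, newdata = data.frame(age = 18:40))
\end{verbatim}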
\subsubsection{Quadratic approach}
As the name implies, this methodology assumes that the mean aging curve is quadratic in a player's age. In our shorthand we will use \textit{quad}, and the model will be written as $g(t) = \gamma_0 + \gamma_1 t + \gamma_2 t^2$. Some authors, including \cite{fair2008estimated}, \cite{albert2002smoothing} and \cite{brander2014estimating}, have assumed that $g(t)$ has this particular functional form, so we include this approach for comparison.
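For concreteness, a sketch of such a fit with a constant fixed effect per player (shown here on observed data for brevity; \texttt{dat} is the assumed data frame from above) is:
\begin{verbatim}
# Quadratic mean curve with a constant fixed effect for each player (sketch).
fit_quad <- lm(y ~ age + I(age^2) + factor(player), data = dat)
\end{verbatim}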
\subsubsection{Quantile approach}
In this method we aim to estimate $g(t)$ by utilizing the distribution and number of the observed values at each age $t$. At each age $t$ we observe some fraction of the players, which can reasonably be assumed to be a truncated sample from a larger population. Let $n_t$ be the number of players whose performance is observed at age $t$, out of the population of $N_t$ players whose performance was observable at that age. If we know the fraction of players relative to the larger population (and the form of the distribution) then we can map percentiles in our sample to percentiles in the larger population.
For example, suppose we had a corpus of $N_{32} =1000$ observable players and that at age 32 we observed $n_{32}=400$ of them. The 75th percentile of observed metrics among the age 32 players might reasonably be used as an estimate of the 90th percentile of the population of 1000. Explicitly, this is because the $100^{th}$ best observed $Y_{it}$ can be assumed to be the $100^{th}$ best $Y_{it}$ among the combined $N_{32}=1000$ observed and unobserved $Y_{it}$'s. Similarly, the 25th percentile in the above example would be the 70th percentile in the population. Below we assume a Normal distribution and use properties of the Normal distribution to obtain estimates for $g(t)$ at each age $t$.
More generally, we can calculate
the $q \cdot 100^{th}$ percentile from among the $n_t$ observed values. Call that value $\nu_t$. The value of $\nu_t$ is approximately the $G_t = \left(1-\frac{n_t}{N_t}(1-q)\right) \cdot 100^{th}$ percentile from the population of values at age $t$.
In the example above, $0.90=1-\frac{400}{1000} (1-0.75)$.
From $G_t$ we can estimate the mean of the full population at age $t$, $g(t)$, via $ \hat{\zeta}_t = \nu_t - \Phi^{-1}(G_t) \hat{\sigma}_t$, where $\Phi(\cdot)$ is the cumulative distribution function of a standard Normal distribution. From $\hat{\zeta}_t$ we can estimate other percentiles as long as we can assume Normality and we have a reasonable estimate of the standard deviation, $\hat{\sigma}_t$. Our justification for assuming Normality is that most performance metrics are averages or other linear combinations of in-game measurements, so the Central Limit Theorem applies to these linear combinations.
To estimate $\sigma_t$, the standard deviation of the population at age $t$, we compute the standard deviation of the observed values at age $t$, denoted $s_{t}$, and adjust it based upon the proportion of truncation. Note that the variance of a truncated Normal is always less than the variance of an untruncated one with the same underlying variance. We therefore use $\hat{\sigma}_t = \frac{s_t}{\theta_t}$ where $\theta_t$ is the ratio of truncation from a standard Normal for these data at age $t$. To obtain $\theta_t$ we use the \texttt{vtruncnorm} function in the \texttt{truncnorm} library in \texttt{R} \cite{rsoftware}. For our methodology shorthand we will use \textit{quant} for this approach and denote the quantile based aging curve approach by $\zeta(t)$.
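The following \texttt{R} sketch implements this calculation at a single age. Two details are our reading of the description above rather than a verbatim implementation: the observed players are treated as the top $n_t/N_t$ fraction of a Normal population (truncation from below), and $\theta_t$ is taken as the resulting standard-deviation ratio.
\begin{verbatim}
# Sketch of the quantile approach at one age t. y_obs: observed values at age t;
# N_t: number of observable players. Assumes observed players are the top
# n_t / N_t fraction of a Normal population.
library(truncnorm)
quantile_gt <- function(y_obs, N_t, q = 0.75) {
  n_t   <- length(y_obs)
  nu_t  <- quantile(y_obs, q)                  # q-th percentile of observed
  G_t   <- 1 - (n_t / N_t) * (1 - q)           # implied population percentile
  a     <- qnorm(1 - n_t / N_t)                # truncation point (std. Normal)
  theta <- sqrt(vtruncnorm(a = a))             # sd ratio under truncation
  sigma <- sd(y_obs) / theta                   # untruncated sd estimate
  unname(nu_t - qnorm(G_t) * sigma)            # estimate of g(t)
}
\end{verbatim}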
\subsection{Data Imputation Methods}
Another modeling choice to be made when working with missing data is whether or not to impute the missing values.
We consider three options for handling the responses, the $Y_{it}$'s. The first option is simply to use the observed data and only the observed data. When we do this, we will use \textit{obs} as part of our shorthand. The second and third options involve imputation of the unobserved but observable data, that is, the set of values $\{ Y_{it} \mid \psi_{it}=0, \phi_{it}=1 \}.$ In the second option, \textit{trunc}, we impute values for $Y_{it}$ with truncation. In the third option, \textit{notrunc}, we impute values without truncation. Below we describe our algorithm for imputation.
In this general imputation algorithm, we first estimate a naive aging curve using a regression approach on only the observed data. Then we use that estimated curve to generate imputed values for the unobserved values, those with $\psi_{it}=0$, ensuring that these imputed values lie below a smoothed upper boundary at each age, defined by a percentile of the observed values. Our default is the $75^{th}$ percentile. The choice of the $75^{th}$ percentile is somewhat arbitrary; we considered several choices, in particular the $20^{th}$ and $50^{th}$ percentiles, and found that using the $75^{th}$ percentile produced better estimation performance. It is sensible that there is an upper bound on the performance that a player who is not in a top league would have; otherwise they would be in the top league. We then refit our model using both the observed and imputed values and use this new estimated curve to generate a second set of imputed values. The model refit to this second set of imputed values, together with the observed data, yields our estimate of the mean aging curve $g(t)$. The impetus behind the second imputation and curve estimation is to wash out any initial impact from the estimated curve having been based solely on the observed values.
The algorithm proceeds as follows (an \texttt{R} sketch is given after the enumerated steps):
\begin{enumerate}
\item Fit the model $Y^{obs}_{it} =\eta^{'}_{it}+\epsilon_{it}$ for just the fully observed data, i.e. the $Y_{it}$ where $\psi_{it}=1$. Calculate the estimated standard deviation of the $\hat{\epsilon}_{it}$, and call that $\sigma_0$.
\item Estimate a smoothed boundary via splines of performance for players in the NHL based upon the $q^{th}$ percentile of players at each age, say the $75^{th}$ percentile. Call the boundary value $\tilde{\beta}_{t}$. Note that for imputation without truncation, \textit{notrunc}, $\tilde{\beta}_{t} = \infty$.
\item Simulate values for the $Y_{it}$ with $\psi_{it}=0$ from a Normal distribution with mean $\eta^{'}_{it}$ and standard deviation $\sigma_0$, truncated so that they are not larger than $\tilde{\beta}_{t}$. That is, the range of possible values for these $Y_{it}$ is $(-\infty, \tilde{\beta}_{t})$; we denote this by $Y_{it}^{imp} \sim TN (\hat{\eta}^{'}_{it}, \hat{\sigma}_{0}^2, -\infty, \tilde{\beta}_{t})$, where $Y \sim TN(\mu, \sigma^2, a, b)$ denotes a random variable $Y$ with a Truncated Normal distribution with underlying mean $\mu$ and variance $\sigma^2$ that takes values $a \leq Y \leq b$. Call these simulated values $Y^{imp}_{it_k}$.
\item Fit $Y^{full}_{it} = \eta_{it}+\epsilon_{it}$ using \linebreak[3] $${\mbox{\bf{Y}}}^{full} = \left\{ Y^{obs}_{it_k}, Y^{imp}_{it_k} \mid i=1,\dots, N, k=1,\ldots, K\right\}.$$
\item We next impute the original missing values again. We do this because the first set of imputed values, the $Y^{imp}_{it}$'s, was based upon predicted values from a model that used only fully observed data. So we impute again, this time using $Y_{it}^{imp} \sim TN(\hat{\eta}_{it}, \hat{\sigma}^2_0, -\infty, \tilde{\beta}_{t})$.
\item We then refit the model above using
$Y^{full}_{it} = g(t) + f(i,t) + \epsilon_{it}$ based upon the imputed data from the previous step.
\end{enumerate}
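A compact \texttt{R} sketch of these steps for the \textit{spline:trunc:fixed} variant follows. The data frame \texttt{dat} (one row per observable player-age, with \texttt{y = NA} when $\psi_{it}=0$) and the use of \texttt{mgcv} for both the aging curve and the smoothed boundary are assumptions made for illustration, not the exact implementation used here.
\begin{verbatim}
# Sketch of the two-pass truncated imputation (spline:trunc:fixed).
library(mgcv); library(truncnorm)
dat$player <- factor(dat$player)
impute_once <- function(dat, fit, beta) {
  miss <- is.na(dat$y)
  mu   <- predict(fit, newdata = dat[miss, ])
  s0   <- sd(residuals(fit))
  dat$y[miss] <- rtruncnorm(sum(miss), a = -Inf, b = beta[miss],
                            mean = mu, sd = s0)
  dat
}
obs  <- subset(dat, !is.na(y))
fit0 <- gam(y ~ s(age, k = 6) + player, data = obs)        # Step 1
q75  <- aggregate(y ~ age, data = obs, FUN = quantile, probs = 0.75)
dat$beta <- predict(gam(y ~ s(age, k = 6), data = q75),    # Step 2: boundary
                    newdata = dat)
dat1 <- impute_once(dat, fit0, dat$beta)                   # Step 3
fit1 <- gam(y ~ s(age, k = 6) + player, data = dat1)       # Step 4
dat2 <- impute_once(dat, fit1, dat$beta)                   # Step 5
fit2 <- gam(y ~ s(age, k = 6) + player, data = dat2)       # Step 6
\end{verbatim}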
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{img/ModelFittingandDataImputation.jpg}
\includegraphics[width=.45\textwidth]{img/SimulatedDataWithUpperBoundsUsedForTruncation.jpg}
\caption{Model fitting and imputation without truncation for one example player in one simulation (left), and all simulated players (gray dots) and upper bounds (red line) used for truncation (right). In the left figure, red dots are observed data for one example player from one simulation, and the black line denotes fitted values using the cubic spline model \textit{spline:trunc:fixed} for this player. The black dots, which lie on the line, are the means of the normal distribution used for imputing data at the ages for which data are missing, and the gray lines are 3 standard deviations away from the mean of those normal distributions.}
\label{fig:modelfittingandimputation}
\end{figure}
Figure \ref{fig:modelfittingandimputation} depicts Steps 1 and 3, model fitting and imputation without truncation for one example player in one simulation (left) and the upper bounds used for truncation at each age for one of the simulations that did use truncation (right).
Our reasoning for steps five and six is that there is possibly some initial effect in the model $\eta_{it}=\mu_{0t} + \gamma_{0i}$ which comes from fitting to \textit{only} fully observed data. By adding the second iteration we hope to wash out some of that initial impact. Our use of the boundary for imputation is based upon the idea that there are players who may be good enough to have their performance observed, that is, to play in the relevant top league, but do not have that opportunity, while at the same time there are players whose performance puts them well below the performance of others in the same league. Additionally, the estimation of $\eta_{it}$ for generation of the imputed values was done in two ways: using the quantile approach described above as well as the spline approach.
\subsection{Player Effects}
In addition to the basic model methodology and the data that we fit to these models, we considered some additional fixed and random effects for players. The most common approach we used was a fixed constant player effect, denoted \textit{fixed}. Our notation for this fixed effect is $\gamma_{0i}$. We considered two random effects models: one with quadratic random effects and the other with spline random effects. The former we denote by \textit{random-quad} and it has the following functional form: $g_{0i} + g_{1i} t + g_{2i} t^2$. For the spline random effects component of our model, our shorthand is \textit{random-spline} and the functional form is $g_{0i} + \xi_i(t)$.
If the model did not have a fixed or random effect for player we use \textit{none} in our shorthand.
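For the random effects variants, one possible implementation (a sketch under our own assumptions, not necessarily the exact specification used here) combines a natural spline for $g(t)$ with \texttt{lme4}-style random effects per player; \texttt{dat\_full} is an assumed data frame of observed plus imputed values.
\begin{verbatim}
# Sketch of quadratic random player effects (random-quad). Centering age is an
# assumption made here to help the random quadratic terms converge.
library(lme4); library(splines)
dat_full$age_c <- dat_full$age - 25
fit_rq <- lmer(y ~ ns(age_c, df = 6) + (1 + age_c + I(age_c^2) | player),
               data = dat_full)   # constant, linear and quadratic effects
\end{verbatim}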
In the subsections above, we have described three facets of our estimation approaches for $g(t)$. We combine these three facets to consider a range of estimation methods, though we do not consider all possible combinations of these approaches. We chose a specific subset of combinations to make comparisons between our proposed methodologies and existing methodologies. Table \ref{tab:estim} shows the full list of methods that we considered for evaluation, along with shorthand and descriptions of the methodology. For example, \textit{spline:trunc:fixed} uses natural splines to estimate $g(t)$, fit to both observed and truncated imputed data via a model that includes a fixed effect for each player. Likewise, \textit{quant:obs:none} uses the quantile methodology described above to estimate the mean aging curve using only the observed $Y_{it}$'s. Somewhat unique is \textit{quant:trunc:fixed}, which generates the mean of the truncated imputed values via the quantile approach and then fits a natural spline with fixed player effects to those truncated values.
\begin{table}[]
\centering
\caption{Summary of Estimation Methods} \label{tab:estim}
\begin{tabular}{l|l|c|c}
\hline Method Name & Model formulation &Estimation of $g(t)$ & Imputation \\
\hline
delta-plus & Non-parametric & Piecewise & No\\
spline:obs:none&
$s(t)$ & Natural Splines & No\\
spline:obs:fixed& $s(t) + \gamma_{0i}$& Natural Splines & No\\
spline:trunc:fixed& $s(t)+\gamma_{0i}$& Natural Splines & Truncated\\
spline:notrunc:fixed &$s(t)+\gamma_{0i}$& Natural Splines & Not Truncated\\
quant:trunc:fixed& $\zeta(t)+\gamma_{0i}$ & Quantile Approx. & Truncated \\
quant:obs:none & $\zeta(t)$ &Quantile Approx. & No\\
quad:trunc:fixed& $\gamma_0+\gamma_{0i} +\gamma_1 t + \gamma_2 t^2$& Quad. Linear Model& Truncated\\
spline:trunc:random-quad&$s(t)+g_{0i} + g_{1i} t + g_{2i} t^2$ & Natural Splines & Truncated\\
spline:trunc:random-spline& $s(t) +g_{0i} + \xi_i(t)$ & Natural Splines & Truncated\\
\hline
\end{tabular}
\end{table}
\section{Simulation Study}
\subsection{Simulation Design}
We created a series of simulations to evaluate how well the various approaches described in the previous section estimate $g(t)$. To generate data we followed an approach similar to that of \cite{tutoro2019}: we created an underlying smooth curve and generated values for player performance based upon that curve.
We then modelled the dropout of players through a missingness process that generated values for $\psi_{it}$.
These simulations focused on the distribution of players and the methodology for their missingness.
\subsection{Simulating individual player observations} Our simulated performance values for the $i^{th}$ player at age $t$ are generated in the following manner:
\begin{equation}
Y_{it} = \omega + \gamma_{0i} +a (t-t_{max})^2 + (b+b_i) (t-t_{max})^2 I_{\{ t> t_{max}\}} + c(t-t_{max})^3 I_{\{ t> t_{max}\}}+ \epsilon_{it} \label{eq:generatingcurve}
\end{equation}
\noindent where $i=1,\ldots, N$ and $t=t_1,\ldots,t_K$. The fixed terms form a piecewise polynomial that serves as the underlying generating curve, the term $\gamma_{0i} \sim N(0,\sigma^2_{\gamma})$ is a player-specific random intercept, and $b_i$ is a player-specific random quadratic aging effect. Thus this model is a piecewise cubic function with constant and quadratic player random effects.
We can rewrite Equation \ref{eq:generatingcurve} to highlight the model components as:
\begin{eqnarray}
Y_{it}&=& g(t) + f(i,t) + \epsilon_{it},\\
g(t) &=& \omega +a (t-t_{max})^2 + b (t-t_{max})^2 I_{\{ t> t_{max}\}} + c(t-t_{max})^3 I_{\{ t> t_{max}\}},\\
f(i,t)&=& \gamma_{0i} + b_i (t-t_{max})^2 I_{\{ t> t_{max}\}}.
\end{eqnarray}
The term $\epsilon_{it} \sim N(0, \sigma^2_{\epsilon})$ is random noise that simulates year-to-year randomness in player performance and variation unaccounted for elsewhere in the model.
For our simulations we fixed $t_{max}=25$, $a = -1/9$, $b =-6/1000$, $c = 45/10000$, $t_{1}=18$, $t_{K} =40$, and $\sigma^2_{\epsilon}=1$. Our choice of these values for $a$, $b$, and $c$ was based upon trying to match the pattern of drop-in/drop-out for a subset of National Hockey League forwards.
We used varying values of $N$, $\omega$, and $\sigma_{\gamma}$ in our simulations. In particular, we simulated a full factorial over $N=300, 600, 1000$, $\omega = 0, 1$ and $\sigma_{\gamma} = 0.4, 0.8, 1.5$. The values of $N$ were chosen to illustrate the impact of sample size on estimation performance while covering a reasonable range of values. For $\omega$, which represents the maximum value that our simulated $g(t)$ takes, we wanted both a zero and a non-zero option. Finally, we based our choices for $\sigma_{\gamma}$ on the standard deviation of the estimated player effects from fitting a natural spline model with player effects to standardized points per game for National Hockey League forwards. That value was approximately $0.8$, and we subsequently chose values for $\sigma_{\gamma}$ that were half and twice as large. For each combination of values for $N$, $\omega$ and $\sigma_{\gamma}$, we simulated $200$ data sets and calculated the estimated $g(t)$ for each method on each of these data sets.
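A sketch of this data-generating step in \texttt{R} is shown below. The distribution of the player-specific quadratic effects $b_i$ is not pinned down above, so the Normal distribution with a small standard deviation used here is purely an assumption for illustration.
\begin{verbatim}
# Sketch of simulating player-age values from the generating curve.
set.seed(1)
N <- 600; omega <- 0; sigma_gamma <- 0.8
t_max <- 25; a <- -1/9; b <- -6/1000; c3 <- 45/10000
ages   <- 18:40
gamma0 <- rnorm(N, 0, sigma_gamma)   # player random intercepts
b_i    <- rnorm(N, 0, 0.01)          # player quadratic effects (sd is assumed)
sim  <- expand.grid(player = 1:N, age = ages)
post <- as.numeric(sim$age > t_max)
d    <- sim$age - t_max
sim$y <- omega + gamma0[sim$player] + a * d^2 +
  (b + b_i[sim$player]) * d^2 * post + c3 * d^3 * post + rnorm(nrow(sim))
\end{verbatim}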
For example, the components of \eqref{eq:generatingcurve} are depicted in Figure \ref{fig:simulatingplayercurves}. Shown in that figure are the underlying generating curve, the player intercept terms for overall player ability, the player quadratic curves for player aging, the random noise terms, and the final simulated player curves. These were taken from one simulation with the parameters $N=600$, $\omega=0$, and $\sigma_{\gamma}=0.8$. All simulated players are shown in gray, and the curves generated for one particular simulated player are shown in red. This highlighted player is slightly below average (the intercept is slightly below zero) and ages slightly more slowly than average. The player quadratic is slightly concave up, which, when added to the generating curve, gives a curve indicating that this player's decline will not be as steep as that of an average player.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{img/Generatingcurvetypesofnoiseandfinalsimulateddata.jpg}
\caption{The underlying generating curve, different types of noise, and the final simulated data used in one simulation. The gray curves represent simulated players. The curve for one simulated player is highlighted in red.}
\label{fig:simulatingplayercurves}
\end{figure}
\subsection{Simulating missingness of observations}
For generating the $\psi_{it}$ values for each player and age, we chose a methodology that mirrors the drop-in/drop-out behavior of forwards in the National Hockey League. Recall that $\psi_{it} = 1$ if observation $Y_{it}$ was observed. Using the methodology from the previous section, we simulated $Y_{it}$ for each player $i$ and each age $t$ between $t_1=18$ and $t_K=40$. Using those values, we obtained $\psi_{it}$ for each player via the following steps (an \texttt{R} sketch follows the steps):
\begin{enumerate}
\item For each age $t$, round $N \pi_t$ to a whole number and call that $n^{+}_t$, where $\pi_t$ is the target proportion of players observed at age $t$ (here taken to mirror the proportion of NHL forwards observed at that age).
\item For each player $i$, calculate $\rho_{it} = \exp \{\sum_{k=1}^{t} Y_{ik}\}.$
\item Sample without replacement $n^{+}_t$ players from the $N$ players with each having
probability of selection $p_{it} = \frac{\rho_{it}}{\sum_{j}\rho_{jt}}.$
\item Make $\psi_{it} =1$ for the $n^{+}_t$ players selected in the previous step and $\psi_{it}=0$ for the remaining players.
\end{enumerate}
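These steps can be sketched as follows, reusing the simulated values \texttt{sim} from above; the vector \texttt{pi\_t} of target observation proportions at each age is an assumed input derived from the NHL forwards data.
\begin{verbatim}
# Sketch of the missingness mechanism; pi_t gives the target proportion
# observed at each age (assumed to come from the NHL forwards data).
Y   <- matrix(sim$y, nrow = N)                 # rows: players, cols: ages
cum <- t(apply(Y, 1, cumsum))                  # cumulative performance
psi <- matrix(0, nrow = N, ncol = length(ages))
for (k in seq_along(ages)) {
  n_plus <- round(N * pi_t[k])                             # Step 1
  p      <- exp(cum[, k]); p <- p / sum(p)                 # Step 2
  keep   <- sample(1:N, n_plus, prob = p)                  # Step 3 (w/o replacement)
  psi[keep, k] <- 1                                        # Step 4
}
\end{verbatim}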
Simulations that used this cumulative-performance missingness generation were paired with a full factorial of values for $\omega = 0, 1$, $N=300, 600, 1000$, and $\sigma_{\gamma} = 0.4, 0.8, 1.5$. For simplicity we assumed that all simulated values of $Y_{it}$ were observable, i.e.~$\phi_{it}=1$ for all $i$ and $t$.
To evaluate the methods proposed in the previous section, we generated simulated data following the data generation and missingness approaches described above in this section.
Figure \ref{fig:percentobserved} shows the percentage of players that are observed by age for NHL forwards (left) and for simulated data (right) with $N=600$,$\omega = 0$ and $\sigma_{\gamma}=0.8$.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{img/PercentofForwardsObservedAtEachAge.jpg}
\includegraphics[width=.45\textwidth]{img/PercentofSimulatedPlayersObservedAtEachAge.jpg}
\caption{Percent of players that are observed at each age for NHL forwards and for the simulated data. The distributions follow similar shapes and peak around ages 23-24.}
\label{fig:percentobserved}
\end{figure}
\subsection{Simulation Results}
Next we consider how each of our methods performs in estimating the mean aging curve, both in terms of average error and in terms of overall shape.
\subsubsection{Root Mean Squared Error}
Estimated curves are first compared to $g(t)$ on their average root mean squared error (RMSE) at each age. Figure \ref{fig:rmse_age_numbplayers} shows RMSE by age, averaged across simulations, and split between simulations with 300 versus 1000 players. Six curves are shown; methods not presented in Figure \ref{fig:rmse_age_numbplayers} had larger RMSEs that would have required a re-scaling of the y-axis, rendering comparison of the remaining methods difficult.
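For concreteness, the age-specific RMSE for one method can be computed as in the sketch below, where \texttt{g\_hat} is an assumed matrix of estimated curves (one row per simulation, one column per age) and \texttt{g\_true} is the true curve evaluated at the same ages.
\begin{verbatim}
# RMSE at each age, averaged over simulations (g_hat: sims x ages; g_true: ages).
rmse_by_age <- sqrt(colMeans(sweep(g_hat, 2, g_true)^2))
\end{verbatim}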
\begin{figure}
\centering
\includegraphics[width=14cm,
keepaspectratio]{img/rmse_by_age.jpg}
\caption{Root Mean Squared Error by Age and Number of Players.}
\label{fig:rmse_age_numbplayers}
\end{figure}
RMSE at each age is lowest at around age 24, which corresponds to when the majority of NHL players in Figure \ref{fig:percentobserved} are observed. At entry (age 18) and at the end of careers (age 30 onwards), RMSEs tend to be higher.
Overall, two methods, \textit{spline:obs:fixed} and \textit{spline:notrunc:fixed}, have the lowest average RMSEs at each age. This suggests that for estimating age curves, either imputation or player specific intercept models are preferred. These two methods yield nearly identical average RMSEs at each age, which is why the \textit{spline:notrunc:fixed} (blue) method is not visible in Figure \ref{fig:rmse_age_numbplayers}.
All models better estimate age curves with the larger sample size (the right facet of Figure \ref{fig:rmse_age_numbplayers}). The impact of sample size is largest for the \textit{delta-plus} method. For example, at age 40, \textit{delta-plus} shows the 3rd lowest RMSE with 1,000 players, but the 5th lowest RMSE with 300 players. Additionally, \textit{delta-plus} is worse at the youngest age: for both 300 and 1000 simulated players, it shows the largest RMSE in Figure \ref{fig:rmse_age_numbplayers} at age 18.
In general, simulations with higher standard deviations (1.5, versus 0.8 and 0.4) averaged higher RMSEs, although the impact of increasing the standard deviation appeared uniform across age curve estimation methods.
\subsubsection{Shape Based Distance}
To supplement simulation results based on RMSE, we use shape based distance (SBD, \cite{paparrizos2015k}), a metric designed to approximate the similarity of time series curves. SBD normalizes the cross-correlation between two time series, which in our case are the estimated age curve and the true curve. SBD scores range from 0 to 1, where lower scores reflect more similar curves.
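As a reference for how such a score can be computed, the sketch below follows the usual cross-correlation-based definition of SBD; it is our own illustration rather than the exact implementation used for Figure \ref{fig:sbd}.
\begin{verbatim}
# Sketch of shape based distance between two curves of equal length.
sbd <- function(x, y) {
  den <- sqrt(sum(x^2) * sum(y^2))                 # normalization
  K   <- length(x)
  ncc <- sapply(-(K - 1):(K - 1), function(s) {    # cross-correlation by shift
    idx <- seq_len(K) + s
    ok  <- idx >= 1 & idx <= K
    sum(x[ok] * y[idx[ok]]) / den
  })
  1 - max(ncc)                                     # smaller means more similar
}
\end{verbatim}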
Figure \ref{fig:sbd} shows a boxplot with shape based distances at each simulation for each method.
\begin{figure}
\centering
\includegraphics[width=12cm,
keepaspectratio]{img/boxplot_sbd_full.jpg}
\caption{Shape Based Distance between true and estimated age curves, across simulations}
\label{fig:sbd}
\end{figure}
As in Figure \ref{fig:rmse_age_numbplayers},
\textit{spline:obs:fixed} and \textit{spline:notrunc:fixed}
have curve shapes that are closest to the true age curve in Figure \ref{fig:sbd}. The basic spline model without a player intercept (\textit{spline:obs:none}) has the third lowest median SBD; however, this method also exhibits several outlying SBD values. Similarly, \textit{delta-plus} shows SBD outliers. The method that assumes a quadratic response surface has the highest median SBD and sits in the top row of Figure \ref{fig:sbd}, suggesting that quadratic-based approaches may not capture the shape of age curves.
\section{Application to National Hockey League Data}
To illustrate the impact of the methods proposed in this paper, we apply some of these approaches to data on 1079 NHL forwards who were born on or after January 1, 1970. The data were obtained from \url{www.eliteprospects.com}, and only players who played at least one season in the National Hockey League were included.
For our measure of player performance, $Y_{it}$, we chose standardized points (goals plus assists) per game, where the z-score was calculated relative to the mean and standard deviation for the NHL in that particular season. Our selection of this metric was based upon its availability for all players across the range of seasons (1988-89 through 2018-19).
In Figure \ref{fig:estimatedagecurves} we show estimated age curves for NHL points per game using different methods. The differences between our two best methods, \textit{spline:notrunc:fixed} (red) and \textit{spline:obs:fixed} (light red), neither of which uses truncation, are not perceptible, as their lines overlap. The methods with truncation, \textit{spline:trunc:fixed} (blue), \textit{spline:trunc:random-quad} (gray) and
\linebreak[4]
\textit{spline:trunc:random-spline} (black), are very similar and have overlapping curves as well. Those curves are lower than the curves from our best methods, which is expected since imputed values were taken from a truncated normal. The overall pattern for all of these curves does not appear quadratic.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{img/EstimatedAgeCurvesForPointsPerGame.jpg}
\caption{Estimated age curves for NHL points per game using different methods.}
\label{fig:estimatedagecurves}
\end{figure}
\section{Discussion}
Understanding how player performance changes as players age is important in sports, particularly for team management who need to sign players to contracts.
In this paper we have proposed and evaluated several new methodologies for
estimation of mean player aging curves. In particular, we have presented formal methods for adding imputed data to address the missingness that regularly appears in data from professional sports leagues. The models that performed best in our simulation study had either a flexible form or incorporated player effects (through imputation, or directly). This paper has also presented a framework for incorporating imputed data into estimation of player aging effects. With player effects, the age curve term contains information about the relative changes in performance from age to age, and overall performance is absorbed by the player term. When estimation is done with only observed data, the relative changes in the mean aging curve are only informed by ages where a particular player is present. Without player effects as a model component, the individual player effects and the overall age effect are entangled. Specifically, we find that spline methodologies with fixed player effects perform better than other methods and have the flexibility to appropriately and efficiently estimate player aging curves.
There are some additional considerations that might improve the performance of the methods we have considered. One possibility would be a fully Bayesian approach that treats all of the unknown aspects of the model, both $g(t) + f(i,t)$ and the unobserved $Y_{it}$'s, as random variables. This approach would allow complete posterior inference that reflects the uncertainty in estimation. Along similar lines, an approach that performs multiple imputation for each unobserved $Y_{it}$ could improve performance. Another assumption that has been made in the literature is to treat ages as whole numbers. It certainly seems possible for regression based approaches to estimation of $g(t)$ to use fractional ages $t$ within a given season.
Our simulation study was focused on mean aging curves that yielded similar player drop-in/drop-out patterns to those observed in a particular professional league, the National Hockey League. It would certainly be reasonable to consider other functional forms for the underlying $g(t)$ and $f(i,t)$, though we believe that the results from such a study would be in line with those found in this paper. Another avenue of possible future work would be to consider additional mechanisms for generation of the $\psi_{it}$ values.
Overall, the novel methods proposed and evaluated in this paper via a simulation study have improved our understanding of how to estimate player aging curves. It is clear from the results in this paper that the best methods for estimation of player aging are those that have model flexibility and that include player effects. The use of imputation also has the potential to impact this methodology, and thus being aware of what we do not observe can make our estimation stronger.
\begin{comment}
\begin{enumerate}
\item Model Without Player effect: Given age is 36, E(PPG) is xxxx (bigger than it should be because only good players are left at age 36). The E(PPG) is an average over observed players.
\item Model With Player effect:
\begin{enumerate}
\item Given age is 36 AND player is Wayne Gretzky, E(PPG) is xxxx.
\item Given age is 36 AND player is (an average observed/unobserved player whose random intercept is 0), E(PPG) is (something lower).
\end{enumerate}
\end{enumerate}
In (1), E(PPG) is an average over observed players, and in (2), the age curve term in E(PPG) is an average over observed/unobserved players. 2(b) is basically the age curve term in \verb'loess_nomiss', \verb'spline_nomiss' models, and that is lower for older ages than the curve in (1).
\end{comment}
\begin{acknowledgements}
We would like to thank CJ Turturo for making his code available.
\end{acknowledgements}
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\begin{comment}
\begin{table}[]
\centering
\caption{Summary of Estimation Methods}
\begin{tabular}{l|l|c|c}
\hline
Method Name & Model formulation &Estimation of $g(t)$ & Data \\
\hline
Delta-plus (delta-plus) & Non-parametric & Piecewise & No\\
Spline\underline{ }noplayer\underline{ }nomiss (spline:obs:none)&
$g(t)$ & Natural Splines & No\\
Spline\underline{ }nomiss
(spline:obs:fixed)& $g(t) + \gamma_{0i}$& Natural Splines & No\\
Miss1 (spline:trunc:fixed)& $g(t)+\gamma_{0i}$& Natural Splines & Yes (truncated)\\
Miss\underline{ }notrunc (spline:notrunc:fixed)&$g(t)+\gamma_{0i}$& Natural Splines & Yes (not truncated)\\
Miss2 (quant:trunc:fixed)& $g(t)+\gamma_{0i}$ & Quantile Approx. & Yes (truncated) \\
Miss\underline{ }quantile (quant:obs:none)& $g(t)+\gamma_{0i}$ &Quantile Approx. & No\\
Miss\underline{ }quadratic
(quad:trunc:fixed)& $g_0+g_1 t + g_2t^2 + \gamma_{0i}$& Quad. Linear Model& Yes (truncated)\\
Quad\underline{ }random (spline:trunc:random-quad)&$g(t)+\gamma_{0i} + \gamma_{1i} t + \gamma_{2i} t^2$ & Natural Splines & Yes (truncated)\\
Spline\underline{ }ns2 (spline:trunc:random-spline)& $g(t) +\gamma_{0i} + \xi_i(t)$ & Natural Splines & Yes (truncated)\\
\hline
\end{tabular}
\end{table}
\end{comment}
\bibliographystyle{spmpsci}
\section{Introduction}
According to almost all demographic projections, the coming decades will see a global increase in the population of older adults \cite{ortman2014aging}. What is more, if current trends in societal and technological development hold, this new population of older adults will be healthier, more affluent and more mobile than any older adult cohort in recorded history \cite{sanderson2015faster}; thus, they will have more time, resources and ability to participate in tourism and traveling, both domestic and international, as well as to engage in on-line communities.
According to current research concerning the topics of older adult tourism and on-line community engagement, these two types of activities provide many positive effects for the older adult population. Tourism can improve overall wellbeing \cite{mitas2012jokes}, as well as the way older adults cope with health issues \cite{ferri2013functional,ferrer2015social}, while community engagement has many positive psychological effects on older adults \cite{sum2009internet}. However, research shows that older adults who wish to participate in life as tourists experience many barriers and limitations related to their age, as well as other factors, both external and internal \cite{kazeminia2015seniors}. Research has shown that addressing these barriers, such as the ones related to safety (as a study by Balcerzak et al. suggests \cite{balcerzak2017press}), can be achieved through listening to the experiences of older adults and applying them in intelligent solutions.
With modern developments in fields such as data mining and natural language processing, a new promise opens for addressing the needs of senior travelers directly, by listening to, and systematically extracting, what they have to say when spontaneously engaging in activities within on-line travel communities.
However, before designing any form of automated knowledge extraction, one needs to explore what kinds of themes and topics are prevalent in the analyzed corpus of text. By doing so, one is able to recognize which specific parts of the texts should be extracted, and what kind of ontology needs to be built in order to properly represent the structure of the data hidden in raw text. In this paper we present the results of such an exploratory task.
\subsection{Research contribution \& research question}
We wish to focus on the design of the aforementioned framework, and the information about senior tourists on-line it helps to extract. The following detailed research questions are addressed:
\begin{itemize}
\item{\textbf{What dimensions of social and individual activity are present in on-line communities of older adult tourists?}}
\item{\textbf{In what ways, based on the before mentioned dimensions, do senior tourists differ from other on-line tourists communities?}}
\end{itemize}
These research questions can be translated into the following hypotheses:
\begin{itemize}
\item{\textbf{Hypothesis I:} Tourists aged 50+ mention their age more often than members of other groups}
\item{\textbf{Hypothesis II:} Tourists aged 50+ more often than members of other groups speak about their limitations and barriers}
\item{\textbf{Hypothesis III:} Tourists aged 50+ more often than members of other groups will recall their past experiences of traveling.}
\item{\textbf{Hypothesis IV:} Tourists aged 50+ connect their experiences of limitations and barriers with their age.}
\end{itemize}
The choice of these hypotheses is based on a set of presumptions derived from the available literature. Works such as the one by \cite{lazar2017going} show that older adults who participate in an on-line community often center their identity around their age. Since this work was based mostly on research of bloggers, we wanted to see if it extends to forum posters.
The rest of this paper is organized as follows. In the Related Work section, we present a review of studies of senior tourism and older adult on-line communities, as well as of algorithms used in related content extraction tasks. Next, the Study Design section describes in detail the dataset and methodology used by the researchers. The Results section presents the obtained results, while the Conclusions and Future Work section discusses the ramifications of these results.
\section{Related work}
\subsection{Older adults as tourists - constraints and limitations}
Studies involving touristic activity among older adults can be split into two main categories. The first one deals with describing travel and touristic patterns among older adults, often compared to other age groups. A study done by Collia \cite{collia20032001}, in which travel patterns of older citizens of the US were analyzed, would be a prime example. Other examples include work by Nyaupane \cite{nyaupane2008seniors}. Such general overviews of travel patterns of older adults were not limited to the US, but were also attempted in other places, in Poland by Omelan \cite{omelan2016tourist} and in China by Gu \cite{Gu_2016}. Both studies focused on differences within the older adult demographic, with the study by Omelan finding differences based on place of residence (rural vs. urban), and the study by Gu analyzing through logistic regression the impact of well-being, gender and social status on older adults' travel patterns. Studies in this category also often show a positive correlation between various measures of psychological and physical well-being and traveling, as in studies by Ferri \cite{ferri2013functional} and Ferrer \cite{ferrer2015social}.
Another line of research in this category comprises studies concerning the perception of traveling among older adults. A study done by Gao \cite{gao2016never}, which focuses on travel perception among older Chinese women, is a good example. Research has also been done with regard to emotional aspects of traveling; for example, a study by Mitas \cite{mitas2012jokes} focused on the role of positive emotion and community building among senior tourists, while a study by Zhang \cite{zhang2016too} described the constraints related to visiting places connected to historical atrocities.
The other category of studies focuses more on researching the constraints faced by older adults who partake in traveling. The prime example of such research, which served as an inspiration for the framework presented in this paper, is the work done by Kazeminia \cite{kazeminia2015seniors}, where a dedicated content analysis tool called Leximancer (\textit{http://info.leximancer.com/}) was used on a corpus of posts made by older adults on the TripAdvisor website to extract a framework of constraints faced by older adults who travel. Other exploratory work in this regard was done by Kim \cite{kim2015examination}, where three main types of constraints were identified: intrapersonal, interpersonal, and structural ones. A similar study done by Nimrod \cite{nimrod2012online} revealed the most persistent constraints through content analysis. However, this study was criticized by authors like Mitas \cite{mitas2012jokes} for failing to focus on the emotional aspects of the older adult traveling community.
The topic of travel constraints among older adults was also approached by Bacs \cite{bacs2016common} and Fleisher \cite{fleischer2002tourism}, as well as by Gudmundsson \cite{gudmundsson2016challenges}, Reed \cite{Reed2007687} and Page \cite{page2015developing}, who focused more on health-related issues.
\subsection{Older adults and on-line communities}
Substantial work on this topic was done by Nimrod, who conducted quantitative analysis in order to discover specific clusters within older adult communities (information swappers, aging-oriented, socializers) \cite{nimrod2013probing}, extracted, through content analysis tools, the topics most discussed in senior on-line forums \cite{nimrod2010seniors}, and, through on-line surveys, identified psychological benefits of participating in on-line communities \cite{nimrod2014benefits}. There were also studies, like the one by Kowalik and Nielek, which focused on communities of older programmers \cite{kowalik2016senior}.
Another thread of research was focused on barriers concerning participation in on-line communities. Research like the one done by Gibson \cite{gibson2010designing} and Jaeger \cite{jaeger2008developing} states that issues of privacy and accessibility form one of the biggest barriers. Noteworthy is also research done by Lazar \cite{lazar2017going} who, by analyzing blog posts of bloggers aged 50+, showed how people within the community of older adults can address their own identity and combat ageist stereotypes.
As participating in on-line communities is crucial for improving the well-being of older adults, many researchers have tried to animate such activities \cite{kopec2017livinglab,kopec2017location,nielek2017turned}.
\subsection{Age, aging, ageism}
Another factor important for the design of a framework for understanding the experiences of senior tourists is the perception of older adults' age, both by the older adults themselves and from the perspective of others.
In the context of senior tourism, research such as the one done by Cleaver \cite{cleaver2002want} shows that the perception of one's age is in a mutual relation with one's particular motivations for traveling. Work done by Ylanne \cite{ylanne-mcewen_1999}, on the other hand, points to the role of age perception among travel agencies as an important factor.
\subsection{Automated information extraction in the tourist domain}
With the current availability of on-line tourist reviews, research developed into the application of text mining and data mining solutions that would make use of this data. The authors have already mentioned the recent work by Gu \cite{Gu_2016}, in which travel tips are extracted from travel reviews. Other recent threads of research in this field are represented by Bhatnagar, who also proposed a framework for extracting more subtle, albeit more general and simpler, aspects of travel experiences, and by a study concerning a methodology of mapping the data related to medical tourism in South Korea \cite{huh2017method}.
\section{Study design}
\subsection{Collected data}
In order to design the framework and develop its applications, we decided to collect textual data from a single source, namely the 'Older traveler' forum on the Lonely Planet website.
In total, over 30 thousand posts from the last 15 years of the forum's existence were collected into a single database to serve as a base for future testing and annotating jobs. When compared to previous attempts at content analysis of tourism among older adults, such as the work by \cite{kazeminia2015seniors}, this dataset provides a considerably larger scope of data to be collected and analyzed due to its sheer size. However, since the paper presented here focuses on preliminary work in preparing an annotation protocol, only a fraction (100 posts) is used for the current coding and content analysis tasks. This fraction, like all of the forum posts used in this study, was selected randomly with the use of simple random sampling.
The choice of this particular site, and of a section of the site dedicated only to older adults, was based on methodological considerations. Lonely Planet is one of the most popular websites dedicated to tourism, based on a popular series of guide books for tourists. Within the site one can find various forums dedicated to specific locations, as well as to specific demographic groups of travelers, such as older adults, women and members of the LGBTQ community.
Selecting the forum was based on established precedent within the research of on-line communities, such as the studies done by Nimrod \cite{nimrod2010seniors} and Berdychevsky \cite{berdychevsky2015let}. Using a dedicated older adult forum, instead of all posts produced by older adults within the site, was a deliberate choice on the part of the authors. Instead of using an arbitrary age threshold for defining someone as an older adult, we used self-labeling, in the form of participation in the older adults forum, as the defining criterion. The collected data bears out this assumption: all of the posts are written on behalf of the users themselves, and we did not observe a situation in which a younger adult would write on behalf of an older adult. Since the main focus of this paper is to explore the topic ontology for future extraction and knowledge mapping of older adults as a community, it was important to collect posts from a source where such a community exists. Moreover, by obtaining data from a single thematic forum, the authors gained an opportunity to include such aspects of community life as emotional stances, as well as arguments used in discussion. These two constitute an important element in the design of the future framework, and differentiate the research presented here from the established corpus of literature, where barriers, identity and emotions have usually been researched in isolation.
In order to address the research questions presented at the beginning of this article, we decided to collect additional posts from other Lonely Planet forums. In total 17 thousand posts were extracted; this set serves as a baseline for comparing older adults against the general population of travelers. Out of this set, a subset of 100 posts was drawn for the coding task with the use of simple random sampling.
In total, 200 posts are used for the analysis; a dataset of this size is in line with the standard dataset sizes used in studies related to the topic \cite{lin2004representation}. This number of posts also ensures that the results of further statistical analysis are valid as far as statistical significance is concerned.
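For reproducibility, the sampling step itself can be kept very simple; the sketch below (Python/pandas) illustrates drawing the two 100-post samples, with the stand-in data frames and the random seed being assumptions made only for the example.
\begin{verbatim}
import pandas as pd

# Stand-ins for the scraped forum dumps (in practice, the crawler output).
senior_posts = pd.DataFrame({"post_id": range(30000),
                             "text": ["..."] * 30000})
general_posts = pd.DataFrame({"post_id": range(17000),
                              "text": ["..."] * 17000})

# Simple random samples of 100 posts per group for the coding task.
senior_sample = senior_posts.sample(n=100, random_state=42)
general_sample = general_posts.sample(n=100, random_state=42)

coding_set = pd.concat([senior_sample.assign(group="senior"),
                        general_sample.assign(group="general")],
                       ignore_index=True)
print(len(coding_set))   # 200 posts designated for annotation
\end{verbatim}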
\subsection{Literature review}
In order to verify the dimensions of the framework, a literature review was conducted in accordance with the guidelines described by \cite{lin2004representation}. First, a review was conducted in order to identify the most persistent themes with regard to the experiences of older adults on-line in general, and of senior travelers in particular. The themes extracted in this way are presented in Appendix I.
\begin{table*}[h!]
\centering
\caption{Code book}
\label{Code_book}
\begin{tabular}{| p{1.5cm} | p{5cm} | p{7.5cm} |}
\hline
Dimension & Definition & Examples \\
\hline
Age Reference & All instances of referring to age either of oneself or of somebody else. Age used as an argument in discussion or mentioned as a part of a narrative. Includes both literal and figurative mentions of age. & \textit{~~\llap{\textbullet}~~{As a 70 year old... For people my age} ~~\llap{\textbullet}~~{In your age it is important} ~~\llap{\textbullet}~~{A senior has different needs} ~~\llap{\textbullet}~~{I am getting to old for this}}\\
\hline
Location reference & All instances of referring to a specific location, which the user has visited, wants to visits, considers visiting, asks for advice about. Locations include whole continents, countries, cities, provinces, villages or particular locations like tourist attractions, churches, building, land marks etc. & \textit{~~\llap{\textbullet}~~{Has anyone been to Romania?} ~~\llap{\textbullet}~~{Visit Singapore, it's awesome.} ~~\llap{\textbullet}~~{I am off to Cambodia} ~~\llap{\textbullet}~~{I was In China in the seventies}}\\
\hline
Limitations & This dimension refers to instances of external factors which, in the perception of the poster, hinder the individual's ability to travel, or may provide difficulties while traveling to a specific location. Includes both reports of such factors, complaints about such limitations, as well as advice and asking for advice on the topic of external limitations. External factors are related to traits typical of the environment, rather than the individual.& \textit{~~\llap{\textbullet}~~{ATMs don't work in Somalia, it's a pain in the ass}
~~\llap{\textbullet}~~{I couldn't go to Vietnam, because my visa expired}
~~\llap{\textbullet}~~{I can't stay abroad, because I can lose my country's pension}
~~\llap{\textbullet}~~{I didn't feel good in a country which treats women badly}
~~\llap{\textbullet}~~{Use a visa card, and exchange your dollars to a local currency, it is better that way}
~~\llap{\textbullet}~~{Ulaanbatar has only one working ATM}
~~\llap{\textbullet}~~{The local police is corrupt, they wouldn't help me unless I bribed them}
~~\llap{\textbullet}~~{Any advice on dealing with altitude sickness in Peru?}}
\\
\hline
Ageism & Refers to any instance using ageist stereotypes in discussion, or a description of ageist experiences held by the user.& \textit{~~\llap{\textbullet}~~{You old people are all the same with your fear} ~~\llap{\textbullet}~~{When I was standing in line for the cruise, some young fellow told me that in my age, I am not fit for ocean cruises to Antarctica}}\\
\hline
Barriers & This dimension refers to instances of internal factors which, in the perception of the poster, hinder the individual's ability to travel, or may provide difficulties while traveling to a specific location. Includes both reports of such factors, complaints about such limitations, as well as advice and asking for advice on the topic of internal barriers. These are typical of the individual. & \textit{~~\llap{\textbullet}~~{I would love to travel, but my husband doesn't allow it}
~~\llap{\textbullet}~~{The fact that I have to work limits my travel plans}
~~\llap{\textbullet}~~{This year I skipped South Asia, because my mother died}
~~\llap{\textbullet}~~{I wish I could go backpack, but with my sick knees it became impossible}
~~\llap{\textbullet}~~{I was afraid of malaria in India}}
\\
\hline
Experiences & Refers to experiences of traveling and recounting of encounters made during one's travels, regardless of location and time perspective. This includes particular experiences, as well as general observations drawn from single experiences.&\textit{ ~~\llap{\textbullet}~~{When we entered the market, I saw elephants and arabian spices on display}
~~\llap{\textbullet}~~{The cruise was amazing!}
~~\llap{\textbullet}~~{The hostels I've been to always were full of interesting people of all ages}}
\\
\hline
Motivation & Refers to the reasons and motivations for traveling expressed by the poster, regardless of location and time perspective. This includes motivations for particular trips, as well as general statements about why and how the poster likes to travel. & \textit{~~\llap{\textbullet}~~{I would love to visit India, because I want to spiritually awake}
~~\llap{\textbullet}~~{Cambodia is great if you want escape from the fast paced life of western cities}
~~\llap{\textbullet}~~{This is why I travel, this gives me a sense of being alive}
~~\llap{\textbullet}~~{I really love to travel solo, it lets me fully enjoy my experiences}}
\\
\hline
\end{tabular}
\end{table*}
\subsection{Coding}
The coding job was initiated with the objective of marking the entire corpus. In order to conduct the task, dedicated software called QDA Miner Lite (version 2.0.1) was used. A group of annotators was selected for the task and instructed in the tagging protocol. Each post was read by two annotators who were to mark whether or not the extracted themes appeared in a given post. During the coding task the annotators would discuss any discrepancies in their tagging, and the conclusions of these discussions were later applied to the code book. After the coding job was finished and all discrepancies had been addressed, inter-annotator agreement was calculated with the use of Cohen's kappa; the value achieved (83\%) indicates a high level of inter-rater agreement.
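As a sketch of how the agreement statistic can be computed (Python with scikit-learn), the two annotators' binary labels for a single dimension over the 200 posts are compared; the label vectors below are simulated for illustration and are not the actual codings.
\begin{verbatim}
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(7)

# Simulated binary codings of one dimension (e.g. "Age reference")
# by the two annotators over the 200 sampled posts.
annotator_a = rng.integers(0, 2, size=200)
agree = rng.random(200) < 0.92            # ~92% raw agreement
annotator_b = np.where(agree, annotator_a, 1 - annotator_a)

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
\end{verbatim}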
\begin{table*}[h]
\centering
\caption{Frequencies of codes in the collected samples}
\label{tab_freq}
\begin{tabular}{|p{2.5cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.15cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|}
\hline
Group & Age reference & Location reference & Ageism & Limitations & Barriers & Experiences & Motivation \\
\hline
Senior tourist & 20\% & 39\% & 6\% & 22\% & 9\% & 37\% & 15\% \\
\hline
General population & 2\% & 42\% & 0\% & 20\% & 1\% & 20\% & 4\% \\
\hline
\end{tabular}
\newline
Since more than one dimension could occur in a single post, the percentages do not add up to 100\%.
\end{table*}
\section{Results}
Due to the nature of the tagged corpora, we decided to use two statistical measures for testing the validity of the claimed hypotheses. The first measure, Student's t-test, was used for testing the significance of the observed differences between seniors and the general population. Therefore, Hypotheses I to III can be translated into a set of statistical hypotheses about the average difference between posts from two independent samples. For the sake of further testing, the significance level for all of the tests was set to 0.05. However, due to the fact that multiple pairs of values are compared at once, a Bonferroni correction was applied to the significance threshold. The adjusted significance threshold was therefore set to 0.007. Hypothesis IV, on the other hand, is addressed with the use of Cohen's kappa as a measure of agreement between two dimensions in a single post.
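The comparison itself is a standard two-sample test on the binary code indicators; the sketch below (Python/SciPy) shows the computation for a single dimension together with the Bonferroni-adjusted threshold, with the indicator vectors being simulated for illustration only.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated 0/1 indicators of one dimension (e.g. "Age reference") in the
# 100 senior-tourist posts and the 100 general-population posts.
senior = (rng.random(100) < 0.20).astype(int)
general = (rng.random(100) < 0.02).astype(int)

t_stat, p_value = stats.ttest_ind(senior, general, equal_var=False)

alpha = 0.05
n_dimensions = 7                          # seven coded dimensions
bonferroni_alpha = alpha / n_dimensions   # approx. 0.007
print(t_stat, p_value, p_value < bonferroni_alpha)
\end{verbatim}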
The frequency of the dimensions present in the tagged posts is shown in Table \ref{tab_freq}. The values given in the table indicate how often, among all of the posts from a specific group, a given dimension occurred. Since more than one dimension could occur in a single post, the percentages do not add up to 100\%. Even without conducting the statistical tests, one can notice some general trends visible in the data. First, references to location and personal experiences are the most frequent aspects of the framework, visible in both groups. There is also a stark contrast between references to external and internal hindrances. While limitations, for the most part, are mentioned quite often, the barriers are scant, especially outside of the senior tourist group.
\begin{table*}[h]
\centering
\caption{T-test scores}
\label{T_test}
\begin{tabular}{|p{3cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.15cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|}
\hline
Older adults compared to: & Age reference & Location reference & Ageism & Limitations & Barriers & Experiences & Motivation \\
\hline
General population& 4.24**& -0.43 & 2.52** & -0.34 & 2.64** & 2.7** & 2.7**\\
\hline
\end{tabular}
\newline
*significant difference at $\alpha=0.05$ \newline
**significant difference at $\alpha=0.007$
\end{table*}
Intergroup comparison provides further insight into the differences between the designed dimensions. By comparing the occurrences of the age reference and ageism dimensions between the senior tourists and the general population, one can clearly see that age references within the Lonely Planet forums are almost exclusively related to the senior tourist community, which confirms the validity of Hypothesis I. It has to be pointed out, however, that experiences of ageism are reported rarely (in only 6\% of the posts). This may mean that explicit forms of ageism are not the main facet of the older adult identity, unlike more subtle implicit forms, such as internalized ageism, which may be expressed by age references in a specific context. This confirms the findings on the language of ageism made by \cite{gendron2015language}, and provides an additional topic for developing algorithms for text mining.
The relation between age, limitations and barriers expressed in Hypothesis II, when analyzed with the given data, also provides interesting results. Although there is no significant difference between the senior tourist and general population groups when it comes to mentioning limitations, the difference in mentioning barriers is visible and significant, even though it is a very scarcely appearing dimension. This lends credence to the notion that older adults who are tourists may experience a varying set of problems when engaging in their travels. However, in view of these results, the hypothesis was validated only partly, within the dimension of internal factors such as health and personal issues. This is in line with results obtained by other researchers \cite{kazeminia2015seniors}.
Hypothesis IV stated that older adult tourists, when discussing their limitations and barriers, associate them with their age. In order to validate this hypothesis, the authors decided to calculate the Cohen's kappa agreement index between the age reference dimension and the limitations and barriers dimensions within the senior tourist group. The obtained value of Cohen's kappa was 0 in both cases, which indicates that the relation between the given dimensions is no different from one that would arise from pure chance. The ramifications of this observation are crucial. It shows that there is no direct, explicit relation between the perception of age and the factors that hinder the older adults' plans for travel. Therefore, Hypothesis IV has to be deemed falsified. However, this finding corroborates the observations presented when analyzing Hypothesis II.
Hypothesis III focused on recalling the traveling experiences of older adults. As shown in Table \ref{T_test}, it is clear that tourists from the senior tourist group are significantly more likely to recall their travel experiences when engaging within their on-line community. This leads to the conclusion that the third hypothesis is validated.
\section{Conclusions \& Future work}
By conducting the literature review and a coding task, supplemented by the analysis of codes, the authors planned to better understand which aspects of the dynamics of senior tourist on-line communities are persistent and demand more focus when designing a framework for the automated mapping and extraction of knowledge from unstructured text created by an on-line community. The analysis of the data leads the authors to accept non-trivial conclusions with regard to the nature of on-line communities of older adult tourists.
First of all, the role of age, or the perception of one's age, exists in a complex relation with other aspects of the community. Even though the perception of age is an important part of the community of older adult tourists, it is not, at least when explicitly stated, related to the problems they face while traveling. What is more, there is no difference in mentions of external limitations between older adults and general users of the travel forum, which means that older adults face such problems at similar rates to other groups.
What sets them apart, however, is their tendency to report problems related to health and personal issues. Moreover, older adults are more likely to share experiences from the travels they have made.
These findings have significant ramifications for the knowledge extraction projects proposed in the Introduction to this article. The fact that older adults do not verbally associate their problems with their age indicates that the machine learning algorithms would have to focus not only on detecting the dimensions of limitations experienced by older adults, but also on extracting implicit relations between them and the age of tourists.
The importance of storytelling and motivation for older adults also provides an additional level of challenge for the framework. Here the use of algorithms such as recursive neural networks is suggested by the authors to properly extract the sequence of events described by the poster, while the motivation aspect can be connected to the limitations and age dimensions, so that a more comprehensive picture of older adults' experiences can be extracted.
What is more, the task of extracting knowledge not explicitly stated by the members of the on-line community would constitute an interesting topic in the field of natural language processing.
The full corpus of texts from the traveling forums will be tagged by a group of trained taggers and later will be used for training a set of machine learning algorithms in order to automatically detect the designed dimensions of the framework, as well as mapping the relations between them. The database thus created would connect specific locations with the specific problems and experiences typical to the members of the 50+ demographic category. In the final stage the framework would be embedded in a web application that would allow us to extract structured implicit knowledge that can be learned from the communities of older adult tourists.
Such algorithms can also be combined with knowledge on developer team work in order to further improve their quality \cite{turek2011wikiteams,wierzbicki2010learning}.
\section{Acknowledgments.}
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 690962.
\bibliographystyle{splncs03}
\section{Introduction}
Given $n$ cities with their pairwise distances $d_{i,j}$ the \emph{traveling salesman problem} (TSP)
asks for a shortest tour that visits each city exactly once.
This problem is known to be NP-hard~\cite{Kar1972} and therefore much effort has been spent
to design efficient heuristics that are able to find good tours.
A heuristic $A$ for the traveling salesman problem is said to have \emph{approximation ratio}
$c$ if for every TSP instance it finds a tour that is at most
$c$ times longer than a shortest tour.
We will consider here only \emph{metric} TSP instances, i.e., TSP instances where
the distances between the $n$ cities satisfy the triangle inequality $d_{i,j}\le d_{i,k} + d_{k,j}$
for all $1\le i,j,k\le n$. Well studied special cases of the metric TSP are the
\emph{euclidean} and the \emph{rectilinear} TSP. In these instances the cities are points in the plane
and the distance between two cities is defined as the euclidean respectively
rectilinear distance. A third example of metric TSP instances are the \emph{graphic}
TSP instances. Such an instance is obtained from an (unweighted, undirected)
connected graph $G$ which has as vertices
all the cities. The distance between two cities is then defined as the length of a shortest path
in $G$ that connects the two cities.
One of the most natural heuristics for the TSP is the \emph{nearest neighbor rule} (NNR)~\cite{Flo1956}.
This rule grows \emph{partial tours} of increasing size where a partial tour is an ordered subset
of the cities. The nearest neighbor rule starts with a partial tour consisting of a single city $x_1$.
If the nearest neighbor rule has constructed a partial tour $(x_1, x_2, \ldots, x_k)$ then it extends
this partial tour by a city $x_{k+1}$ that has smallest distance to $x_k$ and is not yet contained
in the partial tour. Ties are broken arbitrarily.
A partial tour that contains all cities yields a TSP tour by going from the last
city to the first city in the partial tour. Any tour that can be obtained this way is called an
NNR tour.
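For concreteness, the following short sketch (in Python) implements the nearest neighbor rule exactly as described above, breaking ties by smallest city index; the small rectilinear grid instance at the end is only an illustrative example.
\begin{verbatim}
def nnr_tour(dist, start=0):
    # Grow a partial tour from `start`, always moving to a closest
    # unvisited city (ties broken by smallest index).
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: (dist[last][c], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

# Illustrative metric instance: rectilinear distances on a 2 x 4 grid.
points = [(x, y) for y in range(2) for x in range(4)]
dist = [[abs(px - qx) + abs(py - qy) for (qx, qy) in points]
        for (px, py) in points]
tour = nnr_tour(dist)
print(tour, tour_length(dist, tour))
\end{verbatim}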
\subsection{Known Results}
Rosenkrantz, Stearns, and Lewis~\cite{RSL1977} proved that on an $n$ city metric TSP instance
the nearest neighbor rule has approximation ratio at most
$\frac12\lceil\log n\rceil +\frac12$, where throughout the paper $\log$ denotes the logarithm with base 2.
They also constructed a family of metric TSP instances that show
a lower bound of $\frac13\log n$ for the approximation ratio of the
nearest neighbor rule.
Johnson and Papadimitriou~\cite{JP1985} presented a simplified construction that yields a lower
bound of $\frac16\log n$.
Hurkens and Woeginger~\cite{HW2004} constructed a simple family of graphic TSP instances
that proves a lower bound of $\frac14\log n$. Moreover they present in the same paper
a simple construction of a family of euclidean TSP instances that proves
that the nearest neighbor rule has approximation
ratio at least $\left(\sqrt 3 -\frac32\right) \left(\log n -2\right)$ where
$\sqrt 3 -\frac32 \approx 0.232$.
For the graphic TSP Pritchard~\cite{Pri2007} presents a more complicated construction
that shows that for each $\epsilon > 0$
the approximation ratio of the nearest neighbor rule on graphic TSP instances
is at least $\frac1{2+\epsilon} \log n$.
\subsection{Our Contribution}
We will present in the next section a very simple construction that proves a lower bound of
$\frac14\log n$ for the graphic, euclidean, and rectilinear TSP at the same time.
Our construction proves for the first time a lower bound on the approximation ratio
of the nearest neighbor rule for rectilinear TSP instances. Moreover, we improve
the so far best known lower bound of~\cite{HW2004} for the approximation ratio of the nearest
neighbor rule for euclidean TSP instances.
\section{The Construction of Bad Instances}\label{sec:construction}
We will now describe our construction of a family of metric TSP instances $G_k$ on which
the nearest neighbor rule yields tours that are much longer than optimum tours.
As the set of cities $V_k$ we take the points of a $2\times (8\cdot 2^k-3)$ subgrid of $\mathbb{Z}^2$.
Thus we have $|V_k|= 16 \cdot 2^k-6$.
Let $G_k$ be any TSP-instance defined on the cities $V_k$ that satisfies the following conditions:
\begin{itemize}
\item[(i)] if two cities have the same x-coordinate or the same y-coordinate
their distance is the euclidean distance between the two cities.
\item[(ii)] if two cities have different x-coordinate and different y-coordinate
then their distance is at least as large as the absolute difference between
their x-coordinates.
\end{itemize}
Note that if we choose as $G_k$ the euclidean or the rectilinear TSP instance on $V_k$
then conditions (i) and (ii) are satisfied.
We can define a graph on $V_k$ by adding an edge between each pair of cities
at distance 1. The graphic TSP that is induced by this graph is exactly the rectilinear TSP.
The graph for $G_0$ is shown in Figure~\ref{fig:G0}. We label the lower left vertex
in $V_k$ as $l_k$ and the top middle vertex in $V_k$ as $m_k$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1]
\draw[step = 1cm] (0,0) grid (4,1);
\foreach \x in {0,1,2,3,4}
\foreach \y in {0,1}
\fill (\x, \y) circle(1mm);
\draw[blue,->] (0.1,0.1) -- (0.1, 0.85);
\draw[blue,->] (0.1,0.9) -- (0.9,0.9);
\draw[blue,->] (0.9,0.85) -- (0.9,0.1);
\draw[blue,->] (1.1,0.1) -- (1.9,0.1);
\draw[blue,->] (2.1,0.1) -- (2.9,0.1);
\draw[blue,->] (3.1,0.1) -- (3.9,0.1);
\draw[blue,->] (3.9,0.1) -- (3.9, 0.85);
\draw[blue,->] (3.9,0.9) -- (3.1,0.9);
\draw[blue,->] (2.9,0.9) -- (2.1,0.9);
\draw (0,0) node[anchor = east] {$l_0$};
\draw (2,1) node[anchor = south] {$m_0$};
\end{tikzpicture}
\caption{The graph defining the graphic TSP $G_0$ together with a partial NNR tour that
connects $l_0$ with $m_0$.}
\label{fig:G0}
\end{figure}
Our construction of an NNR tour in $G_k$ will only make use of the properties (i) and (ii) of $G_k$.
Thus with the same proof we get a result for euclidean, rectilinear and graphic TSP instances.
We will prove by induction on $k$ that the nearest neighbor rule can find a rather long tour in $G_k$.
For this we need to prove the following slightly more general result which is
similar to Lemma~1 in~\cite{HW2004}.
\begin{lemma}
\label{lemma}
Let the cities of $G_k$ be embedded into $G_m$ with $m > k$. Then there exists a partial NNR tour
in $G_m$ that
\begin{itemize}
\item[\rm(a)] visits exactly the cities in $G_k$,
\item[\rm(b)] starts in $l_k$ and ends in $m_k$, and
\item[\rm(c)] has length exactly $(12+4k)\cdot 2^k-3$.
\end{itemize}
\end{lemma}
\begin{proof}
We use induction on $k$ to prove the statement. For $k=0$ a
partial NNR tour of length $12\cdot 2^0-3=9$ that satisfies (a), (b), and (c)
is shown in Figure~\ref{fig:G0}.
Now assume we already have defined a partial NNR tour for $G_k$.
Then we define a partial NNR tour for $G_{k+1}$ recursively as follows.
As $|V_{k+1}| = 16 \cdot 2^{k+1}-6 = 2\cdot(16 \cdot 2^k-6) + 6 = 2\cdot |V_k| + 6$
we can think of $G_{k+1}$ to be the disjoint union of two copies $G_k'$ and $G_k''$ of $G_k$
separated by a $2\times 3$ grid. This is shown in Figure~\ref{fig:Gk}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0) -- (0,1) -- (4,1) -- (4,0) -- (0,0) ;
\draw[step = 1cm] (5,0) grid (7,1);
\draw (8,0) -- (8,1) -- (12,1) -- (12,0) -- (8,0) ;
\fill (0,0) circle(0.1); \draw (0,0) node[anchor = north] {$l_{k+1}=l_k'$};
\fill (2,1) circle(0.1); \draw (2,1) node[anchor = south] {$m_k'$};
\foreach \x in {5,6,7} \fill (\x,0) circle(0.1);
\foreach \x in {5,6,7} \fill (\x,1) circle(0.1);
\fill (8,0) circle(0.1); \draw (8,0) node[anchor = north] {$l_k''$};
\fill (10,1) circle(0.1); \draw (10,1) node[anchor = south] {$m_k''$};
\draw (6,1) node[anchor = south] {$m_{k+1}$};
\draw[dash pattern = on 2pt off 2pt,blue,->] (0.1,0.05) -- (1.9,0.95);
\draw[blue,->] (2.1,0.95) .. controls (4.5,0.75) .. (4.9,0.95);
\draw[blue,->] (5.1,0.9) -- (5.1, 0.15);
\foreach \x in {5,6,7} \draw[blue,->] (\x.1,0.1) -- (\x.9, 0.1);
\draw[dash pattern = on 2pt off 2pt,blue,->] (8.1,0.05) -- (9.9,0.95);
\draw[blue,->] (9.9,1.05) .. controls (7.5,1.25) .. (7.1,1.05);
\draw[blue,->] (6.9,0.9) -- (6.1, 0.9);
\draw (2,0.4) node {$G'_k$};
\draw (10,0.4) node {$G''_k$};
\end{tikzpicture}
\caption{The recursive construction of a partial NNR tour for the instance $G_{k+1}$. The dashed lines
indicate partial NNR tours in $G_k'$ and $G_k''$. }
\label{fig:Gk}
\end{figure}
Now we can construct a partial NNR tour for $G_{k+1}$ as follows. Start in vertex $l_k'$ and follow
the partial NNR tour in $G'_k$ that ends in vertex $m_k'$. The leftmost top vertex of the $2\times 3$-grid
is now a closest neighbor of $m_k'$ that has not been visited so far. Go to this vertex,
visit some of the vertices of the $2\times 3$-grid as indicated in Figure~\ref{fig:Gk}, and
then go to vertex $l_k''$. From this vertex follow the partial NNR tour in $G_k''$ which ends in
vertex $m_k''$. A nearest neighbor for this vertex now is the rightmost top vertex of
the $2\times3$-grid. Continue with this vertex and go to the left to reach vertex $m_{k+1}$.
The partial NNR tour constructed this way obviously satisfies conditions (a) and (b). Moreover,
this partial NNR tour is also a partial NNR tour when $G_{k+1}$ is embedded into some $G_l$ for $l>k+1$.
The length of this partial NNR tour is twice the length of the partial NNR tour in $G_k$
plus five edges of length 1 plus two edges of length $\frac12\left(\frac12\cdot |V_k|+1\right)$.
Thus we get a total length of
$$2 \left((12+4k)\cdot 2^k-3\right) + 5 + 8\cdot 2^k -2 ~=~ (12+4(k+1))\cdot 2^{k+1}-3$$
for the partial NNR tour constructed in $G_{k+1}$. This proves condition (c).
\end{proof}
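The closed-form length in condition (c) can also be checked mechanically against the recursion used in the proof; the short sketch below (in Python) verifies for small $k$ that $L(0)=9$ together with $L(k+1)=2L(k)+5+(8\cdot 2^k-2)$ indeed gives $L(k)=(12+4k)\cdot 2^k-3$.
\begin{verbatim}
def closed_form(k):
    return (12 + 4 * k) * 2**k - 3

L = 9                       # length of the partial NNR tour in G_0
assert L == closed_form(0)
for k in range(30):
    # two copies of G_k, five unit edges, two edges of length 4*2^k - 1
    L = 2 * L + 5 + (8 * 2**k - 2)
    assert L == closed_form(k + 1)
print("recurrence matches the closed form for k = 0,...,30")
\end{verbatim}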
\begin{theorem}
\label{thm:main}
On graphic, euclidean, and rectilinear TSP instances with $n$ cities
the approximation ratio of the nearest neighbor rule is no better than $\frac14\cdot \log n - 1$.
\end{theorem}
\begin{proof}
The instance $G_k$ defined above has $n:=16 \cdot 2^k-6$ cities and an optimum TSP tour
in $G_k$ has length $n$. As shown in Lemma~\ref{lemma} there exists a partial NNR tour in $G_k$
of length at least $(12+4k)\cdot 2^k-3$. Thus the approximation ratio of the nearest neighbor rule
is no better than
$$\frac {(12+4k)\cdot 2^k-3}{16 \cdot 2^k-6} ~\ge~ \frac{12+4k}{16}~=~\frac{3+k}{4}~=~\frac{3+\log\left(\frac{n+6}{16}\right)}{4}~\ge~\frac14\left(\log n - 1\right).$$
\end{proof}
\section{Comments}
Conditions (i) and (ii) in Section~\ref{sec:construction} are satisfied whenever
the distances in $G_k$ are defined by an $L^p$-norm. Thus Theorem~\ref{thm:main} not only holds
for the $L^2$- and the $L^1$-norm but for all $L^p$-norms.
As already noted in~\cite{JP1985} and~\cite{HW2004}
one can make the NNR tour unique in the euclidean and rectilinear case (and more generally in the $L^p$-case)
by moving all cities by some small amount.
The $\Theta(\log n)$ lower bound of our family is independent of the city in which the nearest neighbor rule starts.
\bibliographystyle{plain}
\begin{comment}
\section{Introduction}
Suppose we want to plan a sporting event, where there are $2n$ teams, each team should play against each other team exactly once, and no team plays more than one game in the same round (this is known as a \emph{round-robin} tournament). What is the minimum number of rounds that such a competition should take? How can we find such a scheduling?
This problem was first appeared in a paper of Kirkman \cite{Kirk} from 1847, and even though an answer to it is quite simple, it leads to many interesting real-life related problems. For example, suppose that some specific teams cannot play against each other (at all or in specific rounds), no team plays twice in a row in the same field, etc. An obvious attempt to translate this type of problems into a graph theoretical setting leads us to the notions of a $1$-\emph{factorization} and a \emph{proper edge coloring} which will be defined bellow.
Let $G$ be a graph on $2n$-vertices with edge set $E(G)$. A $1$-\emph{factor} of $G$ (also known as a \emph{perfect matching}) is a collection of $n$ edges in $E(G)$, no two share a vertex. A $1$-\emph{factorization} of $G$ is a collection $\mathcal F$ of edge-disjoint $1$-factors, such that each edge $e\in E(G)$ belongs to exactly one $F\in \mathcal F$. Going back to the sports scheduling example, one can form a graph $G$ on $2n$ vertices, each vertex represents a team, and the edge set consists of all pairs of teams that should play against each other (here we take into account the possibilities of teams which are not allowed to play against each other. If there are no such teams, then $G$ is the complete graph). The objective is to plan a scheduling that consists of as few rounds as possible. Clearly, a $1$-factorization (if exists) corresponds to such a scheduling.
It is easy to see that if a graph $G$ has a $1$-factorization $\mathcal F$, then it must be $|\mathcal F|$-\emph{regular} (that is, all its vertices are of degree exactly $|F|$). The opposite statement can be easily disproved. Indeed, for example, take $G$ to be a collection of vertex disjoint cliques of size $2d+1$ for some $d$. Such a graph is $2d$-regular and does not contain a $1$-factor, and therefore cannot contain a $1$-factorization. In general, even if a $1$-factorization of the given graph $G$ does not exist, every round of the sporting event still consists of a matching which is not necessarily perfect (that is, just a collection of disjoint edges without the requirement of covering all the vertices). Moreover, every $e\in E(G)$ should belong to exactly one matching. Suppose that $\mathcal M$ is such a collection of matchings and for each $e\in E(G)$, let $M_e$ denote the unique $M\in \mathcal M$ for which $e\in M$. Such a map $c:E(G)\rightarrow \mathcal M$ is called a \emph{proper edge coloring} of $G$ using $|\mathcal M|$ colors, which will be referred to as an $|\mathcal M|$-\emph{coloring}. In other words, a $k$-coloring is a function $c:E(G)\rightarrow \{1,\ldots, k\}$ in such a way that each color class (that is, a set of the form $c^{-1}(\{i\})$) forms a matching. The \emph{chromatic index} of a graph $G$, denote by $\chi'(G)$, is the least $k$ such that a $k$-coloring of $E(G)$ exists. Observe that $\chi'(G)$ corresponds to the minimum possible number of rounds in a tournament with a `restriction graph' $G$.
The chromatic index is a well studied graph parameter and there are quite tight bounds on it. For example, in order to obtain an easy lower bound, observe that for every grpah $G$ we have $\chi'(G)\geq \Delta:=\Delta(G)$, where $\Delta(G)$ stands for the maximum degree of $G$. For the upper bound, perhaps surprisingly, in 1964 Vizing \cite{Vizing} proved that $\chi'(G)\leq \Delta+1$. Even though $\chi'(G)$ can be only one out of two possible numbers, the problem of computing its actual value is known to be NP hard! (see \cite{Hoy}).
Observe that if $G$ is $\Delta$-regular, then $\chi'(G)=\Delta$ is equivalent to the proerty `a $1$-factorization of $G$ exists'. This leads us to the following, well studied, natural problem:
\begin{problem}
Find sufficient conditions on a $d$-regular graph $G$ that ensures $\chi'(G)=d$.
\end{problem}
\begin{remark} Note that a $1$-factorization cannot exists if $|V(G)|=n$ is odd, and therefore, from now on, even if not explicitly stated, we will always assume that $n$ is even.
\end{remark}
The above problem has a relatively long history and has been studied by various researchers. Perhaps the most notable result is the following recent theorem due to Csaba, K\"uhn, Lo, Osthus, and Treglown \cite{CKLOT}, proving over 170 pages a long standing conjecture of Dirac from the '50s.
\begin{theorem} [Theorem 1.1.1 in \cite{CKLOT}]
\label{theorem: long}
Let $n$ be a sufficiently large even integer, and let $d\geq 2\lfloor n/4\rfloor-1$. Then, every $d$-regular graph $G$ on $n$ vertices contains a $1$-factorization.
\end{theorem}
Observe that Theorem \ref{theorem: long} is best possible as for every $d\leq 2\lfloor n/4\rfloor-2$, there exist $d$-regular graphs that do not contain even one perfect matching. Therefore, for smaller values of $d$, it makes sense to restrcited our attnetion to more specific families of graphs.
One natural candidate to begin with is the random $d$-regular graph, denoted by $G_{n,d}$. That is, $G_{n,d}$ denotes a $d$-regular graph on $n$ vertices, chosen uniformly at random among all such graphs. The study of this random graph model has attracted the attention of many researchers in the last few decades. Unlike the traditional binomial random graph $G_{n,p}$ (where each edge of the complete graph is being added with probability $p$, independently at random), in this model we have many dependencies and therefore it is harder to work with. For a detailed discussion about this model, along with many results and open problems, the reader is referred to the survey of Wormald \cite{Wormald}. In 1997, Molloy, Robalewska, Robinson, and Wormald \cite{M} proved that a typical $G_{n,d}$ admits on a $1$-factorization for all fixed $d\geq 3$ where $n$ is a sufficiently large even number. Their proof cannot be extended to all values of $d$ as it is based on the analysis of the number of $1$-actorizations in the so called `configuration model', which is a way to generate a random $d$-regular graph, for $d$ which is `not too large' (for more details see \cite{Bol}). In this paper, as a first main result, we prove the existence of a $1$-factorization in a typical $G_{n,d}$ for all sufficiently large $d$. Therefore, together with the work in \cite{M} we settle this problem for all $3\leq d\leq n-1$.
\begin{theorem}
\label{main}
There exists $d_0\in \mathbb{N}$ such that for all $d_0\leq d\leq n/2$, whp a random $d$-regular gaph $G_{n,d}$ has a $1$-factorization.
\end{theorem}
In fact, Theorem \ref{main} follows as a corollary of a more general theorem about the existence of $1$-factorization in certain `pseudorandom' graphs. Before stating the theorem we need the following definition. Given a $d$-regular graph $G$ on $n$ vertices, let $A_G$ denote its adjacency matrix (that is, $A_G$ is an $n\times n$, $0/1$ matrix with $A_G(ij)=1$ if and only if $ij\in E(G)$). Clearly, $A_G\cdot \textbf{1}=d\textbf{1}$, where $\textbf{1}$ is the all $1$ vector in $\mathbb{R}^n$, and therefore $d$ is an eigenvalue of $A_G$ (and is maximal in absolute value!). Moreover, as $A_G$ is a symmetric, real-valued matrix, it has $n$ real valued eigenvalues (not necessarily distinct though). Let
$$d:=\lambda_1\geq \lambda_2\geq \ldots \lambda_n$$ denote its eigenvalues, labeled in a decreasing order.
Let $\lambda(G):=\max\{|\lambda_2|,|\lambda_n|\}$,
and we say that $G$ is an $(n,d,\lambda)$-graph if $\lambda(G)\leq \lambda$.
It turns out that if $\lambda$ is `bounded away' from $d$ then the graph $G$ has some `nice' expansion properties (see for example \cite{KS}). For example, it is known (see \cite{T} and Friedland) that a typical $G=G_{n,d}$ has
$\lambda(G)=O(\sqrt{d})$, and therefore is an $(n,d,O(\sqrt{d}))$-graph. For a more detailed discussion about $(n,d,\lambda)$-graphs we refer the curious reader to the survey of Krivelevich and Sudakov \cite{KS}.
In our second main result we show that if $\lambda$ is `polynomialy' bounded-away from $d$, then any $(n,d,\lambda)$-graph has a $1$-factorization.
\begin{theorem}
\label{main:pseudorandom}
There exists $d_0$ such that for all large enough $n$ and $d_0\leq d\geq n/2$ the following holds. Suppose that $G$ is an $(n,d,\lambda)$-graph, with $\lambda\leq d^{0.9}$. Then, $G$ has a $1$-factorization.
\end{theorem}
In particular, our proof also gives us a non-trivial lower bound for the number of $1$-factorizations in a typical $G_{n,d}$. Observe that the same proof as in \cite{LL} gives that the number of 1-factorizations of any $d$-regular graph is at most
$$\left((1+o(1))\frac{d}{e^2}\right)^{dn/2}.$$
In the following theorem we show that one can obtain a lower bound which is off by a factor of $2$ in the base of the exponent.
\begin{theorem}
\label{main:counting} Let $d\geq ?$. Then whp the number of $1$-factorizations of $G_{n,d}$ is at least
$$\left((1+o(1))\frac{d}{2e^2}\right)^{dn/2}.$$
\end{theorem}
\end{comment}
\section{Introduction}
The \emph{chromatic index} of a graph $G$, denoted by $\chi'(G)$, is the minimum number of colors
with which it is possible to color the edges of $G$ in a way such that every color class consists of a matching (that is, no two edges of the same color share a vertex). This parameter is one of the most fundamental and widely studied
parameters in graph theory and combinatorial optimization and in particular, is related to optimal scheduling and resource allocation problems and round-robin tournaments (see, e.g., \cite{GDP}, \cite{Lynch}, \cite{MR}).
A trivial lower bound on $\chi'(G)$
is $\chi'(G)\geq \Delta(G)$, where $\Delta(G)$ denotes the maximum degree of $G$. Indeed, consider any vertex with maximum degree, and observe that all edges incident to this vertex must have distinct colors.
Perhaps surprisingly, a classical theorem of Vizing \cite{Vizing} from the 1960s shows that $\Delta + 1$ colors are always sufficient, and therefore, $\chi'(G)\in \{\Delta(G),\Delta(G)+1\}$ holds for all graphs. In particular, this shows that one can partition all graphs into two classes: \emph{Class 1} consists of all graphs $G$ for which $\chi'(G)=\Delta(G)$, and \emph{Class 2} consists of all graphs $G$ for which $\chi'(G)=\Delta(G) + 1$. Moreover, the strategy in Vizing's original proof can be used to obtain a polynomial time algorithm to edge color any graph $G$ with $\Delta(G) + 1$ colors (\cite{MG}).
However, Holyer \cite{Hoy} showed that it is actually NP-hard to decide whether a given graph $G$ is in Class 1 or 2. In fact, Leven and Galil \cite{Leven} showed that this is true even if we restrict ourselves to graphs with all the degrees being the same (that is, to \emph{regular graphs}).
Note that for $d$-regular graphs $G$ (that is, graphs with all their degrees equal to $d$) on an even number of vertices, the statement `$G$ is of Class 1' is equivalent to the statement that $G$ contains $d$ edge-disjoint \emph{perfect matchings} (also known as \emph{$1$-factors}). A graph whose edge set decomposes as a disjoint union of perfect matchings is said to admit a \emph{$1$-factorization}.
Note that if $G$ is a $d$-regular \emph{bipartite} graph, then a straightforward application of Hall's marriage theorem immediately shows that $G$ is of Class 1. Unfortunately, the problem is much harder for non-bipartite graphs, and it is already very interesting to find (efficiently verifiable) sufficient conditions
which ensure that $\chi'(G)=\Delta(G)$. This problem is the main focus of our paper.
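To illustrate the bipartite remark above, the following sketch (Python with NetworkX) repeatedly peels perfect matchings off a $d$-regular bipartite graph, producing a proper edge coloring with exactly $d$ colors; the particular circulant-style instance and the peeling strategy are just one convenient way to realize the Hall-type argument algorithmically.
\begin{verbatim}
import networkx as nx

def bipartite_one_factorization(n, d):
    # Edge-color a d-regular bipartite graph with exactly d colors by
    # repeatedly removing perfect matchings; after each removal the
    # remaining graph is still regular, so Hall's theorem applies again.
    left = [("L", i) for i in range(n)]
    G = nx.Graph()
    for i in range(n):
        for j in range(d):                 # L_i ~ R_{(i+j) mod n}
            G.add_edge(("L", i), ("R", (i + j) % n))

    coloring = {}
    for color in range(d):
        M = nx.bipartite.maximum_matching(G, top_nodes=left)
        for u in left:
            v = M[u]                       # matching is perfect
            coloring[(u, v)] = color
            G.remove_edge(u, v)
    return coloring

colors = bipartite_one_factorization(n=8, d=4)
print(sorted(set(colors.values())))        # [0, 1, 2, 3]
\end{verbatim}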
\subsection{Regular expanders are of Class 1}
Our main result shows that $d$-regular graphs on an even number of vertices which are `sufficiently good' spectral expanders, are of Class 1. Before stating our result precisely, we need to introduce some notation and definitions. Given a $d$-regular graph $G$ on $n$ vertices, let $A(G)$ be its adjacency matrix (that is, $A(G)$ is an $n\times n$, $0/1$-valued matrix, with $A(G)_{ij}=1$ if and only if $ij\in E(G)$). Clearly, $A(G)\cdot \textbf{1}=d\textbf{1}$, where $\textbf{1}\in \mathbb{R}^n$ is the vector with all entries equal to $1$, and therefore, $d$ is an eigenvalue of $A(G)$. In fact, as can be easily proven, $d$ is the eigenvalue of $A(G)$ with largest absolute value. Moreover, since $A(G)$ is a symmetric, real-valued matrix, it has $n$ real eigenvalues (counted with multiplicities). Let
$$d:=\lambda_1\geq \lambda_2\geq \ldots \geq \lambda_n \geq -d$$ denote the eigenvalues of $A(G)$, and let $\lambda(G):=\max\{|\lambda_2|,|\lambda_n|\}$. With this notation, we say that $G$ is an $(n,d,\lambda)$-graph if $G$ is a $d$-regular graph on $n$ vertices with $\lambda(G)\leq \lambda$. In recent decades, the study of $(n,d,\lambda)$ graphs, also known as `spectral expanders', has attracted considerable attention in mathematics and theoretical computer science. An example which is relevant to our problem is that of finding a perfect matching in $(n,d,\lambda)$-graphs. Extending a result of Krivelevich and Sudakov \cite{KS}, Ciob\v{a}, Gregory and Haemers \cite{CGH} proved accurate spectral conditions for an $(n,d,\lambda)$-graph to contain a perfect matching. For much more on these graphs and their many applications, we refer the reader to the surveys of Hoory, Linial and Wigderson \cite{HLW}, Krivelevich and Sudakov \cite{KS}, and to the book of Brouwer and Haemers \cite{BH}. We are now ready to state our main result.
\begin{theorem}
\label{main:pseudorandom}
For every $\varepsilon>0$ there exist $d_0,n_0 \in \mathbb{N}$ such that for all even integers $n\geq n_0$ and for all $d \geq d_0$ the following holds. Suppose that $G$ is an $(n,d,\lambda)$-graph with $\lambda\leq d^{1-\varepsilon}$. Then, $\chi'(G)=d$.
\end{theorem}
\begin{remark}
It seems plausible that with a more careful analysis of our proof, one can improve our bound to $\lambda\leq d/\mathrm{poly}(\log{d})$. Since we believe that the actual bound should be much stronger, we did not see any reason to optimize it at the expense of making the paper more technical.
\end{remark}
In particular, since the eigenvalues of a matrix can be computed in polynomial time, \cref{main:pseudorandom} provides a polynomial-time checkable sufficient condition for a graph to be of Class 1. Moreover, our proof gives a probabilistic polynomial-time algorithm to actually find an edge coloring of such a $G$ with $d$ colors. Our result can be viewed as implying that `sufficiently good' spectral expanders are easy instances for the NP-complete problem of determining the chromatic index of regular graphs. It is interesting (although perhaps a bit unrelated) to note that in recent work, Arora et al. \cite{Arora} showed that constraint graphs which are reasonably good spectral expanders are easy for the conjecturally NP-complete Unique Games problem as well.
\subsection{Almost all $d$-regular graphs are of Class 1}
The phrase `almost all $d$-regular graphs' usually refers to one of two settings: `dense' regular graphs and random regular graphs. Let us start with the former. \\
{\bf Dense graphs: }It is well known (and quite simple to prove) that every $d$-regular graph $G$ on $n$ vertices with $d\geq 2\lceil n/4\rceil-1$ has a perfect matching (assuming, of course, that $n$ is even). On the other hand, for every $d\leq 2\lceil n/4\rceil-2$, it is easily seen that there exist $d$-regular graphs on an even number of vertices that do not contain even a single perfect matching.
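One simple construction illustrating this: for even $d$, take two disjoint copies of $K_{d+1}$. The resulting graph is $d$-regular on $n=2(d+1)$ vertices (so that $d=2\lceil n/4\rceil-2$), and since each of its components has an odd number of vertices, it contains no perfect matching.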
In a (relatively) recent breakthrough, Csaba, K\"uhn, Lo, Osthus, and Treglown \cite{CKLOT} proved a longstanding conjecture of Dirac from the 1950s, showing that the above minimum degree condition guarantees not just a single perfect matching, but in fact a full $1$-factorization (and, by the discussion above, this condition cannot be weakened).
\begin{theorem} [Theorem 1.1.1 in \cite{CKLOT}]
\label{theorem: long}
Let $n$ be a sufficiently large even integer, and let $d\geq 2\lceil n/4\rceil-1$. Then, every $d$-regular graph $G$ on $n$ vertices contains a $1$-factorization.
\end{theorem}
Hence, every sufficiently `dense' regular graph is of Class 1. It is worth mentioning that they actually proved a much more general statement about finding edge-disjoint \emph{Hamilton cycles}, from which the above theorem follows as a corollary.\\
\textbf{Random graphs: }As noted above, one cannot obtain a statement like \cref{theorem: long} for smaller values of $d$ since the graph might not even have a single perfect matching.
Therefore, a natural candidate to consider for such values of $d$ is the random $d$-regular graph, denoted by $G_{n,d}$, which is simply
a random variable that outputs a $d$-regular graph on $n$ vertices, chosen uniformly at random from among all such graphs. The study of this random graph model has received much interest in recent years. Unlike the traditional binomial random graph $G_{n,p}$ (where each edge of the complete graph is included independently, with probability $p$), the uniform regular model has many dependencies, and is therefore much harder to work with. For a detailed discussion of this model, along with many results and open problems, we refer the reader to the survey of Wormald \cite{Wormald}.
Working with this model, Janson \cite{Jans}, and independently, Molloy, Robalewska, Robinson, and Wormald \cite{M}, proved that a typical $G_{n,d}$ admits a $1$-factorization for all fixed $d\geq 3$, where $n$ is a sufficiently large (depending on $d$) even integer. Later, Kim and Wormald \cite{KW} gave a randomized algorithm to decompose a typical $G_{n,d}$ into $\lfloor \frac{d}{2}\rfloor$ edge-disjoint \emph{Hamilton cycles} (and an additional perfect matching if $d$ is odd) under the same assumption that $d \geq 3$ is fixed, and $n$ is a sufficiently large (depending on $d$) even integer. The main problem with handling values of $d$ which grow with $n$ is that the so-called `configuration model' (see \cite{Bol} for more details) does not help us in this regime.
Here, as an almost immediate corollary of \cref{main:pseudorandom}, we deduce the following, which together with the results of \cite{Jans} and \cite{M} shows that a typical $G_{n,d}$ on a sufficiently large even number of vertices admits a $1$-factorization for all $3 \leq d \leq n-1$.
\begin{corollary}
\label{main}
There exists a universal constant $d_0\in \mathbb{N}$ such that for all $d_0\leq d\leq n-1$, a random $d$-regular graph $G_{n,d}$ admits a $1$-factorization asymptotically almost surely (a.a.s.).
\end{corollary}
\begin{remark}
By asymptotically almost surely, we mean with probability going to $1$ as $n$ goes to infinity (through even integers). Since a $1$-factorization can never exist when $n$ is odd, we will henceforth always assume that $n$ is even, even if we do not explicitly state it.
\end{remark}
To deduce \cref{main} from \cref{main:pseudorandom}, it suffices to show that we have (say) $\lambda(G_{n,d}) = O(d^{0.9})$ a.a.s. In fact, the considerably stronger (and optimal up to the choice of constant in the big-oh) bound that $\lambda(G_{n,d}) = O(\sqrt{d})$ a.a.s. is known. For $d = o(\sqrt{n})$, this is due to Broder, Frieze, Suen and Upfal \cite{BFSU}. This result was extended to the range $d=O(n^{2/3})$ by Cook, Goldstein, and Johnson \cite{CGJ} and to all values of $d$ by Tikhomirov and Youssef \cite{T}. We emphasize that the condition on $\lambda$ we require is significantly weaker and can possibly be deduced from much simpler arguments than the ones in the references above.\\
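To spell out the deduction, suppose that $\lambda(G_{n,d})\leq C\sqrt{d}$ a.a.s.\ for some absolute constant $C$. Since
$$C\sqrt{d}\leq d^{0.9} \quad \Longleftrightarrow \quad d\geq C^{5/2},$$
the hypothesis of \cref{main:pseudorandom} (applied with, say, $\varepsilon=0.1$) is satisfied a.a.s.\ for all $d$ larger than a suitable absolute constant, and \cref{main} follows.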
It is also worth mentioning that very recently, Haxell, Krivelevich and Kronenberg \cite{HKK} studied a related problem in a random multigraph setting; it would be interesting to check whether our techniques can be applied there as well.
\subsection{Counting $1$-factorizations}
Once the existence of $1$-factorizations in a family of graphs has been established, it is natural to ask for the number of \emph{distinct} $1$-factorizations that any member of such a family admits. Having a `good' approximation to the number of $1$-factorizations can shed some light on, for example, properties of a `randomly selected' $1$-factorization. We remark that the problem of counting $1$-factors (perfect matchings), even for bipartite graphs, has been the subject of fundamental works over the years, both in combinatorics (e.g., \cite{Bregman}, \cite{Egorychev}, \cite{Falikman}, \cite{Schrijver}), as well as in theoretical computer science (e.g., \cite{Valiant}, \cite{JSV}), and has led to many interesting results, including both closed-form bounds and algorithmic approximation results for the permanent of $0/1$ matrices.
As far as the question of counting the number of $1$-factorizations is concerned, much less is known. Note that for $d$-regular bipartite graphs, one can use estimates on the permanent of the adjacency matrix to obtain quite tight results. But quite embarrassingly, for non-bipartite graphs (even for the complete graph!) the number of $1$-factorizations is unknown. The best known upper bound for the number of $1$-factorizations of the complete graph is due to Linial and Luria \cite{LL}, who showed that it is upper bounded by
$$\left((1+o(1))\frac{n}{e^2}\right)^{n^2/2}.$$
Moreover, by following their argument verbatim, one can easily
show that the number of 1-factorizations of any $d$-regular graph is at most
$$\left((1+o(1))\frac{d}{e^2}\right)^{dn/2}.$$
On the other hand, the previously best known lower bound for the number of $1$-factorizations of the complete graph (\cite{Cameron}, \cite{Zinoviev}) is only
$$\left((1+o(1))\frac{n}{4e^2}\right)^{n^{2}/2},$$
which is off by a factor of $4^{n^2/2}$ from the upper bound.
An immediate advantage of our proof is that it gives a lower bound on the number of $1$-factorizations which is better than the one above by a factor of $2$ in the base of the exponent, not just for the complete graph, but for all sufficiently good regular spectral expanders with degree greater than some large constant. More precisely, we will show the following (see also the third bullet in \cref{section-concluding-rmks}):
\begin{theorem}
\label{main:counting} For any $\epsilon >0$, there exist $D=D(\epsilon), N=N(\epsilon) \in \mathbb{N}$ such that for all even integers $n\geq N(\epsilon)$ and for all $d\geq D(\epsilon)$, the number of $1$-factorizations in any $(n,d,\lambda)$-graph with $\lambda \leq d^{0.9}$ is at least
$$\left((1-\epsilon)\frac{d}{2e^2}\right)^{dn/2}.$$
\end{theorem}
\begin{remark} As discussed before, this immediately implies that for all $d\geq D(\epsilon)$, the number of $1$-factorizations of $G_{n,d}$ is a.a.s. at least
$$\left((1-\epsilon)\frac{d}{2e^2}\right)^{dn/2}.$$
\end{remark}
\subsection{Outline of the proof}
It is well known, and easily deduced from Hall's theorem, that any regular bipartite graph admits a $1$-factorization (\cref{Hall-application-1fact-reg-bipartite}). Therefore, if we had a decomposition $E(G)=E(H'_1)\cup\dots\cup E(H'_t)\cup E(\mathcal{F})$, where $H'_1,\dots,H'_t$ are regular balanced bipartite graphs, and $\mathcal{F}$ is a $1$-factorization of the regular graph $G\setminus\bigcup_{i=1}^{t}H'_i$, we would be done. Our proof of \cref{main:pseudorandom} will obtain such a decomposition constructively.
As shown in \cref{existence-good-graph}, one can find a collection of edge-disjoint, regular bipartite graphs $H_1,\dots,H_t$, where $t\ll d$ and each $H_i$ is $r_i$-regular, with $r_i\approx d/t$, which together cover `almost' all of $G$. In particular, one can find an `almost' $1$-factorization of $G$. However, it is not clear how to complete an arbitrary such `almost' $1$-factorization to an actual $1$-factorization of $G$. To circumvent this difficulty, we will adopt the following strategy. Note that $G':=G\setminus\bigcup_{i=1}^{t}H_i$ is a $k$-regular graph with $k\ll d$, and we can further force $k$ to be even (for instance, by removing a perfect matching from $H_1$). Therefore, by Petersen's $2$-factor theorem (\cref{2-factor theorem}), we easily obtain a decomposition $E(G')=E(G'_1)\cup\dots\cup E(G'_t)$, where each $G'_i$ is approximately $k/t$-regular. The key ingredient of our proof (\cref{proposition-completion}) then shows that the $H_i$'s can initially be chosen in such a way that each $R_i:=H_i\cup G'_i$ can be edge-decomposed into a regular balanced bipartite graph and a relatively small number of $1$-factors.
The basic idea in this step is quite simple. Observe that while the regular graph $R_i$ is not bipartite, it is `close' to being one, in the sense that most of its edges come from the regular balanced bipartite graph $H_i=(A_i\cup B_i, E_i)$. Let $R_i[A_i]$ denote the graph induced by $R_i$ on the vertex set $A_i$, and similarly for $B_i$, and note that the number of edges $e(R_i[A_i])=e(R_i[B_i])$. We will show that $H_i$ can be taken to have a certain `goodness' property (\cref{defn-good-graph}) which, along with the sparsity of $G'_i$, enables one to perform the following process to `absorb' the edges in $R_i[A_i]$ and $R_i[B_i]$: decompose $R_i[A_i]$ and $R_i[B_i]$ into the same number of matchings, with corresponding matchings of equal size, and complete each such pair of matchings to a perfect matching of $R_i$. After removing all the perfect matchings of $R_i$ obtained in this manner, we are clearly left with a regular balanced bipartite graph, as desired.
Finally, for the lower bound on the number of $1$-factorizations, we show that there are many ways of performing such an edge decomposition $E(G)=E(H'_1)\cup\dots\cup E(H'_t)\cup E(\mathcal{F})$ (\cref{rmk:counting}), and there are many $1$-factorizations corresponding to each choice of edge decomposition (\cref{rmk:counting-completion}).
\section{Tools and auxiliary results}
In this section we have collected a number of tools and auxiliary results to be used in proving our main theorem.
\subsection{Probabilistic tools}
Throughout the paper, we will make extensive use of the following well-known concentration inequality due to Hoeffding (\cite{Hoeff}).
\begin{lemma}[Hoeffding's inequality]
\label{Hoeffding}
Let $X_{1},\dots,X_{n}$ be independent random variables such that
$a_{i}\leq X_{i}\leq b_{i}$ with probability one. If $S_{n}=\sum_{i=1}^{n}X_{i}$,
then for all $t>0$,
\[
\text{Pr}\left(S_{n}-\mathbb{E}[S_{n}]\geq t\right)\leq\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}}\right)
\]
and
\[
\text{Pr}\left(S_{n}-\mathbb{E}[S_{n}]\leq-t\right)\leq\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}}\right).
\]
\end{lemma}
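In particular, when each $X_i$ takes values in an interval of length at most $1$ (for instance, when the $X_i$ are indicator random variables), the two bounds combine to give
\[
\text{Pr}\left(\left|S_{n}-\mathbb{E}[S_{n}]\right|\geq t\right)\leq 2\exp\left(-\frac{2t^{2}}{n}\right).
\]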
Sometimes, we will find it more convenient to use the following bound on the
upper and lower tails of the Binomial distribution due to Chernoff
(see, e.g., Appendix A in \cite{AlonSpencer}).
\begin{lemma}[Chernoff's inequality]
Let $X \sim Bin(n, p)$ and let
${\mathbb E}(X) = \mu$. Then
\begin{itemize}
\item
$\text{Pr}[X < (1 - a)\mu ] < e^{-a^2\mu /2}$
for every $a > 0$;
\item $\text{Pr}[X > (1 + a)\mu ] <
e^{-a^2\mu /3}$ for every $0 < a < 3/2$.
\end{itemize}
\end{lemma}
\noindent
\begin{remark}\label{rem:hyper} These bounds also hold when $X$ is
hypergeometrically distributed with
mean $\mu $.
\end{remark}
Before introducing the next tool to be used, we need the following
definition.
\begin{definition}
Let $(A_i)_{i=1}^n$ be a collection of events in some probability
space. A graph $D$ on the vertex set $[n]$ is called a
\emph{dependency graph} for $(A_i)_i$ if, for every $i$, the event $A_i$ is mutually
independent of all the events $\{A_j: j\neq i,\ ij\notin E(D)\}$.
\end{definition}
The following is the so-called Lov\'asz Local Lemma, in its symmetric version (see, e.g., \cite{AlonSpencer}).
\begin{lemma}[Local Lemma]\label{LLL}
Let $(A_i)_{i=1}^n$ be a sequence of events in some probability
space, and let $D$ be a dependency graph for $(A_i)_i$. Let $\Delta:=\Delta(D)$ and
suppose that $\text{Pr}\left[A_i\right]\leq q$ for every $i$, where $q$ is such that $eq(\Delta+1)<1$. Then, $\text{Pr}[\bigcap_{i=1}^n \bar{A}_i]\geq \left(1-\frac{1}{\Delta+1}\right)^{n}$.
\end{lemma}
We will also make use of the following asymmetric version of the Lov\'asz
Local Lemma (see, e.g., \cite{AlonSpencer}).
\begin{lemma}[Asymmetric Local Lemma]\label{ALLL}
Let $(A_i)_{i=1}^n$ be a sequence of events in some probability
space. Suppose that $D$ is a dependency graph for $(A_i)_i$, and
suppose that there are real numbers $(x_i)_{i=1}^n$, such that $0\leq x_{i} <1$ and
$$\text{Pr}[A_i]\leq x_i\prod_{ij\in E(D)}(1-x_j)$$
for all $1\leq i\leq n$.
Then, $\text{Pr}[\bigcap_{i=1}^n \bar{A}_i]\geq\prod_{i=1}^{n}(1-x_{i})$.
\end{lemma}
\subsection{Perfect matchings in bipartite graphs}
Here, we present a number of results related to perfect matchings in bipartite graphs. The first result is a slight reformulation of Hall's classical marriage theorem (see, e.g., \cite{SS}).
\begin{theorem}
\label{Hall}
Let $G=(A\cup B,E)$ be a balanced bipartite graph with $|A|=|B|=k$. Suppose $|N(X)|\geq |X|$ for all subsets $X$ of size at most $k/2$ which are completely contained either in $A$ or in $B$. Then, $G$ contains a perfect matching.
\end{theorem}
Moreover, we can always find a maximum matching in a bipartite graph in polynomial time using standard network flow algorithms (see, e.g., \cite{West}).
The following simple corollaries of Hall's theorem will be useful for us.
\begin{corollary}\label{matching in regular}
Every $r$-regular balanced bipartite graph has a perfect matching, provided that $r\geq 1$.
\end{corollary}
\begin{proof}
Let $G=(A\cup B,E)$ be an $r$-regular balanced bipartite graph. Let $X\subseteq A$ be a set of size at most $|A|/2$. Note that as $G$ is $r$-regular, we have $$e_G(X,N(X))=r|X|.$$
Since each vertex in $N(X)$ has degree at most $r$ into $X$, we get
$$|N(X)|\geq e_G(X,N(X))/r\geq |X|.$$
Similarly, for every $X\subseteq B$ of size at most $|B|/2$ we obtain
$$|N(X)|\geq |X|.$$
Therefore, by \cref{Hall}, we conclude that $G$ contains a perfect matching.
\end{proof}
Since removing an arbitrary perfect matching from a regular balanced bipartite graph leads to another regular balanced bipartite graph, a simple repeated application of \cref{matching in regular}
shows the following:
\begin{corollary}
\label{Hall-application-1fact-reg-bipartite}
Every regular balanced bipartite graph has a $1$-factorization.
\end{corollary}
In fact, as the following theorem due to Schrijver \cite{Schrijver} shows, a regular balanced bipartite graph has many 1-factorizations.
\begin{theorem}
\label{Schrijver-lower-bound-1-fact}
The number of $1$-factorizations of a $d$-regular bipartite graph with $2k$ vertices is at least
$$\left(\frac{d!^2}{d^d}\right)^k.$$
\end{theorem}
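In particular, combining \cref{Schrijver-lower-bound-1-fact} with the standard bound $d!\geq (d/e)^{d}$, the number of $1$-factorizations of a $d$-regular bipartite graph with parts of size $k$ is at least
$$\left(\frac{d!^{2}}{d^{d}}\right)^{k}\geq\left(\frac{(d/e)^{2d}}{d^{d}}\right)^{k}=\left(\frac{d}{e^{2}}\right)^{dk};$$
this is the form in which the theorem is used in \cref{rmk:counting-completion}.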
The next result is a criterion for the existence of $r$-\emph{factors} (that is, $r$-regular, spanning subgraphs) in bipartite graphs, which follows from a generalization of the Gale-Ryser theorem due to Mirsky \cite{Mirsky}.
\begin{theorem}
\label{gale}
Let $G=(A\cup B,E)$ be a balanced bipartite graph with $|A|=|B|=m$, and let $r$ be an integer. Then, $G$ contains an $r$-factor if and only if for all $X\subseteq A$ and $Y\subseteq B$
\begin{align*}
e_G(X,Y)\geq r(|X|+|Y|-m).
\end{align*}
\end{theorem}
Moreover, such factors can be found efficiently using standard network flow algorithms (see, e.g., \cite{Anstee}).
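As a quick sanity check of the criterion, consider $G=K_{m,m}$ and any $r\leq m$: if $|X|+|Y|\leq m$ the condition holds trivially, and otherwise
$$e_G(X,Y)-r\left(|X|+|Y|-m\right)\geq |X||Y|-m\left(|X|+|Y|-m\right)=(m-|X|)(m-|Y|)\geq 0,$$
consistent with the fact that $K_{m,m}$ contains an $r$-factor for every $r\leq m$.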
As we are going to work with pseudorandom graphs, it will be convenient for us to isolate some `nice' properties that, together with \cref{gale}, ensure the existence of large factors in balanced bipartite graphs.
\begin{lemma}
\label{finding-large-factors}
Let $G=(A\cup B,E)$ be a balanced bipartite graph
with $|A|=|B|=n/2$. Suppose there exist $r,\varphi\in\mathbb{R}^{+}$ and
$\beta_{1},\beta_{2},\beta_{3},\gamma\in(0,1)$ satisfying the following
additional properties:
\begin{enumerate} [$(P1)$]
\item $\deg_{G}(v)\geq r(1-\beta_{1})$ for all $v\in A\cup B$.
\item $e_{G}(X,Y)< r\beta_{2}|X|$ for all $X\subseteq A$ and $Y\subseteq B$
with $|X|=|Y|\leq r/\varphi$.
\item $e_{G}(X,Y)\geq2r(1-\beta_{3})|X||Y|/n$ for all $X\subseteq A$
and $Y\subseteq B$ with $|X|+|Y|>n/2$ and $\min\{|X|,|Y|\}>r/\varphi$.
\item $\gamma\geq\max\{\beta_{3},\beta_{1}+\beta_{2}\}$
\end{enumerate}
Then, $G$ contains an $\lfloor r(1-\gamma)\rfloor$-factor.
\end{lemma}
\begin{proof} By \cref{gale}, it suffices to verify
that for all $X\subseteq A$ and $Y\subseteq B$ we have
$$e_{G}(X,Y)\geq r(1-\gamma)\left(|X|+|Y|-\frac{n}{2}\right).$$
We divide the analysis into five cases:
\begin{enumerate} [$\text{Case }1$]
\item $|X|+|Y|\leq n/2$. In this case, we trivially have
$$e_{G}(X,Y)\geq 0\geq r(1-\gamma)\left(|X|+|Y|-\frac{n}{2}\right),$$ so there is nothing to prove.
\item $|X|+|Y|>n/2$ and $|X|\leq r/\varphi$. Since
$|Y|\leq|B|=\frac{n}{2}$, we always have $|X|+|Y|-\frac{n}{2}\leq|X|$.
Thus, it suffices to verify that
$$e_{G}(X,Y)\geq r(1-\gamma)|X|.$$
Assume, for the sake of contradiction, that this is not the case. Then, since there
are at least $r(1-\beta_{1})|X|$ edges incident to $X$, we must
have
$$e_{G}(X,B\backslash Y)\geq r(1-\beta_1)|X|-e_G(X,Y)\geq r(\gamma-\beta_{1})|X|\geq r\beta_{2}|X|.$$
However, since $|B\backslash Y|\leq|X|$, this contradicts $(P2)$.
\item $|X|+|Y|>n/2$ and $|Y|\leq r/\varphi$. This is
exactly the same as the previous case with the roles of $X$ and $Y$ interchanged.
\item $|X|+|Y|>n/2$, $|X|,|Y|>r/\varphi$ and $|Y|\geq|X|$.
By $(P3)$, it suffices to verify that
$$2r(1-\beta_{3})|X||Y|/n\geq r(1-\gamma)\left(|X|+|Y|-n/2\right).$$
Dividing both sides by $rn/2$, the above inequality is equivalent to $$xy-\frac{(1-\gamma)}{(1-\beta_{3})}(x+y-1)\geq0,$$
where $x=2|X|/n$, $y=2|Y|/n$, $x+y\geq1$, $0\leq x\leq1$, and $0\leq y\leq1$.
Since $\frac{1-\gamma}{1-\beta_{3}}\leq1$ by $(P4)$, this is readily verified
on the (triangular) boundary of the region, on which the inequality reduces to one of the following: $xy\geq 0$; $x \geq \frac{1-\gamma}{1-\beta_{3}}x$; $y\geq \frac{1-\gamma}{1-\beta_{3}}y$. On the other hand, the
only critical point in the interior of the region is possibly at $x_{0}=y_{0}=\frac{1-\gamma}{1-\beta_{3}}$,
for which we have $x_{0}y_{0}-\frac{1-\gamma}{1-\beta_{3}}(x_{0}+y_{0}-1)=\frac{1-\gamma}{1-\beta_{3}}\left(1-\frac{1-\gamma}{1-\beta_{3}}\right)\geq0$,
again by $(P4)$.
\item $|X|+|Y|>n/2$, $|X|,|Y|>r/\varphi$ and $|X|\geq|Y|$.
This is exactly the same as the previous case with the roles of $X$ and $Y$
interchanged.
\end{enumerate}
\end{proof}
\subsection{Matchings in graphs with controlled degrees}
In this section, we collect a couple of results on matchings in (not necessarily bipartite) graphs satisfying some degree conditions. A $2$-\emph{factorization} of a graph is a decomposition of its edges into $2$-factors (where a $2$-factor is a collection of vertex-disjoint cycles covering all the vertices). The following theorem, due to Petersen \cite{Pet}, is one of the earliest results in graph theory.
\begin{theorem}[2-factor Theorem]
\label{2-factor theorem}
Every $2k$-regular graph with $k\geq 1$ admits a 2-factorization.
\end{theorem}
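For instance, if a graph $G'$ is $k$-regular with $k$ even, the theorem splits $E(G')$ into $k/2$ edge-disjoint $2$-factors; grouping these into $t$ batches of $\lfloor k/(2t)\rfloor$ or $\lceil k/(2t)\rceil$ factors each produces spanning regular subgraphs $G'_1,\dots,G'_t$ of degree roughly $k/t$, which is how the decomposition described in the proof outline can be obtained.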
The next theorem, due to Vizing \cite{Vizing}, shows that every graph $G$ admits a proper edge coloring using at most $\Delta(G)+1$ colors.
\begin{theorem}[Vizing's Theorem]
\label{Vizing's theorem}
Every graph with maximum degree $\Delta$
can be properly edge-colored with $k\in\{\Delta,\Delta+1\}$ colors.
\end{theorem}
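Let us record one simple consequence: a graph with $f$ edges and maximum degree $\Delta$ decomposes into at most $\Delta+1$ matchings, and hence contains a matching of size at least $\lceil f/(\Delta+1)\rceil$; this is how large matchings are extracted in the completion step (\cref{proposition-completion}).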
\subsection{Expander Mixing Lemma}
When dealing with $(n,d,\lambda)$ graphs, we will repeatedly use the following lemma (see, e.g., \cite{HLW}), which bounds the difference between the actual number of edges between two sets of vertices, and the number of edges we expect based on the sizes of the sets.
\begin{lemma}[Expander Mixing Lemma]
\label{Expander Mixing Lemma}
Let $G=(V,E)$ be an $(n,d,\lambda)$ graph, and let $S,T\subseteq V$. Let $e(S,T) = |\{(x,y)\in S\times T: xy \in E\}|$. Then,
$$\left|e(S,T) - \frac{d|S||T|}{n}\right|\leq \lambda \sqrt{|S||T|}.$$
\end{lemma}
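One special case worth recording: for sets of equal size $|S|=|T|=s$, the lemma gives
$$e(S,T)\leq \frac{ds^{2}}{n}+\lambda s.$$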
\section{Random partitioning}
\label{section-random-partitioning}
While we have quite a few easy-to-use tools for working with balanced bipartite graphs, the graph we start with is not necessarily bipartite (when the starting graph \emph{is} bipartite, the existence problem is easy, and the counting problem is solved by \cite{Schrijver}). Therefore, perhaps the most natural thing to do is to partition the edges into `many' balanced bipartite graphs, where each piece has suitable expansion and regularity properties. The following lemma is our first step towards obtaining such a partition.
\begin{lemma}
\label{lemma:finding initial partitions}
Fix $a \in (0,1)$, and let $G=(V,E)$ be a $d$-regular graph on $n$ vertices, where $d=\omega_{a}(1)$
and $n$ is an even and sufficiently large integer. Then, for every integer $t\in [d^{a/100}, d^{1/10}]$, there exists a collection $(A_i,B_i)_{i=1}^t$ of balanced bipartitions for which the following properties hold:
\begin{enumerate}[$(R1)$]
\item Let $G_i$ be the subgraph of $G$ induced by $E_G(A_i,B_i)$. For all $1\leq i\leq t $ we have
$$\frac{d}{2}-d^{2/3}\leq \delta(G_i)\leq \Delta(G_i)\leq \frac{d}{2}+d^{2/3}.$$
\item For all $e\in E(G)$, the number of indices $i$ for which $e\in E(G_i)$ is $\frac{t}{2}\pm t^{2/3}$.
\end{enumerate}
\end{lemma}
We will divide the proof into two cases -- the \emph{dense case}, where $\log^{1000}{n}\leq d \leq n-1$, and the \emph{sparse case}, where $ d \leq \log^{1000}{n} $. The underlying idea is similar in both cases, but the proof in the sparse case is technically more involved as a standard use of Chernoff's bounds and the union bound does not work (and therefore, we will instead use the Local Lemma).
\begin{proof}[Proof in the dense case]
Let $A_{1},\dots,A_{t}$ be random subsets chosen independently from the uniform distribution on all subsets of $[n]$ of size exactly $n/2$, and let $B_{i}=V\setminus A_{i}$ for all $1\leq i\leq t$. We will show that, with high probability, the collection of balanced bipartitions $(A_{i},B_{i})_{i=1}^{t}$ satisfies $(R1)$ and $(R2)$.
First, note that for any $e\in E(G)$ and any $i \in [t]$, $$\text{Pr}\left[e\in E(G_{i})\right]=\frac{1}{2}\left(1+\frac{1}{n-1}\right).$$ Therefore, if for all $e\in E(G)$ we let $C(e)$ denote the set of indices $i$ for which $e\in E(G_{i})$, then
$$\mathbb{E} \left[|C(e)|\right] = \frac{t}{2}\left(1+\frac{1}{n-1}\right).$$
Next, note that, for a fixed $e\in E(G)$, the events $\{e\in E(G_i)\}$, $i\in[t]$, are mutually independent, and that $|C(e)|=\sum_{i=1}^{t} X_i$, where $X_i$ is the indicator random variable of the event $\{e\in E(G_i)\}$. Therefore, by Chernoff's inequality, it follows that
$$\text{Pr}\left[|C(e)|\notin \frac{t}{2} \pm t^{2/3}\right]\leq \exp\left(-t^{1/3}\right)\leq \frac{1}{n^3}.$$
Now, by applying the union bound over all $e \in E(G)$, it follows that the collection $(A_{i},B_{i})_{i=1}^{t}$ satisfies $(R2)$ with probability at least $1-1/n$. Similarly, it is immediate from Chernoff's inequality for the hypergeometric distribution that for any $v\in V$ and $i \in [t]$, $$\text{Pr}\left[d_{G_i}(v)\notin \frac{d}{2} \pm d^{2/3}\right] \leq \exp\left(-\frac{d^{1/3}}{10}\right)\leq \frac{1}{n^3},$$ and by taking the union bound over all such $i$ and $v$, it follows that $(R1)$ holds with probability at least $1-1/n$. All in all, with probability at least $1-2/n$, both $(R1)$ and $(R2)$ hold. This completes the proof.
\end{proof}
\begin{proof}[Proof in the sparse case]
Instead of using the union bound as in the dense case, we will use the symmetric version of the Local Lemma (\cref{LLL}).
Note that there is a small obstacle with choosing \emph{balanced} bipartitions, as the local lemma is most convenient to work with when the underlying experiment is based on independent trials. In order to overcome this issue, we start by defining an auxiliary graph $G'=(V,E')$ as follows: for all $xy\in \binom{V}{2}$, $xy\in E'$ if and only if $xy \notin E$ and there is no vertex $v\in V(G)$ with $\{x,y\}\subseteq N_G(v)$. In other words, there is an edge between $x$ and $y$ in $G'$ if and only if $x$ and $y$ are not connected to each other, and do not have any common neighbors in $G$.
Since for any $x\in V$, there are at most $d^{2}$ vertices $y \in V$ such that $xy \in E$ or $x$ and $y$ have a common neighbor in $G$, it follows that $\delta(G') \geq n-d^{2}-1\geq n/2$ for $n$ sufficiently large. A standard argument (for instance, via Dirac's theorem) shows that any graph on $2k$ vertices with minimum degree at least $k$ contains a perfect matching. Therefore, $G'$ contains a perfect matching.
Let $s=n/2$ and let $M:=\{x_1y_1,\ldots,x_{s}y_s\}$ be an arbitrary perfect matching of $G'$.
For each $i\in[t]$ let $f_{i}$ be a random function chosen independently and uniformly from the set of all functions from $\{x_{1},\dots,x_{s}\}$ to $\{\pm 1\}$. These functions will define the partitions according to the vertex labels as follows:
$$A_i:=\{x_j \mid f_i(x_j)=-1\}\cup \{y_j \mid f_i(x_j)=+1\},$$ and
$$B_i:=V\setminus A_i.$$
Clearly, $(A_i, B_i)_{i=1}^{t}$ is a random collection of balanced bipartitions of $V$.
If, for all $i\in[t]$, we let $g_{i}\colon V(G)\to \{A,B\}$ denote the random function recording which of $A_i$ or $B_i$ a given vertex ends up in, it is clear -- and this is the point of using $G'$ -- that for any $i \in [t]$ and any $v\in V(G)$, the choices $\{g_i(w)\}_{w\in N_G(v)}$ are mutually independent. This will help us in showing that, with positive probability, this collection of bipartitions satisfies properties $(R1)$ and $(R2)$.
Indeed, for all $v\in V(G)$, $i\in [t]$, and $e \in E(G)$, let
$D_{i,v}$ denote the event that `$d_{G_i}(v)\notin \frac{d}{2}\pm d^{2/3}$', and let $A_{e}$ denote the event `$|C(e)| \notin \frac{t}{2} \pm t^{2/3}$'. Then, using the independence property mentioned above, Chernoff's bounds imply that
$$\text{Pr}[D_{i,v}] \leq \exp\left(-d^{1/3}/4\right)$$ and
$$\text{Pr}[A_{e}] \leq\exp\left(-t^{1/3}/4\right).$$
In order to complete the proof, we need to show that one can apply the symmetric local lemma (\cref{LLL}) to the collection of events consisting of all the $D_{i,v}$'s and all the $A_{e}$'s. To this end, we first need to upper bound the number of events which depend on any given event.
Note that $D_{i,v}$ depends on $D_{j,u}$ only if $i=j$ and, in addition, either $\text{dist}_G(u,v)\leq 2$ or some neighbor of $u$ is matched by $M$ to a neighbor of $v$. Note also that $D_{i,v}$ depends on $A_{e}$ only if an endpoint of $e$ is within distance $1$ of $v$ either in $G$ or in $M$. Therefore, it follows that any $D_{i,v}$ can depend on at most $2d^{2}$ events in the collection. Since $A_{e}$ can depend on $A_{e'}$ only if $e$ and $e'$ share an endpoint in $G$, or if an endpoint of $e$ is matched in $M$ to an endpoint of $e'$, it follows that we can take the maximum degree of the dependency graph in \cref{LLL} to be $2d^{2}$. Since $2d^{2}\exp(-t^{1/3}/4)=o_d(1)$, the condition of \cref{LLL} is satisfied for all sufficiently large $d$, and we are done.
\end{proof}
\section{Completion}
In this section, we describe the key ingredient of our proof, namely the completion step. Before stating the relevant lemma, we need the following definition.
\begin{definition}
\label{defn-good-graph}
A graph $H=(A\cup B,E)$ is called $(\alpha,r,m)$-good if it
satisfies the following properties:
\begin{enumerate}[$(G1)$]
\item $H$ is an $r$-regular, balanced bipartite graph with $|A|=|B|=m$.
\item Every balanced bipartite subgraph $H'=(A'\cup B',E')$ of $H$ with
$|A'|=|B'|\geq(1-\alpha)m$ and with $\delta(H')\geq(1-2\alpha)r$
contains a perfect matching.
\end{enumerate}
\end{definition}
The motivation for this definition comes from the next proposition, which shows that a regular graph on an even number of vertices that can be decomposed into the union of a good graph and a sufficiently sparse graph admits a $1$-factorization.
\begin{proposition}
\label{proposition-completion}
For every $\alpha\leq 1/10$ there exists an integer $r_0$ such that for all $r_1\geq r_0$ and all sufficiently large integers $m$ the following holds. Suppose that $H=(A\cup B, E(H))$ is an $(\alpha,r_1,m)$-good graph. Then, for every $r_2\leq \alpha^5r_1/\log{r_1}$ and $r:=r_1+r_2$, every $r$-regular (not necessarily bipartite) graph $R$ on the vertex set $A\cup B$ with $H\subseteq R$ admits a $1$-factorization.
\end{proposition}
For clarity of exposition, we first provide the somewhat simpler proof (which already contains all the ideas) of this proposition in the `dense' case, and then we proceed to the `sparse' case.
\begin{proof}[Proof in the dense case: $m^{1/10}\leq r_1\leq m$]
First, observe that $e(R[A])=e(R[B])$. Indeed, as $R$ is $r$-regular, we have for $X\in \{A,B\}$ that
$$rm=\sum_{v\in X}d_R(v)=2e(R[X])+e(R[A,B]),$$ from which the above equality follows. Moreover, $\Delta(R[X])\leq r_2$ for all $X\in \{A,B\}$.
Next, let $R_0:=R$ and $f_0:=e(R_0[A])=e(R_0[B])$.
By Vizing's theorem (\cref{Vizing's theorem}), both $R_0[A]$ and $R_0[B]$ contain matchings of size exactly $\lceil f_0/(r_2+1)\rceil$. Consider any two such matchings, $M_A$ in $R_0[A]$ and $M_B$ in $R_0[B]$, and for $X\in \{A,B\}$, let $M'_X\subseteq M_X$ denote a matching of size $|M'_X| =\lfloor \alpha f_0/(2r_2) \rfloor$ such that no vertex $v \in V(H)$ has more than $3\alpha r_1/2$ of its $H$-neighbors covered by $M'_X$. To see that such an $M'_X$ must exist, we use a simple probabilistic argument: for a uniformly random subset $M'_X\subseteq M_X$ of this size, a simple application of Hoeffding's inequality and the union bound shows that $M'_X$ satisfies the desired property, except with probability at most $2m\exp(-\alpha^{2}r_1/8)\ll 1$.
Delete the \emph{vertices} in $\left(\cup M'_A\right)\bigcup \left(\cup M'_B\right)$, as well as any edges incident to them, from $H$ and denote the resulting graph by $H'=(A'\cup B',E')$. Since $|A'|=|B'|\geq (1-\alpha)|A|$ and $\delta(H')\geq (1-3\alpha/2)r_1$ by the choice of $M'_A$ and $M'_B$, it follows from $(G2)$ that $H'$ contains a perfect matching $M'$. Note that $M_0:=M'\cup M'_A\cup M'_B$ is a perfect matching in $R_0$. We repeat this process with $R_1:=R_0 - M_0$ (deleting only the edges in $M_0$, and not the vertices) and $f_1 :=e(R_1[A])=e(R_1[B])$, until we reach $R_k$ and $f_k$ such that $f_k\leq r_2$. Since $f_{i+1} \leq \left(1-\frac{\alpha}{3r_2}\right)f_i$, this must happen after at most $3r_2\log{m}/\alpha \ll \alpha^{2} r_1$ steps. Moreover, since $R_{i+1}$ is $(\deg(R_i)-1)$-regular, it follows that during the first $\lceil 3r_2\log {m}/\alpha\rceil$ steps of this process, the degree of each $R_j$ is at least $r_1 - \alpha^2 r_1$. Therefore, since $(r_1 - \alpha^2r_1) - 3\alpha r_1/2 \geq (1-2\alpha)r_1$, we can indeed use $(G2)$ throughout the process, as done above.
From this point onwards, we continue the above process (starting with $R_k$) with matchings of size one i.e. single edges from each part, until no more edges are left. By the choice of $f_k$, we need at most $r_2$ such iterations, which is certainly possible since $r_2 + 3r_2\log{m}/\alpha \ll \alpha^{2} r_1$. After removing all the perfect matchings obtained via this procedure, we are left with a regular, balanced, \emph{bipartite} graph, and therefore it admits a 1-factorization (\cref{Hall-application-1fact-reg-bipartite}). Taking any such 1-factorization along with all the perfect matchings that we removed gives a 1-factorization of $R$.
\end{proof}
\begin{proof}[Proof in the sparse case: $r_1\leq m^{1/10}$]
Let $C$ be any integer and let $k = \lfloor 1/\alpha^{4}\rfloor$.
We begin by showing that any matching $M$ in $X\in\{A,B\}$ with $|M|=C$ can be partitioned into $k$ matchings $M_1,\dots, M_k$
such that no vertex $v\in V(H)$ is incident to more than $\alpha r_1$ vertices in $\cup M_i$ for any $i\in[k]$. If $C<\alpha r_1/2$, then there is nothing to show. If $C \geq \alpha r_1/2$, consider an arbitrary partition of $M$ into $\lceil C/k\rceil $ sets $S_1,\dots,S_{\lceil C/k\rceil}$
with each set (except possibly the last one) of size $k$. For each $S_{j}$, $j\in [\lceil C/k \rceil]$, choose a permutation of $[k]$ independently and uniformly at random, and let $M_i$ denote the random subset of $M$ consisting of all elements of $S_1,\dots,S_{\lceil C/k \rceil}$ which are assigned the label $i$. We will show, using the symmetric version of the Local Lemma (\cref{LLL}), that the decomposition $M_1,\dots,M_k$ satisfies the desired property with positive probability.
To this end, note that for any vertex $v$ to have at least $\alpha r_1$ neighbors in some $M_i$, it must be the case that the $r_1$ neighbors of $v$ in $H$ are spread throughout at least $\alpha r_1$ distinct $S_j$'s. Let $D_v$ denote the event that $v$ has at least $\alpha r_1$ neighbors in some matching $M_i$. Since $v$ has at least $\sqrt{k}$ neighbors in at most $r_1/\sqrt{k} \ll \alpha r_1$ distinct $S_j$'s, it follows that $\text{Pr}[D_v] \leq k(1/\sqrt{k})^{\alpha r_1/2}$. Finally, since each $D_v$ depends on at most $r^{2}k$ many other $D_w$'s, and since $k^{2}r^{2}(1/\sqrt{k})^{\alpha r_1/2} <1/e$, we are done.
Now, as in the proof of the dense case, we have $e(R[A])=e(R[B])$ and $\Delta(R[X])\leq r_2$ for $X\in\{A,B\}$. By Vizing's theorem, we can decompose $R[A]$ and $R[B]$ into exactly $r_2+1$ matchings each, and it is readily seen that these matchings can be used to decompose $R[A]$ and $R[B]$ into at most $\ell \leq 2(r_2+1)$ matchings $M^{A}_1,\dots,M^{A}_\ell$ and $M^{B}_1,\dots,M^{B}_\ell$ such that $|M^{A}_i|=|M^{B}_i|$ for all $i\in[\ell]$. Using the argument in the previous paragraph, we can further decompose each $M^{X}_i$, $X\in\{A,B\}$, into at most $k$ matchings such that no vertex $v\in V(H)$ is incident to more than $\alpha r_1$ vertices involved in any of these smaller matchings. Since $4r_2/\alpha^{4} \ll r_1$, the rest of the argument proceeds exactly as in the dense case.
\end{proof}
\begin{remark}
\label{rmk:counting-completion}
In the last step of the proof, we are allowed to choose an arbitrary $1$-factorization of an $r'$-regular, balanced bipartite graph, where $r'\geq r_1 - r_2$. Therefore, using \cref{Schrijver-lower-bound-1-fact} along with the standard inequality $d! \geq (d/e)^d$, it follows that $R$ admits at least $\left((r_{1}-r_{2})/e^{2}\right)^{(r_{1}-r_{2})m}$ $1$-factorizations.
\end{remark}
\begin{comment}
In this section, we show that for sufficiently large $n$, a graph $G$ as considered above contains an $(\alpha, d/\log^{2}n, n/2)$ good subgraph. Our construction will be probabilistic. Let $\mathcal{S}$ denote the collection of all subsets of $V$ of size exactly $n/2$. Let $V_{1}$ denote a random sample from the uniform distribution
on $\mathcal{S}$, let $V_{2}:=V\backslash V_{1}$ and let $E(H_{0}):=\{e\in G:e\cap V_{1}\neq\emptyset,e\cap V_{2}\neq\emptyset\}$
denote the set of crossing edges between $V_{1}$ and $V_{2}$. Let
$E(H_{1})$ denote the (random) subset of edges generated by including
each edge in $E(H_{0})$ independently with probability $p=3/\log^{2}n$. Let $\alpha=0.025$.
\begin{comment}
\begin{proposition}
\label{existence-good-subgraphs}
For $n$ sufficiently large, the (random) balanced
bipartite graph \textbf{$H_{1}:=(V_{1}\cup V_{2},E(H_{0}))$ }contains
an $(\alpha,d/\log^{2}n,n/2)$-good subgraph with positive probability.
\end{proposition}
\begin{proof}
By Lemma \ref{factor-for-good-graph}, $H_{1}$ contains a $d/\log^{2}n$-factor
with probability at least $1-O(n^{3/2}\exp(-p^{2}d/5000))$. By Lemma
\ref{second-property-goodness}, with probability at least $1-O(ne^{-n})$, every balanced bipartite subgraph of $H_{1}$ with parts of size at least $(1-\alpha)n/2$
and minimum degree at least $(1-2\alpha)d/\log^{2}n$ contains a perfect
matching. Therefore, with probability at least $1-O(ne^{-n}+n^{3/2}\exp(-p^{2}d/5000))$, which is positive for $n$ sufficiently large since $d\geq\log^{6}n$,
$H_{1}$ contains a $d/\log^{2}n$-factor $H$ which automatically
satisfies condition (2) in Definition REF.
\end{proof}
\noindent
The rest of this section is devoted to proving the lemmas we need in the proof of the previous proposition. We begin with a simple application of Hoeffding's
inequality.
\begin{lemma}
\label{not-too-many-edges}
Let $\nu\geq1/2$ and $\mu\leq2\nu/7$ be fixed constants, and let $Q_{0}^{c}$ denote the event that there exist
$X,Y\subseteq V$ with $|X|=|Y|\leq\mu n$ and $e_{H_{1}}(X,Y)>\nu pd|X|/3$. Then, $\text{Pr}(Q_{0}^{c}) = O(ne^{-n})$.
\end{lemma}
\begin{proof}
Fix $X,Y\subseteq V$ with $|X|=|Y|\leq\mu n$. Since $e_{H_{0}}(X,Y)\leq e_{G}(X,Y)\leq\frac{d}{n}|X|^{2}+\lambda|X|$, and since
$E(H_{1})$ is constructed by including each edge in $H_{0}$
independently with probability $p=3/\log^{2}n$, it follows by Hoeffding's
inequality that for all $n$ sufficiently large,
\begin{align*}
\text{Pr}\left[e_{H_{1}}(X,Y)\geq\nu pd|X|/3 \right] & \leq\text{Pr}\left[e_{H_{1}}(X,Y)-\mathbb{E}[e_{H_{1}}(X,Y)]\geq\frac{d|X|}{150\log^{2}n}\right]\\
& \leq\exp\left(-\frac{dn}{20000\log^{4}n}\right).
\end{align*}
where in the first inequality, we have used that $\mu\leq2\nu/7$ and $\lambda = o(d)$.
Thus, by union bounding over all such choices of $X$ and $Y$, it follows that $\text{Pr}(Q_{0}^{c})$ is bounded above by
\begin{align*}
\sum_{|X|=1}^{\mu n}{n \choose |X|}{n \choose |X|}\exp\left(-\frac{dn}{20000\log^{4}n}\right) & \leq n\exp\left(n\log n\right)\exp\left(-\frac{dn}{20000\log^{4}n}\right)\\
& \leq{ne^{-n}}
\end{align*}
for all $n$ sufficiently large, where in the last inequality, we have used that $d\geq\log^{5}n$.
\end{proof}
\begin{lemma}
\label{second-property-goodness}
Let $Q_{1}^{c}$ denote the event that there exists
a balanced bipartite subgraph $H'=(A'\cup B',E')$ of $H_{1}$, with
$|A'|=|B'|=m\geq(1-\alpha)n/2$ and with $\delta(H')\geq(1-2\alpha)d/\log^{2}n$,
such that $H'$ does not contain a perfect matching. Then, $\text{Pr}(Q_{1}^{c})=O(ne^{-n})$.
\end{lemma}
\begin{proof}
Consider a fixed (but otherwise arbitrary) partition
$V=V_{1}\cup V_{2}$, and let $H'=(A'\cup B',E')$ be as in the statement
of the lemma with $A'\subseteq V_{1}$ and $B'\subseteq V_{2}$. By the 2-sided formulation of Hall's condition, if $H'$ does not contain a perfect matching, then there must exist either some $A''\subseteq A'$
with $|A''|\leq m/2$ and $|N_{H'}(A'')|<|A''|$ or some
$B''\subseteq B'$ with $|B''|\leq m/2$ and $|N_{H'}(B'')|<|B''|$. In either case, since $\delta(H')\geq(1-2\alpha)d/\log^{2}n$, we get subsets $X\subseteq V_{1}$ and $Y\subseteq V_{2}$ with
$|X|=|Y|\leq m/2$ and $|e_{H_{1}}(X,Y)|\geq(1-2\alpha)d|X|/\log^{2}n$. Since by Lemma \ref{not-too-many-edges} with $\mu=1/4$ and $\nu=(1-2\alpha)$, this happens with probability $O(ne^{-n})$, it follows that $\text{Pr}(Q_{1}^{c})=O(ne^{-n})$ as well.
\end{proof}
\noindent
The argument in the proof of the next lemma will be used again later.
\begin{lemma}
\label{degree-concentrate}
Let $Q_{2}$ denote the event that $|\deg_{H_{1}}(v)-pd/2|\leq pd/100$
for all $v\in V$. Then, $\text{Pr}(Q_{2}^{c})=O\left(n^{3/2}\exp\left(-p^{2}d/5000\right)\right)$.
\end{lemma}
\begin{proof}
Let $V_{1}'$ denote a random sample from the uniform
distribution on the subsets of $V$, let $V_{2}':=V\backslash V_{2}$,
let $E(H_{0}')$ denote the crossing edges between $V_{1}'$ and $V_{2}'$,
and let $E(H_{1}')$ denote the (random) subset of edges generated
by including each edge in $E(H_{0})$ independently with probability
$p:=3/\log^{2}n$. Let $M$ denote the event that $|V_{1}'|=n/2$,
and note that conditioned on $M$, the distribution of $V_{1}'$ coincides with that of $V_{1}$. Hence, if we let $Q_{2}'$
denote the event that $|\deg_{H_{1}'}(v)-pd/2|\leq pd/100$ for all
$v\in V$, it follows that
\begin{align*}
\text{Pr}(Q_{2}^{c}) & =\text{Pr}((Q_{2}')^{c}|M)\\
& \leq\frac{\text{Pr}((Q_{2}')^{c})}{\text{Pr}(M)}\\
& =O(\sqrt{n}\text{Pr}((Q_{2}')^{c}))
\end{align*}
where we have used the fact that $\text{Pr}(M)=\Theta(1/\sqrt{n})$. Thus,
it suffices to show that $\text{Pr}((Q_{2}')^{c})=O\left(n\exp\left(-p^{2}d/5000\right)\right)$. This follows easily from Hoeffding's inequality and the union bound.
Indeed, for fixed $v\in V$, each of the $d$ edges incident to $v$ in $G$ is included in $E(H_{1}')$ independently with probability $p/2$.
Therefore, by Hoeffding's inequality, \[
\text{Pr}\left[\left|\deg_{H_{1}'}(v)-pd/2\right|\geq pd/100\right]\leq2\exp\left(-p^{2}d/5000\right)
\]
and union bounding over all $v\in V$ shows that $\text{Pr}((Q_{2}')^{c})\leq2n\exp\left(-p^{2}d/5000\right)$, as desired.
\end{proof}
\begin{comment}
\begin{lemma}
\label{factor-for-good-graph}
Let $Q_{3}$ denote the event that $H_{1}$ contains
a $d/\log^{2}n$-factor. Then, $\text{Pr}(Q_{3}^{c})=O(n^{3/2}\exp(-p^{2}d/5000))$.
\end{lemma}
\begin{proof}
We will show this by upper bounding the probability that the balanced
bipartite graph $H_{1}$ fails to satisfy the conditions in the statement
of Lemma \ref{finding-large-factors} with $r=pd/2$, $\varphi=10r/n$, $\beta_{1}=1/50$,
$\beta_{2}=2/7,\beta_{3}=1/3$ and $\gamma=1/3$. Note that (P4) is satisfied for this choice of parameters. Let $A_{1}$, $A_{2}$ and
$A_{3}$ denote respectively the events that (P1), (P2) and (P3) are not satisfied.
By Lemma \ref{degree-concentrate}, we have $\text{Pr}(A_{1})=O\left(n^{3/2}\exp\left(-p^{2}d/5000\right)\right)$. By Lemma \ref{not-too-many-edges} with $\mu=1/10$ and $\nu=3/8$, we have $\text{Pr}(A_{2})=O(ne^{-n})$. We now bound $\text{Pr}(A_{3})$.
Consider a fixed (but otherwise arbitrary) realization of $H_{0}:=(V_{1}\cup V_{2},E(H_{0}))$.
For any $X\subseteq V_{1}$ and $Y\subseteq V_{2}$,
we have $e_{H_{0}}(X,Y)=e_{G}(X,Y)\in[d|X||Y|/n-\lambda\sqrt{|X||Y|},d|X||Y|/n+\lambda\sqrt{|X||Y|}]$.
In particular, if $|X|,|Y|\geq n/10$, then $e_{H_{0}}(X,Y)\in[0.99d|X||Y|/n,1.01d|X||Y|/n]$
for $n$ sufficiently large, where we have used that $\lambda=o(d)$. Since $E(H_{1})$ is constructed by including
each edge in $H_{0}$ independently with probability $p$, Hoeffding's
inequality shows that
\begin{align*}
\text{Pr}\left[e_{H_{1}}(X,Y)\leq\frac{2}{3}pd|X||Y|/n\right] & \leq\exp\left(-\frac{p^{2}d|X||Y|}{20n}\right)\\
& \leq\exp\left(-\frac{p^{2}dn}{2000}\right).
\end{align*}
By union bounding over all choices of $X$ and $Y$, and using $d\geq\log^{6}n$, we get that $\text{Pr}(A_{3})\leq n^{n}\exp\left(-\frac{p^{2}dn}{2000}\right)\leq\exp(-p^{2}dn/5000)$.
\end{proof}
\end{comment}
\section{Finding good subgraphs which almost cover $G$}
\label{section-structuralresult}
In this section we present a structural result which shows that a `good' regular expander on an even number of vertices can be `almost' covered by a union of edge disjoint good subgraphs.
\begin{comment}
\textbf{I MADE SOME CHANGES IN THIS SECTION AT THE END; THE NOTATION ISN'T CONSISTENT, AND THINGS MAY BE WRONG}
In this section, we work with $W:=G\setminus H$, where $H$ is the subgraph of $G$ obtained using \cref{existence-good-graph}. In particular, $W=(V(W),E(W))$ satisfies the following properties:
\begin{enumerate}[$(L1)$]
\item $W$ is $d'$ regular, with $d'=d-\lfloor\bar{r}\rfloor$
\item \textbf{THIS IS NOT WHAT WE WANT} $e_{W}(X,Y)\geq (d-3\bar{r})\frac{|X||Y|}{n} - \lambda \sqrt{|X||Y|}$ for every $X,Y$
with $|X|,|Y|>n/2t'^{2}$ and $|X|+|Y|>n/2$.
\end{enumerate}
\end{comment}
\begin{comment}
Consider the graph $G':=G\backslash H$, where $H$ is an $(\alpha,d/\log^{2}n,n/2)$-good graph whose existence is guaranteed by Proposition \ref{existence-good-subgraphs}. In particular, $G'$ is $d'=d(1-\frac{1}{\log^{2}n})$-regular. Let $t=\lfloor(d')^{1/5}\rfloor$. The following proposition is the main result of this section.
and for any $X,Y\subseteq V$, we have
\begin{align*}
e_{G}'(X,Y) & \geq e_{G}(X,Y)-\min\{|X|,|Y|\}\frac{d}{\log^{2}n}\geq\frac{d}{n}|X||Y|-\lambda\sqrt{|X||Y|}-\min\{|X|,|Y|\}\frac{d}{\log^{2}n}\\
e_{G}'(X,Y) & \leq e_{G}(X,Y)\leq\frac{d}{n}|X||Y|+\lambda\sqrt{|X||Y|}.
\end{align*}
\begin{proposition}
\label{almost-1-factorization-existence}
Let $d\geq \log^{14}n$ and let $G',t$ be as above. Then, for $n$ sufficiently large, there exist subgraphs $G_{1},\dots,G_{t}$ of $G'$ such that:
\begin{enumerate}
\item $E(G_{1})\cup\dots\cup E(G_{t})$ is a partition of $E(G')$
\item $\frac{d'}{t}\left(1-\frac{8}{t^{1/4}}\right)\leq\delta(G_{i})\leq\Delta(G_{i})\leq\frac{d'}{t}\left(1+\frac{8}{t^{1/4}}\right)$ for all $i\in[t]$
\item $G_{i}$ contains a $\left(1-\frac{1}{\log^{5}n}\right)\left(1-\frac{20}{t^{1/4}}\right)\delta(G_{i})$-factor for all $i\in[t]$.
\end{enumerate}
\end{proposition}
\begin{proof}
Immediate from Lemma \ref{random-partition-degrees} and Lemma \ref{random-partition-factors}.
\end{proof}
\noindent
As in the proof of Proposition \ref{existence-good-subgraphs}, we will proceed via a probabilistic construction. Let $\mathcal{S}$ denote the collection of all subsets of $V$ of size exactly
$n/2$. Let $L_{1},\dots,L_{t}$
denote $t$ independent random samples from the uniform distribution
on $\mathcal{S}$. For each $i\in[t]$, let $R_{i}:=V\backslash L_{i}$ and
$E(H_{i}):=\{e\in G':e\cap L_{i}\neq\emptyset,e\cap R_{i}\neq\emptyset\}$
denote the set of crossing edges between $L_{i}$ and $R_{i}$. For
each edge $e\in G'$, let $C(e):=\{i\in[t]\colon e\in E(H_{i})\}$.
Let $\{X(e)\}_{e\in G'}$ denote independent samples where $X(e)\sim\text{Unif}(C(e))$,
and construct pairwise disjoint sets of edges $E(G_{1}),\dots,E(G_{t})$
by including $e$ in $E(G(X_{e})$) for all $e\in G'$.
\begin{comment}
\begin{lemma}
\label{random-partition-degrees}
Let $T_{1}$ denote the event that $E(G_{1})\cup\dots\cup E(G_{t})=E(G')$
and $\frac{d'}{t}\left(1-\frac{8}{t^{1/4}}\right)\leq\delta(G_{i})\leq\Delta(G_{i})\leq\frac{d'}{t}\left(1+\frac{8}{t^{1/4}}\right)$ for all $i\in[t]$.
Then, $\text{Pr}(T_{1}^{c})=O(n^{3/2}d'\exp(-2\sqrt{t}))$ for $d\geq \log^{6}n$.
\end{lemma}
\begin{proof}
Let $L_{1}',\dots,L_{t}'$ denote
independent random samples from the uniform distribution on subsets
of $V$, and let $R_{i}',H_{i}'$ and $G_{i}'$ be the corresponding
objects as before, and let $T_{1}'$ be the event that $E(G_{1}')\cup\dots\cup E(G_{t}')=E(G')$
and $\frac{d'}{t}\left(1-\frac{8}{t^{1/4}}\right)\leq\delta(G_{i}')\leq\Delta(G_{i}')\leq\frac{d'}{t}\left(1+\frac{8}{t^{1/4}}\right)$ for all $i\in[t]$. Then, by the same argument as in the proof of Lemma \ref{degree-concentrate}, it suffices to show that
$\text{Pr}((T_{1}')^{c})=O(nd'\exp(-2\sqrt{t}))$. First, note that for any $e\in G'$, Hoeffding's inequality shows
that $\text{Pr}\left[\left|C(e)-\frac{t}{2}\right|\geq t^{3/4}\right]\leq2\exp\left(-2\sqrt{t}\right)$.
Therefore, by the union bound,
\[
\text{Pr}\left[\exists e\in G':\left|C(e)-\frac{t}{2}\right|\geq t^{3/4}\right]\leq2d'n\exp\left(-2\sqrt{t}\right).
\]
Next, by Hoeffding's inequality, for any $v\in V$ and any $i\in[t]$,
$\text{Pr}\left[\left|\deg_{H_{i}'}(v)-\frac{d'}{2}\right|\geq d'^{3/4}\right]\leq2\exp(-2\sqrt{d'})$. Hence, by the union bound,
\[
\text{Pr}\left[\exists v\in V,i\in[t]:\left|\deg_{H_{i}'}(v)-\frac{d'}{2}\right|\geq d'^{3/4}\right]\leq2nt\exp(-2\sqrt{d'}).
\]
Thus, if we let $T_{2}'$ denote the event that $\left|C(e)-\frac{t}{2}\right|\leq t^{3/4}$
for all $e\in G'$ and $\left|\deg_{H_{i}'}(v)-\frac{d'}{2}\right|\leq d'^{3/4}$
for all $v\in V,i\in[t]$, it follows that
\[
\text{Pr}((T_{2}')^{c})\leq2n\left(d'\exp(-2\sqrt{t})+t\exp(-2\sqrt{d'})\right)\leq4nd'\exp(-2\sqrt{t}).
\]
Note that whenever $T_{2}'$ holds, $E(G_{1}')\cup\dots\cup E(G_{t}')=E(G')$.
Note also that for any $v\in V$ and any $i\in[t]$, $\mathbb{E}\left[\deg_{G_{i}'}(v)|T_{2}'\right]\in\left[\frac{d'(1-d'^{-1/4})}{t(1+2t^{-1/4})},\frac{d'(1+d'^{-1/4})}{t(1-2t^{1/4})}\right]$,
and hence,
\[
\mathbb{E}\left[\deg_{G_{i}'}(v)\right]\in\left[\frac{d'(1-d'^{-1/4})}{t(1+2t^{-1/4})},\frac{d'(1+d'^{-1/4})}{t(1-2t^{1/4})}+4nd'^{2}\exp(-2\sqrt{t})\right].
\]
Provided that $2nd't^{5/4}\exp(-2\sqrt{t})\leq1$,
which is certainly true for $n$ sufficiently large by our choice
of $d$, Hoeffding's inequality gives
\begin{align*}
\text{Pr}\left[\left|\deg_{G_{i}'}(v)-\frac{d'}{t}\right|\geq8\frac{d'}{t^{5/4}}\right] & \leq\text{Pr}\left[\left|\deg_{G_{i}'}(v)-\mathbb{E}\left[\deg_{G_{i}'}(v)\right]\right|\geq2\frac{d'}{t^{5/4}}\right]\\
& \leq2\exp\left(-8\frac{d'}{t^{10/4}}\right)
\end{align*}
so that by the union bound,
\[
\text{Pr}\left[\exists v\in V,i\in[t]:\left|\deg_{G_{i}'}(v)-\frac{d'}{t}\right|\geq8\frac{d'}{t^{5/4}}\right]\leq2nt\exp(-8\sqrt{d'})\leq2nd'\exp(-2\sqrt{t}).
\]
Putting everything together, it follows that $\text{Pr}((T_{1}')^{c})\leq6nd'\exp(-2\sqrt{t})$
for all $n$ sufficiently large, which completes the proof.
\end{proof}
\end{comment}
\begin{comment}
\begin{lemma}
\label{random-partition-factors}
For $i\in[t]$, let $T_{3,i}$ denote the event that
$G_{i}$ contains a $\left(1-\frac{1}{\log^{5}n}\right)\left(1-\frac{20}{t^{1/4}}\right)\delta(G_{i})$-factor,
and let $T_{3}:=\cap_{i\in[t]}T_{3,i}$. Then, $\text{Pr}(T_{3}^{c})=O(n^{3/2}d't\exp(-2\sqrt{t})+nt\exp(-d^{4/5}/\log^{5}n))$ for $d\geq\log^{14}n$.
\end{lemma}
By the union bound, it suffices to show that for any
fixed $i\in[t]$, $\text{Pr}(T_{3,i}^{c})=O(n^{3/2}d'\exp(-2\sqrt{t}))$. As in the proof of Lemma \ref{factor-for-good-graph}, we will do this by upper bounding the probability that the balanced bipartite graph $G_{i}$ fails to satisfy the conditions in the statement of Lemma \ref{finding-large-factors} with $r=\frac{d'}{t}\left(1-\frac{8}{t^{1/4}}\right)$, $\varphi=r\log^{6}n/n$, $\beta_{1}=0$,
$\beta_{2}=\beta_{3}=\gamma=1/\log^{5}n$. Note that with this choice of parameters, (P1) and (P4) are trivially satisfied. Let $B_{2,i}$ (respectively $B_{3,i}$) denote the event that $G_{i}$ does not satisfy (P2) (respectively (P3)). We will now upper bound $\text{Pr}(B_{2,i})$ and $\text{Pr}(B_{3,i})$.
As in the proof of Lemma \ref{random-partition-degrees}, let $T_{2}$ denote the event that $|C(e)-\frac{t}{2}|\leq t^{3/4}$ for all $e\in G'$ and $\left|\deg_{H_{i}}(v)-\frac{d'}{2}\right|\leq d'^{3/4}$ for all $i\in[t]$, $v\in V$, and note that our proof of Lemma \ref{random-partition-degrees} shows that $\text{Pr}((T_{1}\cap T_{2})^{c})\leq O(n^{3/2}d'\exp(-2\sqrt{t}))$. Since $\text{Pr}(B_{2,i})\leq\text{Pr}(B_{2,i}|T_{1}\cap T_{2})+\text{Pr}((T_{1}\cap T_{2})^{c})$,
it suffices to show that $\text{Pr}(B_{2,i}|T_{1}\cap T_{2})\leq n\exp(-d^{4/5}/\log^{5}n)$.
For fixed $X\subseteq L_{i}$ and $Y\subseteq R_{i}$ with $|X|=|Y|$,
we know that $e_{H_{i}}(X,Y)\leq e_{G}(X,Y)\leq\frac{d}{n}|X|^{2}+\lambda|X|$.
Since given the event $T_{1}\cap T_{2}$, $r\geq d/1.9t\geq d^{4/5}/2$,
and $E(G_{i})$ is constructed by including each edge in $H_{i}$
independently with probability at most $q=3/t\leq4/d^{1/5}$, it follows that
\begin{align*}
\text{Pr}\left[e_{G_{i}}(X,Y)\geq r\beta_{2}|X|\bigg|T_{1}\cap T_{2}\right] & \leq{\frac{d}{n}|X|^{2}+\lambda|X| \choose r\beta_{2}|X|}p^{r\beta_{2}|X|}\\
& \leq\left(\frac{3p(\frac{d}{n}|X|^{2}+\lambda|X|)}{r\beta_{2}|X|}\right)^{r\beta_{2}|X|}\\
& \leq\left(\frac{48}{\log n}\right)^{d^{4/5}|X|/2\log^{5}n}
\end{align*}
Union bounding over all choices of $X,Y$ with $|X|=|Y|\leq n/\log^{6}n$,
we therefore get that
\begin{align*}
\text{Pr}(B_{2,i}|T_{1}\cap T_{2}) & \leq\sum_{|X|=1}^{n/\log^{6}n}{n/2 \choose |X|}{n/2 \choose |X|}\left(\frac{48}{\log n}\right)^{d^{4/5}|X|/2\log^{5}n}\\
& \leq\sum_{|X|=1}^{n/\log^{6}n}\exp(2|X|\log n-d^{4/5}|X|\log\log n/2\log^{5}n)\\
& \leq n\exp(-d^{4/5}/\log^{5}n).
\end{align*}
Similarly, it suffices to show that $\text{Pr}(B_{3,i}|T_{1}\cap T_{2})\leq\exp(-n)$.
For fixed $X\subseteq L_{i}$ and $Y\subseteq R_{i}$ with $|X|,|Y|\geq n/\log^{6}n$
and $|X|+|Y|>n/2$, we know that \vnote{Add this to the properties of the graph H which we remove at the start. This follows easily from Hoeffding and union bound.} $e_{H_{i}}(X,Y)\in\left[\left(1-\frac{1}{\sqrt{t}}\right)\left(\frac{d'}{n}|X||Y|-\lambda\sqrt{|X||Y|}\right),\frac{d}{n}|X||Y|+\lambda\sqrt{|X||Y|}\right]$.
Since given the event $T_{1}\cap T_{2}$, $E(G_{i})$ is constructed
by including each edge in $H_{i}$ independently with probability
at least $2/(t+2t^{3/4})$ and at most $3/t$, $\mathbb{E}[e_{G_{i}}(X,Y)|T_{1}\cap T_{2}]\geq\frac{2}{t}\left(1-\frac{1}{t^{1/4}}\right)\left(1-\frac{1}{\sqrt{t}}\right)\frac{d'}{n}|X||Y|-\frac{3\lambda}{t}\sqrt{|X||Y|}\geq\frac{2}{t}\left(1-\frac{2}{t^{1/4}}\right)\frac{d'}{n}|X||Y|$,
where we have used $\lambda\leq d^{9/10}/\log^{6}n$ in the last inequality.
Therefore, by Hoeffding's inequality, we get that for such $X,Y$
\begin{align*}
\text{Pr}\left[e_{G_{i}}(X,Y)\leq2r(1-\beta_{3})\frac{|X||Y|}{n}\bigg|T_{1}\cap T_{2}\right] & \leq\text{Pr}\left[e_{G_{i}}(X,Y)\leq\frac{2}{t}\left(1-\frac{8}{t^{1/4}}\right)\frac{d'}{n}|X||Y|\bigg|T_{1}\cap T_{2}\right]\\
& \leq\exp\left(-\frac{16d'|X||Y|}{nt^{5/2}}\right)\\
& \leq\exp\left(-\frac{4d'n}{t^{5/2}\log^{6}n}\right)\\
& \leq\exp\left(-\frac{2\sqrt{d'}n}{\log^{6}n}\right)
\end{align*}
By union bounding over all (at most $2^{n}$) such choices of $X$
and $Y$, we get that
\begin{align*}
\text{Pr}(B_{3,i}|T_{1}\cap T_{2}) & \leq\exp\left(n-\frac{2\sqrt{d'}n}{\log^{6}n}\right)\\
& \leq\exp(-n)
\end{align*}
as desired, where the last inequality uses $d'\geq\log^{12}n$.
Next, we distribute the edges between the partitions and make sure that all the expansion properties are being preserved. Note that I'm cheating here because I haven't removed our $H$! hopefully we should be able to first find $H$, remove it from $G$, and then apply these two lemmas for the graph $G':=G\setminus H$.
\textbf{Existence of good graph ($\omega(1)\leq d\leq\log^{100}n$)}
Note that if we only care about existence, and not about counting,
then the graph we remove at the start need not be sparse. Accordingly,
we will find and remove an $(\alpha,\frac{d}{2}(1-d^{-1/3}),\frac{n}{2})$-good
graph $H$ in the first step. Here's a proof sketch:
(1) Using Asaf's local lemma argument, we can find a partition $V=V_{1}\cup V_{2}$
such that the induced graph $G'$ satisfies $\frac{d}{2}-d^{2/3}\leq\delta(G')\leq\Delta(G')\leq\frac{d}{2}+d^{2/3}$.
(2) Apply Lemma 3.1 with
\begin{align*}
r & =d/2\\
\varphi & =nd/\beta_{2}\\
\beta_{1} & =20d^{-1/3}\\
\beta_{2} & =d^{-1/3}\\
\beta_{3} & =d^{-1/3}\\
\gamma & =\max\{\beta_{3},\beta_{1}+\beta_{2}\}
\end{align*}
(P4) is satisfied by the choice of parameters. (P1) is satisfied since
$\delta(G')\geq\frac{d}{2}-d^{2/3}$.
(P2): Let $|X|=|Y|\leq\mu n$. The expander mixing lemma gives $e_{G'}(X,Y)\leq d|X|^{2}/n+\lambda|X|$.
We want this to be $<\frac{d}{2}\beta_{2}|X|$. Since $\lambda\ll d\beta_{2}/2=d^{2/3}/2$,
it suffices to have $d|X|^{2}/n<d\beta_{2}|X|/2$ i.e. $|X|/n<\beta_{2}/2$,
which is true by our choice of $\varphi$.
(P3): Let $X\subseteq V_{1}$, $Y\subseteq V_{2}$ with $|X|+|Y|>n/2$
and $|X|,|Y|\geq\beta_{2}n/2$. By the expander mixing lemma, we get
$e_{G'}(X,Y)\geq d|X||Y|/n-\lambda\sqrt{|X||Y|}=2r|X||Y|/n-\lambda\sqrt{|X||Y|}$.
On the other hand, note that in this range of parameters, we have
$\lambda\sqrt{|X||Y|}=O(\sqrt{d}n)$ whereas $2r|X||Y|/n\geq\Omega(d\beta_{2}n)=\Omega(d^{2/3}n)$.
Since $d=\omega(1)$, we are done.
Note that this step is purely deterministic. All the probability was
in Step (1).
(3) Now, we need to check the perfect matching condition. Let $H'=(A'\cup B',E')$
be a subgraph of $G'$ with $A'\subseteq V_{1}$, $B'\subseteq V_{2}$
such that $|A'|=|B'|\geq(1-\alpha)n/2$ and $\delta(H')\geq(1-2\alpha)d/2$,
where $\alpha=0.01$. Suppose $H'$ does not contain a perfect matching.
Then, by the 2-sided version of Hall's theorem, there must exist $X\subseteq V_{1}$
and $Y\subseteq V_{2}$ such that $|X|=|Y|\leq|A'|/2$ and $e_{G'}(X,Y)\geq(1-2\alpha)d|X|/2$.
On the other hand, by the expander mixing lemma, for any such $X,Y$,
we have $e_{G'}(X,Y)\leq d|X|^{2}/n+\lambda|X|$. Since $\lambda|X|\ll\alpha d|X|/2$,
it suffices to show that $d|X|^{2}/n\leq(1-3\alpha)d|X|/2$ i.e. $|X|\leq n\times0.97/2$,
which is true since $|X|\leq|A'|/2\leq n/4$.
\\
\end{comment}
\begin{proposition}
\label{existence-good-graph}
For every $c>0$ there exists $d_0$ such that for all $d\geq d_0$ the following holds. Let $G=(V,E)$ be an $(n,d,\lambda)$-graph with $\lambda < d/4t^{4}$
where $t$ is an integer in $[d^{c/100},d^{c/10}] $.
Then, $G$ contains $t$ distinct, edge disjoint $\left(\alpha, \lfloor\bar{r}\rfloor, \frac{n}{2}\right)$-good subgraphs $W_{1},\dots,W_{t}$ with $\alpha=\frac{1}{10}$ and $\bar{r}=\frac{d}{t}\left(1-\frac{16}{t^{1/3}}\right)$.
\end{proposition}
The proof of this proposition is based on the following technical lemma, which lets us apply \cref{gale} to each part of the partitioning coming from \cref{lemma:finding initial partitions} in order to find large good factors.
\begin{lemma}
\label{lemma-almost-factorization-technical}
There exists an edge partitioning $E(G)=E_1\cup \ldots E_{t}$ for which the following properties hold:
\begin{enumerate}[$(S1)$]
\item $H_i:=G[E_i]$ is a balanced bipartite graph with parts $(A_i,B_i)$ for all $i\in[t]$.
\item For all $i\in[t]$ and for all $X\subseteq A_{i}$, $Y\subseteq B_{i}$ with $|X|=|Y| \leq n/2t^{2}$ we have $$e_{H_i}(X,Y) < d|X|/t^{2}.$$
\item For all $i\in [t]$ and all $X\subseteq A_i$, $Y\subseteq B_i$ with $|X|+|Y|>n/2$ and $\min\{|X|,|Y|\}>\frac{n}{2t^{2}}$, $e_{H_i}(X,Y) \geq 2\frac{d}{t}\left(1-\frac{8}{t^{1/3}}\right)\frac{|X||Y|}{n}.$
\item $d_{H_i}(v)\in \frac{d}{t}\pm \frac{8d}{t^{4/3}}$ for all $i\in[t]$ and all $v\in V(H_i)=V(G)$.
\item $e_{H_i}(X,Y)\leq(1-4\alpha)\frac{d}{t}|X|$ for all $X,Y\subseteq V(H_i)$ with $\frac{n}{2t^{2}} \leq |X|=|Y|\leq \frac{n}{4}$.
\begin{comment}
\item $\frac{d}{t}-2\left(\frac{d}{t}\right)^{2/3}\leq \delta(H_i)\leq \Delta(H_i)\leq \frac{d}{t}+2\left(\frac{d}{t}\right)^{2/3}$, and
\item
\item for all sets $X\subseteq V_i$ and $Y\subseteq U_i$ with $|X|=|Y|\leq \frac{0.8n}{t}$ we have $e_{H_i}(X,Y)<\frac{0.9d}{t}|X|$, and
\item for all $X\subseteq V_i$ and $Y\subseteq U_i$ with $|X|+|Y|>n/2$ and $\min\{|X|,|Y|\}>\frac{0.8n}{t}$ we have
$$e_{H_i}(X,Y)\geq 2\frac{d}{t}\left(1-\left(\frac{t}{d}\right)^{1/4}\right)|X||Y|/n.$$
\end{comment}
\end{enumerate}
\end{lemma}
Before proving this lemma, let us show how it can be used to prove \cref{existence-good-graph}.
\begin{proof}[Proof of \cref{existence-good-graph}]
Note that each balanced bipartite graph $H_1,\dots,H_t$ coming from \cref{lemma-almost-factorization-technical} satisfies the hypotheses of \cref{gale} with
$$r=\frac{d}{t}, \varphi = \frac{2rt^{2}}{n}, \beta_{1}=\beta_{2}=\beta_{3}=\frac{8}{t^{1/3}}, \gamma=\frac{16}{t^{1/3}}.$$
Indeed, $(P1)$ follows from $(S4)$, $(P2)$ follows from $(S2)$, $(P3)$ follows from $(S3)$ and $(P4)$ is satisfied by the choice of parameters. Therefore, \cref{gale} guarantees that each $H_i$ contains an $\lfloor \bar{r}\rfloor$-factor, and by construction, these are edge disjoint.
\begin{comment}
\begin{proof}[Proof of \cref{existence-good-graph}]
First, we show that the graph $H'$ obtained from \cref{technical-existence-good-graphs} satisfies the hypotheses of \cref{gale} with
\[
r=\frac{d}{t},\varphi=\frac{2rt^{2}}{n},\beta_{1}=\beta_{2}=\beta_{3}=\frac{1}{t},\gamma=\frac{2}{t}.
\]
(P4) is satisfied by the choice of parameters, whereas (P1) is satisfied due to (Q1).
\begin{comment}
(2)Apply Lemma 3.1 with
\begin{align*}
r & =d^{0.55}\\
\varphi & =r/d^{0.49}n\\
\beta_{1} & =d^{-0.04}\\
\beta_{2} & =d^{-0.04}\\
\beta_{3} & =d^{-0.04}\\
\gamma & =\max\{\beta_{3},\beta_{1}+\beta_{2}\}
\end{align*}
\end{comment}
\begin{comment}
For (P2), note that for $|X|=|Y|$, the expander mixing lemma gives $$e_{G'}(X,Y)\leq e_{G}(X,Y) \leq d|X|^{2}/n+\lambda|X|\leq r\beta_{2}|X|$$ for $|X|\leq \frac{r}{\varphi}=\frac{n}{2t^{2}}$.
For (P3), let $X\subseteq A$, $Y\subseteq B$ with $|X|+|Y|>n/2$
and $|X|,|Y|\geq n/2t^{2}$. We want to show that $e_{H'}(X,Y)\geq 2d(1-t^{-1})|X||Y|/nt$. By (Q3), it suffices to show that
\begin{equation}
\label{inequality-good-graph}
\frac{2d|X||Y|}{nt^2}\geq \frac{d|X||Y|}{nt^{3}} + \frac{2\lambda\sqrt{|X||Y|}}{t}
\end{equation}
which is readily seen to be implied by our assumption that $\frac{d}{4t^{2}}\geq \lambda$.
\end{comment}
Now, let $W_1,\dots,W_t$ be any $\lfloor\bar{r} \rfloor$-factors of $H_1,\dots,H_t$.
It remains to check that $W_1,\dots,W_t$ satisfy property $(G2)$. We will actually show the stronger statement that $H_1,\dots,H_t$ satisfy $(G2)$.
Indeed, let $H'_i=(A'_i\cup B'_i,E'_i)$ be a subgraph of $H_i$ with $A'_i\subseteq A_i$, $B'_i\subseteq B_i$ such that
$$|A'_i|=|B'_i|\geq(1-\alpha)n/2$$ and
$$\delta(H'_i)\geq(1-2\alpha)\lfloor\bar{r}\rfloor.$$
Suppose $H'_i$ does not contain a perfect matching. Then, by \cref{Hall}, without loss of generality, there must exist $X\subseteq A_i$ and
$Y\subseteq B_i$ such that
$$|X|=|Y|\leq|A_i|/2\leq n/4$$ and
$$N_{H'_i}(X)\subseteq Y.$$
In particular, by the minimum degree assumption, it follows that
$$e_{H'_i}(X,Y)\geq(1-2\alpha)\lfloor\bar{r}\rfloor|X|.$$
On the other hand, we know that for such a pair,
$$e_{H'_i}(X,Y)\leq e_{G}(X,Y)\leq d|X|^{2}/n+\lambda|X|.$$
Thus, we must necessarily have that $|X| \geq n/2t^{2}$, which contradicts $(S5)$. This completes the proof.
\end{proof}
\begin{proof}[Proof of \cref{lemma-almost-factorization-technical}]
Our construction will be probabilistic. We begin by applying \cref{lemma:finding initial partitions} to $G$ to obtain a collection of balanced bipartitions $(A_i,B_i)_{i=1}^{t}$ satisfying Properties $(R1)$ and $(R2)$ of \cref{lemma:finding initial partitions}. Let $G_i:=G[A_i,B_i]$, and for each $e\in E(G)$, let $C(e)$ denote the set of indices $i\in[t]$ for which $e\in E(G_i)$. Let $\{c(e)\}_{e\in E(G)}$ denote a random collection of elements of $[t]$, where each $c(e)$ is chosen independently and uniformly at random from $C(e)$.
Let $H_i$ be the (random) subgraph of $G_i$ obtained by keeping all the edges $e$ with $c(e)=i$. Then, the $H_i$'s always form an edge partitioning of $E(G)$ into $t$ balanced bipartite graphs with parts $(A_i,B_i)_{i=1}^{t}$.
It is easy to see that these $H_i$'s will always satisfy $(S2)$. Indeed, if for any $X,Y\subseteq V(G)$ with $|X|=|Y|$, we have $e_{H_i}(X,Y)\geq d|X|/t^{2}$,
then since
$$e_{H_i}(X,Y)\leq e_G(X,Y)\leq \frac{d|X|^{2}}{n}+\lambda |X|$$
by the Expander Mixing Lemma (\cref{Expander Mixing Lemma}), it follows that
$$\frac{d}{t^{2}}\leq \frac{d|X|}{n} +\lambda,$$ and therefore, we must have
$$|X|> \frac{n}{2t^{2}}.$$
We now provide a lower bound on the probability with which this partitioning also satisfies $(S3)$ and $(S4)$.
To this end, we first define the following events:
\begin{itemize}
\item For all $v\in V(G)$ and $i\in [t]$, let $D_{i,v}$ denote the event that $d_{H_i}(v)\notin \frac{d}{t}\pm \frac{8d}{t^{4/3}}$.
\item For all $i\in [t]$ and all $X\subseteq A_i$, $Y\subseteq B_i$ with $|X|+|Y|>n/2$ and $\min\{|X|,|Y|\}>\frac{n}{2t^{2}}$, let $A(i,X,Y)$ denote the event that $e_{H_i}(X,Y)\leq 2\frac{d}{t}\left(1-\frac{8}{t^{1/3}}\right)\frac{|X||Y|}{n}$
\end{itemize}
Next, we wish to upper bound the probability of occurrence for each of these events.
Note that for all $i\in[t]$ and $v\in V(G)$, it follows from $(R1)$ and $(R2)$ that $$\mathbb{E}[d_{H_i}(v)]\in \frac{d/2\pm d^{2/3}}{t/2\mp t^{2/3}}\in \frac{d}{t} \pm \frac{4d}{t^{4/3}}.$$
Therefore, by Chernoff's inequality, we get that for all $i\in[t]$ and $v\in V(G)$,
\begin{equation}
\label{bound on Dv}
\text{Pr}[D_{i,v}]\leq \exp\left(-\frac{d}{t^{5/3}}\right).
\end{equation}
Moreover, for all $i\in[t]$ and for all $X\subseteq A_i$, $Y\subseteq B_i$ with $|X|+|Y|>n/2$ and $\min\{|X|,|Y|\}>\frac{n}{2t^{2}}$, we have from the expander mixing lemma and $(R2)$ that
$$\mathbb{E}[e_{H_i}(X,Y)]\geq2\frac{d}{t}\left(1-\frac{4}{t^{1/3}}\right)\frac{|X||Y|}{n}.$$
Therefore, by Chernoff's bounds, we get that for $i\in[t]$ and all such $X,Y$,
\begin{equation}
\label{bound on edges between X and Y}
\text{Pr}\left[A(i,X,Y)\right]\leq \exp\left(-\frac{d|X||Y|}{nt^{8/3}}\right).
\end{equation}
Now, we apply the asymmetric version of the Local Lemma (\cref{ALLL}) as follows: our events consist of all
the previously defined $D_{i,v}$'s and $A(j,X,Y)$'s.
Note that each $D_{i,v}$ depends only on those $D_{j,w}$ for which $\text{dist}_{G}(v,w)\leq 2$. In particular, each $D_{i,v}$ depends on at most $td^{2}$ many $D_{j,w}$. Moreover, we assume that $D_{i,v}$ depends on all the events $A(j,X,Y)$ and that each $A(j,X,Y)$ depends on all the other events.
For convenience, let us enumerate all the events as $\mathcal E_k$, $k=1,\ldots \ell$. For each $k\in[\ell]$, let $x_k$ be $\exp\left(-\sqrt{d}\right)$ if $\mathcal E_k$ is of the form $D_{j,v}$, and $x_k$ be $\exp\left(-\sqrt{d}|X||Y|/n\right)$ if $\mathcal E_k$ is of the form $A(j,X,Y)$
To conclude the proof, we verify that $$\text{Pr}[\mathcal E_k]\leq x_k\prod_{j\sim k}(1-x_j)$$ for all $k$.
Indeed, if $\mathcal E_k$ is of the form $D_{j,v}$ then we have
\begin{align*}
e^{-\sqrt{d}}\left(1-e^{-\sqrt{d}}\right)^{td^{2}}\prod_{x,y}\left(1-e^{-\sqrt{d}xy/n}\right)^{{n \choose x}{n \choose y}} & \geq e^{-\sqrt{d}}e^{-2td^{2}e^{-\sqrt{d}}}\prod_{x,y}\left(e^{-2e^{-\sqrt{d}xy/n}}\right)^{{n \choose x}{n \choose y}}\\
& \geq e^{-\sqrt{d}}e^{-2td^{2}/e^{\sqrt{d}}}\prod_{x,y}\exp\left(-2e^{-\sqrt{d}n/8t^{2}}{n \choose x}{n \choose y}\right)\\
& \geq e^{-\sqrt{d}}e^{-2td^{2}/e^{\sqrt{d}}}\exp\left(-e^{-\sqrt{d}n/8t^{2}}2^{3n}\right)\\
& >\text{Pr}[\mathcal{E}_{k}].
\end{align*}
On the other hand, if $\mathcal E_k$ is of the form $A(j,X,Y)$, then we have
\begin{align*}
e^{-\sqrt{d}xy/n}(1-e^{-\sqrt{d}})^{nt}\prod_{x,y}(1-e^{-\sqrt{d}xy/n})^{{n \choose x}{n \choose y}} & \geq e^{-\sqrt{d}xy/n}e^{-2e^{-\sqrt{d}}nt}\prod_{x,y}\left(e^{-2e^{-\sqrt{d}xy/n}}\right)^{{n \choose x}{n \choose y}}\\
& \geq e^{-\sqrt{d}xy/n-2e^{-\sqrt{d}}nt}\prod_{x,y}\exp\left(-2e^{-\sqrt{d}n/8t^{2}}{n \choose x}{n \choose y}\right)\\
& \geq e^{-\sqrt{d}xy/n-2e^{-\sqrt{d}}nt}\exp\left(-e^{-\sqrt{d}n/8t^{2}}2^{3n}\right)\\
& >\text{Pr}[\mathcal{E}_{k}].
\end{align*}
Therefore, by the asymmetric version of the Local Lemma, Properties $(S3)$ and $(S4)$ are satisfied with probability at least $$\left(1-e^{-\sqrt{d}}\right)^{nt}\prod_{x,y}(1-e^{-\sqrt{d}xy/n})^{{n \choose x}{n \choose y}}\geq e^{-nt}.$$
To complete the proof, it suffices to show that the probability that $(S5)$ is not satisfied is less than $\exp(-nt)$. To see this, fix $i\in [t]$ and $X,Y\subseteq V(H_i)$ with $n/2t^{2} \leq|X|=|Y|\leq n/4$. By the expander mixing lemma,
we know that $$e_{G}(X,Y)\leq d|X|^{2}/n+ \lambda|X| \leq d|X|/4 + \lambda |X|,$$ so by $(R2)$ we get
$$\mathbb{E}[e_{H_i}(X,Y)]\leq\frac{d}{2t}\left(1+\frac{4}{t^{1/3}}\right)|X|.$$
Therefore,
by Chernoff's bounds, it follows that
$$\text{Pr}\left[e_{H_i}(X,Y)\geq(1-4\alpha)d|X|/t\right] \leq \exp\left(-d|X|/50t\right) \leq \exp\left(-dn/100t^{3}\right).$$
Applying the union bound over all $i\in [t]$, and all such $X,Y\subseteq V(G)$, implies that the probability for $(S5)$ to fail is at most $\exp(-dn/200t^{3})<\exp(-nt)/2$. This completes the proof.
\end{proof}
\begin{remark}
\label{rmk:counting}
The above proof shows that there are at least $\frac{1}{2}\exp(-nt)\left(\frac{t}{2}-t^{2/3}\right)^{nd/2}$ (labeled) edge partitionings satisfying the conclusions of \cref{lemma-almost-factorization-technical}.
\end{remark}
\section{Proofs of \cref{main:pseudorandom} and \cref{main:counting}}
In this section, by putting everything together, we obtain the proofs of our main results.
\begin{proof}[Proof of \cref{main:pseudorandom}]
Let $c=\varepsilon/10$, and apply \cref{existence-good-graph} with $\alpha=1/10$, $c$, and $t$ being an odd integer in $[d^{c/100},d^{c/10}]$ to obtain $t$ distinct, edge disjoint $\left(\alpha,r,\frac{n}{2}\right)$-good graphs $W_1,\dots,W_t$, where $r=\lfloor\frac{d}{t}\left(1-\frac{16}{t^{1/3}}\right) \rfloor$. Let $G':=G\setminus \bigcup_{i=1}^{t}W_i$, and note that $G'$ is $r':=d-rt$ regular. After possibly replacing $r$ by $r-1$, we may further assume that $r'$ is even. In particular, by \cref{2-factor theorem}, $G'$ admits a 2-factorization. By grouping these 2-factors, we readily obtain a decomposition of $G'$ as $G'=G'_1\cup\dots\cup G'_t$ where each $G'_i$ is $r'_i$-regular, with $r'_i \in \frac{r'}{t} \pm t\leq 40\frac{d}{t^{4/3}}$. Finally, applying \cref{proposition-completion} to each of the regular graphs $R_1,\dots,R_t$, where $R_i:=W_i\cup G'_i$, finishes the proof.
\end{proof}
We will obtain `enough' 1-factorizations by keeping track of the number of choices available to us at every step in the above proof.
\begin{proof}[Proof of \cref{main:counting}]
Suppose that $\lambda\leq d^{1-\varepsilon}$ and let $c=\varepsilon/10$. Now, fix $\epsilon > 0$. Throughout this proof, $\epsilon_1,\dots, \epsilon_4$ will denote positive quantities which go to $0$ as $d$ goes to infinity. By \cref{rmk:counting}, there are at least $\left((1-\epsilon_1)\frac{t}{2}\right)^{nd/2}$ edge partitionings of $E(G)$ satisfying the conclusions of \cref{lemma-almost-factorization-technical} with $\alpha = 1/10$, $c$, and $t$ an odd integer in $[d^{c/100}, d^{c/10}]$. For any such partitioning $E(G)=E_1 \cup \dots \cup E_t$, the argument in the proof of \cref{main:pseudorandom} provides a decomposition $E(G)=E(R_1)\cup\dots\cup E(R_t)$. Recall that for all $i\in[t]$, $R_i:=W_i\cup G'_i$, where $W_i$ is an $(\alpha,r,n/2)$-good graph with $r\geq \lfloor\frac{d}{t}\left(1-\frac{16}{t^{1/3}}\right) \rfloor-1$ and $E(W_i)\subseteq E_i$, and $G'_i$ is an $r'_i$-regular graph with $r'_i \leq 40d/t^{4/3}$. In particular, by \cref{rmk:counting-completion}, each $R_i$ has at least $\left((1-\epsilon_2)\frac{d}{te^{2}}\right)^{nd/2t}$ $1$-factorizations. It follows that the multiset of $1$-factorizations of $G$ obtained in this manner has size at least $\left((1-\epsilon_3)\frac{d}{2e^2}\right)^{nd/2}$.
To conclude the proof, it suffices to show that no $1$-factorization $\mathcal{F}=\{F_1,\dots,F_d\}$ has been counted more than $\left(1+\epsilon_4\right)^{nd/2}$ times above. Let us call an edge partitioning $E(G)=E_1\cup\dots\cup E_t$ \emph{consistent} with $\mathcal{F}$ if $E(G)=E_1\cup \dots \cup E_t$ satisfies the conclusions of \cref{lemma-almost-factorization-technical}, and $\mathcal{F}$ can be obtained from this partition by the above procedure.
It is clear that the multiplicity of $\mathcal{F}$ in the multiset is at most the number of edge partitionings consistent with $\mathcal{F}$, so that it suffices to upper bound the latter.
For this, note that at least $d - 57d/t^{1/3}$ of the perfect matchings in $\mathcal{F}$ must have all of their edges in the same partition $E_i$. Thus, the number of edge partitionings consistent with $\mathcal{F}$ is at most ${d \choose 57d/t^{1/3}}\left(\frac{t}{2}+t^{2/3}\right)^{57nd/2t^{1/3}}d^{t}$, and observe that this last quantity can be expressed as $\left(1+\epsilon_4\right)^{nd/2}$.
\end{proof}
\section{Concluding remarks and open problems}
\label{section-concluding-rmks}
\begin{itemize}
\item In \cref{main:pseudorandom}, we proved that every $(n,d,\lambda)$-graph contains a $1$-factorization, assuming that $\lambda \leq d^{1-\varepsilon}$ and $d_0\leq d\leq n-1$ for $d_0$ sufficiently large. As we mentioned after the statement, it seems reasonable that one could, by following our proof scheme with a bit more care, obtain a bound of the form $\lambda \leq d/\log^cn$. In \cite{KS}, Krivelevich and Sudakov showed that if $d-\lambda\geq 2$ (and $n$ is even) then every $(n,d,\lambda)$-graph contains a perfect matching (and this, in turn, was further improved in \cite{CGH}). This leads us to suspect that our upper bound on $\lambda$ is anyway quite far from the truth. It will be very interesting to obtain a bound of the form $\lambda \leq d -C$, where $C$ is a constant, or even one of the form $\lambda \leq \varepsilon d$, for some small constant $\varepsilon$. Our proof definitely does not give it and new ideas are required.
\item In \cite{KW}, Kim and Wormald showed that for every fixed $d\geq 4$, a typical $G_{n,d}$ can be decomposed into perfect matchings, such that for `many' prescribed pairs of these matchings, their union forms a Hamilton cycle (in particular, one can find a Hamilton cycle decomposition in the case that $d$ is even). Unfortunately, our technique does not provide us with any non trivial information about this kind of problem, but we believe that a similar statement should be true in $G_{n,d}$ for all $d$.
\item In \cref{main:counting}, we considered the problem of counting the number of $1$-factorizations of a graph. We showed that the number of $1$-factorizations in $(n,d,\lambda)$-graphs is at least
$$\left((1-o_d(1))\frac{d}{2e^2}\right)^{nd/2},$$ which is off by a factor of $2^{nd/2}$ from the conjectured upper bound (see \cite{LL}), but is still better than the previously best known lower bounds (even in the case of the complete graph) by a factor of $2^{nd/2}$.
In an upcoming paper, together with Sudakov, we have managed to obtain an optimal asymptotic formula for the number of $1$-factorizations in $d$-regular graphs for all $d\geq \frac{n}{2}+\varepsilon n$. It is not very unlikely that by combining the techniques in this paper and the one to come, one can obtain the same bound for $(n,d,\lambda)$-graphs, assuming that $d$ is quite large (at least $\log^Cn$). It would be nice, in our opinion, to obtain such a formula for all values of $d$.
\item A natural direction would be to extend our results to the hypergraph setting. That is, let $H^k_{n,d}$ denote a $k$-uniform, $d$-regular hypergraph, chosen uniformly at random among all such hypergraphs. For which values of $d$ does a typical $H^k_{n,d}$ admit a $1$-factorization? How many such factorizations does it have? Quite embarrassingly, even in the case where $H$ is the complete $k$-uniform hypergraph, no non-trivial lower bounds on the number of $1$-factorizations are known. Unfortunately, it does not seem like our methods can directly help in the hypergraph setting.
\end{itemize}
| {
"attr-fineweb-edu": 2.578125,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUeH7xK5YsWOzBDA4I | \section{Introduction}
\noindent
It has come to our attention that there are still some unresolved problems
relating to football flight. Dr. Timothy Gay, author of a popular book on
Football Physics \cite{tim1} suggested a challenge at a recent meeting of the Division
of Atomic, Molecular and Optical Physics, in Tennessee, May 2006.
The challenge given at the conference was to explain why the long axis of the ball
appears to turn (pitch) and follow the parabolic trajectory of the flight path.
After an extensive literature search we have found that the theory of why a football
pitches in flight has been explained quite well by Brancazio \cite{bran1}.
The more general theory of a football in flight, giving elaborate mathematical details
has been given by both Brancazio and Rae \cite{bran2, bran3, rae1}.
We have found that the literature (especially comments found online) do tend to use
improper moments of inertia for the football and incorrectly apply the Magnus force
(known as spin drift in ballistics) to
explain yaw of the football in flight. Clearly, since a challenge was posed by an expert
in football physics, we feel the need to summarize the literature in this area
and update it where necessary.\\
We intend to give the reader all the mathematical details needed for an accurate
description of the moments of inertia for the football. The needed theory, on torque free precessional
motion (for more than one spin axis) and gyroscopic motion when torque is applied,
is already available in standard mechanics texts \cite{goldstein, marion}.
With the use of these texts and papers like that
of Brancazio \cite{bran1, bran3} we take the theory to be well enough expounded. We will merely
quote the needed equations and cite the references.\\
The second author would also
like to take this opportunity to write a paper on ``something typically American" to
celebrate becoming an American citizen in June 2006.\\
Several scientists at SUNY; State University of New York at Buffalo, have an e-print online
with flight data taken with an onboard data recorder for an American football.
They used a ``Nerf" ball and cut out the foam to incorporate circuit boards and four
accelerometers inside the ball \cite{nowak}. They confirm that a typical spin rate is
10 rev/sec or 600 RPM (revolutions per minute) from their data. They also give a precession frequency of
340 RPM which gives a 1.8 value for the
spin to precession rate ratio (spin to wobble ratio).
We will discuss this further later on.
The usual physics description of football flight, of the kind you can find online,
is at the level of basic center of mass
trajectory, nothing more complex than that \cite{hsw}. It is our
intention in this article to go into somewhat greater detail in order
to discuss air resistance, spin, pitch and yaw. We find that the literature uses, for the most part,
a bad description of the moments of inertia of the football. We intend to remedy that.
Our review will encompass all possible flight trajectories,
including kicking the ball and floating the ball.
We will briefly review the equations needed for all possible scenarios.\\
\noindent
In order to conduct this analysis it will be necessary to make some basic assumptions. We assume
that solid footballs may be approximated by an ellipsoid, we ignore any effect of the laces. We note that
a Nerf ball (a foam ball with no laces) has a very similar flight pattern to a regular inflated football,
so the laces can be thought to have little or no effect, they just help in putting the spin on the ball.
More about this later. For game footballs we will consider a prolate spheroid shell and
a parabola of revolution for the shapes of the pigskin.
We calculate the moments of inertia (general details in Appendix A and B) and take the long axis
of the ball to be between 11--11.25 inches and the width of the ball to be between 6.25--6.8 inches.
These measurements correspond to the official size as outlined by the
Wilson football manufacturer \cite{wilson}. The variation must be due to how well the ball is inflated,
and it should be noted the balls are hand made so there is likely to be small variation in size
from one ball to the next.
(The game footballs are inflated to over 80 pounds pressure to ensure uniformity of shape,
by smoothing out the seams.)
In the following we give the description of the football in terms of
ellipsoidal axes, whose origin is the center of mass (COM) of the ball.
\noindent
\begin{figure}
\includegraphics[width=3.0in]{figure1.eps}\\
\caption{Basic ellipsoidal shape of football showing axes.}\label{Fig. 1.}
\end{figure}
\noindent
For explanation of the terms in an ellipse see references \cite{goldstein, marion}.
We take the semi--major axis to be $a$. The semi--minor axes are $b$ and $c$.
For the football, we will assume both the semi--minor axes are the same
size $b=c$ and they are in the transverse, $e_1$ and the $e_2$, unit vector directions.
The semi--major axis is in direction $e_3$. These are the principal axes of the football.\\
\noindent
We will take the smallest length of an official size football,
$11$ inches and largest width of $6.8$ inches, we get $a = 5.5$ inches (or $14.1 cm $) and
$b=3.4$ inches (or $8.6 cm$) which gives a ratio $a/b = 1.618$ which agrees
with the average values given by Brancazio \cite{bran3}.
Using the solid ellipsoid model, this gives us the principal moments of inertia
$I_1 = I_2 = I = \frac{1}{5} m b^2 \left( 1 + a^2/b^2 \right) $
and $I_3 = 2 m b^2/5 $.\\
\noindent
The torque free spin the wobble ratio (or its inverse) can be found in most advanced text books on
gyroscopic motion or rigid body motion, \cite{marion}. We can give the formula here for the torque free
ratio of spin to wobble (or precession);
\begin{equation}
\frac{ \mbox{ spin}}{\mbox{wobble}} = \frac{\mbox{$\omega$}_3}{\dot{\phi}}= \frac{I \cos \theta }{I_3} =
\frac{1}{2} \left( 1 + \frac{a^2}{b^2} \right) \cos \theta
\end{equation}
\noindent
Clearly, for a horizontal ball, the Euler angle $\theta =0$,
if we use the ratio of semi major to semi minor axis $ a/b = 1.618$ we get
spin/wobble $ = 1.603$, which is less than the experimentally observed value,
of $1.8$ \cite{nowak,tim1}. This corresponds to a vacuum
spin to wobble ratio, no air resistance has been taken into account. It appears that the precession
is somewhat effected by air drag. Also the shape may not be that accurate,
since an ellipsoid has fairly rounded ends and a football is more pointed.
\noindent
There are several available expressions for air resistance. In ballistic theory there is
the Prandtl expression for force of air resistance (or drag) \cite{marion} which goes
$f_d = 0.5 c_w \rho A v^2$, where
$c_w$ is some dimensionless drag coefficient (tabulated), $\rho$ is the density of air,
$A$ is the cross--sectional area and $v$ is the velocity of the projectile. This formula
is only valid for high velocities, of $24 $ m/s or above, up to Mach--1 where the air resistance
increases rapidly. (Mach--1 is the speed of sound, which is $330$ m/s or about $738$ mph. )
It turns out for lower speeds (less than $24 $ m/s or equivalently $54$ mph)
the Stokes' Law formula is more appropriate which has an air resistance proportional
to velocity not the square of the velocity. Generally, air resistance formulae that go with the square
of the velocity are called Newton's laws of resistance and those that go as the velocity
are called Stokes' Laws \cite{marion}.\\
\noindent
From the information online and in papers and books, we have discovered that the average
speed an NFL player can throw the ball is about $20$ m/s or $44.74$ mph \cite{watts}. The top speed
quoted by a football scout from a $ 260 \; \ell$ b quarterback was clocked at $56$ mph or $25$ m/s.
It is doubtful that any quarterback can throw faster than $60$ mph or $26.8$ m/s. It appears that these
velocities are right on the boarder line between using one form of air drag force and another.
If we assume that most quarterbacks will throw under less than ideal conditions, under stress, or
under the threat of imminent sacking, they will most likely throw at slightly lower speeds, especially
as the game continues and they get tired. We further suggest that it is easier to
throw the ball faster when you throw it in a straight line horizontally,
as opposed to an angle of 45 degrees above the horizontal for
maximum range. Therefore, we suggest that the air drag force law
should be of the Stokes variety, which is proportion to velocity and cross
sectional area of the ball and air density. We shall use an air resistance of
$f_d = \gamma \rho A v $ where $\rho $ is the density of air,
$A$ is the cross--sectional area of the football and $v$ is the velocity of the ball.
The $\gamma$ factor takes into account all other dimensions and can be altered to get
accurate results. We shall assume that an average throw reaches speeds of up to $20 $m/s or
$44.74$ mph. (The conversion factor is $0.44704 $ m/s $ = 1 $ mph, see \cite{iop} ).\\
\noindent
In rocket theory, and generally the theory of flight, there is a parameter known as the
center of pressure (CP) \cite{space}. The center of pressure of a rocket is the point
where the pressure forces act through. For example, the drag force due to air resistance would
be opposite in direction to the velocity of the ball and it would act through the CP point.
It is generally not coincident with the center of mass
due to fins or rocket boosters, wings, booms any number of ``attachments" on aeroplanes and rockets.
It is clear that if the CP is slightly offset (by distance $\ell$ say) from the COM of
the football this would lead to a
torque about the COM since torque is equal to the cross product of displacement and force,
$ \vec{\tau}=\vec{\ell} \times \vec{f_d }$.
\noindent
\begin{figure}
\includegraphics[width=6.5in]{Figure2.eps}\\
\caption{Center of mass, center of pressure (CP), velocity and drag force on the ball. The drag force
$f_d$ acts along the velocity direction and through the CP. We take $\theta$ as the angle
between the force and the $e_3$ axis. The aerodynamic torque is then $\tau = f_d \ell \sin \theta $.}
\label{Fig. 2.}
\end{figure}
\noindent
The center of pressure was mentioned also by Brancazio \cite{bran1, bran2, bran3}
as an important factor in the flight of the football. In fact
the laces will tend to make the CP slightly offset from the COM which will add a small torque, but
this is negligible in comparison to other effects which we describe in the next section.
The offset of the CP from the COM is caused by aerodynamic drag on the leading
edge of the football in flight. This would offset the CP slightly forward of the COM.
The CP offset results in gyroscopic precession
which in turn is due to torque since the drag forces act through the CP not the COM. A second
form of precession, called torque free precession, comes from the way the football is thrown.
If there is more than one spin axis the ball will precess in flight. \\
\noindent
With these definitions made, we are now ready to discuss the flight of the football.
\noindent
\section{Discussion of minimization of energy in flight}
\noindent
We wish to explain the pitching and possible yaw of the football during its flight with energy
minimization principals which have not been discussed previously in the literature.
The center of mass motion of the ball follows a perfect parabolic path, as explained in the
basic treatments already mentioned. We may treat the rotational motion separately, and consider the
body frame of the football, centered on the COM of the ball, as it moves.\\
We will assume that the ball is thrown, by a professional footballer, so that it has a ``perfect"
spin along the $e_3$ axis, see Fig. 1.
However, no matter how well the ball is thrown, there will always be
a slight torque imparted to the ball because of the downward pull of the fingers needed to
produce the spin. The effect of this ``finger action" is to tilt the top of the ball upward away
from the velocity or thrust direction, $\vec{v}$. Thus, the velocity vector $\vec{v}$ is not
exactly in the same direction
as the spin, $e_3$ axis. This misalignment of the initial throw (initial thrust vector not in
the $e_3$ direction) results in a torque about the $e_1$ axis. This produces a pitch of the ball
during its flight. This pitching is furthermore advantageous, since it tends to reduce the air
resistance of the football. We stated in the introduction that the drag was proportional to the
cross sectional area and the velocity of the football. If the ball is thrown at speed $v_0$ at an
angle to the horizontal of $\pi/4$ to maximize range, then initially the horizontal and vertical
velocities are the same. Both directions suffer air resistance but the vertical direction also
works against gravity so the vertical velocity will decrease faster than the horizontal.
The air drag is proportional to cross sectional area, so it is energetically favorable for the
football to pitch in flight and present a larger cross section vertically as its upward speed slows down.
When the ball is at maximum height, there is zero vertical velocity. The football can present
its maximum cross section vertically since this will not effect the air resistance vertically, which
depends on velocity. At this maximum height, the horizontal cross section is also a minimum.
This reduces the horizontal drag to a minimum which is again energetically favorable. One should note that
objects in nature tend to move along paths of least resistance and along paths which minimize energy.
This is exactly what the football will do. One should further note that the moments of inertia
about the $e_1$ and $e_2$ axes are the same and $1.655$ times larger (using experimental values) than the
moment of inertia about the spin axis $e_3$, for $a/b = 1.618$.
It is well known in space physics when a rotating body (a satellite in space
for example) has $2$ or $3$ different moments of inertia along different axes, and there is some
dissipation of energy, no matter how small, there is a natural tendency for kinetic energy to be
minimized, which results in a change of spin axis. Let us elaborate.
If there is no torque (gravity cannot exert a torque) then any initial angular
momentum is a conserved quantity. If there is some kind of energy dissipation, (in the case of the
football air resistance) then the kinetic energy will be minimized
by rotating the spin axis to the axis of largest moment of inertia. The kinetic energy is
$L^2/( 2I)$ where $L$ is angular momentum and $I$ is the moment of inertia. Clearly the largest
moment of inertia minimizes energy. Since the $I$ in the $e_1$ and $e_2$ directions is the same,
why does the football not ``yaw", rotate about the $e_2$ axis? Well it turns out that the initial
throw does not initiate this torque (not if the ball is thrown perfectly anyway!) and this rotation
would act to increase the air drag in both the horizontal and vertical directions, so it is not
energetically favorable. \\
\noindent
For a non--perfect throw, which most throws are, there is a slight initial torque in
the $e_2$ direction also caused by the handedness of the player. A right handed player will
pull with the fingers on the right of the ball, a left handed player pulls to the left,
this will result in a yaw (lateral movement). This yaw will result in the football moving slightly
left upon release and then right for a righthanded player and slightly right upon release and then
left for a left handed player. This strange motion has been observed by Rae \cite{rae1}.
The lateral motion is a small effect
which would only be noticeable for a long (touchdown pass of approx 50 m) parabolic throw.
The effect is sometimes
attributed to the Magnus (or Robin's) force, which we believe is an error. The angular velocity of
the spin is not sufficiently high for the Magnus force to have any noticeable effect. We suggest
that the slight lateral motion is just a small torque imparted by the initial throw, due to the fingers
being placed on one side of the ball and slightly to the rear of the ball. One can imagine the initial
``thrust" CP being slightly off center (towards the fingers) and towards the rear of the ball. Hence the
resemblance to a faulty rocket thruster!
The force of the throw greatly exceeds the drag force hence the CP is momentarily closer
to the rear of the ball and slightly toward the fingers or laces. As soon as the football
is released the aerodynamic drag will kick in and the CP shifts forward of the COM.
During flight through the air the CP of the football moves forward because
now the aerodynamic drag is the only force acting through the CP which can produce a torque, and
the drag force is directed towards the leading edge of the football not the rear. For a right handed player,
the initial torque from the throw will send the football slightly to the left. This will switch
almost immediately upon release as the CP shifts forward of the COM and then the ball will move towards
the right of the player. The overall lateral motion will be to the right for a right handed player. This
fairly complex motion has been observed in practice by Rae \cite{rae1}.\\
\begin{figure}
\includegraphics[width=6.0in]{Figure3.eps}\\
\caption{This figure shows the initial throw and the position of the CP towards the hand, and the CP in flight when it shifts forward.
The torque directions are also shown as $e_2$. We assume the football is thrown by a right handed player. First it yaws left and then
right. The overall motion is slightly to the right for a right handed player.}\label{Figure 3.}
\end{figure}
The Magnus force will be important for kicking the football, where the velocity of the ball and
the imparted spin is much greater. To give an example when the Magnus force is great, consider
a curve-ball in baseball. The typical velocity of the ball is $v= 80$ mph (35.76 m/s) the
mass $m= 0.145$ kg, the distance travelled is approximately $L=18$ m and the spin can be as high as
$\omega = 2000$ rpm (or 33.33 rev/sec ). The mass parameter $K_2 = 5.5 \times 10^{-4} $ kg. This
gives a Magnus deflection $d$ of \cite{ferrer}
\begin{equation}
d = \frac{K_2 L^2 \omega}{2 m v} \;\;\; .
\end{equation}
\noindent
For the curve-ball, $d= 0.57$ m. Now let us consider typical values for a football. Consider a pass
of length $L=50$ m. The football mass is $m=0.411$ kg, a throwing velocity of $v= 20$ m/s and a
spin of $\omega= 600 $ rpm ( equivalent to 10 rev/sec). These numbers give a Magnus deflection
of $d=0.8$ m for a 50 m pass. This would be hardly noticeable. A strong gust of wind is more likely to
blow the football off course than it is to cause a Magnus force on it. The only way to really account
for a right handed player throwing slightly to the right and a left handed player throwing slightly
to the left would be in the initial throw. This must have to do with finger positioning, and laces,
on one side of the ball or the other.
\section{The theory of flight}
\noindent
After an extensive search of the literature, we have discovered a detailed analysis
on the rigid dynamics of a football by Brancazio \cite{bran1,bran2, bran3}.
Clear observations of lateral motion, of the type discussed above,
have been made by Rae \cite{Rae1}.
The moments of inertia used are that for an ellipsoidal shell which have been calculated
by subtracting a larger solid ellipsoid of semi--major and minor axes $a+t$ and $b+t$ from
a smaller one with axes $a$ and $b$. The results are kept in the first order of the
thickness of the shell, $t$. We have calculated the exact prolate spheroid shell moments (in Appendix A)
and we have also used a parabola of revolution, (calculation in Appendix B)
which we believe more closely fits the shape of a football.
Our results are seen to complement those of Brancazio \cite{bran3}.\\
\noindent
The angular momentum of a football can only change if there is an applied torque.
Aerodynamic drag on a football can produce the torque required to pitch
the ball to align the $e_3$ axis along the parabolic trajectory \cite{bran1}. The air
pressure over the leading surface of the football results in a drag force which acts through the
center of pressure (CP). The CP depends on the speed of the football and the inclination of
the $e_3$ axis to the horizontal. For inclinations of zero and 90 degrees there is no torque since the CP
is coincident with the COM, ignoring any effect of the laces. During flight when there is an acute angle
of inclination of the $e_3$ axis to the horizontal, the CP should be slightly forward of the COM, since
the air hits the leading edge of the ball. The gyroscopic precession of the ball is caused by this
aerodynamic torque. The resulting motion of the football is very similar to
a gyroscope \cite{bran1, goldstein, marion} but has extra complexity due to the drag forces
changing with pitch angle of the football. For stability we require that \cite{bran1},
\begin{equation}
\omega_3 = \frac{ \sqrt{ 4 \tau I_1 }}{I_3} \;\;\; ,
\end{equation}
\noindent
where $\tau = f_d \ell \sin \theta$ is aerodynamic torque, $\theta$ is the angle between
the aerodynamic drag force and the $e_3$ axis,
$I_1$and $I_3$ are the moments of inertia in the transverse and
long $e_3$ axis directions and $\omega_3$ is the angular velocity of the football about the $e_3$ axis.
The gyroscopic precession rate, defined as the angular rotation rate of the velocity axis
about the $e_3$ axis, $\dot{\phi}$ is given by \cite{bran3}
\begin{equation}
\dot{\phi} = \frac{ f_d \ell}{I_3 \omega_3 }
\end{equation}
where $f_d$ is the aerodynamic drag (which is the cause of torque $\tau$ above) and $\ell$ is
the distance from the CP to the COM. Both of these values will change with the pitch of the ball,
hence the football dynamics is rather more complex than that of a simple top.
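\noindent
As an illustration of these two relations, the following minimal Python sketch evaluates the minimum stabilizing spin and the corresponding precession rate, using the measured moments of inertia quoted in the Conclusions; the drag force $f_d$, the offset $\ell$ and the angle $\theta$ are not specified here, so the values below are merely illustrative placeholders.
\begin{verbatim}
import math

# Measured moments of inertia (Brody, quoted in the Conclusions) [kg m^2]
I1 = 0.00321     # transverse axis
I3 = 0.00194     # long (e_3) axis

# Assumed, illustrative aerodynamic inputs -- NOT values from the text:
f_d = 1.0                     # drag force [N], placeholder
ell = 0.05                    # CP-COM offset [m], placeholder
theta = math.radians(30.0)    # inclination of e_3 to the drag force, placeholder

tau = f_d * ell * math.sin(theta)             # aerodynamic torque
omega3 = math.sqrt(4.0 * tau * I1) / I3       # spin required for gyroscopic stability
phi_dot = f_d * ell / (I3 * omega3)           # precession rate at that spin

print(f"required spin  omega_3 ~ {omega3:.1f} rad/s "
      f"({omega3/(2*math.pi):.1f} rev/s)")
print(f"precession rate phi_dot ~ {phi_dot:.1f} rad/s")
\end{verbatim}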
\noindent
For low speeds of the ball, when the aerodynamic drag is small,
there can still be a precession of the football due to an imperfect throw. That is,
if the ball is given angular velocity about more than one of its principal axes, it will precess in flight even with no torque.
The ball gets most of its spin along the long $e_3$ axis. However, because the ball is held at one end,
a small spin is imparted about the transverse axis. A slight upward tilt of the ball on release
is almost unavoidable
because the fingers are pulling down on the rear of the ball to produce the spin. Thus, there is an
initial moment about the $e_1$ axis which will tend to pitch the football. This non--zero transverse spin
will result in torque--free precession, or wobble.
\begin{figure}
\includegraphics[width=6.0 in]{Figure4.eps}\\
\caption{The surface area the football presents
at different inclination angles in flight. Fig. 4(a) shows the maximum surface area $ \pi ab$ presented vertically
and the minimum surface area $ \pi b^2 $ presented horizontally. Fig. 4(b) has an inclination of $\alpha$, and the
surface areas this football presents to the vertical and horizontal have changed.}\label{Fig. 4.}
\end{figure}
\noindent The aerodynamic drag forces are linearly dependent on
the cross-sectional area $A$ that the football presents in the
direction of motion. Figure 4(a) shows a football which
is perfectly horizontal, i.e. with zero inclination angle. The
vertical surface area 1 it presents has the maximum area $\pi a b $,
and the horizontal surface area 2 it presents has the minimum area $
\pi b^2$. Figure 4(b) shows a football at an angle of inclination
$\alpha$. The surface areas have now changed. The vertical surface
area 1 has become $ \pi b ( a \cos \alpha + b \sin \alpha )$ and
the horizontal surface has an area $\pi b ( a \sin \alpha + b \cos
\alpha)$. The velocity of the football can easily be resolved
into vertical and horizontal components, and thus the
aerodynamic drag $f_d$ can also be written in terms of vertical
and horizontal components for each angle $\alpha$. The equations
of motion are tedious to write out, but a computer code can easily
be constructed to plot the football position and orientation in
flight. It is recommended that the parabola of revolution moments of
inertia be used, as these most accurately fit the football.
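\noindent
A minimal Python sketch of the geometric part of such a code (the projected areas of Fig. 4 and the corresponding split of the drag into vertical and horizontal components) is given below; the quadratic-drag form, the drag coefficient and the air density are standard assumptions with placeholder values, not quantities taken from this work.
\begin{verbatim}
import math

a, b = 0.141, 0.086      # semi-axes of the football [m] (values used in the text)
rho_air = 1.2            # air density [kg/m^3], placeholder
Cd = 0.15                # drag coefficient, illustrative placeholder

def projected_areas(alpha):
    """Areas presented vertically (1) and horizontally (2) at inclination alpha,
    as in Fig. 4."""
    A_vert = math.pi * b * (a * math.cos(alpha) + b * math.sin(alpha))
    A_horiz = math.pi * b * (a * math.sin(alpha) + b * math.cos(alpha))
    return A_vert, A_horiz

def drag_components(vx, vy, alpha):
    """Assumed quadratic-drag estimate of the horizontal and vertical drag forces."""
    A_vert, A_horiz = projected_areas(alpha)
    fx = -0.5 * rho_air * Cd * A_horiz * vx * abs(vx)  # opposes horizontal velocity
    fy = -0.5 * rho_air * Cd * A_vert * vy * abs(vy)   # opposes vertical velocity
    return fx, fy

# Example: a ball inclined at 20 degrees moving at (18, 6) m/s
print(projected_areas(math.radians(20.0)))
print(drag_components(18.0, 6.0, math.radians(20.0)))
\end{verbatim}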
\section{Conclusions}
\noindent
It appears that footballs have something in common with natural phenomena, in that
they tend to follow the path of least resistance (air resistance) and the motion tends to
minimize energy. Also, when in doubt about projectile motion, ask a rocket scientist!
\noindent
The experimental values of the moments for the football, as determined
by Brody \cite{brody} using a torsion pendulum and measuring periods of oscillation, are
$I_1 = 0.00321$ kg\,m$^2$ and $I_3 = 0.00194$ kg\,m$^2$, with the ratio $I_3 / I_1 = 0.604 $.
Drag forces on a football have been measured in a wind tunnel by Watts and Moore \cite{watts}
and independently by Rae and Streit \cite{rae2}.
\noindent
For the prolate spheroid shell football we obtained the following moments of inertia
(these results were checked numerically on Mathematica 5.2):
using $a=0.141$ m (or $5.5$ in), $b=0.086$ m (or $3.4$ in) and $M =0.411$ kg,
we find $I_1 = 0.003428$ kg\,m$^2$ and $I_3 = 0.002138$ kg\,m$^2$.
When we use the same parameters in our exact formulae (see Appendix A)
we find exactly the same numbers, so we are
confident that the above results are correct. \\
\noindent
For the parabola of revolution, we get $I_1 = 0.002829 $ kg$m^2$ and $I_3 = 0.001982$ kg$m^2$
(see Appendix B for details).
We suggest that the moment of inertia $I_1$ is slightly lower than the experimental value
because of extra leather at the ends, due to the seams, which we have not taken into account.
This is caused by the four leather panels being sewn together, there being a little
excess material at the ends, making the football slightly heavier at the ends than we have accounted for.
If we add a small mass to either end of the football, this would account for the very small
increase in the experimentally found value. The increase in moment of inertia required
(experimental value $-$ our value) is
$\Delta I_1 = 0.000381$ kg\,m$^2$, which could be accounted for by a small mass of leather $m_0/2$ at
either end of the ball, where $\Delta I_1 = m_0 a^2$ and $a$ is the semi-major axis, $0.141$ m. Hence,
$m_0 = 19.164 $ g (grams), which implies $m_0/2 = 9.582$ grams of excess leather at each end of the ball.
This is a very small amount of leather! We believe this is a more accurate description of the
football than the prolate spheroid shell or the solid ellipsoid.\\
\noindent
Furthermore, the solid ellipsoid gives quite different moments.
For the solid, $I_1 = (1/5)m(a^2 + b^2) = 0.002242$ kg\,m$^2$, for the same $a,b$ as above,
and $ I_3 = (2/5) m b^2 = 0.001216$ kg\,m$^2$.\\
\section{Acknowledgments}
\noindent
We would like to thank Dr. M. Khakoo of CSU Fullerton for telling
us about the ``football'' challenge set by Dr. Timothy Gay at the
37th meeting of the Division of Atomic, Molecular and Optical Physics (DAMOP)
in Knoxville, Tennessee, May 16--20th 2006. Mr. Horn completed this project as part
of the requirements of a Masters degree in Physics at CSU Fullerton under the
supervision of Prof. H. Fearn.
\section{Appendix A}
Derivation of the principal moments of inertia of a prolate spheroidal shell (hollow structure).
The football is roughly the shape of a prolate spheroid, which is an ellipsoid of revolution whose
two equal semi-axes are shorter than the third. The equation for the prolate spheroid is
\begin{equation}
\frac{x^2}{b^2} +\frac{y^2}{b^2} +\frac{z^2}{a^2}=1 \label{elli}
\end{equation}
where $a$ is the semi-major axis aligned along the length of the football. This will be the spin axis.
$b$ is the semi-minor axis in both the $x$ and $y$ directions. We assume $a > b$. In fact, for
an official size football we will take $a=5.5$ inches and $b= 3.4$ inches; this will be useful for
later numerical examples.
It is appropriate to introduce prolate spheroidal coordinates \cite{schaum},
to calculate the moments of inertia.
\begin{eqnarray}
x &=& \alpha \sinh \varepsilon \sin \theta \cos \phi \nonumber \\
y &=& \alpha \sinh \varepsilon \sin \theta \sin \phi \nonumber \\
z &=& \alpha \cosh \varepsilon \cos \theta
\end{eqnarray}
The semi-major and semi-minor axes are introduced by the substitution
\begin{eqnarray}
a &=& \alpha \cosh \varepsilon \nonumber \\
b &=& \alpha \sinh \varepsilon
\end{eqnarray}
This will then reproduce Eq. \ref{elli} above. Hence we use,
\begin{eqnarray}
x &=& b \sin \theta \cos \phi \nonumber \\
y &=& b \sin \theta \sin \phi \nonumber \\
z &=& a \cos \theta
\end{eqnarray}
We also require the surface area of the ellipsoid. This can be calculated once we have the
area element $dA$ equivalent to $ r^2 d\Omega$ in spherical polar coordinates.
In the prolate spheroidal coordinate system we find,
\begin{equation}
d A = h_{\theta} \, h_{\phi}\, d\theta\, d\phi =
( a^2 \sin^2 \theta + b^2 \cos^2 \theta )^{1/2} b \sin \theta \, d\theta \, d\phi \;\; ,
\end{equation}
where the usual $h_k$ terms are defined by the length element squared,
\begin{equation}
ds^2 = h_{\varepsilon}^2 \, {d\varepsilon}^2 + h_{\theta}^2 \,{d\theta}^2 + h_{\phi}^2 \, {d\phi}^2 \;\; .
\end{equation}
Now we can easily integrate the area element over all angles, $ 0 \le \theta \le \pi$ and $0 \le \phi < 2 \pi$.
We will need this surface area for the moments of inertia later on.
The surface area of the ellipsoid is
\begin{eqnarray}
\mbox{ Area } &=& \int_0^{2 \pi} d\phi \int_0^{\pi}
( a^2 \sin^2 \theta + b^2 \cos^2 \theta )^{1/2} b \sin \theta \; d\theta \nonumber \\
&=& 2 \pi a b \int_{-1}^{1} ( 1 - e^2 x^2 )^{1/2} dx \nonumber \\
&=& \frac{4 \pi a b}{e} \int_0^e (1 - z^2 )^{1/2} dz \nonumber \\
&=& \frac{4 \pi a b}{e} \int_0^{\; \sin^{-1} e} \cos^2 \theta \; d\theta \nonumber \\
\Rightarrow \mbox{ Area} &=& 2 \pi b \left( \frac{a \sin^{-1} e}{e} + b \right)
\label{area}
\end{eqnarray}
where in the first step we set $ x= \cos \theta$, then $z = ex$, and then $z= \sin \theta$.
We used the double angle formula $\sin 2 \theta = 2 \sin \theta \cos \theta $ and from tables \cite{tables}
we have that $ \sin^{-1} x = \cos^{-1} \sqrt{ 1 -x^2} $ so that
$ \cos ( \sin^{-1} e) = b/a$ where $e = \sqrt{ 1-b^2/a^2 }$.
At this point the derivation of the principal moments of inertia is reasonably straight forward, although
a little messy. We introduce the surface mass density $\rho = M/{\mbox{Area}}$, where the Area
is that given by Eq. \ref{area}, and define the following
principal moments;
\begin{eqnarray}
I_1 = I_2 &=& \rho \int \int ( x^2 + z^2 ) dA \nonumber \\
I_3 &=& \rho \int \int ( x^2 + y^2 ) dA
\end{eqnarray}
where $ I_1 = I_{xx}\; $, $I_2 = I_{yy}$ and $I_3 = I_{zz}$ and
$dA = ( a^2 \sin^2 \theta + b^2 \cos^2 \theta )^{1/2} b \sin \theta \, d\theta \,d\phi $.
To save space here we give only one derivation, the other being very similar.
\begin{eqnarray}
I_3 &=& \rho \int \int ( x^2 + y^2 ) dA \nonumber \\
&=& \rho \int_0^{2 \pi} d\phi \int_0^{\pi} b^3 \sin^3 \theta
( a^2 \sin^2 \theta + b^2 \cos^2 \theta )^{1/2} \; d\theta \nonumber \\
&=& 4 \pi a b^3 \rho \int_0^{\pi/2} \sin^3 \theta ( 1 - e^2 \cos^2 \theta )^{1/2} \; d\theta \nonumber \\
&=& 4 \pi a b^3 \rho \int_0^1 (1-x^2 ) ( 1 - e^2 x^2 )^{1/2} dx \nonumber \\
&=& 4 \pi a b^3 \rho \int_0^e \left( 1-\frac{z^2}{e^2} \right)\left(1-z^2 \right)^{1/2}
\frac{dz}{e} \nonumber \\
&=& \frac{ 4 \pi a b^3 \rho}{e} \int_0^{ \sin^{-1} e}
\left( \cos^2 \theta - \frac{ \sin^2 2 \theta }{4e^2} \right) \; d\theta \nonumber \\
&=& \frac{ 4 \pi a b^3 \rho}{e} \left[ \frac{1}{2} \sin^{-1}e + \frac{e}{2} \frac{b}{a}
- \frac{1}{8 e^2} \sin^{-1} e + \frac{1}{8e} \frac{b}{a} \left( \frac{b^2}{a^2} - e^2 \right) \right]
\label{i3}
\end{eqnarray}
After substituting for $\rho$ and some algebra we find;
\begin{equation}
I_3 = m b^2 \left[ \left( 1 - \frac{1}{4 e^2} \right) +
\frac{ \frac{b^2}{a^2} \frac{b}{2 e^2}}{ \left( \frac{a \sin^{-1} e}{e} + b \right)} \right]
\end{equation}
It should be noted that
\begin{eqnarray}
\left( 1 - \frac{1}{4 e^2} \right)&=& \frac{1}{4} \left( \frac{ 3 a^2 -4b^2}{a^2 -b^2} \right) \nonumber \\
\rho &=& \frac{ M}{2 \pi b \left( \frac{a \sin^{-1}e}{e} + b \right) } \;\; .
\end{eqnarray}
As an interesting aside, one could also calculate the $I_3$ moment using rings and then
integrating from $-a \le z \le a$. This is possible because one can set $x^2 + y^2 = r^2 $ and
then from the equation of a prolate ellipsoid, Eq.(\ref{elli}), arrive at an equation for $r(z)$, $r'$
and the width of a ring $ds$ as
\begin{eqnarray}
r(z) &=& b \left( 1 - \frac{z^2}{a^2} \right)^{1/2} \nonumber \\
r'(z) = \frac{dr}{dz} &=& -\frac{ bz/a^2}{ \left( 1 - \frac{z^2}{a^2} \right)^{1/2} } \nonumber \\
ds = \sqrt{ dr^2 + dz^2 } &=& \left[ 1 + \left( \frac{dr}{dz} \right)^2 \right]^{1/2} dz \;\; .
\end{eqnarray}
Therefore, with the mass of the ring as $dm_{\mbox{\small{ring}}}= \rho \, 2 \pi r \, ds $, we have
\begin{eqnarray}
I_3 &=& \int r^2 dm_{\mbox{\small{ring}}} \nonumber \\
&=& 2 \pi \rho \int r^3 ds \nonumber \\
&=& 4 \pi \rho \int_0^a r^3 \left[ 1 + \left( \frac{dr}{dz} \right)^2 \right]^{1/2} \, dz
\nonumber \\
&=& 4 \pi \rho \int_0^a b^3 \left( 1 - \frac{z^2}{a^2} \right)
\left( 1 + z^2 \frac{ (b^2 -a^2 )}{a^4} \right)^{1/2} \, dz
\end{eqnarray}
from which, after setting $z = a \cos \theta$, we arrive at the third line of Eq. (\ref{i3}).\\
\noindent
Finally, we give the result for the principal axes $ I_1 = I_2 $.
\begin{eqnarray}
I_1 &=& \rho \int \int ( x^2 + z^2 ) dA \nonumber \\
&=& \rho \int_0^{2 \pi} d\phi \int_0^{\pi} d\theta \, ( b^2 \sin^2 \theta \cos^2 \phi + a^2 \cos^2 \theta )
(a^2 \sin^2 \theta + b^2 \cos^2 \theta )^{1/2} \; b \sin \theta \nonumber \\
&=& 2 \pi a b \rho \int_0^{\pi/2 } \, d\theta
\left[ b^2 \sin^3 \theta +
2 a^2 \sin \theta \cos^2 \theta \right] \left( 1 - e^2 \cos^2 \theta \right)^{1/2}
\end{eqnarray}
After integration, substituting for $\rho$ and some algebra we get,
\begin{equation}
I_1 = \frac{1}{2} m b^2 \left[ \left( 1 - \frac{1}{4e^2} + \frac{ a^2}{2 e^2 b^2} \right) +
\frac{\frac{b}{e^2}\left( \frac{b^2}{2a^2} -1 \right)}{ \left( \frac{a \sin^{-1} e}{e} +
b \right)} \right] \;\; .
\end{equation}
These results were checked numerically on Mathematica 5.2. For the prolate spheroid shell,
using $a=0.141$ m (or $5.5$ in), $b=0.086$ m (or $3.4$ in) and $M =0.411$ kg,
the numerical answers were $I_1 = 0.003428$ kg\,m$^2$ and $I_3 = 0.002138$ kg\,m$^2$.
When we use the same parameters in our exact formulae above we find exactly the same numbers, so we are
confident that the above results are correct. Mathematica did not give nicely simplified answers,
so we did not attempt to use its non-numerical output.\\
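\noindent
The same kind of check can be scripted outside Mathematica; a minimal Python sketch comparing the closed-form expressions above with a direct numerical quadrature of the defining surface integrals is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a, b, M = 0.141, 0.086, 0.411            # semi-axes [m] and mass [kg] used above
e = np.sqrt(1.0 - b**2 / a**2)           # eccentricity
area = 2 * np.pi * b * (a * np.arcsin(e) / e + b)
rho = M / area                           # surface mass density

# Direct quadrature of the defining surface integrals,
# dA = sqrt(a^2 sin^2 t + b^2 cos^2 t) b sin t dt dphi
dA = lambda t: np.sqrt(a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) * b * np.sin(t)
I3_num = rho * 2 * np.pi * quad(lambda t: b**2 * np.sin(t)**2 * dA(t), 0, np.pi)[0]
I1_num = rho * np.pi * quad(
    lambda t: (b**2 * np.sin(t)**2 + 2 * a**2 * np.cos(t)**2) * dA(t), 0, np.pi)[0]

# Closed-form expressions derived above
denom = a * np.arcsin(e) / e + b
I3_exact = M * b**2 * ((1 - 1/(4*e**2)) + (b**2/a**2) * (b/(2*e**2)) / denom)
I1_exact = 0.5 * M * b**2 * ((1 - 1/(4*e**2) + a**2/(2*e**2*b**2))
                             + (b/e**2) * (b**2/(2*a**2) - 1) / denom)

print(I3_num, I3_exact)   # the two evaluations should agree
print(I1_num, I1_exact)
\end{verbatim}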
\section{Appendix B}
Moments of inertia for a parabola of revolution. We also show photographs of an American football and a
Rugby ball superposed onto graph paper and curve fit using Mathematica to show how well these
respective game balls fit to a parabola of revolution and an ellipse.
It is quite clear that the American football fits the parabola of revolution
much more precisely than an ellipse. The Rugby ball is a closer fit to the ellipse shape.\\
\noindent
Consider Figure 5, the parabola of revolution. We will show how to calculate the surface area
and the moments of inertia along the $z$ ($e_3$) and $x$ ($e_1$)
axes, corresponding to $I_3$ and $I_1$ respectively.
\begin{figure}
\includegraphics[width=4.0in]{Figure5.eps}\\
\caption{Parabola of revolution, with equation $r(z) = b - z^2 /(4a)$;
the length of the football is $2x$, where $x=2 \sqrt{ab}$.}
\label{Figure 5.}
\end{figure}
\noindent
To calculate the moments we must first determine the surface area of the parabola of revolution.
The surface area is calculated by the simple integral $ A = \int 2 \pi r ds $ where
$ds = ( 1 + r'^2 )^{1/2} dz $ and $r' = dr/dz $. We define the semi--major axis here to be
$x = 2 \sqrt{ab}$ which will simplify the integrations considerably.
Using the parabolic equation $ r(z) = b - z^2/(4a)$
we find that the surface area of revolution is given by,
\begin{eqnarray}
A &=& \int 2 \pi r(z) ds \nonumber \\
&=& 2 \pi \int_{-x}^{x} \; r [ 1 + r'^2 ]^{1/2} \; dz \nonumber \\
&=& 2 \pi b^2 \int_{-x}^{x} \; \left( 1 - \frac{z^2}{x^2} \right)
\left( \frac{1}{b^2} + \frac{4 z^2}{x^4} \right)^{1/2} \; dz
\end{eqnarray}
This can easily be evaluated with Mathematica, and the result for $x = 0.141$ m and $b = 0.086$ m
is found to be
$A = 0.114938$ m$^2$. The calculation can be done by hand, but it is very long winded and tedious.
We have not written out the full expression because it
does not lead to any great insight.\\
\noindent
The moment of inertia about the $e_3$, or long, axis is found most easily by summing over rings.
Taking the area of a ring to be $ 2 \pi r\, ds $, the mass of a ring
is $dm_{\mbox{\tiny ring}} = 2 \pi \rho\, r\, ds $,
where $\rho = M/A$, $M = 0.411$ kg is the total mass of the football, and $A$ is the surface area
given above.
\begin{eqnarray}
I_3 &=& \int r^2 dm \nonumber \\
&=& 2 \pi \rho \int r^3 ds \nonumber \\
&=& 2 \pi \rho b^4 \int_{-x}^{x} \; \left( 1 - \frac{z^2}{x^2} \right)^3
\left( \frac{1}{b^2} + \frac{4 z^2}{x^4} \right)^{1/2} \; dz
\end{eqnarray}
\noindent
Making substitutions of the form $z= i x^2 \sin \theta /(2b)$ simplifies the square root term
and may allow this integral, and the $I_1$ integral below, to be solved by hand,
but we would not recommend it. Mathematica again comes to the rescue, and we find a value of
$I_3 = 0.001982 $ kg\,m$^2$.\\
\noindent
There are two ways to proceed with the moment of inertia about the $e_1$ or $x$ axis. You can chop the
football into rings again and use the parallel axis theorem. Or you can directly integrate over
the surface using small elemental areas. We show the small area method below. Consider a small
area of the surface and take the mass to be $dm = \rho dA$.
Then the contribution of this small area
to the moment about $e_1$ is given by $dI_1 = \rho ( y^2 + z^2) dA$. We have taken the vertical
(or $e_2$) axis to be $y$ here. Convert to polar coordinates, using $ x = r \cos \theta$ and
$y = r \sin \theta $. Note that $x^2 + y^2 = r^2$ since there is a circular cross--section.
In the xy direction we may change to polar coordinates, $r d\theta$. In the z-direction we must use
the length $ds$ for accuracy. Therefore, an element of the surface has an
area $dA = r d\theta \; ds$ where $ds$ is defined above.
\begin{eqnarray}
I_1 &=& 2 \rho \int_0^{2 \pi} \; d\theta \; \int_0^x \; ds \; r ( y^2 + z^2 ) \nonumber \\
&=& 2 \rho \int_{0}^{x} \; \int_0^{2 \pi} \; b \left( 1 - \frac{z^2}{x^2} \right)
\left( 1 + \frac{4 b^2 z^2}{x^4} \right)^{1/2}
\left[ b^2 \left( 1 - \frac{z^2}{x^2} \right)^2 \sin^2 \theta + z^2 \right] d\theta \; dz \nonumber \\
&=& 2 \pi \rho \int_0^x \; b \left( 1 - \frac{z^2}{x^2} \right)
\left( 1 + \frac{4 b^2 z^2}{x^4} \right)^{1/2}
\left[ b^2 \left( 1 - \frac{z^2}{x^2} \right)^2 + 2 z^2 \right] \; dz
\end{eqnarray}
\noindent
For the parabola of revolution, we get $I_1 = 0.002829$ kg\,m$^2$.
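\noindent
The three quadratures of this appendix are easily reproduced; a minimal Python sketch evaluating $A$, $I_3$ and $I_1$ for the parameters used above is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

x, b, M = 0.141, 0.086, 0.411     # half-length, maximal radius [m], mass [kg]

r  = lambda z: b * (1.0 - z**2 / x**2)                   # r(z) = b - z^2/(4a), x = 2 sqrt(ab)
ds = lambda z: np.sqrt(1.0 + 4.0 * b**2 * z**2 / x**4)   # arc-length element ds/dz

A   = 2 * np.pi * quad(lambda z: r(z) * ds(z), -x, x)[0]
rho = M / A                                              # surface mass density

I3 = 2 * np.pi * rho * quad(lambda z: r(z)**3 * ds(z), -x, x)[0]
I1 = 2 * np.pi * rho * quad(lambda z: r(z) * (r(z)**2 + 2 * z**2) * ds(z), 0, x)[0]

print(A)    # ~0.1149 m^2, as quoted above
print(I3)   # ~0.00198 kg m^2
print(I1)   # ~0.00283 kg m^2
\end{verbatim}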
\begin{figure}
\includegraphics[width=3.5in]{fig6.eps}\\
\caption{American Football Photo and curve fit. Plot(a) shows the photograph of
the football with the outline of the curve fitting to show how well they match.
Plot(b) shows the curve fit from Mathematica alone with the points taken from the
original photograph of the football.}\label{Fig6}
\end{figure}
\noindent
To clarify our point, we show photographs of both an American football (pro NFL game ball) above
and a Rugby ball below. The photographs were taken with both balls on top of graph paper.
We used the outer edge of each photograph of the ball to get points to plot and curve fit with
Mathematica 5.2. The results are shown in figures 6 and 7.
\begin{figure}
\includegraphics[width=3.5in]{fig7.eps}\\
\caption{Rugby ball Photo and curve fit. Plot(a) shows the photograph of
the rugby ball with the outline of the curve fitting to show how well they match.
Plot(b) shows the curve fit from Mathematica alone with the points taken from the
original photograph of the rugby ball.} \label{Fig7}
\end{figure}
\noindent
From figures 6 and 7 we see that the American football closely fits the shape of a parabola of revolution.
The rugby ball more closely fits the shape of an ellipsoid.
\newpage
\section{Introduction}
Billiard systems are very important model systems in classical and quantum chaos.
One of the most studied billiards is
the stadium billiard introduced by L. Bunimovich, which was proven to be
rigorously ergodic and mixing \cite{Bun1979}. It is also a K-system, as its maximal Lyapunov exponent is positive.
In Fig. \ref{figlr1} we show and define the geometry and our notation of the stadium.
\begin{figure}
\centering
\includegraphics{Sketch}
\caption{The geometry and notation of the stadium billiard of Bunimovich.}
\label{figlr1}
\end{figure}
The radius of the two half circles is unity, while the length of the straight line is ${\varepsilon}$. By
$\alpha$ we denote the angle of incidence, which is equal to the angle of reflection at the collision
point. The phase space is defined by the
Poincar\'e-Birkhoff coordinates $(s,p)$, where $s$ is the arclength parameter
defined counterclockwise from $s=0$ to ${\cal L}=2\pi+2{\varepsilon}$ (${\cal L}$ is the length of the boundary),
and the canonically conjugate momentum is $p=\sin \alpha$. As $s=0$ and $s={\cal L}$ are identified, we have a phase cylinder with the borders $p=\pm 1$.
We assume that the billiard particle has unit speed.
The discrete bounce map of the billiard $\Phi$, connecting two successive collisions,
$\Phi: (s,p) \rightarrow (s',p')$, is area preserving (see e.g. \cite{Ber1981}).
For the circle billiard ${\varepsilon}=0$ the momentum $p$,
which is also the angular momentum, is a conserved quantity, while for small ${\varepsilon}>0$ we observe
slow chaotic diffusion in the momentum space $p$. The maximal
Lyapunov exponent is positive for
all values of ${\varepsilon}>0$.
The first systematic study of the Lyapunov exponents in some representative chaotic
billiards, including the stadium billiard, was published by Benettin \cite{Ben1984}, who
has shown by
numerical calculations that for small ${\varepsilon}$ the Lyapunov exponent goes as $\propto \sqrt{{\varepsilon}}$.
The diffusion regime of slow spreading of an ensemble of initial conditions in the
momentum space has been observed in Ref. \cite{BCL1996}
and confirmed for ${\varepsilon} \le 0.3$ in the present work.
For larger ${\varepsilon} > 0.3$ the diffusion regime is hardly observable,
as the orbit of any initial conditions quickly spreads over the entire
phase space, already after a few ten collisions.
The characteristic time scale on which transport phenomena occur in classical dynamical systems is termed the {\em classical transport time} $t_T$. This is the typical time that an ensemble of particles needs to explore the available phase space. In classical chaotic billiards the characteristic diffusion time in momentum space is the relevant estimate for $t_T$.
The present work was motivated by the study of chaotic billiards in the context of
quantum chaos\cite{Stoe,Haake}, in order to obtain good estimates of the characteristic
times $t_T$ which must be related/compared to the Heisenberg time $t_H$
for the purpose of assessing
the degree of quantum localization of chaotic eigenstates. The Heisenberg time
is an important time scale in any quantum system with a discrete energy spectrum,
defined as $t_H=(2\pi\hbar)/\Delta E$, where $\Delta E$ is the mean energy level
spacing, i.e. the mean density of states is $\rho(E)=1/\Delta E$. If $t_H/t_T$ is smaller
than $1$, we observe localization, while for values larger than $1$ we see
extended eigenstates and the {\em Principle of uniform semiclassical condensation} (PUSC) of Wigner functions applies. (See Refs. \cite{Rob1998,BR2010} and references therein.)
Therefore a detailed investigation of the diffusion in the stadium billiard is necessary if
the localization of the Wigner functions or Poincar\'e-Husimi functions should be well
understood.
This analysis is precisely along the lines of our recent works
\cite{BR2010,BR2013,BR2013A} for mixed type chaotic billiards, where
the regular and (localized) chaotic eigenstates have been separated and
the localization measure of the chaotic eigenstates has been introduced
and studied. It was shown that the spectral statistics is uniquely
determined by the degree of localization, which in turn is expected to
be a unique function of the parameter $t_H/t_T$. This kind of analysis
has been performed also for the quantum kicked rotator
\cite{Cas1979,Chi1981,Chi1988,Izr1990} by Chirikov, Casati,
Izrailev, Shepelyansky, Guarneri, and further developed by many others.
It was mainly Izrailev who has studied the relation between the
spectral fluctuation properties of the quasienergies (eigenphases)
of the quantum kicked rotator and the localization properties
\cite{Izr1988,Izr1989,Izr1990}. This picture has been recently extended
in \cite{BatManRob2013,ManRob2013,ManRob2014,ManRob2015}
and is typical for chaotic time-periodic
(Floquet) systems. Similar analysis in the case of the stadium as a time
independent system is in progress \cite{BLR2017}.
The study of diffusion in the stadium goes back to the early works of
Casati and coworkers \cite{BCL1996,CP1999}. An excellent review of
classical and quantum chaotic billiards was published by Prosen \cite{Pro2000}
with special interest in the quantum localization. Some of the analytic results
about the diffusion constant have been obtained by the study of an approximate
map \cite{BCL1996}, or of more general periodic Hamiltonian maps
\cite{DMP1989}, along with the special case of the sawtooth map \cite{CDMMP1990}.
However, quite often
not only the pointwise orbits but even the statistical properties of
conservative dynamical systems exhibit extremely sensitive dependence
on the control parameters and on initial conditions, as exemplified
e.g. in the standard map by Meiss \cite{Mei1994}.
The aforementioned studies used approximations of the stadium dynamics
in order to obtain analytical results.
Here we want to perform exact analysis of diffusion in the stadium billiard,
which unavoidably must rest upon the numerical calculations
of the exact stadium dynamics. Fortunately, the simple geometry of the stadium enables us to calculate the dynamics using analytical formulas subject only to round-off errors.
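To indicate what such an exact computation involves, a minimal Python sketch of the bounce map (flat walls $y=\pm 1$ of length ${\varepsilon}$ joined by unit-radius semicircular caps, with specular reflection) is given below; it is a bare-bones illustration only, without the ensemble bookkeeping and statistics used for the actual computations, and with no special handling of grazing collisions.
\begin{verbatim}
import numpy as np

eps = 0.1        # shape parameter: length of the straight segments
TOL = 1e-9

# Boundary: flat walls y = +/-1 for |x| <= eps/2, unit semicircles centred at (+/-eps/2, 0).

def bounce(r, v):
    """One iteration of the bounce map: from boundary point r with unit velocity v,
    return the next collision point, the reflected velocity and the outward normal."""
    hits = []
    # flat walls
    for y_w in (1.0, -1.0):
        if abs(v[1]) > TOL:
            t = (y_w - r[1]) / v[1]
            x_hit = r[0] + t * v[0]
            if t > TOL and abs(x_hit) <= eps / 2:
                hits.append((t, np.array([x_hit, y_w]), np.array([0.0, y_w])))
    # circular caps
    for xc, sign in ((eps / 2, 1.0), (-eps / 2, -1.0)):
        d = r - np.array([xc, 0.0])
        bq = d @ v
        cq = d @ d - 1.0
        disc = bq * bq - cq
        if disc > 0.0:
            for t in (-bq - np.sqrt(disc), -bq + np.sqrt(disc)):
                hit = r + t * v
                if t > TOL and sign * (hit[0] - xc) >= -TOL:
                    n = hit - np.array([xc, 0.0])      # outward unit normal
                    hits.append((t, hit, n))
    t, hit, n = min(hits, key=lambda h: h[0])
    v_out = v - 2.0 * (v @ n) * n                      # specular reflection
    return hit, v_out, n

def momentum(v_out, n):
    """Birkhoff momentum p = sin(alpha): projection of the outgoing velocity
    onto the (counterclockwise) unit tangent at the collision point."""
    return v_out @ np.array([-n[1], n[0]])

# Example: follow one orbit started on the right cap with small p, print p occasionally.
theta0 = 0.3
r = np.array([eps / 2 + np.cos(theta0), np.sin(theta0)])
phi = np.pi + theta0 + 0.01             # nearly along the inward normal, so p ~ 0
v = np.array([np.cos(phi), np.sin(phi)])
for k in range(1, 1001):
    r, v, n = bounce(r, v)
    if k % 200 == 0:
        print(k, momentum(v, n))
\end{verbatim}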
Thus far the study of momentum diffusion in the stadium billiard was, to the best of our knowledge, limited to the regime where ensembles are still narrow and far from the border of the phase space at $p=\pm 1$. The momentum diffusion there is normal and homogeneous and no border effects can be observed. In this work we extend this to include the global aspects of diffusion taking into account the finite phase space and the specifics of the billiard dynamics. As the stadium billiard is an archetype of systems with slow ergodicity many of our findings should be applicable to other systems sharing this trait.
The structure of the paper is as follows. In section II we show that the
chaotic diffusion is normal, but inhomogeneous, in section III we do a detailed
analysis of the variance of the distribution function and
explore its dependence on the shape parameter ${\varepsilon}$ and the initial conditions,
in section IV we examine the coarse grained dynamics and compare our
results for the stadium with some preliminary results on a mixed type billiard defined in
\cite{Rob1983}, and in section V we discuss the results and conclude.
\section{Diffusion in the stadium billiard and the mathematical model}
The study of diffusion in the stadium billiard was initiated in \cite{BCL1996}, where
it was shown that for small ${\varepsilon}$ we indeed see normal diffusion in the momentum space
for initial conditions $p=0$ and uniformly distributed on $s$ along the boundary.
For sufficiently short times (number of bounces), so that the spreading is close
to $p=0$, the diffusion constant can be considered as $p$-independent.
Consequently the effects of the boundaries at $p=\pm 1$ are not yet visible.
Moreover, it has been found \cite{CP1999} that the diffusion constant is
indeed a function of the angular momentum, which for small ${\varepsilon}$
coincides with $p$. Therefore we have to deal with inhomogeneous normal diffusion.
Our goal is to elaborate on the details of this picture.
We begin with the diffusion
equation for the normalized probability density $\rho(p,t)$ in the $p$-space
\begin{equation} \label{eq1}
\frac{\partial \rho}{\partial t} = \frac{\partial}{\partial p}
\left( D(p) \frac{\partial \rho}{\partial p} \right).
\end{equation}
The time $t$ here is the continuous time, related to the ``discrete time''
$N$, the number of collisions, by $t=N \overline{l}$, where $\overline{l}$ is the average
distance between two collision points and the speed of the particle is unity.
In agreement with \cite{CP1999} the diffusion constant is assumed in the form
\begin{equation} \label{eq2}
D(p) = D_0({\varepsilon}) (1-p^2),
\end{equation}
where $D_0({\varepsilon})$ is globally an unknown, to be determined, function of the shape
parameter ${\varepsilon}$. Note that in Ref. \cite{CP1999} the dependence of $D_0({\varepsilon})$ on the
angular momentum was studied, while here we consider the dependence
on $p$. It is only known that for sufficiently small ${\varepsilon}$,
smaller than a characteristic value ${\varepsilon}_c \approx 0.1$ determined in the present work,
or sufficiently larger than ${\varepsilon}_c$,
we have the power law $D_0= \gamma {\varepsilon}^{\beta}$, where the exponent $\beta$ is $5/2$ or $2$ correspondingly, while our $\gamma$ is a numerical prefactor, $\gamma \approx 0.13$ and $0.029$, respectively,
also to be analyzed later on.
In the transition region, ${\varepsilon}\approx {\varepsilon}_c \approx 0.1$, we have no theoretical predictions
and also the numerical calculations are not known or well established so far.
As we see in Eq. (\ref{eq1}), the diffusion constant $D$ is defined in
such a way that the probability current density $j$ is proportional to $D$ and
the negative gradient of $\rho$, that is $j=-D\;\partial\rho/\partial p$.
Thus, the diffusion equation (\ref{eq1}) is just the continuity equation
for the probability (or number of diffusing particles),
as there are no sources or sinks.
For $p\approx 0$ and at fixed ${\varepsilon}$ we can regard $D$ as locally constant
$D\approx D_0$. However,
for larger $|p|$ we must take into account the dependence of $D$ on $p$.
Due to the symmetry, the lowest correction term in a power expansion in $p$
is the quadratic one, with proportionality coefficient $\nu$.
For larger $|p|$, close to $1$, the
diffusion constant should vanish, so that near the border of the
phase space cylinder $p=\pm 1$ there is no diffusion at all.
These arguments lead to the assumption
(\ref{eq2}), which will be a posteriori justified as correct in our detailed
empirical model.
Let us first consider the case of locally constant $D$ at $p$ around $p_0$,
without the boundary conditions, i.e. the free diffusion on the real line $p$.
Assuming initial conditions in the form of a Dirac delta distribution
$\rho(p,t=0)=\delta(p-p_0)$ peaked at $p=p_0$, we recover the well known
Green function
\begin{equation} \label{eq3}
\rho(p,t) = \frac{1}{2\sqrt{\pi D t}}
\exp\left( -\frac{(p-p_0)^2}{4D t}\right)
\end{equation}
according to which the variance ${\rm Var}(p)$ is equal to
\begin{equation} \label{eq4}
\langle (p-p_0)^2 \rangle = \int_{-\infty}^{\infty} \rho(p,t) (p-p_0)^2 dp
= 2 D t.
\end{equation}
%
This model is a good description for small ${\varepsilon}$ and short times $t$.
However, for times comparable with the transport time
the boundary conditions must be taken into account.
There we assume that the {\em currents} on the boundaries $p=\pm 1$
must be zero,
i.e. $\partial \rho/\partial p=0$, so that the total probability in the
momentum space is conserved and equal to unity.
This will ultimately lead to the
asymptotic equilibrium distribution $\rho(p,t) = 1/2$, with the variance
${\rm Var}(p)=1/3$. The solution of the diffusion equation with these boundary conditions
reads \cite{Pol2002}
\begin{eqnarray} \label{eq5}
\rho(p,t) = \frac{1}{2} + \sum_{m=1}^{\infty} A_m \cos (\frac{m\pi}{2}(p+1))
\\ \nonumber
\times \;\exp\left( - \frac{D m^2\pi^2 t}{4} \right).
\end{eqnarray}
Thus, the approach to the equilibrium $\rho=1/2$ is always exponential. We will refer to this as the {\em homogeneous normal diffusion} model.
In the case of a delta function initial condition $\rho(p,t=0)=\delta(p-p_0)$,
we find by a standard technique $A_m=\cos (\frac{m\pi}{2}(p_0+1))$.
In the special case $p_0=0$ we have
\begin{equation} \label{eq6}
\rho(p,t) = \frac{1}{2}\sum_{-\infty}^{\infty} \cos (m\pi p) \exp(-m^2\pi^2Dt),
\end{equation}
with the variance
\begin{equation} \label{eq7}
{\rm Var}(p) = \langle p^2 \rangle = \frac{1}{3} + 4 \sum_{m=1}^{\infty}
\frac{(-1)^m}{m^2\pi^2} \exp(-m^2\pi^2D t),
\end{equation}
which approaches exponentially the equilibrium value ${\rm Var}(p)=1/3$ at large time $t$
\begin{equation} \label{eq8}
{\rm Var}(p) \approx \frac{1}{3} - \frac{4}{\pi^2} \exp( -\pi^2Dt),
\end{equation}
where the higher exponential terms $m>1$ have been neglected.
Next we want to understand the behavior of the diffusion when the full
general expression for the $p$-dependent diffusion constant $D$, defined
in (\ref{eq2}), is taken into account.
We find the solution (see also \cite{LL2007})
in terms of the Legendre polynomials $P_l(p)$ as follows
\begin{equation} \label{eq9}
\rho(p,t) = \sum_{l=0}^{\infty} A_l P_l(p) \exp (-l(l+1) D_0t),
\end{equation}
where the expansion coefficients $A_l$ expressed by the initial
conditions at time $t=t_0$ are
\begin{equation} \label{eq10}
A_l = \frac{2l+1}{2} \int_{-1}^{1} P_l(p) \rho(p,t=t_0) dp.
\end{equation}
It can be readily verified that the solution (\ref{eq9}) satisfies the
diffusion equation (\ref{eq1}) with $D=D_0(1-p^2)$ as in (\ref{eq2}). It
also satisfies the boundary conditions of vanishing currents at $p=\pm 1$,
since $D=0$ there. Because the
set of all Legendre polynomials is a complete basis set of functions on the
interval $-1 \le p \le 1$, an arbitrary initial condition may be
satisfied. Therefore (\ref{eq9}) is the general solution.
From (\ref{eq9}) we also see that $\rho(p,t)$
approaches its limiting value $A_0$ exponentially, and moreover, in (\ref{eq10})
that for any
normalized initial condition we have $A_0=1/2$. We will refer to this as the {\em inhomogeneous normal diffusion} model.
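The series (\ref{eq9})-(\ref{eq10}) is also straightforward to evaluate numerically; a minimal Python sketch for a delta-function initial condition at $p_0$, with an illustrative value of $D_0$ appropriate for ${\varepsilon}=0.1$ (cf. Eq. (\ref{eq15}) and Table 1 below), reads:
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def rho(p, t, p0, D0, lmax=100):
    """Eq. (9) with coefficients (10) for the initial condition rho(p,0)=delta(p-p0)."""
    ls = np.arange(lmax + 1)
    A = 0.5 * (2 * ls + 1) * eval_legendre(ls, p0)        # Eq. (10)
    decay = np.exp(-ls * (ls + 1) * D0 * t)
    P = eval_legendre(ls[:, None], p[None, :])            # table of P_l(p)
    return (A[:, None] * decay[:, None] * P).sum(axis=0)

eps = 0.1
lbar = np.pi * (np.pi + 2 * eps) / (2 * np.pi + 2 * eps)  # Eq. (16)
D0 = 3.0e-4        # illustrative value for eps ~ 0.1 (cf. Eq. (15) and Table 1 below)
p = np.linspace(-1.0, 1.0, 9)
for N in (100, 300, 700):                                 # numbers of collisions, as in Fig. 3
    print(N, np.round(rho(p, N * lbar, 0.0, D0), 3))
\end{verbatim}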
For a general diffusion constant $D(p)$ that is an even function of $p$,
which in our case is due to the physical $p$-inversion
symmetry in the phase space, we can derive a general equation for the
moments and variance of $p$. Starting from Eq. (\ref{eq1}) and using the
boundary conditions we first show that the total probability is conserved.
Second, for the centered initial condition $p_0=0$, that is
$\rho(p,t=0) = \delta(p)$, we find that the first moment vanishes
$\langle p\rangle=0$, and for the time derivative of the variance we
obtain
\begin{equation} \label{eq11}
\frac{d\langle p^2\rangle}{dt} = -4\rho(1,t)D(1)+2\int_{-1}^{1} \rho(p)
\frac{d(pD(p))}{dp} dp.
\end{equation}
In the special case $D=D_0(1 - \nu p^2)$ we get the differential equation
\begin{equation} \label{eq12}
\frac{d\langle p^2\rangle}{dt} = -4\rho(1,t)D(1) +2D_0(1 - 3\nu
\langle p^2\rangle).
\end{equation}
This is an interesting quite general result. In our system we have
$\nu=1$, therefore $D(1) =0$, and we find for the variance the explicit
result by integration
\begin{equation} \label{eq13}
\langle p^2\rangle = \frac{1}{3} \left(1 - \exp(-6D_0 t)\right).
\end{equation}
Thus, again, the approach to equilibrium value ${\rm Var}(p)=1/3$ is exponential,
with the important {\em classical transport time} $t_T= 1/(6D_0)$.
If this equation is rewritten in terms of the discrete time $N$
(the number of collisions), then $t=N \overline{l}$, and we find
\begin{equation} \label{eq14}
\langle p^2\rangle = \frac{1}{3} \left(1 - \exp(-\frac{N}{N_T})\right),
\end{equation}
where {\em the discrete classical transport time} $N_T$ is now
defined as
\begin{equation} \label{eq15}
N_T = \frac{1}{6D_0\overline{l}}.
\end{equation}
Here $\overline{l}$ is the average distance between two successive collision points.
We also define the discrete diffusion constant as $D_{\rm dis}=D_0 \overline{l}$.
In the case of ergodic motion the mean free path $\overline{l}$
as a function of the billiard area ${\cal A}$ and the length ${\cal L}$ is known to
be \cite{Santalo}
\begin{equation} \label{eq16}
\overline{l} = \frac{\pi {\cal A}}{{\cal L}} = \frac{\pi(\pi+2{\varepsilon})}{2\pi+2{\varepsilon}} \approx \frac{\pi}{2}.
\end{equation}
Thus, by measuring $D_{\rm dis}$ we determine $N_T$, which plays an important role
in quantum chaos when related to the Heisenberg time \cite{BR2013,BR2013A},
$t_H/t_T = 2k/N_T = 2\sqrt{E}/N_T$, as discussed in the introduction.
Here $E=k^2$ is the energy of the billiard particle.
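These relations are simple to script; the following minimal Python sketch estimates $N_T$ from the small-${\varepsilon}$ power law for $D_0$ quoted above together with Eqs. (\ref{eq15}) and (\ref{eq16}) (the directly fitted values are collected in Table 1 below):
\begin{verbatim}
import numpy as np

def mean_free_path(eps):
    """Eq. (16): mean free path between collisions in the stadium."""
    return np.pi * (np.pi + 2 * eps) / (2 * np.pi + 2 * eps)

def N_T_estimate(eps, gamma=0.13, beta=2.5):
    """Eq. (15) with the small-eps power law D_0 = gamma * eps**beta."""
    D0 = gamma * eps**beta
    return 1.0 / (6.0 * D0 * mean_free_path(eps))

for eps in (0.005, 0.01, 0.02):
    print(eps, round(N_T_estimate(eps)))
\end{verbatim}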
It is well known that the bouncing ball modes (the continuous family of
period two periodic orbits), within $s\in(\pi/2,\pi/2+{\varepsilon})$ and
$s\in(3\pi/2+{\varepsilon},3\pi/2+2{\varepsilon})$ and $p=0$ present sticky objects in the
classical phase space, as illustrated in Fig. \ref{figlr2}. If we choose
initial conditions inside these bouncing ball areas, we find a centrally
positioned delta peak which never decays. Moreover, even orbits close
to these bouncing ball areas stay inside for very long times,
because the transition times for exiting (and also entering)
these regions are very large. Such correlations have been studied
in Refs. \cite{VCG1983,AHO2004}.
\begin{figure}
\centering
\includegraphics{Phase_space}
\caption{The phase space of the stadium for ${\varepsilon}=1$, with $10^4$ bounces
along an orbit emanating from $(s=\pi/4,p=0)$,
showing the avoidance of the bouncing ball areas.}
\label{figlr2}
\end{figure}
Therefore in studying the diffusion in the momentum space emanating from
$\rho(p,t=0)=\delta(p-p_0)$ we have used the initial conditions $p_0=0$
and uniformly distributed over the $s$ excluding the two intervals
$s\in(\pi/2-{\varepsilon},\pi/2+2{\varepsilon})$ and $s\in(3\pi/2,3\pi/2+3{\varepsilon})$, to exclude the
slowly decaying peak in the distribution located at $p=0$.
The result for ${\varepsilon}=0.1$ is shown in Fig. \ref{figlr3}. As we see, the model of the
inhomogeneous diffusion Eq. (\ref{eq9}) is a significant improvement over
the model of homogeneous diffusion Eq. (\ref{eq6}) and works very well.
The results are the same if we randomly vary the initial momenta according
to a narrow uniform distribution $p_0\in [-0.01, 0.01]$.
\begin{figure}
\centering
\includegraphics{Distributions1}
\caption{The distribution function $\rho(p,N)$ after $N=100$ collisions (a),
$N=300$ collisions (b) and $N=700$ collisions (c). The $10^5$
initial conditions at $p_0=0$ are as described in the text. ${\varepsilon}=0.1$.
The blue full line is the
theoretical prediction for the inhomogeneous diffusion (\ref{eq9}), the
dashed red curve corresponds to the homogeneous diffusion (\ref{eq6}).
}
\label{figlr3}
\end{figure}
Although we shall study the dependence of statistical
properties on initial conditions in
section III, we should explore the time evolution of the diffusion in the momentum space
for nonzero initial conditions $p_0\not=0$ already at this point,
starting from $\rho(p,t=0)=\delta(p-p_0)$, where
now the $10^5$ initial conditions are uniformly distributed over
all $s\in [0,{\cal L})$. It turns out that there is some transient time
period, where the diffusive regime is not yet well established,
which we demonstrate for ${\varepsilon}=0.1$ in Figs. \ref{figlr4} - \ref{figlr6}
for $p_0=0.25, 0.50$ and $0.75$, correspondingly. The black short-dashed
curve corresponds to the theoretical prediction based on the
inhomogeneous diffusion model (\ref{eq9}) starting with the
initial delta spike $\rho(p,t=0)=\delta(p-p_0)$,
and the initial conditions are uniform on all $s\in[0,{\cal L})$.
The red long-dashed
curve corresponds to the homogeneous diffusion model (\ref{eq6}),
the blue full line corresponds to the inhomogeneous diffusion
model (\ref{eq9}). In both latter cases the initial conditions were
taken from the histogram at the time of $100$ collisions,
and the coefficients in Eqs. (\ref{eq5},\ref{eq6},\ref{eq9},\ref{eq10})
were determined.
The delay of $100$ collisions has been chosen due to the initial
transient behaviour where the diffusion is not yet well defined.
Nevertheless, the time evolution of the diffusion
excellently obeys the inhomogeneous law (\ref{eq9})
for longer times.
\begin{figure}
\centering
\includegraphics{Distributions2}
\caption{The distribution function $\rho(p,N)$ after $N=100$ collisions (a),
$N=300$ collisions (b) and $N=700$ collisions (c). The $10^5$
initial conditions at $p_0=0.25$ are uniformly distributed over $s\in [0,{\cal L})$,
and are taken at $N=100$ collisions. ${\varepsilon}=0.1$. The blue full line is the
theoretical prediction for the inhomogeneous diffusion (\ref{eq9}), the
long-dashed red curve corresponds to the homogeneous diffusion (\ref{eq6}), while
the short-dashed black line is the theoretical prediction of
the inhomogeneous diffusion starting from the initial delta
spike $\rho(p,t=0)=\delta(p-p_0)$ rather than from the delayed
histogram of (a).
}
\label{figlr4}
\end{figure}
\begin{figure}
\centering
\includegraphics{Distributions3}
\caption{As in Fig.\ref{figlr4} but with $p_0=0.50$.
}
\label{figlr5}
\end{figure}
\begin{figure}
\centering
\includegraphics{Distributions4}
\caption{As in Fig.\ref{figlr4} but with $p_0=0.75$.
}
\label{figlr6}
\end{figure}
In these plots we observe qualitatively good agreement with the theory,
except for some visible deviation of the numerical histogram from the
theoretical prediction, around $p\approx 0$ in Figs.
\ref{figlr5}-\ref{figlr6}. We believe that these effects
are related to sticky objects around the
bouncing ball areas whose existence is demonstrated in the phase
space plot of Fig. \ref{figlr2}, and is thus a system-specific feature,
which would disappear in a "uniformly ergodic" system. However,
it must be admitted that chaotic billiards often have such
continuous families of marginally stable orbits
\cite{Alt2008}.
\section{Analysis of the variance of the distribution function}
In order to determine the value of the diffusion constant $D_0$ and its
dependence on ${\varepsilon}$ as defined in Eqs. (\ref{eq1}-\ref{eq2}) we can use
either the evolution of the entire distribution function $\rho(p,t)$
or of its second moment, the variance. When this was done in special
cases, agreement has been found. However, the second moment of distribution
function is much more stable than the distribution function itself, so
we have finally decided to use the variance of $p$ to extract the value
of $D_0$ from Eq. (\ref{eq13}), where the delta spike initial
condition at $p_0=0$ is assumed. This approach would be ideal, if our
model of the inhomogeneous diffusion in Eqs.
(\ref{eq1},\ref{eq2},\ref{eq9},\ref{eq10}) were exact.
However, this is not the case due to the bouncing ball regions
described in the previous section and the fact that the diffusive
regime is not yet well established for short times. Moreover, even if the model were exact,
Eq. (\ref{eq13}) does not apply to initial conditions at nonzero $p_0$.
The variance for the more general initial conditions $\rho(p,t=0)=\delta(p-p_0)$
is easily obtained by calculating the first two moments of the distribution (\ref{eq9}).
This is done by inserting the initial conditions into Eq. (\ref{eq10}) and using
the orthogonality relations of the Legendre polynomials. The average momentum is given by
\begin{equation}\label{eq17a}
\langle p\rangle =p_0\exp(-2D_0 t),
\end{equation}
and the variance by
\begin{multline}\label{eq17}
{\rm Var}(p) = \langle p^2\rangle - (\langle p\rangle)^2 =\\
= \frac{1}{3} \left(1 - \exp(-6D_0 t)\right)+{p_0}^2\exp(-6D_0 t)-\\-{p_0}^2\exp(-4D_0 t).
\end{multline}
As we see there is no clear way to define the transport time, since two exponential functions with different exponents are present. Furthermore, as we saw earlier in Figs. \ref{figlr4}-\ref{figlr6}, the diffusive regime is only well established after enough time has passed. We therefore chose to empirically generalize Eq. (\ref{eq14}) by the introduction of a prefactor $C$ as follows
\begin{equation} \label{eq18}
{\rm Var}(p) = \frac{1}{3} \left(1 - C \exp(-\frac{N}{N_T})\right).
\end{equation}
As we shall see the prefactor $C$ effectively compensates the generalized initial conditions and allows us to estimate the transport time.
In Fig. \ref{figlr9} we show the evolution of the variance
as a function of the number of collisions, for four different
values of ${\varepsilon}=0.08, 0.10, 0.12, 0.20$. The initial conditions
are the same as in Fig. \ref{figlr3} of Sec. II.
The agreement with the empirical model (\ref{eq18}) (which in this case coincides with the theoretical prediction (\ref{eq14}) if $C=1$) is excellent. The inset shows the initial non-diffusive phase of the dynamics.
The fitting procedure was as follows: first, $N_T$ was extracted from the best fit to all the data, and then the fitting was repeated excluding
the first $20\%$ of collisions, but not less than $20$ of them,
up to a maximum of $300$ collisions, in order to be in the optimal
interval for the determination of the two fitting parameters
$N_T$ and $C$ and to exclude the non-diffusive phase.
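The extraction itself is a standard least-squares fit; a minimal Python sketch of the fitting step, applied here to synthetic data generated from Eq. (\ref{eq17}) rather than to the billiard data themselves, is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def var_model(N, N_T, C):
    """Empirical model Eq. (18)."""
    return (1.0 - C * np.exp(-N / N_T)) / 3.0

def var_theory(t, D0, p0):
    """Variance of Eq. (17) for a delta initial condition at p0."""
    return ((1 - np.exp(-6 * D0 * t)) / 3.0
            + p0**2 * np.exp(-6 * D0 * t) - p0**2 * np.exp(-4 * D0 * t))

# Synthetic "data": Eq. (17) with D0 from N_T(eps=0.1)=341 of Table 1, p0 = 0.25
lbar = np.pi / 2
D0 = 1.0 / (6 * 341 * lbar)
N = np.arange(20, 2000)                  # exclude the first 20 collisions
var = var_theory(N * lbar, D0, 0.25)

popt, _ = curve_fit(var_model, N, var, p0=(300.0, 1.0))
print("fitted N_T = %.0f, C = %.2f" % tuple(popt))
\end{verbatim}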
\begin{figure}
\centering
\includegraphics{Variance1}
\caption{The variance of $p$ as a function of the number
of collisions $N$ for four different values of ${\varepsilon}$, starting
at the initial condition $p_0=0$. From top to bottom we show the numerical results for ${\varepsilon}=0.2$ (magenta), ${\varepsilon}=0.12$ (red), ${\varepsilon}=0.1$ (green), ${\varepsilon}=0.08$ (blue) calculated with $10^5$ initial conditions.
The theoretical fitting curves according to Eq. (\ref{eq18}) are shown with black dashed lines. The inset is a magnification of the first 100 bounces and shows the initial non-diffusive phase.
}
\label{figlr9}
\end{figure}
Finally, we take a fixed value of ${\varepsilon}=0.1$ and observe the variance
as a function of discrete time $N$ for various values of the initial
condition $\delta(p-p_0)$, $p_0=0., 0.25, 0.50, 0.75$, and use the same
fitting procedure. The initial conditions are the same as in Figs. \ref{figlr3}
- \ref{figlr6}, correspondingly.
Again, the agreement with the empirical
formula (\ref{eq18}) is excellent, as is seen in Fig. \ref{figlr10}.
The variance is well described also by the theoretical prediction Eq. (\ref{eq17}), particularly for larger values of $p_0$. From there we may extract the value of $D_{\rm dis}=D_0 \overline{l}$. Note that for $p_0=0$ the two descriptions coincide. Alternatively we could also extract the values of $D_{\rm dis}$ from the average of the momenta using Eq. (\ref{eq17a}). This yields equivalent results for values of $p_0>0.35$, but for lower nonzero values Eq. (\ref{eq17a}) fails to correctly describe the numerical time-dependent averages. This is because the initial non-diffusive phase significantly changes the average of the distribution of momenta from the one predicted from the initial delta distribution (compare the histogram with the black dashed line in Fig. \ref{figlr4}). This effect is diminished for $p_0>0.35$, probably because the peak of the distribution is further from the area of phase space near the marginally unstable bouncing ball orbits (see Figs. \ref{figlr5}-\ref{figlr6}).
In table 1 we present the list of values of $N_T$ at $p_0=0$, as a function of
${\varepsilon}$, extracted by the described methodology. They are important
in understanding the quantum localization of chaotic eigenstates as
previously discussed.
\begin{table}
\center
\begin{tabular}{ | p{1.7cm} | p{1.7cm} || p{1.7cm} | p{1.7cm}|}
\hline
\multicolumn{4}{|c|}{Transport times} \\
\hline
${\varepsilon}$ & $N_T$ & ${\varepsilon}$ & $N_T$\\ \hline
0.001 & $3 \times 10^{7}$ & 0.105 & 303\\ \hline
0.005 & $4.3 \times 10^{5}$& 0.110 & 275\\ \hline
0.010 & $7.8 \times 10^{4}$& 0.115 & 253\\ \hline
0.015 & $2.9\times 10^4$ & 0.120 & 233\\ \hline
0.020 & $1.4 \times 10^4$& 0.125 & 215\\ \hline
0.025 & 8410 & 0.130 & 299\\ \hline
0.030 & 5520 & 0.135 & 186\\ \hline
0.035 & 3750 & 0.140 & 172\\ \hline
0.040 & 2760 & 0.145 & 161\\ \hline
0.045 & 2110 & 0.150 & 150\\ \hline
0.050 & 1630 & 0.155 & 141\\ \hline
0.055 & 1340 & 0.160 & 131\\ \hline
0.060 & 1100 & 0.165 & 123\\ \hline
0.065 & 907 & 0.170 & 115\\ \hline
0.070 & 767 & 0.175 & 108\\ \hline
0.075 & 647 & 0.180 & 102\\ \hline
0.080 & 560 & 0.185 & 95\\ \hline
0.085 & 494 & 0.190 & 90\\ \hline
0.090 & 433 & 0.195 & 86\\ \hline
0.095 & 386 & 0.200 & 82\\ \hline
0.100 & 341 & & \\ \hline
\end{tabular}\\
\caption{The discrete transport time $N_T$ (number of collisions)
as a function of ${\varepsilon}$, as defined in Eq. (\ref{eq18}), with initial conditions at $p_0=0$, so that Eq. (\ref{eq14}) also applies. The sizes of the ensembles were $10^5$ for ${\varepsilon}<0.02$, $2.5\times10^5$ for $0.02\leq {\varepsilon}<0.08$ and $10^6$ for ${\varepsilon}>0.08$.}
\end{table}
\begin{figure}
\centering
\includegraphics{Variance2}
\caption{The variance as a function of the number
of collisions $N$ for ${\varepsilon}=0.1$, starting
at four initial conditions $p_0=0, 0.25, 0.50, 0.75$.
Each curve is shifted upwards by $0.1$ in order to avoid the overlapping
of curves. They all converge to $1/3$ when $N\rightarrow \infty$. The dotted black curve shows the theoretical prediction for the variance (\ref{eq17}), while the dashed curve shows the empirical model (\ref{eq18}).
}
\label{figlr10}
\end{figure}
In Fig. \ref{figlr7} we show the result for the
diffusion constant in terms of the discrete time $N$,
that is $D_{\rm dis}= D_0\overline{l}$, as a function of ${\varepsilon}$, as well
as $C$ as a function of ${\varepsilon}$, with the initial conditions
$\rho(p,t=0)=\delta(p-p_0)$ at $p_0=0$.
\begin{figure}
\centering
\includegraphics{Eps_dependence}
\caption{$D_{\rm dis}= D_0\overline{l}=1/(6N_T)$ and $C$ as
functions of ${\varepsilon}$, at $p_0=0$,
using the Eq. (\ref{eq18}). The $\log$ is decadic.
Ideally, $C$ should be unity according to Eqs. (\ref{eq13},\ref{eq14}).
}
\label{figlr7}
\end{figure}
We clearly observe the confirmation of the two limiting power laws
$D_{\rm dis} \propto {\varepsilon}^{5/2}$ for small ${\varepsilon}$ and
$D_{\rm dis} \propto {\varepsilon}^2$ for large ${\varepsilon}$. In the transition
region ${\varepsilon}\approx {\varepsilon}_c \approx 0.1$ which is about half
of a decade wide, the analytic description is unknown.
The dependence of $D_{\rm dis}$, extracted in accordance with Eq. (\ref{eq17}), on the parameter $p_0$ for the special case ${\varepsilon}=0.1$ is shown in Fig. \ref{figlr8} (a). The value of $D_{\rm dis}$ is minimal at $p_0=0$. This may be because the phase space contains sticky objects near the marginally unstable bouncing ball orbits. Here we must understand that at larger times, asymptotically, the initial conditions are forgotten, and we expect that $D_{\rm dis}$ tends to a constant value, which is indeed the case. The values of $D_{\rm dis}$ at $p_0=0.75$ exhibit the same power law dependences on ${\varepsilon}$ as those at $p_0=0$. $N_T$ and $C$, extracted in accordance with Eq. (\ref{eq18}), are shown in Fig. \ref{figlr8} (b-c). The value of $N_T$ increases for larger values of $p_0$.
This is an effect of the local transport being slower in the vicinity of the border $p=\pm 1$, due to the parabolic diffusion law (\ref{eq2}). The shortest estimate of the classical transport time, the one at $p_0=0$, is the one relevant for the study of localization of the eigenstates of the quantum billiard.
\begin{figure}
\centering
\includegraphics{p0_dependence}
\caption{$D_{\rm dis}= D_0\overline{l}$ from (\ref{eq17}), and $N_T$ with $C$ from (\ref{eq18}) as functions of $p_0$ for ${\varepsilon}=0.1$.
}
\label{figlr8}
\end{figure}
\section{Comparison of chaotic diffusion in a mixed type billiard}
In this section we study the coarse grained dynamics of the stadium billiard. We partition the phase space into a grid of cells and record the number of times each cell is visited by the orbit.
It is interesting to briefly discuss the observed differences between
ergodic systems like the stadium and the behavior in the chaotic components of
mixed type systems. Examples are the billiard introduced in \cite{Rob1983}, the border of which is given by a conformal mapping of the unit circle in the complex plane $|z|=1$
\begin{equation}
z \rightarrow z+\lambda z^2,
\end{equation}
at various shape parameter values $\lambda$
and the standard map \cite{Mei1994}.
In the case of $\lambda=1/2$ the billiard was proven to be ergodic \cite{Mar1993}. Because this billiard is strongly chaotic, the classical transport time is of the order of a few tens of bounces. An analysis of the diffusion along the lines of the previous sections is therefore not possible.
The first important observation is that
the so-called {\em random model} (Poissonian filling of the coarse grained
network of cells in the phase space) introduced in \cite{Rob1997} works very
well in the stadium and also in other ergodic systems like the $\lambda=1/2$ billiard.
In the process of filling, the cells are considered as filled (occupied) as soon as the orbit visits them.
The approach to the asymptotic value $1$ for the relative size of
the filled chaotic component $\chi$ as a function of the discrete time
(number of collisions) $N$ is exponential,
\begin{equation} \label{eq20}
\chi(N) = 1 -\exp \left( -\frac{N}{N_c} \right).
\end{equation}
where $N_c$ is the number of cells.
This is demonstrated in Fig.\ref{figlr11} for the case ${\varepsilon}=0.1$, where
chaos, dependence on initial conditions due to the large Lyapunov exponent,
is strong, and agreement with the random model is excellent,
while for the case ${\varepsilon}=0.01$ chaos is weak, and the agreement is not so
good as seen in Fig. \ref{figlr12}.
\begin{figure}
\centering
\includegraphics{CellFillStadium1}
\caption{The filling of the cells $\chi(N)$ for the stadium billiard ${\varepsilon}=0.1$ in the lin-lin plot (a),
the log-lin plot (b) and the distribution of cells with the occupancy number $M$ (c).
The black dashed curve in (a) and (b) is the random model Eq. (\ref{eq20}).
There are $25\times 10^6$ collisions and $N_c=10^6$ cells, so that the mean occupancy number $\mu =\langle M \rangle = 25$. The chaotic orbit has the initial conditions $(s=\pi/4, p=0)$. The black full
curve is the best fitting Gaussian, while the red dashed curve is the best fitting Poissonian distribution. The theoretical values of $\mu$ and $\sigma^2= \mu b =\mu (1-a)$ agree very well with the numerical ones.}
\label{figlr11}
\end{figure}
\begin{figure}
\centering
\includegraphics{CellFillStadium2}
\caption{As in Fig. \ref{figlr11} but ${\varepsilon}=0.01$. Some deviations from the
random model are seen, which are always negative due to the sticky objects in
the phase space which delay the diffusion.}
\label{figlr12}
\end{figure}
In both cases we show the lin-lin plot
of (\ref{eq20}) in (a), and also the log-lin plot in (b) for $1-\chi$. In
(c) we show the distribution of the occupancy number $M$ of
cells, which clearly is very close to a Gaussian. The size of the grid of
cells is $L=1000$, thus $N_c=L^2= 10^6$.
The latter observation can be easily explained by the following theoretical
argument within the Poissonian picture. We start an orbit in one of the
$N_c$ cells of the chaotic region and follow its evolution for a fixed
number of collisions $N$. Let $a=1/N_c$ be the uniform
probability that at the given discrete time $N$ one of the cells will
be visited (by the orbit), while its complement $b=1-a$
is the probability that the cell will not be visited. As we assume
absence of any correlations between the visits, the calculation of
the distribution of the occupancy $M$ of the cells is easy: The probability
$P(M)$ to have a cell containing $M$ visits is simply the binomial
distribution
\begin{equation} \label{eq21}
P_B(M) = {{N}\choose{M}} a^M b^{N-M},
\end{equation}
which has the exact values for the mean and variance
\begin{equation} \label{eq22}
\mu= \langle M \rangle =Na, \;\;\; \sigma^2 = \langle (M-\mu)^2 \rangle=Nab
= \mu b.
\end{equation}
For sufficiently large $N$ this can be approximated by the Gaussian with the
same $\mu$ and $\sigma^2$,
\begin{equation} \label{eq23}
P_G(M)= \frac{1}{\sqrt{2\pi\sigma^2} }
\exp\left( -\frac{(M-\mu)^2}{2\sigma^2} \right).
\end{equation}
In the Poissonian limit $a\rightarrow 0$ and $N\rightarrow \infty$, but
$\mu=aN={\rm const.}$, we find the Poissonian distribution
\begin{equation} \label{eq24}
P_P(M)= \frac{\mu ^M e^{-\mu}}{M!}.
\end{equation}
We see that the mean value and the variance agree with the theoretical
prediction $\mu=N/N_c=25$ and $\sigma^2=\mu b\approx \mu=25$, so that
the standard deviation $\sigma=5$, in the case of ${\varepsilon}=0.1$.
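The random model itself is trivial to simulate directly; a minimal Python sketch, with a much smaller grid than the $N_c=10^6$ cells used in the figures, is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Nc = 10_000                                   # number of cells (much smaller than in the figures)
N_total = 25 * Nc                             # mean occupancy mu = 25 at the end

visits = rng.integers(0, Nc, size=N_total)    # uncorrelated random cell visits

# filling curve chi(N) vs. the random-model prediction Eq. (20)
for N in (Nc // 10, Nc // 2, Nc, 3 * Nc):
    chi = np.unique(visits[:N]).size / Nc
    print(N, chi, 1 - np.exp(-N / Nc))

# occupancy distribution at the end vs. Eq. (22)
M = np.bincount(visits, minlength=Nc)
print(M.mean(), M.var(), 25 * (1 - 1 / Nc))
\end{verbatim}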
However, in the case ${\varepsilon}=0.01$ we see a quantitative discrepancy with
the random model prediction in Fig. \ref{figlr12} (a), but nevertheless
the approach to the asymptotic values is exponential with a slightly
different coefficient (b). Due to the sticky objects the filling of the cells
is slower. The variance of the distribution in (c)
is $38$, which is larger than the predicted value $\mu b=25$,
meaning that the relative fraction of more and of less richly
occupied cells is larger than expected by the binomial distribution.
Note that the Poisson distribution with the same $\mu$ (dashed red)
significantly deviates from the histogram.
In the chaotic components of mixed type systems things are different.
The random model does not work well, the approach to the equilibrium value
is not exponential, but instead is perhaps a power law
as reported by Meiss \cite{Mei1994}, or even something else
as observed in our work \cite{LR2017}.
Here we just show for comparison in Fig. \ref{figlr13} the time
dependence of the relative fraction of occupied cells $\chi(N)$
for the billiard introduced in \cite{Rob1983} with the shape parameter
$\lambda=0.15$ (a slightly deformed circle), in analogy with Figs. \ref{figlr11}-\ref{figlr12}.
\begin{figure}
\centering
\includegraphics{CellFillLambda}
\caption{As in Fig. \ref{figlr11} but for the mixed type billiard
\cite{Rob1983} with $\lambda=0.15$, for an orbit with $10^8$ collisions.
The asymptotic value of $\chi$ is $\chi_A=0.80$, as we see in (a).
Large deviations from the
random model are seen also in (b), which are always negative due to the sticky objects in
the phase space which delay the diffusion. In (c) we see the distribution
of cells at time $N=5\times 10^7$ according to the occupancy $M$, which now consists of the
the Gaussian bulge corresponding to the cells of the largest chaotic region,
and the delta peak at $M=0$ corresponding to the regular and other
smaller chaotic regions.
The initial condition (using the standard Poincar\'e-Birkhoff coordinates)
is $(s=4.0,p=0.75)$.
}
\label{figlr13}
\end{figure}
\noindent
It is clear that the random model is not good, and the approach to
the asymptotic value $\chi_A\approx 0.80$ is neither exponential nor
a power law, but something different to be studied further \cite{LR2017},
as one can see in the lin-lin plot (a) and in the log-lin plot in (b)
of Fig. \ref{figlr13}, and also in Fig. \ref{figlr14} for
three different initial conditions. The selected initial conditions are well separated yet yield very similar results. We therefore expect an ensemble average would not change the overall shape of the curve.
\begin{figure}
\centering
\includegraphics{CellFillLambdaLogLog}
\caption{As in Fig. \ref{figlr13} (b) but now in log-log
plot to show that the approach to the asymptotic value $\chi_A$ is
neither an exponential function nor a power law. The curves correspond to three
different initial conditions, $(s,p)= (4.0,0.75),\; (2.5,0.75),\; (2.5,0.15)$, in
green, blue and red correspondingly.}
\label{figlr14}
\end{figure}
One should note that the
measured $\chi (N)$ is always below the prediction of the random model,
which is due to the sticky objects in the phase space, that delay
the diffusion process, and occasionally also cause some plateaus on the
curve (due to temporary trapping).
Also the cell occupancy numbers shown in (c) are different from
the simple binomial distribution for the cells. Clearly, the empty
cells at $M=0$ represent the regular part of the phase space, and all chaotic components not linked to the largest one. These cells remain permanently empty for all $N$. There is an approximately Gaussian
distribution around the mean value $\mu = \langle M \rangle =N/N_c$, where
$N_c = \chi_A L^2$, with $L=1000$, and thus approximately $\mu \approx 62.5$,
for $L=1000$, $N=5\times 10^7$ and $\chi_A \approx 0.80$. The numerical
value of $\sigma^2$ is slightly larger, $\sigma^2=72$.
In between we observe a shallow minimum, sparsely populated. Further work along
these lines is in progress \cite{LR2017}.
\section{Discussion and conclusions}
In conclusion we may say that the major aspects of global diffusion in the
stadium billiard are well understood, and that many aspects can be manifested
in other systems with slow ergodicity. The applicability of the random model
\cite{Rob1997} is
largely confirmed, the coarse grained phase space divided into cells
is being filled exponentially. The diffusion constant obeys the parabolic law
$D=D_0({\varepsilon}) (1-p^2)$, which we may expect to apply in other slow
ergodic billiards as well. The distribution function
emanating from an arbitrary initial
condition obeys very well the inhomogeneous diffusion equation, and the
diffusion is normal for all ${\varepsilon}$ and initial conditions $p_0$.
The boundary effects in the evolution of $\rho(p,t)$
are correctly described by the model. The approach to uniform
equilibrium distribution $\rho=1/2$ with the variance ${\rm Var}(p)=1/3$ is
always exponential, for all ${\varepsilon}$ and all initial conditions $p_0$.
The diffusion constant $D_0$ has been calculated for many different values
of ${\varepsilon}$. At small ${\varepsilon} \le {\varepsilon}_c \approx 0.1$ $D_0 \propto {\varepsilon}^{5/2}$,
while for larger ${\varepsilon} \ge {\varepsilon}_c\approx 0.1$ it goes as $\propto {\varepsilon}^2$,
in agreement with the previous works \cite{BCL1996,CP1999}, but in between
there is no theoretical analytical approximation, so we have to resort to
the numerical calculations performed in this work. The value of the classical
transport (diffusion) time $N_T$, in terms of the discrete time (number
of collisions) has been determined for all values of ${\varepsilon}\le0.2$,
which plays an important role in
the quantum chaos of localized chaotic eigenstates
\cite{BR2013,BR2013A,BLR2017}.
In the mixed type systems, exemplified by the billiard introduced in
the Ref. \cite{Rob1983}, with the shape parameter $\lambda=0.15$, we have
shown that the behavior is quite different from ergodic fully chaotic
systems, which is in agreement with the report of Meiss \cite{Mei1994}
on the standard map.
Further work along these lines is important for the understanding of
classical and quantum chaos in billiard systems as model systems,
but the approach
should also be applicable to other smooth Hamiltonian systems, such
as e.g. the hydrogen atom in a strong magnetic field
\cite{Rob1981,Rob1982,HRW1989,WF1989}, or the helium atom etc.
\section{Acknowledgement}
The authors acknowledge the financial support of the Slovenian Research Agency (research core funding P1-0306). We would like to thank Dr. Benjamin Batisti\'c for useful discussions an providing the use of his excellent numerical library available at {\em https://github.com/benokit/time-dep-billiards}.
| {
"attr-fineweb-edu": 2.390625,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUdtE5qsBDHn4Qz6ZF | \section{Introduction}
Recommending items to users based on her personal preferences and choices has been a long standing problem. With the emergence of social networks and aggressive expansion of e-commerce, this field has attracted serious attention. One of the many variants of recommender systems include contextual suggestion. Contextual suggestion aims to make appropriate points-of-interest recommendations to a traveler traveling to an unfamiliar city, given her past preferences \cite{Dean15}. The premise of the problem is challenging because users (in this instance travelers) come from various backgrounds and their needs and taste vary widely. This problem is sometimes compounded when the user is reluctant in sharing her opinion of some particular point(s)-of-interest or attraction that she has visited earlier. Moreover, it might sometimes be difficult to judge an attraction based on the limited amount of information that is available on the web. \\For ease of understanding let us consider an example. Say, Radha is a person living in India who loves to travel. Her past travel experiences include Goa, Darjeeling, Kochi etc. Let us assume that we are given a snapshot of the attractions or points-of-interests that she has visited over past trips. Given that we are provided with some background knowledge about her tastes and choices, would it be possible for us to suggest her new point-of-interests at a new place? Let us first take a look at her choice of attractions from past trips (Table \ref{tab:eg}).\\
\begin{table}[h!]
\centering
\scriptsize
\begin{tabular}{|p{1.5cm}|p{2.5cm}|c|p{3cm}|} \hline
Place& Attraction & Rating & Tags\\ \hline\hline
\multirow{4}{4pc}{Goa}& Sahakari Spice Farm&6/10&Foodie, Peace, Nature Lover, Beach \\\cline{2-4}
&Tomb of St. Francis Xavier&7/10&Peace, History Buff, Art \& Architecture\\\cline{2-4}
&Panjim&8/10&Foodie, Beach Goer, Nature Lover, Peace\\\cline{2-4}
&Club Cabana& 5/10 & Beach Goer, Like a Local, Thrill Seeker, Nightlife Seeker\\\hline
\multirow{4}{4pc}{Kochi} & Wonderela Amusement Park & 7/10 & Theme Parks, Water, Amusement\\ \cline{2-4}
& Folklore Musuem & 9/10 & History Museums, Art \& Architecture, History Buff\\ \cline{2-4}
& Santa Cruz Basilica & 8/10 & Sights \& Landmarks, Architectural Buildings\\ \cline{2-4}
& LuLu Mall & 6/10 & --\\ \hline
\multirow{4}{4pc}{Darjeeling} & Kanchenjunga Mountain & 10/10 & Mountains, Nature \& Parks\\ \cline{2-4}
& Padmaja Naidu Zoological Park & 8/10 & Outdoor Activities, Zoos \& Aquariums, Nature \& Parks\\ \cline{2-4}
& Peace Pagoda &--&--\\ \cline{2-4}
& Passenger Ropeway & 8/10& Tramways, Thrill Seeker, Transportation\\ \hline
\end{tabular}
\caption{Snapshot of places visited by Radha}
\label{tab:eg}
\end{table}
Now suppose she is planning to take a trip to Shimla. Assuming that the above Table \ref{tab:eg} information is available with the system can it make proper recommendations to suit her needs? While for a human being it is easier to decipher what Radha likes and dislikes, for a computer to do so is certainly cumbersome. In such cases, rule-based systems could come in handy. For example, from the above table we see that Radha is inclined towards peaceful environments followed by architectural landmarks, historical sites etc. If we assume that 7/10 is her par rating and anything below that is something she didn't like much we can set up some rules for attractions so that points-of-interest with her likings and ratings are suggested to her on the top of the list. Considering such constraints, a recommender system can then suggest her the following attractions at Shimla (Table \ref{tab:sugg}).\\
\begin{table}[!h]
\centering
\begin{tabular}{|c|p{3cm}|}\hline
Attraction& Tags\\ \hline \hline
Viceregal Lodge& Architectural Buildings, Sights \& Landmarks\\ \hline
Shimla Christ Church & Sights \& Landmarks, Architectural building\\ \hline
Gaiety Heritage Cultural Complex & Historical landmark, Art \& Architecture\\ \hline
Kalka-Shimla Railway & Scenic Railroads, Tours, Peace \\ \hline
\end{tabular}
\caption{Sample of recommendation list}
\label{tab:sugg}
\end{table}
The TREC Contextual Suggestion track was started in 2012 with the aim of investigating complex information needs that are highly dependent on context and user interests. The problem was modeled as a travel-recommendation problem where given some user profiles and contexts (containing information about points-of-interest), the participants had to suggest attractions to the users visiting a new city. Over the years, the track has updated and modified their guidelines as to how those contexts could be utilized and the platform for participation (live or batch experiment). In 2015, the contexts consisted of a mandatory city name which represents which city the trip will occur in and several pieces of optional data about the trip (such as trip type, trip duration, season \emph{etc.}). Instead of providing the detailed information about each attraction, they were accompanied with a URL. Consequently, the data had to be extracted by participants from open-web for processing. We have used the dataset provided in this track and evaluated our procedure using the same metrics (P@5 and MRR) that were used by TREC to evaluate its participants. Our experimental results show that our rule-based system outperforms all other systems in both the metrics.
\section{Related Work}
The advent of e-commerce has revolutionized many industries including tourism \cite{Werthner04}. With better reach in internet connectivity and easy access to portable devices, online travel bookings have seen an upward surge along with online itinerary planning \cite{Haubl04}. Since recommendation systems are perceived as one of the fastest growing domains of internet applications, its footprint in tourism industry is also evident \cite{Fesenmaier06}. Designing recommender systems for the tourism industry is challenging as a travel-related recommendation must refer to a variety of aspects such as locations, attractions, activities \emph{etc.} to provide a meaningful suggestion. Also, the recommender system needs to keep a record of the past history of users (which can be done either implicitly or explicitly) and analyze the same \cite{Bobadilla13}. \\Context-aware recommendation has been successfully applied in various forms to suggest items to users across domains. Travel recommendation is no exception to that. But, inherently there is a catch to it since usage of too many contextual variables may lead to a drastic increase in dimensionality and loss of accuracy in recommendation. Zheng \emph{et al.} \cite{Zheng12} tackles this problem by decomposing the traditional collaborative filtering into three context-sensitive components and then unifying them using a hybrid approach. This unification involves relaxation of contextual constraints for each component. The problems and challenges of tourism recommendation systems have discussed in \cite{Berka04} along with their potential applications. A detailed survey of travel recommendation systems covering aspects like techniques used, functionalities offered by each systems, diversity of algorithms \emph{etc.} can be found in \cite{Borras14}.\\
Tourists today have complex, multi-faceted needs and are often flexible, experienced and demand both perfection and diversity in recommendation \cite{Ricci02}. To deal with such vagaries, travel recommendation systems today rely on latest technological offerings such as GPS \cite{Zheng09}\cite{Zheng10}, geo-tagged images \cite{Kurashima10}\cite{Majid13} and community contributed photos \cite{Cheng11}. Choi \emph{et al.} \cite{Choi09} has suggested the use of travel ontology for recommending places to users. Sun \emph{et al.} \cite{Sun15} build a recommendation system that provides users with the most popular landmarks as well as the best travel routing between the landmarks based on geo-tagged images.\\
Portable devices such as mobile phones and tablets provide information which can be used to recommend travel-related multimedia content, context-aware services, ratings of peers \emph{etc.} \cite{Park07} \cite{Gavalas11}. A systematic study on mobile recommender systems can be found in \cite{Gavalas14}.
\section{Proposed Approaches}
In this section we describe the various approaches adopted for recommending the best set of results to the user based on his/her context. Although we will show in a later section that \emph{R-Rec} turns out to be the best system for the current scenario, we give a brief description of other approaches that led to R-Rec.
\subsection{Experimental Setup}
\subsubsection{Dataset}
For experimental purposes, we used the dataset provided as part of the shared task in TREC 2015 Contextual Suggestion track.
\subsubsection{Preprocessing}
Both the candidate suggestions and user preferences (contexts) had URLs of the attractions. To get more information about the point-of-interests we initially crawled the webpages using a script but found that many of the links were either broken or dead. As an alternative measure, we sought the help of various publicly available APIs such as Alchemy API, Yelp API and Foursquare API to get more details about the attractions. Of all the three APIs, Foursquare proved to be the most useful one because the content it provided was rich as well as concise. Even after doing so, there were some residual websites for which the APIs failed to fetch any information. In such cases, we used whatever information we could crawl out of the webpages. Such contents were curated by removing stopwords and unintelligible characters. Also for convenience we retained only nouns, prepositions and adjectives from such content.
\subsubsection{Profile enrichment} \label{sec:tagging}
It is interesting to note that user assigned tags play an important role in determining how close our recommendation is to user's need. Taking a cue from this, we tagged the user preference attractions with tags from the predefined tag set that matched the description of URLs. For example, say user \textit{X} has 32 user preferences \emph{i.e.} attractions that she has visited previously. Now, it may so happen that she might feel reluctant to assign tags to all of them. Say, \textit{X} has rated all 32 attractions but tagged only 7 of them. So essentially, for those 25 attractions we have no idea what she liked or disliked about the place. The ratings might reflect how satisfied she was with the facilities at a particular attraction, but it does not reveal her qualitative judgement of that attraction.\\
To incorporate this fact, we extracted information of the user visited attractions from the URLs and matched them against the complete set of tags. Hence, an attraction which matched with say \textit{beach} or \textit{beach walk} was tagged with the tag ``beach''. Now while this was a naive approach this introduced a dilemma-- \textit{Which tags would be appropriate for a particular attraction?} It is evident from the dataset that almost all attractions have been tagged with multiple tags. Because it is customary that an attraction which is tagged with ``Restaurant'' should most certainly contain the tag ``Food''. While it is up to the users, whether they tag an attraction or not, this kind of incomplete information is certainly detrimental to our recommender system.\\
To tackle this scenario, we utilized WordNet \cite{Miller95}, from which we extracted synonyms for each tag and created a list (synset). Each element in the synset of each of the tags was matched against the descriptions, and if any of the synonyms matched with high accuracy (similarity greater than $0.75$), we assigned that particular attraction with the original tag present in the provided tag set. For example, if any of the descriptions contained a word \emph{meal}, it was tagged with ``Food'' since `meal' is a synonym of `food'. It is to be noted that in the case of bi-gram or tri-gram tags each of the terms present in the tags were expanded using WordNet simultaneously.
This exercise was repeated for all the user visited attractions until there were no untagged attractions\footnote{\scriptsize It should be noted that we did not assign any new tags for attractions having user provided tags.}.
\subsubsection{Evaluation}
The evaluation of our approaches was performed using the relevance judgment and evaluation scripts provided for the batch task by TREC. The metrics used were Precision at k=5 (P@5) and Mean Reciprocal Rank. In this evaluation, a suggestion is relevant if it is rated 3 or 4 by user. Each of the approaches were compared against the TREC 2015 Contextual Suggestion (CS) batch experiment results median. \emph{T-Rec} was additionally compared against the best system available at TREC-CS 2015.
\subsection{D-Rec}
User profiles contain tags for various attractions along with ratings. These tags are essential in capturing the user's taste because it gives us a window view into why she liked or disliked that particular attraction. At the same time, it is also obvious that the description of user preference attractions will give us more information about the attraction. We hypothesized that appropriate selection of either measure would result in a better recommendation list. With this in mind, we computed two scores for a candidate suggestion-- one for user assigned tags to attractions $\mathcal{S}_c^t$ and the other for description of user visited attraction $\mathcal{S}_c^d$.
\subsubsection{Methodology}
\paragraph{Computing $\mathcal{S}_c^d$}
As stated earlier, we had extracted descriptions of both candidate suggestion attractions and user profile attractions using various APIs and scripts. For computing $\mathcal{S}_c^d$, the candidate suggestion attraction description is matched against each of the user profile attraction description and the matching score is calculated using the following formula:
\begin{center} \begin{equation} \label{eq:1}\mathcal{S}_{c_j}^d=\frac{similarity(d_{u_{ix}},d_{c_j})}{|d_{u_ix}||d_{c_j}|} \end{equation}\end{center}
where $d_{u_{ix}}$ and $d_{c_j}$ stands for profile attraction description and candidate suggestion description respectively. The denominator represents the product of the number of words present in each of the descriptions $d_{u_{ix}}$ and $d_{c_j}$ respectively. The similarity measure used here was WordNet similarity\footnote{\tiny http://search.cpan.org/tpederse/WordNet-Similarity-2.07/lib/WordNet/Similarity.pm}.
\paragraph{Computing $\mathcal{S}_c^t$}
Now, for calculating scores $\mathcal{S}_c^t$, we consider the profile attraction tags as a single phrase and use a similar formula as in Equation \ref{eq:1}.
\begin{center} \begin{equation} \label{eq:2}\mathcal{S}_{c_j}^t=\frac{similarity(<t_{u_i}>,d_{c_j})}{|<t_{u_i}>||d_{c_j}|} \end{equation}\end{center}
where $<t_{u_i}>$ denotes the tags of a profile attraction $u_i$ being considered as a phrase. $|<t_{u_i}>|$ denotes the number of words in the phrase $<t_{u_i}>$.
\paragraph{Ranking of candidate suggestions}
We assume that the tags provided by users in their profiles play a definitive role in describing user's choice and taste. Consequently, it must have higher priority than the description extracted from the attraction's website. Because in general, an attraction's website will contain information about the type, facilities and services that are provided by them. This may very well differ from how the user perceives that attraction and her experience of the same. Adhering to this fact, we use $\mathcal{S}_{c_j}^t$ to rank the candidate suggestions. In case, there is a tie it is resolved in favor of a candidate suggestion having higher $\mathcal{S}_{c_j}^d$. The complete procedure is summarized in Algorithm \ref{algo:att1}.\\
\begin{algorithm}[h]
\SetAlgoLined
\KwData{Tagged user preferences $U$, Candidate suggestions $C$, Suggestion descriptions $d_C$}
\KwResult{Recommendation list $R$}
\caption{D-Rec}
\label{algo:att1}
\For{$1\leq i \leq |U|$}{
\For{$1\leq j \leq |C|$}{
\For{$1\leq x \leq |u_i|$}{
$\mathcal{S}_{c_j}^d\leftarrow \frac{similarity(d_{c_j},d_{u_{ix}})}{|d_{u_{ix}}||d_{c_j}|}$\;
\For{$\forall t_k \in u_i$}{
$append$($<t_{u_i}>$,$t_k$)\;
}
$\mathcal{S}_{c_j}^t\leftarrow \frac{similarity(d_{c_j},<t_{u_i}>)}{|<t_{u_i}>||d_{c_j}|}$\;
}
}
}
$R\leftarrow Sort_{\mathcal{S}_{c_j}^t}(C)$\Comment{\textrm{\small resolve ties using }$\mathcal{S}_{c_j}^d$}\;
\end{algorithm}
The \emph{Sort()} function in Algorithms \ref{algo:att1} through \ref{algo:rrec} sorts the list of suggestions based on some score and returns the sorted list.
\subsubsection{Results and Analysis}
In Table \ref{tab:desc}, we present the results of this approach.
\begin{table}[!ht]
\centering
\caption{Evaluation results of D-Rec}
\label{tab:desc}
\begin{tabular}{|c|c|c|}\hline
System &P@5 & MRR\\ \hline \hline
Median of TREC-CS 2015 \cite{TREC15}& 0.5090 & 0.6716\\ \hline
D-Rec& 0.4701 & 0.6051\\ \hline
\end{tabular}
\end{table}
Clearly, this approach did not yield satisfactory results. The reason for this system's failure could be ascribed to the naive matching of tags against descriptions. While WordNet \emph{similarity} tends to capture semantic similarity to a certain extent, it is quite possible for it to miss on few terminologies and inter-word relationships. This could have resulted in a poor overall score. So, in the rest of the approaches we use direct string matching instead of WordNet similarity. Another drawback of this approach is that it is expensive in terms of time complexity because of matching $m\times n$ combinations where $m$ and $n$ are the lengths of any two descriptions respectively.
\subsection{C-Rec}
When the previous attempt of matching descriptions against tags did not yield decent results, we resorted to matching tags against tags. The possible reason for \emph{D-Rec}'s under-performance was that matching long descriptions was diminishing the overall score. But, instead if we had tags for the candidate suggestions like user profile attractions, intuitively the comparison and agreement between two sets of tags would be better. Thus, we first followed a similar procedure of tagging candidate suggestion descriptions as described in Section \ref{sec:tagging}. Thus, each candidate suggestion now consists of a set of tags from the available tag set. We propose two measures for capturing user's preference namely \emph{Coverage} and \emph{Completeness}.
\subsubsection{Coverage}
Our first step was to ensure that our recommendation should be able to satisfy user's diverse needs as stated in \cite{He14}. Tags were our only source of information about the facilities and services user is looking for in an attraction. Hence, for each user profile we compiled a set of non-redundant tags (denoted by $\tau_{u_i}$) from tags of all the visited attractions. Now, tags of each of the candidate suggestions $c_j$ are matched against all the tags in $\tau_{u_i}$. The number of matched tags are divided by the total number of tags in $\tau_{u_i}$, to generate a normalized coverage score $\theta_{c_j}$ for candidate suggestion $c_j$. Coverage ensures that the results in the recommended list embody most of the user specified needs. $\theta$ is computed as :
\begin{center}\begin{equation} \label{eq:3} \theta_{c_j}=\frac{|{\tau_{u_i}\cap \mathbf{t}_{c_j}|}}{|\tau_{u_i}|}\end{equation}\end{center}
where $\mathbf{t}_{c_j}$ stands for the set of tags pertaining to a candidate suggestion $c_j$.
\subsubsection{Completeness}
To ensure that along with coverage, individual type of needs are also catered to, matching of the candidate suggestion $c_j$'s tags with each of the taglist of user profile attractions is carried out. The sum of tags matching across all the profile attractions is divided with the total number of tags present in that user profile. This gives us a normalized score $\omega_{c_j}$ representing the \textit{completeness} of $c_j$. \emph{E.g.,} let us consider that there are five tags for some $c_j$, out of which $p$ tags match with profile attraction $i$, $q$ tags match with profile attraction $i+1$ and so on. So, if we sum the number of matched tags ($p+q+$...) and normalize it over total number of tags across profile attractions, we get an idea of how relevant the candidate suggestion is to the user profile. $\omega_{c_j}$ can be expressed in the form of Equation \ref{eq:4}.
\begin{center} \begin{equation}\label{eq:4} \omega_{c_j}=\frac{\sum_{i}{|\mathbf{t}_{u_i}\cap \mathbf{t}_{c_j}|}}{\sum_{i}|\mathbf{t}_{u_i}|}\end{equation}\end{center}
Here, $\mathbf{t}_{u_i}$ denotes the complete tag list for a user profile attraction $u_i$.
\subsubsection{Ranking of suggestions}
The scores calculated using Equations \ref{eq:3} and \ref{eq:4} are then used to rank candidate suggestions. We formulate two alternate approaches to generate the recommendation list. In first approach termed as \emph{Cov-Rec}, we give more preference to coverage over completeness. The other approach \emph{Cmp-Rec} is \textit{vice-versa} of \emph{Cov-Rec}. In either case, the non-preferred score is used to resolve ties among candidates. The reason behind incentivizing coverage in \emph{Cov-Rec} is that we want to cover as many user specified facilities as possible. The argument for favoring completeness over coverage in \emph{Cmp-Rec} is that we want to cater to important needs rather than serving all interests of a user. For either case, Algorithm \ref{algo:att2} states the generalized procedure \emph{C-Rec}.
\begin{algorithm}[!h]
\SetAlgoLined
\KwData{Tagged user preferences $U$, Candidate suggestions $C$}
\KwResult{Recommendation list $R$}
\caption{C-Rec}
\label{algo:att2}
\For{$1\leq i \leq |C|$}{
\For{$\forall u_k \in U$}{
$\tau_{u_k}\leftarrow \cup t_{k}$\;
$\theta_{c_i}^k$ \small is computed using Equation \ref{eq:3} \;
}
$\omega_{c_i}$ \small is computed using Equation \ref{eq:4}\;
}
$R \leftarrow Sort(C)$ \Comment{\scriptsize based on either $\omega_{c_i}$ or $\theta_{c_i}$}\;
\end{algorithm}
\subsubsection{Result and Analysis}
Table \ref{tab:tag} presents the results obtained with \emph{C-Rec}.
\begin{table}[!ht]
\centering
\caption{Evaluation results of C-Rec }
\label{tab:tag}
\begin{tabular}{|c|c|c|}\hline
System &P@5 &MRR\\ \hline \hline
Median of TREC-CS 2015 \cite{TREC15}& 0.5090 & 0.6716\\ \hline
Cov-Rec & 0.5346 & 0.6735\\ \hline
Cmp-Rec & 0.5441 & 0.6839\\ \hline
\end{tabular}
\end{table}
In this case, although there is a noticeable improvement over \emph{D-Rec} there is still some scope for improvement. But an interesting observation is that preferring completeness over coverage gives better results than \textit{vice-versa}. Thus, it is imperative that major user needs be identified and catered to for suggesting the best recommendation to the user.\\
Unlike \emph{D-Rec}, this method establishes that tags are essential in capturing user's wishes and demands. It is to be noted that in both these attempts (\emph{D-Rec} and \emph{C-Rec}), we have not considered ratings for generation of recommendation list. This was based on the fact that user's rating may not inform us of her intention. She may very well be interested in the facilities offered by a kind of attraction. But, it may be the case that at some particular attraction, her satisfaction levels (pertaining to those facilities) weren't met and consequently she rated it low.
\subsection{R-Rec}
\subsubsection{Tag enrichment}
All the while we have been experimenting keeping in mind the requirements of the user under consideration. One interesting thing to note is the fact we have ignored that user assigned tags play an important role in determining how close our recommendation is to user's need. Taking cue from this, in this approach we tagged the user preference attractions with tags from the tagset that matched with the description of URLs. For example say user X has 32 user preferences \emph{i.e.} attractions that she has visited previously. Now, it may so happen that she might feel reluctant to assign tags to all of them. Say, she has rated all 32 attractions but tagged only 7 of them. So essentially, for those 25 attractions we have no idea what she liked or disliked about the place. The ratings might reflect how satisfied she was with the facilities at a particular attraction, but it doesn't reveal her qualitative judgement of that attraction.\\
To incorporate this fact, we extracted information of the user visited attractions from the URLs and matched them against the complete set of tags. Hence, an attraction which matched with say \textit{beach} or \textit{beach walk} was tagged with the tag ``beach''. Now while this was a naive approach this introduced a dilemma-- \textit{Which tags would be appropriate for a particular attraction?}. It is evident from the dataset that almost all attractions have been tagged with multiple tags. Because it is customary that an attraction which is tagged with ``Restaurant'' should most certainly contain the tag ``Food''. While it is upto users whether she has tagged an attraction or not, this kind of incomplete information is certainly detrimental to our recommender system.\\
To tackle this scenario, we reverted to WordNet \cite{Miller95}, from which we extracted synonyms for each tag and created a list for each tag. Another observation was that some of the tags were generic like \textit{Food} which could alternatively be expressed with the tag \textit{Cuisine}. On the other hand, some tags were very specific like ``Shopping for shoes''. Consequently, using a uniform tag matching approach would very likely result in wrong tagging of the attractions. To avoid such mistakes, we manually curated the tags into two categories--
\begin{itemize}
\item Generic Tags
\item Specific Tags
\end{itemize}
Matching of specific tags was simple. We matched the descriptions with the tag directly using WordNet's \textit{wup\_similarity}. If the similarity score was above 0.95 (to ensure a high degree of matching), we assigned that particular attraction with that specific tag. Again, each element in the synset of each of the generic tags were matched against the descriptions and if any of the synonyms matched with high accuracy we assigned that particular attraction with the generic tag present in the original tagset. For example, if any of the description contains a word \emph{meal}, it was tagged with ``Food'' since `meal' is a synonym of `food'. It is to be noted that in case of bi-gram or tri-gram tags each of the terms present in the tags were expanded using WordNet simultaneously.
This exercise was repeated for all the user visited attractions until there were no untagged attractions\footnote{\scriptsize It should be noted that in we left those attractions untouched for which user had already provided tags.}.
\subsubsection{Tag-matching and ranking of suggestions}
Once the above procedure of tagging user preferences is over, we move on to the most important part of the algorithm-- scoring each candidate suggestion and ranking them.
\paragraph{Computing individual tag scores}
There is always a trade-off when it comes to giving importance to either ratings or tags for attractions visited by users. For our consideration we consider rating as the primary factor, because we hypothesize that a user will only rate an attraction X higher if he likes that particular kind of attraction and was satisfied with the service or facilities available at X. Keeping this in mind we assign scores to each tag pertaining to each rating for each user. The steps below summarize the procedure:
\begin{enumerate}
\item For each of the available ratings in the user preference list (context) we create an empty list $l$.
\item Initially for all tags occurring in a particular user context we assign a score of zero \emph{i.e.} \textit{Food} is assigned 0, \textit{Park} is assigned 0 and so on.
\item Now, it is quite common for users to rate two or more attractions with same ratings. In that case, it is important that we suggest the candidate items to users with equal ratings if they are on higher side. So, for each rating the total number of tags associated with it are computed and assigned to the previously empty list $l[r_i]$. Essentially it creates a list of lists like in Table \ref{tab:rate}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|}\hline
Rating& Count of Tags\\ \hline \hline
4& 13\\ \hline
3&17\\ \hline
2&9\\ \hline
1&11\\ \hline
0&2\\ \hline
-1&0\\ \hline
\end{tabular}
\caption{List of tags for each rating}
\label{tab:rate}
\end{table}
\item To calculate the normalized score of each tag $\hat{s}(t_k^{r_i})$ pertaining to a rating $r_i$, we normalize the count of a particular tag $count_{r_i}(t_k)$ occurring in that rating by dividing it with the total count of tags for that particular rating $l[r_i]$ where $r_i\in r=\{4,3,2,1,0,-1\}$. \emph{E.g.} if the tag ``Food'' occurs four times among all the user preferences having rating 3, then the normalized score for ``Food'' at rating 3 is $\frac{4}{17}$.\\ This step ensures that the sum of scores of all tags for a particular rating is equal to one\footnote{\scriptsize $\sum_{n}s(t_n^{r_i})=t_1^{r_i}+t_2^{r_i}+...+t_n^{r_i}=1$}, so that no bias is induced for any rating. Lines 2-13 of Algorithm \ref{algo:rrec} summarizes the above steps.
\end{enumerate}
\paragraph{Scoring and ranking of candidate suggestions}
Once score for each tag for a user context has been computed we match each of the candidate suggestion tags with the rated attraction tags. Initially, all the candidate suggestions are assigned a score $\mathcal{S}_{c_j}=0$. For each suggestion, if any of the rated attraction's tag matches with candidate's, then its normalized score is added to the total score of the candidate suggestion. Mathematically,
\begin{center}\begin{equation} \mathcal{S}_{C_j}=\sum_{1\leq k \leq m}\hat{s}(t_k^{r_i}) \end{equation}\end{center}
where $m$ is the number of matched tags for a particular candidate suggestion $j$.\\
This process of matching a candidate suggestion is carried out for each of the user preferences. Let the rating user preference for which a candidate suggestion $j$'s score is maximum be $r$. Then, that particular candidate $j$ is assigned a rating of $r$ as well along with the candidate score of $\argmax_S \mathcal{S}_{C_j}$. Based on this candidate score, the candidate suggestions are ranked in descending order. The candidates with highest ratings are kept on top. If there are multiple candidates with same rating then they are sorted on the basis of their individual score $\mathcal{S}_c$. If any candidate suggestion's tags didn't match with any of the user preference attraction tags, then it is assigned a score of zero. This process is reflected in lines 14-26 of Algorithm \ref{algo:rrec}.
\begin{algorithm}[h]
\SetAlgoLined
\KwData{Tagged user preferences $U$, Rating $r$, Tagged candidate suggestions $C$, Empty list $l$}
\KwResult{Recommendation list $R$}
\caption{R-Rec}
\label{algo:rrec}
\For{$1\leq j \leq |U|$}{
\For{$\forall r_i \in U_j$}{
$l[r_i]\leftarrow 0$ \Comment{$r_i\in r=\{4,3,2,1,0,-1\}$}\;
}
\For{$\forall r_i \in U_j$}{
\If{$t_k \in tagset[U_{j_x}] \land rating[U_{j_x}]=r_i$}{
$l[r_i]\leftarrow l[r_i]+1$ \Comment{\textrm{\scriptsize $1\leq x \leq |U_j|$,$1\leq k \leq |t_{U_j}|$}}\;
}}
\For{$\forall t_k \in r_i$}{
$s(t_k^{r_i})\leftarrow count_{r_i}(t_k)$\;
$\hat{s}(t_k^{r_i})\leftarrow \frac{s(t_k^{r_i})}{l[r_i]}$\;
}
\For{$\forall C_i \in C$}{
\For{$\forall U_{j_x} \in U_j$}{
\If{$\exists t_{C_i}\in match(t_{U_{j_x}},t_{C_i})=True$}{
$\mathcal{S}_{C_i}^x\leftarrow \sum_{m} \hat{s}(t_{U_{j_x}}^r)$ \Comment{\textrm{\scriptsize m = |matched tags|}}\;
}
\Else{
$\mathcal{S}_{C_i}^x\leftarrow 0$\;}
}
$\mathcal{S}_{C_i}\leftarrow \argmax_S \mathcal{S}_{C_i}^x$\;
$rating[C_i]\leftarrow rating[\argmax_S \mathcal{S}_{C_i}^x]$\;
}
$R\leftarrow Sort(C)$\Comment{\textrm{\small in descending order of rating}[$C_i$]}\;
}
\end{algorithm}
\subsubsection{Results and Analysis}
Our intuition behind R-Rec was to give more preference to user assigned ratings followed by user assigned tags. This method generated a ranked list of candidate suggestion which was then evaluated using the TREC evaluation metrics. The results are presented in Table \ref{tab:rrec}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|}\hline
&P@5 & MRR\\ \hline \hline
Median of TREC-CS 2015 \cite{TREC15}& 0.5090 & 0.6716\\ \hline
Best system at TREC-CS 2015 \cite{Ali15} & 0.5858 & 0.7404\\ \hline
\textbf{R-Rec}& \textbf{0.5886}& \textbf{0.7461}\\ \hline
\end{tabular}
\caption{Comparison of R-Rec with TREC 2015 results}
\label{tab:rrec}
\end{table}
As can be seen, our approach outperforms the best system that took part in TREC 2015 Contextual Suggestion track. The highlight of our technique is its simplicity and elegance in capturing the intuition behind user's rating and tagging. The main reason why we lagged behind in our earlier approaches was due to the fact that we over enriched the tags for user preferences which might have led to a diluted overall score of a suggestion in the ranked list. Also noteworthy, is the fact that our hypothesis that ratings capture more essence of user's preference than tags is corroborated by the significant improvement in results.
\section{Conclusions}
Recommender systems has pervaded across all internet based e-commerce applications. Travel and tourism industry is no exception. In light of this, contextual suggestion aims at providing recommendations to users with past personal preferences, attractions or points-of-interest that the user might be interested in visiting. TREC Contextual Suggestion track investigates and explores this particular user-preference oriented problem. Previous approaches have tried various machine learning based approaches to address this problem while differing in methodology. In this paper, we show by means of thorough experimentation how a rule-based technique \textit{R-Rec} which takes into account both user ratings and tags can outperform all other approaches. Since each and every user has particular needs, we prove that in such cases individual preferences must be given more priority than collaborative choices. Our system was evaluated using two established metrics Precision at 5 (P@5) and MRR where it performed better than others.
\bibliographystyle{abbrv}
| {
"attr-fineweb-edu": 2.140625,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUarLxK6Ot9Pm3tBSE | \section{Introduction}
In computer vision based sports video analytics, one of the fundamental challenges is precise localization of the sport field. Which is determining where the broadcast camera is looking, at any video frame. Result of this can be used for different purposes, such as determining the players' coordinates on the sport field, camera calibration \cite{chen2019sports}, placement of the visual ads on the sport field.
In the literature, this problem is addressed by finding the homography between video frames and the top-view model of the sports field. Generally, the state-of-the-art procedures for localizing the sports fields \cite{citraro2020realtime, jiang2020optimizing, sha2020endtoend}
utilize deep structures for registering the video frames on the model of the sport filed, by minimizing a distance metric between the ground-truth and predicted points (or areas) of a frame on the model (or vice versa). These predictions are usually performed in a frame-by-frame manner, so, the result of registration (localization) for a given video sequence usually suffers from jittering. Therefore, an additional smoothing step on predicted homography of the consecutive frames is required. For smoothing or refinement of the initial homography estimation, some
works in the literature such as\cite{chen2019sports}, use Lucas-Kanade algorithm \cite{baker2004lucas}, which is based on tracking feature patches in the video frames. So, the success of this procedure highly depends on the detecting precision of the corresponding features.
In this work, a shot boundary detection procedure for broadcast hockey videos is implemented by adopting the cut transition method in \cite{yazdi2016shot}. A frame-based hockey ice-rink localization is performed by inspiration from \cite{jiang2020optimizing}. A ResNet18-based regressor is implemented which, for an input frame, regresses to the four control points on the hockey ice-rink model. In the inference phase, to remove the jittering of warped frames on the ice-rink model, for consecutive video-frames, the trajectories of the control points on the ice-rink model are smoothed by using a simple moving average operation based on Hann window. Then, the homography matrix is computed by using the direct linear transform (DLT) method \cite{dubrofsky2008combining}.
To the best of our knowledge there is no publicly available dataset on hockey videos for ice-rink localization. Therefore, a dataset of 72 hockey video-shots, along with their key-frames and their corresponding homography matrices, is generated and used to train, evaluate and test our method.
Our contributions in this paper are:
\begin{itemize}
\item Adopting a cut transition detection procedure, for fast and accurate shot boundary detection in broadcast hockey videos.
\item Implementing a frame-by-frame regressor for localizing the video-shot frames on the ice-rink model.
\item Introducing a new smoothing procedure by convolving a Hann window to the trajectory of the predicted control points, for the successive video-shot frames.
\item Generating a new hockey data set for ice-rink localization.
\end{itemize}
\section{Related Works to Sports Field Localization}
In this section, the related works in the literature for localizing the sports fields are reviewed.
Sports field localization is a special case of homography estimation, where the structure of the playing field plane is known \cite{nie2021robust}.
Homayounfar \textit{et al.} detect lines with a VGG16 semantic segmentation network, use them to minimize the energy of the vanishing point, and estimate the camera position via branch and bound \cite{homayounfar2017sports}.
Sharma \textit{et al.} and Chen and Little use the pix2pix network to extract the lines from the playing surface on a dataset of soccer broadcast video. Both works then compare the extracted edge images to a database of synthetic edge images with known homographies in order to localize the playing field \cite{chen2019sports, sharma2018automated}.
Cuevas \textit{et al.} detect the lines on a soccer field and classify them to match them to a template of the field \cite{cuevas2020automatic}. Tsurusaki \textit{et al.} use the line segment detector to find intersections of lines on a soccer field, then match them to a template of a standard soccer field using an intersection refinement algorithm \cite{tsurusaki2021sports}.
Sha \textit{et al.} detect the zones from soccer and basketball datasets. They initialize the camera pose estimation through a dictionary lookup and refine the pose with a spatial transformer network \cite{sha2020endtoend}.
Tarashima performs semantic segmentation of the zones as part of a multi-task learning approach for a basketball dataset \cite{tarashima2020sflnet}.
Citraro \textit{et al.} segment keypoints based on intersections of the lines on the playing surface and match them to a template for basketball, volleyball, and soccer datasets \cite{citraro2020realtime}.
Similarly, Nie \textit{et al.} segment a uniform grid of points on the playing surface and compute dense features for localizing video of soccer, football, hockey, basketball, and tennis \cite{nie2021robust}.
Jiang \textit{et al.} propose a method for refining homography estimates by concatenating the warped template and frame and minimizing estimation error. They report results on soccer and hockey videos \cite{jiang2020optimizing}.
The wide variety of methods that have been described in the literature shows that there is no one method that works particularly well for all sports applications. Recently described techniques show that deep network architectures achieve better performance with faster computation \cite{jiang2020optimizing, tarashima2020sflnet, citraro2020realtime, chen2019sports}.
\section{Methodology}
The implemented method to localize hockey ice-rink for an input hockey video has three main steps, which are illustrated in Fig.\ref{fig:framework} and are listed here: 1) Shot boundary detection (SBD), which is breaking the input video into smaller temporal sequences, called video-shots\cite{yazdi2016shot}, 2) Localizing each frame of the video-shot on the top-view model of the ice-rink, by regressing into four control points on the model\cite{jiang2020optimizing}. 3) Smoothing the trajectory of the control points for the input video-shot, using a moving window, and calculating the smoothed homography matrix for each frame by using the DLT algorithm \cite{dubrofsky2008combining}. These steps are explained thoroughly in the following subsections.
\begin{figure*}[t]
\begin{center}
\subfloat(a) SBD{
\includegraphics[width=0.83\linewidth]{Images/SBD.PNG}
}\\
\subfloat(b) Regressor{
\includegraphics[width=0.8\linewidth]{Images/Regressor2.png}
}\\
\subfloat(c) Smoothing{
\includegraphics[width=0.8\linewidth]{Images/smoothing2.PNG}
}
\end{center}
\caption{General framework for localizing the hockey video-shots on the ice-rink model. (a) Shot boundary detection is performed to segment the input video into smaller temporal units, i.e., video-shots. (b) A ResNet 18-based regressor is implemented to regress to the four control points, for all frames of the input video-shot (c) By using a moving average operation, trajectory of the control point on the ice-rink model is smoothed for the input video-shot.}
\label{fig:framework}
\end{figure*}
\subsection{Shot Boundary Detection}
\label{sec:SBD}
Shot boundary detection (SBD) is the primary step for any further video analysis, that temporally breaks a video into smaller units, called video-shots. Each video-shot is composed of a sequences of frames that are semantically consecutive and are captured with a single run of a camera. Here, SBD is performed by adopting the procedure used for cut transition detection in \cite{yazdi2016shot}. As shown in \ref{fig:framework}(a), first, the input video with frame-rate of 30 fps, is hierarchically partitioned. One-minute window (plus one extra frame) of the video (i.e., 1801 frames) is considered as a mega-group (MG). Each mega-group includes nine groups (G), and each group includes ten segments (Sg), where each segment includes 21 frames. The neighbouring segments within a group have one frame overlap. The neighbouring mega-groups have one segment overlap. Based on this partitioning a thresholding mechanism is defined to detect the video-shot boundaries. \\
Each frame is divided to 16 blocks. The block histogram of the marginal frames (first and last frame) of the segment are computed. If the distance of the marginal histograms for a segment goes beyond the group threshold, $T_G = 0.5 \mu_{G} + 0.5 \left[ 1+\ln\left(\frac{\mu_{MG}}{\mu_{G}}\right)\sigma_{G} \right]$, that segment would be a candidate for including shot transition.\\
For each candidate segment, the differences between block histograms of all the successive frames are computed. If the maximum difference in a segment goes beyond a threshold, i.e., $\frac{max(dist_{sg})}{\mu_{G}} > \ln\left(\frac{\mu_{G}}{\mu_{Sg}\sigma_{Sg}}\right)$, the frame associated with the maximum difference is detected as the shot boundary.
\subsection{Localizing Frames on the Ice-Rink Model}
Localizing hockey frames on the ice-rink model is performed in a frame-by-frame fashion by using a ResNet18-based regressor which is shown in Fig.~\ref{fig:framework}(b). The ResNet18 model is pre-trained on ImageNet dataset and two fully connected layers are appended to the network.\\
\textbf{Training.} The network is trained on our dataset of NHL hockey video key-frames annotated with ground truth homographies. For each key-frame of size $h\times w \times 3$, four pre-determined control points are considered, i.e., $[[0, 0.6h], [0, h],$\\ $[w, h], [w, 0.6h]]$. Having the ground truth homography of the key-frames, the network is trained to regresses the four projected control points on the ice-rink model, by minimizing the squared error between the ground-truth points, $P_{gt}$, and the estimated points, $P_{est}$ as per eq. \ref{eq:loss}.
\begin{equation}
\label{eq:loss}
L =\| P_{est}-P_{gt}\|^2_{2}
\end{equation}
\textbf{Inference.} In the inference phase, each frame of the input video-shot, i.e., $Frame_{i}, i \in \{1,2,...,len(shot)\}$, is fed into t he network and the four control points on the ice-rink model, i.e., $[p_{i1}, p_{i2}, p_{i3}, p_{i4}]$, are estimated. Having the four pre-determined control points on the frame, the four pairs of points are used to calcule the homography matrix by using the DLT algorithm. However, after warping the successive frames of the video-shot on the ice-rink model according to the inferred homography matrices, a jittering phenomenon is observed. This is due to the frame-by-frame homography estimation method. Incorporating the temporal information in calculation of the homography matrix, as explained in the next sub-section, can smooth the projection of the video-shot.
\subsection{Smoothing the Trajectories of Points }
Here, a straightforward, and effective smoothing method has been proposed and implemented, which is based on smoothing the trajectory of the four estimated control points on the ice-rink model.\\
\textbf{Hypothesis.} For a given video-shot we can assume that camera moves smoothly in every direction in the 3D space. In other words, camera's field of view (FOV) changes smoothly in the 3D space. A mapping of the 3D-FOV of the camera on the 2D space can be achieved by homography. Therefore, for a video-shot, changes of the 2D-FOV of the camera (i.e., changes of the consecutive warped frames on the ice-rink model) should be smooth as well. From this, we can directly conclude that the four control points should have an smooth trajectory on the ice-rink model. \\
A Hann window of size $m+1$ (with an odd size) is used to smooth the trajectory of the control points on the ice-rink model. The coefficient of this window can be computed as given in eq. \ref{eq:Hann}.
\begin{equation}
\label{eq:Hann}
w(n)= 0.5 \left( 1-cos(2\pi\frac{n}{m})\right)
\end{equation}
Fig.~\ref{fig:framework}(c) demonstrate how the estimated control points of $Frame~\#i$ are being smoothed by applying the Hann window on control points of $m+1$ neighboring frames, i.e., frames $\frac{(i-m)}{2}$ to $\frac{(i+m)}{2}$, which is essentially a convolution operation. The smoothed homography is then computed by using DLT method between the smoothed points, $[p_{i1}^{s}, p_{i2}^{s}, p_{i3}^{s}, p_{i4}^{s}]$, and the four fixed control points on the frame.
\section{Results}
In the following subsections some quantitative an qualitative experimental results are given. As there is no publicly available dataset of hockey, for ice-rink localization purpose, to conduct the experiments, we have prepared a dataset that is described here.
\textbf{Dataset.} Our dataset includes 72 video-shots (30 fps), that are captured from 24 NHL hockey games, by using the SBD procedure that is explained in section \ref{sec:SBD}. For each video-shot, a number of key-frames are extracted sparsely (i.e., 20 frames apart from each other) and are annotated to get their ground truth homography matrices. Annotations are collected by annotating point correspondences on each hockey broadcast frame and the rink model. Key-frames from 11 video-shots of four games are used for validation, the key-frames from 11 video-shots of four other games are used for test and rest of the data is used for training. Data augmentations, such as flipping the frames, are performed on the training data. Inference is performed on all frames of test video-shots.
\subsection{Qualitative Evaluation of Smoothing}
The effect of smoothing the trajectory of the four control points for the consecutive frames of a test video-shot is demonstrated in Fig. \ref{fig:smoothing}. The x-coordinates and the y-coordinates of the four points, i.e., $ [p_{1}, p_{2}, p_{3}, p_{4}] = [(x_{1}, y_{1}), (x_{2}, y_{2}), (x_{3}, y_{3}), (x_{4}, y_{4}) ]$, are smoothed separately by using the Hann window of size 15. The x and y-coordinates, before and after smoothing, are coded with different colors and are shown in Fig. \ref{fig:smoothing}(a) and Fig. \ref{fig:smoothing}(b). Also the trajectory of point $p_{1}$ on the ice-rink, before and after smoothing, is given in Fig. \ref{fig:smoothing}(c).
These figures show the jittering phenomenon (over and under shooting of the estimated coordinates) in the trajectories of the control points. The smoothing removes this jittering noise. In practice, when the ice-rink model is projected on the frames using the smoothed homography, the resulting video-shot is visually continuous and free from jittering.
\begin{figure}[!t]
\begin{center}
\subfloat{
\includegraphics[width=1\linewidth]{Images/xcoords.png}(a)
}\\
\subfloat{
\includegraphics[width=1\linewidth]{Images/ycoords.png}(b)
}\\
\subfloat{
\includegraphics[width=1\linewidth]{Images/trajectory.png}(c)
}
\end{center}
\caption{Smoothing the trajectory of the four control points for a test video-shot using the Hann window reduces jittering in the output. (a) The x-coordinates of the four control points in successive frames of the video-shot, before and after smoothing. (b) The y-coordinates of the four control points in successive frames of the video-shot, before and after smoothing. (c) The trajectory of control point $p_{1}$ on the ice-rink model coordinates, before and after smoothing.}
\label{fig:smoothing}
\end{figure}
\subsection{Quantitative Evaluation of Smoothing}
The effect of smoothing on IOU\textsubscript{part} as well as the accuracy of projected points are evaluated and given in Table \ref{tab:accuracy}. Our method before and after applying smoothing step is compared to single feed forward (SFF) network \cite{jiang2020optimizing}. The result for SFF is directly reported from the related paper. The experiment shows that the results almost remain the same with or without applying the homography smoothing.
In the second experiment, a smoothness score is computed for each test video-shot and reported in Table~\ref{tab:smoothing_eval}.
The network regresses the positions of the four control points of the broadcast frame after it has been warped onto the rink model.
The smoothness score is the average of the smoothness of the four control points. The smoothness of a sequence of points is calculated as the standard deviation of the frame-to-frame differences of each coordinate, normalized by the mean difference. For this score, lower is better. Smoothing the network output gives a lower mean smoothness score than using the network output directly.
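A possible implementation of such a smoothness score is sketched below (the exact normalization by the ``mean difference'' is an assumption on our part):
\begin{verbatim}
import numpy as np

def smoothness_score(points):
    """Smoothness of a control-point trajectory; lower is better.

    points: array of shape (num_frames, 4, 2) with the four control
            points (x, y) predicted for each frame of a video-shot.
    """
    scores = []
    for p in range(4):                         # each control point
        for c in range(2):                     # x and y coordinates
            diffs = np.diff(points[:, p, c])   # frame-to-frame differences
            # standard deviation of the differences, normalized by
            # the mean (absolute) difference
            scores.append(np.std(diffs) / np.abs(diffs).mean())
    return float(np.mean(scores))
\end{verbatim}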
\begin{table}[!t]
\centering
\caption{Mean IOU\textsubscript{part}, mean, median and variance of projected points on the ice-rink model, for our method before and after smoothing (BFS and AFS). A comparison with SFF \cite{jiang2020optimizing} is also performed.}
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{|c|c|c|c|c|}\hline
method & mean IOU\textsubscript{part} & Mean\textsubscript{proj} & Var\textsubscript{proj} & Median\textsubscript{proj} \\\hline\hline
Ours(BFS)& 94.63 & 12.87 & 53.22 & 11.64 \\
\hline
Ours(AFS) & 94.65 & 12.91 & 54.02& 11.90\\
\hline
SFF\cite{jiang2020optimizing} & 90.1 & - & -&-\\
\hline
\end{tabular}
\label{tab:accuracy}
\end{table}
\begin{table}[!t]
\centering
\caption{Smoothness evaluation of the hockey ice-rink localization before and after the smoothing step (i.e., S\textsubscript{BFS} and S\textsubscript{AFS}), together with the difference of the two.}
\footnotesize
\setlength{\tabcolsep}{0.2cm}
\begin{tabular}{|c|c|c|c|}\hline
Test video-shots & S\textsubscript{BFS} & S\textsubscript{AFS} & Difference \\
\hline\hline
$Video-Shot_{1}$ & 33.62 & \textbf{14.20} & 19.42 \\
\hline
$Video-Shot_{2}$ & 131.90 & \textbf{97.96}& 33.93 \\
\hline
$Video-Shot_{3}$& 93.64 & \textbf{34.31} & 59.34 \\
\hline
$Video-Shot_{4}$ & 221.50 & \textbf{201.81} & 19.68 \\
\hline
$Video-Shot_{5}$ & 20.96 & \textbf{12.88} & 8.08 \\
\hline
$Video-Shot_{6}$ & 59.98 & \textbf{49.51} & 10.46 \\
\hline
$Video-Shot_{7}$ & 34.94 & \textbf{13.26} & 21.68 \\
\hline
$Video-Shot_{8}$ & 183.80 & \textbf{124.93} & 58.87 \\
\hline
$Video-Shot_{9}$ & 290.31 & \textbf{262.14} & 28.16 \\
\hline
$Video-Shot_{10}$ & 58.49 & \textbf{40.17} & 18.31 \\
\hline
$Video-Shot_{11}$ & 411.07 & \textbf{327.46} & 83.60 \\
\hline
\end{tabular}
\label{tab:smoothing_eval}
\end{table}
The mean output smoothness before smoothing (S\textsubscript{BFS}) across all test video-shots is 140.02, and after smoothing (S\textsubscript{AFS}) it is 107.15. This is a reduction of 32.87; since lower is better, the smoothing step improves the measured smoothness.
\section{Conclusion}
In this work, hockey ice-rink localization from broadcast videos is performed. The major focus of our work was finding an efficient framework for breaking raw hockey videos into shots, finding the homography of each video-shot, and addressing the jittering phenomenon that arises from the frame-by-frame nature of the homography estimation. To fulfill these aims, we first adopted a threshold-based shot boundary detection approach and applied it to hockey videos. Then, in order to calculate the homography between each frame of a shot and the ice-rink model, we implemented, like similar works in the literature, a ResNet18-based regressor that regresses the four control points of each frame on the model. In the last stage, we smoothed the predicted control points of a frame by convolving a Hann window over the trajectory of the control points in the $m+1$ neighboring frames. The results show that smoothing neither improves nor worsens the accuracy of the predicted points or the homography; however, it effectively removes the jittering and provides smoother results.
\section{Acknowledgement}
This work is supported by Stathletes through the Mitacs Accelerate Program.
\bibliographystyle{IEEEbib}
\section{Introduction}
In a common form of competition, a group of judges scores contestants' performances to determine their ranking. Unlike one-on-one pairwise direct competitions such as football or basketball, where strict rules for scoring points must be followed and a clear winner is accordingly produced in the open, a complete reliance on the judges' subjective judgments can often lead to dissatisfaction among fans and accusations of bias or even corruption~\cite{birnbaum1979source, zitzewitz2006nationalism,whissell1993national}. Examples abound in history, including the Olympics, which heavily feature this type of competition. A well-documented example is the figure skating judging scandal at the 2002 Salt Lake Olympics, which can be said to have been a prototypical judging controversy in which the favorites lost under suspicious circumstances, and which led to a comprehensive reform of the scoring system~\cite{looney2003evaluating,kang2015dealing}. A more recent, widely-publicized example can be found in the prestigious 17th International Chopin Piano Competition of 2015, in which judge Philippe Entremont gave contestant Seong-Jin Cho an ostensibly poor score compared with other judges and contestants. That Cho went on to win the competition nonetheless rendered the low score from Entremont all the more noteworthy, if not determinant of the final outcome~\cite{justin2015, onejuror2015}. Given that the competition format depends completely on human judgment, these incidents suggest that the following questions will persist: How do we detect a biased score? How much does a bias affect the outcome of the competition? What is the effect of the bias on our understanding of the system's behavior? Here we present a network framework to find answers and insights into these problems.
\begin{figure}
\resizebox{\figurewidth}{!}{\includegraphics{01_definition.eps}}
\caption{ (a) The bipartite network representation of the judge--contestant competition. The edge weight is the score given from a judge to a contestant. (b) The one-mode projection onto the judges produces a weighted complete network whose edge weights are the similarity between the judges. We use the cosine similarity in our work.}
\label{definition}
\end{figure}
\begin{table*}[ht]
\caption{The final round scores from the 17th International Chopin Piano Competition. Judge Dang was not allowed to score the three contestants that were his former students (Liu, Lu, and Yang), noted ``NA''. The expected scores $\hat{b}$ of Eq.~\eref{expscore} in these cases are given in parentheses.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
& Cho & Hamelin & Jurinic & Kobayashi & Liu & Lu & Osokins & Shiskin & Szymon & Yang \\ \hline
Alexeev & 10 & 8 & 2 & 1 & 7 & 9 & 3 & 6 & 4 & 5 \\ \hline
Argerich & 9 & 9 & 4 & 6 & 4 & 5 & 4 & 5 & 4 & 5 \\ \hline
Dang & 8 & 8 & 2 & 7 & \begin{tabular}{@{}c@{}}NA \\ (6.124) \end{tabular} & \begin{tabular}{@{}c@{}}NA \\ (5.673) \end{tabular} & 1 & 5 & 4 & \begin{tabular}{@{}c@{}}NA \\ (4.776) \end{tabular} \\ \hline
Ebi & 9 & 9 & 3 & 5 & 3 & 4 & 5 & 5 & 3 & 8 \\ \hline
Entremont & 1 & 8 & 3 & 2 & 5 & 8 & 4 & 7 & 4 & 6 \\ \hline
Goerner & 9 & 10 & 2 & 5 & 5 & 8 & 2 & 6 & 2 & 6 \\ \hline
Harasiewicz & 6 & 7 & 6 & 2 & 9 & 3 & 5 & 7 & 2 & 2 \\ \hline
Jasiński & 9 & 6 & 3 & 8 & 10 & 6 & 2 & 3 & 5 & 2 \\ \hline
Ohlsson & 9 & 8 & 6 & 1 & 9 & 4 & 5 & 7 & 2 & 3 \\ \hline
Olejniczak & 10 & 7 & 1 & 5 & 9 & 8 & 3 & 2 & 6 & 4 \\ \hline
Paleczny & 9 & 6 & 1 & 4 & 10 & 8 & 2 & 3 & 5 & 7 \\ \hline
Pobłocka & 9 & 7 & 1 & 6 & 8 & 8 & 2 & 5 & 2 & 6 \\ \hline
Popowa-Zydroń & 9 & 10 & 1 & 6 & 9 & 8 & 1 & 1 & 4 & 6 \\ \hline
Rink & 9 & 9 & 5 & 3 & 8 & 4 & 7 & 6 & 2 & 1 \\ \hline
Świtała & 9 & 8 & 1 & 5 & 10 & 7 & 4 & 1 & 5 & 5 \\ \hline
Yoffe & 9 & 9 & 5 & 3 & 8 & 7 & 6 & 2 & 4 & 2 \\ \hline
Yundi & 9 & 9 & 4 & 5 & 6 & 6 & 2 & 4 & 3 & 5 \\ \hline
\end{tabular}
\label{BB}
\end{table*}
\section{Methodology and Analysis}
The judge--contestant competition can be modeled as a bipartite network with weighted edges representing the scores, shown in Fig.~\ref{definition}~(a). It is a graphical representation of the $l\times r$--bipartite adjacency matrix
\begin{align}
\mathbf{B} = \set{b_{ij}},
\end{align}
whose actual values from the 17th International Chopin Piano Competition are given in Table~\ref{BB}~\cite{finaloceny2015}, composed of $l=17$ judges and $r=10$ contestants. The entries ``NA'' refer to the cases of judge Thai Son Dang ($i=3$) and his former pupils Kate Liu ($j=5$), Eric Lu ($j=6$), and Yike (Tony) Yang ($j=10$), whom he was not allowed to score. For convenience in our later analysis, we nevertheless fill these entries with expected scores based on their scoring tendencies using the formula
\begin{align}
\hat{b}_{ij} = \sqrt{\frac{\sum'_ib_{ij}}{l-1}\times\frac{\sum''_jb_{ij}}{r-3}},
\label{expscore}
\end{align}
where the summations $\sum'$ and $\sum''$ indicate omitting these individuals. It is the geometric mean of Dang's average score given to the other contestants and the contestant's average score obtained from the other judges. The values are given inside parentheses in Table~\ref{BB}. A one-mode projection of the original bipartite network onto the judges is shown in Fig.~\ref{definition}~(b), which is also weighted~\cite{zhou2007bipartite}. The edge weights here indicate the similarity between the judges, for which we use the cosine similarity
\begin{align}
\sigma_{ij} \equiv \frac{\vec{b}_i\cdot\vec{b}_j}{|\vec{b}_i||\vec{b}_j|} = \frac{\sum_kb_{ik}b_{jk}}{\sqrt{\sum_kb_{ik}^2}\sqrt{\sum_kb_{jk}^2}}
\label{cosine}
\end{align}
with $\hat{b}$ naturally substituted for $b$ when applicable. This defines the $17\times 17$ adjacency matrix $\mathbf{S}=\set{\sigma_{ij}}$ of the one-mode projection network.
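For illustration, the one-mode projection and the judges' mean mutual similarities used below can be computed directly from the (filled-in) score matrix; a minimal NumPy sketch, assuming the missing entries have already been replaced by the expected scores $\hat{b}$, is:
\begin{verbatim}
import numpy as np

def judge_similarity(B):
    """One-mode projection onto the judges via cosine similarity.

    B: array of shape (num_judges, num_contestants); the "NA" scores
       are assumed to have been replaced by their expected values.
    Returns the (num_judges x num_judges) similarity matrix S.
    """
    norms = np.linalg.norm(B, axis=1)          # |b_i| for every judge
    return (B @ B.T) / np.outer(norms, norms)  # cosine similarities

def mean_similarity(S):
    """Average similarity of each judge to the other judges."""
    n = S.shape[0]
    return (S.sum(axis=1) - np.diag(S)) / (n - 1)
\end{verbatim}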
\begin{figure}
\resizebox{\figurewidth}{!}{\includegraphics{02_simlarity.eps}}
\caption{The mutual average similarity between the judges. (a) In the full data Philippe Entremont is the most dissimilar, atypical judge. (b) In the truncated data with Cho removed, Entremont now becomes the seventh most similar judge, placing himself as the more typical one. This abrupt change indicates the outsize effect of the single uncharacteristic score given to Cho by Entremont.}
\label{meansimilarity}
\end{figure}
\subsection{Determining the Atypicality of Judges}
Perhaps the most straightforward method for determining the atypicality of a judge before analyzing the network (i.e., using the full matrix) is to compare the judges' average mutual similarities (average similarity to the other judges), shown in Fig.~\ref{meansimilarity}. Using the full data set (Fig.~\ref{meansimilarity}~(a)) we find Yundi to be the most similar to the others with $\av{\sigma}=0.94\pm0.04$, and Entremont to be the least so with $\av{\sigma}=0.83\pm 0.03$. The global average similarity is $0.90\pm 0.05$, indicated by the red dotted line. Yundi is not particularly interesting for our purposes, since a high overall similarity indicates that he is the most typical, average judge. Entremont, on the other hand, is the most interesting case. Given the attention he received for his low score to Cho, this makes us wonder how much of this atypicality of his was a result of it. To see this, we perform the same analysis with Cho removed from the data, shown in Fig.~\ref{meansimilarity}~(b). Entremont is now ranked 7th in similarity, indicating that his score on Cho likely was a very strong factor for his atypicality first seen in Fig.~\ref{meansimilarity}~(a).
\begin{figure*}[htpb]
\begin{center}
\includegraphics[scale=0.5]{03_dendrogram.eps}
\caption {Clustering dendrogram of the judges in 17th Chopin Competition. (a) Using the full data. Entremont joins the dendrogram at the last possible level ($z=16$). (b) Using the truncated data with Cho removed. Entremont joins the dendrogram early, at level $z=3$. The elimination of a single contestant has rendered Entremont to appear to be one of the more typical judges, consistent with Fig.~\ref{meansimilarity}~(b).}
\label{clusterDendro}
\end{center}
\end{figure*}
Although the effect of a single uncharacteristic score was somewhat demonstrated in Fig.~\ref{meansimilarity}, by averaging out the edge weights we have incurred a nearly complete loss of information on the network structure. We now directly study the network and investigate the degree to which biased scores affect its structure and our inferences about it. While there exists a wide variety of analytical and computational methods for network analysis~\cite{newman2010networks,barabasi201609}, here we specifically utilize \emph{hierarchical clustering} for exploring the questions at hand. Hierarchical clustering is most often used in classification problems by identifying clusters or groups of objects based on similarity or affinity between them~\cite{johnson1967hierarchical,newman2004fast,murtagh2012algorithms}. The method's name contains the word ``hierarchical'' because it produces a hierarchy of groups of objects, starting from each object being its own group at the bottom to a single, all-encompassing group at the top. The hierarchy thus found is visually represented using a dendrogram such as the one shown in Fig.~\ref{clusterDendro}, generated for the judges based on the cosine similarity $\sigma_{ij}$ of Eq.~\eref{cosine}. We used agglomerative clustering with average linkage~\cite{eisen1998cluster,newman2004finding}. Before we use the dendrogram to identify clusters, we first focus on another observable from the dendrogram, the level $z$ at which a given node joins the dendrogram. A node with small $z$ joins the dendrogram early, meaning a high level of similarity with others; a large $z$ means the opposite. This $z$ is consistent with Fig.~\ref{meansimilarity}: in the full data set Entremont has $z=16$ (the maximum possible value with 17 judges), being the last one to join the dendrogram, while $z=3$ when Cho is removed. The two dendrograms and Entremont's $z$ are shown in Fig.~\ref{clusterDendro}~(a)~and~(b). $z$ is therefore a simple and useful quantity for characterizing a node's atypicality. To see if any other contestant had a similar relationship with Entremont, we repeat this process by removing the contestants alternately from the data and measuring Entremont's $z$, the results of which are shown in Fig.~\ref{entryPoint}. No other contestant had a similar effect on Entremont's $z$, once again affirming the uncharacteristic nature of Entremont's score of Cho's performance.
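A sketch of this procedure with SciPy is given below; similarities are converted to distances as $1-\sigma$, and the entry level $z$ of a judge is operationalized here as the merge step at which that judge first joins a cluster (one simple way to read it off the linkage output):
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def entry_levels(S):
    """Average-linkage clustering on 1 - similarity.

    S: (n x n) judge similarity matrix.
    Returns z, the merge step at which each judge first joins a
    cluster (small z = typical, large z = atypical).
    """
    n = S.shape[0]
    D = 1.0 - S
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='average')
    z = np.zeros(n, dtype=int)
    for step, (a, b, _, _) in enumerate(Z, start=1):
        for idx in (int(a), int(b)):
            if idx < n and z[idx] == 0:   # an original judge joins here
                z[idx] = step
    return z
\end{verbatim}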
\subsection{Racism as a Factor in Scoring}
A popular conjecture regarding the origin of Cho's low score was that Entremont may have been racially motivated. We can perform a similar analysis to find any such bias against a specific group (e.g., non-Caucasians) of contestants. To do so, we split the contestants into two groups, non-Caucasians and Caucasians plus Cho (the ethnicities were inferred from their surnames and, when available, photos) as follows:
\begin{itemize}
\item Non-Caucasians (5): Cho, Kobayashi, Liu, Lu, and Yang
\item Caucasians plus Cho (6): Cho, Hamelin, Jurinic, Osokins, Shiskin, and Szymon.
\end{itemize}
We then plot figures similar to Fig.~\ref{entryPoint}. If Entremont had truly treated the two racial groups differently, his score to Cho would have had significantly different effects on each group. The results are shown in Fig.~\ref{R5_EastWestEntryPoint}. As before, Entremont's low score of Cho stands out amongst the non-Caucasian contestants, a strong indication that race had not played a role, although it should be noted that Entremont appears to have been more dissimilar overall from the other judges in scoring the non-Caucasian contestants when Cho was not considered ($z=8$ compared with $z=2$ in the Caucasian group).
\begin{figure}
\resizebox{\figurewidth}{!}{\includegraphics{04_entrypoint.eps}}
\caption{Entremont's entry point $z$ into the dendrogram as the contestants are alternately removed from the data one by one. The removal of any contestant other than Cho has no visible effect on the typicality of Entremont.}
\label{entryPoint}
\end{figure}
\begin{figure*}[htpb]
\begin{center}
\includegraphics[scale=0.5]{05_caucasian.eps}
\caption {Entremont's entry point into the clustering dendrogram in two distinct groups of contestants, the non-Caucasian group (a) and the Caucasian group plus Cho (b). In both cases, Entremont's $z$ decreases from the maximum value only when Cho is removed. This suggests that Entremont's uncharacteristic score of Cho's performance was not ethnically motivated.}
\label{R5_EastWestEntryPoint}
\end{center}
\end{figure*}
\subsection{Impact of Biased Edges on Inference of Network's Modular Structure}
As pointed out earlier, hierarchical clustering is most often used to determine the modular structure of a network. This is often achieved by making a ``cut'' in the dendrogram on a certain level~\cite{langfelder2008defining}. A classical method for deciding the position of the cut is to maximize the so-called \emph{modularity} $Q$ defined as
\begin{align}
Q = \frac{1}{2m}\sum_{ij}\biggl[A_{ij}-\frac{k_ik_j}{2m}\biggr]\delta(c_i,c_j)
\label{modularity}
\end{align}
where $m$ is the number of edges, $c_i$ is the module that node $i$ belongs to, and $\delta$ is the Kronecker delta~\cite{newman2006modularity}. The first factor in the summand is the difference between the actual number of edges ($0$ or $1$ in a simple graph) between a node pair and its random expectation based on the nodes' degrees. We now try to generalize this quantity for our one-mode projection network in Fig.~\ref{definition}~(b) where the edge values are the pairwise similarities $\sigma\in[0,1]$. At first a straightforward generalization of Eq.~\eref{modularity} appears to be, disregarding the $(2m)^{-1}$ which is a mere constant,
\begin{align}
Q' = \sum_{ij}\bigl(\sigma_{ij}-\av{\sigma_{ij}}\bigr)\delta(c_i,c_j)\equiv\sum_{ij}q'_{ij}\delta(c_i,c_j)
\label{qtemp}
\end{align}
where $\av{\sigma_{ij}}$ is the expected similarity obtained by randomly shuffling the scores (edge weights) in the bipartite network of Fig.~\ref{definition}~(a). In the case of the cosine similarity this value can be computed analytically using its definition Eq.~\eref{cosine}: it is equal to the average over all permutations of the elements of $\vec{b}_i$ and $\vec{b}_j$. What makes it even simpler is that permuting either one is sufficient, say $\vec{b}_j$. Denoting by $\vec{b}_j^{(k)}$ the $k$-th permutation of $\vec{b}_j$ out of the $r!$ possible ones, we have
\begin{align}
\av{\sigma_{ij}}
&= \frac{1}{r!|\vec{b}_i||\vec{b}_j|} \biggl(\vec{b}_i\cdot\vec{b}_j^{(1)}+\cdots+\vec{b}_i\cdot\vec{b}_j^{(r!)}\biggr) \nonumber \\
&= \frac{1}{r!|\vec{b}_i||\vec{b}_j|}\vec{b}_i\cdot\biggl(\vec{b}_j^{(1)}+\cdots+\vec{b}_j^{(r!)}\biggr)\nonumber \\
&= \frac{1}{r!|\vec{b}_i||\vec{b}_j|}\biggl(b_{i1}\bigl(b_{j1}^{(1)}+\cdots+b_{j1}^{(r!)}\bigr) \nonumber \\
&~~~+b_{i2}\bigl(b_{j2}^{(1)}+\cdots+b_{j2}^{(r!)}\bigr)+\cdots\bigr)\nonumber \\
&=\frac{(r-1)!\,(\sum_{k=1}^rb_{ik})(\sum_{k=1}^rb_{jk})}{r!\,|\vec{b}_i||\vec{b}_j|}\equiv\frac{B_iB_j}{r\,|\vec{b}_i||\vec{b}_j|},
\label{expectation}
\end{align}
where the factor $(r-1)!$ appears because each element of $\vec{b}_j$ occupies any given position in exactly $(r-1)!$ of the $r!$ permutations, and $B_i \equiv \sum_{k=1}^{r} b_{ik}$.
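As a quick numerical sanity check of this closed form (our own illustration, not part of the original analysis), one can compare it against a Monte Carlo average over random permutations:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
b_i = rng.integers(1, 11, size=10).astype(float)   # scores given by judge i
b_j = rng.integers(1, 11, size=10).astype(float)   # scores given by judge j
r = len(b_j)

# Monte Carlo estimate of the expected cosine similarity under
# random permutations of judge j's scores.
samples = [np.dot(b_i, rng.permutation(b_j)) for _ in range(100000)]
mc = np.mean(samples) / (np.linalg.norm(b_i) * np.linalg.norm(b_j))

# Closed-form expectation B_i * B_j / (r * |b_i| * |b_j|).
closed = b_i.sum() * b_j.sum() / (r * np.linalg.norm(b_i) * np.linalg.norm(b_j))
print(mc, closed)   # the two values agree up to Monte Carlo noise
\end{verbatim}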
When we insert this value into Eq.~\eref{qtemp} and try to maximize it for our network, however, we end up with a single module that contains all the judges as the optimal solution, a rather uninteresting and uninformative result. On closer inspection, it turns out that this stems from the specific nature of the summand $q'$ with regard to our network. For a majority of node pairs the summand is positive (even when Entremont is involved), so that it is advantageous to have $\delta(c_i,c_j)=1$ for all $(i,j)$ for $Q'$ to be positive and large, i.e. all judges belonging to a single, all-encompassing module, as noted. The reason why $Q$ of Eq.~\ref{modularity} has worked so well for sparse simple networks was that most summands were negative (since $A_{ij}=0$ for most node pairs in a sparse network, and $k_ik_j/(2m)>0$ always), so that including all nodes in a single group was not an optimal solution for $Q$. To find a level of differentiation between the judges, therefore, we need to further modify $Q'$ so that we have a reasonable number of negative as well as positive summands. We achieve this by subtracting a universal positive value $q'_0$ from each summand, which we propose to be the mean of the $q'$, i.e.
\begin{align}
q'_0 &= \mean{q'}=\mean{\sigma_{ij}-\av{\sigma_{ij}}}
= \frac{2}{l(l-1)}\sum_{i<j}\bigl(\sigma_{ij}-\av{\sigma_{ij}}\bigr) \nonumber \\
&=\mean{\sigma}-\mean{\av{\sigma}}.
\end{align}
At this point one must take caution not to be confused by the notations: $\mean{\sigma}$ is the average of the actual similarities from data, while $\av{\sigma}$ is the pairwise random expectation from Eq.~\eref{expectation}, and therefore $\mean{\av{\sigma}}$ is the average of the pairwise random expectations. Then our re-modified modularity is
\begin{align}
Q'' &\equiv\sum_{ij}q_{ij}''\delta(c_i,c_j) \equiv \sum_{ij}\bigl(q'_{ij}-q'_0\bigr)\delta(c_i,c_j)\nonumber \\
&=\sum_{ij}\bigl[(\sigma_{ij}-\av{\sigma_{ij}})-\overline{(\sigma_{ij}-\av{\sigma_{ij}})}\bigr]\delta(c_i,c_j)
\end{align}
This also has the useful property of vanishing to $0$ at the two ends of the dendrogram (i.e. all nodes being separate or forming a single module), allowing us to naturally avoid the most trivial or uninformative cases.
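A sketch of how $Q''$ could be evaluated for every cut of the dendrogram is given below (using SciPy's \texttt{fcluster} to extract the partition after each merge; the expected similarities $\av{\sigma_{ij}}$ are assumed to be precomputed with Eq.~\eref{expectation}, and the variable names are ours):
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def best_cut(S, S_expected):
    """Scan all dendrogram cuts and return the one maximizing Q''.

    S:          observed judge similarity matrix (n x n).
    S_expected: pairwise expected similarities under random shuffling.
    """
    n = S.shape[0]
    q = S - S_expected
    iu = np.triu_indices(n, k=1)
    q = q - q[iu].mean()               # subtract q'_0, the off-diagonal mean
    np.fill_diagonal(q, 0.0)           # ignore self-pairs
    D = 1.0 - S
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='average')
    scores = []
    for k in range(1, n + 1):          # number of modules after the cut
        labels = fcluster(Z, t=k, criterion='maxclust')
        same = labels[:, None] == labels[None, :]
        scores.append((q[same].sum(), k))
    return max(scores)                 # (maximal Q'', number of modules)
\end{verbatim}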
\begin{figure*}[htpb]
\begin{center}
\includegraphics[scale=0.75]{06_networks.eps}
\caption {Modified modularity $Q''$ and the modular structure of the judges' network. (a) In the full network, maximum $Q''$ occurs at $z=14$, resulting in the three modules shown in (c). (b) In the truncated network with Cho removed, the optimal solution is at $z=15$, resulting in the two modules shown in (d). A comparison of (c) and (d) tells us that a single biased score can be powerful enough to cause Entremont to form his own module, and also to present that solution as the dominant one; in the absence of the biased edge, several comparable solutions can exist (marked by an encompassing rectangle) in (b).}
\label{clusterNetworks}
\end{center}
\end{figure*}
We now plot $Q''$ as we traverse up the dendrograms, shown in Fig.~\ref{clusterNetworks}~(a) and (b). For the full data with Cho, maximum $Q''$ occurs at level $z=14$, yielding $3$ modules (Fig.~\ref{clusterNetworks}~(c)). With Cho removed, in contrast, maximum $Q''$ occurs at level $z=15$, resulting in $2$ modules (Fig.~\ref{clusterNetworks}~(d)). In the full data Entremont forms an isolated module on his own, but otherwise the $Q''$-maximal modular structures are identical in both cases. This is another example of how a single uncharacteristic, biased score from Entremont to Cho is responsible for a qualitatively different observed behavior of the system. There is another issue that warrants further attention, demonstrating the potential harm brought on by a single biased edge: we see in Fig.~\ref{clusterNetworks}~(a) that the $Q''$-maximal solution ($z=14$) eclipses all the other possibilities ($\Delta Q''=2.3687$ between it and the second optimal solution, for a relative difference $\Delta Q''/Q''_{\textrm{max}}=0.2653$), compared with Fig.~\ref{clusterNetworks}~(b), where the difference between the two most optimal solutions is much smaller ($\Delta Q''=1.8430$ and $\Delta Q''/Q''_{\textrm{max}}=0.0554$). Furthermore, there are at least three other solutions with comparable $Q''$ $(z=13,~12,~11)$ in Fig.~\ref{clusterNetworks}~(b). Given the small differences in $Q''$ between these solutions, it is plausible that had we used a slightly different definition of modularity or tried alternative clustering methods, any of these or another comparable configuration may have presented itself as the optimal solution. But a single biased edge was so impactful that not only was an apparently incorrect solution identified as the most optimal, it was also much more dominant than any other.
\section{Discussions and Conclusions}
Given the prevalence of competition in nature and society, it is important to understand the behaviors of different competition formats, to know their strengths and weaknesses, and to improve their credibility. Direct one-on-one competitions are the easiest to visualize and model as a network, and many centralities can be applied either directly or in a modified form to produce reliable rankings~\cite{park2005network, shin2014ranking}. Such competition formats are mostly free from systematic biases, since the scores are direct results of one competitor's superiority over the other. The jury--contestant competition format, while commonly used, provides a more serious challenge since it relies completely on human judgement; when the public senses unwarranted bias they may lose trust in the fairness of the system, which is the most serious threat against the very existence of a competition.
Here we presented a network study of the jury--contestant competition, and showed how we can use the hierarchical clustering method to detect biased scores and measure their impact on the network structure. We began by first identifying the most abnormal jury member in the network, i.e. the one that is the least similar to the others. While using the individual jury member's mean similarity to the others had some uses, using the dendrogram to determine the atypicality of a judge graphically was more intuitive and allowed us to gain a more complete understanding of the network. After confirming the existence of a biased score, we investigated the effect of the bias on the network structure. For this analysis, we introduced a modified modularity measure appropriate for our type of network. This analysis revealed in quite stark terms the dangers posed by such biased edges; even a single biased edge that accounted for less than 1\% of the edges led us to make unreliable and misleading inferences about the network structure.
Given the increasing adoption of the network framework for data modeling and analysis in competition systems where fairness and robustness are important, we hope that our work highlights the importance of detecting biases and understanding their effect on network structure.
\acknowledgements{
This work was supported by the National Research Foundation of Korea (NRF-20100004910 and NRF-2013S1A3A2055285), the BK21 Plus Postgraduate Organization for Content Science, and the Digital Contents Research and Development program of MSIP (R0184-15-1037, Development of Data Mining Core Technologies for Real-time Intelligent Information Recommendation in Smart Spaces).
}
\bibliographystyle{apsrev4-1}
\section{Introduction}
\noindent Finding characteristics that are specific to a geographic region is challenging because it requires local knowledge to identify what is particularly salient in that area.
Knowledge of regional characteristics becomes critical when communicating about or describing regions, for instance in the context of a mobile travel application that provides country, city and neighborhood summaries. This knowledge is especially useful when comparing regions to make geo-based recommendations: tourists visiting Singapore, for example, might be interested in exploring the Tiong Bharu neighborhood if they are aware it is known for its coffee in the same way the San Francisco Mission district is.
Photo-sharing websites, such as Instagram and Flickr, contain photos that connect geographical locations with user-generated annotations. We postulate that local knowledge can be gleaned from these geotagged photos, enabling us to discriminate between annotations (i.e.,\ tags) that specifically characterize a region (e.g.,\ neighborhood) and those that characterize surrounding areas or more general themes. This is, however, challenging because much of the data produced within a region is not necessarily specifically descriptive for that area. Is the word ``desert'' specifically descriptive of Las Vegas, or rather of the surrounding area? Can we quantify to what extent the word ``skyscraper'' is descriptive of Midtown, Manhattan, New York City or the United States as a whole?
In this paper, we propose the geographical hierarchy model (GHM), a probabilistic hierarchical model that enables us to find terms that are specifically descriptive of a region within a given hierarchy. The model is based on the assumption that the data observed in a region is a random mixture of terms generated by different levels of the hierarchy. Our model further gives insight into the diversity of local content, for example by allowing for identification of the most unique or most generic regions amongst all regions in the hierarchy. The GHM is flexible and generalizes to both balanced and unbalanced hierarchies that vary in size and number of levels. Moreover, it is able to scale up to very large datasets because its complexity scales linearly with respect to the number of regions and the number of unique terms. To evaluate our model, we compare its performance with two methods: Naive Bayes~\cite{Lewis:1998} and a hierarchical variant of TF-IDF~\cite{rattenbury2009methods}.
To investigate how well the descriptive terms surfaced by our model for a given region correspond with human descriptions, we focus on annotations of photos taken in neighborhoods of San Francisco and New York City. We apply our method to a dataset of 8 million geotagged photos described by approximately 20 million tags. We are able to associate to each neighborhood the tags that describe it specifically, and coefficients that quantify its uniqueness. This enables us to find the most unique neighborhoods in a city and to find mappings between similar neighborhoods in both cities.
We contrast the neighborhood characteristics uncovered by our model with their human descriptions by conducting a survey and a user study. This allows us not only to assess the quality of the results found by the GHM, but mainly to understand the human reasoning about what makes a feature distinctive (or not) for a region. Beyond highlighting individual differences in people's local experiences and perceptions, we touch on topics such as the importance of supporting feature understanding, and the consideration of adjacency and topography through porous boundaries.
The remainder of this paper is organized as follows. We first discuss related work in Section~\ref{section:relatedwork}. We then present our geographical hierarchical model in Section~\ref{section:model}, evaluate its performance in Section~\ref{section:evaluation}, and describe the user study in Section~\ref{section:userstudy}. We finally conclude the paper and present future outlooks in Section~\ref{section:conclusions}.
\section{Related Work}
\label{section:relatedwork}
\noindent Human activities shape the perception of coherent neighborhoods, cities and regions; and vice versa. As Hillier and Vaughan~\cite{Hillier2007} pointed out, urban spatial patterns shape social patterns and activity, but in turn can also reflect them. Classic studies such as those by Milgram and Lynch~\cite{Lynch1960} on mental maps of cities and neighborhoods reflect that people's perceptions go well beyond spatial qualities. They illustrate individual differences between people, but also the effects of social processes affecting individual descriptions. People's conceptualization of region boundaries are fuzzy and can differ between individuals, even while they are still willing to make a judgement call on what does or does not belong to a geographic region~\cite{montello2003s}. Urban design is argued to affect the imageability of locales, with some points of interests for example may be well known, whereas other areas in a city provide less imageability and are not even recalled by local residents~\cite{Lynch1960}. The perceived identity of a neighborhood differs between individuals, but can relate to their social identity and can influence behavior~\cite{Stedman2002}. Various qualitative and quantitative methods, from surveys of the public to trained observation, have been employed over the years to gain such insights~\cite{montello2003s,Stedman2002,egenhofer1995naive}, with the fairly recent addition of much larger datasets of volunteered geographic information~\cite{elwood2013} and other community-generated content to benefit our understanding of human behavior and environment in specific regions. Geotagged photos and their tags are also an invaluable data source for research about social photography, recommendations, and discovery~\cite{1149949,1437389}.
We differentiate our approach from those aiming to discover new regions~\cite{Thomee:region} or redefine neighborhood boundaries~\cite{cranshaw2012livehoods} using geotagged data. Backstrom et al.~\cite{Backstrom:2008:SVS:1367497.1367546} presented a model to estimate the geographical center of a search engine query and its spatial dispersion, where the dispersion measure was used to distinguish between local and global queries. Ahern et al.~\cite{ahern2007world} used \textit{k}-means clustering to separate geotagged photos into geographic regions, where each region was described by the most representative tags. This work was followed up by Rattenbury and Naaman~\cite{rattenbury2009methods}, in which the authors proposed new methods for extracting place semantics for tags. While these prior works principally attempted to identify new regions or arbitrary spaces, we instead aim to find the unique characteristics of regions in a known hierarchy. For each region at each level in the hierarchy we aim to find a specifically descriptive set of tags, drawn from the unstructured vocabulary of community-generated annotations.
Hollenstein and Purves~\cite{hollenstein2010exploring} examined Flickr images to explore the terms used to describe city core areas. For example, they found that terms such as \texttt{downtown} are prominent in North America, \texttt{cbd} (central business district) is popular in Australia and \texttt{citycenter} is typical for Europe. The authors further manually categorized tags in the city of Z\"{u}rich according to a geographical hierarchy; even with local knowledge they found that doing so was tedious because of tag idiosyncrasies and the diversity of languages. In contrast, our method enables us to automatically obtain such classifications. Moreover, we are not just able to distinguish between local and global content, but can classify a tag according to a geographical hierarchy with an arbitrary number of levels.
Hierarchical mixture models can be used for building complex probability distributions. The underlying hierarchy, that encodes the specificity or generality of the data, might be known a priori~\cite{McCallum:1998} or may be inferred from data by assuming a specific generative process, such as the Chinese restaurant process~\cite{Blei:chinese} for Latent Dirichlet Allocation~\cite{Blei:LDA,Ahmed:tweets}, where the regions and their hierarchy are learned from data. We emphasize that, in this paper, we do not try to learn latent geographical regions, but rather aim to describe regions that are known a priori. Moreover, these regions are structured according to a known hierarchy.
\section{Geographical Hierarchy Model}
\label{section:model}
\noindent Finding terms that are specifically descriptive of a region is a challenging task. For example, the tag \texttt{paname} (nickname for Paris) is frequent across many neighborhoods of Paris but is specific to the city, rather than to any of the neighborhoods in which the photos labeled with this tag were taken. The tag \texttt{blackandwhite} is also a frequent tag, which describes a photography style rather than any particular region in the world where the tag was used. The main challenge we face is to discriminate between terms that (i) specifically describe a region, (ii) those that specifically describe a sibling, parent or child region, and (iii) those that are not descriptive for any particular region. To solve this problem, we introduce a hierarchical model that is able to isolate the terms that are specifically descriptive of a given region by considering not only the terms used within the region, but also those used elsewhere. Our model goes beyond the distinction between only global and local content~\cite{Pang2011} by probabilistically assigning terms to a particular level in the hierarchy. In this paper we present our model from a geographic perspective, even though it is generic in nature: in principle any kind of hierarchy can be used where labeled instances are initially assigned to the leaf nodes.
\subsection{Definitions}
\begin{description}
\item[tag] is the basic semantic unit and represents a term from a vocabulary indexed by $t \in \lbrace 1, \ldots, T \rbrace$.
\item[neighborhood] is the basic spatial unit. The semantic representation of a neighborhood is based on the collection of tags associated with the geotagged photos taken within the neighborhood. Consequently, we associate with each neighborhood $n \in \lbrace 1, \ldots, N \rbrace$, where $N$ is the number of neighborhoods, the vector $\bm{x}_{n} \in \mathbb{N}^T$, where $x_{nt}$ is the number of times tag $t$ is observed in neighborhood $n$.
\item[geo-tree] is the tree that represents the geographical hierarchy. Each node $v$ of the geo-tree is associated with a multinomial distribution $\theta_v$ such that $\theta_v(t)$ is the probability to sample the tag $t$ from node $v$. We designate the set of nodes along the path from the leaf $n$ to the root of the geo-tree as $R_n$, whose cardinality is $\vert R_n \vert$. We use the hierarchy Country $\rightarrow$ City $\rightarrow$ Neighborhood, illustrated in Figure~\ref{figure:geotree}, as the leading example throughout this paper.
\end{description}
\begin{figure}
\qsetw{0.01cm}
\Tree[.Country
[.{City A} [.{Nbrhood A.1} [.{\textit{Tags}} ]] [.{Nbrhood A.2} [.{\textit{Tags}} ] ]]
[.{City B} [.{Nbrhood B.1} [.{\textit{Tags}} ] ]
[.{$\dots$} ]
[.{$\dots$} ]]]
\caption{We represent the geographical hierarchy Country $\rightarrow$ City $\rightarrow$ Neighborhood as a geo-tree. Each node $v$ of the geo-tree is associated to a multinomial distribution over tags $\theta_v$.}
\label{figure:geotree}
\end{figure}
\subsection{Model}
\noindent The principle behind our model is that the tags observed in a given node are a mixture of tags specific to the node and of tags coming from different levels in the geographic hierarchy. We represent the tags as multinomial distributions associated with nodes that are along the path from leaf to root. This model enables us to determine the tags that are specifically descriptive of a given node, as well as quantify their level of specificity or generality. The higher a node is in the tree, the more generic its associated tags are, whereas the lower a node is, the more specific its tags are. The tags associated with a node are shared by all its descendants.
We can formulate the random mixture of multinomial distributions with respect to a latent (hidden) variable
$z \in \lbrace 1, \ldots, \vert R_n \vert \rbrace$ that indicates for each tag the level in the geo-tree from which it was sampled. For a tag $t$ observed in neighborhood $n$, $z = 1$ means that this tag $t$ was sampled from the root node corresponding to the most general distribution $\theta_{\text{root}}$, whereas $z = \vert R_n \vert$ implies that the tag $t$ was sampled from the most specific neighborhood distribution $\theta_{n}$. This is equivalent to the following generative process for the tags of neighborhood $n$: (i) randomly select a node $v$ from the path $R_n$ with probability $p(v \vert n)$, and (ii) randomly select a tag $t$ with probability $p(t \vert v)$. We suppose that tags in different neighborhoods are independent of each other, given the neighborhood in which they were observed. Consequently, we can write the probability of the tags $\bm{x}_1, \ldots, \bm{x}_n$ observed in the $N$ different neighborhoods as
\begin{equation}
\label{equation:inter}
p(\bm{x}_1, \dots, \bm{x}_n) = p(\bm{x}_1) \dots p(\bm{x}_N) .
\end{equation}
We further assume tags are independent of each other given their neighborhood, such that we can write the probability of the vector of tags $\bm{x}_n$ as:
\begin{equation}
\label{equation:intra}
p(\bm{x}_n) = \prod_{t=1}^{T} p(t \vert n)^{x_{nt}} .
\end{equation}
The probability of observing tag $t$ in neighborhood $n$ is then:
\begin{align}
\label{equation:mixture}
p(t \vert n) & = \sum_{v \in R_n} p(t \vert v) \, p(v \vert n) = \sum_{v \in R_n} \theta_{v}(t) \, p(v \vert n) ,
\end{align}
which expresses the fact that the distribution of tags in neighborhood $n$ is a random mixture over the multinomial distributions $\theta_{v}$ associated with the nodes along the path from the leaf $n$ up to the root of the geo-tree. The random mixture coefficients are the probabilities $p(v \vert n)$. By combining \eqref{equation:inter}, \eqref{equation:intra} and \eqref{equation:mixture} we obtain the log-likelihood of the data:
\begin{align}
\label{equation:likelihood}
\log p(\bm{x}_1, \dots, \bm{x}_N) &= \log \prod_{n=1}^{N} \prod_{t=1}^{T} p(t \vert n)^{x_{nt}} \nonumber \\
&= \sum_{n=1}^{N} \sum_{t =1}^{T} x_{nt} \log p(t \vert n) \nonumber \\
& = \sum_{n=1}^{N} \sum_{t =1}^{T} x_{nt} \log \sum_{v \in R_n} \theta_{v}(t) \, p(v \vert n) .
\end{align}
\paragraph*{Classification}
For a tag $t$ observed in neighborhood $n$, we can compute the posterior probability that it was generated from a given level $z$ of the geo-tree: we apply Bayes' rule to compute the posterior probability of the latent variable $z$
\begin{equation}
\label{eq:posterior}
p(z \vert t, n) = \dfrac{p(t \vert n, z) p(z \vert n)}{\sum_{z = 1}^{\vert R_n \vert} p(t \vert n, z) p(z \vert n)} .
\end{equation}
Since we assume that the distribution of tags in neighborhood $n$ is a random mixture over the distributions $\theta_{v}$ associated with nodes that forms the path $R_{n}$ from leaf $n$ up to the root of the geo-tree, the probability~\eqref{eq:posterior} is equal to
\begin{equation*}
p(v' \vert t, n) = \dfrac{\theta_{v'}(t) \, p(v' \vert n)}{ \sum_{v \in R_n} \theta_{v}(t) \, p(v \vert n)} ,
\end{equation*}
where $v'$ is the node in $R_n$ that is at level $z$. Classifying a tag $t$ observed in leaf $n$ amounts to choosing the node $v' \in R_n$ that maximizes the posterior probability $p(v' \vert t, n)$.
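For illustration, the classification step can be written compactly as follows (a sketch; \texttt{theta\_path} and \texttt{mix\_path} denote the learned distributions $\theta_v$ and mixture coefficients $p(v \vert n)$ restricted to the path $R_n$, and the variable names are ours):
\begin{verbatim}
import numpy as np

def classify_tag(t, theta_path, mix_path):
    """Assign tag t to the most likely node on the path R_n.

    theta_path: array of shape (len(R_n), T); row v holds theta_v.
    mix_path:   array of shape (len(R_n),); entry v is p(v | n).
    Returns the index of the path node (0 = root, last = leaf)
    that maximizes the posterior p(v | t, n).
    """
    joint = theta_path[:, t] * mix_path   # theta_v(t) * p(v | n)
    posterior = joint / joint.sum()
    return int(np.argmax(posterior))
\end{verbatim}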
\subsection{Learning}
\noindent The parameters of our model are the multinomial distributions $\theta_v$ associated with each node $v$ of our geo-tree and the mixture coefficients $p(v \vert n)$. In order to learn the model parameters that maximize the likelihood of data, given by~\eqref{equation:likelihood}, we use the Expectation-Maximization (EM) algorithm. This iterative algorithm increases the likelihood of the data by updating the model parameters in two phases: the E-phase and M-phase. The structure of our model allows us to derive closed form expressions for these updates. Due to lack of space, we omit to show our computations and refer the interested reader to Bishop~\cite{Bishop:PCML}.
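For concreteness, one EM iteration for this model could look as follows (a minimal sketch of the standard updates for a mixture of multinomials tied along root-to-leaf paths; the Dirichlet smoothing discussed next is omitted for brevity, and the variable names are ours):
\begin{verbatim}
import numpy as np

def em_step(X, paths, theta, mix):
    """One EM iteration for the GHM.

    X:     (N, T) tag counts per leaf (neighborhood).
    paths: list of length N; paths[n] holds the node indices in R_n.
    theta: (V, T) multinomial parameters, one row per tree node.
    mix:   list of length N; mix[n] holds the coefficients p(v | n).
    """
    new_theta = np.zeros_like(theta)
    new_mix = [np.zeros_like(m) for m in mix]
    for n, nodes in enumerate(paths):
        # E-step: responsibilities p(v | t, n) for every tag.
        joint = theta[nodes, :] * mix[n][:, None]      # (|R_n|, T)
        resp = joint / joint.sum(axis=0, keepdims=True)
        # M-step accumulators: expected counts per node and per leaf.
        counts = resp * X[n][None, :]                  # (|R_n|, T)
        new_theta[nodes, :] += counts
        new_mix[n] = counts.sum(axis=1)
    new_theta /= new_theta.sum(axis=1, keepdims=True)
    new_mix = [m / m.sum() for m in new_mix]
    return new_theta, new_mix
\end{verbatim}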
If a node in the geo-tree might contain a tag that was not observed in the training set, maximum likelihood estimates of the multinomial parameters would assign a probability of zero to such a tag. In order to assign a non-zero probability to every tag, we ``smooth'' the multinomial parameters: we assume that the distributions $\theta_v$ are drawn from a Dirichlet distribution. The Dirichlet distribution is a distribution of $T$-dimensional discrete distributions parameterized by a vector $\bm{\alpha}$ of positive reals. Its support is the closed standard $\left(T-1 \right)$ simplex, and it has the advantage of being the conjugate prior of the multinomial distribution. In other words, if the prior of a multinomial distribution is the Dirichlet distribution, the inferred distribution is a random variable distributed also as a Dirichlet conditioned on the observed tags. In order to avoid favoring one component over the others, we choose the symmetric Dirichlet distribution as a prior. We also assume a symmetric Dirichlet prior for the mixture coefficients $p(v \vert n)$.
\paragraph*{Complexity}
Learning the parameters of our model with the EM algorithm has, for each iteration, a worst-case running-time complexity of $\bigO{N T D}$, where $N$ is the number of leaves of the tree, $T$ the vocabulary cardinality and $D$ the tree depth. This is an important strength of the GHM: the time complexity of training scales linearly with the number of leaves of the tree and the number of \textit{unique} tags, rather than with the number of tag instances.
Furthermore, the number of iterations needed for EM to converge, when trained on the Flickr dataset introduced in Section~\ref{section:dataset}, is typically around 10.
\section{Evaluation}
\label{section:evaluation}
\noindent
In this section, we apply our model to a large collection of geotagged Flickr photos taken in neighborhoods of San Francisco and New York City.
The quality of the descriptive tags found by the GHM strongly supports the validity of our approach, which we further confirm using the results
of the user study in Section~\ref{section:userstudy}: we approximate the probability that GHM classifies a tag ``correctly'' by the average
number of times its classification matches the experts' classification. Moreover, the GHM allows us to quantify the uniqueness of these neighborhoods and to obtain a mapping between neighborhoods in different cities (Section~\ref{section:classification}), enabling us to answer questions such as ``How unique is the Presidio neighborhood in San Francisco?'' or ``How similar is the Mission in San Francisco to East Village in New York City?''. Finally, we compare the performance of our model with the performance of other methods in classifying data generated according to a given hierarchy (Section~\ref{section:synthetic}).
\subsection{Dataset and classification}
\label{section:classification}
\noindent We now apply our model to a large dataset of user-generated content to surface those terms considered to be descriptive for different regions as seen through the eyes of the public.
\subsubsection*{Flickr dataset}
\label{section:dataset}
\noindent Describing geographical areas necessitates a dataset that associates rich descriptors with locations. Flickr provides an ample collection of geotagged photos and their associated user-generated tags. We couple geotagged photos with neighborhood data from the city planning departments of San Francisco\footnote{\url{https://data.sfgov.org} (Accessed 11/2014)} and New York City\footnote{\url{http://nyc.gov} (Accessed 11/2014)}, and focus on the neighborhoods of San Francisco (37 neighborhoods) and Manhattan (28 neighborhoods). Flickr associates an accuracy level with each geotagged photo that ranges from 1 (world level) to 16 (street level). In order to correctly map photos to neighborhoods, we only focus on photos whose accuracy level exceeds neighborhood level. We acquired a large sample of geotagged photos taken in San Francisco (4.4 million photos) and Manhattan (3.7 million photos) from the period 2006 to 2013. We preprocessed the tags associated with these photos by first filtering them using a basic stoplist of numbers, camera brands (e.g.\ Nikon, Canon), and application names (e.g.\ Flickr, Instagram), and then by stemming them using the Lancaster\footnote{\url{http://comp.lancs.ac.uk/computing/research/stemming/}(Accessed 11/2014)} method. We further removed esoteric tags that were only used by a very small subset of people (less than 10). To limit the influence of prolific users, we finally ensured that no user could contribute the same tag to a neighborhood more than once a day. The resulting dataset contains around 20 million tags, of which 7,936 are unique, where each tag instance is assigned to a neighborhood. These tags form the vocabulary we use for describing and comparing different regions. The geo-tree we build from this dataset has 3 levels: level one with a single root node that corresponds to the United States, level two composed of two nodes that correspond to San Francisco and Manhattan, and level three composed of 65 leaves that correspond to the neighborhoods of these cities.
\subsubsection*{Classifying tags}
\noindent We applied our model to the Flickr dataset in order to find tags that are specifically descriptive of a region. We show the results for two neighborhoods in San Francisco---Mission and Golden Gate Park--- and two neighborhoods in Manhattan---Battery Park and Midtown---in particular. In Table~\ref{tab:TopTags} we show the top 10 \textit{most likely} tags for each of these four neighborhoods, where each tag is further classified to the \textit{most likely} level that have generated it. Effectively, each tag observed in a neighborhood is ranked according to the probability $p(t \vert n)$. Then, we apply~\eqref{eq:posterior} to compute the posterior probability of the latent variable $p(z \lvert t,n)$ and classify the tag accordingly. Recall that the value of the latent variable $z$ represents the level of the geo-tree from which it was sampled. For example, a tag observed in neighborhood $n$ is assigned to the neighborhood level if the most likely distribution from which it was sampled is the neighborhood distribution $\theta_n$. We consider such a tag as being specifically descriptive of neighborhood $n$.
\begin{table}
\centering
\begin{small}
\begin{tabular}{l l | l l}
\toprule
\textbf{Mission} & \textbf{GG Park} & \textbf{Battery Park} &
\textbf{Midtown} \\
\midrule
california \ddag & california \ddag & newyork \ddag & newyork \ddag \\
\textbf{mission} & \textbf{flower} & manhattan \ddag & manhattan \ddag \\
sf \ddag & \textbf{park} & \textit{usa} \dag & \textit{usa} \dag \\
\textit{usa} \dag & sf \ddag & \textbf{wtc} & \textbf{midtown} \\
\textbf{graffiti} & \textit{usa} \dag & \textbf{brooklyn} & \textbf{skyscraper} \\
\textbf{art} & \textbf{museum} & \textbf{downtown} & \textbf{timessquare} \\
\textbf{mural} & \textbf{tree} & \textbf{bridge} & \textit{light} \dag \\
\textbf{valencia} & \textbf{ggpark} & gotham \ddag & \textbf{moma} \\
\textbf{food} & \textbf{deyoung} & \textbf{memorial} & \textbf{broadway} \\
car \ddag & architecture \ddag & \textit{light} \dag & \textbf{rockefeller} \\
\bottomrule
\end{tabular}
\end{small}
\caption{For each neighborhood, we show the $10$ most likely tags ranked according to their probability $p(t \vert n)$. We use the posterior probability $p(z \vert t,n)$ in order to assign each tag to the distribution that maximizes this posterior probability: country (\dag), city (\ddag) or neighborhood (\textbf{bold}).}
\label{tab:TopTags}
\end{table}
Despite the fact that the tag \texttt{california} is the most likely (frequent) tag in both Mission and Golden Gate Park, it is not assigned to the neighborhood distribution but rather to the city distribution (we presume, if we were to add a new State level to the geo-tree, that the tag \texttt{california} would most likely be assigned to it). This confirms the notion that the most frequent tag in a neighborhood does not necessarily describe it specifically. We see that \texttt{architecture} is applicable not only to the buildings in Golden Gate Park---notably the De Young---but it also is a descriptor for San Francisco in general. Our method is able to discriminate between frequent and specifically descriptive tags, whereas a naive approach would still consider a very frequent tag in a neighborhood as being descriptive. The same observation is valid for the tags \texttt{usa} and \texttt{light}, which are tags that are too general and therefore not specifically descriptive of a neighborhood. The tag \texttt{car} may seem misclassified at the city level in San Francisco, until one realizes the city is well known for its iconic cable cars. Considering that the tags \texttt{art}, \texttt{mural} and \texttt{graffiti} are classified as being specific to Mission is not surprising, because this neighborhood is famous for its street art scene. The most probable tags that are specifically descriptive of Midtown in Manhattan include popular commercial zones, such as Rockefeller Center, Times Square and Broadway, as well as the museum of modern art (MoMa). The tag \texttt{gotham}, one of the most observed tags in Battery Park, is assigned to city level, which is not unexpected given that Gotham is one of New York City's nicknames.
\subsubsection*{Neighborhood uniqueness}
\noindent \textit{If you are interested in visiting the most unique neighborhoods in San Francisco, which ones would you choose?} With our framework, we can quantify the uniqueness of neighborhood $n$ by using the probability $p(z \vert n)$ of sampling local tags, where $z = \vert R_n \vert$.
In fact, a high probability indicates that we sample often from the distribution of local tags $\theta_n$, and can therefore be interpreted as an indicator of a more unique local character. We show a map of the city of San Francisco in Figure~\ref{figure:sf}, in which each neighborhood is colored proportionally to its local-mixture coefficient. The darker the color is, the more unique the personality of the neighborhood is. The four most unique neighborhoods in San Francisco are Golden Gate Park (0.70), Presidio (0.67), Lakeshore (0.65) and Mission (0.60), which is not surprising if you know these neighborhoods. Golden Gate Park is the largest urban park in San Francisco, famous for its museums, gardens, lakes, windmills and beaches. The Presidio is a national park and former military base known for its forests and scenic points overlooking the Golden Gate Bridge and the San Francisco Bay. Lakeshore is known for its beaches, the San Francisco zoo and also for San Francisco State University. The Mission is famous for its food, arts, graffiti, and festivals.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{map_local_mix_coeff.pdf}
\caption{Neighborhoods of San Francisco colored according to their local mixture coefficient $p(z \vert n)$, where $z = \vert R_n \vert$. A darker color indicates a larger local mixture coefficient (`uniqueness').}
\label{figure:sf}
\end{center}
\vspace{-6pt}
\end{figure}
\subsubsection*{Mapping neighborhoods between cities}
\noindent \textit{Given a neighborhood in San Francisco, what is the most similar neighborhood in Manhattan?} To answer such a question, we can use our framework to find a mapping between similar neighborhoods that are in different cities and even different countries. Recall that each neighborhood $n$ is described by its local distribution $\theta_n$. In order to compare two neighborhoods $n$ and $n'$, we compute the cosine similarity between their respective local distributions $\theta_n$ and $\theta_{n'}$, given by:
\begin{equation}
\label{eq:cosim}
\text{sim}(\theta_n,\theta_{n'}) =
\frac{\sum_{t=1}^{T}\theta_n(t) \,\theta_{n'}(t)}{\sqrt{\sum_{t=1}^{T}\theta_n(t)^2} \sqrt{\sum_{t=1}^{T}\theta_{n'}(t)^2}} .
\end{equation}
The similarity range is $[0,1]$, with $\text{sim}(\theta_n,\theta_{n'})=1$ if and only if $\theta_n=\theta_{n'}$. Table~\ref{table:mapping} shows the mapping from six San Francisco neighborhoods to the most similar neighborhoods in Manhattan. We also include the second most similar neighborhood when the similarities are very close. In order to give some intuition about
the mapping obtained, we also show the top five common local tags obtained by ranking the tags $t$ according to the product of tag probabilities $\theta_n(t) \, \theta_{n'}(t)$. For example, the East Village in Manhattan is mapped to Mission in San Francisco; the strongest characteristics they share are graffiti/murals, food, restaurants and bars. Moreover, despite the major differences between San Francisco and Manhattan, their Chinatowns are mapped to each other and exhibit a highly similar distribution of local tags (cosine similarity of 0.85). Finally, it is not surprising that Treasure Island for San Francisco and Roosevelt Island for Manhattan are mapped to each other, since both are small islands located close to each city. We however emphasize that the top \textit{common} local tags between neighborhoods are not necessarily most descriptive for the \textit{individual} neighborhoods. The top tags may rather provide a shallow description (e.g.\ dragons in Chinatown, boats for island) and are useful to gain some insight into the mapping obtained, but we are aware that every Chinatown is unique and not interchangeable with another one.
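As an illustration, finding the most similar counterpart of a neighborhood and the top ``common'' local tags boils down to a few lines (a sketch; for the cross-city mapping one would restrict the candidate rows to the other city, and \texttt{tags} is the list of tag strings):
\begin{verbatim}
import numpy as np

def match_neighborhood(theta_local, n, tags, top=5):
    """Most similar neighborhood to n, with their top common local tags.

    theta_local: (N, T) matrix whose rows are the local distributions
                 theta_n of all candidate neighborhoods.
    """
    v = theta_local[n]
    norms = np.linalg.norm(theta_local, axis=1) * np.linalg.norm(v)
    sims = (theta_local @ v) / norms          # cosine similarities
    sims[n] = -1.0                            # exclude the neighborhood itself
    best = int(np.argmax(sims))
    common = np.argsort(theta_local[best] * v)[::-1][:top]
    return best, sims[best], [tags[t] for t in common]
\end{verbatim}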
\begin{table*}
\centering
\begin{small}
\begin{tabular}{l | l | l}
\toprule
\textbf{San Francisco} & \textbf{Manhattan} & \textbf{Top common
local tags} \\
\midrule
Mission & East Village (0.23) & \texttt{graffiti}, \texttt{food}, \texttt{restaurant}, \texttt{mural}, \texttt{bar} \\
Golden Gate Park & Washington Heights (0.26), Upper West Side (0.22) & \texttt{park}, \texttt{museum}, \texttt{nature}, \texttt{flower}, \texttt{bird} \\
Financial District & Battery Park (0.29), Midtown Manhattan (0.27) & \texttt{downtown}, \texttt{building}, \texttt{skyscraper}, \texttt{city}, \texttt{street} \\
Treasure Island & Roosevelt Island (0.38) & \texttt{bridge}, \texttt{island}, \texttt{water}, \texttt{skylines}, \texttt{boat} \\
Chinatown & Chinatown (0.85) & \texttt{chinatown}, \texttt{chinese}, \texttt{downtown}, \texttt{dragons}, \texttt{lantern} \\
Castro & West Village (0.06) & \texttt{park}, \texttt{gay}, \texttt{halloween}, \texttt{pride}, \texttt{bar} \\
\bottomrule
\end{tabular}
\end{small}
\caption{Mapping from San Francisco neighborhoods to the most similar ones in Manhattan. For each neighborhood pair $n$ and $n'$, we give the cosine similarity between their local distributions $\theta_n$ and $\theta_{n'}$, and list the top ``common'' local tags ranked according to the product of tag probabilities $\theta_n(t) \, \theta_{n'}(t)$.}
\label{table:mapping}
\end{table*}
\subsection{Experimental evaluation}
\label{section:synthetic}
\noindent Evaluating models such as the GHM on user-generated content is hard because of the absence of ground truth. There is no dataset that objectively associates regions with specifically descriptive terms, or assigns terms to levels in a geographic hierarchy. This is due to the intrinsic subjectivity and vagueness of the human conception of regions and their descriptions~\cite{hollenstein2010exploring}. In the absence of ground truth, a classic approach is to generate a dataset with known ground truth, and then use it to evaluate the performance of different classifiers. We follow the generative process presented in Algorithm~\ref{alg:data} using the geo-tree (3 levels, 68 nodes) built from the dataset presented in Section~\ref{section:dataset}. For each node $v$ in our geo-tree, we sample the distribution of tags $\theta_v$ from the symmetric Dirichlet distribution $Dir(\alpha)$. We set $\alpha = 0.1$ to favor sparse distributions $\theta_v$. If the node is a leaf, i.e.\ it represents a neighborhood, we also sample the mixture coefficients $p(z \vert v)$ from a symmetric Dirichlet distribution $Dir(\beta)$. We set $\beta = 1.0$ to sample the mixture coefficients uniformly over the simplex and obtain well-balanced distributions. Given the distributions that describe the geo-tree, we can then generate the tags in each neighborhood. We vary the number of samples per neighborhood in order to reproduce a realistic dataset that might be very unbalanced: data is sparse for some neighborhoods and very dense for others. For each neighborhood $n$, we sample uniformly the continuous random variable $\gamma$, which represents the order of magnitude of the number of tags in neighborhood $n$, from the interval $\left[3,6 \right]$. The endpoints of the support of $\gamma$ are based on the minimum and maximum values observed in the Flickr dataset presented in Section~\ref{section:dataset}, such that an expected number of $18.8 \times 10^6$ tags will be generated, similar to the quantity of tags available in the Flickr dataset. Once the number of tags $\nu$ to be generated is fixed, we sample, at each iteration, a level of the geo-tree and then a tag from the distribution associated with the corresponding node.
\begin{algorithm}
\KwIn{Geo-tree V, neighborhoods $n \in \lbrace 1, \ldots, N \rbrace$, tags $t \in \lbrace 1, \ldots, T \rbrace$, hyper-parameters $\alpha, \beta$.}
\KwOut{Tags observed in each neighborhood $\bm{x}_1, \ldots, \bm{x}_N$.}
\For{$v \in V$}{%
Sample distribution $\theta_v \sim Dir(\alpha)$\;
\If{v is a leaf}{Sample mixture coefficients $p(z \vert v) \sim Dir(\beta)$\;}
}
\For{$n \in \lbrace 1, \ldots, N \rbrace$}{%
Initialize $\bm{x}_n = \bm{0}$\;
Sample order $\gamma \sim \mathcal{U} \left[3,6 \right]$\;
Set $\nu = \lfloor 2 \times 10^{\gamma} \rfloor$\;
\While{$ \sum_t x_{nt} \leq \nu $}{%
Sample tree level $z \sim p(z \vert n)$\;
Sample tag $t \sim p(t \vert z,n)$\;
Increment tag count $x_{nt} \gets x_{nt} + 1$\;
}
}
\caption{Generating tags in neighborhoods}
\label{alg:data}
\end{algorithm}
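A minimal Python sketch of this generative process for a single neighborhood, equivalent to the sampling loop in Algorithm~\ref{alg:data} but written in multinomial form and using illustrative parameter values, is given below.
\begin{verbatim}
import numpy as np

def generate_neighborhood_tags(path_thetas, mix, T, rng):
    # Number of tags: nu = floor(2 * 10^gamma), with gamma ~ U[3, 6].
    gamma = rng.uniform(3.0, 6.0)
    nu = int(2 * 10 ** gamma)
    # Number of tags drawn from each level of the path R_n.
    per_level = rng.multinomial(nu, mix)
    x = np.zeros(T, dtype=int)
    for theta, m in zip(path_thetas, per_level):
        x += rng.multinomial(m, theta)  # tags drawn from that level's node
    return x

rng = np.random.default_rng(0)
T = 1000                                   # vocabulary size (illustrative)
path_thetas = [rng.dirichlet(0.1 * np.ones(T)) for _ in range(3)]  # path R_n
mix = rng.dirichlet(1.0 * np.ones(3))      # p(z | n) over the three levels
x_n = generate_neighborhood_tags(path_thetas, mix, T, rng)
\end{verbatim}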
We can now assess the performance of a model by quantifying its ability to correctly predict the level in the geo-tree from which a tag observed in a neighborhood was sampled. In addition to our model, we consider the following methods:
\begin{description}
\item[Naive Bayes (NB)] is a simple yet core technique in information retrieval~\cite{Lewis:1998}. Under this model, we assume that the tags observed in a class (node) are sampled independently from a multinomial distribution. Each class is therefore described by a multinomial distribution learnt from the count of tags that are observed in that class. However, since we do not use the class membership to train our methods, we assign a tag $t$, observed in neighborhood $n$, to all the classes (nodes) along the path $R_n$, which amounts to having a uniform prior over the classes.
\item[Hierarchical TF-IDF (HT)] is a variant of TF-IDF that incorporates the knowledge of the geographical hierarchy. This variant was used in the TagMaps method~\cite{rattenbury2009methods} to find tags that are specific to a region at a given geographical level. The method assigns a higher weight to tags that are frequent within a region (node) compared to the other regions at the same level in the hierarchy. We are able to represent each node with a normalized vector in which each tag $t$ has a weight that encodes its descriptiveness.
\end{description}
For the classification, we map a tag $t$ observed in the leaf $n$ to the level $\hat{z}$ that maximizes the probability $p(z \vert t, n)$ (NB and GHM), or the tag weight (HT). Using our ground truth, we can then approximate the probability of correct classification $p(\hat{z} = z)$ by the proportion of tags that were correctly classified. In our evaluation, we repeat the following process 1000 times: we first generate a dataset, hold out $10\%$ of the data for test purposes and train the model on the remaining $90\%$. For fair comparison, we initialize and smooth the parameters of each method similarly. Then, we measure the classification performance of each method. The final results, shown in Table~\ref{tab:pref}, are therefore obtained by averaging the performance of each method over 1000 different datasets.
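For the GHM and NB, the decision rule described above amounts to a posterior maximisation over the levels of the path $R_n$, which can be sketched as follows (variable names are illustrative):
\begin{verbatim}
import numpy as np

def classify_tag(t, path_thetas, mix):
    # p(z | t, n) is proportional to p(z | n) * p(t | z, n); the tag is
    # assigned to the level with the largest posterior probability.
    scores = np.array([mix[z] * path_thetas[z][t]
                       for z in range(len(path_thetas))])
    posterior = scores / scores.sum()
    return int(np.argmax(posterior)), posterior
\end{verbatim}
For HT, the analogous rule simply picks the level whose node assigns the largest weight to the tag.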
\begin{table}
\begin{center}
\begin{tabular}{ r | c }
\toprule
& Classification Accuracy (std) \\
\midrule
Random & 0.33 (0.00) \\
NB & 0.51 (0.02) \\
HT & 0.59 (0.02)\\
\midrule
GHM & \textbf{0.75} (0.01) \\
\bottomrule
\end{tabular}
\caption{The average classification accuracy is computed, for each method, over 1000 generated datasets. We also indicate this accuracy if we classify tags uniformly at random.}
\label{tab:pref}
\end{center}
\end{table}
Our GHM model is the most accurate at classifying tags to the correct level, outperforming NB by 47\% and HT by 27\% in relative terms. Even though both GHM and HT take advantage of the geographical hierarchy in order to classify the tags, the probabilistic nature of GHM enables a more resilient hierarchical clustering of the data, while the heuristic approach of HT suffers from overfitting. For example, if the number of samples available for a neighborhood is low, HT might overfit the training data by declaring a frequent tag as being characteristic, although not enough samples are available to support this conclusion. This is not the case for GHM, because the random-mixture assumptions yield a resilient estimate of the distributions, which declare a tag as characteristic of a given level only if there is enough evidence for it. This observation is strengthened if we choose the maximum order $\gamma$ of the number of tags per node to be $4$ instead of $6$: the classification accuracy of GHM decreases by only $5\%$ (0.71), whereas the performance of HT decreases by $13\%$ (0.51). Taken together, these results suggest that, if the data observed in a neighborhood is a mixture of data generated from different levels of a hierarchy that encodes the specificity/generality of the data, our method can accurately associate a tag with the level from which it was generated.
\section{Perception focused user study}
\label{section:userstudy}
\noindent The results produced by our model might not necessarily be intuitive or expected, especially in the light of people's differing individual geographic perspectives~\cite{montello2003s,Stedman2002,egenhofer1995naive}. Trying to objectively evaluate these, without taking into account human subjectivity and prior knowledge, could be misleading. For the Castro neighborhood in San Francisco, for example, the GHM classified the tag \texttt{milk} as specifically descriptive. Someone who is not familiar with Harvey Milk, the first openly gay person to be elected to public office in California and who used to live in the Castro, would most probably not relate this tag to the neighborhood. We thus need to understand the correspondences and gaps between our model's results and human reasoning about regions.
We conduct a user-focused study to explore the premise that the posterior probability of a tag being sampled from the distribution associated with a region is indicative of how canonically descriptive that tag is of the region. We further aim to identify potential challenges in user-facing applications of the model, and to uncover potential extensions to our model. We conducted ten interviews and a survey with local residents of the San Francisco Bay Area, focusing on their reactions to the tags that our model surfaced. To assess the performance of our model while reducing the bias of subjectivity, we used the results of our user survey to obtain the human classification of tags to nodes in the hierarchy, allowing us to approximate the probability that our model classified a tag correctly by the fraction of tags for which it corresponded with the human classification. We use the interviews to understand the reasons behind matches and mismatches.
\subsection{Interview and survey methodology}
\noindent Interviewees and survey respondents reacted to a collection of 32 tags per neighborhood. To ensure a certain diversity among the tags presented to the users, we randomly selected a subset of tags that are classified by the GHM as being descriptive of (i) the neighborhood level (e.g.,\ \texttt{graffiti}), (ii) the city level (e.g.,\ \texttt{nyc}), (iii) the country level (e.g.,\ \texttt{usa}), or (iv) another neighborhood (e.g.,\ \texttt{mission} for the Castro neighborhood). The probability of choosing a given tag $t$ was proportional to the probability $p(t \vert n)$. The survey aims to highlight to what extent locals' perspectives match the results produced by our model, while the interviews highlight the reasons why and allow us to better understand the human perception of descriptiveness.
\paragraph*{Interview procedure}
Our semi-structured interviews focused on how people describe neighborhoods, and investigated their reasoning behind the level of specificity associated with a tag in a given neighborhood. Each one-to-one interview lasted 25--45 minutes. To ensure a wide range of perspectives, from (former) locals to newcomers, we interviewed a total of 10 people (5F, 5M; ages 26--62, $\mu=37$, $\sigma=12$) who were living or had lived in the San Francisco Bay Area for periods ranging from two months to 62 years. Three of the participants worked in the technology industry, two were students, one was a real estate agent, one a building manager and one a photographer. Each interview covered three different neighborhoods chosen by the participant out of 11 well-known San Francisco neighborhoods. However, one participant described only one neighborhood (due to time constraints), while another participant described four of them. Our interviews addressed:
\begin{enumerate}
\item Participants' characterization of the neighborhoods using their own words to get an understanding of the factors that are important to them.
\item Their considerations in whether a tag is perceived as specifically descriptive or not. Interviewees were first asked to classify the 32 tags presented to them as (not) specifically descriptive for the neighborhood, and explain the reasons. Then, they were shown the subset of tags that were classified by our model as specifically descriptive.
\end{enumerate}
We emphasize the fact that the participants were not told about our model, nor that the terms presented to them were actually Flickr tags. This provided broader insight into the factors that led them to classify terms as (not) specifically descriptive of a neighborhood, and allowed for identification of factors not yet addressed by the model, without biasing their judgment towards the assumptions we made (i.e., hierarchical mixture of tags). The interviews were recorded, and transcriptions were iteratively analyzed, focusing on the identification of themes in interviewees' reasoning affecting classifications of tags as (non-)descriptive.
\paragraph*{Survey procedure}
A total of 22 San Francisco Bay Area residents (5F, 17M; ages 22--39, $\mu=33$, $\sigma=4.8$), who had lived there for an average of 5.6 years ($\sigma=4.2$), participated in our survey about San Francisco and three of its neighborhoods (Mission, Castro and North Beach). Of these 22 respondents, 18 provided tag classifications for the Mission neighborhood, 12 for the Castro and 11 for North Beach. This resulted in 1291 tag classifications, of which 561 for the Mission, 381 for the Castro and 349 for North Beach. The survey asked respondents to describe each neighborhood with their own words using open text fields, and then to classify the 32 tags presented to them as descriptive for a given neighborhood, for a higher level (the city or country), or for another neighborhood. They could also indicate if they did not find the tag descriptive for any level, or did not understand its meaning.
\subsection{Results}
\noindent In this section, we present the results of the survey, and provide examples from the interviews to illuminate the reasoning processes on whether a tag is descriptive for a specific region. We compare the tag classifications provided by the GHM with those supplied by the participants, and we specifically focus on disagreements between neighborhood-level tag classifications in order to identify challenges in human interpretation of modeling results, and potential extensions to our model.
\begin{table*}
\centering
\begin{small}
\begin{tabular}{l | rr | rr | rr}
\toprule
& \multicolumn{2}{c|}{\textbf{Mission}}
& \multicolumn{2}{c|}{\textbf{Castro}} & \multicolumn{2}{c}{\textbf{North Beach}} \\
\midrule
Neighborhood-level tag count & 17 & & 17 & & 17 & \\
Neighborhood-level tag classifications & 287 & (100\%) & 201 & (100\%) & 185 & (100\%) \\ \midrule
Tags not understood & 22 & (8\%) & 3 & (1\%) & 0 & (0\%)\\
Tags seen as non-descriptive for any level & 116 & (40\%) & 50 & (25\%) & 62 & (33\%)\\
(Part of) neighborhood & 109 & (39\%) & 67 & (33\%) & 37 & (20\%)\\
Higher-level node (CA/USA) & 24 & (8\%) & 38 & (19\%) & 7 & (4\%)\\
Other neighborhood & 16 & (5\%) & 43 & (22\%) & 79 & (43\%)\\
\bottomrule
\end{tabular}
\end{small}
\caption{The distribution of answers given by survey participants. The (rounded) percentages are computed with respect to the total number of answers given. }
\label{tab:tagclass}
\end{table*}
\begin{table*}
\centering
\begin{small}
\begin{tabular}{l | lr | lr | lr}
\toprule
& \multicolumn{2}{c|}{\textbf{Mission}}
& \multicolumn{2}{c|}{\textbf{Castro}} & \multicolumn{2}{c}{\textbf{North Beach}} \\
\midrule
Tag count & 32 & & 32 & & 32 & \\
Tag classications & 561 & & 381 & & 349 & \\
\midrule
GHM alignment & 0.84 && 0.81 && 0.66 \\
Misaligned tag count & 5 && 6 && 11 \\
Misaligned tags & \texttt{night}, \texttt{coffee}, \texttt{sidewalk} && \texttt{mission}, \texttt{streetcar}, \texttt{dolorespark} && \texttt{embarcadero}, \texttt{bridge}, \texttt{water}\\
& \texttt{cat}, \texttt{brannan} && \texttt{church}, \texttt{sign}, \texttt{night} && \texttt{alcatraz}, \texttt{seal}, \texttt{sea}, \texttt{crab}, \\ &&&&& \texttt{wharf}, \texttt{boat}, \texttt{bay}, \texttt{pier} \\
\bottomrule
\end{tabular}
\end{small}
\caption{Alignment of GHM tag assignments with the majority of survey respondents' assignment for each survey neighborhood. Misalignment would for example be a tag classified as neighborhood level by the GHM while the majority of survey respondents had assigned it to another neighborhood or another level.}
\label{tab:tagalignment}
\end{table*}
\subsubsection{Participant and model congruency}
\noindent The model's premise that locally frequent content is not necessarily specific to that locale was strongly supported by the interviews and the survey. None of the participants classified all of the tags that occurred \emph{frequently} in a neighborhood as specifically \textit{descriptive} of the given neighborhood. This supports the GHM's classification of very frequent but widespread tags as not being specifically descriptive of the neighborhood. Without prompting, interviewees mentioned terms as being too generic or too specific for a given neighborhood. For example, one interviewee (F 28), when describing the Western Addition neighborhood, picked \texttt{haight} as a descriptive tag, ``because there's Haight Street in this neighborhood'', but not the tag \texttt{streets}, as ``there's streets everywhere''. Similar interview examples included: ``\texttt{california} or \texttt{usa} is a generic, or general term'' (F 28), and ``I don't think of ever describing Golden Gate Park as in the USA\@. Unless I'm somewhere far away, but then I wouldn't even say USA, I would say California or San Francisco'' (F 41). This result implies that participants tend to classify tags according to a geographical hierarchy, which supports the validity of the assumptions we make about the hierarchy of tags: tag specificity/generality depends on the hierarchical level from which it was sampled.
We are aware that people will not agree with every classification made by our model. Tags classified by the GHM as specifically descriptive of a neighborhood were not necessarily perceived as such by all respondents; variations occurred between neighborhoods and between participants. As a consequence, evaluating the results of an aggregate model `objectively', as if there were only a single correct representation of a neighborhood, is difficult. However, to place the GHM classification into context with human classification, we use the results of our survey to reduce subjective biases: we obtain a majority-vote human classification by assigning each tag to the class that users chose most often. We then approximate the probability that the GHM classifies a tag `correctly' by the fraction of tags for which its classification matches this human majority classification. For most tags the majority assignment is aligned with the assessment of the model (Table~\ref{tab:tagalignment}). We obtained an average classification correspondence of 0.77, with the highest classification accuracy of 0.84 for the Mission neighborhood. The alignment between the model and human classification for the Castro neighborhood was 0.81. Alignment was lowest for the North Beach neighborhood with a correspondence of 0.66, mainly caused by tag classifications as `another neighborhood' (Table~\ref{tab:tagclass}). Such mismatches occur for a multitude of reasons. First of all, as a very basic requirement, participants have to understand what a tag refers to before they can assign it to a specific level. For example, in the survey, 8\% of the answers given for the Mission neighborhood were ``I don't know what this is'' (Table~\ref{tab:tagclass}). This issue occurred less for the selection of tags for the Castro (1\%) and North Beach (0\%). The terms that users understood were then perceived as either descriptive (i.e.\ assignable to a level in the geographical hierarchy) or non-descriptive (i.e.\ not belonging to any level in the hierarchy). The proportion of individual answers that are ``non-descriptive'' is around $34\%$, which included answers to tags such as \texttt{cat}, \texttt{sticker} and \texttt{wall} for the Mission. The fact that these tags indeed describe content that occurs frequently in the Mission does not imply that they are perceived as descriptive per se by \emph{all} users.
People's local experiences shape and differentiate their perceptions. The interviews illustrated how tags were interpreted in multiple ways; \texttt{church} was taken to refer to a church building, or to the streetcars servicing the J-Church light-rail line, and was not seen as descriptive by the majority of participants (see Castro in Table~\ref{tab:tagalignment}). Local terms surfaced by our model, such as \texttt{walls} in the Mission, represented the neighborhood's characteristic murals to a long-time local interviewee (M 49), whereas this term was meaningless for others. While the majority of survey respondents classified \texttt{night} and \texttt{coffee} as unspecific for the Mission, the same interviewee (M 49), for example, saw \texttt{night} as descriptive of its bars, restaurants and clubs and thought \texttt{coffee} referred to its many coffee shops. Similarly, while one interviewee described North Beach as ``party land'' (M 46), another claimed ``I don't believe there's much of a nightlife there'' (F 26). While these results highlight an opportunity for discovery and recommendation of local content that users may not be aware of, they also mean that a careful explanation might be necessary.
\subsubsection{Model extensions}
\noindent Beyond misunderstanding tags or finding them not specifically descriptive of a certain neighborhood, the interviews provided additional clues about the mismatches identified in our survey, as well as about individual differences. These are organized below into potential opportunities for extending the GHM.
\paragraph*{Sub-region detection}
According to the majority of our survey participants, 11 tags classified by the GHM as specifically descriptive of North Beach were actually descriptive of another neighborhood. From our interviews, we learned that the terms surfaced by our model for the North Beach neighborhood included references to the bay's waterside and to the tourist attraction Fisherman's Wharf (see for example \texttt{wharf}, \texttt{seal}, \texttt{crab}, \texttt{bay}, \texttt{pier} in Table~\ref{tab:tagalignment}). The locals who responded to our survey (see Table~\ref{tab:tagclass}) and also the interviewees clearly made a distinction between Fisherman's Wharf and North Beach, whereas the set of administrative regions we used for our model did not: according to the data from the city planning department of San Francisco, Fisherman's Wharf is not a neighborhood in itself and is simply a sub-region of North Beach. Clarifying region borders and detecting emerging sub-neighborhoods (e.g.\ ``as you go further down south it's a totally different neighborhood'') would improve the quality of the neighborhood tags presented to the user. From the modeling perspective, the GHM already allows for considering predefined sub-regions in certain neighborhoods, since the geo-tree can be unbalanced and have a different depth for certain sub-trees.
\paragraph*{Permeable adjacency and topography}
Accounting for \textit{permeable adjacency} by considering adjacent neighborhoods appears promising. For example, the mismatching tag \texttt{dolorespark} (see Table~\ref{tab:tagalignment}) refers to the park right on the edge of the Mission and the Castro, which, however, was placed outside the Castro neighborhood by the majority of our survey respondents. Interviewees defined neighborhood boundaries differently, and sometimes indicated that they did not know neighborhoods well: ``It's a little difficult because it's right next to the Presidio. I kind of, maybe confuse them from each other'' (F 26). Interviewees extended their reasoning to activities or points-of-interest that could spill over into adjacent neighborhoods. For example, the Golden Gate Bridge, officially part of the Presidio neighborhood but photographed from a wide range of other neighborhoods, was mentioned as a distant-but-characteristic feature: ``you can see the Bay Bridge from there'' (M 46). Extensions that consider wider topography and fuzzy boundaries could improve the quality of the results.
\paragraph*{Representative lower levels}
Interviewees at times cited neighborhoods' unique character as representative of wider regional developments, e.g.\ the gentrification of neighborhoods, or ``California is known for being\ldots very liberal\ldots and it's almost as if a lot it comes from Castro'' (M 35). Hierarchical modeling offers opportunities beyond the identification of locally descriptive content; it could also help find neighborhoods that are representative of a higher level in the geographical hierarchy.
\paragraph*{Temporality}
Time of day, the shifting character of neighborhoods, events, long-term history, and even change itself were referenced by interviewees: ``I think of night, \ldots there's a lot of activity during night time with the bars and the clubs\ldots but before\ldots you wouldn't be caught there at nighttime\ldots 20 years ago it was a different neighborhood.'' (M 49). Since the Flickr tag collection we used spanned multiple years, some aspects of neighborhoods represented in tags surfaced by our model, which were characteristic at one time but are no longer as prominent at present (such as the Halloween celebrations in the Castro neighborhood), were considered out of place (M 32). Yet other interviewees still found such tags characteristic, and freely associated: ``I think of outrageous costumes, and also the costumes that people see when it's not Halloween'' (F 26). Combining hierarchical modeling with features detecting diurnal rhythms~\cite{Grinberg:Extract} or events~\cite{Shamma2011} would provide additional insight into such content changes over time. However, we must be aware that this necessitates a dataset that is much more prolific than the one at hand: we need a sufficient number of geotagged samples per time period.
\paragraph*{The non-tagged}
More recent socio-economic developments were mentioned by the interviewees, but not represented in the dataset of tags. For example, one interviewee argued that, while the terms for SoMa did capture the neighborhood (``I think it's really descriptive and pretty accurate''), he would himself add ``something about startups and expensive apartments that really aren't all that great. \$3500 a month for one bedroom. That would be a good descriptor.'' (M 32). Note that even the absence of a feature could be characteristic: ``There's not a lot to do around there.'' (M 35). Distinguishing, however, between what is absent in a neighborhood and what is not represented in a dataset is a challenge~\cite{Rost2013}. Finding proxies for such features, such as those proposed by Quercia et al.~\cite{Quercia2014}, requires rigor because it might introduce errors and biases. Combining different data sources and model features can however provide additional opportunities.
\section{Discussion and Conclusions}
\label{section:conclusions}
\noindent In this work we proposed a probabilistic model that allows for uncovering terms that are \textit{specifically descriptive} of a region within a given geographical hierarchy. By applying our model to a large-scale dataset of 20 million tags associated with approximately 8 million geotagged Flickr photos taken in San Francisco and Manhattan, we were able to associate each node in the hierarchy to the tags that specifically describe it. Moreover, we used these descriptions to quantify the uniqueness of neighborhoods, and find a mapping between similar but geographically distant neighborhoods. We further conducted interviews and a survey with local residents in order to evaluate the quality of the results given by the GHM and its ability to surface neighborhood-characteristic terms, which span both terms known and unknown to locals. The classification accuracy of GHM, measured with respect to the classification of tags made by locals, provides strong support for the validity of our approach. However, the results of the interviews also highlighted the difference between the performance of a model at classifying community-generated data, and its performance as judged subjectively by an individual. This highlights the importance of taking user subjectivity into account, and the need for explanation and framing. As a consequence, the traditional evaluation of modeling approaches that are fed with user-generated data faces the challenge of both the representation and the subjectivity inherent to vernacular geography. It is important to note that the dataset itself was not formally labeled and rather only contained free-form, often noisy user-generated tags. While unstructured user-generated tags can be cleaned, for instance through canonicalization~\cite{Thomee:2014}, one can utilize other structures inherent in the data to find meaning. Our work takes advantage of the latent geographical patterns exhibited by geotagged data, and shows how we can relate tags to regions within a geographical hierarchy.
\paragraph*{Representation}
We used a specific type of dataset (tags) in this paper to describe geographical regions. We are aware that the tags generated by other online communities that use different social photo sharing services, such as Instagram, may yield different descriptions of regions. For example, if tagging in such a service is more emotive rather than descriptive of the content, our method would surface different local qualities. User-generated data is only representative of the activity of a community within it, and not necessarily of all people in a certain region. Large-scale social media datasets, whether they consist of check-ins, status updates, or photos uploaded to a community-based service, are the result of communicative acts influenced by service design features and evolving community norms~\cite{Rost2013, Shamma2007, 1149949}. In our case, not all local features are captured and shared on Flickr. The content that is present can however be used as relative hints at local trends, and provide comparative insights. Consequently, when we state locally descriptive, we mean descriptive for the content generated in that locale, not necessarily for all human activity.
\paragraph*{Vernacular geography}
Spatial knowledge that is used to communicate about space and regions has been referred to as vernacular geography~\cite{hollenstein2010exploring} or naive geography~\cite{egenhofer1995naive}, and it carries a certain intrinsic vagueness in its nature~\cite{davies2009user}. Individuals' descriptions of places are inherently subjective, as are interpretations of what is and what is not descriptive for a locale~\cite{Bentley2012draw}. Within any community, online or offline, the perception of place is considered to rest on a shared frame of reference~\cite{montello2003s}. Photo tags generated by the Flickr community are no exception. However, our qualitative exploration is aimed at understanding \textit{what factors come into play} when describing space, and how they might manifest in a probabilistic model. Fuzzy regional boundaries, temporal events, and hidden landmarks all underscore the types of regions and terms we surfaced through the interviews, as well as the kinds of tag descriptors discovered quantitatively. We find that a mixed-method approach brings a clearer understanding of online communities and naive geography alike.
\paragraph*{Future work}
As illustrated by our generative interviews, the presentation of our model's results ought to take into account subjective interpretations and support the discovery of new conceptual clusters and their explanation. The interviews also pointed to the possibility of using certain neighborhoods as embodying the spirit of their encompassing region, e.g.,\ the Castro described as an influential San Francisco neighborhood, or technology companies moving into SoMa reflecting changes in the city as a whole. The relationship between the leaves of a geographical tree, temporal change and events, sub-regions, and fluid local boundaries that consider wider topographical features represent potentially viable extensions to our model, provided that the dataset at hand includes enough data and spans a long period of time. For example, we could extend our model to account for the porosity of frontiers between neighborhoods and the fact that there is no consensus about the exact boundaries of a neighborhood: we define, for each region, a set composed of the regions that are adjacent to it. The probability of observing a tag in a certain region then includes a new term that accounts for the possibility of sampling tags from adjacent regions. The learning procedure for this extension would be very similar to the GHM's learning procedure.
\section{Introduction}\label{sec1}
Due to the worldwide appeal of football, the football industry has occupied an unassailable position in the sports business since its inception. The popularity of the sport continues to grow, and a growing number of sports fans are becoming involved in the football industry. At the same time, betting on sporting events, including football, has become a growth industry, and the number of legal sports betting organisers and participants worldwide is increasing every year.
Meanwhile, the pre-match predictability of the sport is fairly low, which has been attributed in part to the sheer length of the matches as well as the number of players involved \cite{hucaljuk2011predicting}. Others \cite{bunker2022application} have argued that the lower predictability of football can be explained by its low-scoring nature, its higher competitiveness relative to other sports, and the fact that matches can end not only in wins and losses but also in draws, which adds to the uncertainty of predicting match outcomes\cite{wunderlich2018betting}. Other reasons have also been suggested: power struggles within the leadership of football clubs\cite{huggins2018match}, match-fixing\cite{huggins2018match}, referees with low ethical standards\cite{carpenter2012match} and tacit understandings between the teams on both sides of a game all reduce the predictability of football matches. The limited ability of local football leagues to counter match-fixing\cite{park2019should} and illegal internet gambling\cite{hill2010critical} also contribute to the uncertainty of match outcomes.
The drive to increase the predictability of football matches, and the profitability of betting on them, has encouraged researchers to devise effective prediction strategies. These have taken the form of statistical models\cite{baio2010bayesian}, machine learning\cite{baboota2019predictive, fialho2019predicting}, natural language processing of football forums\cite{beal2021combining}, as well as in-play image processing of camera footage of players\cite{lopes2014predicting}, all in the attempt to create more accurate predictive systems for football matches. However, even in recent years, research efforts have realised only limited and incremental successes in predicting the outcomes of football matches before kick-off.
In contrast to other sports such as horse racing, there is no significant difference in predictive accuracy between the opinions of the general public on internet forums, the judgement of football experts, and sophisticated machine learning methods\cite{brown2019wisdom}, which underscores the difficulty of the task.
Even with access to rich sources of raw data and information, as well as the expert judgement of pundits, the accuracy of the best predictions in the literature, including those obtained by converting the odds set by bookmakers into outcome probabilities, reaches only $\sim$55\%.
\subsubsection*{Contribution}
This study joins the growing body of literature that attempts to improve the predictive accuracy of football match forecasts using machine learning techniques, demonstrating the proposed methods on three seasons of English Premier League matches sourced from publicly available datasets. In this work, we use a range of new features and algorithms that have not yet been applied in this problem domain, and we employ tools from the field of explainable AI in order to infuse the predictive models with interpretability and the ability to expose the reasoning behind their predictions. This study ultimately proposes a novel strategy for maximising the return on investment which relies on the Kelly Index to first categorise matches into levels of confidence with respect to their outcomes, calculated from betting odds. We show how this decomposition of the problem with the aid of the Kelly Index, followed by machine learning, can indeed form an investment strategy that returns a profit.
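For reference, the standard Kelly criterion and the usual conversion of decimal odds into implied probabilities can be sketched in Python as follows; this snippet is illustrative only, and the exact categorisation rule built on the Kelly Index is specified later in this study.
\begin{verbatim}
def implied_probabilities(decimal_odds):
    # Convert decimal odds (home, draw, away) to implied probabilities,
    # normalising away the bookmaker's margin (overround).
    raw = [1.0 / o for o in decimal_odds]
    total = sum(raw)
    return [p / total for p in raw]

def kelly_fraction(p, decimal_odds):
    # Standard Kelly stake fraction for a bet paying `decimal_odds`
    # when the estimated probability of winning is `p`.
    b = decimal_odds - 1.0          # net fractional return
    q = 1.0 - p
    return (b * p - q) / b          # a negative value means do not bet

# Example: estimated home-win probability 0.55 at decimal odds of 2.10.
print(implied_probabilities([2.10, 3.40, 3.60]))
print(kelly_fraction(0.55, 2.10))
\end{verbatim}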
\section{Related work}
This review divides previous studies on predicting football matches into those that predict the type of outcome (win/loss/draw) and those that attempt to predict the final score. In addition, because of the emergence of new betting options on betting platforms, predictions of whether or not both teams in a match will score have also appeared\cite{da2022forecasting} \cite{inan2021using}. In fact, researchers now rarely predict the score of a match, as the uncertainty of the score and the existence of draws make score prediction challenging and its accuracy low, even with the inclusion of betting odds data\cite{reade2021evaluating}. The methods used for predicting the results of football matches fall into three categories explored in the literature: statistical models, machine learning algorithms and rating systems.
\subsection*{Statistical and rating-system approaches}
\citet{inan2021using} fitted each team's offensive and defensive capabilities, calculated from the goals scored and conceded by teams each week, to a Poisson distribution, while also taking home advantage into account. After fitting the teams' offensive and defensive capabilities, home-team advantage and betting parameters into a Poisson model, \citet{egidi2018combining} developed a method of testing the prediction results to obtain more accurate predictions.
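As an illustration of this family of models, the following sketch converts a pair of estimated goal rates into win/draw/loss probabilities under the simplifying assumption of independent Poisson goal counts; the studies cited here use richer (e.g.\ bivariate or time-varying) specifications.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def outcome_probabilities(lam_home, lam_away, max_goals=10):
    # Win/draw/loss probabilities under independent Poisson goal counts.
    home = poisson.pmf(np.arange(max_goals + 1), lam_home)
    away = poisson.pmf(np.arange(max_goals + 1), lam_away)
    grid = np.outer(home, away)        # P(home scores i, away scores j)
    p_home = np.tril(grid, -1).sum()   # i > j: home win
    p_draw = np.trace(grid)            # i == j: draw
    p_away = np.triu(grid, 1).sum()    # i < j: away win
    return p_home, p_draw, p_away
\end{verbatim}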
\citet{robberechts2018forecasting} used ELO ratings to predict the results of FIFA World Cup matches. This approach takes a team's earlier results (whether it won or not) as a measure of the team's performance, and then gradually adjusts for subsequent results to obtain the final team performance rating. \citet{beal2021combining} used the goals scored by a team to identify and adjust its rating, and also considered the strength of the opposition in each match as a source of information for correcting the team performance scores. In addition, \citet{constantinou2019dolores} used a hybrid Bayesian network to develop a dynamic rating system; the system corrects the team performance ratings by assigning greater weight to the results of the most recent matches. In general, studies that use rating systems to predict the results of football matches rely on databases with relatively little content, since these approaches need only a small amount of data to extract features that can be used for prediction.
\citet{koopman2019forecasting} used both a bivariate Poisson distribution model and a team performance rating system in their research. They held home-team advantage constant when predicting national league outcomes but allowed the teams' offensive and defensive capabilities to vary over time. The results showed that there was no major difference in predictive performance between the two methods and that both could make good predictions. However, the team performance rating system was ineffective at predicting draws. Most of the above research on the use of rating systems noted the near absence of ties in the predicted results.
\citet{berrar2019incorporating} used team performance scores as a predictive feature for machine learning.
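For completeness, a standard ELO update of the kind underlying these rating systems can be sketched as follows; the $K$ factor and home-advantage offset shown are illustrative values rather than those used in the cited studies.
\begin{verbatim}
def elo_update(r_home, r_away, outcome, k=20.0, home_adv=70.0):
    # One standard ELO update after a match.
    # outcome: 1.0 home win, 0.5 draw, 0.0 away win.
    # home_adv: rating bonus for the home side (illustrative value).
    expected_home = 1.0 / (1.0 + 10.0 ** ((r_away - (r_home + home_adv)) / 400.0))
    r_home_new = r_home + k * (outcome - expected_home)
    r_away_new = r_away + k * ((1.0 - outcome) - (1.0 - expected_home))
    return r_home_new, r_away_new
\end{verbatim}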
\subsection*{Machine learning approaches}
Efforts to improve the predictability of football have focused on two aspects: the use of more sophisticated machine learning algorithms, and feature engineering aimed at developing more descriptive variables. Sometimes both aspects have been pursued. Table \ref{Prediction accuracy of different studies} summarises all the key studies and their attributes.
\subsubsection*{Algorithms}
A mixture of simpler approaches like Linear Regression\cite{prasetio2016predictingLR}, k-Nearest Neighbors\cite{brooks2016usingKNN} and Decision Trees\cite{yildiz2020applyingDT} has been explored. Subsequently, the algorithms have increased in sophistication with the use of Random Forests\cite{groll2019hybridRF}, Support Vector Machines\cite{oluwayomi2022evaluationSVM}, Artificial Neural Networks\cite{bunker2019machineANN} and boosting methods\cite{hubavcek2019learningBoosting}. More recently, deep learning methods have emerged with the use of
Convolutional Neural Networks \cite{hsu2021usingCNN} and LSTM networks \cite{zhang2021sports,malini2022deep}, where the authors concluded that the LSTM model displayed superior characteristics to traditional machine learning algorithms and artificial neural networks in predicting the results of matches \cite{malini2022deep}.
\subsubsection*{Features}
The literature has shown that some of the effective features so far include half-time goal data as used by \citet{yekhande2582predicting}, data on the first team to score and individual technical behaviour as used by \citet{parim2021prediction}, ball possession and passing data as used by \citet{bilek2019predicting}, and key player position data as used by \citet{joseph2006predicting}.
Kınalıoğlu and Coskun\cite{kinaliouglu2020prediction} compared predictive models built with six machine learning algorithms and evaluated models with different hyperparameter settings using a large number of model evaluation methods. \citet{beal2021combining} improved prediction accuracy by $\sim$7\% by extracting expert and personal opinions published in self-published media and combining environmental, player-sentiment, competition-related, and external factors in their predictions.
Overall, studies using machine learning methods mention the need to use richer predictive features and validate them more robustly in future works. However, even newer studies have a tendency to reuse existing features and limited proposals have been made for innovating with novel features.
Moreover, suggestions for new predictive features have primarily focused on ones that are not available in public datasets \cite{wunderlich2018betting} such as player transfer data, injuries, expert advice, and psychological data.
Direct prediction of the odds themselves is rarely researched in this context, because the calculation of the odds already contains the bookmaker's prediction of the results of the matches\cite{zeileis2018probabilistic}. Using odds to predict the outcome of a match requires combining them with machine learning or statistical methods. \citet{zeileis2018probabilistic} and \citet{wunderlich2018betting} both invert the odds into the probability of a team winning and feed it into an ELO team performance rating system to predict the outcome of the match. \citet{vstrumbelj2014determining} used traditional linear regression models and Shin models to predict the probability of winning directly from the odds, with the aim of inverting the bookmaker's method of calculating odds from the bets placed.
\begin{sidewaystable}[]\tiny
\caption{\label{Prediction accuracy of different studies}Prediction accuracy of different studies}
\resizebox{\textwidth}{30mm}{
\begin{tabular}{lllllllll}
\hline
Study &
Competition &
Features &
Non-pre-match features &
Best Algorithm &
Accuracy &
Test set duration &
Matches &
Class \\ \hline
\citet{joseph2006predicting}\citeyear{joseph2006predicting} &
Tottenham Hotspur Football Club &
7 &
None &
Bayesian networks &
58\% &
1995-1997(2 years) &
76 &
3 \\
\citet{constantinou2019dolores} \citeyear{constantinou2019dolores} &
52 football leagues &
4 &
None &
Hybrid bayesian networks &
- &
2000-2017(17 years) &
216,743 &
3 \\
\citet{hubacek2019score}\citeyear{hubacek2019score} &
52 football leagues &
2 &
None &
Double Poisson &
48.97\% &
2000-2017(17 years) &
218,916 &
3 \\
\citet{mendes2020comparing}\citeyear{mendes2020comparing} &
6 Leagues &
28 &
None &
Bagging &
51.31\% &
2016-2019(3 years) &
1,656 &
3 \\
\citet{danisik2018football}\citeyear{danisik2018football} &
5 Leagues &
139 &
None &
LSTM regression &
52.5\% &
2011-2016 &
1520 &
3 \\
\citet{berrar2019incorporating}\citeyear{berrar2019incorporating} &
52 football leagues &
8 &
None &
KNN &
53.88\% &
2000-2017(17 years) &
216,743 &
3 \\
\citet{herbinet2018predicting}\citeyear{herbinet2018predicting} &
5 leagues &
6 &
None &
XGBoost &
54\% &
2014-2016(2 years) &
3,800 &
3 \\
\citet{carloni2021machine}\citeyear{carloni2021machine} &
12 countries &
47 &
None &
SVM &
57\% &
2008-2020(12 years) &
49,319 &
3 \\
\citet{baboota2019predictive}\citeyear{baboota2019predictive} &
English Premier League &
12 &
None &
Random Forest &
57\% &
2017-2018(1 year) &
380 &
3 \\
\citet{chen2019neural}\citeyear{chen2019neural} &
La Liga &
34 &
None &
Convolution neural network &
57\% &
2016-2017(1 year) &
380 &
3 \\
\citet{esme2018prediction}{}\citeyear{esme2018prediction} &
Super League of Turkey &
17 &
None &
KNN &
57.52\% &
2015-2016(half year) &
153 &
3 \\
\cite{rodrigues2022prediction}\citeyear{rodrigues2022prediction} &
English Premier League &
31 &
None &
SVM &
61.32\% &
2018-2019(1 years) &
380 &
3 \\
\citet{alfredo2019football}\citeyear{alfredo2019football} &
English Premier League &
14 &
Half-time results &
Random Forest &
68.16\% &
2007-2017(10 years) &
3,800 &
3 \\
\citet{pathak2016applications}\citeyear{pathak2016applications}&
English Premier League &
4 &
None &
Logistic Regression &
69.5\% &
2001-2015(14 years) &
2280 &
2 \\
\citet{prasetio2016predictingLR}\citeyear{prasetio2016predictingLR}&
English Premier League &
13 &
Ball possession, Distance run &
Logistic Regression &
69.51\% &
2015-2016(1 year) &
380 &
3 \\
\citet{danisik2018football}\citeyear{danisik2018football} &
5 leagues &
139 &
None &
LSTM regression &
70.2\% &
2011-2016 &
1520 &
2 \\
\citet{ievoli2021use}\citeyear{ievoli2021use} &
UEFA Champions League &
29 &
Number of passes and other features &
BLR &
81\% &
2016-2017(1 year) &
75 &
3 \\
\citet{azeman2021prediction}\citeyear{azeman2021prediction} &
English Premier League &
11 &
Shooting, fouls and other 8 features &
Multiclass decision forest &
88\% &
2005-2006(1 year) &
380 &
3 \\ \hline
\end{tabular}
}
\end{sidewaystable}
\subsection*{Summary of literature}
The accuracies reported by existing studies, as summarised in Table~\ref{Prediction accuracy of different studies}, are chiefly determined by the nature of the features used, and specifically by whether these are generated in real time for in-play match outcome prediction. Features generated in-play are more deterministic and thus result in considerably higher accuracies than more static pre-match features when predicting outcomes. These kinds of studies are therefore different in nature from those that rely solely on pre-match prediction.
Next, the accuracy is strongly affected by the manner in which the predictive problem is formulated. Models that attempt to predict a simple win-or-loss scenario are more accurate than those that also attempt to predict draws. This is expected, since all predictive problems become more complex as the number of categories being predicted increases. For this reason, a number of researchers have opted to eliminate the prediction of tied matches.
In addition, the size of the dataset also affects the overall capability of the models; however, the number of features used in football match prediction has only a small effect on prediction accuracy (Figure~\ref{fig:Feature-Acc}). The figure highlights that the accuracy of pre-match prediction is consistently below 60\%. Moreover, when predicting pre-match results over multiple seasons, the prediction accuracy was almost always below 55\%. Researchers have so far been unable to break through this bottleneck with more advanced algorithms or rating systems.
When predictions are made over a large time span, for example predicting the results of matches across 10 seasons, the resulting models are less accurate than those predicting the results of matches within a single season.
Furthermore, the choice of machine learning method does not seem to have a significant impact on the accuracy of the model.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{Feature-Acc.png}
\caption{Number of features plotted against prediction accuracy for the studies reviewed}
\label{fig:Feature-Acc}
\end{figure}
\subsubsection*{Research questions}
In light of the literature, this study poses the following research questions:
\begin{enumerate}
\item Is it possible to use public data to identify which matches are difficult to predict and which outcomes are all but certain before the match?
\item Can prediction accuracy be improved by training models separately on hard-to-predict and easy-to-predict matches?
\item Which machine learning methods perform best for hard-to-predict and for easy-to-predict matches?
\item Is it possible to increase returns by applying different weights to hard-to-predict and easy-to-predict matches when investing?
\end{enumerate}
\section{Methodology}
\subsection{Machine learning algorithms}
The machine learning methods used are drawn from open-source machine learning libraries, including Scikit-learn\cite{pedregosa2011scikit}, XGBoost and CatBoost.
\paragraph*{Logistic regression} Logistic regression builds on linear regression by mapping the linear predictor to a probability, which is then used for classification.
The algorithm performs well when there is a strong linear relationship between the features and the labels. Logistic regression also has the advantage of being resistant to noise and performs well on small datasets. \citet{prasetio2016predictingLR} used logistic regression to predict English Premier League matches and, using in-play features, obtained a prediction accuracy of 69\%.
\paragraph*{Decision Tree} This algorithm builds a Decision Tree \cite{quinlan1986induction} by choosing features that have the greatest information gain. The advantage is that it does not require any domain knowledge or parameter assumptions and is suitable for high-dimensional data. However, this algorithm is prone to overfitting and tends to ignore correlations between attributes. The use of a small number of features in the Decision Tree methods can prevent overfitting when predicting the results of football matches. The Decision Tree approach may therefore produce a more explanatory and thus informative model.
\paragraph*{Random Forest} This is an ensemble method that combines multiple Decision Trees, each built greedily without backtracking. Each tree relies on independent sampling of the data and is subject to the same randomness in data selection as the other trees. When classifying a sample, the trees are polled and the category with the most votes is returned; the procedure also yields a score for the importance of each variable in the classification. Random Forests tend to perform better than single Decision Trees in preventing overfitting.
The correlation between the model features and the test predictions in the Random Forest approach is not strong. For poorly predicted football match results, this may be an advantage of the Random Forest approach. Using a training set containing a large number of features can result in a generated model with strong predictive power on a small number of them and weak predictive power on others. The classification criteria are different for each Decision Tree in the Random Forest model. This can also be a good way to reduce the correlation between Decision Trees.
\paragraph*{k-Nearest Neighbor} The k-Nearest Neighbor (KNN) method classifies a sample by comparing it with its \textit{k} closest data points in a given dataset and therefore does not generate an explicit classifier or model \cite{peterson2009k}.
KNN methods perform well when using in-play data to predict the results of contests\cite{esme2018prediction}. However, the method needs to consider all the data in the sample when generating predictions, which creates a strong risk of overfitting when predicting pre-match results in competitive sports. Nevertheless, it is still not known how well the KNN method can perform for hard-to-predict matches.
\paragraph*{Gradient Boosting} The idea behind gradient boosting is to iteratively generate multiple weak models and then add up their predictions. Each iteration fits a new weak learner to the residuals of the current ensemble, effectively placing more weight on the samples that are currently predicted poorly, which tends to improve generalisation. The importance of the features of the trained model can also be extracted.
In previous studies\cite{alfredo2019football} of match prediction, gradient boosting methods have often failed to produce the best models. However, the predictive power of the method for hard-to-predict matches cannot be denied outright.
\paragraph*{XGBoost} XGBoost extends and improves upon gradient boosting. The XGBoost algorithm is faster: it takes advantage of CPU multi-threading on top of traditional boosting and introduces regularisation to control the complexity of the model. Before the iterations, the features are pre-sorted for the nodes, and the best split points are selected by traversing them, which lowers the cost of data splitting.
The XGBoost method is popular amongst machine learning researchers. However, the method does not perform well at capturing high-dimensional data such as images, audio or text. For predicting football matches based on lower-dimensional data, XGBoost may produce promising models.
\paragraph*{CatBoost}
CatBoost is a type of gradient boosting algorithm. In contrast to standard gradient boosting, it can achieve excellent results even on data that has undergone little feature engineering, and it is better at preventing overfitting. The CatBoost method has not been explored in previous studies of football match result prediction.
\paragraph*{Voting and Stacking} A voting classifier combines the results of different models, with the majority deciding the outcome. Voting comes in hard and soft variants. Hard voting takes a majority vote over the class labels predicted by the individual models, whereas soft voting averages the predicted class probabilities of the different models and selects the class with the highest mean probability as the prediction.
The stacking algorithm refers to a hybrid estimator approach in which a number of base estimators are fitted separately on the training data, while a final estimator is trained on the stacked predictions of these base estimators. However, the stacking method uses cross-validation when training the model, which is not favoured in the field of football result prediction.
Voting and stacking methods have produced the best prediction models in other studies of athletic match result prediction\cite{li2019high}. However, in football these two methods are rarely used. This study uses both methods in an attempt to find the most suitable model for prediction in difficult-to-predict matches.
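A minimal sketch of how such voting and stacking ensembles can be assembled with the open-source libraries listed above is given below; the base learners and hyper-parameters shown are illustrative assumptions rather than the exact configuration used in this study, and \texttt{X\_train}, \texttt{y\_train} are hypothetical training arrays.
\begin{verbatim}
from sklearn.ensemble import (RandomForestClassifier, StackingClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

base = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=25)),
    ("xgb", XGBClassifier(eval_metric="mlogloss")),
]

# Soft voting: average the predicted class probabilities of the base models.
voting = VotingClassifier(estimators=base, voting="soft")

# Stacking: a logistic regression meta-learner fitted on stacked predictions.
stacking = StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000))

# voting.fit(X_train, y_train); stacking.fit(X_train, y_train)
\end{verbatim}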
\subsection{Dataset characteristics and feature engineering}
The Premier League data used in this study was sourced from the website https://www.footballdata.co.uk/englandm.php. The results from the 2001 to 2018 seasons were used to develop features based on ELO ratings, attack and defence ratings, and home-team and away-team winning percentages. Data from the 2019 to 2021 seasons were used for testing. The raw data for the 2019 to 2021 seasons was recorded in a total of 107 columns.
The Premier League season is divided into 38 rounds; each round consists of 10 matches, with each of the 20 teams playing once. After the 380 matches of a season, each team has played 19 times at home and 19 times away. This study predicts the results of 3 seasons, i.e. a total of 1140 matches.
The Premier League is based on a points system: the winning team gains three points after the match, and both teams gain one point each in a draw. A total of 24 teams appeared across the 2019-2021 seasons, all of which also appeared in the 2001-2018 seasons. However, due to differences between seasons in the bookmaker odds recorded in the raw data, only 21 of the 83 odds-related columns were in fact used for the 2019-2021 predictions. These record the odds of six European bookmakers and the European average odds.
The premise for feature engineering in this study is the assumption that all matches occur sequentially, so that the features for a given match are built only from information available before that match.
A total of 52 features were generated and explored in this project. Their names and descriptions are given in Table~\ref{Table 1. Description of features}.
\begin{table}[]
\centering
\caption{\label{Table 1. Description of features}Description of features}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{@{}ll@{}}
\toprule
Feature Name & Description \\ \midrule
AvgGoalDiff & Average goal difference between the two teams in the previous six games \\
TotalGoalDiff & Goal difference between the two teams in the previous six games \\
HomeELO & ELO ratings for the home team \\
AwayELO & ELO ratings for away teams \\
ELOsta & Standard deviation of the ELO ratings of the two teams \\
ELOHomeW & Probability of home team winning converted by ELO ratings \\
ELOAwayW & Probability of away team winning converted by ELO ratings \\
ELODraw & Probability of a draw occurring converted by ELO ratings \\
one\_ELO & Probability of conversion from ELO ratings after PCA dimensionality reduction \\
HomeHELO & ELO ratings for the home team's half-time result \\
AwayHELO & ELO ratings for the away team's half-time result \\
HELOSta & Standard deviation of the half-time ELO ratings of the two teams \\
ELOHHomeW & Probability of home team winning converted by half-time ELO ratings \\
ELOHAwayW & Probability of away team winning converted by half-time ELO ratings \\
ELOHDrawW & Probability of a draw occurring converted by half-time ELO ratings \\
one\_HELO & Probability of conversion from half-time ELO ratings after PCA dimensionality reduction \\
HomeTeamPoint & Current home team points in the season \\
AwayTeamPoint & Current away team points in the season \\
PointDiff & Current point difference between home and away teams in the season \\
AvgHOddPro & Average of pre-match home team odds offered by all bookmakers in the available data \\
AvgAOddPro & Average of pre-match away team odds offered by all bookmakers in the available data \\
AvgDOddPro & Average of pre-match draw odds offered by all bookmakers in the available data \\
one\_Odd\_Pro & Average odds after PCA dimensionality reduction \\
HomeOff & Rating of the home team's offensive capabilities \\
HomeDef & Rating of the home team's defensive capabilities \\
AwayOff & Rating of the away team's offensive capabilities \\
AwayDef & Rating of the away team's defensive capabilities \\
Offsta & Difference in offensive capability rating \\
Defsta & Difference in defensive capability rating \\
AvgShotSta & Standard deviation of shots on goal for both sides in six matches \\
AvgTargetSta & Standard deviation of shots on target for both sides in six matches \\
ShotAccSta & Standard deviation of shot accuracy between the home team and the away team \\
AvgCornerSta & Standard deviation of corners for both sides in six matches \\
AvgFoulSta & Standard deviation of fouls for both sides in six matches \\
HomeHWin & Home team's winning percentage in home games \\
HomeHDraw & Home team's draw percentage in home games \\
AwayAWin & Away team's winning percentage in away games \\
AwayADraw & Away team's draw percentage in away games \\
HomeWin & Home team's win percentage in all previous matches \\
HomeDraw & Home team's draw percentage in all previous matches \\
AwayWin & Away team's win percentage in all previous matches \\
AwayDraw & Away team's draw percentage in all previous matches \\
LSHW & Home team win percentage last season \\
LSHD & Home team draw percentage last season \\
LSAW & Away team win percentage last season \\
LSAD & Away team draw percentage last season \\
Ysta & Difference in the number of yellow cards between the home team and the away team \\
Rsta & Difference in the number of red cards between the home team and the away team \\
StreakH & Home team winning streak index \\
StreakA & Away team winning streak index \\
WStreakH & Home team weighted winning streak index \\
WStreakA & Away team weighted winning streak index \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection*{ELO Rating}
The ELO rating was first introduced by ELO\cite{ELO1978} to assess the ability of players in chess competitions and has been extended to the assessment of the performance of players or teams in many sports. The rationale is to achieve a dynamic evaluation of a particular player's ability by comparing the ability of the player in a recent match with that player's past performance. The time-series nature of football matches makes the ELO model suitable for scoring the performance of football teams\cite{jain2021sports}.
The rating of the participating teams in a football match is determined by their performance in past matches (Eq~\ref{Eq.1}) and by their performance in the current match (Eqs~\ref{Eq.2} and \ref{Eq.3}), as follows:
\begin{align}\label{Eq.1}
E_{}^{H} & = \cfrac{1}{1 + c^{(R^{A} -R^{H})/d}} \quad\quad\quad E_{}^{A} = 1-E_{}^{H}
\end{align}
\begin{align}\label{Eq.2}
S_{}^{H} & = \left\{\begin{matrix}
1 \qquad Home\quad team\quad win\\
0.5 \qquad Draw\\
0\qquad Away\quad team\quad win
\end{matrix}\right. \qquad
\end{align}
\begin{align}\label{Eq.3}
S_{}^{A} = 1 -S_{}^{H}
\end{align}
where $E^H$ and $E^A$ are the expected scores of the two teams in that match, and $S^H$ and $S^A$ are their actual scores. $R^H$ and $R^A$ are the continually revised ratings that represent each team's ability, and $c$ and $d$ are two constants that set the scale of the rating. In this study $c$ was set to 10 and $d$ to 400 so that the effect of new match results on the previous ELO ratings is kept within a balanced range. The ratings of the two teams are then corrected by the difference between the actual and expected scores in the newly played match, as follows:
\begin{align}
R^{'H} = R^{H} +k(S^{H}-E^{H})
\end{align}
where $k$ indicates the magnitude of the correction and determines the effect of the new match result on the rating. $k$ is calculated as follows:
\begin{align}
k = k_0(1+\delta)^{\gamma }
\end{align}
where $\delta$ is the absolute goal difference, and $k_0$ and $\gamma$ are set to 10 and 1 respectively.
The performance of the teams was first assessed in this study using match results from the 2001-2005 English Premier League seasons. Teams were ranked and assigned initial ratings by comparing their total points and wins in the 2001-2002 season. The ratings were then updated using match data from the 2002-2017 seasons. From the 2018-2019 season onward, features were generated from the ratings produced by the ELO model, and each team's rating was subsequently updated after each new result.
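A minimal sketch of this update rule, with the constants stated above, is as follows (the function names are ours and only illustrate the computation):
\begin{verbatim}
# ELO expected score and rating update as described above (c=10, d=400,
# k0=10, gamma=1); function names are illustrative.
def expected_score(r_home, r_away, c=10.0, d=400.0):
    """Expected score E^H of the home team given the current ratings."""
    return 1.0 / (1.0 + c ** ((r_away - r_home) / d))

def update_elo(r_home, r_away, s_home, goal_diff, k0=10.0, gamma=1.0,
               c=10.0, d=400.0):
    """Return the updated (home, away) ratings after one match.
    s_home is 1 for a home win, 0.5 for a draw and 0 for an away win."""
    e_home = expected_score(r_home, r_away, c, d)
    k = k0 * (1.0 + abs(goal_diff)) ** gamma
    r_home_new = r_home + k * (s_home - e_home)
    r_away_new = r_away + k * ((1.0 - s_home) - (1.0 - e_home))
    return r_home_new, r_away_new
\end{verbatim}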
In a subsequent study of ELO ratings, a regression model was built to predict wins and losses from the pre-match ELO ratings of the two teams\cite{hvattum2010using}. The model is widely trusted among those involved in football gambling. The probabilities of the different match outcomes can be predicted from the ELO-based quantities of the two teams using the following formulas:
\begin{align}
P_{H} & = 0.448+(0.0053*(E^{H} -E^{A} ))
\end{align}
\begin{align}
P_{A} & = 0.245+(0.0039*(E^{A} -E^{H} ))
\end{align}
\begin{align}
P_{D} = 1- (P_{H}+P_{A})
\end{align}
where $P_{H}$ indicates the probability of a home-team victory, $P_{A}$ the probability of an away-team victory and $P_{D}$ the probability of a draw.
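These formulas translate directly into a small helper (a plain transcription of the expressions above):
\begin{verbatim}
# Direct transcription of the regression formulas above; e_home and e_away
# are the ELO-based quantities E^H and E^A defined in the text.
def elo_outcome_probabilities(e_home, e_away):
    p_home = 0.448 + 0.0053 * (e_home - e_away)
    p_away = 0.245 + 0.0039 * (e_away - e_home)
    p_draw = 1.0 - (p_home + p_away)
    return p_home, p_away, p_draw
\end{verbatim}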
\subsubsection*{Offensive and defensive capabilities}
The Offensive and Defensive Model (ODM) \cite{govan2009offense} is used to generate a team's offensive and defensive ratings. The scoring formula (Eq~\ref{Eq.9}) given by \citet{govan2009offense} when introducing the ODM is as follows:
\begin{align}\label{Eq.9}
o_j = \sum_{i = 1}^{n} \frac{A_{ij}}{d_i} \qquad d_j = \sum_{i = 1}^{n} \frac{A_{ji}}{o_i}
\end{align}
where $A_{ij}$ is the number of goals scored by team $j$ against team $i$, $o_j$ is the offensive rating and $d_j$ is the defensive rating. The equation shows that the two ratings depend on each other. We compute the offensive and defensive ratings of a team over one season and apply them to the next season. A team that scores many goals has a high offensive rating; likewise, a team that concedes many goals has a weak defensive rating.
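A minimal sketch of such a fixed-point iteration, under the assumption that the season's goals are collected in a matrix with entry $(i,j)$ holding the goals scored by team $j$ against team $i$, is as follows:
\begin{verbatim}
import numpy as np

# Fixed-point iteration for the ODM ratings of Eq. (9); A[i, j] is the number
# of goals scored by team j against team i over the season. The function name
# and convergence settings are illustrative choices.
def odm_ratings(A, n_iter=200, tol=1e-10):
    n = A.shape[0]
    o, d = np.ones(n), np.ones(n)
    for _ in range(n_iter):
        o_new = (A / d[:, None]).sum(axis=0)       # o_j = sum_i A_ij / d_i
        d_new = (A / o_new[None, :]).sum(axis=1)   # d_j = sum_i A_ji / o_i
        converged = (np.allclose(o_new, o, atol=tol)
                     and np.allclose(d_new, d, atol=tol))
        o, d = o_new, d_new
        if converged:
            break
    return o, d
\end{verbatim}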
For teams new to the season, we extrapolate their offensive and defensive ratings based on that team's ranking in goals scored and goals against during the season. This is calculated as follows:
\begin{align}
o_j = \frac{XG_2+X_2G_1-X_1G_1-X_1G_2}{X_2-X_1}
\end{align}
where $o_j$ represents the new team's offensive rating and $G$ the goals scored by the new team during the season. $G_1$ and $X_1$ are the goals scored and the offensive rating of the team one place above the new team in the season goal rankings, while $G_2$ and $X_2$ are those of the team one place below. The defensive rating is calculated in the same way.
\subsubsection*{Streak Index}
The Streak Index represents a team's capacity to go on a winning streak and captures its most recent form. The index was introduced by \citet{baboota2019predictive} for predicting Premier League results over the 2014-2016 seasons. The Streak Index comes in two forms: the first directly measures the team's form over the previous $k$ games, with the following formula:
\begin{align}
S = \frac{1}{3k}\sum_{p=j-k}^{j-1} res_{p}
\end{align}
where $j$ indicates the index of the match to be predicted and $res_p$ indicates the team's score in match $p$, with a win and a draw corresponding to 3 points and 1 point respectively; no points are awarded for losses. The second form is a weighted and normalised version of the first, in which relatively low weights are given to matches further back in time. The formula is as follows:
\begin{align}
\omega _{S} = \sum_{p=j-k}^{j-1} \frac{2\left(p-(j-k-1)\right) res_{p}}{3k(k+1)}
\end{align}
The Streak Index is logically problematic when computed across seasons: for the first few games of a season, the data used comes from the end of the previous season, and changes in team form over the off-season can compromise the effectiveness of the index.
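A minimal sketch of both forms, assuming the team's points from its previous matches are available in chronological order, is as follows:
\begin{verbatim}
# Streak and weighted streak indices as defined above; `points` holds the
# points (3 win / 1 draw / 0 loss) of the team's previous matches in
# chronological order, and k is the window length.
def streak(points, k=6):
    last = points[-k:]
    return sum(last) / (3.0 * k)

def weighted_streak(points, k=6):
    last = points[-k:]
    # weight 1 for the oldest of the k matches up to k for the most recent
    return sum(2.0 * (i + 1) * res
               for i, res in enumerate(last)) / (3.0 * k * (k + 1))
\end{verbatim}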
\subsubsection*{Principal Component Analysis}
Since the outcome of a football match includes the possibility of a draw, bookmakers offer odds for three outcomes. A dimensionality-reduction method is therefore required to turn this three-dimensional odds data into a single feature.
This study uses Principal Component Analysis (PCA) \cite{wold1987principal} to reduce the dimensionality of the data. PCA linearly reduces the dimensionality of the original data by projecting it onto a lower-dimensional subspace in which the information loss is minimised.
PCA is generally used to process and compress high-dimensional data sets. The main objective is to reduce a large feature set to its most essential components, with the trade-off that the new features lose interpretability.
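For example, the three average odds columns can be reduced to a single feature in this way (a minimal sketch; the function name is ours):
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

# Reduce the (home, away, draw) odds triple of each match to one feature.
def reduce_odds_to_one(odds):
    """odds: array of shape (n_matches, 3). Returns one PCA component per match."""
    pca = PCA(n_components=1)
    return pca.fit_transform(np.asarray(odds)).ravel()
\end{verbatim}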
\subsubsection*{Kelly Index}
The Kelly index\cite{thorp1975portfolio} was used as a tool to classify the matches in the dataset into categories that define different levels of predictability.
The Kelly criterion was originally introduced in information theory to quantify achievable rates of information transmission\cite{kelly1996synchronization}. Owing to its probabilistic nature and its closeness to the structure of betting, the Kelly Index is now also widely used by bettors.
Each bookmaker calculates the Kelly Index before a match and makes it available to members who gamble.
The Kelly Index can be calculated from the available data using the following formula:
\begin{align}
K_{H}=(O_{H}/avgO_{H})\,F_{99} \qquad K_{A}=(O_{A}/avgO_{A})\,F_{99} \qquad K_{D}=(O_{D}/avgO_{D})\,F_{99}
\end{align}
where $K_{H}$, $K_{A}$ and $K_{D}$ are the Kelly Indexes for the three possible match results, and $O_{H}$, $O_{A}$ and $O_{D}$ are the odds offered by the individual bookmaker for those results. $avgO_{H}$, $avgO_{A}$ and $avgO_{D}$ are the average European betting market odds in our dataset for each of the three results, and $F_{99}$ is the return rate of the European average odds, calculated by the following formula:
\begin{align}
F_{99} =\frac{1}{\frac{1}{avgO_{H}}+ \frac{1}{avgO_{A}}+\frac{1}{avgO_{D}}}
\end{align}
If a bookmaker has set lower odds on an outcome than any other bookmaker, the bookmaker is confident enough to believe that this particular outcome will occur. This confidence may come from the bookmaker's perception of possessing more effective prediction models, having access to more extensive sources of information, or illegal black-market trading. As the dealer, the bookmaker is willing to set the lowest odds on the outcome it considers most likely in order to maximise profits. The outcome with the lowest Kelly Index at a bookmaker is therefore the outcome that bookmaker considers most likely to occur, and a low Kelly Index on one outcome is accompanied by correspondingly higher Kelly Indexes on the other outcomes. However, there is no guaranteed correlation between the level of the Kelly Index and the result of a match, as the index only reflects which odds the bookmaker has determined to be in its best interest for maximising returns.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{boxplot01.png}
\caption{The distribution of the Kelly Index of the six European bookmakers for the 2019-2021 season.}
\label{fig:The distribution of the Kelly Index of the six European bookmakers for the 2019-2021 season}
\end{figure}
As shown in Figure~\ref{fig:The distribution of the Kelly Index of the six European bookmakers for the 2019-2021 season}, the Kelly Indexes of these six bookmakers are mainly concentrated in the range 0.9-1.0. In the setup of the predictive problem, the matches of the 2005-2021 seasons were divided into 3 categories, each reflecting a different confidence level of predictability: matches for which more than one bookmaker had a Kelly Index greater than 1 (Type 1), matches for which exactly one bookmaker had a Kelly Index greater than 1 (Type 2), and matches for which no bookmaker had a Kelly Index greater than 1 (Type 3).
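A minimal sketch of the Kelly Index computation and of this categorisation is given below; it assumes that a bookmaker ``has a Kelly Index greater than 1'' whenever any of its three outcome indexes exceeds 1.
\begin{verbatim}
import numpy as np

# Kelly Index and Type 1/2/3 categorisation for a single match.
# bookmaker_odds: array of shape (n_bookmakers, 3) with home/away/draw odds;
# avg_odds: the three European average odds for the same match.
def kelly_indices(bookmaker_odds, avg_odds):
    avg_h, avg_a, avg_d = avg_odds
    f99 = 1.0 / (1.0 / avg_h + 1.0 / avg_a + 1.0 / avg_d)   # return rate
    return np.asarray(bookmaker_odds) / np.asarray(avg_odds) * f99

def match_type(bookmaker_odds, avg_odds):
    k = kelly_indices(bookmaker_odds, avg_odds)
    n_over_one = int((k > 1.0).any(axis=1).sum())   # bookmakers with some K > 1
    if n_over_one > 1:
        return 1    # Type 1: more than one bookmaker with a Kelly Index > 1
    if n_over_one == 1:
        return 2    # Type 2: exactly one bookmaker with a Kelly Index > 1
    return 3        # Type 3: no bookmaker with a Kelly Index > 1
\end{verbatim}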
\subsection{Model Optimisation}
\subsubsection*{Hyperparameter tuning}
This study used the RandomizedSearchCV method from the scikit-learn library to select the most effective parameter values. All the parameter tuning values for the machine learning algorithms used in this study are shown in Table~\ref{Values for parameter tuning}.
\begin{table}[hbt]
\centering
\caption{\label{Values for parameter tuning}
Values for parameter tuning
}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{@{}lll@{}}
\toprule
Algorithm & Parameter & Range of values \\
\midrule
\vcell{CatBoost} & \vcell{learning\_rate} & \vcell{0.01,0.02,0.03,0.04} \\[-\rowheight]
\printcelltop & \printcellmiddle & \printcellmiddle \\
& iterations & 10,20,30,40,50,60,70,80,90 \\
& depth & 4,5,6,7,8,9,10 \\
\vcell{Decision Tree} & \vcell{criterion} & \vcell{'gini', 'entropy'} \\[-\rowheight]
\printcelltop & \printcellmiddle & \printcellmiddle \\
& max\_depth & 2,4,6,8,10,12 \\
\vcell{Gradient boost} & \vcell{min\_samples\_leaf} & \vcell{2,5,8} \\[-\rowheight]
\printcelltop & \printcellmiddle & \printcellmiddle \\
& min\_samples\_split & 3,5,7,9 \\
& max\_features & Based on number of filtered features \\
& max\_depth & 2,5,7,10 \\
& learning\_rate & 0.1,1.0,2.0 \\
& subsample & 0.5,0.8,1 \\
k-Nearest Neighbor & n\_neighbors & Based on sample size \\
\vcell{Logistic regression} & \vcell{solver} & \vcell{liblinear} \\[-\rowheight]
\printcelltop & \printcellmiddle & \printcellmiddle \\
& penalty & 'l1','l2' \\
& class\_weight & 1:2:1,3:3:4,4:3:3 \\
\vcell{Random Forest} & \vcell{min\_samples\_leaf} & \vcell{2,5,8} \\[-\rowheight]
\printcelltop & \printcellmiddle & \printcellmiddle \\
& max\_features & Based on number of filtered features \\
& max\_depth & 2,5,7,10 \\
Stacking & Same as above algorithm & Same as above algorithm \\
Voting & Same as above algorithm & Same as above algorithm \\
\bottomrule
\end{tabular}
\end{table}
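A minimal sketch of the randomised search for one of the algorithms (Random Forest) is given below; the number of sampled candidates, the cross-validation folds and the scoring are illustrative choices rather than the study's exact settings.
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Randomised hyperparameter search over the Random Forest grid of the table.
param_distributions = {
    "min_samples_leaf": [2, 5, 8],
    "max_depth": [2, 5, 7, 10],
    "max_features": [4, 8, 12],   # "based on number of filtered features"
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions, n_iter=20, cv=3,
                            scoring="accuracy", random_state=0)
# search.fit(X_train, y_train); the tuned model is search.best_estimator_
\end{verbatim}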
\subsubsection*{Feature selection}
Models developed with feature selection tend to have greater explanatory power, run faster, and have a decreased risk of over-fitting. In some cases, feature selection will improve the predictive power of models. With the exception of the two ensemble algorithms Voting classifier and Stacking, feature selection was used to enhance the models.
Feature selection decisions were made using Shapley Additive exPlanations\cite{lundberg2017unified} (SHAP). SHAP is based on the Shapley value\cite{winter2002shapley}, a concept derived from cooperative games, and is a common metric for quantitatively assessing the marginal contribution of users in an equitable manner. SHAP has been implemented as a visualisation tool for model interpretation that is able to score every feature in terms of its effects on the final outcome. Efficient feature selection can be achieved by removing features that are indicated to have low impacts on predictions. For multiclassification gradient boosting models where the SHAP explainer is not available, the Recursive Feature Elimination method in the scikit-learn library was used to remove the least important features recursively selected in the dataset.
In the feature selection process, after a model has been built using all features and tuned, the features whose average SHAP value on the validation set is less than zero are removed and the dataset is remodelled with the remaining features. This loop continues until fewer than 2 features remain or no feature with a SHAP value less than zero appears. After the iterative process is completed, the model with the highest prediction accuracy on the validation set is selected as the prediction model for the test set.
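A minimal sketch of this loop is given below; it assumes pandas data frames, a tree-based model for which the SHAP tree explainer is available, and the layout in which a multiclass explainer returns one array of SHAP values per class.
\begin{verbatim}
import numpy as np
import shap

# Iterative SHAP-based feature selection; fit_model stands for fitting (and
# tuning) one classifier on the given feature columns.
def shap_feature_selection(X_train, y_train, X_val, y_val, features, fit_model):
    best = (None, -np.inf, list(features))          # (model, accuracy, features)
    while len(features) >= 2:
        model = fit_model(X_train[features], y_train)
        acc = model.score(X_val[features], y_val)
        if acc > best[1]:
            best = (model, acc, list(features))
        vals = np.asarray(shap.TreeExplainer(model).shap_values(X_val[features]))
        if vals.ndim == 3:           # assumed (n_classes, n_samples, n_features)
            vals = vals.mean(axis=0)
        mean_shap = vals.mean(axis=0)                # per-feature average
        drop = [f for f, v in zip(features, mean_shap) if v < 0]
        if not drop:                                 # nothing left to remove
            break
        features = [f for f in features if f not in drop]
    return best
\end{verbatim}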
\subsection{Model training and testing framework}
\subsubsection{Modelling process}
This study uses an extended-window approach to test the models in order to avoid the mistake of using information from future matches to predict past matches, in contrast to a number of other studies on sports outcome prediction. \citet{bunker2019machineANN} outline in detail why standard cross-validation is not suitable for evaluating prediction models in this domain. The pattern of training and testing windows is shown in Figure~\ref{fig:Window extension strategies for predictive models}, while the modelling process is shown in Figure~\ref{fig:modelling process}.
Training sets from the immediately preceding seasons are used to make predictions for the next season. This strategy was followed because it has been shown that the most recent matches are the most informative for predicting upcoming matches: football, being a high-intensity competitive sport, evolves over time in its macro tactics and in the detail of the players' actions\cite{smith2007nature}. As the number of matches of each Type varies from season to season, the size of the test set is set at 20 to 30 matches depending on the number of matches of a particular Type in the current season.
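A minimal sketch of the windowing scheme is given below; the window size and the layout of the initial training and validation blocks are illustrative.
\begin{verbatim}
# Extended-window evaluation: each test block is predicted by a model trained
# only on earlier matches, with a validation block of the same size
# immediately preceding it.
def extended_windows(n_matches, test_size=25):
    """Yield (train, validation, test) index ranges over chronologically
    ordered matches of a single Type."""
    test_start = 2 * test_size    # room for an initial train + validation block
    while test_start + test_size <= n_matches:
        val_start = test_start - test_size
        yield (range(0, val_start),
               range(val_start, test_start),
               range(test_start, test_start + test_size))
        test_start += test_size   # the training window then extends forward
\end{verbatim}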
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{Windowapproach.png}
\caption{Window extension strategies for predictive models}
\label{fig:Window extension strategies for predictive models}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Step.png}
\caption{Modelling process}
\label{fig:modelling process}
\end{figure}
The validation set is used in order to select the best model during parameter tuning. When using models where parameter tuning has little impact on model quality (e.g. Voting Classifier) the validation set is not used and the part of the model that would otherwise be part of the validation set is merged with the training set. The validation set is usually set to the same size as the test set for that window.
\subsubsection{Model evaluation}
This study evaluates the predictive ability of each classifier using four metrics: accuracy, precision, recall and F1 score.
Accuracy is used as the primary metric for ranking the performance of the algorithms, while the remaining metrics are reported for reference.
These evaluation metrics are calculated as follows:
\begin{align}
Accuracy = \frac{TP+TN}{TP+FN+FP+TN}
\end{align}
where the denominator part indicates the number of all samples and the numerator part indicates the number of all correctly predicted samples.
\begin{align}
Precision = \frac{TP}{TP+FP}
\end{align}
where the denominator is the number of samples predicted as a particular result and the numerator is the number of correctly predicted samples for that result. For the three-class problem, the reported precision is the average of the per-class precisions weighted by the proportion of each result in the sample.
\begin{align}
Recall = \frac{TP}{TP+FN}
\end{align}
For a single class, recall coincides with the per-class accuracy. Recall in this study is reported as macro-recall, i.e. the unweighted mean of the per-class recalls.
\begin{align}
F_{1} = \frac{2\times Precision \times Recall}{Precision + Recall}
\end{align}
To evaluate the relative capability of the models, two baseline models were added for comparison. Both are taken from the DummyClassifier class in the scikit-learn library: they ignore the training data and make random predictions on the test set. Baseline model 1 picks one of the three possible results uniformly at random as the prediction. Baseline model 2 is similar, but its predictions follow the distribution of the results in the test set.
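A minimal sketch of these metrics and baselines with scikit-learn is given below; the F1 averaging and the use of the stratified strategy for baseline 2 (which follows the training rather than the test distribution) are our own approximations.
\begin{verbatim}
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Accuracy, weighted precision, macro recall and (weighted) F1 for a test set.
def evaluate(y_true, y_pred):
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="weighted",
                                     zero_division=0),
        "recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="weighted", zero_division=0),
    }

baseline_1 = DummyClassifier(strategy="uniform", random_state=0)     # random pick
baseline_2 = DummyClassifier(strategy="stratified", random_state=0)  # class distribution
\end{verbatim}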
\section{Result and discussion}\label{sec2}
\subsection{Match classification}\label{fm}
This study considered the 1,140 English Premier League matches of the three seasons 2019-2021, each with three possible outcomes, namely home win, away win and draw, and classified them according to the Kelly Index. Matches with multiple bookmakers having a Kelly Index greater than 1 were categorised as Type 1. Matches with only one bookmaker having a Kelly Index greater than 1 were categorised as Type 2. Matches with no bookmaker having a Kelly Index greater than 1 were categorised as Type 3. The precise numbers in each season are shown in Table~\ref{Number of matches divided into different types}. The number of Type 1 matches decreases yearly, while the number of Type 2 matches increases yearly. This may indicate that bookmakers are generally becoming progressively more conservative in setting their odds.
\begin{table}
\centering
\caption{\label{Number of matches divided into different types}Number of matches according to different types and categories}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{lcccccc}
\hline
\multirow{2}{*}{Category} & \multirow{2}{*}{Season} & \multicolumn{4}{c}{Result Number} & \multirow{2}{*}{Draw proportion} \\
& & Home & Away & Draw & \multicolumn{1}{l}{All} & \\
\hline
\multicolumn{1}{c}{Type 1} & 2019 & 61 & 28 & 17 & 106 & 16.0\% \\
& 2020 & 50 & 43 & 18 & 111 & 16.2\% \\
& 2021 & 42 & 18 & 16 & 76 & 21.1\% \\
\multicolumn{1}{c}{Type 2} & 2019 & 35 & 28 & 18 & 81 & 22.2\% \\
& 2020 & 36 & 29 & 22 & 87 & 25.3\% \\
& 2021 & 55 & 42 & 27 & 124 & 21.8\% \\
\multicolumn{1}{c}{Type 3} & 2019 & 76 & 60 & 57 & 193 & 29.5\% \\
& 2020 & 58 & 81 & 43 & 182 & 23.6\% \\
& 2021 & 66 & 69 & 45 & 180 & 25.0\% \\
\multicolumn{1}{c}{All} & 2019 & 172 & 116 & 92 & 380 & 24.2\% \\
& 2020 & 144 & 153 & 83 & 380 & 21.9\% \\
& 2021 & 163 & 129 & 88 & 380 & 23.2\%
\\
\hline
\end{tabular}
\end{table}
Nearly half of all 1140 matches belonged to Type 3. In these matches, the bookmakers were not confident enough to set odds that would have constituted an increased liability for them.
Ultimately, Type 3 matches produced the highest percentage of draws, while Type 1 matches had the lowest percentage of draws, confirming that the Kelly Index reduces uncertainty to some degree. Although draws are difficult to predict, they do not account for more than a third of matches in any category. In addition, home-team wins in Type 1 matches were much more numerous than away-team wins, whereas away-team wins were more numerous in the Type 3 matches. Bookmakers appear to consider both draws and home advantage when formulating their odds, and are more likely to reduce the odds of a home win when setting odds for home teams.
Although bookmakers can adjust the odds based on their own predictions, upsets often happen in football matches. It is not always straightforward to precisely define which matches are upsets based on the available data. This study identifies matches as an upset when the study's prediction was correct and the bookmaker not only predicted incorrectly, but also set the highest odds on the actual outcome of the match. See the next sub-section for the predicted results. The percentage of upsets for each type is shown in Figure~\ref{fig:Percentage of upset in each type of match}.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{coldGame.png}
\caption{Percentage of upset in each type of match}
\label{fig:Percentage of upset in each type of match}
\end{figure}
Betting agencies are poor predictors of Type 3 matches compared to the other two types. Upsets are very unlikely in the Type 1 matches, but they become a regular occurrence in the Type 3 matches. Figure~\ref{fig:Distribution of odds set by the six bookmakers on the predicted results of this study} aggregates the spread of the odds across all six betting agencies in this study. According to this figure, on average, bookmakers set significantly lower odds on results in Type 1 matches than in the other two categories.
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textwidth]{avg_odd_range.png}
\caption{Distribution of odds set by the six bookmakers on the predicted results of this study}
\label{fig:Distribution of odds set by the six bookmakers on the predicted results of this study}
\end{figure}
\subsection{Result prediction}
The prediction accuracy, precision, recall and average ranking of each algorithm across the prediction process for Type 1, Type 2 and Type 3 matches are all shown in Table~\ref{Prediction results from various machine learning algorithms in the Type 1 match}, Table~\ref{Prediction results from various machine learning algorithms in the Type 2 match} and Table~\ref{Prediction results from various machine learning algorithms in the Type 3 match} respectively. The algorithms are rank-ordered from best-performing onwards. The average ranking is likewise calculated by the accuracy of each algorithm across all different forecasting windows. Meanwhile, the accuracies of all the algorithms across the different Types are contrasted with the accuracies attained across all matches without taking the Kelly Index into account. These results can be seen in Table~\ref{Prediction results from various machine learning algorithms in the Type all matches}.
\begin{table}[]
\centering
\caption{\label{Prediction results from various machine learning algorithms in the Type 1 match}Prediction results from various machine learning algorithms in the Type 1 match}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{@{}
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c @{}}
\toprule
\textbf{Algorithm} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F1\_Score} & \textbf{Rank} \\ \midrule
CatBoost & 70.0\% & 66.6\% & 54.8\% & 63.6\% & 1.1 \\
Logistic Regression & 67.9\% & 64.8\% & 53.6\% & 62.3\% & 3.1 \\
Random Forest & 68.9\% & 64.3\% & 54.9\% & 63.4\% & 3.2 \\
Voting & 66.6\% & 59.4\% & 52.4\% & 61.7\% & 4.3 \\
KNN & 62.1\% & 60.6\% & 52.8\% & 61.3\% & 4.5 \\
Gradient Boosting & 62.5\% & 59.6\% & 51.7\% & 60.8\% & 4.8 \\
Stacking & 65.5\% & 54.1\% & 50.9\% & 59.1\% & 5.0 \\
Decision Tree & 60.4\% & 56.7\% & 48.9\% & 58.2\% & 5.1 \\
XGBoost & 63.8\% & 57.1\% & 51.0\% & 59.6\% & 5.3 \\
Baseline 1 & 47.1\% & 33.5\% & 31.5\% & 37.4\% & 8.8 \\
Baseline 2 & 30.4\% & 36.9\% & 29.9\% & 32.1\% & 9.8 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{\label{Prediction results from various machine learning algorithms in the Type 2 match}Prediction results from various machine learning algorithms in the Type 2 match}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{@{}
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c @{}}
\toprule
\textbf{Algorithm} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F1\_Score} & \textbf{Rank} \\ \midrule
CatBoost & 49.7\% & 38.8\% & 42.0\% & 43.3\% & 2.8 \\
Logistic Regression & 51.0\% & 49.3\% & 45.4\% & 48.8\% & 3.1 \\
Random Forest & 52.7\% & 50.9\% & 46.5\% & 49.2\% & 3.4 \\
Decision Tree & 45.9\% & 45.3\% & 42.6\% & 45.6\% & 3.7 \\
Stacking & 49.3\% & 44.5\% & 42.1\% & 43.8\% & 4.3 \\
Voting & 47.9\% & 44.5\% & 42.7\% & 45.7\% & 4.9 \\
XGBoost & 48.6\% & 46.0\% & 43.6\% & 46.7\% & 5.0 \\
KNN & 39.7\% & 40.1\% & 37.5\% & 39.9\% & 5.9 \\
Gradient Boosting & 40.8\% & 39.7\% & 37.6\% & 40.0\% & 6.2 \\
Baseline 1 & 40.1\% & 29.0\% & 32.3\% & 31.7\% & 6.9 \\
Baseline 2 & 31.5\% & 33.5\% & 31.3\% & 32.1\% & 8.7 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{\label{Prediction results from various machine learning algorithms in the Type 3 match}Prediction results from various machine learning algorithms in the Type 3 match}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{@{}
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c @{}}
\toprule
\textbf{Algorithm} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F1\_Score} & \textbf{Rank} \\ \midrule
Decision Tree & 40.0\% & 38.5\% & 37.5\% & 38.5\% & 3.9 \\
CatBoost & 40.0\% & 37.6\% & 36.3\% & 35.2\% & 4.0 \\
Random Forest & 41.1\% & 39.4\% & 38.4\% & 39.1\% & 4.2 \\
Logistic Regression & 40.7\% & 38.8\% & 38.2\% & 39.1\% & 4.5 \\
Voting & 40.0\% & 39.1\% & 37.7\% & 38.5\% & 4.6 \\
Gradient Boosting & 37.7\% & 38.0\% & 37.0\% & 37.8\% & 5.0 \\
Stacking & 39.1\% & 35.5\% & 35.9\% & 36.2\% & 5.2 \\
XGBoost & 39.6\% & 38.8\% & 37.9\% & 39.0\% & 5.4 \\
KNN & 35.3\% & 34.9\% & 33.9\% & 34.8\% & 5.8 \\
Baseline 2 & 37.3\% & 38.3\% & 37.0\% & 37.6\% & 6.2 \\
Baseline 1 & 36.4\% & 26.3\% & 33.4\% & 26.4\% & 6.4 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{\label{Prediction results from various machine learning algorithms in the Type all matches}Prediction results from various machine learning algorithms in all matches}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{@{}
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c @{}}
\toprule
\textbf{Algorithm} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F1\_Score} & \textbf{Rank} \\ \midrule
CatBoost & 51.9\% & 45.7\% & 44.7\% & 45.3\% & 2.6 \\
Logistic Regression & 50.6\% & 44.0\% & 43.5\% & 45.2\% & 3.2 \\
Random Forest & 52.0\% & 47.7\% & 44.7\% & 45.7\% & 3.4 \\
Stacking & 51.3\% & 46.7\% & 44.5\% & 45.9\% & 3.6 \\
Decision Tree & 50.2\% & 46.6\% & 44.6\% & 46.2\% & 3.8 \\
Voting & 49.9\% & 46.7\% & 44.5\% & 47.0\% & 4.6 \\
Gradient Boosting & 47.1\% & 45.8\% & 43.3\% & 46.3\% & 5.0 \\
XGBoost & 48.4\% & 45.7\% & 43.5\% & 46.5\% & 5.7 \\
KNN & 45.4\% & 44.4\% & 42.0\% & 44.8\% & 5.7 \\
Baseline 1 & 40.5\% & 28.4\% & 32.8\% & 29.7\% & 8.0 \\
Baseline 2 & 31.9\% & 33.2\% & 32.1\% & 32.2\% & 9.2 \\ \bottomrule
\end{tabular}
\end{table}
The model offering the greatest predictive performance for the Type 1 matches across all forecasting windows was produced by CatBoost, with an accuracy of 70\%. This performance was significantly better than that of the baseline methods which scored well below 50\%.
The best model for the Type 2 matches was generated by Random Forest, although CatBoost ranked highest across all the forecasting windows. Random Forest achieved 53\%, a considerable reduction in accuracy compared to the Type 1 models. However, the best-performing models in the Type 2 setting still clearly outperformed the baseline predictions.
Accuracy decreased further for Type 3 matches, down to 41\%, which was the best accuracy and was registered by Random Forest, although the Decision Tree and CatBoost performed more consistently across all the forecasting windows. While the efficacy of the best-performing models was significantly reduced for Type 3 matches, they still contributed marginally more value than the baseline models.
Unexpectedly, stacking and voting methods were consistently outperformed by other algorithms which is in contrast to the performance of these algorithms in other studies covering competitive sports.
The results from the previous tables are depicted in Figure~\ref{fig:As the extended window progresses, the trend in prediction accuracy of the algorithms with the best predictive ability for different types of matches}, highlighting the accuracy of the best-performing algorithms over time for each season and Type of match. The dashed lines in the figure represent interpolations for windows in which no matches of a given Type met the Kelly Index criteria. The figure demonstrates that Type 1 predictions consistently outperformed the other Types.
Taken altogether, the results demonstrate the efficacy of the Kelly Index to reduce an element of uncertainty from the predictive problem and thereby increase the overall predictive accuracy.
\begin{figure}[hbt]
\centering
\includegraphics[width=1.0\textwidth]{United.png}
\caption{As the extended window progresses, the trend in prediction accuracy of the algorithms with the best predictive ability for different types of matches.}
\label{fig:As the extended window progresses, the trend in prediction accuracy of the algorithms with the best predictive ability for different types of matches}
\end{figure}
The next analysis considers the internals of the predictive models and uses SHAP to understand the impact of the key features on the best-performing algorithms, based on their ranks, across each Type of match. The feature importances and their effects are shown in Figure~\ref{fig:Scatterplot of SHAP values for the top 10 features of importance when being predicted for the three different types of matches and for all matches.}.
For Type 1 matches, the most influential features are the teams' historical match statistics, in particular the home-win record. These are matches in which the home advantage is present, and it is easier to determine from the home team's features whether the match will be one-sided. The rarity of draws among such matches also contributes to the higher prediction accuracy.
For Type 2 matches, as prediction becomes more difficult, the historical data on the two teams becomes less important and is replaced by the odds data provided by the bookmakers before the match.
Meanwhile, in predicting Type 3 matches, historical data appears to be inconsequential; the features based on the bookmakers' estimates retain the advantage. The already low accuracy achieved for Type 3 matches therefore still depends on the bookmakers' forecasts, and the odds in this case have an almost direct influence on the predictions for these matches.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{new_united_shap.png}
\caption{Scatterplot of SHAP values for the top 10 features of importance when being predicted for the three different types of matches and for all matches.}
\label{fig:Scatterplot of SHAP values for the top 10 features of importance when being predicted for the three different types of matches and for all matches.}
\end{figure}
\subsection{Betting returns}
This section compares traditional football betting strategies with betting on the classified matches, to examine whether there is a way to maximise returns or minimise risk. We first highlight the distribution of the confidence probabilities of the machine learning algorithms across the different Types, as this forms a key factor when comparing betting strategies.
Table \ref{The distribution of confidence in the predicted results of each type across seasons in different ranges} shows the distribution of confidence in the predicted outcomes for the three types of matches. Even for the most predictable matches, the machine learning models rarely predict a result with high confidence. Surprisingly, the proportion of predictions made with extreme confidence was similar for the easy-to-predict and the hard-to-predict matches, and such predictions were rare in both.
\begin{table}
\centering
\caption{\label{The distribution of confidence in the predicted results of each type across seasons in different ranges}The distribution of confidence in the predicted results of each type across seasons in different ranges}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{lccccccc}
\toprule
\multirow{2}{*}{Type} & \multicolumn{1}{l}{\multirow{2}{*}{Season}} & \multicolumn{5}{c}{Range of confidence} & \multicolumn{1}{l}{\multirow{2}{*}{Match number}} \\
& \multicolumn{1}{l}{} & 0.7-1.0 & 0.6-0.7 & 0.5-0.6 & 0.4-0.5 & 0.33-0.4 & \multicolumn{1}{l}{} \\\midrule
\multirow{3}{*}{Type1} & \vcell{2019} & \vcell{8.5\%} & \vcell{10.4\%} & \vcell{15.09\%} & \vcell{55.7\%} & \vcell{10.4\%} & \vcell{106} \\[-\rowheight]
& \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle \\
& 2020 & 0.0\% & 0.0\% & 4.50\% & 42.3\% & 53.2\% & 111 \\
& 2021 & 2.6\% & 25.0\% & 27.63\% & 36.8\% & 7.9\% & 76 \\
\multirow{3}{*}{Type2} & \vcell{2019} & \vcell{17.3\%} & \vcell{28.4\%} & \vcell{13.58\%} & \vcell{25.9\%} & \vcell{14.8\%} & \vcell{81} \\[-\rowheight]
& \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle \\
& 2020 & 5.8\% & 10.3\% & 33.33\% & 41.4\% & 9.2\% & 87 \\
& 2021 & 11.3\% & 33.1\% & 18.55\% & 24.2\% & 12.9\% & 124 \\
\multirow{3}{*}{Type3} & \vcell{2019} & \vcell{6.2\%} & \vcell{6.2\%} & \vcell{19.69\%} & \vcell{48.7\%} & \vcell{19.2\%} & \vcell{193} \\[-\rowheight]
& \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle & \printcellmiddle \\
& 2020 & 2.2\% & 8.2\% & 21.98\% & 47.8\% & 19.8\% & 182 \\
& 2021 & 2.8\% & 8.9\% & 21.11\% & 47.2\% & 20.0\% & 180 \\
\midrule
\end{tabular}
\end{table}
We next construct a simulated betting scenario in order to determine the return on investment (ROI) of several strategies. We hypothetically invest \$1 on each match and receive the corresponding profit according to the odds for a correct prediction, or lose the \$1 stake for a wrong prediction. The ROI is calculated by dividing the investment profit by the total amount invested; a positive or negative ROI determines whether an investment strategy is profitable or not.
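A minimal sketch of this calculation is given below, with outcomes encoded as H, A and D and one dictionary of decimal odds per match (an illustrative data layout):
\begin{verbatim}
# Flat-stake ROI: $1 on the predicted outcome of every selected match.
def roi(predictions, outcomes, odds):
    """predictions/outcomes: sequences of 'H', 'A' or 'D'; odds: one dict of
    decimal odds per match, e.g. {'H': 1.8, 'A': 4.2, 'D': 3.6}."""
    profit = 0.0
    for pred, outcome, match_odds in zip(predictions, outcomes, odds):
        if pred == outcome:
            profit += match_odds[pred] - 1.0   # winnings minus the $1 stake
        else:
            profit -= 1.0                      # stake lost
    return profit / len(predictions)           # profit per dollar invested
\end{verbatim}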
Firstly, a set of benchmark ROIs is prepared and shown in Table~\ref{All invested in one type of return}. These are the returns obtained by betting with each of the six bookmakers according to the model with the highest prediction accuracy in each Kelly Index category, as well as the baseline that includes all matches. This naive approach yields marginal returns in only two cases, with losses in all other scenarios.
\begin{table}[]
\centering
\caption{\label{All invested in one type of return}Return on total investment by type of competition}
\fontsize{7pt}{9pt}
\selectfont
\begin{tabular}{@{}
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c
>{\columncolor[HTML]{FFFFFF}}c @{}}
\toprule
\textbf{} & \textbf{Type1} & \textbf{Type2} & \textbf{Type3} & \textbf{Baseline} \\ \midrule
\textbf{Match number} & 292 & 293 & 555 & 1140 \\ \hline
\textbf{Bet365} & -1.1\% & -5.9\% & -3.8\% & -6.7\% \\
\textbf{Interwetten} & 0.9\% & -4.8\% & -3.4\% & -5.4\% \\
\textbf{Bet\&Win} & -0.2\% & -5.5\% & -3.8\% & -6.2\% \\
\textbf{Pinnacle} & 1.4\% & -3.5\% & -1.1\% & -4.1\% \\
\textbf{VC Bet} & -0.7\% & -6.0\% & -3.3\% & -6.3\% \\
\textbf{William Hill} & -0.5\% & -5.6\% & -3.6\% & -6.1\% \\ \bottomrule
\end{tabular}
\end{table}
The next step in the experiments was to calibrate the investment strategy based on the confidence of the predictive model in the outcome of each match: a bet is placed only if the model's confidence meets or exceeds a predefined threshold. Several thresholds were examined for their returns. For each Type, the best-performing model was selected, and odds from Pinnacle were used.
The results of the experiments are shown in Figure \ref{fig:Trend in returns}, highlighting the ROI over time and seasons. Over time, all strategies based on both Type and model confidence end with a negative ROI, with the exception of a single strategy based on Type 1 matches and a confidence threshold of 70\%. The final ROI of this strategy is approximately 17\%.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{Return_1117.png}
\caption{A graph of the return on investment as the match progresses, based on how confident the machine learning method is in predicting the result to determine whether to invest in the match.}
\label{fig:Trend in returns}
\end{figure}
\section{Conclusions and future work}\label{sec13}
This study has considered the problem of predicting the outcomes of football matches using Premier League match data from the 2019-2021 seasons. The proposed strategy used the Kelly Index to first categorise the matches into three groups, each representing a different level of uncertainty, or predictability. A range of machine learning algorithms was explored for predicting the outcomes of the matches in each category in order to determine the utility of decomposing the predictive problem into sub-tasks. The paper devised a range of previously unexplored features as well as machine learning algorithms not yet investigated in this domain. The study found that ensemble-based algorithms outperformed all other approaches, including the benchmark approaches, while the models produced results competitive with prior works.
The paper also validates the proposed approaches by benchmarking them against bookmaker odds in order to determine which strategies are able to return a profit on investment. A method combining the Kelly Index with predictive confidence thresholds was developed and investigated. The findings indicate that a strategy focusing on easy-to-predict matches for which the machine learning models have a high predictive confidence can return a profit over the long term. The draw outcome nevertheless remains a non-negligible part of the match data and is difficult for machine learning to predict, and the available features cannot fully explain the reasons for a match result.
\section{Introduction}
\begin{figure}
\begin{center}
\includegraphics[scale=0.1]{prizes.jpg}
\caption{\sl Speed skydivers at a European championship award ceremony. The winner was Ms. Lucia Lippold (middle) who is a university graduate student, a member of the German National Team for Skydiving. Her record in speed skydiving is 417.68 km/h as measured by a GPS~\cite{paralog}. Photo: Private collection of Lucia Lippold.}
\label{fig:Lucy}
\end{center}
\end{figure}
Speed Skydiving is the fastest non-motorized sport, in which Earth's gravity and the geometry of the skydiver moving through Earth's atmosphere play the major role in determining the outcome of a jump. It is a {\it space age} sport which relies heavily on science in almost all its aspects, from the way the jump is performed to the equipment and the rules of the sport themselves. The main goal of the athletes (speed skydivers) is to achieve the fastest freefall speed possible over a given distance. According to physics, there are in principle only two factors that influence the outcome of the jump: the skydiver's mass and the body orientation presented to the atmospheric friction (air resistance). In short, what counts is the terminal speed achieved through aerodynamics. How the athletes achieve their highest terminal speed during a jump can depend on a number of factors\footnote{{\it e.g.\ } a bad exit from the aircraft can adversely affect the process of reaching terminal speed.}.
\noindent Speed skydiving is a discipline within air sports that relies heavily on technology for measurements. This skydiving discipline has been part of international parachuting competitions for the past two decades, and its rules and regulations have been updated a number of times. The speed skydivers strive to achieve their highest terminal speed using the aforementioned aspects---whether or not they are aware of the underlying physics of the sport.
In contrast to the standard form of skydiving, in which skydivers face the Earth's surface in a belly position\footnote{In other freefall disciplines of the sport, such as freefall style or free fly, skydivers are not necessarily in a belly position.} (Fig.~\ref{fig:Teamfire}) and typically reach 200 km/h, speed skydivers try to make their body as streamlined as possible (see Fig.~\ref{fig:Daniel}). As such they easily reach speeds in excess of 400 km/h!
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.25]{teamfire1.jpg}
\caption{\sl Team Fire, a former Swedish national team in 4-way formation skydiving (FS), during practice. In FS the skydivers face the Earth with their belly at all times, and hence their cross-sectional areas are always greater than those of their peers performing a so-called head-down body position, whose $C_d$ can be approximated by that of a half-sphere. Photo: Team Fire.}
\label{fig:Teamfire}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.2]{daniel.jpg}
\caption{\sl A speed skydiver exiting an aircraft, attempting to be as streamlined as possible. Shown is Mr. Daniel Hagström of Sweden's National Team right after exit, diving into his streamlined body orientation in order to reach maximum speed. Mr. Hagström's personal best is in excess of 470 km/h. Photo: Private collection of Daniel Hagström.}
\label{fig:Daniel}
\end{center}
\end{figure}
\noindent Skydiving itself captures the imagination of the public by opening a new world of adventure, speed and the unique experience of free-falling. Luckily this is something in which anyone can now participate through a tandem skydive\footnote{It is not possible for non-skydivers to experience speed skydiving through tandem skydiving.}. However, speed skydiving is an advanced form of skydiving, as it requires a good command of motion in freefall and additional procedures in order to complete the jump in a safe manner. To elaborate, we walk the reader through the whole process of a typical speed skydive. The first step is the exit from the aircraft at 4000 meters altitude. The second is the action itself: the speed skydiver forms herself into a streamlined shape in order to penetrate the ocean of air down to an altitude of 1700 meters, at which the measurement ends. The speed skydiver then has to slow down in order to deploy her parachute before 1000 meters altitude and subsequently fly it down to a designated landing site. The measuring device is then handed over to a team of judges at competitions, or the jumpers simply read off the device for their own records during training.
No other skydiving discipline measures freefall speeds as part of its competitions. Understanding how freefall speed changes with mass, time, surface area and air density enhances our understanding of downward motion with air resistance--especially in educational settings with speed skydiving as a case study. Our theoretical calculations can be compared directly with measurements made using the equipment that the athletes utilize.
Note that we use the terminology ``freefall'' even though skydivers are not really in free fall, due to atmospheric friction. Strictly speaking, there is no genuine freefall in Earth's atmosphere, nor even on Mars. It was recently shown by NASA's Jet Propulsion Laboratory that a rotorcraft (the Mars helicopter Ingenuity) could fly in the Martian atmosphere even though its pressure is only approximately 1\% of Earth's~\cite{NASAmars}.
\section*{Speed skydiving as a sport discipline}
It is the International Skydiving Committee (ISC\footnote{https://www.fai.org/commission/isc (cited on May 14, 2021).}) within the FAI that passes rules for competitions in all disciplines of skydiving. According to the latest regulations, the score of a speed skydiving jump is the average vertical speed, in km/h to the nearest hundredth, over the fastest 3 seconds that the competitor achieves within the performance window. A speed skydiver has approximately 7400 ft for her performance (measured from the minimum exit altitude to the breakoff altitude). It is clear that within this distance the speed skydiver has to achieve the best geometrical form in order to reach the highest terminal speed.
Measurement of the average speed is done using electronic GPS-based equipment. Prior to the adoption of GPS as the official way of measuring, barometric measurement was used\footnote{In the old system of measurement, the world record of 601.26 km/h was set by Mr. Henrik Raimer of Sweden at the World Championships in IL, USA in 2016~\cite{raimer}.}. Currently one can find rankings under both systems on the website of the International Speed Skydiving Association (ISSA)~\cite{ISSAweb}.
\section{Derivation of terminal speed}
The terminal speed is obtained from Newton's second law of motion. We consider all forces as acting on the center of gravity of the speed skydiver; this applies in general to any object falling through the atmosphere\footnote{This would be true on Mars as well, but may not apply to other planets whose atmospheres are not gaseous.}. Only two forces are considered. The first is the force due to Earth's gravitational field, $F = mg$, with $m$ being the total mass of the skydiver including her gear (the so-called exit mass) and $g$ the acceleration due to gravity at the location of the jump. The second is the atmospheric friction or drag force given by
\begin{equation}
F_d = kv^2,
\label{eq:drag}
\end{equation}
where $k = \frac{1}{2} \rho C_d A$, with $\rho$ being the air density, $C_d$ a dimensionless drag coefficient (reflecting the body orientation of the skydiver) and $A$ the cross-sectional area of the skydiver. Taking into account the direction of motion (with the positive direction pointing downwards) we get
\begin{equation}
ma = F_{grav} - F_d.
\label{eq:3forces}
\end{equation}
Expressing the LHS of the Eq.(\ref{eq:3forces}) in terms of speed as a function of time we have
\begin{equation}
\frac{dv}{dt} = g - \frac{kv^2}{m}.
\label{eq:DE1}
\end{equation}
We have two options, {\it i.e.\ } solve Eq.(\ref{eq:DE1}) for $v$ as a function of $t$ or as a function of a vertical displacement $y$.
In both cases we will see that the amount of time (and vertical displacement) required to reach a terminal speed from a jump from an aircraft is affected by what is contained in $k$.
\subsection{Terminal speed as a function of displacement/altitude}
In order to solve Eq.(\ref{eq:DE1}) for $v(y)$ we would have to rewrite it using a chain rule:
\begin{equation}
\frac{dv}{dt} = \frac{dv}{dy}\cdot\frac{dy}{dt} = g - \frac{kv^2}{m},
\end{equation}
with $\displaystyle \frac{dy}{dt} = v$. Rearranging the equation above we arrive at
\begin{equation}
\frac{\displaystyle v dv}{\displaystyle g - \frac{kv^2}{m}} = dy.
\label{eq:integral1}
\end{equation}
Substituting $ u = v^2$ in Eq.(\ref{eq:integral1}) gives
\begin{equation}
\frac{du}{2 \paren{ g - \frac{ku}{m}} } = dy,
\end{equation}
which after integration yields
\begin{equation}
-\frac{m}{2k} \ln (mg -ku ) = y + C,
\end{equation}
where $C$ being a constant to be determined. The initial speed of a given speed skydiver before she exits the aircraft is zero, hence
$\displaystyle C = -\frac{m}{2k} \ln(mg)$. This gives us the vertical displacement
\begin{equation}
y = -\frac{m}{2k}\ln\paren{1 - \frac{kv^2}{mg}}.
\label{eq:vertical}
\end{equation}
Solving Eq.(\ref{eq:vertical}) for $v$ yields
\begin{equation}
\label{eq:speeddistance}
\displaystyle v(y) = \sqrt{\displaystyle \frac{mg}{k}\paren{\displaystyle 1 - e^{\displaystyle -2ky/m} }} \, .
\end{equation}
Reinserting $k$ in the equation above we arrive at
\begin{equation}
v(y) = \sqrt{\frac{2mg}{\rho C_d A}} \paren{\displaystyle 1 - e^{\displaystyle \frac{-\rho C_d A y}{m}}}^{1/2}.
\end{equation}
We can readily see that the square-root factor is the terminal speed\footnote{which can easily be obtained from setting the LHS of Eq.(\ref{eq:3forces}) to zero, {\it i.e.\ } when the motion is acceleration-free.}, $v_T$. Graphically we can read off when $v_T$ is reached; mathematically it is reached when the second term in the parenthesis vanishes, i.e. when $y \rightarrow \infty$. In \cite{brazil} the terminal speed is obtained directly from the equilibrium condition, {\it i.e.\ } when drag and gravitational pull are equal.
\subsection{Terminal speed as a function of time}
To obtain terminal speed as a function of time we rewrite Eq.(\ref{eq:DE1}) as
\begin{equation}
\displaystyle dt = \frac{\displaystyle dv}{\displaystyle g \paren{ 1 - kv^2/mg}}.
\end{equation}
Integrating this equation\footnote{With initial conditions $t=0$ and $v=0$ disregarding the speed of the aircraft prior to the exit.} yields
\begin{equation}
t = \frac{1}{g\gamma} \tanh^{-1}(\gamma v)
\end{equation}
where $\gamma = \sqrt{k/mg}$. Reinserting $k$ (from Eq.(\ref{eq:drag})) and $\gamma$ and inverting the expression to obtain the terminal speed as a function of time we arrive at
\begin{equation}
v(t) = \sqrt{\frac{2mg}{\rho C_d A}} \tanh \paren{\sqrt{\frac{\rho C_d A g}{2m}}t}.
\label{eq:speedtime}
\end{equation}
This expression, together with Eq.(\ref{eq:speeddistance}), gives a clear picture of how the terminal speed is attained from the exit point.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.45]{naritgraphs.jpg}
\caption{\sl Typical graphs of speed as a function of time for a skydiver in a belly position versus a hypothetical head-down position. In this example we use the author's data, {\it i.e.\ } exit mass $m = 82$ kg. For the belly position we use the drag coefficient of a half-sphere, $C_d = 0.4$, and a cross-sectional area of 0.8 m$^2$, whereas for the head-down position we assume $C_d = 0.2$ and a cross-sectional area reduced to 0.4 m$^2$. In both cases we use $g=9.81$ \; m/s$^2$ and $\rho = 1.23$ kg/m$^3$. It is readily seen that the terminal speeds differ drastically due to the body geometry through the air; the mass of the skydiver is unchanged.}
\label{fig:naritgraphs}
\end{center}
\end{figure}
\noindent It is natural to assume that when a skydiver transitions in freefall from a standard belly position to a head-down position (in order to achieve a greater terminal speed), the only parameters affected by the process are $A$ and $C_d$\footnote{The appropriate drag coefficient for skydiving is rather uncertain, and different values are given in different sources. As a skydiver myself, I note that the belly position is not a flat plane, but rather an arch.}, as the skydiver cannot reduce or gain mass during the jump. We will study professional speed skydivers in the next section. It is to be noted that if $m$ increases while $A$ decreases, the skydiver reaches her terminal speed later and that terminal speed becomes greater. This is a common experience among skydivers (and it translates into how they learn to move their bodies during the fall through the air resistance of Earth's atmosphere).
\subsubsection{Three other parameters as a function of time?}
It is clear that the air density $\rho$, the drag coefficient $C_d$ and the cross-sectional area $A$ of the skydiver, as they appear in Eq.(\ref{eq:speeddistance}) and Eq.(\ref{eq:speedtime}), vary for each jump. It is, however, not the purpose of this paper to model each jump; rather, we aim to give an overview of the underlying physics as well as comparisons with the current state of development of the sport.
\section{Theoretical models versus reality}
For the sake of mathematical modeling we could treat the air density $\rho$ as a function of altitude (and hence of time). Body position is also a function of time, as speed skydivers need time to adjust their body position, so $C_d = C_d(t)$ and $A = A(t)$; however, it is nearly impossible to model skydiving in this way, as each jump is unique. Treating the air density as a function of altitude (which can be translated into the elapsed time from the exit) is unnecessary unless the jump under consideration is made from a very high altitude. No skydiver can dive out of an aircraft directly into the planned body orientation. Instead, they perform the exit in such a way that they gradually transform their body into the head-down position and use their arms and legs to control the streamlined flow through the atmosphere. A fully detailed model should cover the skydiver's transition from the exit to the final stable body position, with $C_d$ and $A$ as functions of time. We leave this to future investigations.
\subsection*{Mathematical models of speed skydivers}
Through private communication with three experienced competitive speed skydivers we are able to put numbers on their body orientation/geometry. It is obvious that their total mass, cross-sectional area and drag coefficient play a sensitive role in their terminal speed. Our method for an approximate model is as follows: the speed skydivers' best achieved terminal speeds are known, and their cross-sectional areas are approximated from interviews with them. Since this article is meant to be educational rather than a detailed mathematical modeling exercise, we will not dwell on the drag coefficients; we will simply see that $C_d$ varies greatly from the belly position to the speed skydiving position. We utilize Maple 2021~\cite{maple} as a tool for numerical computations and graph plotting.
Having these numbers, together with other parameters such as the air density and the acceleration due to Earth's gravity, we are able to obtain their $C_d$ through
\begin{equation}
C_d = \frac{2mg}{\rho A v^2_T}
\end{equation}
The cross-sectional area, $A$, is provided by the skydivers themselves as a rough approximation by a rectangular area; see Fig.~\ref{fig:marcohepp} for the example given by Mr. Marco Hepp, who has won the German National Championships in Speed Skydiving.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.3]{marco_hepp.jpg}
\caption{\sl Approximation of Marco Hepp's cross sectional area. Photo: Marco Hepp}
\label{fig:marcohepp}
\end{center}
\end{figure}
\noindent The values of $\rho$ and $g$ are the standard values used in all cases\footnote{Regardless of the location of the jumps performed.} in this paper. The highest achieved vertical speeds of the studied speed skydivers are retrieved from the ISSA website; we use the data from the eternal ranking measured by GPS~\cite{ISSA}.
The two biggest factors that affect the outcome of a speed skydive, once in the head-down position, are $C_d$ and the cross-sectional area $A$\footnote{A small parachute rig might change the cross-sectional area somewhat.}. The mass of the skydiver is fixed during the jump, but it does affect the terminal speed: the larger the mass, the higher the terminal speed, provided everything else remains invariant.
As a gedankenexperiment we propose theoretical improvements for the three speed skydivers through so-called 5 \% modifications, meaning that $C_d$ and $A$ are reduced by 5 \% whilst the mass increases by 5 \%. With these modifications we can see how their record terminal speeds improve, as shown in Fig.~\ref{fig:marcograph}, \ref{fig:danielgraph}, \ref{fig:lucygraph}.
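Since the asymptotic terminal speed scales as $v_T \propto \sqrt{m/(C_d A)}$, the effect of such a modification can also be estimated in closed form. The short Python sketch below illustrates only this asymptotic scaling (the curves in the figures come from the full time-dependent expression); it gives a gain of about 8 \%, of the same order as the increase quoted for Marco Hepp in Fig.~\ref{fig:marcograph}.
\begin{verbatim}
import math

# 5 % modification: m -> 1.05 m, Cd -> 0.95 Cd, A -> 0.95 A
gain = math.sqrt(1.05 / (0.95 * 0.95))        # ratio of modified to original v_T
print(f"asymptotic gain: {100.0 * (gain - 1.0):.1f} %")   # roughly +8 %
print(f"Marco Hepp: {3.6 * 139.74 * gain:.0f} km/h")      # in excess of 540 km/h
\end{verbatim}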
It is proposed that a standard drag coefficient for speed skydivers is in the vicinity of $0.25$. One might argue that $C_d = 0.2$ is a goal to achieve for anyone wishing to pursue this airsport discipline.
\subsubsection*{Marco Hepp of the German National Team for Skydiving}
Marco Hepp has an exit weight of 84.2 kg with $A = 0.66 \times 0.61$ m$^2$; we get $C_d = 0.17$ using his top vertical speed of 139.74 m/s.
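As a sanity check, the drag coefficient follows directly from the expression for $C_d$ above (a minimal Python sketch using only the numbers just quoted, treating the record GPS speed as the terminal speed):
\begin{verbatim}
rho, g = 1.23, 9.81                     # air density (kg/m^3), gravity (m/s^2)

def drag_coefficient(m, A, vT):
    """C_d = 2 m g / (rho A vT^2), with the record speed taken as v_T."""
    return 2.0 * m * g / (rho * A * vT ** 2)

# Marco Hepp: exit mass 84.2 kg, A = 0.66 m x 0.61 m, record speed 139.74 m/s
print(round(drag_coefficient(84.2, 0.66 * 0.61, 139.74), 2))   # -> 0.17
\end{verbatim}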
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{marcographs.jpg}
\caption{\sl An approximate modeling of terminal speeds for Marco Hepp with his record speed and with 5\% modifications. With the proposed 5 \% modifications the top terminal speed for Marco would increase by 7.7\%, reaching in excess of 540 km/h.}
\label{fig:marcograph}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Skydiver & $m_{\text{exit}}$ (kg) & $A$ (m$^2$) & $v_{\max}$ (m/s) & $C_d$ \\
\hline
Lucia Lippold & 73 & 0.30 & 116.02 & 0.26 \\
\hline
Marco Hepp & 84.2 & 0.40 & 139.74 & 0.17 \\
\hline
Daniel Hagström & 98 & 0.46 & 131.21 & 0.20 \\
\hline
\end{tabular}
\caption{Data for our approximate modeling of experienced speed skydivers.}
\end{center}
\end{table}
\newpage
\subsubsection*{Daniel Hagström of the Swedish National Team for Skydiving}
Daniel Hagström has an exit weight of 98 kg with $A = 0.70 \times 0.65$ m$^2$; we get $C_d = 0.20$ using his top vertical speed of 131.21 m/s.
\begin{figure}[hb!]
\begin{center}
\includegraphics[scale=0.4]{danielgraphs.jpg}
\caption{\sl A graph of the freefall speed as a function of time for Daniel Hagström with his record speed and with 5\% modifications. With the modifications, Daniel's theoretical top terminal speed would be above 500 km/h.}
\label{fig:danielgraph}
\end{center}
\end{figure}
\subsubsection*{Lucia Lippold of the German National Team for Skydiving}
Lucia (Lucy) Lippold, who won the European Championship with a speed of 417.68 km/h (116.02 m/s), has $C_d = 0.26$ using her exit weight of 73 kg and a cross-sectional area of 0.30 m$^2$.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.4]{lucygraphs.jpg}
\caption{\sl An approximate modeling of terminal speeds for Lucy Lippold with her record speed and with 5\% modifications. With the modifications, Lucy would fly towards the Earth's surface in the vicinity of 475 km/h!}
\label{fig:lucygraph}
\end{center}
\end{figure}
\newpage
\section{Summary and conclusion}
Skydiving is a physical activity commonly described in physics textbooks at all levels. It is a direct demonstration of how Newton's laws of motion are applied to objects moving through an atmosphere with air resistance. Detailed mathematical modeling of skydiving, however, is a varying challenge in that it involves many variables which can be considered to depend on time or on the vertical trajectory, since the skydiver's body orientation is not fixed ($C_d$ and $A$ as functions of time/altitude can be a subject in its own right\footnote{We should be able to find approximations in which $C_d(t)$ and $A(t)$ are linear functions from the exit altitude onwards, which can then be inserted into Eq.(\ref{eq:3forces}). Solving it for $v(t)$ or $v(y)$ would then be more demanding.}).
Gravity, air resistance, basic aerodynamics and calculus have been the main tools for discussing skydiving from the exit to the deployment of a parachute and to the very end of the activity, {\it i.e.\ } the landing with the parachute. Activities discussed within physics education using the concept of terminal speed and the utilization of parachute(s) include the landing of spacecraft returning to Earth, the recent landings of spacecraft on Mars~\cite{NASAmars} and the landing on Titan, the largest moon of Saturn, in 2005~\cite{titan}. A well-known terminology for this is EDL (Entry, Descent, Landing)~\cite{EDL}. We do not study the parachuting part of speed skydiving in this paper. In studying/designing EDL, in particular the descent through an atmosphere, knowledge of $C_d$, the object's shape, the cross-sectional area, terminal speeds, gravity as well as the atmospheric density is necessary. See {\it e.g.\ }~\cite{marspaper} for the physics of landing on the surface of Mars and other planets.
Speed skydiving is about the terminal speed, which is mainly affected by what can be controlled by the skydiver herself. The body's mass, gravity and air density are in principle constants for each computation, but $C_d$ and $A$ do play a vital role for each jump. This can be translated into a training procedure for this sport. We have illustrated the underlying physics of speed skydiving and the way the sport is competed, and presented some data from world-class speed skydivers. It is proposed that the drag coefficient for speed skydiving is in the vicinity of 0.25. Further investigations of this in wind tunnels with exact measurements, and detailed mathematical modeling for each speed skydiver, are encouraged.
\subsection*{Acknowledgments}
The author would like to extend sincere thanks to the following persons for feedback and various discussions: Stefan Ericson of University of Skövde, Surachate Kalasin of King Mongkut's University of Technology Thonburi, Gerda-Maria Klosterman-Mace, Marco Hepp, Daniel Hagström and Lucy Lippold. A special thanks to Bob Kucera and Isabella Malmnäs for valuable comments.
\vspace{1cm}
\section{Introduction and Background}
\label{sec:introduction}
Inspired by a few seminal studies \cite{hagerstrand1970people, chapin1974human} highlighting how different life-course decisions are interrelated and should be analysed jointly, several studies \cite{muggenburg2015mobility} have analysed travel behaviour as a long-term life-course dimension, similar to employment spells or residential location choices. Retrospective surveys such as the one proposed by Beige and Axhausen \cite{beige12} seek to uncover how life events, such as the birth of a child or marriage, are connected to one another as well as to long term mobility decisions such as commuting mode or relocation. Retrospective surveys are generally based on a life-course calendar, where the respondents are asked a series of questions for each year in retrospect (which is a grid analogous to Figure A1). Life-course calendars are commonly used to collect such information where panel data is unavailable or too costly. Life-course calendars have become popular because they have two main advantages over traditional panel surveys: the visual and mental relation of different kinds of events improves the quality of the data by providing reference points and preventing time inconsistencies. Moreover, the visual aid of the calendar tool eases the task of listing a potentially high number of different short events, a task that would be more difficult in traditional surveys \cite{freedman1988life}. However, while scholars agree on the fact that life-course calendars provide a helpful tool to facilitate the recall of life events, many acknowledge the issue of recall of all relevant events and the effort required to do so. In particular, Manzoni et al. \cite{manzoni20102} and Manzoni \cite{manzoni2012and} find that the recall depends not only on respondents' socio-demographic characteristics but also on the type of events that are recalled. For example, people tend to omit or reduce the duration of spells of unemployment. In addition, it has been shown by Schoenduwe et al. \cite{schoenduwe2015analysing} that participants find it easiest to remember ``emotionally meaningful, momentous, unique or unexpected events''. It is precisely these kinds of events that we consider in our analysis.
The data for our research originates from a retrospective survey that was carried out between 2007 and 2012 at the Department of Transport Planning of the Faculty of Spatial Planning at TU Dortmund. The questionnaire contains a series of retrospective questions about residential and employment biography, travel behaviour and holiday trips, as well as socio-economic characteristics.
In our work, we try to investigate if causal relations between life events can be identified in our dataset, such that these can then be used for choice modelling and provide valuable insights into understanding people's behaviour. We do this through causal discovery, a tool that deals with the process of automatically finding, from available data, the causal relations between the available (random) variables. This typically follows a graphical representation, where nodes are the variables, and arcs are the causal relationships between them. A causal discovery algorithm thus has as main objective to find directed graphical causal models (DGCMs) that reflect the data. There is also research on acyclic graphs and combinations of both, which we will not focus on here. For a broad review, we recommend Glymour et al. \cite{causal_discovery_review}.
The ideal way to discover causal relations is through interventions or randomized experiments. However, in many cases, such as ours, this is in fact impossible. Having only pure observational data, causal discovery tools search for systematic statistical properties, such as conditional independence, to establish (possible) causal relations. This implies also that we are limited to what the data \emph{can tell} us about causality. If the true causality structure is not observable from the data, such methods will at most find correlations disguised as causation, as happens in many other non-causal methods, such as regression. In such cases, a possible solution relies on domain knowledge, such as postulating the existence of common causes (e.g. a latent variable), or the addition of constraints.
The concept of causal discovery thus consists of algorithms to discover causal relations by analyzing statistical properties of purely observational data. In this paper, we intend to apply such methods to automatically uncover causality through temporal sequence data. Time or sequence are hardly a true cause in decision making (e.g. one will not have children or buy a car just because X time has passed, but due to some deeper cause). Therefore, it is certainly a bit too bold to claim that there is inherent causality in all the relationships that we will find. Instead, we see this process as uncovering the most likely graph of sequential patterns throughout the life of individuals.
Given a resulting causal graph, together with a dataset of temporal sequence records, we can now estimate transition probabilities for each event pair from the graph. We model the transition probabilities through survival analysis, which is typically used for time-to-event (TTE) modelling (or similarly for time-to-failure (TTF) or durations) \cite{survival_analysis_2}. Classical survival analysis is widely applied in medical studies to describe the survival of patients or cancer recurrence \cite{10.1371/journal.pbio.0020108, aalen-history, Cheng181ra50, Royston2013, 10.1001/jama.2016.3775}. Besides medical applications, it is now applied far beyond this field, in areas such as predictive maintenance, customer churn and credit risk.
Through survival analysis, given that an initial event occurred, we can obtain probabilistic temporal estimates of another event occurrence, with the survival values defined as $ S(t) = \mathrm{P}(T > t) $. This represents the probability of the event happening only after time $ t $ from the initial event, i.e. the probability of surviving past time $ t $.
In our case, survival probabilities will give us estimates of how likely it is that a certain life event will occur in each coming year after another specific life event. This leads to a better understanding of people's life-course calendars and how these depend on their socio-demographics. Examples include how the number of cars affects a decision of moving after getting married, the effect of the number of children on buying a car after moving, the difference between homeowners and non-homeowners when it comes to having a child after wedding, to name a few.
\section{Methods}
\label{sec:methods}
Our approach is formulated as a bi-level problem: in the upper level, we build the life events graph with causal discovery tools, where each node represents a life event; in the lower level, for selected pairs of life events, time-to-event modelling through survival analysis is applied to model the time-dependent probabilities of event occurrence.
\subsection{Causal Discovery}
Two causal discovery algorithms used in our research are Parents and Children (PC) and Greedy Equivalence Search (GES). In a resulting life events graph, we should have a set of key events as nodes, while the arcs correspond to temporal ``causality'' between pairs of events, where $ A \rightarrow B $ implies that $ A $ is precedent to $ B $. As discussed earlier, this is not assumed as true causality; instead, we see it as representing a high likelihood of observing a certain temporal order in such events.
The concept of conditional independence is key here:
\[ X \independent Y | S \,\, \iff \,\, P(X, Y|S) = P(X|S)P(Y|S) \,\,\textit{for all values of $X$, $Y$ and $S$} \]
The idea is quite intuitive: if two variables $ X $ and $ Y $ are independent when controlling for the (set of) variable(s) $ S $, then $ X $ and $ Y $ are conditionally independent given $ S $. This means that $ S $ is likely to be a common cause for $ X $ and $ Y $. Of course, just as testing for independence in general, testing for conditional independence is a hard problem \cite{shah2018hardness}, but approximate techniques exist to determine it from observational data \cite{dawid1979conditional}.
Algorithms such as Parents and Children (PC) or its derivatives such as the Fast Causal Inference (FCI) \cite{spirtes2000causation} are based on this principle to generate a causal graph. They essentially start from a fully connected undirected graph, and gradually ``cut'' links between conditionally independent nodes.
Another approach works in the opposite direction, where we start from an empty graph over the same set of nodes and gradually add arcs. The most well-known algorithm is the Greedy Equivalence Search (GES) \cite{chickering2002optimal}. At each step in GES, a decision is made as to whether adding a directed edge to the graph will increase the fit, as measured by some fitting score (e.g. the Bayesian information criterion (BIC) or a z-score). When the score can no longer be improved, the process reverses, checking whether removing edges will improve the score. It is difficult to compare this with PC in terms of the end quality of the discovered graph, but it is known that, as the sample size increases, the two algorithms converge to the same Markov Equivalence Class \cite{causal_discovery_review}. The vast majority of causal discovery algorithms are variations or combinations of the two approaches just mentioned.
In our dataset, we have four events, namely \emph{child birth}, \emph{wedding}, \emph{new car} and \emph{moving}, and two state variables, namely \emph{married} (True or False) and \emph{children} (numeric). While the final life events graph shall contain only events (the state variables will be useful later, but not as nodes in the graph), PC and GES allow for the inclusion of these variables; we therefore applied both the PC and GES algorithms \cite{kalainathan2019causal}.
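For illustration, the sketch below shows how such graphs can be obtained with the Causal Discovery Toolbox \cite{kalainathan2019causal}. The file name \texttt{pairs.csv} is hypothetical and stands for the table of event-pair observations (cf. Table A1), and note that the toolbox's PC and GES implementations rely on an R backend (the \texttt{pcalg} package).
\begin{verbatim}
import pandas as pd
import networkx as nx
from cdt.causality.graph import PC, GES   # Causal Discovery Toolbox

# One row per event-pair observation: events, state variables and
# "next event" indicators, as in Table A1 (file name is illustrative).
data = pd.read_csv("pairs.csv")

pc_graph = PC().predict(data)     # constraint-based: prune a complete graph
ges_graph = GES().predict(data)   # score-based: greedily add/remove edges

print(nx.to_pandas_adjacency(pc_graph))
print(nx.to_pandas_adjacency(ges_graph))
\end{verbatim}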
\subsection{Survival Analysis}
In survival analysis, the survival function $ S(t) $ can be defined from the cumulative distribution function $ F(t) $ as $ S(t) = 1 - F(t) $, which in terms of probabilities can be formulated as $ S(t) = \mathrm{P}(T > t) $. The survival function can be interpreted in terms of its gradient (slope), with steeper segments meaning the event is more likely to happen in the corresponding interval. In contrast, a flat segment means there is zero probability of the event happening in that interval.
The most commonly used statistical methods for survival analysis are the Kaplan-Meier method \cite{kaplan-meier} and the Nelson-Aalen method \cite{nelson, aalen}, which are univariate counting-based methods for general insights. To estimate the effect of covariates, a survival regression model can be formalized through the Cox proportional hazards (CPH) model \cite{coxph}. It assumes that all individuals share the same baseline hazard function, with the effect of covariates acting as a scaling factor. It provides interpretability but can lack accuracy on larger datasets.
In survival analysis, time-to-event (TTE) is expressed through two events: the \textit{birth} event and the \textit{death} event (hence the term ``survival''). Time between the two events is survival time, denoted by $ T $, which is treated as a non-negative random variable. Given the \textit{birth} event, it represents the probability distribution of the \textit{death} event happening in time.
Observations of true survival times may not always be available, thus creating a distinction between uncensored observations (true survival times available) and censored observations (true survival times unknown, with only the censorship time available). This information is expressed through an event indicator $ E $, with a binary outcome. The censorship time is assumed to be non-informative, and $ T $ and $ E $ are independent.
Two main outputs of survival analysis are the survival function and the (cumulative) hazard function. The survival function $ S(t) $ can be formulated starting from the cumulative distribution function (CDF), denoted by $ F $, which can be defined as the probability (\textrm{P}) of an event occurring up to time $ t $:
\begin{equation}
\label{eq:cdf}
F(t) = \mathrm{P}(T \le t) = \int_{0}^{t} f(x) dx
\end{equation}
with the properties of being a non-decreasing function of time $ t $, having bounds $ 0 \le F(t) \le 1 $, and being related to the survival function through $ F(t) = 1 - S(t) $. Given this, we can define the survival function as the probability of surviving past time $ t $, i.e. the probability of the event happening only after time $ t $:
\begin{equation}
\label{eq:surv}
S(t) = \mathrm{P}(T > t) = \int_{t}^{\infty} f(x) dx
\end{equation}
with properties of being a non-increasing function of $ t $, and having bounds on the survival probability as $ 0 \le S(t) \le 1 $ and $ S(\infty) = 0 $, given its relation with the CDF as $ S(t) = 1 - F(t) $. The probability density function (PDF) of $ T $, $ f(t) $, can be formalized as:
\begin{equation}
\label{eq:surv-pdf}
f(t) = \lim_{\Delta t \rightarrow 0} \frac{\mathrm{P}(t < T \le t + \Delta t)}{\Delta t}
\end{equation}
The hazard function $ h(t) $ defines the instantaneous hazard rate of the event happening at time $ t $, given that the event has not happened before time $ t $. It can be defined as a limit, as a small interval $ \Delta t $ approaches 0, of the probability of the event happening between time $ t $ and $ t + \Delta t $, as in \eqref{eq:hazard-def}. In addition, there is a direct relation between the hazard function and the survival function.
\begin{equation}
\label{eq:hazard-def}
h(t) =
\lim_{\Delta t \rightarrow 0} \frac{\mathrm{P}(t < T \le t + \Delta t \left\vert\right. T > t)}{\Delta t} =
\frac{f(t)}{S(t)} =
-\frac{d}{dt} \log{(S(t))}
\end{equation}
The cumulative hazard function $ H(t) $ is also used, and is especially important for model estimation \eqref{eq:cum-hazard}. The survival function can therefore be derived by solving the differential equation above, which gives the relation with the hazard and cumulative hazard functions and is expressed in \eqref{eq:surv-hazard}.
\begin{equation}
\label{eq:cum-hazard}
H(t) = \int_{0}^{t} h(x) dx = - \log{(S(t))}
\end{equation}
\begin{equation}
\label{eq:surv-hazard}
S(t) = \exp{\left( -\int_{0}^{t} h(z) dz \right)} = \exp{(-H(t))}
\end{equation}
Survival analysis is typically conducted through univariate survival models or through survival regression. The former takes into account only survival times, while the latter also includes covariates. The most widely used regression model is the Cox proportional hazards (CPH) model \cite{coxph}. In addition, there have been recent developments in applying machine learning models in survival analysis to improve prediction accuracy, albeit compromising on model interpretability. Machine learning algorithms, such as support vector machines \cite{surv-svm} and random forests \cite{surv-rf}, are common examples of such techniques. Besides better accuracy, they have also demonstrated superior efficiency over existing training algorithms, especially for large datasets.
To exploit non-linear relationships among the covariates, neural networks (NN) \cite{francisco-book-ch2} are used. A simple neural network was first proposed by Faraggi and Simon \cite{faraggi}. The approach is similar to the CPH, but optimizes a slightly different partial likelihood function. More recent developments are applications of deep neural networks (DNNs), with models like DeepSurv \cite{deepsurv}, DeepHit \cite{deephit} and Nnet-survival \cite{nnet-survival}. DeepSurv is a proportional hazards deep neural network. It is an extension of the method proposed by \cite{faraggi}, used to connect more than one layer in the network structure. While CPH and DeepSurv rely on the strong proportional hazards assumption, other models relax this condition. DeepHit uses a DNN to learn the survival distribution directly. It imposes no assumptions regarding the underlying processes and allows the effect of the covariates to change over time. Nnet-survival, on the other hand, is a discrete-time survival model. The hazard rate, which for the previously discussed methods is calculated as the instantaneous risk, is here defined for a specific time interval.
Because our dataset is relatively small for deep learning applications, and to retain interpretability, in this paper we use the univariate models and the CPH regression model.
\subsubsection{Univariate survival models}
\label{subsubsec:method-univariate-surv-models}
The estimation of the survival and cumulative hazard functions can be performed using non-parametric univariate models. Survival and cumulative hazard functions are functions of survival times with event indicator, i.e. $ S(t) = f(\mathrm{\mathbf{T}}, \mathrm{\mathbf{E}}) $ and $ H(t) = g(\mathrm{\mathbf{T}}, \mathrm{\mathbf{E}}) $. Given the ordered survival times, for each instant of interest on the timeline, the number of individuals at risk and the number of death events occurring is determined. The survival function is usually estimated using the Kaplan-Meier method \cite{kaplan-meier}, a product-limit estimator. It takes as input survival times and estimates the survival function as:
\begin{equation}
\label{eq:km}
\hat{S}(t) = \prod_{i: t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)
\end{equation}
where $ d_i $ is the number of death events at time $ t $ and $ n_i $ is the number of individuals at risk of death just prior and at time $ t $. Similarly, the cumulative hazard function can be estimated using the Nelson-Aalen method \cite{nelson, aalen} as:
\begin{equation}
\label{eq:na}
\hat{H}(t) = \sum_{i: t_i \leq t} \frac{d_i}{n_i}
\end{equation}
Hazard and cumulative hazard values cannot be directly interpreted, but are used for relative comparison among themselves in discovering which timeline instants are more likely to encounter a death event than others.
Univariate non-parametric survival models are simple and can provide flexible survival and hazard curves. They provide population-level insight into survival behaviour, but not at the individual level.
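In the lifelines implementation used later in this paper \cite{lifelines}, both estimators reduce to a few lines; the survival times and event indicators below are made-up values for illustration only.
\begin{verbatim}
import numpy as np
from lifelines import KaplanMeierFitter, NelsonAalenFitter

# Illustrative data: years between two events; E = 0 marks censored spells
T = np.array([1, 2, 2, 3, 5, 8, 20, 20])
E = np.array([1, 1, 1, 1, 1, 1, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(durations=T, event_observed=E)
print(kmf.survival_function_)       # Kaplan-Meier estimate of S(t)

naf = NelsonAalenFitter()
naf.fit(durations=T, event_observed=E)
print(naf.cumulative_hazard_)       # Nelson-Aalen estimate of H(t)
\end{verbatim}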
\subsubsection{Survival regression}
\label{subsubsec:method-survival-regression}
To account for the effect of covariates on the output functions, the Cox proportional hazards model is typically used for survival regression. It is a semi-parametric regression model that estimates the risk, i.e. the log of the hazard rate, of an event occurring as a linear combination of the covariates as $ \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}} $, with $ \bm{\beta} $ being a vector of coefficients and $ \mathrm{\mathbf{x}} $ denoting a vector of time-constant covariates. It maximizes the Cox's partial likelihood function (of the coefficients) \cite{coxph}:
\begin{equation}
\label{eq:coxph_likelihood}
L(\bm{\beta}) = \prod_{i: E_i = 1} \frac{\exp \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}}_i \right)}{\sum_{j: T_j \ge T_i} \exp \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}}_j \right)}
\end{equation}
where only uncensored data ($ i: E_i = 1 $) are used (hence the term ``partial''). The corresponding negative log-likelihood function for minimization is given by \eqref{eq:coxph_loglikelihood}. As implemented in \cite{lifelines}, the optimization problem is solved using maximum partial likelihood estimation (MPLE) and the Newton-Raphson iterative search method \cite{suli_mayers_2003}. Ties in survival times are handled using the Efron method \cite{efron}. L2 regularization is used with a scale value of 0.1.
\begin{equation}
\label{eq:coxph_loglikelihood}
l(\bm{\beta}) = - \log L(\bm{\beta}) = - \sum_{i: E_i = 1} \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}}_i - \log \sum_{j: T_j \ge T_i} \exp \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}}_j \right) \right)
\end{equation}
The hazard function is computed for each individual as a product of the population-level time-dependent baseline hazard $ h_0(t) $ and the individual-level time-constant partial hazard expressed through the exponent of a linear combination of the covariates. It is formulated as:
\begin{equation}
\label{eq:coxph_hazard}
h(t \vert \mathrm{\mathbf{x}}) = h_0(t) \exp \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}} \right)
\end{equation}
All individuals share the same baseline hazard, and the effect of covariates acts as a scaling factor through the partial hazard. The baseline hazard function is estimated using the Breslow method \cite{breslow}. The exponential form ensures that the hazard rates are always positive. The ratio of the hazard functions of two individuals is constant and is given by the ratio of their partial hazards (hence the term ``proportional''). Given the hazard function, the cumulative hazard function and the survival function can be computed using \eqref{eq:cum-hazard} and \eqref{eq:surv-hazard}, respectively.
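As a concrete illustration of the model just described, the sketch below fits a CPH model with lifelines \cite{lifelines}; the data frame and its values are synthetic, and \texttt{penalizer=0.1} with \texttt{l1\_ratio=0} corresponds to the L2 regularization with scale 0.1 mentioned above.
\begin{verbatim}
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic training frame: one row per observation, covariates plus the
# duration (years to the follow-up event) and the event indicator.
train_df = pd.DataFrame({
    "age":      [23, 31, 28, 45, 36, 52, 27, 33],
    "children": [0, 1, 0, 2, 1, 3, 0, 2],
    "cars":     [0, 1, 1, 2, 1, 1, 0, 2],
    "duration": [3, 1, 5, 2, 4, 20, 7, 2],
    "event":    [1, 1, 1, 1, 1, 0, 1, 1],
})

cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.0)   # ridge (L2) penalty
cph.fit(train_df, duration_col="duration", event_col="event")

cph.print_summary()                                     # coefficients, p-values
print(cph.predict_survival_function(train_df.head()))   # S(t | x) per individual
\end{verbatim}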
The goodness-of-fit measure is the concordance index (C-index) \cite{regr-model-book}, a commonly used metric to validate the predictive capability of survival models (analogous to the $ R^2 $ score in regression). It is a rank correlation score, comparing the order of observed survival times from the data and order of estimated survival times (or negative hazard rates) from the model. It is formulated as:
\begin{equation}
\label{eq:c_index}
C = \frac{1}{\epsilon} \sum_{i: E_i = 1} \sum_{j: T_j \ge T_i} \mathbf{1}_{f(\mathrm{\mathbf{x}}_i) < f(\mathrm{\mathbf{x}}_j)}
\end{equation}
where $ \epsilon $ is the number of samples used in the calculation and $ \mathbf{1} $ is the indicator function with a binary outcome based on the given condition. The C-index is equivalent to the area under a ROC (Receiver Operating Characteristic) curve, and ranges from \numrange{0}{1}, with values less than 0.5 representing a poor model, 0.5 being equivalent to a random guess, \numrange{0.65}{0.75} indicating a good model, and values greater than 0.75 indicating a strong model, with 1 being perfect concordance and 0 being perfect anti-concordance.
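Continuing the sketch above, out-of-sample concordance can be computed on a hold-out set with the utility function provided by lifelines (the hold-out frame below is again synthetic):
\begin{verbatim}
import pandas as pd
from lifelines.utils import concordance_index

# Synthetic hold-out frame with the same columns as the training frame above.
test_df = pd.DataFrame({
    "age":      [26, 41, 30, 55],
    "children": [0, 2, 1, 3],
    "cars":     [1, 1, 0, 2],
    "duration": [4, 2, 6, 20],
    "event":    [1, 1, 1, 0],
})

# A higher partial hazard means higher risk, i.e. shorter expected survival,
# so the negative partial hazard is used as the predicted score.
c_index = concordance_index(
    test_df["duration"],
    -cph.predict_partial_hazard(test_df),   # cph: the fitted model from above
    test_df["event"],
)
print(f"hold-out C-index: {c_index:.3f}")
\end{verbatim}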
\section{Data elaboration and model fitting}
\label{sec:data}
The data used in model estimation contains both uncensored and censored observations. Uncensored observations for an individual are survival times available directly as the number of years between two specific events, with the event observed indicator set to 1. Censored observations are created when only the birth event happened, with survival times set to 20 years and the event observed indicator set to 0.
To fit the models and validate their performance, the dataset, which comprises 1,487 individuals, was split into the training set, containing \SI{60}{\percent} of the samples, and the test set, as the hold-out set containing the remaining \SI{40}{\percent} of the samples. Stratified splitting was used to ensure a balanced presence of covariates' values in both datasets.
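A minimal sketch of these two steps is given below; the frame \texttt{pairs\_df}, its column names and the choice of stratifying on home ownership are illustrative assumptions rather than the exact procedure used here.
\begin{verbatim}
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Illustrative frame: one row per initial-event occurrence, with the year of
# the follow-up event set to NaN when it was never observed.
pairs_df = pd.DataFrame({
    "year_initial":  [1995, 1998, 2001, 2003, 2004, 2007],
    "year_followup": [1997, np.nan, 2002, np.nan, 2010, 2009],
    "owns_home":     [0, 1, 1, 0, 1, 0],
})

observed = pairs_df["year_followup"].notna()
pairs_df["event"] = observed.astype(int)                 # event indicator E
pairs_df["duration"] = np.where(
    observed,
    pairs_df["year_followup"] - pairs_df["year_initial"],
    20)                                                  # censor at 20 years

# 60/40 split, stratified here on home ownership (one possible choice).
train_df, test_df = train_test_split(
    pairs_df, test_size=0.4, stratify=pairs_df["owns_home"], random_state=0)
print(len(train_df), len(test_df))
\end{verbatim}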
Data elaboration and modelling was performed in Python \cite{python}, using packages for numerical and scientific computing, data manipulation and machine learning \cite{pandas, scikit-learn, numpy, scipy}. The visualizations were created using Python plotting libraries \cite{matplotlib}. Survival modelling was done in the lifelines library \cite{lifelines}.
An example of elaborated time-dependent data for one person from the dataset used for survival analysis is shown in Table A2.
\subsection{Covariates description}
The covariates are divided into five groups: general attributes, independent state attributes, dependent state attributes, transition attributes and indicator transition attributes. Their description, type and feasible values are shown in Table A3.
\subsection{Model fitting}
In survival regression, survival and cumulative hazard functions are functions of survival times with event indicator and covariates with corresponding coefficients, $ S(t) = f(\mathrm{\mathbf{T}}, \mathrm{\mathbf{E}}, \mathrm{\mathbf{x}}, \bm{\beta}) $ and $ H(t) = g(\mathrm{\mathbf{T}}, \mathrm{\mathbf{E}}, \mathrm{\mathbf{x}}, \bm{\beta}) $. Transition attributes are used as states for the analysis, whereas the remaining ones are used for conditioning. Table A4 shows results of fitting a CPH model for each pair of life events from the causality graph. The table also contains information on the number of existing samples for each pair, out of which, as mentioned above, 60 \% were used for fitting and 40 \% as hold-out data for validation.
\section{Results and Discussion}
\label{sec:results}
\subsection{Life Events Graph}
To build our graphs, we ran the Parents and Children (PC) and Greedy Equivalence Search (GES) algorithms on the prepared data. Not surprisingly, we verified that their final graphs were not mutually consistent in several components. \ref{fig:pc-ges} shows the results. We can also see that many event pairs have bidirectional arrows. This is neither surprising nor necessarily problematic; it only means that temporal precedence can flow in either direction, and there may be cyclic stages in life (child birth $ \rightarrow $ moving $ \rightarrow $ new car...).
\begin{figure}[h!]
\centering
\includegraphics[width=0.65\linewidth]{graphs-pc-ges}
\caption{Life events graphs created using PC and GES algorithms. (\textit{A}) PC. (\textit{B}) GES.}
\label{fig:pc-ges}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.65\linewidth]{graph-ges-reduced}
\caption{Final events graph containing four life events.}
\label{fig:ges-reduced}
\end{figure}
The PC graph shows relations involving \emph{wedding} and \emph{child birth} that are supported neither by the dataset nor by our intuition, in particular that \emph{child birth} precedes being \emph{married} (but not directly \emph{wedding}, which becomes an isolated event, only preceded by \emph{moving}). The child birth event almost certainly happens after the other events (as can be seen in Figure A2), which was not uncovered by PC. It is apparent that the \emph{married} state variable confuses the algorithm. In the GES case, on the other hand, the relationships do not contradict our intuition and the graph proposes more relationships. This is valuable from the perspective of our work, because it allows for more diversity in life-course trajectories. We therefore decided to keep working with the GES algorithm for the remainder of this paper. We recognize that this decision is certainly somewhat arbitrary, and it also reflects the well-known challenge of comparability of graphs in causal discovery \cite{singh2017comparative}. The algorithms themselves use different, non-comparable scoring schemes \cite{causal_discovery_review}.
The last step in this process was to reduce the graph to only events variables; henceforth we will only use the graph in \ref{fig:ges-reduced}.
\subsection{Empirical Life Event Occurrence Distribution}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\linewidth]{surv-hist}
\caption{Empirical histograms and corresponding survival functions. The x-axis shows age, the left-hand side y-axis shows observed counts of specific event in the dataset (depicted in blue) and the right-hand side y-axis shows derived survival functions from the data (depicted in orange). Since all of our samples are fully observed (i.e. uncensored), all survival functions converge to a value of zero. (\textit{A}) New car event. (\textit{B}) Moving event. (\textit{C}) Child birth event. (\textit{D}) Wedding event. (\textit{E}) Divorce event.}
\label{fig:hist-surv}
\end{figure*}
When a certain life event is most likely to happen in a person's life can be analysed from observed life-course data. Empirical histograms are a good way to understand occurrence distribution of events throughout the lifetime. Since in this case we work with completely observed data (only uncensored observations), the corresponding cumulative distribution functions (CDF), and thus the survival functions, can be directly derived. \ref{fig:hist-surv} shows histograms and survival functions for five life events: new car, moving, child birth, wedding and divorce.
The results show that buying a new car is likely to happen by the age of 40, with decreasing probability over time and with the age of 18 having the highest probability, which can be related to the legal minimum age for obtaining a driving license for unrestricted car use in Germany (\ref{fig:hist-surv}\textit{A}). Home relocation is most likely to happen in the late 20s, with the probability increasing up to the age of 25, and then decreasing up to the age of 50, after which people seem unlikely to move again (\ref{fig:hist-surv}\textit{B}). Having a child roughly follows a normal distribution peaking around the age of 30, with a small chance of happening after the age of 45 (\ref{fig:hist-surv}\textit{C}). Getting married roughly follows a skewed distribution with a heavier tail towards older ages, and is most likely to happen in the 20s, specifically around the mid 20s (\ref{fig:hist-surv}\textit{D}). Relating the events of getting married and having a child, having a child is shifted a couple of years after getting married, implying that in general having a child will likely happen soon after the wedding. Divorce is most likely to happen in the 30s, and generally before the age of 50; however, the small sample size indicates an overall low probability of this happening (\ref{fig:hist-surv}\textit{E}).
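Since all observations here are uncensored, each empirical survival curve in \ref{fig:hist-surv} is simply one minus the empirical CDF of the observed event ages; a minimal sketch (with made-up ages) is:
\begin{verbatim}
import numpy as np

ages = np.array([24, 26, 27, 29, 30, 31, 33, 36, 41])  # ages at child birth (made up)

grid = np.arange(18, 61)
survival = [(ages > t).mean() for t in grid]            # S(t) = P(T > t) = 1 - ECDF(t)
for t, s in zip(grid[::6], survival[::6]):
    print(t, round(s, 2))
\end{verbatim}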
\subsection{Describing General Life Event Transitions}
Survival and cumulative hazard functions are estimated using Kaplan-Meier and Nelson-Aalen methods (\eqref{eq:km} and \eqref{eq:na}, respectively), and are shown in \ref{fig:graphs-ges-surv}. Kaplan-Meier estimates of the survival function are shown in \ref{fig:graphs-ges-surv}\textit{A}, where colors represent survival probabilities ranging from \numrange{0}{1}. The more the color at time $ t $ is on the green side, the higher the survival probability $ S(t) $ is, which indicates that there is a smaller probability that the event will happen before or at time $ t $. This can also be interpreted as showing that the event has a higher probability of surviving up to time $ t $. Normalized hazard functions are derived from the cumulative hazard function estimates and are shown in \ref{fig:graphs-ges-surv}\textit{B}, with a range from \numrange{0}{1}.
\begin{figure*}[h!]
\centering
\includegraphics[width=1.0\linewidth]{graphs-ges-surv}
\caption{General life event transitions, represented in a life event graph. Nodes represent life events and directed edges represent life event pairs, with results shown on the right-hand side of the edges. Each edge is divided into 21 equal-length parts, representing a period of 20 years and the base year, with its color depicting the function value shown in the scale from the color bar on the side of each graph. (\textit{A}) Survival functions for each pair of life events. The function for each pair can be interpreted either individually or jointly compared among each other, as they all come from the same scale from \numrange{0}{1}. (\textit{B}) Normalized hazard functions derived from the cumulative hazard functions computed using the Nelson-Aalen method. These functions should be interpreted only individually, to compare hazard rates between intervals, and not regarded jointly, as they are normalized to a range from \numrange{0}{1} coming from different scales.}
\label{fig:graphs-ges-surv}
\end{figure*}
Looking at \ref{fig:graphs-ges-surv}\textit{A}, we can see that in the first 2-3 years after getting married, it is more likely that moving will happen next, rather than child birth or having a new car, which will then be superseded by having a child, as the survival probabilities decrease much faster after 3 years. Having a new car after getting married does not seem to be as important as child birth or moving. After having a child, moving is more likely than buying a new car. After moving, buying a car is unlikely compared to having a child, which is likely to happen within several years after moving. After buying a new car, there is a similar chance of moving as having a child.
\ref{fig:graphs-ges-surv}\textit{B} gives interesting insights regarding specific intervals when the event has a higher risk of happening on a timeline. For example, after getting married, the new car or moving event is very likely to happen within one year, whereas the child birth event has a higher risk of happening 10 or 11 years after getting married than in other periods. These estimates should be taken with caution: comparison of hazard rates among intervals gives individual difference between the rates. However, the hazard rates add up along the timeline, hence the cumulative risk of an event happening increases in time.
\subsection{The Effect of Covariates on Transition Probabilities}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\linewidth]{surv-avg-functions}
\caption{Average life event transitions behaviour for five pairs of life events. Predicted survival functions are computed for all dataset samples for each pair separately, which are then averaged, thus marginalizing over all the covariates to account for their empirical distributions. The blue line represents the survival function, and the light blue area around it is the confidence belt ($ \pm $ 1 standard deviation). The average survival functions have different shapes, with different confidence belts at different times. The narrower the confidence belt is, the less impact covariates have on the survival probability. Since not all of our samples are fully observed (some are censored as in some cases the event does not happen within the study period), survival functions do not end up at 0. (\textit{A}) Wedding to child birth transition. (\textit{B}) Wedding to new car transition. (\textit{C}) Moving to new car transition. (\textit{D}) Child birth to new car transition. (\textit{E}) Wedding to divorce transition.}
\label{fig:surv-avg-functions}
\end{figure*}
The Cox proportional hazards models were trained and validated for each pair of life events from the life events graph. To understand the relationships in the data and prevent drawing false interpretations from the findings, detailed data preparation and description, as well as models training and validation, can be found in the previous section and in Appendix. Important to note is that the covariates obtained from time-dependent variables (e.g. age, number of children, number of cars, etc.) are measured at the time of the initial baseline event, which may help understand some of the less intuitive results shown later in the text (e.g. those without (or fewer) children at the time of wedding are less likely to get divorced, which can be explained by the fact that those are the ones who do not bring children into a partnership, and this results in the lower divorce probability, as shown in \ref{fig:surv-functions}\textit{W}). Another explanation comes from the small sample size of some covariates values (see Figure A4), e.g. people with more cars tend to buy cars faster than the ones with less or no cars, as shown later in \ref{fig:surv-functions}\textit{N}.
\subsubsection{Average life event transitions behaviour}
Average transition behaviour can give us an insightful overview of the survival behaviour within each pair and among the pairs, as shown in \ref{fig:surv-avg-functions}. Having a child is very likely to happen within 5 years from getting married, and almost certainly within ten years, with a probability close to 0 after 10 years. There is a small probability of having a child in the same year the wedding happens (\ref{fig:surv-avg-functions}\textit{A}). Buying a new car after getting married, as depicted in \ref{fig:surv-avg-functions}\textit{B}, has almost identical probability as buying a new car after having a child (\ref{fig:surv-avg-functions}\textit{D}), which will happen within the first 10 years from the initial event, and is unlikely after that. There is big variability in buying a new car after moving, demonstrated by the wide confidence interval in \ref{fig:surv-avg-functions}\textit{C}, with a similar switch point at around 10 years. The divorce event is unlikely to happen, with survival probability slightly decreasing in time over the period of within 15 years (\ref{fig:surv-avg-functions}\textit{E}).
It is important to note that general survival functions shown in \ref{fig:graphs-ges-surv}\textit{A} and average survival functions in \ref{fig:surv-avg-functions} differ in the way that the former are obtained directly from the data using univariate models, while the latter are obtained by averaging over all predicted survival functions using fitted multivariate CPH regression models, thus including the effect of covariates and the model error.
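In terms of the fitted lifelines models, the averaging described above amounts to a few lines (a sketch; \texttt{cph} denotes the fitted CPH model for one event pair and \texttt{df} the corresponding covariate frame, both assumed to exist):
\begin{verbatim}
# Predict S(t | x) for every individual of this event pair and average over
# individuals, i.e. marginalize over the empirical covariate distribution.
surv_all = cph.predict_survival_function(df)   # one column per individual
avg_surv = surv_all.mean(axis=1)               # average survival curve
band = surv_all.std(axis=1)                    # +/- 1 standard deviation belt

print(avg_surv.head())
print((avg_surv - band).clip(lower=0).head())  # lower edge of the belt
\end{verbatim}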
\subsubsection{Life event transitions behaviour}
To examine the effect of socio-demographics on transition probabilities, we condition the prediction on one covariate for each transition pair (\ref{fig:surv-functions}).
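One convenient way to produce such conditioned curves in lifelines is \texttt{plot\_partial\_effects\_on\_outcome} (formerly \texttt{plot\_covariate\_groups}), which varies the chosen covariate while holding the others at their baseline values; the sketch below is an illustration and may differ in detail from the exact procedure used for \ref{fig:surv-functions}.
\begin{verbatim}
import matplotlib.pyplot as plt

# Survival curves for different numbers of cars, other covariates held at
# their baseline values (cph: a fitted CoxPHFitter as in the sketches above).
ax = cph.plot_partial_effects_on_outcome(covariates="cars", values=[0, 1, 2])
ax.set_xlabel("years since the initial event")
ax.set_ylabel("survival probability")
plt.show()
\end{verbatim}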
\begin{figure*}[h!]
\centering
\includegraphics[width=1.0\linewidth]{surv-functions}
\caption{Life event transitions behaviour for five selected pairs of life events expressed through survival functions. Each row with five graphs represent behaviour for a specific life events pair, with five columns indicating which socio-demographic covariates the transition behaviour is conditioned on. For each graph, the x-axis represents time in years passed from the initial event, modelling the probability of another event happening. The covariates are age, nationality, number of children, number of cars and owning a home, respectively in order of plotting. \textit{(A-E)} Wedding to child birth transitions. \textit{(F-J)} Wedding to new car transitions. \textit{(K-O)} Moving to new car transitions. \textit{(P-T)} Child birth to new car transitions. \textit{(U-Y)} Wedding to divorce transitions.}
\label{fig:surv-functions}
\end{figure*}
\paragraph*{\textit{Wedding} to \textit{Child birth}}
The decision to have a child after getting married is mainly driven by two factors: age and the number of children (\ref{fig:surv-functions}\textit{A-E}). In terms of age, after wedding younger people tend to have children later, where the age groups of $ [36, 45] $ and $ [46, 55] $ tend to have children before the others, with a high chance of having a child in the same year as the wedding or likely within 3 years from it. Regardless of age, if a couple does not have a child within 10 years of getting married, they will probably not have a child at all. In terms of nationality, Germans tend to have children slightly later than other nationals. Interestingly, couples with no previous children tend to have children later than couples with children, who are likely to have a child within 1 to 2 years after getting married. Couples who already have children before the wedding could be couples who had children together before getting married or who had children with previous partners. The number of cars does not play a significant role in the decision to have a child. Similarly, whether a couple owns a home, which can be related to their economic situation, does not seem to be relevant for having a child.
\paragraph*{\textit{Wedding} to \textit{New car}}
The decision to have a new car after getting married is driven by several factors, with the most influential being age and the current number of cars (\ref{fig:surv-functions}\textit{F-J}). The youngest age group $ [18, 21] $ tend to have a new car sooner than the others, with the probability gradually decreasing with increases in age, where people in the age group $ [36, 45] $ are very unlikely to have a new car after getting married. Germans tend to have a new car later than other nationals. Those who have no children or one child exhibit similar behaviour, while having two children speeds up the decision to buy a car, whereas couples with three children postpone it. Intuitively, couples with no car tend to buy a car much sooner than couples who already possess a car. Couples who own a home tend to buy a new car slightly sooner than couples who do not own a home, which may be related to a better economic situation.
\paragraph*{\textit{Moving} to \textit{New car}}
The decision to have a new car after moving is driven by all factors except nationality (\ref{fig:surv-functions}\textit{K-O}). The difference in behaviour between age groups is very distinguishable: the younger the person, the more likely he/she is to have a new car after moving, with the youngest group $ [18, 21] $ tending to have a car within 2 years from moving. Germans tend to buy a new car later than other nationals. The number of children plays an important role, where people with fewer children tend to buy a car sooner, whereas people with 3 or more children are less likely to buy a new car after moving. People who had no children at the time of moving are more likely (compared to others) to have a new (first) child after moving, which in turn may explain the faster car acquisition. Surprisingly, people living in households with two cars or more are very likely to buy or replace a car soon after moving, most of them within two years, including the year of moving. The magnitude of this is very strong, which may be an artifact of a small sample group, or of the different possibilities in temporal coding. People who own a home are significantly more likely to get a car sooner than people who do not own a home.
\paragraph*{\textit{Child birth} to \textit{New car}}
The decision of having a new car after having a child is driven by several factors, of which the most influential are age and the number of cars (\ref{fig:surv-functions}\textit{P-T}). With the exception of the youngest age group $ [18, 21] $, the increase in age postpones buying a car after having a child. Germans are likely to have a new car slightly sooner than other nationals. People with one child will probably buy a car much later than people with more than one child. In line with intuition, after having a child people without a car tend to buy a car sooner than people with a car. Similarly, people who own a home are more likely to buy a car than people not owning a home are.
\paragraph*{\textit{Wedding} to \textit{Divorce}}
The overall probability of getting divorced is low, but some variability is still present (\ref{fig:surv-functions}\textit{U-Y}). In general, based on observed divorces, if a divorce does not happen within 15 years from the wedding, it will probably not happen. The youngest age group tend to get divorced sooner than others. Germans tend to get divorced later than other nationals. People with fewer children have lower divorce probability than people with more children, with a sensitive group being people with four children. An intuitive interpretation for this could be that those without (or fewer) children at the time of getting married are those that do not bring children into a partnership, and this results in the lower divorce probability. The number of cars can also affect divorce probability, where people with fewer cars are more likely to get divorced. Homeowners are more likely to get a divorce than non-homeowners.
\subsubsection*{The effect of socio-demographics on transitions}
Regarding the general effect of socio-demographics on transitions, younger age groups generally prefer to have a new car and divorce sooner, while the transition to having a child is faster as age increases, with the oldest group tending to postpone the decision of buying a car (\ref{fig:surv-functions}\textit{A,F,K,P,U}). Regarding nationality, other nationals make faster transitions than Germans, except when it comes to buying a car after having a child (\ref{fig:surv-functions}\textit{B,G,L,Q,V}). The number of children seems to play a role in all transitions (\ref{fig:surv-functions}\textit{C,H,M,R,W}). The number of cars, as expected, tends to have an effect on the decision to buy a new car (\ref{fig:surv-functions}\textit{D,I,N,S,X}). Homeowners generally have faster transitions than people not owning a home (\ref{fig:surv-functions}\textit{E,J,O,T,Y}).
\section{Conclusion}
\label{sec:conclusion}
Numerous studies in transport research and sociology seek to model the structure of the life course and the inter-connection between life-course events (birth of a child, home relocation, car change, etc.). The use of traditional statistical models does not generally allow an analyst to infer causality without overrelying on expert knowledge. Indeed, statistical models of choice, which are predominantly used in the field, provide correlation measures rather than causation measures. By introducing a bi-level model where the upper level is a causal discovery algorithm and the lower level is composed of a series of survival analyses modelling the transition rate between life-course events, we introduce a more rigorous, data-driven assessment of causality in life-course analysis. The results of this research can be used for high-level activity modelling for mobility applications, to better understand how life events and socio-demographic characteristics affect and constrain people's mobility and travel choices.
\section*{Acknowledgements}
\label{sec:acknowledgements}
The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement no. 754462. The Leeds authors acknowledge the financial support by the European Research Council through the consolidator grant 615596-DECISIONS.
\section*{Appendix}
\label{sec:appendix}
\subsection*{Data collection}
\label{subsec:app_data_collection}
A screenshot of the life-course calendar used in this study is shown in Fig. A1.
\begin{figure}[hb]
\centering
\includegraphics[width=0.75\textwidth]{LCC_new}
\caption{A screenshot of the life-course calendar used in this study (translated from German into English).}
\label{fig:att-lit-lcc}
\end{figure}
\subsection*{Causal discovery}
\label{subsec:app_causal_discovery}
In order to prepare the data for the algorithms, we created an observation for each event pair occurrence, as illustrated in Table A1. Observation 2 can be read this way: ``at some moment in time, a married individual with 2 children had a child birth event, which was followed by a moving event later in life''. Due to the nature of the dataset, an ``event occurrence'' always matches a year, so one may have multiple events preceding multiple events, as happens in observation 4.
\begin{sidewaystable}
\centering
\caption{An example of four \emph{event pair} observations.}
\label{tab:app-event-obs}
\begin{tabular}{ *{7}{c} *{4}{p{1.3cm}}}
ID & New car & Moving & Child birth & Wedding & Married & Children & New car next event & Moving next event & Child birth next event & Wedding next event \\
\midrule
1 & 0 & 1 & 0 & 0 & 1 & 2 & 1 & 0 & 0 & 0 \\
2 & 0 & 0 & 1 & 0 & 1 & 2 & 0 & 1 & 0 & 0 \\
3 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\
4 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
Regarding the choice of the causal graph, Fig. A2 shows the survival probability marginalized over all other events (new car, moving, wedding) leading to the child birth event for the GES graph. There is a clear indication that the child birth happens almost certainly after each one of these events. These relations are not present in the PC graph, where only the new car event precedes the child birth event.
\begin{figure}
\centering
\includegraphics[width=0.38\linewidth]{surv-avg-birth-marg}
\caption{Average survival curve from all other events (new car, moving, wedding) to the child birth event.}
\label{fig:surv-avg-birth-marg}
\end{figure}
\subsection*{Survival analysis}
\label{subsec:app_survival_analysis}
Fig. A3 shows a general overview of observations in survival analysis.
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{surv_data_example}
\caption{General example of survival analysis data. Uncensored observations (red) and censored observations (blue).}
\label{fig:app-surv-data}
\end{figure}
\subsection*{Data elaboration}
\label{subsec:app_data_elaboration}
An example of elaborated time-dependent data for one person from the dataset used for survival analysis is shown in Table A2.
\begin{sidewaystable}
\centering
\caption{An example of time-dependent data for one person from the dataset.}
\label{tab:app-example-person}
\begin{tabular}{ l *{20}{c}}
\multirow{2}{*}{Attribute} & \multicolumn{20}{c}{Year} \\ \cline{2-21}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\
\midrule
Age & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 & 35 & 36 & 37 \\
Owns home & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
Cars & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
Distance to work & 0 & 0 & 8 & 7 & 7 & 5 & 5 & 5 & 5 & 5 & 5 & 7 & 7 & 7 & 7 & 7 & 7 & 7 & 7 & 7 \\
Rides car & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Children & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
Married & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
New car & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Moving & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Child birth & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Wedding & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Divorce & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
\subsection*{Covariates description}
\label{subsec:app_covariates_description}
The covariates are divided into five groups: general attributes, independent state attributes, dependent state attributes, transition attributes and indicator transition attributes. Their description, type and feasible values are shown in Table A3.
General attributes are \textit{Gender}, \textit{Nationality}, \textit{No. of relocations} and \textit{City size}. They represent general characteristics of individuals. Their value neither depends on the year (they are time-constant), nor on the life events pair. \textit{Gender} and \textit{Nationality} are categorical attributes, \textit{No. of relocations} and \textit{City size} are discrete.
Independent state attributes are \textit{Age}, \textit{Owns home}, \textit{Distance to work} and \textit{Rides car}. These are time-dependent attributes. Given a life events pair, an independent state attribute takes a value it has in the year the cause event occurs. \textit{Age}, \textit{Owns home} and \textit{Rides car} are categorical, whereas \textit{Distance to work} is discrete (discretised to 1 km).
Dependent state attributes are \textit{No. of cars}, \textit{No. of children} and \textit{Married}. They are affected by the transition attributes. Their value is computed in the same way as for the independent state attributes. They are all categorical.
Transition attributes are \textit{New car}, \textit{Moving}, \textit{Child birth}, \textit{Wedding} and \textit{Divorce}. These are binary variables, having values of mostly 0, and 1 when the event happens. They increment a dependent state attribute (in case of \textit{No. of cars} and \textit{No. of children}), or alternate the value of \textit{Married}. These attributes are treated in a different way. To be used in estimations, they are expanded in a way how many years they happened before the cause event. Therefore, they take values in the range $ [0, -19] $, since the dataset for each person covers a period of 20 years. They are all discrete.
Indicator transition attributes are dummy variable used to account for whether the transition attribute happened or not before the cause effect. They are all categorical.
\begin{sidewaystable}
\centering
\caption{Covariates description}
\label{tab:app-covariates-description}
\begin{tabular}{ *{5}{l} }
Group & Name & Description & Type & Domain \\
\midrule
\multirow{4}{*}{General attributes} & Gender & gender, is the person male or female & categorical & \{0, 1\} \\
& Nationality & nationality, is the person German or international & categorical & \{0, 1\} \\
& No. of relocations & no. of times the person relocated with parents & discrete & [0, 9] \\
& City cize & the size of the city the person lives in & discrete & [0, 7] \\
\midrule
\multirow{4}{*}{Independent state attributes} & Age & age, a range of 20 years for every person & categorical & 5 age groups \\
& Owns home & does the person own a home or not & categorical & \{0, 1\} \\
& Distance to work & travelling distance to work [km] & discrete & [0, 100] \\
& Rides car & if the person uses the car to go to work & categorical & \{0, 1\} \\
\midrule
\multirow{3}{*}{Dependent state attributes} & No. of cars & the number of cars the person has & categorical & [0, 2] \\
& No. of children & the number of children the person has & categorical & [0, 3] \\
& Married & marital status, is the person married or not & categorical & \{0, 1\} \\
\midrule
\multirow{5}{*}{Transition attributes} & New car & the event of buying a new car & discrete & $ [0, -19] $ \\
& Moving & the event of moving & discrete & $ [0, -19] $ \\
& Child birth & the event of having a child & discrete & $ [0, -19] $ \\
& Wedding & the event of having a wedding (getting married) & discrete & $ [0, -19] $ \\
& Divorce & the event of having a divorce & discrete & $ [0, -19] $ \\
\midrule
\multirow{5}{*}{Indicator transition attributes} & Indicator - New car & indicator for the event of buying a new car & categorical & \{0, 1\} \\
& Indicator - Moving & indicator for the event of moving & categorical & \{0, 1\} \\
& Indicator - Child birth & indicator for the event of having a child & categorical & \{0, 1\} \\
& Indicator - Wedding & indicator for the event of having a wedding (getting married) & categorical & \{0, 1\} \\
& Indicator - Divorce & indicator for the event of having a divorce & categorical & \{0, 1\} \\
\bottomrule
\end{tabular}
\end{sidewaystable}
Distribution of covariates values is shown in Fig. A4.
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{surv-functions-hist}
\caption{Distribution of covariates values.}
\label{fig:app-surv-data-hist}
\end{figure}
\subsection*{Model fitting}
\label{subsec:app_model_fitting}
Model fitting results are shown in Table A4.
\begin{table}
\centering
\caption{Fitted CPH models}
\begin{tabular}{ llrrr }
Cause & Effect & Sample size & C-index - train & C-index - test \\
\midrule
New car & Moving & 819 & 0.687 & 0.605 \\
New car & Child birth & 736 & 0.801 & 0.757 \\
Moving & New car & 1172 & 0.863 & 0.864 \\
Moving & Child birth & 1704 & 0.831 & 0.824 \\
Child birth & Moving & 1481 & 0.670 & 0.679 \\
Child birth & New car & 873 & 0.784 & 0.809 \\
Wedding & Moving & 1175 & 0.591 & 0.604 \\
Wedding & Child birth & 1241 & 0.733 & 0.748 \\
Wedding & Divorce & 1237 & 0.719 & 0.682 \\
Wedding & New car & 877 & 0.608 & 0.652 \\
\bottomrule
\end{tabular}
\label{tab:app-fitted-coxph}
\end{table}
\section{Introduction and Background}
\label{sec:introduction}
Inspired by a few seminal studies \cite{hagerstrand1970people, chapin1974human} highlighting how different life-course decisions are interrelated and should be analysed jointly, several studies \cite{muggenburg2015mobility} have analysed travel behaviour as a long-term life-course dimension, similar to employment spells or residential location choices. Retrospective surveys such as the one proposed by Beige and Axhausen \cite{beige12} seek to uncover how life events, such as the birth of a child or marriage, are connected to one another as well as to long-term mobility decisions such as commuting mode or relocation. Retrospective surveys are generally based on a life-course calendar, a grid in which respondents answer a series of questions for each year in retrospect (analogous to Fig. A1). Life-course calendars are commonly used to collect such information where panel data is unavailable or too costly, and they offer two main advantages over traditional panel surveys: relating different kinds of events visually and mentally improves data quality by providing reference points and preventing time inconsistencies, and the visual aid of the calendar eases the task of listing a potentially large number of short events, which would be more difficult in traditional surveys \cite{freedman1988life}. However, while scholars agree that life-course calendars facilitate the recall of life events, many acknowledge that recalling all relevant events remains difficult and effortful. In particular, Manzoni et al. \cite{manzoni20102} and Manzoni \cite{manzoni2012and} find that recall depends not only on respondents' socio-demographic characteristics but also on the type of events being recalled; for example, people tend to omit or shorten spells of unemployment. In addition, Schoenduwe et al. \cite{schoenduwe2015analysing} have shown that participants find it easiest to remember ``emotionally meaningful, momentous, unique or unexpected events''. It is precisely these kinds of events that we consider in our analysis.
The data for our research originates from a retrospective survey that was carried out between 2007 and 2012 at the Department of Transport Planning of the Faculty of Spatial Planning at TU Dortmund. The questionnaire contains a series of retrospective questions about residential and employment biography, travel behaviour and holiday trips, as well as socio-economic characteristics.
In our work, we investigate whether causal relations between life events can be identified in our dataset, such that they can then be used for choice modelling and provide valuable insights into people's behaviour. We do this through causal discovery, i.e. the process of automatically finding, from available data, the causal relations between the available (random) variables. The result typically takes a graphical form, where nodes are the variables and arcs are the causal relationships between them. The main objective of a causal discovery algorithm is thus to find directed graphical causal models (DGCMs) that reflect the data. There is also research on graphs with cycles and on combinations of both, which we do not focus on here. For a broad review, we recommend Glymour et al. \cite{causal_discovery_review}.
The ideal way to discover causal relations is through interventions or randomized experiments. In many cases, however, such as ours, this is impossible. With only observational data available, causal discovery tools search for systematic statistical properties, such as conditional independence, to establish (possible) causal relations. This also implies that we are limited to what the data \emph{can tell} us about causality. If the true causal structure is not observable from the data, such methods will at most find correlations disguised as causation, as happens with many other non-causal methods, such as regression. In such cases, a possible solution relies on domain knowledge, for example postulating the existence of common causes (e.g. a latent variable) or adding constraints.
The concept of causal discovery thus consists of algorithms that discover causal relations by analyzing statistical properties of purely observational data. In this paper, we apply such methods to automatically uncover causality from temporal sequence data. Time or sequence is hardly a true cause in decision making (e.g. one does not have children or buy a car just because a certain amount of time has passed, but due to some deeper cause). Therefore, it would certainly be too bold to claim that there is inherent causality in all the relationships that we find. Instead, we see this process as uncovering the most likely graph of sequential patterns throughout the life of individuals.
Given a resulting causal graph, together with a dataset of temporal sequence records, we can estimate transition probabilities for each event pair in the graph. We model the transition probabilities through survival analysis, which is typically used for time-to-event (TTE) modelling (and, similarly, for time-to-failure (TTF) or duration modelling) \cite{survival_analysis_2}. Classical survival analysis is widely applied in medical studies to describe the survival of patients or cancer recurrence \cite{10.1371/journal.pbio.0020108, aalen-history, Cheng181ra50, Royston2013, 10.1001/jama.2016.3775}. It is now also applied far beyond that field, for example in predictive maintenance, customer churn and credit risk modelling.
Through survival analysis, given that an initial event has occurred, we can obtain probabilistic temporal estimates of another event occurring, with the survival value defined as $ S(t) = \mathrm{P}(T > t) $: the probability that the event happens more than time $ t $ after the initial event, i.e. the probability of ``surviving'' past time $ t $.
In our case, survival probabilities give us estimates of how likely it is that a certain life event will occur in each year following another specific life event. This leads to a better understanding of people's life-course calendars and of how these depend on their socio-demographics. Examples include how the number of cars affects the decision to move after getting married, the effect of the number of children on buying a car after moving, and the difference between homeowners and non-homeowners when it comes to having a child after the wedding, to name a few.
\section{Methods}
\label{sec:methods}
Our approach is formulated as a bi-level problem: in the upper level, we build the life events graph with causal discovery tools, where each node represents a life event; in the lower level, for selected pairs of life events, time-to-event modelling through survival analysis is applied to model the time-dependent probabilities of event occurrence.
\subsection{Causal Discovery}
Two causal discovery algorithms are used in our research: PC (named after its inventors, Peter and Clark) and Greedy Equivalence Search (GES). In the resulting life events graph, nodes correspond to a set of key events, while arcs correspond to temporal ``causality'' between pairs of events, where $ A \rightarrow B $ implies that $ A $ precedes $ B $. As discussed earlier, this is not assumed to be true causality; instead, we see it as representing a high likelihood of observing a certain temporal order of such events.
The concept of conditional independence is key here:
\[ X \independent Y \mid S \,\, \iff \,\, P(X, Y \mid S) = P(X \mid S)\,P(Y \mid S) \quad \textit{for all values of } X, Y \textit{ and } S \]
The idea is quite intuitive: if two variables $ X $ and $ Y $ become independent when controlling for the (set of) variable(s) $ S $, then $ X $ and $ Y $ are conditionally independent given $ S $, which suggests that $ S $ accounts for their dependence (for example, as a common cause of $ X $ and $ Y $). Of course, just as with testing for independence in general, testing for conditional independence is a hard problem \cite{shah2018hardness}, but approximate techniques exist to determine it from observational data \cite{dawid1979conditional}.
Algorithms such as PC or its derivatives, such as Fast Causal Inference (FCI) \cite{spirtes2000causation}, are based on this principle to generate a causal graph. They essentially start from a fully connected undirected graph and gradually ``cut'' links between conditionally independent nodes.
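To make the edge-removal idea concrete, the following minimal Python sketch recovers an undirected skeleton by deleting edges between variables that a Fisher-z partial-correlation test judges conditionally independent. It is an illustrative simplification (fixed maximum conditioning-set size, linear-Gaussian assumption, no edge orientation), not the implementation used in this paper, and all names and thresholds are our own choices.
\begin{verbatim}
# Minimal PC-style skeleton search (illustrative only): start from a complete
# undirected graph and drop the edge X--Y whenever X and Y are conditionally
# independent given some small set S, judged by a Fisher-z test.
import itertools
import numpy as np
from scipy import stats

def ci_test(data, i, j, cond, alpha=0.05):
    """True if column i is independent of column j given the columns in cond."""
    sub = data[:, [i, j] + list(cond)]
    prec = np.linalg.pinv(np.corrcoef(sub, rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
    r = np.clip(r, -0.9999, 0.9999)
    z = np.arctanh(r) * np.sqrt(data.shape[0] - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z))) > alpha

def skeleton(data, max_cond=2, alpha=0.05):
    p = data.shape[1]
    edges = {frozenset(e) for e in itertools.combinations(range(p), 2)}
    for size in range(max_cond + 1):
        for i, j in itertools.combinations(range(p), 2):
            if frozenset((i, j)) not in edges:
                continue
            rest = [k for k in range(p) if k not in (i, j)]
            if any(ci_test(data, i, j, c, alpha)
                   for c in itertools.combinations(rest, size)):
                edges.discard(frozenset((i, j)))
    return edges

# Synthetic chain X -> Y -> Z: the X--Z edge should be removed given Y.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x + rng.normal(size=2000)
z = y + rng.normal(size=2000)
print(skeleton(np.column_stack([x, y, z])))   # expect {{0, 1}, {1, 2}}
\end{verbatim}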
Another approach starts from the opposite direction: we begin with an empty graph and gradually add edges. The best-known algorithm of this kind is Greedy Equivalence Search (GES) \cite{chickering2002optimal}. At each step, GES decides whether adding a directed edge to the graph improves the fit, as measured by some score (e.g. the Bayesian information criterion (BIC) or a z-score). When the score can no longer be improved, the process reverses and checks whether removing edges improves the score. It is difficult to compare GES with PC in terms of the quality of the discovered graph, but it is known that, as the sample size increases, the two algorithms converge to the same Markov equivalence class \cite{causal_discovery_review}. The vast majority of causal discovery algorithms are variations or combinations of these two approaches.
In our dataset, we have four events, namely \emph{child birth}, \emph{wedding}, \emph{new car} and \emph{moving}, and two state variables, namely \emph{married} (True or False) and \emph{children} (numeric). While the final life events graph contains only the event variables (the state variables are useful later, but not as nodes in the graph), PC and GES allow such variables to be included, so we applied both algorithms to the full set of variables \cite{kalainathan2019causal}.
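In practice we did not hand-code the algorithms but used an off-the-shelf toolbox \cite{kalainathan2019causal}. A usage sketch along the following lines is given below; the exact API, the column names and the toolbox's R back-end requirements are assumptions on our part and should be checked against its documentation.
\begin{verbatim}
# Hedged sketch: running PC and GES on the event/state table with the
# Causal Discovery Toolbox (cdt). Column names are illustrative, and cdt
# delegates these algorithms to R packages that must be installed separately.
import pandas as pd
from cdt.causality.graph import PC, GES

cols = ["new_car", "moving", "child_birth", "wedding", "married", "children"]
data = pd.read_csv("event_pairs.csv")[cols]   # one row per event-pair observation

pc_graph = PC().predict(data)     # returns a networkx graph
ges_graph = GES().predict(data)

print(sorted(pc_graph.edges()))
print(sorted(ges_graph.edges()))
\end{verbatim}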
\subsection{Survival Analysis}
In survival analysis, the survival function $ S(t) $ can be defined from the cumulative distribution function $ F(t) $ as $ S(t) = 1 - F(t) $, which in terms of probabilities reads $ S(t) = \mathrm{P}(T > t) $. The survival function can be interpreted through its slope: the steeper the curve over an interval, the more likely the event is to happen in that interval, whereas a flat segment means that the estimated probability of the event happening in that interval is zero.
The most commonly used statistical methods for survival analysis are the Kaplan-Meier method \cite{kaplan-meier} and the Nelson-Aalen method \cite{nelson, aalen}, which are univariate counting-based methods that provide general insights. To estimate the effect of covariates, survival regression can be formalized through the Cox proportional hazards (CPH) model \cite{coxph}. It assumes that all individuals share the same baseline hazard function, with the covariates acting as scaling factors; it is highly interpretable, but may lack accuracy on larger datasets.
In survival analysis, time-to-event (TTE) is expressed through two events: the \textit{birth} event and the \textit{death} event (hence the term ``survival''). The time between the two events is the survival time, denoted by $ T $, which is treated as a non-negative random variable. Given the \textit{birth} event, $ T $ describes the probability distribution of when the \textit{death} event happens.
True survival times may not always be observed, which creates a distinction between uncensored observations (true survival time available) and censored observations (true survival time unknown; only the censoring time is observed). This information is expressed through an event indicator $ E $ with a binary outcome. Censoring is assumed to be non-informative, i.e. the censoring time is independent of $ T $.
Two main outputs of survival analysis are the survival function and the (cumulative) hazard function. The survival function $ S(t) $ can be formulated starting from the cumulative distribution function (CDF), denoted by $ F $, which can be defined as the probability (\textrm{P}) of an event occurring up to time $ t $:
\begin{equation}
\label{eq:cdf}
F(t) = \mathrm{P}(T \le t) = \int_{0}^{t} f(x) dx
\end{equation}
with the properties of being a non-decreasing function of time $ t $, being bounded as $ 0 \le F(t) \le 1 $, and relating to the survival function through $ F(t) = 1 - S(t) $. Given this, we can define the survival function as the probability of surviving past time $ t $, i.e. the probability that the event happens after time $ t $:
\begin{equation}
\label{eq:surv}
S(t) = \mathrm{P}(T > t) = \int_{t}^{\infty} f(x) dx
\end{equation}
with the properties of being a non-increasing function of $ t $, being bounded as $ 0 \le S(t) \le 1 $ with $ S(\infty) = 0 $, and relating to the CDF through $ S(t) = 1 - F(t) $. The probability density function (PDF) of $ T $, $ f(t) $, can be formalized as:
\begin{equation}
\label{eq:surv-pdf}
f(t) = \lim_{\Delta t \rightarrow 0} \frac{\mathrm{P}(t < T \le t + \Delta t)}{\Delta t}
\end{equation}
The hazard function $ h(t) $ defines the instantaneous hazard rate of the event happening at time $ t $, given that the event has not happened before time $ t $. It is defined as a limit, as a small interval $ \Delta t $ approaches 0, of the conditional probability of the event happening between time $ t $ and $ t + \Delta t $, as in \eqref{eq:hazard-def}. In addition, the hazard function is directly related to the survival function:
\begin{equation}
\label{eq:hazard-def}
h(t) =
\lim_{\Delta t \rightarrow 0} \frac{\mathrm{P}(t < T \le t + \Delta t \left\vert\right. T > t)}{\Delta t} =
\frac{f(t)}{S(t)} =
-\frac{d}{dt} \log{(S(t))}
\end{equation}
The cumulative hazard function $ H(t) $ is also used, and is especially important for model estimation \eqref{eq:cum-hazard}. The survival function can therefore be derived by solving the differential equation above, which gives the relation with the hazard and cumulative hazard functions and is expressed in \eqref{eq:surv-hazard}.
\begin{equation}
\label{eq:cum-hazard}
H(t) = \int_{0}^{t} h(x) dx = - \log{(S(t))}
\end{equation}
\begin{equation}
\label{eq:surv-hazard}
S(t) = \exp{\left( -\int_{0}^{t} h(z) dz \right)} = \exp{(-H(t))}
\end{equation}
Survival analysis is typically conducted through univariate survival models or through survival regression. The former take into account only survival times, while the latter also include covariates. The most widely used regression model is the Cox proportional hazards (CPH) model \cite{coxph}. In addition, machine learning models have recently been applied to survival analysis to improve predictive accuracy, at the cost of model interpretability. Support vector machines \cite{surv-svm} and random forests \cite{surv-rf} are common examples of such techniques. Besides better accuracy, they have also demonstrated superior computational efficiency over existing training algorithms, especially for large datasets.
To exploit non-linear relationships among the covariates, neural networks (NNs) \cite{francisco-book-ch2} have been used. A simple neural network was first proposed by Faraggi and Simon \cite{faraggi}; the approach is similar to CPH, but optimizes a slightly different partial likelihood function. More recent developments apply deep neural networks (DNNs), with models like DeepSurv \cite{deepsurv}, DeepHit \cite{deephit} and Nnet-survival \cite{nnet-survival}. DeepSurv is a proportional hazards deep neural network; it extends the method proposed by \cite{faraggi} to more than one hidden layer. While CPH and DeepSurv rely on the strong proportional hazards assumption, other models relax this condition. DeepHit uses a DNN to learn the survival distribution directly; it imposes no assumptions on the underlying process and allows the effect of the covariates to change over time. Nnet-survival, on the other hand, is a discrete-time survival model: the hazard rate, which the previous methods compute as an instantaneous risk, is here defined for specific time intervals.
Because our dataset is relatively small for deep learning applications, and to retain interpretability, in this paper we use univariate models and the CPH regression model.
\subsubsection{Univariate survival models}
\label{subsubsec:method-univariate-surv-models}
The survival and cumulative hazard functions can be estimated using non-parametric univariate models. Both are functions of the survival times and event indicators, i.e. $ S(t) = f(\mathrm{\mathbf{T}}, \mathrm{\mathbf{E}}) $ and $ H(t) = g(\mathrm{\mathbf{T}}, \mathrm{\mathbf{E}}) $. Given the ordered survival times, the number of individuals at risk and the number of death events occurring are determined for each instant of interest on the timeline. The survival function is usually estimated using the Kaplan-Meier method \cite{kaplan-meier}, a product-limit estimator. It takes survival times as input and estimates the survival function as:
\begin{equation}
\label{eq:km}
\hat{S}(t) = \prod_{i: t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)
\end{equation}
where $ d_i $ is the number of death events at time $ t_i $ and $ n_i $ is the number of individuals at risk just prior to and at time $ t_i $. Similarly, the cumulative hazard function can be estimated using the Nelson-Aalen method \cite{nelson, aalen} as:
\begin{equation}
\label{eq:na}
\hat{H}(t) = \sum_{i: t_i \leq t} \frac{d_i}{n_i}
\end{equation}
Hazard and cumulative hazard values cannot be interpreted directly, but are used for relative comparison, indicating which instants on the timeline are more likely to encounter a death event than others.
Univariate non-parametric survival models are simple and can provide flexible survival and hazard curves. They provide population-level insight into survival behaviour, but not at the individual level.
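As a small, self-contained illustration of the two estimators above, the following sketch computes both from a toy set of durations and event indicators (the numbers are made up) and checks the relation $ S(t) \approx \exp(-H(t)) $.
\begin{verbatim}
# Kaplan-Meier and Nelson-Aalen estimates computed directly from their
# definitions: at each ordered event time t_i, n_i is the number at risk
# and d_i the number of observed events.
import numpy as np

def km_na(durations, events):
    durations = np.asarray(durations, float)
    events = np.asarray(events, int)
    times = np.unique(durations[events == 1])
    s, h, surv, cum_haz = 1.0, 0.0, [], []
    for t in times:
        n_i = np.sum(durations >= t)                      # at risk at t_i
        d_i = np.sum((durations == t) & (events == 1))    # events at t_i
        s *= 1.0 - d_i / n_i                              # product-limit step
        h += d_i / n_i                                    # Nelson-Aalen step
        surv.append(s)
        cum_haz.append(h)
    return times, np.array(surv), np.array(cum_haz)

# Toy data: years from a cause event to an effect event, censored at 20 years.
T = [1, 2, 2, 3, 4, 5, 8, 20, 20, 20]
E = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
t, S, H = km_na(T, E)
print(np.round(S, 3))
print(np.round(np.exp(-H), 3))   # close to S(t)
\end{verbatim}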
\subsubsection{Survival regression}
\label{subsubsec:method-survival-regression}
To account for the effect of covariates on the output functions, the Cox proportional hazards model is typically used for survival regression. It is a semi-parametric regression model that expresses the risk of an event occurring, i.e. the log of the hazard ratio, as a linear combination of the covariates, $ \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}} $, with $ \bm{\beta} $ a vector of coefficients and $ \mathrm{\mathbf{x}} $ a vector of time-constant covariates. It maximizes Cox's partial likelihood function (of the coefficients) \cite{coxph}:
\begin{equation}
\label{eq:coxph_likelihood}
L(\bm{\beta}) = \prod_{i: E_i = 1} \frac{\exp \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}}_i \right)}{\sum_{j: T_j \ge T_i} \exp \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}}_j \right)}
\end{equation}
where only uncensored data ($ i: E_i = 1 $) are used (hence the term ``partial''). The corresponding negative log-likelihood function for minimization is given by \eqref{eq:coxph_loglikelihood}. As implemented in \cite{lifelines}, the optimization problem is solved using maximum partial likelihood estimation (MPLE) and the Newton-Raphson iterative search method \cite{suli_mayers_2003}. Ties in survival times are handled using the Efron method \cite{efron}. L2 regularization is used with a scale value of 0.1.
\begin{equation}
\label{eq:coxph_loglikelihood}
l(\bm{\beta}) = - \log L(\bm{\beta}) = - \sum_{i: E_i = 1} \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}}_i - \log \sum_{j: T_j \ge T_i} \exp \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}}_j \right) \right)
\end{equation}
The hazard function is computed for each individual as the product of a population-level, time-dependent baseline hazard $ h_0(t) $ and an individual-level, time-constant partial hazard, expressed as the exponential of a linear combination of the covariates. It is formulated as:
\begin{equation}
\label{eq:coxph_hazard}
h(t \vert \mathrm{\mathbf{x}}) = h_0(t) \exp \left( \bm{\beta}^{\mathsf{T}} \mathrm{\mathbf{x}} \right)
\end{equation}
All individuals share the same baseline hazard, and the effect of the covariates acts as a scaling factor through the partial hazard. The baseline hazard function is estimated using the Breslow method \cite{breslow}. The exponential form ensures that the hazard rates are always positive. The ratio of the hazard functions of two individuals is constant over time and is given by the ratio of their partial hazards (hence the term ``proportional''). Given the hazard function, the cumulative hazard and survival functions can be computed using \eqref{eq:cum-hazard} and \eqref{eq:surv-hazard}, respectively.
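To make the estimation procedure concrete, a minimal sketch of fitting such a model with the lifelines implementation cited above is shown below (lifelines handles ties with Efron's method by default); the file name and column names are illustrative assumptions.
\begin{verbatim}
# Hedged sketch: Cox proportional hazards fit in lifelines with an L2 penalty
# of 0.1, as described in the text. 'duration' is the time between the cause
# and effect events and 'event' is the indicator E; covariate columns follow.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("wedding_to_child_birth.csv")    # hypothetical pair dataset

cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.0)    # pure L2 regularization
cph.fit(df, duration_col="duration", event_col="event")

cph.print_summary()                        # coefficients and hazard ratios
surv = cph.predict_survival_function(df)   # one S(t) column per individual
\end{verbatim}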
The goodness-of-fit measure is the concordance index (C-index) \cite{regr-model-book}, a commonly used metric to validate the predictive capability of survival models (analogous to the $ R^2 $ score in regression). It is a rank correlation score, comparing the order of observed survival times from the data and order of estimated survival times (or negative hazard rates) from the model. It is formulated as:
\begin{equation}
\label{eq:c_index}
C = \frac{1}{\epsilon} \sum_{i: E_i = 1} \sum_{j: T_j \ge T_i} \mathbf{1}_{f(\mathrm{\mathbf{x}}_i) < f(\mathrm{\mathbf{x}}_j)}
\end{equation}
where $ \epsilon $ is the number of admissible (comparable) pairs used in the calculation and $ \mathbf{1} $ is the indicator function with a binary outcome based on the given condition. The C-index is equivalent to the area under a ROC (Receiver Operating Characteristic) curve and ranges from \numrange{0}{1}, with values below 0.5 representing a poor model, 0.5 being equivalent to a random guess, \numrange{0.65}{0.75} indicating a good model, and values above 0.75 indicating a strong model, with 1 being perfect concordance and 0 perfect anti-concordance.
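A tiny standalone example of the metric, using the lifelines implementation, is given below; with a perfectly concordant ranking the score is 1 (the numbers are made up).
\begin{verbatim}
# Concordance index on toy data: higher predicted scores should correspond
# to longer observed survival times; one observation is censored.
from lifelines.utils import concordance_index

observed_T = [2, 4, 6, 8, 10]
predicted = [1, 3, 5, 7, 9]      # same ordering as the observed times
events = [1, 1, 1, 0, 1]
print(concordance_index(observed_T, predicted, events))   # 1.0
\end{verbatim}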
\section{Data elaboration and model fitting}
\label{sec:data}
The data used in model estimation contains both uncensored and censored observations. Uncensored observations for an individual are survival times available directly as the number of years between two specific events, with the event observed indicator set to 1. Censored observations are created when only the birth event happened, with survival times set to 20 years and the event observed indicator set to 0.
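A minimal sketch of this construction for a single cause--effect pair is given below; the function and argument names are our own and the 20-year horizon follows the description above.
\begin{verbatim}
# Build a (duration, event) pair for one cause -> effect transition:
# uncensored if the effect event is observed after the cause event,
# otherwise censored at the 20-year study horizon.
def duration_event(cause_year, effect_year, horizon=20):
    if effect_year is not None and effect_year >= cause_year:
        return effect_year - cause_year, 1    # uncensored observation
    return horizon, 0                         # censored observation

print(duration_event(cause_year=7, effect_year=8))      # (1, 1)
print(duration_event(cause_year=7, effect_year=None))   # (20, 0)
\end{verbatim}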
To fit the models and validate their performance, the dataset, which comprises 1,487 individuals, was split into a training set containing \SI{60}{\percent} of the samples and a hold-out test set containing the remaining \SI{40}{\percent}. Stratified splitting was used to ensure a balanced presence of covariate values in both sets.
Data elaboration and modelling were performed in Python \cite{python}, using packages for numerical and scientific computing, data manipulation and machine learning \cite{pandas, scikit-learn, numpy, scipy}. The visualizations were created using Python plotting libraries \cite{matplotlib}. Survival modelling was done using the lifelines library \cite{lifelines}.
An example of elaborated time-dependent data for one person from the dataset used for survival analysis is shown in Table A2.
\subsection{Covariates description}
The covariates are divided into five groups: general attributes, independent state attributes, dependent state attributes, transition attributes and indicator transition attributes. Their description, type and feasible values are shown in Table A3.
\subsection{Model fitting}
In survival regression, the survival and cumulative hazard functions are functions of the survival times with event indicators and of the covariates with their coefficients, $ S(t) = f(\mathrm{\mathbf{T}}, \mathrm{\mathbf{E}}, \mathrm{\mathbf{x}}, \bm{\beta}) $ and $ H(t) = g(\mathrm{\mathbf{T}}, \mathrm{\mathbf{E}}, \mathrm{\mathbf{x}}, \bm{\beta}) $. Transition attributes serve as the events (states) of the analysis, whereas the remaining attributes are used for conditioning. Table A4 shows the results of fitting a CPH model for each pair of life events from the causality graph. The table also reports the number of available samples for each pair, of which, as mentioned above, \SI{60}{\percent} were used for fitting and \SI{40}{\percent} as hold-out data for validation.
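A sketch of the fitting loop behind Table A4 might look as follows; the stratification variable, the file layout and all names are assumptions, and the actual pipeline may differ in detail.
\begin{verbatim}
# Hedged sketch: 60/40 stratified split and one CPH model per life-event pair,
# reporting train and hold-out concordance indices as in Table A4.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from sklearn.model_selection import train_test_split

def fit_pair(df):
    train, test = train_test_split(df, test_size=0.4,
                                   stratify=df["event"], random_state=0)
    cph = CoxPHFitter(penalizer=0.1)
    cph.fit(train, duration_col="duration", event_col="event")
    c = [concordance_index(part["duration"],
                           -cph.predict_partial_hazard(part),
                           part["event"])
         for part in (train, test)]
    return cph, c                        # [C-index train, C-index test]

pairs = {"wedding -> child birth": pd.read_csv("wedding_to_child_birth.csv")}
for name, df in pairs.items():
    _, (c_train, c_test) = fit_pair(df)
    print(name, round(c_train, 3), round(c_test, 3))
\end{verbatim}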
\section{Results and Discussion}
\label{sec:results}
\subsection{Life Events Graph}
To build our graphs, we ran the PC and GES algorithms on the prepared data. Not surprisingly, their final graphs are not mutually consistent in several components; \ref{fig:pc-ges} shows the results. We can also see that many event pairs have bidirectional arrows. This is neither surprising nor necessarily problematic: it only means that temporal precedence can flow in either direction, and that there may be cyclic stages in life (child birth $ \rightarrow $ moving $ \rightarrow $ new car ...).
\begin{figure}[h!]
\centering
\includegraphics[width=0.65\linewidth]{graphs-pc-ges}
\caption{Life events graphs created using PC and GES algorithms. (\textit{A}) PC. (\textit{B}) GES.}
\label{fig:pc-ges}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.65\linewidth]{graph-ges-reduced}
\caption{Final events graph containing four life events.}
\label{fig:ges-reduced}
\end{figure}
The PC graph shows relations involving \emph{wedding} and \emph{child birth} that we could not reconcile with the dataset or with our intuition, in particular that \emph{child birth} precedes being \emph{married} (but not directly \emph{wedding}, which becomes an isolated event, preceded only by \emph{moving}). The child birth event almost certainly happens after the other events (as can be seen in Fig. A2), which PC did not uncover. It is apparent that the \emph{married} state variable confuses the algorithm. In the GES case, on the other hand, the relationships do not contradict our intuition and the graph proposes more relationships. This is valuable from the perspective of our work, because it allows for more diversity in life-course trajectories. We therefore decided to keep working with the GES graph for the remainder of this paper. We recognize that this decision is somewhat arbitrary, and it also reflects the well-known challenge of comparing graphs in causal discovery \cite{singh2017comparative}, as the algorithms use different, non-comparable scoring schemes \cite{causal_discovery_review}.
The last step in this process was to reduce the graph to event variables only; henceforth we use the graph in \ref{fig:ges-reduced}.
\subsection{Empirical Life Event Occurrence Distribution}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\linewidth]{surv-hist}
\caption{Empirical histograms and corresponding survival functions. The x-axis shows age, the left-hand side y-axis shows the observed counts of a specific event in the dataset (depicted in blue) and the right-hand side y-axis shows the survival functions derived from the data (depicted in orange). Since all of these samples are fully observed (i.e. uncensored), all survival functions converge to zero. (\textit{A}) New car event. (\textit{B}) Moving event. (\textit{C}) Child birth event. (\textit{D}) Wedding event. (\textit{E}) Divorce event.}
\label{fig:hist-surv}
\end{figure*}
Observed life-course data can be used to analyse when a certain life event is most likely to happen in a person's life. Empirical histograms are a good way to understand the occurrence distribution of events throughout the lifetime. Since in this case we work with completely observed data (only uncensored observations), the corresponding cumulative distribution functions (CDFs), and thus the survival functions, can be derived directly. \ref{fig:hist-surv} shows histograms and survival functions for five life events: new car, moving, child birth, wedding and divorce.
The results show that buying a new car is most likely to happen by the age of 40, with the probability decreasing over time and peaking at the age of 18, which can be related to the legal minimum age for obtaining a driving license for unrestricted car use in Germany (\ref{fig:hist-surv}\textit{A}). Home relocation is most likely to happen in the late 20s, with the probability increasing up to the age of 25 and then decreasing up to the age of 50, after which people seem unlikely to move again (\ref{fig:hist-surv}\textit{B}). Having a child roughly follows a normal distribution peaking around the age of 30, with a small chance of happening after the age of 45 (\ref{fig:hist-surv}\textit{C}). Getting married roughly follows a right-skewed distribution with a heavier tail towards older ages, and is most likely to happen in the mid 20s (\ref{fig:hist-surv}\textit{D}). Comparing the wedding and child birth distributions, child birth is shifted a couple of years later, implying that in general having a child happens soon after the wedding. Divorce is most likely to happen in the 30s, and generally before the age of 50; however, the small sample size indicates an overall low probability of divorce (\ref{fig:hist-surv}\textit{E}).
\subsection{Describing General Life Event Transitions}
Survival and cumulative hazard functions are estimated using Kaplan-Meier and Nelson-Aalen methods (\eqref{eq:km} and \eqref{eq:na}, respectively), and are shown in \ref{fig:graphs-ges-surv}. Kaplan-Meier estimates of the survival function are shown in \ref{fig:graphs-ges-surv}\textit{A}, where colors represent survival probabilities ranging from \numrange{0}{1}. The more the color at time $ t $ is on the green side, the higher the survival probability $ S(t) $ is, which indicates that there is a smaller probability that the event will happen before or at time $ t $. This can also be interpreted as showing that the event has a higher probability of surviving up to time $ t $. Normalized hazard functions are derived from the cumulative hazard function estimates and are shown in \ref{fig:graphs-ges-surv}\textit{B}, with a range from \numrange{0}{1}.
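The normalization just described is our own reconstruction; one way to obtain such curves from a Nelson-Aalen estimate is sketched below (toy numbers, and the paper's exact procedure may differ).
\begin{verbatim}
# Derive a normalized per-interval hazard curve from the Nelson-Aalen
# cumulative hazard by differencing and scaling to [0, 1].
import numpy as np
from lifelines import NelsonAalenFitter

T = [1, 2, 2, 3, 4, 5, 8, 20, 20, 20]    # toy durations (years)
E = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]       # toy event indicators

naf = NelsonAalenFitter()
naf.fit(T, event_observed=E)
H = naf.cumulative_hazard_.iloc[:, 0].to_numpy()
haz = np.diff(H, prepend=0.0)            # per-interval hazard increments
haz_norm = haz / haz.max() if haz.max() > 0 else haz
print(np.round(haz_norm, 2))
\end{verbatim}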
\begin{figure*}[h!]
\centering
\includegraphics[width=1.0\linewidth]{graphs-ges-surv}
\caption{General life event transitions, represented in a life event graph. Nodes represent life events and directed edges represent life event pairs, with results shown on the right-hand side of the edges. Each edge is divided into 21 equal-length parts, representing a period of 20 years and the base year, with its color depicting the function value shown in the scale from the color bar on the side of each graph. (\textit{A}) Survival functions for each pair of life events. The function for each pair can be interpreted either individually or jointly compared among each other, as they all come from the same scale from \numrange{0}{1}. (\textit{B}) Normalized hazard functions derived from the cumulative hazard functions computed using the Nelson-Aalen method. These functions should be interpreted only individually, to compare hazard rates between intervals, and not regarded jointly, as they are normalized to a range from \numrange{0}{1} coming from different scales.}
\label{fig:graphs-ges-surv}
\end{figure*}
Looking at \ref{fig:graphs-ges-surv}\textit{A}, in the first 2--3 years after getting married moving is the most likely next event, ahead of child birth or buying a new car; after about 3 years child birth takes over, as its survival probability decreases much faster. Buying a new car after getting married does not appear to be as imminent as child birth or moving. After having a child, moving is more likely than buying a new car. After moving, buying a car is unlikely compared to having a child, which is likely to happen within several years of moving. After buying a new car, moving and having a child are similarly likely.
\ref{fig:graphs-ges-surv}\textit{B} gives interesting insights into the specific intervals in which an event has a higher risk of happening. For example, after getting married, the new car or moving event is very likely to happen within one year, whereas the child birth event has a higher risk of happening 10 or 11 years after the wedding than in other periods. These estimates should be interpreted with caution: comparing hazard rates between intervals only captures their relative differences, and since the hazard rates accumulate along the timeline, the cumulative risk of an event happening increases over time.
\subsection{The Effect of Covariates on Transition Probabilities}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\linewidth]{surv-avg-functions}
\caption{Average life event transitions behaviour for five pairs of life events. Predicted survival functions are computed for all dataset samples for each pair separately, which are then averaged, thus marginalizing over all the covariates to account for their empirical distributions. The blue line represents the survival function, and the light blue area around it is the confidence belt ($ \pm $ 1 standard deviation). The average survival functions have different shapes, with different confidence belts at different times. The narrower the confidence belt is, the less impact covariates have on the survival probability. Since not all of our samples are fully observed (some are censored as in some cases the event does not happen within the study period), survival functions do not end up at 0. (\textit{A}) Wedding to child birth transition. (\textit{B}) Wedding to new car transition. (\textit{C}) Moving to new car transition. (\textit{D}) Child birth to new car transition. (\textit{E}) Wedding to divorce transition.}
\label{fig:surv-avg-functions}
\end{figure*}
The Cox proportional hazards models were trained and validated for each pair of life events from the life events graph. To help the reader understand the relationships in the data and to prevent false interpretations of the findings, details of the data preparation and description, as well as of model training and validation, are given in the previous section and in the Appendix. It is important to note that the covariates obtained from time-dependent variables (e.g. age, number of children, number of cars) are measured at the time of the initial (baseline) event, which may help explain some of the less intuitive results shown later in the text (e.g. people with no or fewer children at the time of the wedding are less likely to get divorced, arguably because they do not bring children into the partnership; see \ref{fig:surv-functions}\textit{W}). Other counter-intuitive results may stem from the small sample sizes of some covariate values (see Fig. A4), e.g. people with more cars appear to buy cars faster than those with fewer or no cars, as shown later in \ref{fig:surv-functions}\textit{N}.
\subsubsection{Average life event transitions behaviour}
Average transition behaviour gives an insightful overview of the survival behaviour within each pair and across pairs, as shown in \ref{fig:surv-avg-functions}. Having a child is very likely to happen within 5 years of getting married, and almost certainly within ten years, with a probability close to 0 after 10 years. There is a small probability of having a child in the same year the wedding happens (\ref{fig:surv-avg-functions}\textit{A}). Buying a new car after getting married, as depicted in \ref{fig:surv-avg-functions}\textit{B}, has almost the same probability profile as buying a new car after having a child (\ref{fig:surv-avg-functions}\textit{D}): both happen within the first 10 years from the initial event and are unlikely after that. There is large variability in buying a new car after moving, demonstrated by the wide confidence belt in \ref{fig:surv-avg-functions}\textit{C}, with a similar switch point at around 10 years. The divorce event is unlikely to happen, with the survival probability decreasing only slightly over the first 15 years (\ref{fig:surv-avg-functions}\textit{E}).
It is important to note that the general survival functions shown in \ref{fig:graphs-ges-surv}\textit{A} and the average survival functions in \ref{fig:surv-avg-functions} differ: the former are obtained directly from the data using univariate models, while the latter are obtained by averaging all predicted survival functions from the fitted multivariate CPH regression models, thus including the effect of covariates and the model error.
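The averaging used for \ref{fig:surv-avg-functions} can be sketched as follows (continuing the CoxPHFitter example, with illustrative names): predict one survival curve per individual, then take the mean and standard deviation across individuals at each time point.
\begin{verbatim}
# Hedged sketch: average predicted survival function with a +/- 1 std band,
# marginalizing over the empirical covariate distribution.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("wedding_to_child_birth.csv")   # hypothetical pair dataset
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="duration", event_col="event")

surv = cph.predict_survival_function(df)         # rows: time, columns: individuals
summary = pd.DataFrame({"mean": surv.mean(axis=1), "std": surv.std(axis=1)})
print(summary.head())
\end{verbatim}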
\subsubsection{Life event transitions behaviour}
To examine the effect of socio-demographics on transition probabilities, we condition the prediction on one covariate for each transition pair (\ref{fig:surv-functions}).
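A simple way to produce such conditioned curves, sketched below under the same assumptions as the earlier lifelines examples, is to predict survival for copies of the data in which the covariate of interest is fixed at each of its levels and then average; lifelines also provides a built-in helper for plots of this kind.
\begin{verbatim}
# Hedged sketch: survival curves conditioned on one covariate by fixing it
# at each level for every individual and averaging the predictions.
import pandas as pd

def conditioned_curves(cph, df, covariate, values):
    curves = {}
    for v in values:
        frame = df.copy()
        frame[covariate] = v                  # fix the covariate of interest
        curves[v] = cph.predict_survival_function(frame).mean(axis=1)
    return pd.DataFrame(curves)               # one averaged S(t) per level

# e.g. conditioned_curves(cph, df, "owns_home", [0, 1])
\end{verbatim}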
\begin{figure*}[h!]
\centering
\includegraphics[width=1.0\linewidth]{surv-functions}
\caption{Life event transitions behaviour for five selected pairs of life events expressed through survival functions. Each row with five graphs represent behaviour for a specific life events pair, with five columns indicating which socio-demographic covariates the transition behaviour is conditioned on. For each graph, the x-axis represents time in years passed from the initial event, modelling the probability of another event happening. The covariates are age, nationality, number of children, number of cars and owning a home, respectively in order of plotting. \textit{(A-E)} Wedding to child birth transitions. \textit{(F-J)} Wedding to new car transitions. \textit{(K-O)} Moving to new car transitions. \textit{(P-T)} Child birth to new car transitions. \textit{(U-Y)} Wedding to divorce transitions.}
\label{fig:surv-functions}
\end{figure*}
\paragraph*{\textit{Wedding} to \textit{Child birth}}
The decision to have a child after getting married is mainly driven by two factors: age and the number of children (\ref{fig:surv-functions}\textit{A-E}). In terms of age, younger people tend to have children later after the wedding, whereas the age groups $ [36, 45] $ and $ [46, 55] $ tend to have children before the others, with a high chance of having a child in the same year as the wedding or within 3 years of it. Regardless of age, if a couple does not have a child within 10 years of getting married, they will probably not have a child at all. In terms of nationality, Germans tend to have children slightly later than other nationals. Interestingly, couples with no previous children tend to have children later than couples with children, who are likely to have a child within 1 to 2 years after getting married. Couples who already have children before the wedding could be couples who had children together before getting married or who had children with previous partners. The number of cars does not play a significant role in the decision to have a child. Similarly, whether a couple owns a home, which can be related to their economic situation, does not seem to be relevant for having a child.
\paragraph*{\textit{Wedding} to \textit{New car}}
The decision to buy a new car after getting married is driven by several factors, the most influential being age and the current number of cars (\ref{fig:surv-functions}\textit{F-J}). The youngest age group $ [18, 21] $ tends to get a new car sooner than the others, with the probability gradually decreasing with age; people in the age group $ [36, 45] $ are very unlikely to get a new car after getting married. Germans tend to get a new car later than other nationals. Those who have no children or one child exhibit similar behaviour, while having two children speeds up the decision to buy a car and having three children postpones it. Intuitively, couples with no car tend to buy a car much sooner than couples who already possess one. Couples who own a home tend to buy a new car slightly sooner than couples who do not, which may be related to a better economic situation.
\paragraph*{\textit{Moving} to \textit{New car}}
The decision to buy a new car after moving is driven by all factors except nationality (\ref{fig:surv-functions}\textit{K-O}). The difference in behaviour between age groups is very distinct: the younger the person, the sooner he/she tends to get a new car after moving, with the youngest group $ [18, 21] $ tending to get a car within 2 years of moving. Germans tend to buy a new car later than other nationals. The number of children plays an important role: people with fewer children tend to buy a car sooner, whereas people with 3 or more children are less likely to buy a new car after moving. People who had no children at the time of moving are also more likely than others to have a (first) child after moving, which in turn may explain the faster car acquisition. Surprisingly, people living in households with two or more cars are very likely to buy or replace a car soon after moving, most of them within two years, including the year of moving. The magnitude of this effect is very strong, which may be an artifact of the small sample size or of the different possibilities in temporal coding. People who own a home are likely to get a car significantly sooner than people who do not.
\paragraph*{\textit{Child birth} to \textit{New car}}
The decision to buy a new car after having a child is driven by several factors, of which the most influential are age and the number of cars (\ref{fig:surv-functions}\textit{P-T}). With the exception of the youngest age group $ [18, 21] $, increasing age postpones buying a car after having a child. Germans are likely to get a new car slightly sooner than other nationals. People with one child will probably buy a car much later than people with more than one child. In line with intuition, after having a child people without a car tend to buy one sooner than people who already have a car. Similarly, people who own a home are more likely to buy a car than people who do not.
\paragraph*{\textit{Wedding} to \textit{Divorce}}
The overall probability of getting divorced is low, but some variability is still present (\ref{fig:surv-functions}\textit{U-Y}). In general, based on the observed divorces, if a divorce does not happen within 15 years of the wedding, it will probably not happen at all. The youngest age group tends to get divorced sooner than the others. Germans tend to get divorced later than other nationals. People with fewer children have a lower divorce probability than people with more children, with people with four children being a particularly sensitive group. An intuitive interpretation is that those with no (or fewer) children at the time of getting married do not bring children into the partnership, which results in the lower divorce probability. The number of cars can also affect divorce probability, with people with fewer cars being more likely to get divorced. Homeowners are more likely to get a divorce than non-homeowners.
\subsubsection*{The effect of socio-demographics on transitions}
Regarding the general effect of socio-demographics on transitions, younger age groups generally get a new car and divorce sooner, while the transition to having a child becomes faster as age increases, with the oldest group tending to postpone the decision to buy a car (\ref{fig:surv-functions}\textit{A,F,K,P,U}). Regarding nationality, other nationals make faster transitions than Germans, except when it comes to buying a car after having a child (\ref{fig:surv-functions}\textit{B,G,L,Q,V}). The number of children seems to play a role in all transitions (\ref{fig:surv-functions}\textit{C,H,M,R,W}). The number of cars, as expected, tends to affect the decision to buy a new car (\ref{fig:surv-functions}\textit{D,I,N,S,X}). Homeowners generally make faster transitions than people who do not own a home (\ref{fig:surv-functions}\textit{E,J,O,T,Y}).
\section{Conclusion}
\label{sec:conclusion}
Numerous studies in transport research and sociology seek to model the structure of the life course and the inter-connection between life-course events (birth of a child, home relocation, car change, etc.). Traditional statistical models do not generally allow an analyst to infer causality without over-relying on expert knowledge. Indeed, statistical models of choice, which are predominantly used in the field, provide correlation measures rather than causation measures. By introducing a bi-level model, where the upper level is a causal discovery algorithm and the lower level is a series of survival analyses modelling the transition rates between life-course events, we provide a more rigorous, data-driven assessment of causality in life-course analysis. The results of this research can be used for high-level activity modelling for mobility applications, to better understand how life events and socio-demographic characteristics affect and constrain people's mobility and travel choices.
\section*{Acknowledgements}
\label{sec:acknowledgements}
The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement no. 754462. The Leeds authors acknowledge the financial support by the European Research Council through the consolidator grant 615596-DECISIONS.
\section*{Appendix}
\label{sec:appendix}
\subsection*{Data collection}
\label{subsec:app_data_collection}
A screenshot of the life-course calendar used in this study is shown in Fig. A1.
\begin{figure}[hb]
\centering
\includegraphics[width=0.75\textwidth]{LCC_new}
\caption{A screenshot of the life-course calendar used in this study (translated from German into English).}
\label{fig:att-lit-lcc}
\end{figure}
\subsection*{Causal discovery}
\label{subsec:app_causal_discovery}
In order to prepare the data for the algorithms, we created an observation for each event pair occurrence, as illustrated in Table A1. Observation 2 can be read this way: ``at some moment in time, a married individual with 2 children had a child birth event, which was followed by a moving event later in life''. Due to the nature of the dataset, an ``event occurrence'' always matches a year, so one may have multiple events preceding multiple events, as happens in observation 4.
\begin{sidewaystable}
\centering
\caption{An example of four \emph{event pair} observations.}
\label{tab:app-event-obs}
\begin{tabular}{ *{7}{c} *{4}{p{1.3cm}}}
ID & New car & Moving & Child birth & Wedding & Married & Children & New car next event & Moving next event & Child birth next event & Wedding next event \\
\midrule
1 & 0 & 1 & 0 & 0 & 1 & 2 & 1 & 0 & 0 & 0 \\
2 & 0 & 0 & 1 & 0 & 1 & 2 & 0 & 1 & 0 & 0 \\
3 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\
4 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
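A sketch of how such event-pair observations could be assembled from one person's yearly records is given below; the record format and column names are assumptions, and the actual preprocessing may differ.
\begin{verbatim}
# For every year with at least one event, pair it with the next year in which
# any event occurs, keeping the state variables of the first year (cf. Table A1).
EVENTS = ["new_car", "moving", "child_birth", "wedding"]

def event_pairs(years):
    """years: list of per-year dicts with binary event flags and state values."""
    rows = []
    for i, rec in enumerate(years):
        if not any(rec[c] for c in EVENTS):
            continue
        nxt = next((r for r in years[i + 1:] if any(r[c] for c in EVENTS)), None)
        if nxt is None:
            continue
        row = {c: rec[c] for c in EVENTS}
        row.update(married=rec["married"], children=rec["children"])
        row.update({c + "_next": nxt[c] for c in EVENTS})
        rows.append(row)
    return rows
\end{verbatim}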
Regarding the choice of the causal graph, Fig. A2 shows the survival probability marginalized over all other events (new car, moving, wedding) leading to the child birth event for the GES graph. There is a clear indication that the child birth happens almost certainly after each one of these events. These relations are not present in the PC graph, where only the new car event precedes the child birth event.
\begin{figure}
\centering
\includegraphics[width=0.38\linewidth]{surv-avg-birth-marg}
\caption{Average survival curve from all other events (new car, moving, wedding) to the child birth event.}
\label{fig:surv-avg-birth-marg}
\end{figure}
\subsection*{Survival analysis}
\label{subsec:app_survival_analysis}
Fig. A3 shows a general overview of observations in survival analysis.
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{surv_data_example}
\caption{General example of survival analysis data. Uncensored observations (red) and censored observations (blue).}
\label{fig:app-surv-data}
\end{figure}
\subsection*{Data elaboration}
\label{subsec:app_data_elaboration}
An example of elaborated time-dependent data for one person from the dataset used for survival analysis is shown in Table A2.
\begin{sidewaystable}
\centering
\caption{An example of time-dependent data for one person from the dataset.}
\label{tab:app-example-person}
\begin{tabular}{ l *{20}{c}}
\multirow{2}{*}{Attribute} & \multicolumn{20}{c}{Year} \\ \cline{2-21}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\
\midrule
Age & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 & 35 & 36 & 37 \\
Owns home & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
Cars & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
Distance to work & 0 & 0 & 8 & 7 & 7 & 5 & 5 & 5 & 5 & 5 & 5 & 7 & 7 & 7 & 7 & 7 & 7 & 7 & 7 & 7 \\
Rides car & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Children & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
Married & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
New car & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Moving & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Child birth & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Wedding & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Divorce & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
\subsection*{Covariates description}
\label{subsec:app_covariates_description}
The covariates are divided into five groups: general attributes, independent state attributes, dependent state attributes, transition attributes and indicator transition attributes. Their description, type and feasible values are shown in Table A3.
General attributes are \textit{Gender}, \textit{Nationality}, \textit{No. of relocations} and \textit{City size}. They represent general characteristics of individuals. Their value depends neither on the year (they are time-constant) nor on the life-event pair. \textit{Gender} and \textit{Nationality} are categorical attributes, while \textit{No. of relocations} and \textit{City size} are discrete.
Independent state attributes are \textit{Age}, \textit{Owns home}, \textit{Distance to work} and \textit{Rides car}. These are time-dependent attributes. Given a life-event pair, an independent state attribute takes the value it has in the year the cause event occurs. \textit{Age}, \textit{Owns home} and \textit{Rides car} are categorical, whereas \textit{Distance to work} is discrete (discretised to 1 km).
Dependent state attributes are \textit{No. of cars}, \textit{No. of children} and \textit{Married}. They are affected by the transition attributes. Their value is computed in the same way as for the independent state attributes. They are all categorical.
Transition attributes are \textit{New car}, \textit{Moving}, \textit{Child birth}, \textit{Wedding} and \textit{Divorce}. These are binary indicators that are 0 in most years and 1 in the year the event happens. They increment a dependent state attribute (in the case of \textit{No. of cars} and \textit{No. of children}) or toggle the value of \textit{Married}. For estimation, these attributes are treated differently: each transition attribute is expanded into the number of years before the cause event at which it occurred, so it takes values in the range $ [0, -19] $, since the dataset covers a period of 20 years for each person. They are all discrete.
Indicator transition attributes are dummy variables that record whether the corresponding transition attribute occurred before the cause event. They are all categorical.
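
As an illustration of this expansion, the sketch below encodes, for a given cause-event year, how many years earlier each transition last occurred, together with the corresponding indicator. The treatment of transitions that never occurred before the cause event (value 0 with indicator 0) is an assumption made here for illustration only.
\begin{verbatim}
# Illustrative encoding of transition and indicator-transition attributes
# relative to a cause event occurring in year `cause_year` (0..19).
def encode_transitions(years, cause_year,
                       events=("New car", "Moving", "Child birth",
                               "Wedding", "Divorce")):
    encoded = {}
    for e in events:
        # Most recent occurrence of event e at or before the cause year.
        past = [t for t in range(cause_year + 1) if years[t][e] == 1]
        if past:
            encoded[e] = past[-1] - cause_year       # value in [0, -19]
            encoded["Indicator - " + e] = 1
        else:
            encoded[e] = 0                           # never occurred
            encoded["Indicator - " + e] = 0
    return encoded
\end{verbatim}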
\begin{sidewaystable}
\centering
\caption{Covariates description}
\label{tab:app-covariates-description}
\begin{tabular}{ *{5}{l} }
Group & Name & Description & Type & Domain \\
\midrule
\multirow{4}{*}{General attributes} & Gender & gender, is the person male or female & categorical & \{0, 1\} \\
& Nationality & nationality, is the person German or international & categorical & \{0, 1\} \\
& No. of relocations & no. of times the person relocated with parents & discrete & [0, 9] \\
& City size & the size of the city the person lives in & discrete & [0, 9] \\
\midrule
\multirow{4}{*}{Independent state attributes} & Age & age, a range of 20 years for every person & categorical & 5 age groups \\
& Owns home & does the person own a home or not & categorical & \{0, 1\} \\
& Distance to work & travelling distance to work [km] & discrete & [0, 100] \\
& Rides car & if the person uses the car to go to work & categorical & \{0, 1\} \\
\midrule
\multirow{3}{*}{Dependent state attributes} & No. of cars & the number of cars the person has & categorical & [0, 2] \\
& No. of children & the number of children the person has & categorical & [0, 3] \\
& Married & marital status, is the person married or not & categorical & \{0, 1\} \\
\midrule
\multirow{5}{*}{Transition attributes} & New car & the event of buying a new car & discrete & $ [0, -19] $ \\
& Moving & the event of moving & discrete & $ [0, -19] $ \\
& Child birth & the event of having a child & discrete & $ [0, -19] $ \\
& Wedding & the event of having a wedding (getting married) & discrete & $ [0, -19] $ \\
& Divorce & the event of having a divorce & discrete & $ [0, -19] $ \\
\midrule
\multirow{5}{*}{Indicator transition attributes} & Indicator - New car & indicator for the event of buying a new car & categorical & \{0, 1\} \\
& Indicator - Moving & indicator for the event of moving & categorical & \{0, 1\} \\
& Indicator - Child birth & indicator for the event of having a child & categorical & \{0, 1\} \\
& Indicator - Wedding & indicator for the event of having a wedding (getting married) & categorical & \{0, 1\} \\
& Indicator - Divorce & indicator for the event of having a divorce & categorical & \{0, 1\} \\
\bottomrule
\end{tabular}
\end{sidewaystable}
Distribution of covariates values is shown in Fig. A4.
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{surv-functions-hist}
\caption{Distribution of covariates values.}
\label{fig:app-surv-data-hist}
\end{figure}
\subsection*{Model fitting}
\label{subsec:app_model_fitting}
Model fitting results are shown in Table A4.
\begin{table}
\centering
\caption{Fitted CPH models}
\begin{tabular}{ llrrr }
Cause & Effect & Sample size & C-index - train & C-index - test \\
\midrule
New car & Moving & 819 & 0.687 & 0.605 \\
New car & Child birth & 736 & 0.801 & 0.757 \\
Moving & New car & 1172 & 0.863 & 0.864 \\
Moving & Child birth & 1704 & 0.831 & 0.824 \\
Child birth & Moving & 1481 & 0.670 & 0.679 \\
Child birth & New car & 873 & 0.784 & 0.809 \\
Wedding & Moving & 1175 & 0.591 & 0.604 \\
Wedding & Child birth & 1241 & 0.733 & 0.748 \\
Wedding & Divorce & 1237 & 0.719 & 0.682 \\
Wedding & New car & 877 & 0.608 & 0.652 \\
\bottomrule
\end{tabular}
\label{tab:app-fitted-coxph}
\end{table}
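
For reference, models of this kind can be fitted with standard survival-analysis tooling. The snippet below is a minimal sketch based on the \texttt{lifelines} package (not necessarily the implementation used for Table~\ref{tab:app-fitted-coxph}); the file name and the column names for the duration and the censoring indicator are placeholders.
\begin{verbatim}
# Minimal sketch: fit one cause-effect CPH model and report the
# concordance index on a held-out split (column names are placeholders).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("pairs_newcar_to_moving.csv")   # hypothetical file
train = df.sample(frac=0.8, random_state=0)
test = df.drop(train.index)

cph = CoxPHFitter(penalizer=0.01)
cph.fit(train, duration_col="duration", event_col="observed")
print("C-index (train):", cph.concordance_index_)
print("C-index (test):",
      cph.score(test, scoring_method="concordance_index"))
\end{verbatim}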
BkiUd1LxK7Tt6CA5HmlS | \section{Introduction}
Go (baduk, weiqi) is one of the most popular board games in Asia, especially in China, Japan, and Korea \cite{togo}. The attractiveness of Go has grown rapidly in the last two decades. The prize money for professional Go tournaments has reached millions of dollars each year, and tens of millions of viewers watch competitions on TV streams and online servers. Professional Go players have long used intuition-driven performance analysis methods to improve themselves and prepare for tournaments. For example, players replay and review their opponents' matches, identify weaknesses, and hope to gain an advantage in upcoming games.
\begin{figure}
\centering
\includegraphics[width=3.3in]{Figs/katago.jpg}
\caption{An illustration of KataGo's analysis of a game. The blue line represents the win rate, and the green line represents the score differential. Both are in black's perspective.}
\label{FIG:katago}
\end{figure}
With the advent of open-source AI based on the AlphaZero \cite{alphazero} algorithm, such as ELF OpenGo \cite{elfopengo}, Leela Zero \cite{leelazero}, and KataGo \cite{katago}, players can improve performance analysis by obtaining in-game statistics from the AI during the game review. Figure \ref{FIG:katago} shows the results of KataGo's analysis of a game. However, these analysis methods are still intuition-driven rather than data-driven, as players rarely use purposefully collected data to conduct further research.
Data-driven performance analysis technologies exist for many sports. Castellar~\cite{tabletennis} examines the effects of response time, reaction time, and movement time on the performance of table tennis players. Merhej~\cite{happennext} uses deep learning methods to evaluate the value of players' defensive actions in football matches. Baboota~\cite{premierleague} predicts the outcome of English Premier League matches using machine learning methods and achieves promising results. Beal~\cite{combining} uses natural language processing techniques to blend statistical data with contextual articles from human sports journalists and improves the performance of predicting soccer matches. Yang~\cite{moba} proposes an interpretable two-stage spatio-temporal network for predicting the real-time win rate of mobile MOBA games.
These analysis technologies play an important role in enhancing the fan experience and evaluating player level. However, there is no comparable technology for Go yet because 1) there has been no complete dataset of professional Go games, 2) Go matches lack readily available statistics that indicate the status of both sides (e.g., rebounds in basketball), and 3) compared to popular sports, very few people understand Go well, making it difficult to organize the data and construct effective features.
Against this background, we present the Professional Go Dataset (PGD), containing 98,043 games played by 2,148 professional players. The raw records of the games are derived from publicly available Go databases, the meta-information related to players and tournaments is annotated by a knowledgeable Go fan, and rich in-game statistics are obtained by analyzing every game with KataGo. Thus, the quality and coverage of the dataset are reliable. In addition, we further extract and process the in-game statistics using prior knowledge of Go.
To establish baseline performance on PGD, we use several popular machine learning methods for game outcome prediction. Predicting Go game results is a challenging performance analysis problem that has mainly been approached by designing rating systems. However, the various methods designed have produced rather limited performance improvements. Even the state-of-the-art Bayesian approach, WHR \cite{whr}, has reached a ceiling in terms of accuracy because it utilizes only win-loss and time information.
The experimental results show a 9.6\% improvement in the accuracy of the prediction system with the help of multiple metadata features and in-game features. In addition, we outline further promising tasks that can benefit from PGD.
Our contributions are summarized as follows:
\begin{itemize}
\item We present the first professional Go dataset for sports analytics. The dataset contains a large amount of player, tournament, and in-game data, facilitating extensive performance analysis.
\item We make the first attempt at feature engineering for in-game Go statistics and develop a machine learning model to predict game outcomes. The proposed model significantly outperforms previous rating-based methods.
\item This paper presents possible research directions that benefit from this dataset.
\end{itemize}
\section{Related Work}
\subsection{Background of Professional Go}
Although Go originated in China, the professional Go system first appeared in Japan. During the Edo period, Go became a popular game in Japan and was financed by the government. In the early 20th century, Japan first established the modern professional Go system and maintained its absolute leadership until the 1980s. In 1985, Nie Weiping achieved an incredible streak of victories against top Japanese players in the Japan-China Super Go Tournament. Four years later, Cho Hunhyun defeated Nie Weiping 3:2 in a highly anticipated final and won the 400,000-dollar championship prize of the Ing Cup. These two landmark events represent the formation of a triple balance of power between China, Japan, and Korea. The golden age of Korean Go began in 1996 and lasted for ten years. Korean players, represented by Lee Changho, dominated the competition during this period. After that, a large number of Chinese players progressed rapidly and competed fiercely with Korean players, leading to much discussion among fans about who the strongest player was, e.g., Gu Li versus Lee Sedol and Ke Jie versus Shin Jinseo.
Founded in 1988, the Ing Cup and the Fujitsu Cup (discontinued in 2011) were two of the earliest world Go tournaments. Like Grand Slam titles in tennis, world championships are the highest honor for professional Go players. Table \ref{tab:championship} summarizes the major Go tournaments and their latest champions.
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Active major tournaments and their latest champions. Note that the regional tournaments are only partially listed. Tournaments marked with an asterisk are team tournaments (win-and-continue competitions). Statistics are as of December 31, 2021.}
\begin{tabular}{ll}
\hline
Tournament Name (Count) & Latest Champion \\ \hline
\textbf{World Major} & \\
Samsung Cup (26) & Park Junghwan 2:1 Shin Jinseo \\
LG Cup (26) & Shin Minjun 2:1 Ke Jie \\
Nongshim Cup* (23) & Shin Jinseo 1:0 Ke Jie \\
Chunlan Cup (13) & Shin Jinseo 2:0 Tang Weixing \\
Ing Cup (9) & Tang Weixing 3:2 Park Junghwan \\
Mlily Cup (4) & Mi Yuting 3:2 Xie Ke \\
Bailing Cup (4) & Ke Jie 2:0 Shin Jinseo \\
\hline
\textbf{Regional Major} & \\
Asian TV Cup (31) & Shin Jinseo 1:0 Ding Hao \\
Japanese Kisei (47) & Iyama Yuta 4:1 Kono Rin \\
Japanese Meijin (47) & Iyama Yuta 4:3 Ichiriki Ryo \\
Japanese Honinbo (73) & Iyama Yuta 4:3 Shibano Toramaru \\
Chinese Mingren (32) & Mi Yuting 2:1 Xu Jiayang \\
Chinese Tianyuan (35) & Gu Zihao 2:1 Yang Dingxin \\
Korean Myeongin (44) & Shin Jinseo 2:1 Byun Sangil \\
Korean KBS Cup (33) & Shin Jinseo 2:0 An Sungjoon \\
Taiwan Tianyuan (20) & Wang Yuanjun 4:2 Jian Jingting \\
European Go Grand Slam (5) & Mateusz Surma 1:0 Artem Kachanovsky \\ \hline
\end{tabular}
\label{tab:championship}
\end{table}
\subsection{State-of-the-art Go AI in the Post-AlphaZero Era}
AlphaZero has outperformed the best professional players by a wide margin and has prompted researchers to turn their attention to more challenging tasks in AI. However, in the Go community, AlphaZero and its open-source implementations, such as ELF OpenGo and early Leela Zero, do not work well for analyzing human games. On the one hand, their evaluation contains only the win rate and not the score difference; in game states where the gap is small, the win rate fluctuates dramatically, so the analysis is not robust. On the other hand, these AIs do not support different rules and \textit{komis} (additional points added for the white player to compensate for the black player's first-move advantage), which often leads to incorrect evaluations of games.
KataGo is a cutting-edge AlphaZero-based engine that provides rich in-game statistics and more accurate analysis. In addition, KataGo supports different rules and komis. Therefore, KataGo is suitable for analyzing human games and extracting in-game statistics. While KataGo supports a variety of statistics, in the following sections we mainly focus on three of them: win rate, score difference, and the recommended move list.
\subsection{Board Game Analytics}
Many board games have AI based on the AlphaZero algorithm, such as Gomoku \cite{gomokunet}, Othello \cite{othello}, and NoGo \cite{nogo}. However, these games do not have a large number of accessible game records, which makes it challenging to develop analysis techniques for them.
On the other hand, the study of chess analytics emerged very early, with the benefit of detailed chess datasets, a large number of participants, and advanced chess computer engines. The most common applications contain the evaluation of the player's rating \cite{intrinsic,actualplay,evolutionaryrating}, skill \cite{skillperformance,skillcomputer}, and style \cite{preference,psychology,psychometric}.
These efforts have made a significant contribution to the chess community. However, due to the lack of benchmarks and discontinuous research, the AI community and the game analytics community have rarely focused on chess-related research, especially in light of the recent rapid development of machine learning methods. Acher~\cite{largescale} attempted to establish a large-scale open benchmark for chess, but the project seems to be inactive at the moment. Fadi~\cite{resultprediction} introduces machine learning methods into game result prediction, but the results appear unsatisfactory, as the performance of the machine-learning-based approaches is lower than that of the Elo rating system. Recent research has made significant progress in modeling human behavior in chess \cite{mcilroy2020aligning, mcilroy2020learning, mcilroy2021detecting}. In particular, \cite{mcilroy2021detecting} develops a technique to determine a player's identity from a sequence of chess moves, demonstrating the great potential of data-driven board game analytics.
There is little previous research related to Go analytics. However, the rapid growth in the number of Go tournaments, players, and recorded games in the last two decades has made it possible to develop a dataset and benchmark for data-driven Go analytics. Therefore, this paper proposes PGD, a large-scale professional Go dataset, for data-driven sports analytics. We hope to build a bridge between this ancient game and modern game data science with this dataset.
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Regions where both players are located. Legend: CR = Cross Region}
\begin{tabular}{cccccc}
\hline
Total & CR & CHN & KOR & JPN & Others \\
98043 & 12963 & 26977 & 19455 & 32907 & 5741 \\ \hline
\end{tabular}
\label{tab:regions}
\end{table}
\begin{figure}
\centering
\includegraphics[width=3.3in]{Figs/age.jpg}
\caption{Distribution of the age of players in different generations.}
\label{FIG:age}
\end{figure}
\section{Dataset Collection and Annotation}
\subsection{Meta-information}
\label{sec:meta}
First, we access the raw game data through Go4Go\footnote{https://www.go4go.net}, the largest database of Go game records; the owner approved the academic use of the database. From the raw data, we excluded certain games, including amateur or AI competitions, handicap games, and games that ended abnormally (e.g., a loss by forfeit after playing a \textit{ko} without a \textit{ko threat}). In the end, we obtained 98,043 games played by 2,148 professional players from 1950 to 2021 in SGF format.
We obtain information about each player's date of birth, gender, and nationality from Sensei's Library\footnote{https://senseis.xmp.net} and List of Go Players\footnote{https://db.u-go.net}. If a player's metadata is not in these databases, a high-level amateur Go player supplements the information. We set the metadata to ``unknown'' if the annotator cannot make a confident judgment.
Meta-information related to matches and rounds is extracted from the SGF files. We finally obtained 503 tournament categories (e.g., Ing Cup, Samsung Cup), 3,131 tournaments (e.g., 1st Ing Cup, 15th Samsung Cup), and 342 rounds (e.g., round 1, semi-finals). The amateur annotator manually labels each tournament with its region, importance, and tournament type. The region is either international or non-international; the importance is either world-major or regional-major; the tournament types include elimination, league, team, and friendly.
\subsection{In-game Statistics}
We used KataGo v1.9.1 with the TensorRT backend to analyze the game records. We applied the following strategy to balance accuracy and speed: 100 simulations were performed for each move to obtain initial in-game statistics; if a move resulted in a large fluctuation (more than a 10\% change in win rate or 5 points in score), KataGo re-evaluated that move with 1,000 simulations. In the end, we obtained in-game statistics for all games, mainly including the win rate, score difference, and preferred moves. The analysis was conducted on an NVIDIA RTX 2080Ti graphics card and took a total of about 25 days.
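
The two-pass strategy can be sketched as follows; here \texttt{analyze} is a placeholder for a call to the KataGo analysis engine that returns the win rate and score difference after a given move at a given number of playouts, and is not shown.
\begin{verbatim}
# Sketch of the two-pass analysis: a cheap first pass for every move,
# followed by re-evaluation of moves that cause large swings.
def analyze_game(moves, analyze, cheap=100, precise=1000):
    stats = [analyze(moves, i, visits=cheap) for i in range(len(moves))]
    for i in range(1, len(stats)):
        d_win = abs(stats[i]["winrate"] - stats[i - 1]["winrate"])
        d_score = abs(stats[i]["score"] - stats[i - 1]["score"])
        if d_win > 0.10 or d_score > 5:      # large fluctuation
            stats[i] = analyze(moves, i, visits=precise)
    return stats
\end{verbatim}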
\section{Statistics}
In this section, we present the statistics of PGD. The statistics include two aspects: players and games.
\subsection{Players}
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Most frequent players.}
\begin{tabular}{cc|cc}
\hline
Players & Games & Players & Games \\ \hline
Cho Chikun & 2079 & O Rissei & 1217 \\
Lee Changho & 1962 & Yamashita Keigo & 1217 \\
Kobayashi Koichi & 1605 & Gu Li & 1215 \\
Rin Kaiho & 1536 & Otake Hideo & 1214 \\
Cho Hunhyun & 1533 & Cho U & 1145 \\
Lee Sedol & 1375 & Choi Cheolhan & 1133 \\
Yoda Norimoto & 1316 & Chang Hao & 1100 \\
Kato Masao & 1249 & Park Junghwan & 1058 \\
Takemiya Masaki & 1248 & Kobayashi Satoru & 1044 \\ \hline
\end{tabular}
\label{tab:frequentplayers}
\end{table}
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Most frequent matchups.}
\begin{tabular}{cccc}
\hline
Matchups & Games & Win & Loss \\ \hline
Cho Hunhyun vs. Lee Changho & 287 & 110 & 177 \\
Cho Hunhyun vs. Seo Bongsoo & 207 & 139 & 68 \\
Kobayashi Koichi vs. Cho Chikun & 123 & 60 & 63 \\
Yoo Changhyuk vs. Lee Changho & 122 & 39 & 83 \\
Cho Chikun vs. Kato Masao & 107 & 67 & 40 \\ \hline
\end{tabular}
\label{tab:frequentmatchups}
\end{table}
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Most frequent tournaments.}
\begin{tabular}{cc|cc}
\hline
Tournaments & Games & Tournament & Games \\ \hline
Chinese League A & 11092 & Japanese Oza & 1996 \\
Korean League A & 4773 & Japanese NHK Cup & 1939 \\
Japanese Honinbo & 3655 & Samsung Cup & 1806 \\
Japanese Ryusei & 2932 & Chinese Mingren & 1588 \\
Japanese Meijin & 2869 & Chinese Women's League & 1569 \\
Japanese Judan & 2766 & LG Cup & 1514 \\
Japanese Kisei & 2723 & Korean League B & 1411 \\
Japanese Tengen & 2383 & Chinese Tianyuan & 1282 \\
Japanese Gosei & 2230 & Korean Women's League & 1279 \\ \hline
\end{tabular}
\label{tab:tournaments}
\end{table}
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Black win rate (BWR) under different komis.}
\begin{tabular}{cccc}
\hline
Komis & Games & KataGo BWR & Actual BWR \\ \hline
4.5 & 1123 & 64\% & 55.48\% \\
5.5 & 21980 & 56\% & 53.20\% \\
6.5 & 46952 & 48\% & 50.43\% \\
7.5 & 26819 & 39\% & 46.90\% \\
8 & 1169 & 39\% & 48.85\% \\ \hline
\end{tabular}
\label{tab:komis}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=7in]{Figs/games.jpg}
\caption{Game counts in years.}
\label{FIG:games}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=3.3in]{Figs/length.jpg}
\caption{Distribution of game lengths.}
\label{FIG:length}
\end{figure}
Table \ref{tab:regions} summarizes the diversity of the competitions; we provide the regions where both sides of the games are located. It can be seen that in most games both players are from Japan, while cross-region games are relatively rare. Figure \ref{FIG:age} presents the age distribution of the players in different generations of games. Before 1990, players were mostly between 30 and 50 years old. In the following decade, the average age of players dropped rapidly to less than 30 years old. In the 21st century, players in their 20s completed the most games. This trend reveals that professional Go is becoming more competitive. Table \ref{tab:frequentplayers} and Table \ref{tab:frequentmatchups} show the most frequent players and matchups. These prominent legends completed most of the games, indicating a long-tailed distribution of the number of games played by different players.
\subsection{Games}
\begin{figure}
\centering
\includegraphics[width=3.3in]{Figs/coin.jpg}
\caption{Mean coincidence rate (CR) in years.}
\label{FIG:coin}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.0in]{Figs/loss.jpg}
\caption{Mean loss win rate (MLWR) and mean loss score (MLS) by year, for players with different WHR scores.}
\label{FIG:loss}
\end{figure}
Figure \ref{FIG:games} presents the number of games per year in PGD. We can observe that it grows roughly exponentially: on the one hand, this reflects the increase in the number of professional matches; on the other hand, it shows that PGD does not record many early games. Figure \ref{FIG:games}A shows the counts for different types of tournaments. We can see that the creation and development of leagues in the 21st century have provided many opportunities for Go players. Figure \ref{FIG:games}B shows the round statistics of the matches. Figure \ref{FIG:games}C shows the gender statistics of the games. We can observe that the number of tournaments for female professional Go players is increasing but still needs further improvement.
Table \ref{tab:tournaments} shows the tournaments with the highest numbers of games; these are mostly Japanese tournaments, except for the Chinese League A and the Korean League A, mainly because these tournaments have been held for many years. Table \ref{tab:komis} shows the win rate of black under different komis. The actual win rate for black is about 9 percentage points higher at 4.5 komi than at 7.5 komi. Thus, different komis shift the distribution of in-game statistics, creating additional difficulties in developing prediction techniques. Figure \ref{FIG:length} shows the distribution of game lengths, where resigned games are significantly shorter.
Figure \ref{FIG:coin} and Figure \ref{FIG:loss} show the trends of some in-game statistics over the years. From these figures, we observe some interesting results. First, the advent of AlphaGo prompted professional players to start imitating the AI's preferred moves. This is particularly evident in the opening phase (first 50 moves): the coincidence rate in Figure \ref{FIG:coin} grows very significantly after 2016. At the same time, the non-opening phase is harder to imitate, so its coincidence rate grows much less than that of the opening stage. From Figure \ref{FIG:loss}, we can observe that the average loss of win rate and the average loss of score also decreased rapidly after the emergence of AlphaGo. It is worth noting that Figure \ref{FIG:loss} also shows statistics for players with different WHR ratings, with higher-rated players having lower losses. This offers potential for developing sports performance analysis techniques.
\section{Game Results Prediction of PGD}
This section develops a system for predicting future matches from historical data. We first perform feature extraction and preprocessing, then apply popular machine learning methods, and finally, we report the performance of the prediction system.
\subsection{Feature Extraction}
In this section, we divide all features into three categories: meta-information features, contextual features, and in-game features. The meanings of these features are explained in detail in the following sections.
\subsubsection{Meta-information Features}
\begin{itemize}
\item Basic Information (BI): Includes the game time and the players' age, gender, region, and other fundamental characteristics.
\item Ranks (R): Ranking of Go players, 1-dan lowest, 9-dan highest.
\item WHR Score (WS): WHR rating, which measures the level of the player.
\item WHR Uncertain (WU): A measure of uncertainty in the WHR rating system. In general, players who have not played for longer or have fewer total games have higher uncertainty.
\item Tournament Feature (TF): Features of the tournament, already described in Section \ref{sec:meta}.
\end{itemize}
\subsubsection{Contextual Features}
\begin{itemize}
\item Match Results (MR): The player's recent performance in various competitions, such as 14 wins and 6 losses in the last 20 games, 9 wins and 1 loss in the last 10 games.
\item Match Results by Region (MRR): Performance against the opponent's region, such as 10 wins and 10 losses in the last 20 games against Korean players.
\item Matchup Results (MUR): Past competitions against the opponent.
\item Tournament Results (TR): Past performances at this tournament, such as Ke Jie's 20 wins and 6 losses at the Samsung Cup.
\item Opponents Ranks (OR): Rank of opponents in recent matches.
\item Opponents Ages (OA): Age of opponents in recent matches.
\item Cross-region Counts (CRC): The number of cross-regional competitions. Generally speaking, the larger the number, the higher the level of players.
\end{itemize}
\subsubsection{In-game Features}
\begin{itemize}
\item Garbage Moves (GM): We define a GM as a move played after the winner is essentially decided. GM are identified as follows: if, after move $x$ in a game, the leading player's win rate (smoothed by a moving average with a window of length 4) always stays above 90\%, or the score difference always stays above 3 points, then all moves after move $x$ are called GM. We take the ratio of the number of GM to the game length as an in-game feature (a computational sketch is given after this list). In the following in-game feature calculations, we remove all GM.
\item Unstable Rounds (UR): We define a UR as a round in which the win rate or score difference fluctuates dramatically. UR are identified as follows: moves $x$ and $x+1$ lose win rates $w_1$ and $w_2$ or scores $s_1$ and $s_2$, respectively (where $w_1$ and $w_2$ are greater than 10\%, and $s_1$ and $s_2$ are greater than 5), and the absolute difference between $w_1$ and $w_2$ is less than 2\%, or the absolute difference between $s_1$ and $s_2$ is less than 1. The rounds at moves $x$ and $x+1$ are then said to be UR. We take the number of UR as an in-game feature. In the following in-game feature calculations, we remove all UR.
\item Mean Win Rate (MWR): Average win rate in recent games. Note that the win rate here is the average in-game win rate; for example, if Ke Jie comes back and wins a game, his MWR in that game may still be only 15\%.
\item Mean Score (MS): Average score difference in recent games.
\item Mean Loss Win Rate (MLWR): Average win rate lost in recent games.
\item Mean Loss Score (MLS): Average score lost in recent games.
\item Advantage Rounds (AR): Number of rounds with a 5\% win rate or a 3 point advantage in recent games.
\item Strong Advantage Rounds (SAR): Number of rounds with a 10\% win rate or a 5 point advantage in recent games.
\item Coincidence Rate (CR): The ratio of moves that match KataGo's recommendation to total moves in recent games.
\end{itemize}
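
As a concrete reference for the garbage-move feature, the sketch below computes the GM ratio from per-move win rates and score differences taken from the leading player's perspective (win rate in $[0,1]$, score in points); the thresholds follow the definition above.
\begin{verbatim}
# Sketch of the garbage-move (GM) ratio: all moves after the earliest
# move x from which the smoothed win rate stays above 0.9 or the smoothed
# score difference stays above 3 are counted as garbage moves.
def moving_average(xs, w=4):
    return [sum(xs[max(0, i - w + 1):i + 1]) / (i - max(0, i - w + 1) + 1)
            for i in range(len(xs))]

def garbage_move_ratio(winrates, scores):
    n = len(winrates)
    wr, sc = moving_average(winrates), moving_average(scores)
    x = n - 1
    while x > 0 and (all(w > 0.9 for w in wr[x:]) or
                     all(s > 3 for s in sc[x:])):
        x -= 1
    return (n - 1 - x) / n
\end{verbatim}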
\subsection{Model Training}
We selected four advanced machine learning methods to train the models: Random Forest (RF) \cite{randomforest}, XGBoost \cite{xgboost}, LightGBM \cite{lightgbm}, and CatBoost \cite{catboost}. As a comparison, three rating-system-based models were applied to predict game outcomes: ELO \cite{elo}, TrueSkill \cite{trueskill}, and WHR. It is worth noting that we directly used the default hyperparameters in the corresponding Python packages without tuning them. Although adjusting these hyperparameters could further improve performance, our proposed approach already achieves very competitive results compared with the rating-system-based models.
We divided the dataset into a training set and a test set. The training set includes competitions from 1950 to 2017, with a total of 77,182 matches. The test set includes competitions from 2018 to 2021, with a total of 20,861 matches. The evaluation metrics are accuracy (ACC) and mean squared error (MSE).
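
A minimal sketch of this training and evaluation loop is given below, assuming the features described above have already been assembled into matrices with a binary label indicating whether the first-listed player wins; the variable names are placeholders.
\begin{verbatim}
# Minimal sketch: train CatBoost with default hyperparameters and report
# accuracy and the squared error of the predicted win probability.
from catboost import CatBoostClassifier
from sklearn.metrics import accuracy_score, brier_score_loss

def train_and_evaluate(X_train, y_train, X_test, y_test):
    model = CatBoostClassifier(verbose=0)      # default hyperparameters
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]   # predicted win probability
    acc = accuracy_score(y_test, prob > 0.5)
    mse = brier_score_loss(y_test, prob)       # MSE of the probability
    return acc, mse
\end{verbatim}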
\begin{table*}[]
\centering
\renewcommand\arraystretch{1.2}
\caption{Experimental results. Bolded font indicates the best performance. The red font in the last row shows the performance difference between our proposed approach and the best rating system-based method.}
\begin{tabular}{lllllllllllll}
\hline
\multicolumn{1}{c}{} & \multicolumn{2}{c}{Mean} & \multicolumn{2}{c}{CR} & \multicolumn{2}{c}{CHN} & \multicolumn{2}{c}{KOR} & \multicolumn{2}{c}{JPN} & \multicolumn{2}{c}{Others} \\ \hline
\multicolumn{1}{c}{} & \multicolumn{1}{c}{ACC↑} & \multicolumn{1}{c}{MSE↓} & \multicolumn{1}{c}{ACC↑} & \multicolumn{1}{c}{MSE↓} & \multicolumn{1}{c}{ACC↑} & \multicolumn{1}{c}{MSE↓} & \multicolumn{1}{c}{ACC↑} & \multicolumn{1}{c}{MSE↓} & \multicolumn{1}{c}{ACC↑} & \multicolumn{1}{c}{MSE↓} & \multicolumn{1}{c}{ACC↑} & \multicolumn{1}{c}{MSE↓} \\ \hline
ELO & 0.6515 & 0.2144 & 0.6466 & 0.2174 & 0.6176 & 0.2273 & 0.6339 & 0.2191 & 0.6637 & 0.2107 & 0.7156 & 0.1907 \\
TrueSkill & 0.6439 & 0.2605 & 0.6380 & 0.2781 & 0.6095 & 0.1875 & 0.6265 & 0.2654 & 0.6552 & 0.2546 & 0.7106 & 0.2062 \\
WHR & 0.6567 & 0.2125 & 0.6684 & 0.2090 & 0.6212 & 0.2295 & 0.6397 & 0.2193 & 0.6577 & 0.2108 & 0.7254 & 0.1813 \\ \hline
RF & 0.6932 & 0.2022 & 0.6954 & 0.1998 & 0.6623 & 0.2146 & 0.6800 & 0.2086 & 0.7031 & 0.1956 & 0.7446 & 0.1847 \\
XGBoost & 0.7351 & 0.1700 & 0.7457 & 0.1663 & 0.7033 & 0.1844 & 0.7197 & 0.1779 & 0.7563 & 0.1607 & 0.7692 & 0.1520 \\
LightGBM & 0.7509 & 0.1637 & 0.7611 & 0.1574 & 0.7241 & 0.1765 & 0.7374 & 0.1715 & 0.7599 & 0.1600 & 0.7912 & 0.1432 \\
CatBoost & \textbf{0.7530} & \textbf{0.1623} & \textbf{0.7632} & \textbf{0.1572} & \textbf{0.7258} & \textbf{0.1752} & \textbf{0.7379} & \textbf{0.1699} & \textbf{0.7633} & \textbf{0.1577} & \textbf{0.7946} & \textbf{0.1411} \\
& \textcolor{red}{9.6\%} & \textcolor{red}{-0.050} & \textcolor{red}{9.5\%} & \textcolor{red}{-0.052} & \textcolor{red}{10.5\%} & \textcolor{red}{-0.052} & \textcolor{red}{9.8\%} & \textcolor{red}{-0.049} & \textcolor{red}{10.0\%} & \textcolor{red}{-0.053} & \textcolor{red}{6.9\%} & \textcolor{red}{-0.040} \\ \hline
\end{tabular}
\label{tab:results}
\end{table*}
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Ablation results.}
\begin{tabular}{ccccc}
\hline
Metadata & Contextual & In-game & ACC↑ & MSE↓ \\ \hline
\checkmark & & & 0.6719 & 0.1975 \\
& \checkmark & & 0.7099 & 0.1827 \\
& & \checkmark & 0.6883 & 0.1891 \\
\checkmark & \checkmark & & 0.7342 & 0.1706 \\
\checkmark & \checkmark & \checkmark & 0.7530 & 0.1623 \\ \hline
\end{tabular}
\label{tab:ablation}
\end{table}
\subsection{Experiment Results}
Table \ref{tab:results} shows the experimental results. Among all rating-system-based outcome prediction models, WHR is the best performer, achieving an accuracy of 65.67\% and an MSE of 0.2125. WHR performs slightly better than ELO because WHR takes full advantage of the long-term dependence among game results. The performance of TrueSkill is lower, probably because it is better suited to computing ratings in multiplayer sports.
Among the machine learning methods using the proposed features, CatBoost has the highest performance, reaching an accuracy of 75.30\% and an MSE of 0.1623, which is much higher than WHR. In almost every category, the CatBoost model brings an improvement of about 10 percentage points in accuracy and 0.05 in MSE. The improvement in the Others category is smaller because it is relatively easier to predict; the accuracy of the WHR method already exceeds 72\% there.
\subsection{Ablation Study}
We also designed ablation experiments to verify the contribution of each feature group, each using the CatBoost method. The results are shown in Table \ref{tab:ablation}. The model using only meta-information features reaches 67.19\%, slightly higher than the WHR method, indicating that meta-information attributes other than the WHR score also contribute to the results. Using only contextual features achieves an accuracy of 70.99\%, and using only in-game features achieves 68.83\%, both higher than the state-of-the-art rating-system-based prediction approach. Combining these features further improves the predictive ability of the model. The ablation experiments demonstrate that each feature group enhances the performance of the prediction system.
\section{Possible Research Directions}
This section will focus on some of the research directions that benefit from PGD, in addition to game outcome prediction.
\subsection{Feature engineering of in-game statistics}
Although our extracted in-game features capture player performance reasonably well, there is still room for improvement. First, more features could be designed to enhance the predictive ability. Second, as seen in Figure \ref{FIG:katago}, well-developed time series techniques have the potential to improve the analysis.
\subsection{Behavior and style modeling}
The behavior and style of different players attract the most attention of fans in Go matches. For example, Lee Changho's style is extreme steadiness and control, overcoming his opponent at the last moment. On the other hand, Gu Li is characterized by taking every opportunity to fight with his opponent.
Modeling and classifying the playing styles of players is a challenging but fascinating problem. It enables many potential applications, such as targeted training and preparation, AI-assisted cheating detection \cite{cheating1,cheating2}, and even wide-ranging psychological research \cite{psychology,psychometric,risk,female}.
There is relatively little literature on this problem in board games. Omori~\cite{shogi} proposes classifying shogi moves based on game style and training an AI for a specific game style. McIlroy-Young \cite{mcilroy2020learning} develops a personalized model that predicts a specific player's moves and demonstrates that it captures individual-level human playing styles. Style modeling and recognition have not achieved impressive success so far due to the lack of suitable analysis techniques and large-scale datasets. The advent of PGD can help make progress on such issues.
\subsection{Rating system}
A result prediction model can only give the win probability between two players. A rating system, on the other hand, can incorporate the relative strengths of many players into the evaluation and is more valuable for assessing a player's strength. Existing rating systems for board games consider only win-loss and time information, while a rating system that incorporates other features has the potential to evaluate a player's strength better. Thus the combination of machine learning approaches and traditional rating systems is promising.
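
For reference, the update rule of a basic Elo-style system, which uses only the win-loss outcome, can be written in a few lines; $K=32$ and the scale factor of 400 are conventional choices rather than values tuned for Go.
\begin{verbatim}
# Basic Elo update: ratings move toward the observed result. Only the
# win/loss outcome is used, which is exactly the limitation noted above.
def elo_update(r_a, r_b, score_a, k=32.0):
    # score_a is 1.0 if player A wins and 0.0 otherwise.
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new
\end{verbatim}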
\subsection{Live commentary enhancement}
Millions of Go fans watch Go matches live on online platforms or TV channels every year. However, a classic world Go tournament can last more than five hours with only the host analyzing the game. In this case, viewers get bored and stop watching.
With the advent of AlphaZero, live Go broadcasts usually show the AI's evaluation of the position, which slightly improves the viewer experience. However, simply showing the win rate is not enough to attract viewers to watch for a long time and often leads to attacks on professional players, as even the top professional players' games are usually rated very negatively by the AI.
Our proposed PGD has the potential to change this situation. A range of technologies developed by PGD, like automatic commentary text generation, real-time statistics, and result prediction, will enable a much more detailed and nuanced analysis of Go matches, thus greatly enhancing the viewer experience in live streaming.
\section{Conclusion}
We present PGD, the first large-scale professional Go dataset for data-driven analytics. With a large number of valuable features, our dataset can be used as a benchmark for a wide range of performance analysis tasks related to professional Go. The results show that machine learning methods far outperform state-of-the-art rating systems-based approaches in game results prediction. Our dataset is promising for developing useful tools and solving real-world problems for professional Go systems.
\bibliographystyle{IEEETran}
BkiUbJbxK3xgpbERBbcM | \section{Introduction}
Eriksen, Kristiansen, Langangen, and Wehus recently analyzed Usain Bolt's 100\,m sprint at
the 2008 Summer Olympics in Beijing and speculated what the world record would have been had he
not slowed down in celebration near the end of his race.\cite{bolt} They analyzed two scenarios for the last
two seconds: either Bolt maintained the same acceleration as the second-place runner (Richard Thompson),
or his acceleration was 0.5\,m/s$^2$ greater than Thompson's.
They found race times of $9.61\pm0.04$\,s and $9.55\pm0.04$\,s for the first and second scenarios, respectively.
In comparison, Bolt's actual race time in Beijing was 9.69\,s. The
predictions in Ref.~\onlinecite{bolt} were apparently confirmed at the 2009 World Championships
in Athletics in Berlin (race time 9.58\,s), suggesting that Bolt had improved his performance. However,
the runners in Berlin benefited from a 0.9\,m/s tailwind that, as we will show, cannot be neglected.
In this paper we analyze Bolt's records in Beijing and Berlin using a function proposed
by Tibshirani\cite{tib,wagner} to describe the time dependence of the velocity in both races.
This paper is organized as follows. As a first step, we fit an empirical function,
$v(t)$, to the data available from the International Association of Athletics
Federation.\cite{iaaf} From this fit we can deduce the maximum acceleration,
the maximum power, and the total mechanical energy produced by the runner. We then
compare Bolt's performances in Berlin 2009 and Beijing 2008, concluding that all
of these values were smaller in 2009 than in 2008.
\section{Modeling the time dependence of the velocity}
In a seminal paper, Keller\cite{keller} analyzed the problem faced by runners -- how should they
vary their velocity to minimize the time to run a certain distance?
In Ref.~\onlinecite{keller} the resistance of the air was assumed
to be proportional to the velocity and, for the 100\,m sprint, it was assumed that
the best strategy is to apply a constant force, $F$, that is as large as possible.
The equation of motion was written as
\begin{equation}
m\frac{dv}{dt}=F-\alpha v,
\label{keller}
\end{equation}
where $v$ is the speed of the runner. The solution of Eq.~\eqref{keller} is $v(t)=(F/\alpha)(1-e^{-\alpha t/m})$.
Although this solution seems to answer the proposed question, it has some limitations. One problem is that
the air flow is turbulent, not laminar, and thus the drag force is proportional to $v^2$, rather than $v$.
Another problem is that runners must produce energy (and thus a force) to replace the energy expended
by the vertical movement of the runners' center of mass and the loss of energy due to the movement of
their legs and feet. (See Ref.~\onlinecite{pfau} for an analysis of the energy cost
of the vertical movement in a horse race.) Also, the runners' velocity declines slightly close to the
end of the 100\,m race, which is not taken into account in Eq.~(\ref{keller}).
Tibshirani\cite{tib,wagner} proposed that a runner's force decreases with time according to
$f(t)=F-\beta t$, so that the solution of Newton's second law is
\begin{equation}
v(t)=\left(\frac{F}{\alpha}+\frac{m\beta}{\alpha^2}\right)(1-e^{-\alpha t/m})-\frac{\beta t}{\alpha}.
\label{v}
\end{equation}
This solution solves some of the limitations discussed in the previous paragraph,
but it is still incomplete. As we know, the air resistance is proportional to $v^2$, but
in the above models the air resistance was supposed to be proportional only to $v$,
probably to simplify the dynamical equations such that an analytical solution could
be obtained.
The problem of determining the best form of the function to describe the time dependence of the velocity of a runner in the 100\,m
sprint was addressed in Refs.~\onlinecite{summers,mureika1}, but is still open.
We fitted Bolt's velocity in Beijing 2008 and Berlin 2009 using an equation with the
same form as Eq.~(\ref{v}), but with three parameters to be fitted to his position as a function of time.
We assume that the velocity
is given by
\begin{align}
\label{vf}
v(t)&=a(1-e^{-ct})-bt, \\
\noalign{\noindent and the corresponding position is}
x(t)&=at-\frac{bt^2}{2}-\frac{a}{c}(1-e^{-ct}).
\label{xf}
\end{align}
\section{Experimental data and results}
The parameters $a$, $b$, and $c$ in Eq.~(\ref{xf}) were fitted using the least-squares method
constrained such that $x(t_f)=100$\,m, where $t_f$ is the running time minus the `reaction time'
(the elapsed time between the sound of the starting pistol and the initiation of the run).
The split times for each 10\,m of Bolt's race are reproduced
in Table~\ref{tab1}. In Table~\ref{tab1} we also show the split times calculated using the
parameters given in Table~\ref{tab2}.
The data variances were estimated as follows. First, we assumed that the uncertainties
of the split times are all equal. Then, we estimated the variances by the sum of the squares
of the differences between the observed split times and the calculated values divided by the
number of degrees of freedom (the number of data points minus the number of fitted parameters),
$\nu$, considering both the Beijing and Berlin data:
\begin{equation}
\sigma^2=\frac{1}{\nu}\left(\sum_i^{\rm Beijing\;data}(t_i-t_i^{\rm fitted})^2+
\sum_i^{\rm Berlin\;data}(t_i-t_i^{\rm fitted})^2\right)
\end{equation}
The result was $\sigma=0.02$\,s. Then, we generated 100 sets of split times by adding
normally distributed random numbers with zero mean and standard deviation 0.02\,s to
the observed split times.
For each simulation we fitted new values of the parameters. Thus, we obtained 100 values for each
parameter, for both the Beijing and Berlin competitions. The uncertainties of the fitted parameters
were estimated by the standard deviation of the 100 fitted parameters. The uncertainties calculated by
this procedure reflect the uncertainties of the measured split times, the rounding of the published
data, and the inadequacy of the model function.
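
A minimal numerical sketch of this constrained fit and of the Monte Carlo error estimate is given below, using \texttt{scipy} to fit the position function of Eq.~(\ref{xf}); the split times and distances must be supplied by the caller, and the starting values of the parameters are only illustrative.
\begin{verbatim}
# Sketch: fit the position function x(t) to the 10 m split times subject
# to x(t_f) = 100 m, then propagate a 0.02 s timing uncertainty by
# refitting perturbed split times.
import numpy as np
from scipy.optimize import minimize

def x_model(t, a, b, c):
    return a * t - b * t**2 / 2 - (a / c) * (1 - np.exp(-c * t))

def fit_splits(t, d, p0=(12.0, 0.1, 0.8)):
    # t: split times minus reaction time (s); d: distances 10,...,100 (m)
    cons = [{"type": "eq", "fun": lambda p: x_model(t[-1], *p) - 100.0}]
    sse = lambda p: np.sum((x_model(t, *p) - d) ** 2)
    return minimize(sse, p0, method="SLSQP", constraints=cons).x

def monte_carlo_errors(t, d, n=100, sigma=0.02, seed=0):
    rng = np.random.default_rng(seed)
    fits = np.array([fit_splits(t + rng.normal(0.0, sigma, t.size), d)
                     for _ in range(n)])
    return fits.std(axis=0)          # uncertainties of a, b, c
\end{verbatim}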
Bolt's velocities in Beijing and Berlin are shown in Fig.~\ref{fig1}. In Berlin we
observe a smaller initial acceleration and a smaller velocity decrease at the end of the race
compared to his performance in Beijing. The mechanical power developed by Bolt is given by
\begin{equation}
P(t)=m\frac{dv}{dt}v+\kappa(v-v_{\rm wind})^2v+200\,{\rm W},
\label{pf}
\end{equation}
where $\kappa=C_d\rho A/2$, so that the term $\kappa(v-v_{\rm wind})^2v$ is the power dissipated by drag, and
$v_{\rm wind}$ is the tailwind speed, which was zero in Beijing and equal to 0.9\,m/s in Berlin. For $C_d=0.5$, a typical value
of the drag coefficient,\cite{keller,mureika1} $\rho=1.2$\,kg/m$^3$ for the air density, and a cross section of
$A\approx1$\,m$^2$, we obtain $\kappa=0.3$\,W/(m/s)$^3$. A constant power of 200\,W is estimated for the power expended due
to the vertical movement of Bolt's center of mass.\cite{jump} We used
$m=86$\,kg for his mass.\cite{charles} The mechanical power generated by Bolt is shown in Fig.~\ref{fig2}. We see that the maximum power
generated by Bolt in Berlin is smaller than the power he generated in Beijing.
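
For reference, a minimal numerical sketch of the evaluation of $P(t)$ and of the total mechanical energy (the time integral of the power) is given below; the fitted parameters are passed in by the caller.
\begin{verbatim}
# Sketch: evaluate v(t), the power P(t), and the total mechanical energy
# for fitted parameters a, b, c and race duration t_f.
import numpy as np

def power_and_energy(a, b, c, t_f, m=86.0, kappa=0.3, v_wind=0.0, p0=200.0):
    t = np.linspace(0.0, t_f, 1000)
    v = a * (1 - np.exp(-c * t)) - b * t
    dvdt = a * c * np.exp(-c * t) - b
    P = m * dvdt * v + kappa * (v - v_wind) ** 2 * v + p0
    return P.max(), np.trapz(P, t)   # maximum power (W), energy (J)
\end{verbatim}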
Table~\ref{tab3} shows Bolt's maximum acceleration and power at Beijing 2008 and at
Berlin 2009 calculated using the fitted function $v(t)$. We
also show the total mechanical energy produced in both competitions, calculated from the time integral
of $P(t)$. Within the limitations of the model, Bolt's performance in Beijing (maximum force and power and the
total energy) is higher than in Berlin. Also the predictions in Ref.~\onlinecite{bolt} seem
to be vindicated.
\section{Discussion}
From Eq.~(\ref{vf}) we see that the velocity would become negative at times after the end of the race: $t=2.5$ and 6.5
minutes for Beijing and Berlin, respectively. Thus, the model is applicable only to
competitions of short duration. Within this regime it works well, as can be observed by comparing the differences between
the measured and fitted split times. The differences are very small, and even in
the worst cases (10\,m and 90\,m in the Beijing Olympic Games) the corresponding differences in the actual
and calculated distances are not greater than 0.6\,m. The quality of the model function can also be
observed by comparing the reported maximum velocity of 12.27\,m/s at 65.0\,m in Berlin with the result
obtained from the fit equal to 11.92\,m/s at 57.6\,m.
An important question related to short races is the effect of the air resistance on the
running time.\cite{pfau,linthorn,dapena,smith,quinn,mureika,mcfarland}
In our model the power produced by the runner [see Eq.~(\ref{pf})] is used to accelerate the runner's
body and to overcome air resistance (there also is a constant power of 200\,W, a kind of ``parasitic
power loss'' dominated by the up-down movement of the runner's center of mass). If the runner is assisted by a
tailwind, we assume that all the power saved to overcome air resistance will be used to accelerate the
runner's body. Given this assumption, we ask what would the Beijing record have been for a 0.9\,m/s tailwind
(the wind measurement on the day of the Berlin race)? We use Eq.~(\ref{pf}) and the same initial
acceleration for Bolt in Beijing to deduce a new velocity profile in Berlin. We find that the record at
the Beijing Olympic Games would have been reduced by 0.16\,s to 9.53\,s. Also, without the tailwind
in Berlin and with the same power profile developed by the runner in this race, the time would increase
by 0.16\,s.
The currently accepted theoretical and experimental results for the wind effect on 100\,m sprint
times\cite{pfau,linthorn,dapena,smith,quinn,mureika,mcfarland} give a typical reduction of about 0.05\,s
in the race time for a tailwind of 1\,m/s. A possible cause of our overestimation of the wind effect
is the value of the drag coefficient $\kappa$ that we used. A smaller value of $\kappa$ implies that the
wind effect is less important and, thus, the power saved in the presence of a tailwind would imply
a smaller gain of velocity and, consequently, time. For instance, if we used $\kappa=0.1$\,W/(m/s)$^3$ the
time reduction would be $\approx 0.06$\,s in the 100\,m sprint for a 0.9\,m/s tailwind --
close to the accepted value. Nevertheless, we chose $\kappa=0.3$\,W/(m/s)$^3$ (see the paragraph after
Eq.~(\ref{pf})), which agrees with the recommended $\kappa$ values\cite{mureika1,smith,quinn} to estimate
the energy and power produced by runners in the 100\,m sprint.
Another possible origin for the discrepancy between our result for the wind effect and the literature is the
force applied by the runner. We considered that the energy and the ground force applied by the
runner can be deduced from the fitted position versus time function. However, some authors prefer
to formulate a model based on the applied forces (see, for instance, Refs.~\onlinecite{mureika1, linthorn,
dapena,quinn}). These authors usually obtain better agreement for the effects of wind as well as altitude
in short races.
\begin{acknowledgements}
MTY thanks FAPESP and CNPq for partial support. The authors thank P. Gouffon for a helpful discussion.
\end{acknowledgements}
BkiUdBo4uzki06beSTH3 | \section{Introduction}
\begin{figure}[t]
\setlength{\belowcaptionskip}{-12pt}
\centerline{\includegraphics[width=0.45\textwidth]{example.png}}
\caption{An example of Sports Game Summarization.}
\label{example}
\end{figure}
In recent years, a large number of sports games have been played every day, and there is strong demand for news articles reporting on these games. Meanwhile, manually writing sports news is labor-intensive for professional editors.
Therefore, automatically generating sports news has gradually attracted attention from both research communities and industry.
As shown in the example in Fig.~\ref{example}, Sports Game Summarization aims at generating sports news articles based on live commentaries~\cite{zhang-etal-2016-towards}. Ideally, the generated sports news should record the core events of a game so that people can efficiently catch up on games.
Compared to most prior work on traditional text summarization, the challenges of sports game summarization lie in three aspects:
(1) The live commentaries record all events of a game and usually reach thousands of tokens, far beyond the typical 512-token limit of BERT-style pre-trained models;
(2) The live commentaries have a different text style from the sports news. Specifically, commentaries are more informal and colloquial;
(3) There is a knowledge gap between commentaries and news. Sports news usually contains additional knowledge of sports teams or players, which cannot be obtained from corresponding commentaries (we will discuss more in Sec.~\ref{sec:knowledge_corpus}).
Most existing works and datasets on sports game summarization treat the problem as a single document summarization task.
Zhang et al.~\cite{zhang-etal-2016-towards} discuss this task for the first time and build the first dataset, which has 150 commentary-news pairs. Wan et al.~\cite{Wan2016OverviewOT} also contribute a dataset with 900 samples for the NLPCC 2016 shared task. Early methods design diverse strategies to select key commentary sentences, and then either form the sports news directly~\cite{zhang-etal-2016-towards,Zhu2016ResearchOS,Yao2017ContentSF} or rely on human-constructed templates to generate the final news~\cite{Liu2016SportsNG,lv2020generate}.
Recently, Huang et al.~\cite{Huang2020GeneratingSN} present \textsc{SportsSum}, the first large-scale sports game summarization dataset with 5,428 samples. They crawl live commentaries and sports news from sports reports websites, and further adopt a rule-based cleaning process to clean the data.
Specifically, they first remove all HTML tags, and then, for each news article, remove the descriptions before starting keywords that indicate the start of a game (e.g., ``at the beginning of the game'').
This is because there usually are descriptions (e.g., match history) at the beginning of news articles that cannot be directly inferred from the corresponding commentaries.
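
To make this rule-based procedure concrete, a minimal sketch is given below; the tag-stripping pattern and the list of starting keywords are illustrative and are not the exact rules used by \textsc{SportsSum}.
\begin{verbatim}
# Illustrative rule-based cleaning in the spirit of SportsSum: strip HTML
# tags, then drop everything before the first "start of game" keyword.
import re

START_KEYWORDS = ["at the beginning of the game"]   # illustrative keyword

def clean_news(html):
    text = re.sub(r"<[^>]+>", " ", html)            # remove HTML tags
    text = re.sub(r"\s+", " ", text).strip()        # normalize whitespace
    for kw in START_KEYWORDS:
        idx = text.find(kw)
        if idx != -1:
            return text[idx:]                       # keep from keyword on
    return text
\end{verbatim}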
Huang et al.~\cite{Huang2020GeneratingSN} also extend early methods~\cite{zhang-etal-2016-towards,Zhu2016ResearchOS,Yao2017ContentSF,Liu2016SportsNG,lv2020generate} to a state-of-the-art two-step summarization framework, where each selected key commentary sentence is further rewritten to a news sentence through seq2seq models.
Despite above progress, there are two shortcomings in the previous works.
First, the previous datasets are limited in either scale or quality\footnote{We note that another concurrent work, \textsc{SportsSum2.0}~\cite{Wang2021SportsSum20GH}, also employs a manual cleaning process on a large-scale sports game summarization dataset. However, they only clean the original \textsc{SportsSum}~\cite{Huang2020GeneratingSN} dataset. There are three major differences between our dataset and \textsc{SportsSum2.0}: (1) the scale of our dataset is 1.45 times theirs; (2) our manual cleaning process also removes the history-related descriptions in the middle of news articles, which are neglected by \textsc{SportsSum2.0}; (3) our dataset also provides a large-scale knowledge corpus to alleviate the knowledge gap issue.}. The scale of early datasets~\cite{zhang-etal-2016-towards,Wan2016OverviewOT} is less than 1,000 samples, which cannot be utilized to explore sophisticated supervised models. \textsc{SportsSum}~\cite{Huang2020GeneratingSN} is many times larger than early datasets, but as shown in Fig.~\ref{noisy_samples}, we find that more than 15\% of news articles in \textsc{SportsSum} contain noisy sentences due to its simple rule-based cleaning process (detailed in Sec.~\ref{section:human_cleaning}).
Second, the knowledge gap between live commentaries and sports news is neglected by previous works. Though \textsc{SportsSum} removes part of the descriptions at the beginning of news articles, which alleviates the knowledge gap to some extent, many other such descriptions remain. Moreover, their two-step framework ignores the gap.
\begin{figure}[t]
\centerline{\includegraphics[width=0.48\textwidth]{case.png}}
\setlength{\belowcaptionskip}{-10pt}
\caption{Noisy samples in the \textsc{SportsSum} dataset. The first example contains descriptions irrelevant to the current game, while the second one includes an advertisement. The last case contains irrelevant hyperlink text.}
\label{noisy_samples}
\end{figure}
In this paper, we introduce \textsc{K-SportsSum}, a large-scale human-cleaned sports game summarization dataset which is constructed with the following features:
(1) In order to improve both the scale and the quality of the dataset, \textsc{K-SportsSum} collects a large amount of data from a large number of games and further employs a strict manual cleaning process to denoise the news articles.
It is a large-scale sports game summarization dataset, consisting of 7,854 high-quality commentary-news pairs;
(2) To narrow the knowledge gap between live commentaries and sports news, \textsc{K-SportsSum} also provides an abundant knowledge corpus including the information of 523 sports teams and 14,724 sports players.
Additionally, in terms of model design, we propose a knowledge-enhanced summarizer that first selects key commentary sentences, and then considers the information in the knowledge corpus while rewriting each selected sentence into a news sentence to form the final news.
The experimental results on the \textsc{K-SportsSum} and \textsc{SportsSum}~\cite{Huang2020GeneratingSN} datasets show that our model achieves new state-of-the-art performance. We further conduct qualitative analysis and a human study to verify that our model generates more informative sports news.
We highlight our contributions as follows:
\begin{itemize}[leftmargin=*,topsep=0pt]
\item We introduce a new sports game summarization dataset, i.e., \textsc{K-SportsSum}, which contains 7,854 human-cleaned samples. To the best of our knowledge, \textsc{K-SportsSum} is currently the highest-quality and largest sports game summarization dataset\footnote{We release the data at \url{https://github.com/krystalan/K-SportsSum}}.
\item In order to narrow the knowledge gap between commentaries and news, we also provide an abundant knowledge corpus containing the information of 523 sports teams and 14,724 sports players.
\item A knowledge-enhanced summarizer is proposed to take the information of the knowledge corpus into account when generating sports news. It is the first sports game summarization model that considers the knowledge gap issue.
\item The experimental results show our model achieves a new state-of-the-art performance on both \textsc{K-SportsSum} and \textsc{SportsSum} datasets. Qualitative analysis and human study further verify that our model generates better sports news.
\end{itemize}
\section{Data Construction}
In this section, we first show how we collect live commentary documents and news articles from Sina Sports Live (\S~\ref{sec:data_collection}).
Secondly, we analyze the noise present in the collected text and introduce the manual cleaning process used to denoise the news articles (\S~\ref{section:human_cleaning}).
Thirdly, we discuss the collection process of knowledge corpus (\S~\ref{sec:knowledge_corpus}).
Finally, we give the details of benchmark settings (\S~\ref{sec:benchmark_settings}).
\subsection{Data Collection}
\label{sec:data_collection}
Following previous works~\cite{zhang-etal-2016-towards,Wan2016OverviewOT,Huang2020GeneratingSN}, we crawl the records of football games from Sina Sports Live\footnote{\url{http://match.sports.sina.com.cn/index.html}}, the most influential football live broadcast service in China.
Note that existing research on sports game summarization focuses on football games, whose data are the easiest to collect, but the methods and discussions readily generalize to other types of sports.
After crawling all football games data from 2012 to 2020, we remove HTML tags and obtain 8,640 live commentary documents together with corresponding news articles.
\begin{table*}[t]
\centering
\setlength{\belowcaptionskip}{-10pt}
\resizebox{0.80\textwidth}{!}
{
\begin{tabular}{c|c|l}
\hline
\textbf{Additional Knowledge} & \% & \multicolumn{1}{c}{\textbf{Examples}} \\ \hline
\multirow{2}{*}{\textbf{Home or visiting team}} & \multirow{2}{*}{14.7} & \textbf{The home team} broke the deadlock in the 58th minute, Brian took a free kick and Lucchini scored \\
& & the goal. In the 77th minute, Nica... \\ \hline
\multirow{4}{*}{\textbf{Player Information}} & \multirow{4}{*}{6.3} & Paris exceeded it again in the 26th minute! Motta took a corner kick from the right. Verratti, \textbf{who is} \\
& & \textbf{1.65 meters tall}, pushed into the near corner at a small angle 6 meters away from the goal, 2-1. ... \\ \cline{3-3}
& & After another 3 minutes, Keita broke through Campagnaro and Brugman and passed back. Immobile \\
& & shot to the top of the goal. He faced the \textbf{previous teams} without celebrating the goal. ... \\ \hline
\multirow{3}{*}{\textbf{Team Information}} & \multirow{3}{*}{4.7} & \textbf{Purple Lily} equalized in the 29th minute, Tomovic broke through Ljajic and Duoduo from the right. \\ \cline{3-3}
& & \textbf{The last Champions League runner-up} completely controlled the game after the opening. Reus \\
& & shot higher once with two feet and was rescued by Mandanda once. ... \\ \hline
\end{tabular}
}
\setlength{\belowcaptionskip}{3pt}
\caption{Additional knowledge required for sports game summarization.}
\label{table:addition_knowledge}
\end{table*}
\subsection{Data Cleaning}
\label{section:human_cleaning}
\subsubsection{Noise Analysis}
We find that the \emph{live commentaries} are of high quality due to their \emph{structured form}. Nevertheless, the \emph{news articles} are \emph{unstructured text} that usually contains noise. Specifically, we divide all noise into four types:
\begin{itemize}[leftmargin=*,topsep=0pt]
\item Descriptions of other games: We find that about 12\% of the news pages contain multiple news articles belonging to different games, which was neglected by \textsc{SportsSum}~\cite{Huang2020GeneratingSN}, resulting in 2.2\% (119/5,428) of its news articles including descriptions of other games.
\item Advertisements: Advertisements often appear in news articles. We find that about 9.3\% (505/5,428) of news articles in \textsc{SportsSum} have such noise.
\item Irrelevant hyperlink text: Many news sentences contain hyperlink text. Some of it can be regarded as part of the news, but the rest is irrelevant to the current news, which we call irrelevant hyperlink text. About 0.6\% (31/5,428) of the samples in \textsc{SportsSum} have this kind of noise.
\item History-related descriptions: A number of news articles contain history-related descriptions that cannot be inferred from the corresponding live commentary document.
\textsc{SportsSum} adopts a rule-based approach that removes all news sentences before the starting keywords to alleviate part of this noise.
However, this approach cannot correctly handle about 4.6\% (252/5,428) of the news articles, because the rules cannot cover all situations.
In addition, history-related descriptions do not only appear at the beginning of the news. Many news articles introduce the matching history in the middle, e.g., if a player scores, the news may describe the player's recent outstanding performances in previous rounds, or count their total goals in the current season.
We also explain why we consider the history-related descriptions as noise rather than the knowledge gap between commentaries and news in Sec.~\ref{sec:knowledge_corpus}.
\end{itemize}
\begin{figure}[t]
\centering
\setlength{\belowcaptionskip}{-10pt}
\centerline{\includegraphics[width=0.45\textwidth]{interface.png}}
\caption{(a) Flow chart of manual cleaning process. (b) Screenshot of manual annotation interface.}
\label{fig:human}
\end{figure}
\subsubsection{Manual Cleaning Process}
To reduce the noise of news articles, we design a manual cleaning process with special consideration for sports game summarization.
Fig.~\ref{fig:human}(a) shows the manual cleaning process. We first remove the description of other games. Secondly, we delete advertisements and irrelevant hyperlink text. Finally, the history-related descriptions of the game are identified and removed.
\subsubsection{Annotation Process}
The interface of manual cleaning is shown in Fig.~\ref{fig:human}(b).
We recruit 9 master's students, who are native Chinese speakers, to perform the manual cleaning process. Firstly, we randomly select 50 news articles as test samples and ask all students to clean them at the same time. Based on the results, we designate 7 annotators and 2 senior annotators. After that, each news article is randomly assigned to 2 annotators. If the cleaning results are inconsistent, the final result is determined by a third, senior annotator. Finally, all cleaning results are checked by another two data experts. If the data experts think that a result does not meet the requirements, the news article is assigned again.
\subsubsection{Post-processing}
After the manual cleaning process, we discover that some news articles do not contain any description of the current game, or contain little information, e.g., only the result of the game.
We discard these news articles and retain 7,854 high-quality manually cleaned news articles which together with the corresponding live commentary documents constitute the \textsc{K-SportsSum} dataset.
\subsection{Knowledge Corpus}
\label{sec:knowledge_corpus}
As mentioned in Sec.~\ref{section:human_cleaning}, the history-related descriptions in the original news articles are regarded as noise and have been removed during the manual cleaning process for the following reasons: (1) we follow the settings of \textsc{SportsSum}~\cite{Huang2020GeneratingSN}, which adopts a rule-based approach to remove all sentences before the starting keywords, and most of the removed sentences are history-related descriptions; (2) the goal of sports game summarization is to generate news articles that \emph{record the key events of the \textbf{current} sports game}, and history-related descriptions make little contribution to this goal.
To investigate whether there are still other descriptions in the news articles that cannot be inferred from the corresponding live commentary document, we randomly select 300 samples from \textsc{K-SportsSum} and manually analyze whether the news articles contain additional knowledge that cannot be obtained from the commentary documents. Tab.~\ref{table:addition_knowledge} shows statistics of the required additional knowledge: (1) some news articles replace the names of football teams with ``home team'' or ``visiting team'', which cannot be explicitly obtained from commentary documents; (2) some news includes personal information about players, e.g., height, birthday, previous teams; (3) a number of news articles contain prior knowledge about football teams, such as nicknames (``Purple Lily'' is the nickname of ACF Fiorentina, shown in the penultimate line of Tab.~\ref{table:addition_knowledge}) and past honors.
To narrow the knowledge gap between commentaries and news, we construct a knowledge corpus whose collection process contains the following three steps:
\begin{figure}[t]
\centering
\setlength{\belowcaptionskip}{-10pt}
\centerline{\includegraphics[width=0.45\textwidth]{pages.png}}
\caption{The relations among metadata pages, team pages, player pages and Wikipedia articles. Each game has a metadata page that can link to related player or team pages. We further align each team page to the corresponding Wikipedia article.}
\label{fig:pages}
\end{figure}
\vspace{1ex}
\noindent\textbf{Step 1: Metadata Collection.}
For each game, as shown in Fig.~\ref{fig:pages}, Sina Sports Live also provides a \emph{metadata page} that can link to related \emph{player pages} and \emph{team pages}. Note that each team (or player) has a unique corresponding team (or player) page.
After crawling 8,640 metadata pages of all games in \textsc{K-SportsSum}, we obtain 559 URLs of team pages and 15,591 URLs of player pages.
\vspace{1ex}
\noindent\textbf{Step 2: Player Knowledge Collection.}
The player pages provided by Sina Sports Live contain structured knowledge cards describing players in ten aspects (e.g., name, birthday and age).
We crawl these pages and obtain the 14,724 players' structured knowledge cards.
Then we convert each knowledge card to a passage through several rule-based sentence templates (e.g., an item of knowledge card $\langle$``Ronaldo'', ``birthday'', ``September 18, 1976''$\rangle$ can be converted to a sentence ``Ronaldo's birthday is September 18, 1976''). In this way, we obtain 14,724 player passages.
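
A minimal Python sketch of this template-based conversion is given below; the attribute names and sentence templates are illustrative assumptions, not the exact ones used to build the corpus.
\begin{verbatim}
# Sketch: convert a structured knowledge card into a passage via
# rule-based sentence templates (attribute names/templates are assumed).
TEMPLATES = {
    "birthday": "{name}'s birthday is {value}.",
    "height": "{name} is {value} tall.",
    "position": "{name} plays as a {value}.",
}

def card_to_passage(card):
    """card: dict with a 'name' key plus attribute -> value items."""
    name = card["name"]
    sentences = []
    for attr, value in card.items():
        if attr == "name":
            continue
        template = TEMPLATES.get(attr, "{name}'s {attr} is {value}.")
        sentences.append(template.format(name=name, attr=attr, value=value))
    return " ".join(sentences)

card = {"name": "Ronaldo", "birthday": "September 18, 1976",
        "height": "1.83 m"}
print(card_to_passage(card))
\end{verbatim}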
\vspace{1ex}
\noindent\textbf{Step 3: Team Knowledge Collection.}
Though Sina Sports Live offers team pages, we find that most of them are less informative than the player pages. To construct an informative knowledge corpus, we decide to manually align these 559 team pages to Wikipedia articles\footnote{\url{https://zh.wikipedia.org/}}, from which we further extract plain text\footnote{We crawl the Wikipedia articles and then use the wikiextractor tool (\url{https://github.com/attardi/wikiextractor}) to extract plain text.} to form our knowledge corpus.
Three master's students and two data experts are recruited to perform the alignment task. Each team page is assigned to three students; if their results are inconsistent, the final result is decided in a group meeting of all five participants. Eventually, we align 523 out of 559 team pages to corresponding Wikipedia articles.
Finally, our knowledge corpus contains 523 team articles and 14,724 player passages. The corpus also provides the link and alignment relations among metadata pages, player pages, team pages and Wikipedia articles. Thus, for a given game, we can accurately retrieve the related articles and passages.
\begin{figure}[t]
\setlength{\belowcaptionskip}{-10pt}
\centerline{\includegraphics[width=0.45\textwidth]{deep_statistics.png}}
\caption{Statistics of tokens in news article and commentary document. Stop words were excluded from word clouds.}
\label{fig:deep_statistics}
\end{figure}
\begin{table*}[t]
\centering
\setlength{\belowcaptionskip}{-10pt}
\resizebox{0.95\textwidth}{!}
{
\begin{tabular}{l|l|cccccc|cccccc}
\hline
\multicolumn{1}{c|}{\multirow{3}{*}{Datasets}} & \multicolumn{1}{c|}{\multirow{3}{*}{\# Examples}} & \multicolumn{6}{c|}{News article} & \multicolumn{6}{c}{Live commentary document} \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{2}{c}{\# Tokens} & \multicolumn{2}{c}{\# Words} & \multicolumn{2}{c|}{\# Sent.} & \multicolumn{2}{c}{\# Tokens} & \multicolumn{2}{c}{\# Words} & \multicolumn{2}{c}{\# Sent.} \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & Avg. & 95th pctl. & Avg. & 95th pctl. & Avg. & 95th pctl. & Avg. & 95th pctl. & Avg. & 95th pctl. & Avg. & 95th pctl. \\ \hline
Zhang et al.~\cite{zhang-etal-2016-towards} & 150 & - & - & - & - & - & - & - & - & - & - & - & - \\ \hline
NLPCC 2016 shared task~\cite{Wan2016OverviewOT} & 900 & - & - & - & - & - & - & - & - & - & - & - & - \\ \hline
\textsc{SportsSum}\cite{Huang2020GeneratingSN} & 5428 & 801.11 & 1558 & 427.98 & 924 & 23.80 & 39 & 3459.97 & 5354 & 1825.63 & 3133 & 193.77 & 379 \\ \hline
\textsc{K-SportsSum} & \textbf{7854} & 606.80 & 1430 & 351.30 & 845 & 19.22 & 40 & 2251.62 & 3581 & 1200.31 & 1915 & 187.69 & 388 \\
\ \ \ \textsc{K-SportsSum} (train) & 6854 & 604.41 & 1427 & 349.88 & 843 & 19.31 & 40 & 2250.04 & 3581 & 1199.69 & 1915 & 187.81 & 390 \\
\ \ \ \textsc{K-SportsSum} (dev.) & 500 & 615.04 & 1426 & 356.51 & 824 & 19.64 & 42 & 2280.04 & 3608 & 1216.76 & 1920 & 187.08 & 370 \\
\ \ \ \textsc{K-SportsSum} (test) & 500 & 631.28 & 1463 & 365.54 & 859 & 20.10 & 41 & 2244.78 & 3563 & 1192.32 & 1917 & 186.73 & 376 \\ \hline
\end{tabular}
}
\setlength{\belowcaptionskip}{3pt}
\caption{Statistics of \textsc{K-SportsSum} and previous datasets (Sent.: sentence, Avg.: average, 95th pctl.: 95th percentile).}
\label{table:statistic}
\end{table*}
\begin{figure*}[t]
\centerline{\includegraphics[width=0.85\textwidth]{overview.png}}
\caption{An overview of knowledge-enhanced summarizer.}
\label{fig:overview}
\end{figure*}
\subsection{Benchmark Settings}
\label{sec:benchmark_settings}
\vspace{0.5ex}
We randomly select 500 samples from \textsc{K-SportsSum} to form the development set and another 500 samples to form the test set.
The remaining 6,854 samples constitute the training set.
\section{Data Analysis}
In this section, we analyze various aspects of \textsc{K-SportsSum} to provide a deeper understanding of the dataset and task of sports game summarization.
\vspace{1ex}
\noindent\textbf{Data Size.} Tab.~\ref{table:statistic} shows the statistics of \textsc{K-SportsSum} and previous datasets. The \textsc{K-SportsSum} dataset is much larger than any other dataset.
\vspace{1ex}
\noindent\textbf{News Article.} The average number of tokens in \textsc{K-SportsSum} is 606.80 which is less than the counterpart in \textsc{SportsSum} (801.11) due to the manual cleaning process.
Fig.~\ref{fig:deep_statistics}(a) shows the distributions of token length for news articles.
The distribution is positively skewed, which indicates that most news articles are shorter than the average.
\vspace{1ex}
\noindent\textbf{Live Commentary Document.} The length of commentary document usually reaches thousands of tokens, which makes the task more challenging.
The average number of tokens for commentary documents in \textsc{SportsSum} is 3459.97, which is longer than the counterpart in \textsc{K-SportsSum}, i.e., 2251.62.
This is because \textsc{SportsSum} considers every commentary sentence in the document, whereas we only retain the commentary sentences that have timeline information, since most commentary sentences without timeline information are irrelevant to the current game.
The distribution of token lengths for live commentary documents in \textsc{K-SportsSum} is shown in Fig.~\ref{fig:deep_statistics}(b). This distribution resembles a Gaussian mixture because the Sina Sports Live service was updated once during the collection period, and the length distribution of commentary documents differs before and after the update. Some commentary documents in \textsc{K-SportsSum} are provided by the old live service, while others come from the new one.
\vspace{1ex}
\noindent\textbf{Word Clouds.}
Fig.~\ref{fig:deep_statistics}(c) and Fig.~\ref{fig:deep_statistics}(d) show the word clouds for news articles and live commentary documents, where it is easy to observe the different text styles of these two types of text.
\vspace{1ex}
\noindent\textbf{Knowledge Corpus.}
The knowledge corpus contains 523 team articles and 14,724 player passages.
Each team article has 18.28 sentences or 1341.91 tokens on average. Each player passage contains 15.05 sentences or 283.49 tokens on average.
\section{Model}
We propose knowledge-enhanced summarizer which generates sports news based on both live commentary document and knowledge corpus. Formally, a model is given a live commentary document $C=\{(t_{1},s_{1},c_{1}),...,(t_{m},s_{m},c_{m})\}$ together with a knowledge corpus $K$ and outputs a sports news article $R=\{r_{1},r_{2},...,r_{n}\}$. $r_{i}$ represents $i$-th news sentence and $(t_{j},s_{j},c_{j})$ is $j$-th commentary, where $t_{j}$ is the timeline information, $s_{j}$ denotes the current scores and $c_{j}$ is the commentary sentence.
The overview of our knowledge-enhanced summarizer is illustrated in Fig.~\ref{fig:overview}. Firstly, we utilize a selector to extract key commentary sentences from the original commentary document (Sec.~\ref{sec:selector}). Secondly, a knowledge retriever is used to obtain related passages and articles from the knowledge corpus for each selected commentary sentence (Sec.~\ref{sec:retriever}). Lastly, we make use of a seq2seq rewriter to generate news sentences based on the corresponding commentary sentences and retrieved passages/articles (Sec.~\ref{sec:rewriter}). In order to train our selector and rewriter, we need labels to indicate the importance of commentary sentences and aligned $\langle$commentary sentence, news sentence$\rangle$ pairs.
Following Huang et al.~\cite{Huang2020GeneratingSN}, we obtain these importance labels and sentence pairs through a sentence mapping process (Sec.~\ref{sec:sentence_pairs}).
\subsection{Key Commentary Sentence Selection}
\label{sec:selector}
The selector is used to extract key commentary sentences $\{c_{i1},c_{i2},\dots,c_{ik}\}$ from all commentary sentences $\{c_{1},c_{2},\dots,c_{m}\}$ of a given live commentary document, which can be modeled as a text classification task.
Different from the previous state-of-the-art two-step framework~\cite{Huang2020GeneratingSN}, which purely utilizes TextCNN~\cite{Kim2014ConvolutionalNN} as the selector and ignores the context of a commentary sentence, we make use of RoBERTa~\cite{Liu2019RoBERTaAR} to extract the contextual representation of a commentary sentence and then predict its importance.
Specifically, we input the target commentary sentence with its context to RoBERTa~\cite{Liu2019RoBERTaAR} in a sliding window way. The representation of target sentence is obtained by averaging the output embedding of each token belonging to the target sentence.
Finally, a sigmoid classifier is employed to predict the importance of the target commentary sentence.
The cross-entropy loss is used as the training objective for the selector.
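
A simplified sketch of this selector is shown below, assuming a Hugging Face RoBERTa checkpoint and a single context window per forward pass; the sliding-window batching, the exact checkpoint and the training loop are omitted.
\begin{verbatim}
import torch
import torch.nn as nn
from transformers import AutoModel

class Selector(nn.Module):
    """Scores the importance of a target commentary sentence in context."""
    def __init__(self, model_name="hfl/chinese-roberta-wwm-ext"):  # assumed
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, target_mask):
        # target_mask: 1 for tokens of the target sentence, 0 for context.
        hidden = self.encoder(input_ids,
                              attention_mask=attention_mask).last_hidden_state
        mask = target_mask.unsqueeze(-1).float()
        sent_repr = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        return torch.sigmoid(self.classifier(sent_repr)).squeeze(-1)

# Training: binary cross-entropy against the importance labels, e.g.
#   loss = nn.BCELoss()(selector(ids, attn, tmask), labels.float())
\end{verbatim}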
\begin{figure*}[t]
\centerline{\includegraphics[width=0.80\textwidth]{rewriter.png}}
\caption{Our encoder-decoder rewriter architecture.}
\label{fig:rewriter}
\end{figure*}
\subsection{Knowledge Retrieval}
\label{sec:retriever}
Given a selected commentary sentence $c_{ij}$ and the knowledge corpus $K$, the knowledge retriever first recognizes the team and player entity mentions in $c_{ij}$, and then links the entity mentions to team articles or player passages from the corpus $K$.
\subsubsection{Named Entity Recognition Process}
In order to recognize the players and teams mentioned in the commentary sentence $c_{ij}$, we train a FLAT model~\cite{Li2020FLATCN} (a state-of-the-art Chinese NER model that can better leverage the lattice information of Chinese character sequences) on MSRA~\cite{Levow2006TheTI} (a general Chinese NER dataset), and then use the trained FLAT model to predict the entity mentions in $c_{ij}$. We only retain the \texttt{PER} and \texttt{ORG} entity mentions predicted by FLAT,
since the \texttt{PER} entity mentions indicate players while the \texttt{ORG} entity mentions indicate teams.
\subsubsection{Entity Linking Process}
For a given game, we can accurately retrieve dozens of candidate player passages and several (2 in most cases) candidate team articles through the link and alignment relations provided by knowledge corpus.
The entity linking process needs to link each \texttt{PER} or \texttt{ORG} entity mention to one of the candidate passages/articles. Here, we employ a simple yet effective linking method: we calculate the normalized Levenshtein distance between each entity mention and the title\footnote{The standard name of each player is regarded as the title of the player passage, while the team articles collected from Wikipedia already have corresponding titles.} of each passage/article:
\begin{equation}
\label{levenshtein}
NLev(entity,title)=\frac{Lev(entity,title)}{max(len(entity),len(title))}
\end{equation}
where $NLev(\cdot,\cdot)$ means the normalized Levenshtein distance, and $Lev(\cdot,\cdot)$ represents the standard Levenshtein distance. $len(\cdot)$ indicates the number of characters within input sequences.
Each \texttt{PER} (or \texttt{ORG}) entity mention is linked to the corresponding player passage (or team article) whose title has the smallest normalized Levenshtein distance to the entity mention, provided that this distance is within a predefined threshold $\lambda_{p}$ (or $\lambda_{o}$). Otherwise, we do not link the entity mention.
Eventually, for a given commentary sentence $c_{ij}$, we obtain a number of $\langle$linked \texttt{PER}/\texttt{ORG} entity mention, corresponding passage/article$\rangle$ pairs.
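
This linking step can be sketched as follows, using the normalized distance of Eq.~(\ref{levenshtein}); the dynamic-programming Levenshtein distance is standard, while the candidate titles and threshold below are placeholders.
\begin{verbatim}
def lev(a, b):
    """Standard Levenshtein (edit) distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def nlev(entity, title):
    """Normalized Levenshtein distance defined in the text."""
    return lev(entity, title) / max(len(entity), len(title))

def link(entity, candidate_titles, threshold):
    """Link a mention to the closest title, or None if above threshold."""
    title, dist = min(((t, nlev(entity, t)) for t in candidate_titles),
                      key=lambda x: x[1])
    return title if dist <= threshold else None

print(link("C. Ronaldo", ["Cristiano Ronaldo", "Ronaldinho"], 0.5))
\end{verbatim}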
\subsection{Sports News Generation}
\label{sec:rewriter}
As shown in Fig.~\ref{fig:rewriter}, we make use of mT5~\cite{Xue2021mT5AM}, a pre-trained multilingual language model\footnote{Since there is no Chinese version of T5 for public use, we choose its multilingual version, i.e., mT5.}, as our rewriter to generate a news sentence $r_{ij}$ based on a given commentary sentence $c_{ij}$, its timeline information $t_{ij}$ and the $\langle$linked \texttt{PER}/\texttt{ORG} entity mention, corresponding passage/article$\rangle$ pairs.
Specifically, we tokenize the temporal phrase ``In the $t_{ij}$-minute'' and $c_{ij}$ using mT5's tokenizer, then form the input as \texttt{<s> temporal phrase </s> commentary sentence </s>}. The input embeddings of each token consist of a segment embedding $z^{seg}$ and a knowledge embedding $z^{know}$ in addition to a token embedding $z^{token}$ and a position embedding $z^{pos}$ within the input sequence:
\begin{equation}
\label{equa:input_embedding}
z_{k} = LN(z_{k}^{token}+z_{k}^{pos}+z_{k}^{seg}+z_{k}^{know})
\end{equation}
where $z_{k}$ denotes the fused embedding of the $k$-th token in the input sequence. $LN(\cdot)$ represents layer normalization.
We explain the two additional embeddings below.
\noindent\textbf{Segment embeddings.} To convey the semantics of fine-grained token types, we use four learnable segment embeddings (\texttt{[Player]}, \texttt{[Team]}, \texttt{[Time]} and \texttt{[Other]}) to indicate that a token belongs to: (1) linked \texttt{PER} entity mentions; (2) linked \texttt{ORG} entity mentions; (3) the temporal phrase; (4) none of the above.
\noindent\textbf{Knowledge embeddings.} For a token belonging to the linked \texttt{PER}/\texttt{ORG} entity mentions, we consider the representation of corresponding passage/article as its knowledge embedding. In detail, we first input each sentence of a given passage/article to a pre-trained RoBERTa~\cite{Liu2019RoBERTaAR} one by one and use the output embedding of \texttt{[CLS]} token as the sentence embedding. Then, the representation of the whole passage/article is the average of all sentences' embeddings.
The input embeddings are then passed to the mT5 encoder, and the news sentence $r_{ij}$ is generated in a sequence-to-sequence learning process trained with the negative log-likelihood loss.
All generated news sentences are concatenated to form final sports news.
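
A simplified sketch of how the four embeddings of Eq.~(\ref{equa:input_embedding}) could be fused before entering the encoder is given below; the hidden size, the assumption that the knowledge vectors already match the model dimension, and the wiring into mT5 itself are simplifications of the actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class FusedInputEmbedding(nn.Module):
    """Fuses token, position, segment and knowledge embeddings: z = LN(sum)."""
    def __init__(self, vocab_size, max_len, hidden=1024, n_segments=4):
        super().__init__()
        self.token = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_len, hidden)
        # segments: [Player], [Team], [Time], [Other]
        self.seg = nn.Embedding(n_segments, hidden)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, token_ids, seg_ids, know_vecs):
        # know_vecs: pre-computed passage/article vectors per token
        # (zeros for tokens without linked knowledge); assumed to share
        # the model's hidden dimensionality.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        z = (self.token(token_ids) + self.pos(positions)
             + self.seg(seg_ids) + know_vecs)
        return self.norm(z)

emb = FusedInputEmbedding(vocab_size=250000, max_len=512)
ids = torch.randint(0, 250000, (2, 20))
segs = torch.randint(0, 4, (2, 20))
know = torch.zeros(2, 20, 1024)
print(emb(ids, segs, know).shape)   # torch.Size([2, 20, 1024])
\end{verbatim}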
\subsection{Training with Oracle}
\label{sec:sentence_pairs}
To train the selector and rewriter, we need (1) labels to indicate the importance of each commentary sentence and (2) a large number of $\langle$commentary sentence, news sentence$\rangle$ pairs, respectively.
To obtain the above labels and pairs, we \textbf{map each news sentence to its commentary sentence} through the same sentence mapping process as Huang et al.~\cite{Huang2020GeneratingSN}.
Specifically, although there is no explicit timeline information in news articles, we find that about 40\% of news sentences in \textsc{K-SportsSum} begin with ``in the $n$-th minute''.
For each news sentence $r_{i}$ beginning with ``in the $h_{i}$-th minute'', we first obtain its time information $h_{i}$. Then, we consider commentary sentences $c_{j}$ whose corresponding timeline $t_{j} \in [h_{i},h_{i}+3]$ as the candidate mapping set of $r_{i}$. Lastly, we calculate the BERTScore~\cite{Zhang2020BERTScoreET} (a metric measuring sentence similarity) between $r_{i}$ and all commentary sentences in the candidate mapping set. The commentary sentence with the highest score is paired with $r_{i}$.
With the above process, we can obtain a large number of pairs of a mapped commentary sentence and a news sentence, which can be used to train our rewriter. Furthermore, the commentary sentence appearing in the pairs will be regarded as important while others are insignificant, which could be used as the training data for our selector.
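
A rough sketch of this mapping is shown below, using the bert-score package for the similarity computation; the Chinese time-prefix pattern and the three-minute window follow the description above, but the exact regular expression is an assumption.
\begin{verbatim}
import re
from bert_score import score   # pip install bert-score

def map_news_to_commentary(news_sents, commentaries, lang="zh"):
    """commentaries: list of (timeline_minute, commentary_sentence).
    Returns (news_sentence, commentary_sentence) training pairs."""
    pairs = []
    for r in news_sents:
        m = re.match(r"第(\d+)分钟", r)   # assumed "in the n-th minute" prefix
        if m is None:
            continue
        h = int(m.group(1))
        candidates = [c for t, c in commentaries if h <= t <= h + 3]
        if not candidates:
            continue
        _, _, f1 = score([r] * len(candidates), candidates, lang=lang)
        pairs.append((r, candidates[int(f1.argmax())]))
    return pairs
\end{verbatim}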
\begin{table*}[t]
\centering
\resizebox{0.80\textwidth}{!}
{
\centering
\begin{tabular}{clcccccc}
\cline{1-8}
\multirow{2}{*}{Method} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{\textsc{K-SportsSum}} & \multicolumn{3}{c}{\textsc{SportsSum}} \\
& & ROUGE-1 & ROUGE-2 & ROUGE-L & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \cline{1-8}
\multirow{3}{*}{Extractive Models} & TextRank & 21.04 & 6.17 & 20.64 & 18.37 & 5.69 & 17.23 \\
& PacSum & 24.33 & 7.28 & 24.04 & 21.84 & 6.56 & 20.19 \\
& Roberta-Large & 27.79 & 8.14 & 27.21 & 26.64 & 7.44 & 25.59 \\ \cline{1-8}
\multirow{2}{*}{Abstractive Models} & Abs-LSTM & 31.24 & 11.27 & 30.95 & 29.22 & 10.94 & 28.09 \\
& Abs-PGNet & 36.49 & 13.34 & 36.11 & 33.21 & 11.76 & 32.37 \\ \cline{1-8}
\multirow{2}{*}{\makecell[c]{Two Step Framework \\ (Selector + Rewriter)}} & SportsSUM~\cite{Huang2020GeneratingSN} & 44.89 & 19.04 & 44.16 & 43.17 & 18.66 & 42.27 \\
& KES (Our) & \textbf{48.79} & \textbf{21.04} & \textbf{47.17} & \textbf{47.43} & \textbf{20.54} & \textbf{47.79} \\ \cline{1-8}
\end{tabular}
}
\caption{Experimental results on \textsc{K-SportsSum} and \textsc{SportsSum}.}
\label{table:result}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{p{0.25\textwidth}p{0.03\textwidth}p{0.30\textwidth}p{0.32\textwidth}}
\hline
\multicolumn{1}{c}{Commentary Sentence} & \multicolumn{1}{c}{T.} & \multicolumn{1}{c}{KES (w/o know.)} & \multicolumn{1}{c}{KES} \\ \hline
Barcelona attacked on the left side, Semedo passed the ball to the front of the small restricted area, Suarez scored the ball!!! & \multicolumn{1}{c}{27'} & Barcelona broke the deadlock in the 27th minute, Semedo crossed from the left and Suarez scored in front of the small restricted area. & \textbf{Defending champion} broke the deadlock in the 27th minute, Semedo crossed from the left and Suarez scored in front of the small restricted area. \\ \hline
De Yang was replaced and Song Boxuan came on as a substitute & \multicolumn{1}{c}{45'} & In the 45th minute, De Yang was replaced by Song Boxuan & In the 45th minute, De Yang was replaced by Song Boxuan, \textbf{who is 1.7 meters height.} \\ \hline
\end{tabular}
\caption{Qualitative analysis on \textsc{K-SportsSum} development set (T.:timeline information)}
\label{table:case_study}
\end{table*}
\section{Experiments}
\subsection{Implementation Details}
We train all models on one 32GB Tesla V100 GPU. Our knowledge-enhanced summarizer is implemented based on RoBERTa~\cite{Liu2019RoBERTaAR} and mT5~\cite{Xue2021mT5AM} from the Hugging Face Transformers library~\cite{wolf-etal-2020-transformers} with default settings.
In detail, we utilize RoBERTa-Large (12 layers with 1024 hidden size) to initialize our selector, and mT5-Large (24 layers with 1024 hidden size) as our rewriter. Another fixed RoBERTa-Large in our rewriter is used to calculate the knowledge embedding for a number of input tokens.
For knowledge retriever, we train a FLAT~\cite{Li2020FLATCN} NER model on a general Chinese NER dataset, i.e., MSRA~\cite{Levow2006TheTI}. The trained FLAT model achieves 94.21 F1 score on the MSRA testing set, which is similar to the original paper. The predefined threshold $\lambda_{p}$ and $\lambda_{o}$ used in the entity linking process are 0.2 and 0.25, respectively.
We set the hyperparameters based on preliminary experiments on the development set. We perform minimal hyperparameter tuning, using learning rates (LRs) in [1e-5, 2e-5, 3e-5] and 3 to 10 epochs. We find that the selector works best with an LR of 3e-5 and 5 epochs. The best configuration for the mT5 rewriter is an LR of 2e-5 and 7 epochs. For all models, we set the batch size to 32, use the Adam optimizer with the default initial momentum and adopt a linear warmup over the first 500 steps.
\subsection{Quantitative Results}
\subsubsection{Sports Game Summarization Task.}
We compare our knowledge enhanced summarizer (i.e., KES) with several general text summarization models and the current state-of-the-art two-step model proposed by Huang et al.~\cite{Huang2020GeneratingSN} (i.e., SportsSUM).
Note that the baseline models do not include the state-of-the-art pre-trained encoder-decoder language models (e.g., T5~\cite{Raffel2020ExploringTL} and BART~\cite{Lewis2020BARTDS}) due to their limitation with long text.
Tab.~\ref{table:result} shows that our model outperforms the baselines on both \textsc{K-SportsSum} and \textsc{SportsSum} datasets in terms of ROUGE scores.
Specifically, TextRank~\cite{Mihalcea2004TextRankBO} and PacSum~\cite{zheng-lapata-2019-sentence} are two typical unsupervised extractive summarization models. RoBERTa-Large is used as a supervised extractive summarization model in the same way as our selector. These three models achieve limited performance due to the different text styles of commentaries and news. Abs-LSTM and Abs-PGNet~\cite{See2017GetTT} are two abstractive summarization models that handle sports game summarization in an end-to-end sequence-to-sequence manner. They outperform the extractive models because they take the different text styles into account. Nevertheless, neither LSTM nor PGNet can model long-distance dependencies in the input sequence well. With the appearance of the Transformer model~\cite{Vaswani2017AttentionIA}, an encoder-decoder architecture that uses the self-attention mechanism to model long-distance dependencies, many pre-trained encoder-decoder language models such as BART and T5 have been proposed. However, they cannot be directly used for sports game summarization because the input limitations of T5 and BART are 512 and 1,024 tokens, respectively.
The state-of-the-art baseline SportsSUM~\cite{Huang2020GeneratingSN} uses a TextCNN selector and a PGNet rewriter to achieve better results than the above models, where the selector effectively handles the long commentary text while the rewriter alleviates the different-styles issue.
Despite its better performance, SportsSUM neglects the knowledge gap between live commentaries and sports news.
Our knowledge-enhanced summarizer uses the additional corpus to alleviate the knowledge gap, together with the advanced selector and rewriter, to achieve a new state-of-the-art performance. Since \textsc{K-SportsSum} and \textsc{SportsSum} are both collected from Sina Sports Live, for each game in \textsc{SportsSum} we can also accurately retrieve related passages and articles from the corpus and then train our knowledge-enhanced summarizer.
\subsubsection{Ablation Study.}
We run 5 ablations, modifying various settings of our knowledge-enhanced summarizer:
(1) remove the segment embeddings in the rewriter; (2) remove the knowledge embeddings in the rewriter; (3) remove both the segment embeddings and the knowledge embeddings; (4) replace the mT5 rewriter with a PGN rewriter (it is worth noting that the PGN rewriter is based on LSTM, which cannot utilize segment embeddings and knowledge embeddings); (5) replace the RoBERTa-Large selector with a TextCNN selector.
The effect of these ablations on the \textsc{K-SportsSum} development set is shown in Tab.~\ref{table:ablations}. In each case, the average ROUGE score is lower than that of our original knowledge-enhanced summarizer, which justifies the design of our model.
\subsection{Qualitative Results}
Tab.~\ref{table:case_study} shows the news sentences generated by (a) our original knowledge-enhanced summarizer, i.e., KES, and (b) the variant model that removes the knowledge embeddings, i.e., KES (w/o know.).
As shown, the news sentences generated by the original KES are more informative than those generated by KES (w/o know.).
Our knowledge-enhanced summarizer implicitly makes use of additional knowledge by fusing the knowledge embedding into the pre-trained language model (mT5 in our experiments).
Though this implicit approach can help the model generate informative sports news, we also find that it may lead to incorrect facts. In the second example in Tab.~\ref{table:case_study}, the generated news sentence states that De Yang's height is 1.7 meters, whereas his actual height is 1.8 meters.
This finding implies that KES has learned the pattern of adding additional knowledge to news sentences, but generating factually correct descriptions remains challenging.
\begin{table}[t]
\centering
\resizebox{0.30\textwidth}{!}
{
\centering
\begin{tabular}{lr}
\hline
Model & Avg. Rouge/$\triangle$ \\ \hline
KES & 39.94 \\ \hline
KES (w/o seg.) & 39.41/-0.53 \\
KES (w/o know.) & 39.22/-0.72 \\
KES (w/o seg.\&know.) & 38.23/-1.71 \\
KES (PGN rewriter) & 37.02/-2.92 \\
KES (TextCNN selector) & 37.72/-2.22 \\ \hline
\end{tabular}
}
\caption{\textsc{K-SportsSum} development set ablations (seg.: segment embedding, know.: knowledge embedding).}
\label{table:ablations}
\end{table}
\subsection{Human Study}
We conduct human studies to further evaluate the sports news generated by different methods, i.e., KES, KES (w/o know.) and SportsSUM~\cite{Huang2020GeneratingSN}. Five master students are recruited and each student evaluates 50 samples for each method.
Each evaluator scores the generated sports news in terms of informativeness, fluency and overall quality on a 3-point scale.
Fig.~\ref{fig:huamn_evaluation} shows the human evaluation results. KES outperforms KES (w/o know.) and SportsSUM on all three aspects, which verifies that our original KES performs better at generating sports news. Moreover, the fluency of the sports news generated by KES is better than that generated by KES (w/o know.), which indicates that taking the knowledge gap into account when generating news can also improve fluency.
\subsection{Discussion}
We can conclude from the above experiments and analysis that sports game summarization is more challenging than traditional text summarization.
We believe the following research directions are worth following:
(1) Exploring models explicitly utilizing knowledge;
(2) Leveraging long-text pre-trained models (e.g., Longformer~\cite{Beltagy2020LongformerTL} and ETC~\cite{Ainslie2020ETCEL}) to deal with the sports game summarization task.
\begin{figure}[t]
\centering
\subfigure[Informativeness]{
\includegraphics[width=0.30\linewidth]{informativity.png}
}
\subfigure[Fluency]{
\includegraphics[width=0.30\linewidth]{fluency.png}
}
\subfigure[Overall]{
\includegraphics[width=0.30\linewidth]{overall.png}
}
\caption{Results on human study.}
\label{fig:huamn_evaluation}
\end{figure}
\section{Related Work}
Text Summarization aims at preserving the main information of one or multiple documents with a relatively short text~\cite{rush-etal-2015-neural,chopra-etal-2016-abstractive,Nallapati2016AbstractiveTS}.
Our paper focuses on sports game summarization, a challenging branch of text summarization.
Early literature mainly explores different strategies on limited-scale datasets~\cite{zhang-etal-2016-towards,Wan2016OverviewOT} to first select key commentary sentences, and then either form the sports news directly~\cite{zhang-etal-2016-towards,Zhu2016ResearchOS,Yao2017ContentSF} or rely on human-constructed templates to generate the final news~\cite{Liu2016SportsNG,lv2020generate}.
Specifically, Zhang et al.~\cite{zhang-etal-2016-towards} extract different features (e.g., the number of words, keywords and stop words) of commentary sentences, and then utilize a learning-to-rank (LTR) model to select key commentary sentences so as to form news. Yao et al.~\cite{Yao2017ContentSF} take the description style and the importance of the described behavior into account during key commentary sentence selection. Zhu et al.~\cite{Zhu2016ResearchOS} model the sentence selection process as a sequence tagging task and address it using a Conditional Random Field (CRF).
Lv et al.~\cite{lv2020generate} make use of Convolutional Neural Network (CNN) to select key commentary sentences and further adopt pre-defined sentence templates to generate final news.
Recently, Huang et al.~\cite{Huang2020GeneratingSN} present \textsc{SportsSum}, the first large-scale sports game summarization dataset. They also discuss a state-of-the-art two-step framework which first selects key commentary sentences, and then rewrites each selected sentence to a news sentence through seq2seq models.
Despite its great contributions, \textsc{SportsSum} contains considerable noise due to its simple rule-based data cleaning process.
\section{Conclusion}
In conclusion, we propose \textsc{K-SportsSum}, a large-scale human-cleaned sports game summarization benchmark. In order to narrow the knowledge gap between live commentaries and sports news, \textsc{K-SportsSum} also provides a knowledge corpus containing the information of sports teams and players. Additionally, a knowledge-enhanced summarizer is presented to harness external knowledge for generating more informative sports news. We have conducted extensive experiments to verify the effectiveness of the proposed method on two datasets compared with current state-of-the-art baselines via quantitative analysis, qualitative analysis and human study.
\begin{acks}
Zhixu Li and Jianfeng Qu are the corresponding authors.
We would like to thank anonymous reviewers for their suggestions and comments.
This research is supported by the National Natural Science Foundation of China (Grant No. 62072323, 61632016, 62102276), the Natural Science Foundation of Jiangsu Province (No. BK20191420, BK20210705), the Priority Academic Program Development of Jiangsu Higher Education Institutions, Suda-Toycloud Data Intelligence Joint Laboratory, and the Collaborative Innovation Center of Novel Software Technology and Industrialization.
\end{acks}
\clearpage
\bibliographystyle{ACM-Reference-Format}
\balance
The goal of this research is to evaluate the extent to which deep learning of representations can improve model performance across predictions in sport. The sports considered are the National Basketball Association (NBA) teams' regular-season performances and the Super 15 rugby competition. There are thirty teams in the NBA divided into two conferences - the Western Conference and the Eastern Conference. All thirty teams play 82 games in the regular season, where the top eight teams from each conference advance to the playoffs, meaning 16 out of the 30 teams make the playoffs each year~\cite{inproceedings3}. The Super 15 Competition consists of fifteen rugby teams from five countries across four continents~\cite{SuperRugRules}. Each squad plays a total of sixteen games before entering the knock-out stages.
Sports outcome prediction continues to gain popularity~\cite{manack2020deep}, as demonstrated by the globalization of sports betting~\cite{article1}. The NBA and Super 15 Rugby are some of the world's most-watched sports~\cite{inproceedings,SuperRugRules}. This popularity leaves a large number of fans anticipating the results of upcoming games. Furthermore, there are numerous betting companies offering gambling odds of one team winning over another. With the availability of machine learning algorithms, forecasting the outcome of games with high precision is becoming more feasible and economically desirable to various players in the betting industry. Further, accurate game predictions can aid sports teams in making adjustments to teams to achieve the best possible future results for that franchise. This increase in economic upside has increased research and analysis in sports prediction~\cite{article1}, using a multitude of machine learning algorithms~\cite{Lieder2018}.
Popular machine learning algorithms are neural networks, designed to learn patterns and effectively classify data into identifiable classes. Classification tasks depend upon labelled datasets; for a neural network to learn the correlation between labels and data, humans must transfer their knowledge to a dataset. Siamese Neural Networks (SNNs) are a type of neural network architecture. Instead of the model learning to classify its inputs, this type of neural network learns to measure the similarity between its inputs~\cite{dlamini2019author,van2020unique}. The main idea of the Siamese network is to create two feed-forward networks with shared weights. A feed-forward network defines a mapping as $y = f(x;\theta)$ and learns the value of the parameters $\theta$ that results in the best function approximation~\cite{Goodfellow-et-al-2016}. The feed-forward network is used to construct a function that calculates the similarity or distance metric between the two instances \cite{10.5555/2987189.2987282}. In this research, an SNN is implemented to learn a feature representation from a carefully selected group of NBA and Rugby statistics, which is then input into a ranking algorithm.
A variety of machine learning algorithms can be used to link a feature representation to a 'win' or 'loss' output. Linear regression has been used in multiple sports prediction papers~\cite{article, Torres2013}. However, there are drawbacks to this model as it fails to fit multivariate relationships~\cite{Barlos2015}. For example, a linear regression model in the NBA would likely be able to fit a relationship between 'minutes per game' (MPG) and 'points per game' (PPG) but wouldn't find success between MPG and 'rebounds per game' (RPG)~\cite{Arnold2012}. This is probably because a player's ability to get rebounds is heavily based on their position in their team, whereas scoring points are usually spread throughout the whole team and not as role-specific as rebounding. To overcome this drawback, logistic regression could be used as Lin, Short, and Sundaresan (2014)~\cite{Lin2014} suggest - in combination with other methods such as Adaptive Boost, Random Forest and SVM.
Adaptive Boost and Random Forests seek to classify games as wins or losses by averaging the results of many weaker classifiers, such as naive Bayes~\cite{Lin2014}. Lin et al. (2014)~\cite{Lin2014} used box score data from 1991--1998 to develop machine learning models to predict the winner of professional basketball games. The methods included an adaptive boosting algorithm, a random-forest model, logistic regression and support vector machines (SVMs)~\cite{10.1145/130385.130401}. The adaptive boosting algorithm is most closely related to this work and performs multiple iterations that improve a classifier by attempting to correctly classify data points that were incorrectly classified on the previous iteration. This method yielded a testing accuracy of 64.1\% in Lin et al. (2014)~\cite{Lin2014}. However, the high number of iterations performed makes the model sensitive to outliers, as it will continually try to predict them correctly~\cite{Lin2014}, leading to over-fitting of the data in some cases. Logistic regression yielded 64.7\% testing accuracy and the random-forest model yielded 65\% testing accuracy, just behind the SVM model at 65.2\% testing accuracy, leaving little to separate the four models' respective performances~\cite{Lin2014}.
In this work we use feature learning in the form of similarity learning to learn an embedding for each of the sports considered. The learned features are ranked using gradient boosting with the LightGBM and XGBoost implementations. The outputs are used to predict the seasonal standings for the NBA and Super 15. Models are trained on multiple seasons and evaluated on a selected test season to gauge their performance. The use of an SNN together with LightGBM and XGBoost for this task is an unexplored method in the current literature.
The contributions of this paper are:
\begin{enumerate}
\item Showing that a general framework including gradient boosting and deep representation learning can successfully predict seasonal rankings across different sports (NBA and Rugby).
\item Demonstrating that the use of representation learning improves rankings using both Gradient Boosting Machines (LightGBM and XGBoost).
\item Improving on the state of the art for the NBA ranking task and establishing the state of the art for the Super 15 ranking task.
\end{enumerate}
\section{Methodology}
\subsection{Overview}
The data descriptions, preprocessing, and analysis used during the experimental process are explained in this section. The processed data is transferred to the models after partitioning. The data is partitioned by placing games into their respective seasons, whereby the first three seasons make up the training data and the final season makes up the testing data. The models are trained and tested, with their results compared and analysed using appropriate metrics.
\subsection{Data}
\begin{table*}[htb!]
\centering
\caption{Features of Rugby team seasonal statistics dataset}
\label{tab:1}
\begin{tabular}{|c l | c l |c l|}
\hline
1) & Team name & 2) & Total tries scored & 3) & Home tries scored \\
4) & Away tries scored & 5) & Tries scored in first half & 6) & Tries scored in second half\\
7) & Total matches played & 8) & Tries conceded & 9) & Tries conceded at home \\
10) & Tries conceded whilst away & 11) & Four try bonus point & 12) & Four try bonus point at home\\
13) & Four try bonus point whilst away & 14) & Lost within seven points & 15) & Lost within seven points at home \\
16) & Lost within seven points whilst away & 17) & Opponent four try bonus point & 18) & Opponent four try bonus point at home \\
19) & Opponent four try bonus point away & 20) & Opponent lost within seven points & 21) & Opponent lost within seven points at home \\
22) & Opponent lost within seven points away & 23) & Yellow cards accumulated & 24) & Yellow cards to red cards acquired\\
25) & Red cards accumulated & 26) & Halftime wins & 27) & Halftime wins to full time wins\\
28) & Halftime draws & 29) & Halftime draws to full time wins & 30) & Half time lose\\
31) & Half time lose to full time win & 32) & Conversions &33) & Penalties \\
34) & Tackles & 35) & Penalties Conceded & 36) & Lineouts \\
37) & Rucks & 38) & Scrums & &\\
\hline
\end{tabular}
\end{table*}
\begin{table*}[htb!]
\centering
\caption{Features of Basketball team statistics}
\label{tab:3}
\begin{tabular}{|cl|cl|cl|}
\hline
1) & Home/away team & 2) & Field Goals & 3) & Field Goals Attempted \\
4) & Three-Point Shots & 5) & Three-Point Shots Attempted & 6) & Free Throws\\
7) & Free Throws Attempted & 8) & Offensive Rebounds & 9) & Defensive Rebounds\\
10) & Assists & 11) & Steals & 12) & Blocks\\
13) & Turnovers & 14) & Total Fouls & & \\
\hline
\end{tabular}
\end{table*}
The data was first normalized. When preparing the data, a training-testing split was utilised. As the dataset has a temporal aspect, one cannot use future seasons to predict those preceding them~\cite{AdhikariK.2013}. The training data comprised the first three seasons and the testing data the fourth and final season, helping to prevent any data leakage~\cite{Cerqueira2019}. However, due to the limited number of seasons, cross-validation was used for hyper-parameter tuning on the three seasons of training data. Initially, the first two seasons are used for training and the third for validation; then the first and third are used for training and the second for validation. Lastly, the model is trained on the last two seasons and validated on the first.
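
For concreteness, the season-level split and the leave-one-season-out validation folds can be sketched as follows; the season labels are purely illustrative.
\begin{verbatim}
def season_splits(seasons):
    """seasons: ordered list of per-season datasets, oldest first.
    Returns leave-one-season-out (train, validation) folds over the first
    three seasons, plus the final held-out test season."""
    train_seasons, test_season = seasons[:-1], seasons[-1]
    folds = [([s for j, s in enumerate(train_seasons) if j != i], val)
             for i, val in enumerate(train_seasons)]
    return folds, test_season

folds, test = season_splits(["2017", "2018", "2019", "2020"])
for train, val in folds:
    print("train:", train, "| validate:", val)
\end{verbatim}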
\subsubsection{Rugby}
The data collected for rugby is from the Super Rugby International Club Competition, covering the seasons from 2017 to 2020. The models are trained and validated on the Super Rugby seasons of 2017, 2018, and 2019, and tested on the 2020 season. Data from each season is split into two parts: the first is the seasonal statistics of each team as of the end of the season; the second is the record of every game played with the results thereof. It is noted that draws appear far less often than non-draws, so they have been treated as home-team wins when creating the embeddings.
The purpose of these models is to use the team seasonal data to predict the outcome of matches between two sides for a given season. These predicted match outcomes are subsequently used to rank the teams for that season. In each season there are one hundred and twenty games, with each team playing sixteen games in the regular season (each team plays eight games within its conference and eight games against teams in other conferences).
Data for the seasonal statistics are retrieved from Rugby4cast and Statbunker. The features in the datasets are presented in Table~\ref{tab:1}.
\subsubsection{Basketball}
The dataset used for basketball in this study consists of four consecutive regular seasons spanning from 2014 to 2018, each of the seasons contains 2460 games~\cite{thompson}. The data was divided into two sets, each with fourteen features. One of the sets represented the home team and the other set represented the away team. The features were chosen are as per Thabtah \textit{et al.} (2019)~\cite{article1} and Ahmadalinezhad \textit{et al.} (2019)~\cite{Ahmadalinezhad2019519}, and are shown in Table~\ref{tab:3}.
\subsection{Algorithms}
We combine gradient boosting with Siamese neural networks to improve overall performance. The reason for the combination is that the gradient boosting frameworks are effective at ranking tasks but limited in their ability to learn feature embeddings. We compare two gradient boosting frameworks combined with two Siamese network embedding losses.
XGBoost and LightGBM are machine learning decision-tree-based ensembles that use a gradient boosting framework for ranking, classification, and many other machine learning tasks~\cite{Ke2017}. Gradient boosting is selected over Adaptive Boosting, Random Forests and SVMs. These machines are recent extensions of those algorithms and contain additional features that aid performance, speed, and scalability~\cite{Rahman2020}. The result is that gradient boosting is more suitable for the desired rankings and predictions~\cite{Ke2017}.
Boosting refers to a machine learning technique that trains models iteratively~\cite{Ke2017}. A strong overall prediction is produced by starting with a weak base model, which is then added to iteratively by including further weak models. Each iteration adds an additional weak model to overcome the shortcomings of the predictions of the previous weak models. The weak models are trained by applying gradient descent in the direction of the average gradient of the leaf nodes of the previous model~\cite{Ke2017}. This gradient is calculated with respect to the pseudo-residual errors of the previous model's loss function.
In the case of decision trees, an initial fit, $F_{0}$, and a differentiable loss function $L$ for a number of boosting rounds $M$ are used as starting points:
\begin{equation}
F_0(x) = \underset{\gamma}{arg\ \min} \sum^{n}_{i=1} L(y_i, \gamma).
\end{equation}
Thereafter, the pseudo residuals are calculated by
\begin{equation}
r_{im} = -\frac{\partial L(y_i, F_{m-1}(x_i))}{\partial F_{m-1}(x_i)}.
\end{equation}
Lastly, the decision tree is fit, $h_{m}(x)$ to $r_{im}$ and let
\begin{equation}
F_m(x) = F_{m-1}(x) + \lambda_m \gamma_m h_m(x)
\end{equation}
where $ \lambda_m $ is the learning rate, $\gamma_m$ is the step size optimized by performing a line search and $y_i$ is the correct outcome of the game.
\subsubsection{LightGBM}
The first framework evaluated is LightGBM, a gradient boosting machine developed by Microsoft that uses tree-based learning algorithms. A LambdaRank~\cite{burges2010ranknet} objective function was selected, which scales the gradients by the change in NDCG computed within LightGBM~\cite{burges2010ranknet}.
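
A minimal usage sketch with the LightGBM Python API is given below; the feature matrix, relevance labels and query-group sizes are placeholders rather than the actual experimental setup.
\begin{verbatim}
import numpy as np
import lightgbm as lgb

# One row of (learned or raw) features per game, graded relevance labels,
# and the number of games per query group (e.g., per season).
X = np.random.rand(240, 32)
y = np.random.randint(0, 4, size=240)
groups = [120, 120]

ranker = lgb.LGBMRanker(objective="lambdarank",
                        n_estimators=200, learning_rate=0.05)
ranker.fit(X, y, group=groups)
scores = ranker.predict(X[:120])   # per-game relevance/similarity scores
\end{verbatim}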
\subsubsection{XGBoost}
The second framework evaluated is XGBoost, an optimised distributed gradient boosting library that uses a LambdaMART objective function~\cite{burges2010ranknet}. XGBoost is designed to be highly flexible, efficient, and portable for implementing machine learning algorithms under the gradient boosting framework. The XGBoost library offers many models, of which the pairwise ranker was selected for these experiments. XGBoost uses a log loss function for pairwise ranking. The log loss function used to calculate the cost value is given by:
\[
F = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log( \hat{y}_i) + (1-y_i) \log(1-\hat{y}_i)\right]
\]
where $y$ is the correct outcome of a game, $\hat{y}$ is the predicted result of a game and $N$ is the number of games played.
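
An analogous sketch with XGBoost's pairwise ranker, again with placeholder data, is:
\begin{verbatim}
import numpy as np
import xgboost as xgb

X = np.random.rand(240, 32)
y = np.random.randint(0, 4, size=240)
groups = [120, 120]                 # games per query group

ranker = xgb.XGBRanker(objective="rank:pairwise",
                       n_estimators=200, learning_rate=0.05)
ranker.fit(X, y, group=groups)
scores = ranker.predict(X[:120])
\end{verbatim}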
\subsubsection{Siamese Neural Network}
Data was fed into a Siamese Neural Network to create the embeddings to be passed to the aforementioned ranking algorithms. An SNN consists of two identical sub-networks where each is fed one of a selected tuple of inputs, in this case, the home and away team for each game in the dataset \cite{10.5555/2987189.2987282}. The Rectified Linear Activation Function was selected~\cite{article4} and used alongside an RMSprop optimizer for thirteen epochs. Thirteen epochs were selected because the model accuracy had converged on the validation set by that point. As for the loss function, two were implemented: contrastive loss and triplet loss~\cite{Bui}.
Contrastive loss is defined as:
\begin{equation}
J(\theta)=(1-Y)(\frac{1}{2})(D_{a,b})^2+(Y)(\frac{1}{2})(\max(0,m-D_{a,b}))^2
\end{equation}
where $Y$ represents the outcome class, 0 indicating a win and 1 indicating a loss, $m$ is the dissimilarity margin and $D_{a,b}$ the Euclidean distance,
\begin{equation}
D_{a,b}(x_1,x_2)=\sqrt{\sum^{K}_{k=0}(b(x_2)_k-a(x_1)_k)^2}
\end{equation}
where $a(x_1)$ is the output of sub-network one and $b(x_2)$ the output of sub-network two~\cite{Bui,Hadsell2006}. A vector containing the home team's seasonal statistics is fed into sub-network one, and similarly the away team's statistics are fed into sub-network two.
Triplet loss is defined as:
\begin{equation}
J(\theta)=\max(D_{a,b}(a,p)-D_{a,b}(a,n)+m,0)
\end{equation}
where the anchor ($a$) is an arbitrary data point, the positive ($p$) is a point of the same class (win or loss) as the anchor, and the negative ($n$) is a point of a different class from the anchor~\cite{Bui}.
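The two loss functions can be written directly from the definitions above; the NumPy sketch below assumes batched sub-network outputs as rows and uses a placeholder margin of $m=1$.
\begin{verbatim}
import numpy as np

def euclidean_distance(a_out, b_out):
    """D_{a,b}: Euclidean distance between the two sub-network outputs."""
    return np.sqrt(np.sum((b_out - a_out) ** 2, axis=-1))

def contrastive_loss(y, d, margin=1.0):
    """y = 0 for a win (similar pair), y = 1 for a loss; d = D_{a,b}."""
    return np.mean((1 - y) * 0.5 * d ** 2
                   + y * 0.5 * np.maximum(0.0, margin - d) ** 2)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor towards the positive and away from the negative."""
    d_ap = euclidean_distance(anchor, positive)
    d_an = euclidean_distance(anchor, negative)
    return np.mean(np.maximum(d_ap - d_an + margin, 0.0))
\end{verbatim}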
Each SNN is fed the seasonal data set and the respective outputs are run through the LightGBM ranker, resulting in predicted values (similarity scores) for each game in the selected seasons. To compute the final standings we use a tally rank: if a team wins a game the similarity score is added to their tally, and likewise, if a team loses, the similarity score is subtracted from their tally.
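A minimal sketch of this tally rank is shown below; the tuple format of the game records is an assumption made for illustration only.
\begin{verbatim}
from collections import defaultdict

def tally_rank(games):
    """games: iterable of (home, away, home_won, similarity_score) tuples.

    The similarity score of each game is added to the winner's tally and
    subtracted from the loser's tally; teams are sorted by final tally.
    """
    tally = defaultdict(float)
    for home, away, home_won, score in games:
        winner, loser = (home, away) if home_won else (away, home)
        tally[winner] += score
        tally[loser] -= score
    return sorted(tally, key=tally.get, reverse=True)
\end{verbatim}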
A list-wise predicted ranking of both the Rugby and Basketball test seasons was produced. This list-wise ranking was sorted in descending order to obtain the final predicted standings. For the NBA, the teams were then separated into the Western and Eastern conferences for analysis purposes. From the three methodologies discussed, six results were produced: one from each of the LightGBM and XGBoost models individually and a total of four from the LightGBM and XGBoost SNN models. Thus, a total of six predicted standings were obtained for each sport, which are compared to one another using several metrics.
\subsection{Metrics}
The predicted rankings were compared to one another, to the baselines and to the actual standings of that year. The metrics that were used to evaluate the performance of the models are:
\subsubsection{Mean Average Precision (mAP)}
\begin{equation}
mAP=\frac{1}{N}\sum^N_{i=1} AP_i
\end{equation}
with
\begin{equation}
AP=\frac{1}{K}\sum^K_{k=1} \frac{\text{True Positives}}{\text{True Positives}+\text{False Positives}}
\end{equation}
and $K$ is the cut-off rank at which precision is calculated. The AP was evaluated at $k=15$ in both the Eastern and Western conferences, and the mAP is obtained by averaging these two results. For Super Rugby only the Average Precision is used.
\subsubsection{Spearman's rank correlation coefficient}
\begin{equation}
r_s=1-\frac{6\sum d_i^2}{n(n^2-1)}
\end{equation}
where $d_i$ is the difference between the actual rank of a team and its predicted rank, and $n$ is the number of cases. $r_s$ can take values between $1$ and $-1$, where $1$ indicates a perfect association of ranks, $0$ indicates no association between ranks and $-1$ indicates a perfect negative association of ranks. The closer to zero $r_s$ is, the weaker the association between the ranks~\cite{EllisandVictoria2011}.
\subsubsection{Normalized Discounted Cumulative Gain}
To proceed with Normalized Discounted Cumulative Gain (NDCG), one first needs to introduce relevance scores for the actual rankings of the test season under consideration. If a team finishes first in their league/conference, a score of 15 is assigned to them, the team finishing in second place is assigned a score of 14, and so forth until the fifteenth/last team receives a score of 1. These scores are referred to as $rel_i$, and the NDCG at position $p$ is computed as~\cite{inproceedings1}:
\begin{equation}
nDCG_p=\frac{DCG_p}{IDCG_p}
\end{equation}
where
\begin{equation}
DCG_p=\sum^{p}_{i=1}\frac{2^{rel_i} -1}{\log_2(i+1)}
\end{equation}
and
\begin{equation}
IDCG_p=\sum^{\lvert REL_P \rvert}_{i=1}\frac{2^{rel_i} -1}{\log_2(i+1)}
\end{equation}
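The three metrics can be computed with a few lines of Python; the sketch below follows one reading of the definitions above (with the AP averaged over the first $k$ positions) and uses SciPy for Spearman's coefficient. It is meant as an illustration, not as the exact evaluation code of this study.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def average_precision_at_k(predicted, actual_top, k=15):
    """Average of precision@i over i = 1..k for a predicted ordering of teams."""
    hits, precisions = 0, []
    for i, team in enumerate(predicted[:k], start=1):
        if team in actual_top:
            hits += 1
        precisions.append(hits / i)
    return float(np.mean(precisions))

def ndcg(predicted, relevance, p=15):
    """relevance: dict team -> rel score (15 for the actual winner, ..., 1)."""
    gains = [relevance[t] for t in predicted[:p]]
    dcg = sum((2 ** g - 1) / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(relevance.values(), reverse=True)[:p]
    idcg = sum((2 ** g - 1) / np.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg

# Spearman's rank correlation between predicted and actual rank positions:
# r_s, _ = spearmanr(predicted_positions, actual_positions)
\end{verbatim}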
\subsection{Experimental procedure}
First, the two baselines are computed: one assigns random rankings and the other a naive ranking. For the random baseline, the league standings were randomized thirty times for each sport to produce the baseline. For the naive baseline, the assumption is made that the test season ranking will be identical to that of the preceding season.
Once the baselines had been established, LightGBM and XGBoost were used without the embedding from the SNN. Both algorithms are fit to the training data and rank the respective test seasons (2019/20 for rugby and 2017/18 for the NBA). A final predicted league standing was computed for both sports (with the NBA divided into the Western and Eastern conferences) by adding the resulting similarity score to each team's tally if they win, and subtracting the similarity score if they lose.
Next, two SNNs were built (differing only in their loss functions) to set up the second set of algorithms. The SNN has two identical sub-networks, both of which have an input layer consisting of $50$ neurons for basketball and $37$ input neurons for rugby, followed by two hidden layers with $70$ and $20$ neurons and finally an output layer with $1$ neuron. Both sub-networks use the ReLU (Rectified Linear) activation function~\cite{article4} and are trained with an RMSprop optimizer for thirteen epochs. Our RMSprop optimizer had a learning rate of 0.001, a momentum value of 0, an epsilon value of $10^{-7}$ and a discounting factor for the gradient of 0.9. For contrastive loss, one of the sub-networks is fed the home team's data and the other is fed the away team's data. For triplet loss, a team is chosen as a baseline (anchor) and the distance to the positive input is minimized while the distance to the negative input is maximized. The SNNs were then trained on the first three seasons and used to predict the final test season. The resulting scores were fed into both the LightGBM and XGBoost rankers, from which the predicted standings were computed using the tally rank method previously discussed.
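A minimal TensorFlow/Keras sketch of the contrastive-loss variant of this architecture is given below, assuming the layer sizes and optimizer settings quoted above; the data pipeline, the triplet variant and the exact training configuration are omitted, so this should be read as an illustration rather than the exact implementation.
\begin{verbatim}
import tensorflow as tf

def build_subnetwork(n_features):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(70, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dense(20, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

def contrastive_loss(y_true, d, margin=1.0):
    y_true = tf.cast(y_true, d.dtype)        # 0 = win (similar), 1 = loss
    return tf.reduce_mean((1.0 - y_true) * 0.5 * tf.square(d)
                          + y_true * 0.5 * tf.square(tf.maximum(0.0, margin - d)))

n_features = 50                              # 50 inputs for basketball, 37 for rugby
subnet = build_subnetwork(n_features)        # shared weights: identical sub-networks
home_in = tf.keras.Input(shape=(n_features,))
away_in = tf.keras.Input(shape=(n_features,))
distance = tf.keras.layers.Lambda(
    lambda t: tf.sqrt(tf.reduce_sum(tf.square(t[0] - t[1]), axis=1, keepdims=True))
)([subnet(home_in), subnet(away_in)])

model = tf.keras.Model([home_in, away_in], distance)
model.compile(
    optimizer=tf.keras.optimizers.RMSprop(
        learning_rate=0.001, rho=0.9, momentum=0.0, epsilon=1e-7),
    loss=contrastive_loss)
# model.fit([home_stats, away_stats], outcomes, epochs=13)  # outcomes: shape (n, 1)
\end{verbatim}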
\subsection{Results and Discussion}
\begin{table}[htb!]
\caption{Basketball results}
\label{tab:5}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|rrr}
\toprule
{Method} & {$mAP$} & {$r_s$} & {$NDCG$} \\
\midrule \midrule
{Naive} & 0.704 & 0.640 & 0.839 \\
{Randomized} & 0.477 $\pm0.02$ & 0.558 $\pm0.01$ & 0.820 $\pm0.05$ \\
{LightGBM} & 0.822 $\pm0.00$ & 0.839 $\pm0.00$ & 0.979 $\pm0.00$ \\
{XGBoost} & 0.857 $\pm0.00$ & \textbf{0.900} $\pm0.00$ & 0.976 $\pm0.00$ \\
{LightGBM (Contrastive)} & 0.859 $\pm0.22$ & 0.876 $\pm0.13$ & 0.977 $\pm0.02$ \\
{XGBoost (Contrastive)} & 0.745 $\pm0.26$ & 0.751 $\pm0.13$ & 0.916 $\pm0.08$ \\
{LightGBM (Triplet)} & \textbf{0.867} $\pm0.15$ & 0.870 $\pm0.11$ & \textbf{0.980} $\pm0.00$ \\
{XGBoost (Triplet)} & 0.725 $\pm0.26$ & 0.675 $\pm0.14$ & 0.922 $\pm0.11$ \\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[htb!]
\centering
\caption{Rugby results}
\label{tab:4}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|rrr}
\toprule
Model & {$AP$} & {$r_s$} & {$NDCG$} \\
\midrule \midrule
Naive & 0.462 & 0.532 & 0.915 \\
Randomized & 0.193 $\pm0.20$ & 0.525 $\pm0.02$ & 0.735 $\pm0.10$ \\
LightGBM & 0.381 $\pm0.08$ & 0.379 $\pm0.06$ & 0.806 $\pm0.09$ \\
XGBoost & 0.421 $\pm0.05$ & 0.432 $\pm0.01$ & 0.910 $\pm0.00$ \\
LightGBM (Contrastive) & 0.620 $\pm0.02$ & 0.671 $\pm0.04$ & 0.965 $\pm0.05$ \\
XGBoost (Contrastive) & 0.724 $\pm0.08$ & 0.746 $\pm0.04$ & 0.974 $\pm0.01$ \\
LightGBM (Triplet) & 0.686 $\pm0.02$ & 0.754 $\pm0.05$ & 0.970 $\pm0.00$ \\
XGBoost (Triplet) & \textbf{0.921} $\pm0.04$ & \textbf{0.793} $\pm0.00$ & \textbf{0.982} $\pm0.03$\\
\bottomrule
\end{tabular}}
\end{table}
\subsubsection{Basketball}
All six models (LightGBM, LightGBM (Contrastive loss), LightGBM (Triplet loss), XGBoost, XGBoost (Contrastive loss), XGBoost (Triplet loss)) used to predict the NBA test season outperformed both the naive and random baselines in every metric (F1 score, mAP, NDCG and $r_s$). This is expected, as basketball is in itself a volatile sport~\cite{inproceedings3}, making it highly unlikely that a naive or random prediction comes close to the results produced by deep similarity learning.
Arguably the most important feature of the NBA regular season is progressing to the playoffs~\cite{inproceedings3}, to which only the top 8 teams from each conference advance. The deep similarity learning models developed and investigated for this study were able to predict the playoff teams to a relatively high degree of accuracy, with LightGBM (Triplet loss) performing the best with 8/8 Eastern Conference and 7/8 Western Conference playoff teams correct. XGBoost (Triplet loss) performed the worst, managing only 6 correct in each conference. It is worth mentioning that each of the six implemented models incorrectly predicted that the Denver Nuggets would make the playoffs over the Utah Jazz. This is likely attributable to the Utah Jazz drafting higher quality players that year (2017/18), possibly helping them over-perform expectations.
Using LightGBM without an SNN falls short in playoff predictions, F1 score, mAP, NDCG, and $r_s$, as seen in Table~\ref{tab:5}, providing a direct indication that the use of an SNN in unison with LightGBM has an overall beneficial impact on the performance of the model. In terms of the selection of the loss function, our results show little differentiation between them, specifically in F1 score, NDCG, and $r_s$. However, the mAP was the biggest discerning factor, with the Triplet loss producing a mAP of 0.867 versus the Contrastive loss mAP of 0.859. Not only does the Triplet loss improve the model's overall precision, it also predicted one more playoff team correctly (the Washington Wizards) than the Contrastive loss.
However, there appears to be a converse effect when using an SNN with XGBoost: the XGBoost model by itself outperforms the variants where an SNN was used in every metric. From Table~\ref{tab:5}, it is clear that the LightGBM (Triplet loss) algorithm produces the overall best set of results, closely followed by the XGBoost algorithm.
\subsubsection{Rugby}
Looking at the scores across the models (Table~\ref{tab:4}), the Siamese Triplet ranker performs the best, followed closely by the Siamese Contrastive, LightGBM (Triplet loss), and LightGBM (Contrastive loss) models. The remaining results fall short of the naive previous-season baseline, with the XGBoost and LightGBM models in sixth and seventh position, ahead of only the Random ranker in last.
The top-ranked amongst these models was the Siamese Triplet ranker, followed closely by the Siamese Triplet with Gradient Boosting, Siamese Contrastive and Siamese Contrastive with Gradient Boosting. The LambdaMart Gradient Boost and LambdaRank Gradient Boost models performed the worst on average and had scores that would lose out to the naive implementations.
Similarly to the NBA regular season, it is important in Super Rugby to identify the top 8 teams that move on to the knockout stages. The Siamese Triplet model correctly predicted 7 out of 8 teams, whilst the rest of the trained models predicted 6 correctly. The random ranker came in last with only 4 teams predicted correctly.
\subsection{Future improvements}
Although the Triplet loss edges out the Contrastive loss, the margin is smaller than expected~\cite{hoffer2018deep}. The predictive models fall under supervised learning, which is widely regarded as the domain where Triplet loss functions excel based on strategies explored in recent literature~\cite{Bui,taha2020boosting}, which suggests the models are underperforming. Triplet loss functions are sensitive to noisy data~\cite{hoffer2018deep}, which may be the hindering factor. An exploration of a new data-set, or even a new sport, may help rectify this. Furthermore, an investigation into different sports would help one grasp the portability and effectiveness of the proposed predictive modelling outside of Rugby and Basketball, as it is clear that there is no single consistently best performing model across the two sports.
\section{Conclusion}
The seasonal rankings of the National Basketball Association and Super Rugby Club Championship were predicted using Siamese Neural Network models, Gradient Boosting Machines, and a combination of the two. Super Rugby is far more difficult to predict due to each team playing far fewer games compared to teams in the NBA.
In basketball, the six models all produce satisfactory results, with the best model predicting at least 81\% of the playoff teams correctly. These models exceed a score of 0.8 in mAP, NDCG, and $r_s$, and outperform a majority of the current literature in this field~\cite{Barlos2015,Lu2019252,article}. The model that utilises the LightGBM ranking model combined with the Siamese (Triplet Loss) Neural Network produced the highest NDCG and $mAP$ scores. All models surpassed both the Naive and Randomized baselines in all three metrics.
The models for rugby that utilise the Siamese (Triplet Loss) Neural Network have proven to be better than those that utilise the Siamese (Contrastive Loss), with the best predicting 87.5\% of the play-off teams correctly and exceeding 0.9 in AP and NDCG. The models that use LightGBM and XGBoost without the SNN are not as effective as their counterparts which do. Both the LightGBM and XGBoost models which do not use the SNN as part of their design failed to provide better results than the Naive rankings, but were better than the Randomized list.
Using Gradient Boosting Machines and SNNs, it is possible to create impressive and capable ranking and prediction models for the NBA and Super Rugby. However, there is no single best performing model across the two sports. This research can be built upon to delve into further sporting disciplines, or to expand the current models by training on more seasons and with better feature modelling. It may be concluded that the LightGBM model is more effective at ranking larger data-sets, such as those in the NBA, than the XGBoost model, which explains the differences in their respective results.
In conclusion, the models proposed provided excellent results in comparison to their baseline counterparts and answered the research questions at hand. It has been shown that they could be utilized in the betting industry to forecast precise results in both Super Rugby and the NBA.
\begin{abstract}
A complete description of twisting somersaults is given using a reduction to a time-dependent
Euler equation for non-rigid body dynamics. The central idea is that after reduction the twisting
motion is apparent in a body frame, while the somersaulting (rotation about the fixed angular
momentum vector in space) is recovered by a combination of dynamic and geometric phase.
In the simplest ``kick-model'' the number of somersaults $m$ and the number of twists $n$
are obtained through a rational rotation number $W = m/n$ of a (rigid) Euler top. This rotation
number is obtained by a slight modification of Montgomery's formula \cite{Montgomery91}
for how much the rigid body has rotated.
Using the full model with shape changes that take a realistic time we then derive the master twisting-somersault formula:
An exact formula that relates the airborne time of the diver, the time spent in various stages of the dive, the numbers $m$ and $n$, the energy in the stages, and the angular momentum by extending a geometric phase formula due to Cabrera \cite{Cabrera07}.
Numerical simulations for various dives agree perfectly with this formula
where realistic parameters are taken from actual observations.
\end{abstract}
\section{Introduction}
One of the most beautiful Olympic sports is springboard and platform diving, where a typical dive
consists of a number of somersaults and twists performed in a variety of forms.
The athlete generates angular momentum at take-off and achieves the desired
dive by executing shape changes while airborne. From a mathematical point of
view the simpler class of dives are those for which the rotation axis and hence the direction of
angular velocity remains constant, and only the values of the principal moments of inertia
are changed by the shape change, but not the principal axis. This is typical in dives with
somersaults in a tight tuck position with minimal moment of inertia.
The mathematically not so simple dives are those where the shape change
also moves the principal axis and hence generates a motion in which the
rotation axis is not constant. This is typical in twisting somersaults.
In this work we present a detailed mathematical theory of the twisting somersault,
starting from the derivation of the equations of motion for coupled rigid bodies in a certain co-moving frame,
to the derivation of a model of a shape-changing diver who performs a
twisting somersault. The first correct description of the physics of the twisting somersault
was given by Frohlich \cite{Frohlich79}. Frohlich writes with regard to some publications
from the 60s and 70s that
``several books written by or for coaches discuss somersaulting and twisting,
and exhibit varying degrees of insight and/or confusion about the physics processes that occur".
A full fledged analysis has been developed by
Yeadon in a series of classical papers \cite{Yeadon93a,Yeadon93b,Yeadon93c,Yeadon93d}.
Frohlich was the first to
point out the importance of shape change for generating rotations even in
the absence of angular momentum. What we want to add here is the analysis
of how precisely a shape change can generate a change in rotation in the
presence of angular momentum. From a modern point of view this is a question
raised in the seminal paper by Shapere and Wilczek \cite{ShapereWilczek87,ShapereWilczek88}:
``what is the most efficient way for a body to change its orientation?"
Our answer involves generalisations of geometric phase in rigid body dynamics
\cite{Montgomery91} to shape-changing bodies recently obtained in \cite{Cabrera07}.
To be able to apply these ideas in our context we first derive a version
of the Euler equation for a shape-changing body. Such equations are
not new, see, e.g.~\cite{Montgomery93,Iwai98,Iwai99,Enos93}, but what is new is the
derivation of the explicit form of the additional time-dependent terms when the
model is a system of coupled rigid bodies. We have chosen to be less general than
previous authors by specialising in a particular frame, but for our intended
application the gained beauty and simplicity
of the equations of motion (see Theorem 1) is worth the loss of generality.
We then take a particularly simple system of just two coupled rigid bodies (the ``one-armed diver'')
and show how a twisting somersault can be achieved even with this overly simplified model.
An even simpler model is the diver with a rotor \cite{BDDLT15}, in which all the
stages of the dive can be analytically solved for.
Throughout the paper we emphasise the geometric mechanics point of view.
Hence the translational and rotational symmetry of the problem
is reduced, and thus Euler-type equations are found in a co-moving frame.
In this reduced description the amount of somersault (i.e.\ the amount of rotation
about the fixed angular momentum vector in space) is not present.
Reconstruction allows us to recover this angle by solving an additional differential equation
driven by the solution of the reduced equations. However, it turns out that
for a closed loop in shape space the somersault angle can be recovered by
a geometric phase formula due to \cite{Cabrera07}. In diving terminology
this means that from the knowledge of tilt and twist the somersault can be
recovered. We hope that the additional insight gained in this way will lead
to an improvement in technique and performance of the twisting somersault.
The structure of the paper is as follows:
In section 2 we derive the equations of motion for a system of coupled rigid bodies that
is changing shape. The resulting Euler-type equations are the basis for the following analysis.
In section 3 we discuss a simplified kick-model, in which the shape change
is impulsive. The kick changes the trajectory and the energy, but not the total
angular momentum. In section 4 the full model is analysed, without the kick assumption.
Unlike the previous section, here we have to resort to numerics to compute
some of the terms. Nevertheless, the generalised geometric phase formula
due to Cabrera \cite{Cabrera07} still holds and gives a beautiful geometric interpretation
to the mechanics behind the twisting somersault.
\newpage
\section{Euler equations for coupled rigid bodies}
Let $\l$ be the constant angular momentum vector in a space fixed frame.
Rigid body dynamics usually use a body-frame because in that frame the tensor
of inertia is constant.
The change from one coordinate system to the other is given by a rotation matrix $R = R(t) \in SO(3)$,
such that $\l = R \L$.
In the body frame the vector $\L$ is described as a moving vector and only its length
remains constant since $R \in SO(3)$.
The angular velocity $\O$ in the body frame is the vector such that
$ \O \times \v = R^t \dot R \v$ for any vector $\v \in \mathbb{R}^3$.
Even though for a system of coupled rigid bodies the tensor of inertia is generally not
a constant, a body frame still gives the simplest equations of motion:
\begin{theorem}
The equations of motion for a shape-changing body with angular momentum vector $\L \in \mathbb{R}^3$ in a body frame are
\[
\dot \L = \L \times \O
\]
where the angular velocity $\O \in \mathbb{R}^3$ is given by
\[
\O = I^{-1} ( \L - \mathbf{A}),
\]
$I = I(t)$ is the tensor of inertia, and $\mathbf{A} = \mathbf{A}(t)$ is a momentum shift generated by the shape change.
\end{theorem}
\begin{proof}
The basic assumption is that the shape change is such that the angular momentum is constant.
Let $\l$ be the vector of angular momentum in the space fixed frame, then $ \l = R \L$.
Taking the time derivative gives $\mathbf{0} = \dot R \L + R \dot \L$ and hence $\dot \L = -R^t \dot R \L = -\O \times \L = \L \times \O$.
The interesting dynamics is all hidden in the relation between $\O$ and $\L$.
Let $\mathbf{q} = R \mathbf{Q}$ where $\mathbf{Q}$ is the position of a point in the body $B$
in the body frame, and $\mathbf{q}$ is the corresponding point in the space fixed frame.
The relation between $\L$ and $\O$ is obtained from the definition of angular momentum which is
$\mathbf{q} \times \dot \mathbf{q}$ integrated over the body $B$.
The relation $ \mathbf{q} = R \mathbf{Q}$ is for a rigid body, for a deforming body we label each point by $\mathbf{Q}$
in the body frame but allow for an additional shape change $S$, so that $\mathbf{q} = R S \mathbf{Q}$.
We assume that $S : \mathbb{R}^3 \to \mathbb{R}^3$ is volume preserving, which means
that the determinant of the Jacobian matrix of $S$ is 1.
The deformation $S$ need not be linear, but we assume that we are in a frame in which the
centre of mass is fixed at the origin, so that $S$ fixes that point. Now
\[
\begin{aligned}
\dot \mathbf{q} & = \dot R S \mathbf{Q} + R \dot S \mathbf{Q} + R S \dot \mathbf{Q}
= R R^t \dot R S \mathbf{Q} + R \dot S \mathbf{Q} = R ( \O \times S \mathbf{Q} )+ R \dot S S^{-1} S \mathbf{Q} \\
& = R (\O \times \tilde \mathbf{Q} + \dot S S^{-1} \tilde \mathbf{Q})
\end{aligned}
\]
where $\tilde \mathbf{Q} = S \mathbf{Q}$.
Thus we have
\[
\begin{aligned}
\mathbf{q} \times \dot \mathbf{q} & = R \tilde \mathbf{Q} \times R ( \O \times \tilde \mathbf{Q} + \dot SS^{-1} \tilde \mathbf{Q}) \\
& = R ( | \tilde \mathbf{Q}|^2 \mathbbm{1} - \tilde \mathbf{Q} \tilde \mathbf{Q}^t) \O + R ( \tilde \mathbf{Q} \times \dot S S^{-1} \tilde \mathbf{Q}) \,.
\end{aligned}
\]
Now $\l$ is defined by integrating over the deformed body $\tilde B$ with density $\rho = \rho(\tilde \mathbf{Q})$,
so that $\l = \int \rho \mathbf{q} \times \dot \mathbf{q} \mathrm{d} \tilde \mathbf{Q}$ and using $\l = R \L$ gives
\[
\L = \int_{\tilde B} \rho ( | \tilde \mathbf{Q} |{}^2 \mathbbm{1} - \tilde \mathbf{Q} \tilde \mathbf{Q}^t) \, \mathrm{d} \tilde \mathbf{Q} \,\, \O + \int_{\tilde B} \rho \tilde \mathbf{Q} \times \dot S S^{-1} \tilde \mathbf{Q} \, \mathrm{d} \tilde \mathbf{Q}.
\]
The first term is the tensor of inertia $I$ of the shape-changed body and the second term defines $\mathbf{A}$, so that
\[
\L = I \O + \mathbf{A}
\]
as claimed.
\end{proof}
\begin{remark}
Explicit formulas for $I$ and $\mathbf{A}$ in the case of a system of coupled rigid bodies are given
in the next theorem.
When $I$ is constant and $\mathbf{A} = 0$ the equations reduce to the classical Euler equations for a rigid body.
\end{remark}
\begin{remark}
For arbitrary time dependence of $I$ and $\mathbf{A}$ the total angular momentum $| \L |$ is conserved,
in fact it is a Casimir of the Poisson structure $\{ f, g \} = \nabla f \cdot \L \times \nabla g$.
\end{remark}
\begin{remark}
The equations are Hamiltonian with respect to this Poisson structure
with Hamiltonian $H = \frac12 ( \L - \mathbf{A}) I^{-1} ( \L - \mathbf{A})$ such that $\O = \partial H/ \partial \L$.
\end{remark}
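As a simple illustration of Theorem 1, the reduced equation can be integrated numerically once $I(t)$ and $\mathbf{A}(t)$ are prescribed. The sketch below uses arbitrary placeholder functions for $I(t)$ and $\mathbf{A}(t)$ (not a diver model); whatever these are, the norm $|\L|$ is preserved along the solution, reflecting Remark 2.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def inertia(t):
    """Placeholder time-dependent tensor of inertia I(t) (symmetric 3x3)."""
    return np.diag([10.0, 12.0, 2.0 + 0.5 * np.sin(t)])

def momentum_shift(t):
    """Placeholder internal angular momentum A(t) generated by a shape change."""
    return np.array([0.2 * np.cos(t), 0.0, 0.0])

def rhs(t, L):
    """Reduced Euler equation: dL/dt = L x Omega with Omega = I^{-1}(L - A)."""
    omega = np.linalg.solve(inertia(t), L - momentum_shift(t))
    return np.cross(L, omega)

L0 = np.array([0.0, 5.0, 0.1])                # initial angular momentum, body frame
sol = solve_ivp(rhs, (0.0, 10.0), L0, max_step=0.01)
print(np.ptp(np.linalg.norm(sol.y, axis=0)))  # variation of |L|: numerically ~0
\end{verbatim}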
For a system of coupled rigid bodies the shape change $S$ is given by rotations of the individual segments
relative to some reference segment, typically the trunk. The orientation of the reference segment is given by the rotation matrix $R$
so that $\l = R \L$. The system of rigid bodies is described by a tree that describes the
connectivity of the bodies, see \cite{Tong15} for the details.
Denote by $\mathbf{C}$ the overall centre of mass, and by $\mathbf{C}_i$ the position of the centre of mass of body $B_i$ relative
to $\mathbf{C}$. Each body's mass is denoted by $m_i$, and its orientation by $R_{\alpha_i}$, where
$\alpha_i$ denotes the set of angles necessary to describe its relative orientation (e.g.\ a single angle
for a pin joint, or 3 angles for a ball and socket joint). All orientations are measured relative to the reference
segment, so that the orientation of $B_i$ in the space fixed frame is given by $R R_{\alpha_i}$.
The angular velocity $\O_{\alpha_i}$ is the relative angular velocity corresponding to $R_{\alpha_i}$,
so that the angular velocity of $B_i$ in the space fixed frame is $R_{\alpha_i}^t \O + \O_{\alpha_i}$.
Finally $I_i$ is the tensor of inertia of $B_i$ in a local frame with centre at $\mathbf{C}_i$ and coordinate axes aligned with the principal axes of inertia. With this notation we have
\begin{theorem}
For a system of coupled rigid bodies we have
\[
I = \sum R_{\alpha_i} I_i R_{\alpha_i}^t + m_i (| \mathbf{C}_i |{}^2 \mathbbm{1} - \mathbf{C}_i \mathbf{C}_i^t )
\]
and
\[
\mathbf{A} = \sum ( m_i \mathbf{C}_i \times \dot \mathbf{C}_i + R_{\alpha_i} I_i \O_{\alpha_i} )
\]
where $m_i$ is the mass, $\mathbf{C}_i$ the position of the centre of mass,
$R_{\alpha_i}$ the relative orientation, $\O_{\alpha_i}$ the relative angular velocity
such that $R_{\alpha_i}^t \dot R_{\alpha_i} \v = \O_{\alpha_i} \v$ for all $\v \in \mathbb{R}^3$,
and $I_i$ the tensor of inertia of body $B_i$.
The sum is over all bodies $B_i$ including the reference segment, for which the rotation is simply given by $\mathbbm{1}$.
\end{theorem}
\begin{proof}
The basic transformation law for body $B_i$ in the tree of coupled rigid bodies
is $\mathbf{q}_i = R R_{\alpha_i} ( \mathbf{C}_ i + \mathbf{Q}_i)$.
Repeating the calculation in the proof of Theorem 1 with this particular $S$ and summing over the bodies gives the result.
We will skip the derivation of $\mathbf{C}_i$ in terms of the shape change and the geometry of the
model and refer to the Thesis of William Tong \cite{Tong15} for the details.
\end{proof}
\begin{remark}
In the formula for $I$ the first term is the moment of inertia of the segment transformed
to the frame of the reference segment, while the second term comes from the parallel axis theorem
applied to the centre of mass of the segment relative to the overall centre of mass.
\end{remark}
\begin{remark}
In the formula for $\mathbf{A}$ the first term is the internal angular momentum generated by
the change of the relative centre of mass, while the second term originates from the relative angular velocity.
\end{remark}
\begin{remark}
When there is no shape change then $\dot \mathbf{C}_i = 0$ and $\O_i = 0$, hence $\mathbf{A} = 0$.
\end{remark}
\begin{remark}
The vectors $\mathbf{C}_i$, $\dot \mathbf{C}_i$, and $\O_{\alpha_i}$ are determined by the set of
time-dependent matrices $\{ R_{\alpha_i} \}$ (the time-dependent ``shape'') and the joint positions of the coupled rigid bodies
(the time-independent ``geometry'' of the model), see \cite{Tong15} for the details.
In particular also $\sum m_i \mathbf{C}_i = 0$.
\end{remark}
\section{A simple model for twisting somersault}
\begin{figure}
\centering
\subfloat[$t=0$ ]{\includegraphics[width=4cm]{animation01.png}\label{fig:animationlayout}}
\subfloat[$t=1/32$ ]{\includegraphics[width=4cm]{animation02.png}}
\subfloat[$t=1/16$ ]{\includegraphics[width=4cm]{animation03.png}}\\
\subfloat[$t=3/32$ ]{\includegraphics[width=4cm]{animation04.png}}
\subfloat[$t=1/8$ ]{\includegraphics[width=4cm]{animation05.png}}
\subfloat[$t=5/32$ ]{\includegraphics[width=4cm]{animation06.png}}\\
\subfloat[$t=3/16$ ]{\includegraphics[width=4cm]{animation07.png}}
\subfloat[$t=7/32$ ]{\includegraphics[width=4cm]{animation08.png}}
\subfloat[$t=1/4$ ]{\includegraphics[width=4cm]{animation09.png}\label{fig:animationi}}
\caption{The arm motion for the twisting somersault.}\label{fig:animation2}
\end{figure}
Instead of the full complexity of a realistic coupled rigid body model for the human body,
e.g., with 11 segments \cite{Yeadon90b} or more,
here we are going to show that even when all but
one arm is kept fixed it is still possible to do a twisting somersault.
The formulas we derive are completely general, so that more complicated shape
changes can be studied in the same framework. But in order to explain the
essential ingredients of the twisting somersault
we chose to discuss a simple example.
A typical dive consists of a number of phases or stages in which the body shape is either fixed or not.
Again, it is not necessary to make this distinction, the equations of motion are general and one could
study dives where the shape is changing throughout. However, the assumption of
rigid body motions for certain times is satisfied to a good approximation in reality and makes the
analysis simpler and more explicit.
The stages where shape change occurs are relatively short, and considerable
time is spent in rotation with a fixed shape. This observation motivates our first approximate model in
which the shape changes are assumed to be impulsive. Hence we have
instantaneous transitions between solution curves of rigid bodies with different tensors of inertia and
different energy, but the same angular momentum.
A simple twisting somersault hence looks like this:
The motion starts out as a
steady rotation about a principal axis resulting in pure somersault (stage 1); typically this is about the axis of the middle principal moment of inertia, which is an unstable equilibrium.
After some time a shape change occurs; in our case one arm goes down (stage 2).
This makes the body asymmetric and generates some tilt
between the new principal axis and the constant angular momentum vector.
As a result the body starts twisting
with constant shape (stage 3) until another shape change (stage 4) stops the twist, after which
the body then resumes pure somersaulting motion (stage 5) until head first entry in the water.
The amount of time spent in each of the five stages is denoted by $\tau_i$, where $i = 1, \dots, 5$.
\begin{figure}
\centering
\subfloat[$\L$ for all stages.]{\includegraphics[width=7.25cm]{Lfull.pdf}\label{fig:Lfull}}
\subfloat[$q$ for all stages.]{\includegraphics[width=7.25cm]{qfull.pdf}}
\caption{Twisting somersault with $m=1.5$ somersaults and $n=3$ twists.
The left pane shows the angular momentum $\L(t)$,
and the right pane the quaternion $q(t)$ that determines the orientation $R$.
The stages are separated by the vertical dashed lines, $\tau_1 = 0$, $\tau_2 = \tau_4 = 1/4$.
The same trajectory on the $\L$-sphere is shown in Fig.~\ref{fig:spacefull}.}\label{fig:Lqfull}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=12cm]{cap.png}
\caption{Twisting somersault on the sphere $|\L| = l$ where the shape change is a kick.
The region $A$ bounded by the stage 3 orbit of $\L$ and the equator (dashed) is shaded in dark blue. }
\label{fig:Asimple}
\end{figure}
The energy for pure somersault in stage 1 and stage 5 is $E_s = \frac12 \L_s I_s^{-1} \L_s = \frac12 l^2 I_{s,yy}^{-1}$.
In the kick-model stage 2 and stage 4 do not take up any time so $\tau_2 = \tau_4 = 0$,
but they do change the energy and the tensor of inertia. On the momentum sphere $|\L|^2 = l^2$
the dive thus appears like this, see Fig.~\ref{fig:Asimple}: For some time the orbit remains at the equilibrium point
$L_x = L_z = 0$, then it is kicked into a periodic orbit with larger energy describing the
twisting motion. Depending on the total available time a number of full (or half) revolutions are
done on this orbit, until another kick brings the solution back to the unstable equilibrium point
where it started (or on the opposite end with negative $L_y$). Finally, some time is spent in the pure somersaulting motion before the athlete completes the dive with head first entry into the water.
The description in the body frame on the sphere $|\L|^2 = l^2$ does not directly contain
the somersault rotation about $\l$ in physical space.
This is measured by the angle of rotation $\phi$ about the fixed angular momentum vector $\l$ in space,
which is the angle reduced by the symmetry reduction that led to the description in the body frame, and the angle $\phi$ will have to be recovered from its dynamic and geometric phase. What is visible on the $\L$-sphere is the dynamics of the twisting motion,
which is the rotation about the (approximately) head-to-toe body axis.
All motions on the $\L$-sphere except for the separatrices are periodic,
but they are not periodic in physical space
because $\phi$ in general has a different period.
This is the typical situation in a system after symmetry reduction.
To build a successful jump the orbit has to start and finish at one of the equilibrium points
that correspond to somersault rotation about the fixed momentum axis $\l$ in space.
This automatically ensures that the number of twists performed will be a half-integer or integer.
In addition, the change in $\phi$, i.e.\ the amount of rotation about the axis $\l$ has to be such
that it is a half-integer, corresponding to take-off in upright position and entry
with head first into the water. If necessary the angle $\phi$ will evolve (without generating additional twist) in the pure somersaulting stages 1 and 5 to meet this condition.
The orbit for the twisting stage 3 is obtained as the intersection of the angular momentum sphere
$|\L|^2 = l^2$ and the Hamiltonian $H = \frac12 \L I_t^{-1} \L = E_t$, where $I_t$ denotes the
tensor of inertia for the twisting stage 3 which in general is non-diagonal. The period of this motion depends
on $E_t$, and can be computed in terms of complete elliptic integrals of the first kind, see below.
\begin{lemma}
In the kick-model the instantaneous shape change of the arm from up ($\alpha = \pi)$ to down ($\alpha = 0$)
rotates the angular momentum vector in the body frame from
$\L_s = (0, l, 0)^t$ to $\L_t = R_x( -\chi) \L_s$ where
the tilt angle is given by
\[
\chi = \int_0^\pi I_{t,xx}^{-1}(\alpha) A_x(\alpha) \mathrm{d} \alpha \, ,
\]
$I^{-1}_{t,xx}(\alpha)$ is the $xx$ component of the (changing) moment of inertia tensor $I_t^{-1}$,
and the internal angular momentum is $\mathbf{A} = ( A_x(\alpha) \dot \alpha, 0, 0)^t$.
In particular the energy after the kick is $E_t = \frac12 \L_s R_x(\chi) I_t R_x(-\chi) \L_s$.
\end{lemma}
\begin{proof}
Denote by $\alpha$ the angle of the arm relative to the trunk, where $\alpha = 0$ is arm down
and $\alpha = \pi$ is arm up. Let the shape change be determined by $\alpha(t)$ where $\alpha(0) = \pi$
and $\alpha(\tau_2) = 0$. Now $\mathbf{A}$ is proportional to $\dot \alpha$ and hence diverges
when the time that it takes to do the kick goes to zero, $\tau_2 \to 0$.
Thus we approximate $\O = -I^{-1} \mathbf{A}$, and hence the equations
of motion for the kick become
\[
\dot \L = I^{-1} \mathbf{A} \times \L
\]
and are linear in $\L$.
Denote the moving arm as the body $B_2$ with index 2, and the trunk and all the other fixed segments as a combined body with index 1.
Since the arm is moved in the $yz$-plane we have $R_{\alpha_2} = R_x(\alpha(t))$ and $\Omega_{\alpha_2}$
parallel to the $x$-axis. Moreover the overall centre of mass
will be in the $yz$-plane, so that $\mathbf{C}_i$ and $\dot \mathbf{C}_i$, $i=1,2$, are also in the $yz$-plane. So we have
$\mathbf{C}_i \times \dot \mathbf{C}_i$ parallel to the $x$-axis as well, and hence $\mathbf{A} = (A_x \dot\alpha, 0, 0)^t$.
The parallel axis theorem gives non-zero off-diagonal entries only in the $yz$-component of $I$,
and similarly for $R_{\alpha_2} I_2 R^t_{\alpha_2}$; hence $I_{xy} = I_{xz} = 0$ for this shape change.
Thus $I^{-1} \mathbf{A} = ( I_{t,xx}^{-1} A_x \dot \alpha, 0, 0)^t$,
and the equation for $\dot \L$ can be written as $\dot \L = f(t) M \L$ where $M$ is a constant matrix
given by $M = \frac{\mathrm{d}}{\mathrm{d} t} R_x(t)|_{t=0}$ and $f(t) = I_{t,xx}^{-1}(\alpha(t)) A_x(\alpha(t)) \dot \alpha$.
Since $M$ is constant we can solve the time-dependent linear equation and find
\[
\L_t = R_x( -\chi) \L_s
\]
(the subscript $t$ stands for ``twist", not for ``time") where
\[
\chi = \int_0^{\tau_2} I_{t,xx}^{-1} A_x \dot \alpha \,\mathrm{d} t = \int_0^\pi I_{t,xx}^{-1}(\alpha) A_x(\alpha)\, \mathrm{d} \alpha \,.
\]
We take the limit $\tau_2 \to 0$ and obtain the effect of the kick, which is a change in $\L_s$ by a rotation
of $-\chi$ about the $x$-axis. The larger the value of $\chi$ the higher the twisting orbit is on the sphere,
and thus the shorter the period.
The energy after the kick is easily found
by evaluating the Hamiltonian at the new point $\L_t$ on the sphere with the new tensor of inertia $I_t$.
\end{proof}
The tensor of inertia after the shape change is denoted by $I_t$; it does not differ much from $I_s$ but is now non-diagonal. With the rotation $R_x(\mathcal{P})$
it can be re-diagonalised to
\[
J = \mathrm{diag}( J_x, J_y, J_z) = R_x(-\mathcal{P}) I_t R_x(\mathcal{P})
\]
where in general the eigenvalues are distinct. The precise formula for $\mathcal{P}$ depends on the inertia properties
of the model, but its value for a realistic model with 10 segments can be found in \cite{Tong15}.
Formulas for $A_x(\alpha)$ in terms of realistic inertial parameters are also given in \cite{Tong15}.
In stage 3 the twisting somersault occurs, where we assume that an integer number of twists is performed.
This corresponds to an integer number of revolutions of the periodic orbit of the rigid body with
energy $E_t$ and tensor of inertia $I_t$. Let $T_t$ be the period of this motion.
As already pointed out the amount of rotation about the fixed angular momentum vector $\l$ cannot
be directly seen on the $\L$-sphere. Denote this angle of rotation by $\phi$. We need the total change $\Delta \phi$
to be an odd multiple of $\pi$ for head-first entry.
Following Montgomery \cite{Montgomery91} the amount $\Delta \phi$ can be split into a
dynamic and a geometric phase, where the geometric phase is given by the solid angle $S$
enclosed by the curve on the $\L$-sphere.
We are going to re-derive the formula for $\Delta \phi$ here using simple
properties of the integrable Euler top.
The formula for the solid angle $S$ enclosed by a periodic orbit on the $\L$-sphere
is given in the next Lemma (without proof, see, e.g.~\cite{Montgomery91,Levi93,Cushman05}).
\begin{lemma} \label{lem:Ssphere}
The solid angle on the sphere $\L^2 = l^2$ enclosed by the intersection with the ellipsoid $\L J^{-1} \L = 2 E$ is given by
\[
S(h, \rho) = \frac{4 h g}{\pi} \Big( \Pi(n,m) - K(m)\Big)
\]
where
$m = \rho ( 1 - 2 h \rho)/( 2 h + \rho)$,
$n = 1 - 2 h \rho$,
$g = ( 1 + 2 h /\rho)^{-1/2}$,
$h = ( 2 E_t J_y/l^2 - 1)/ \mu$,
$\rho = (1 - J_y/J_z)/(J_y/J_x - 1)$,
$\mu = (J_y/J_x - 1)(1 - J_y/J_z)$.
\end{lemma}
\begin{remark}
The essential parameter is $E_t J_y/l^2$ inside $h$, and all other dependences are on certain dimensionless
combinations of moments of inertia.
Note that the notation in the Lemma uses the classical letter $n$ and $m$ for the arguments
of the complete elliptic integrals, these are not to be confused with the number of twists and
somersaults.
\end{remark}
\begin{remark}
This is the solid angle enclosed by the curve towards the north-pole
and when measuring the solid angle between the equator and the curve the results is $2 \pi - S$.
This can be seen by considering $h \to 1/( 2 \rho)$ which implies $m \to 0$ and $n \to 0$,
and hence $S \to 0$.
\end{remark}
Using this Lemma we can find simple expressions for the period and rotation number by
noticing that the action variables of the integrable system on the $\L$-sphere
are given by $l S/\pi$:
\begin{lemma}\label{lem:tT}
The derivative of the action $l S /( 2 \pi) $ with respect to the energy $E$ gives the inverse of the frequency
$(2\pi)/T$ of the motion, such that the period $T$ is
\[
T = \frac{ 4 \pi g}{\mu l} K(m) \,.
\]
and the derivative of the action $lS/( 2\pi) $ with respect to $l$ gives the rotation number $-W$.
\end{lemma}
\begin{proof}
The main observation is that the symplectic form on the $\L$-sphere of radius $l$ is the area-form
on the sphere divided by $l$, and that the solid angle on the sphere of radius $l$ is the enclosed area divided by $l^2$.
Thus the action is $ l S / ( 2\pi)$ and the area is $l^2 S$.
From a different point of view the reason that the essential object is the solid angle is that the
Euler-top has scaling symmetry: if $\L$ is replaced by $s \L$ then $E$ is replaced by $s^2 E$
and nothing changes. This implies that the essential parameter is the ratio $E/l^2$ and the solid
angle is a function of $E/l^2$ only.
A direct derivation of the action in which $h$ is the essential parameter can be found in \cite{PD12}.
Now differentiating the action $l S(E/l^2) / ( 2\pi) $ with respect to $E$ gives
\[
\frac{ T}{2\pi} = \frac{ \partial l S(E/l^2) }{2\pi \partial E} = \frac{1}{2\pi l} S'( E/l^2)
\]
and differentiating the action with respect to $l$ gives
\[
-2\pi W = \frac{ \partial l S(E/l^2) }{\partial l} = S(E / l^2) - \frac{2 E}{l^2} S'(E/l^2) = S - \frac{ 2 E T}{ l}\vspace{-7mm}
\]
\end{proof}
\vspace{0mm}
\begin{remark}
What is not apparent in these formulas are that the scaled period $l T$ is a relatively simple
complete elliptic integral of the first kind (depending on $E/l^2$ only),
while $S$ and $W$ are both complete elliptic integrals of the third kind
(again depending on $E/l^2$ only).
\end{remark}
\begin{remark}
In general the rotation number is given by the ratio of frequencies, and those can be computed
from derivatives of the actions with respect to the integrals. If one of the integrals (of the original
integrable system) is a global $S^1$ action, then the simple formula $W = \partial I / \partial l$
results, which is the change of the action $I$ with respect to changing the other action $l$
while keeping the energy constant.
\end{remark}
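For reference, the quantities in Lemma~\ref{lem:Ssphere} and Lemma~\ref{lem:tT} are straightforward to evaluate numerically; the sketch below computes the complete elliptic integrals by direct quadrature (parameter convention $K(m)=\int_0^{\pi/2}(1-m\sin^2\theta)^{-1/2}\,\mathrm{d}\theta$) and then the solid angle $S$, period $T$ and rotation number $W$ from the formulas above. The moments of inertia in the commented example are arbitrary placeholders, not anthropometric values.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def K(m):
    """Complete elliptic integral of the first kind, parameter m."""
    return quad(lambda t: 1.0 / np.sqrt(1.0 - m * np.sin(t) ** 2), 0.0, np.pi / 2)[0]

def Pi(n, m):
    """Complete elliptic integral of the third kind, parameters n and m."""
    return quad(lambda t: 1.0 / ((1.0 - n * np.sin(t) ** 2)
                                 * np.sqrt(1.0 - m * np.sin(t) ** 2)),
                0.0, np.pi / 2)[0]

def solid_angle_period_rotation(E_t, l, Jx, Jy, Jz):
    """S, T and W for the twisting orbit, following the two lemmas above."""
    mu = (Jy / Jx - 1.0) * (1.0 - Jy / Jz)
    rho = (1.0 - Jy / Jz) / (Jy / Jx - 1.0)
    h = (2.0 * E_t * Jy / l ** 2 - 1.0) / mu
    g = (1.0 + 2.0 * h / rho) ** -0.5
    m = rho * (1.0 - 2.0 * h * rho) / (2.0 * h + rho)
    n = 1.0 - 2.0 * h * rho
    S = 4.0 * h * g / np.pi * (Pi(n, m) - K(m))
    T = 4.0 * np.pi * g / (mu * l) * K(m)
    W = (2.0 * E_t * T / l - S) / (2.0 * np.pi)   # from -2 pi W = S - 2 E T / l
    return S, T, W

# Example call with placeholder values (Jx > Jy > Jz as for a diver):
# S, T, W = solid_angle_period_rotation(E_t=2.6, l=4.0, Jx=3.4, Jy=3.2, Jz=1.0)
\end{verbatim}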
\begin{theorem}
The total amount of rotation $\Delta \phi_{kick}$ about the fixed angular momentum axis~$\l$ for the kick-model
when performing $n$ twists is given by
\begin{equation} \label{eqn:kick}
\Delta \phi_{kick} = (\tau_1 + \tau_5) \frac{2 E_s }{l} + \tau_3 \frac{2 E_t}{l} - n S\,.
\end{equation}
The first terms are the dynamic phase where $E_s$ is the energy in the somersault stages and
$E_t$ is the energy in the twisting somersault stage.
The last term is the geometric phase where $S$ is the solid angle enclosed by the orbit in the twisting somersault
stage. For equal moments of inertia $J_x = J_y$ the solid angle $S$ is
\[
S = 2\pi \sin( \chi + \mathcal{P})
\]
and in general is given by Lemma~\ref{lem:Ssphere}.
To perform $n$ twists the time necessary is $\tau_3 = n T_t$ where
again for equal moments of inertia $J_x = J_y$ the period $T_t$ is
\[
T_t = \frac{2\pi}{l} \frac{(J_y^{-1} - J_z^{-1})^{-1}}{ \sin ( \chi + \mathcal{P} )}
\]
and in general is given by Lemma~\ref{lem:tT} where $T=T_t$.
\end{theorem}
\begin{proof}
This is a slight extension of Montgomery's formula \cite{Montgomery91} for how much the rigid body rotates, with
the added feature that there is no $\mathrm{mod} \, 2\pi$ for $\Delta \phi_{kick}$: We actually need to know how many somersaults occurred.
The formula is applied to each stage of the dive, but stage 2 and stage 4 do not contribute because $\tau_2 = \tau_4 = 0$.
In stage 1 and stage 5 the trajectory is at an equilibrium point on the $\L$-sphere, so there is only a contribution to the dynamic phase.
The essential terms come from stage 3, which is the twisting somersault stage without shape change.
When computing $S$ we need to choose a particular normalisation of the integral which is different from
Montgomery \cite{Montgomery91,Levi93}, and also different from \cite{Cushman05}.
Our normalisation is such that when
$J_x = J_y$ the amount of rotation obtained is the corresponding angle $\phi$ of the somersault,
i.e.\ the rotation about the fixed axis $\l$ in space. This means that the correct solid angle for our purpose is
such that when $J_x = J_y$ and $E_t = E_s$ the contribution is zero. Therefore, we should measure
area $ A = S l^2$ relative to the equator on the sphere. When $J_x = J_y$ we are simply measuring the area
of a slice of the sphere bounded by the equator and the twisting somersault orbit, the latter lying in a
plane parallel to the $xy$-plane with opening angle $\chi + \mathcal{P}$, see Fig.~\ref{fig:Asimple}
\footnote{This is somewhat less obvious than it appears, since the orbit is actually tilted by $\mathcal{P}$
relative to the original equator. It turns out, however, that computing it either way gives the same answer.}.
In the general case where $J_x \neq J_y$ the area can be computed in terms of elliptic integrals, and the details are given in \cite{Tong15}.
Similarly the period of the motion along $H = E_t $ can be either computed from explicit solutions
of the Euler equations for $J_x = J_y$, or by elliptic integrals, again see \cite{Tong15} for the details.
\end{proof}
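The master formula \eqref{eqn:kick} is trivial to evaluate once $S$, the energies and the stage times are known; a small helper such as the following (with all arguments supplied by the user) makes the bookkeeping explicit.
\begin{verbatim}
def delta_phi_kick(tau1, tau5, tau3, E_s, E_t, l, n_twists, S):
    """Total rotation about the angular momentum axis in the kick-model.

    Dynamic phase from the somersault stages (tau1, tau5, energy E_s) and the
    twisting stage (tau3 = n_twists * T_t, energy E_t), minus the geometric
    phase n_twists * S.  A dive with m somersaults requires the result to
    equal 2*pi*m, with m a half-integer for head-first entry.
    """
    dynamic = (tau1 + tau5) * 2.0 * E_s / l + tau3 * 2.0 * E_t / l
    geometric = n_twists * S
    return dynamic - geometric
\end{verbatim}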
Now we have all the information needed to construct a twisting somersault.
A result of the kick approximation is that we have $\tau_2 = \tau_4 = 0$, and if we further set $\tau_1=\tau_5=0$ then there is no pure somersault either, which makes this the simplest twisting somersault. We call this dive the pure twisting somersault and take it as a first approximation to understanding the more complicated dives.
\begin{corollary}
A pure twisting somersault with $m$ somersaults and $n$ twists is found for
$\tau_1 = \tau_2 = \tau_4 = \tau_5 = 0$ and must satisfy
\begin{equation} \label{eqn:rot}
2 \pi m = \Big( 2 l T_t \frac{ E_t}{l^2} - S \Big) n
\end{equation}
where both $S$ and $l T_t$ are functions of $E_t/l^2$ only (besides inertial parameters).
\end{corollary}
\begin{proof}
This is a simple consequence of the previous theorem by setting $\Delta \phi = 2 \pi m$, $\tau_3 = n T_t$,
and $\tau_1 = \tau_5 = 0$.
\end{proof}
\begin{remark}
Solving \eqref{eqn:rot} for $m/n$ gives a rotation number of the Euler top, which characterizes
the dynamics on the 2-tori of the super-integrable Euler top. This rotation number is equivalent
up to uni-modular transformations to that of Bates et al \cite{Cushman05}.
\end{remark}
\begin{remark}
The number of somersaults per twist is $m/n$, and \eqref{eqn:rot} determines $E_t/l^2$ (assuming the inertial parameters are given).
Having $E_t/l^2$ determined in this way means one would need to find a shape change or kick which achieves that $E_t/l^2$, and
large values of $E_t/l^2$ can be hard or impossible to achieve.
For the one-arm kick-model discussed above the energy that is reached is given by
\[
\frac{E_t}{l^2} = \frac12 \L_s R_x(\chi + \mathcal{P}) J R_x(-\chi - \mathcal{P}) \L_s/l^2.
\]
\end{remark}
\begin{remark}
Given a particular shape change (say, in the kick-approximation) the resulting $E_t/l^2$ will
in general not result in a rational rotation number, and hence not be a solution of \eqref{eqn:rot}.
In this case the pure somersault
of stage 1 and/or stage 5
needs to be used to achieve a solution of \eqref{eqn:kick}
instead.
\end{remark}
\begin{remark}
The signs are chosen so that $S$ is positive in the situation we consider. Thus the geometric phase
lowers $\Delta \phi$, and can be thought of as an additional cost that twisting imposes on somersaulting.
\end{remark}
\begin{figure}
\centering
\includegraphics[width=10cm]{spacefull.png}
\caption{Twisting somersault with $m=1.5$ somersaults and $n=3$ twists on the sphere $|\L| = l$.
The orbit starts and finishes on the $L_y$-axis with stage 1 and stage 5.
Shape-changing stages 2 and 4 are the curved orbit segments that start and finish
at this point. The twisting somersault stage 3 appears as a slightly deformed
circle below the equator (dashed).
}\label{fig:spacefull}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{circle3.pdf}
\caption{Areas $A_+$ and $A_-$ corresponding to solid angles $S_+ = A_+/l^2$ and $S_- = A_-/l^2$.
The geometric phase correction due to the shape change is given by $S_-$.}
\label{fig:Aminus}
\end{figure}
The total airborne time $T_{air}$ has small variability for platform diving,
and is bounded above for springboard diving. A typical dive has $ 1.5 < T_{air} < 2.0$ seconds.
After $E_t/l^2$ is determined by the choice of $m/n$ the airborne time can be adjusted
by changing $l$ (within the physical possibilities) while keeping $E_t/l^2$ fixed.
Imposing $T_{air} = \tau_1 + \tau_5 + \tau_3$ we obtain:
\begin{corollary}
A twisting somersault with $m$ somersaults and $n$ twists in the kick-model must satisfy
\[
2 \pi m + n S = T_{air} \frac{ 2 E_s}{l} + 2 n l T_t \frac{ E_t - E_s}{l^2}
\]
where $T_{air} - \tau_3 = \tau_1 + \tau_5 \ge 0$.
\end{corollary}
\section{The general twisting somersault}
The kick-model gives a good understanding of the principal ingredients needed in a successful dive.
In the full model the shape-changing times $\tau_2$ and $\tau_4$ need to be set to realistic values.
We estimate that the full arm motion takes at least about $1/4$ of a second. So instead of having a kick
connecting $\L_s$ to $\L_t(0)$, a piece of trajectory from the time-dependent Euler equations needs to be inserted, which can be seen in Fig.~\ref{fig:Lqfull}.
The computation of the two dive segments from stage 2 and stage 4 has to be done numerically in general.
Nevertheless, this is a beautiful generalisation of Montgomery's formula
due to Cabrera \cite{Cabrera07}, which holds in the non-rigid situation.
In Cabrera's formula the geometric phase is still given by the solid angle enclosed by the
trajectory, however for the dynamic phase instead of simply $2ET$ we actually need to integrate
$\L \cdot \O$ from 0 to $T$. Now when the body is rigid we have $2 E = \L \cdot \O = const$ and Cabrera's formula
reduces back to Montgomery's formula.
\begin{theorem}
For the full model of a twisting somersault with $n$ twists,
the total amount of rotation $\Delta\phi$ about the fixed angular momentum axis $\l$ is given by
\[
\Delta \phi = \Delta \phi_{kick} + \frac{ 2 \bar E_2 \tau_2 }{l} + \frac{ 2 \bar E_4 \tau_4 }{l} + S_-
\]
where $S_-$ is the solid angle of the triangular area on the $\L$-sphere enclosed by the trajectories of
the shape-changing stage 2 and stage 4, and part of the trajectory of stage 3,
see Fig.~\ref{fig:Aminus}.
The average energies along the transition segments are given by
\[
\bar E_i = \frac{1}{2\tau_i} \int_0^{\tau_i} \L \cdot \O \, \mathrm{d} t, \quad i = 2, 4
\]
\end{theorem}
\begin{proof}
This is a straightforward application of Cabrera's formula. For stage 1, stage 3, and stage 5 where there is no shape change
the previous formula is obtained. For stage 2 and stage 4 the integral of $\L \cdot \O$ along the numerically
computed trajectory with time-dependent shape is computed to give the average energy during the shape change.
\end{proof}
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{tvl.pdf}
\caption{The relationship between airborne time $T_\mathit{air}$ and angular momentum $l$ when $\tau_2=\tau_4=1/4$ is used in Corollary~\ref{cor:fin}.
The result is for the case of $m=1.5$ somersaults with different number of $n$ twists.
The maximum number of twists is $n=4$ since we need
$T_{air} - \tau_2 - \tau_3 - \tau_4 = \tau_1 + \tau_5 \ge 0$.
} \label{fig:tvlan}
\end{figure}
\begin{remark}
This quantifies the error that occurs with the kick-model. The geometric phase is corrected by the solid angle $S_-$ of a
small triangle, see Fig.~\ref{fig:Aminus}. The dynamic phase is corrected by adding terms proportional to $\tau_2$
and $\tau_4$. Note that if we keep the total time $\tau_2+ \tau_3 + \tau_4$ constant then we can think of the shape-changing times $\tau_2$ and $\tau_4$ from the full model as being part of the twisting somersault time $\tau_3$ of the kick-model. The difference is $2\big((\bar{E}_2-E_t)\tau_2+(\bar{E}_4-E_t)\tau_4\big)/l$, and since both $\bar{E}_2$ and $\bar{E}_4 < E_t$
the dynamic phase in the full model is slightly smaller than in the kick-model.
\end{remark}
\begin{remark}
As $E_t$ is determined by the endpoint of stage 2, it can now only be calculated numerically.
\end{remark}
The final step is to use the above results to find parameters that will achieve $m$ somersaults and $n$ twists,
where typically $m$ is a half-integer and $n$ an integer.
\begin{corollary}
A twisting somersault with $m$ somersaults and $n$ twists satisfies
\[
2 \pi m + n S - S_- = T_{air} \frac{ 2 E_s} { l}
+ 2 \tau_2 \frac{\bar E_2 - E_s }{l} + 2 \tau_4 \frac{ \bar E_4 - E_s}{l} + 2 \tau_3 \frac{ E_t - E_s}{ l}
\]
where $T_{air} - \tau_2 - \tau_3 - \tau_4 = \tau_1 + \tau_5 \ge 0$.
\label{cor:fin}
\end{corollary}
\begin{remark}
Even though $\bar E_2, \bar E_4, E_t$, and $S_-$ have to be computed numerically in this formula, the geometric interpretation is as clear as before: The geometric phase is given by the area terms $nS$ and $S_-$.
\end{remark}
In the absence of explicit solutions for the shape-changing stages 2 and 4, we have numerically evaluated
the corresponding integrals and compared the predictions of the theory to a full numerical simulation.
The results for a particular case and parameter scan are shown in Fig.~\ref{fig:Lqfull} and Fig.~\ref{fig:tvlan} respectively, and the agreement between theory and numerical simulation is extremely good.
Fixing the shape change and the time it takes determines $E_t/l^2$, so the essential parameters to be adjusted by the athlete are the angular momentum $l$ and airborne time $T_{air}$ (which are directly related to the initial angular and vertical velocities at take-off).
Our result shows that these two parameters are related in a precise way given in Corollary~\ref{cor:fin}.
At first it may seem counterintuitive that a twisting somersault with more twists (and the same number of somersaults) requires less angular momentum when the airborne time is the same, as shown in Fig.~\ref{fig:tvlan} for $m = 3/2$ and $n = 0,1,2,3,4$. The reason is that while twisting, the moments of inertia relevant for somersaulting are smaller than when not twisting, since pure somersaults are performed in layout position as shown in Fig.~\ref{fig:animationlayout}, hence less overall time is necessary.
In reality, the somersaulting phase is often done in pike or tuck position which significantly reduces the moment of inertia about the somersault axis, leading to the intuitive result that more twists require larger angular momentum when airborne time is the same.
\section{Acknowledgement}
This research was supported by ARC Linkage grant LP100200245 and the New South Wales Institute of Sports.
\bibliographystyle{plain}
\section{Introduction}
\vspace{0.5em}
It is well-known (and supported by
studies~\cite{lester2013visual, messaris2001role}) that the most powerful messages are delivered with a combination of words and pictures. On the Internet, such multimodal content is abundant in the form of news articles, social media posts, and personal blog posts where authors enrich their stories with carefully chosen and placed images.
As an example, consider a vacation trip report, to be posted on a blog site or online community. The backbone of the travel report is a textual narration, but the user typically places illustrative images in appropriate spots, carefully selected from her photo collection from this trip. These images can either show specific highlights such as waterfalls, mountain hikes or animal encounters, or may serve to depict feelings and the general mood of the trip, e.g., by showing nice sunsets or bar scenes.
Another example is brochures for research institutes or other organizations. Here, the text describes the mission, achievements and ongoing projects, and it is accompanied with judiciously selected and placed photos of buildings, people,
products and other images depicting the subjects and phenomena of interest, e.g., galaxies or telescopes for research in astrophysics.
The generation of such multimodal stories requires substantial human judgement and reasoning, and is thus time-consuming and labor-intensive. In particular, the effort on the human side includes
selecting the right images from a pool of story-specific photos
(e.g., the traveler's own photos)
and possibly also from a broader pool for visual illustration
(e.g., images licensed from a PR company's catalog or
a big provider such as Pinterest).
Even if the set of photos were exactly given, there is still considerable effort to place them within or next to appropriate paragraphs, paying attention to the semantic coherence between surrounding text and image.
In this paper, we set out to automate this human task, formalizing it as a {\em story-images alignment} problem.
\vspace{0.8em}
\ourheading{Problem Statement} Given a story-like text document and a set of images, the problem is to automatically decide where individual images are placed in the text.
Figure~\ref{fig:overview} depicts this task.
The problem comes in different variants: either all images in the
given set need to be placed, or a subset of given cardinality must be selected and aligned with text paragraphs.
Formally, given $n$ paragraphs and $m \le n$ images,
assign these images to a subset of the paragraphs,
such that each paragraph has at most one image.
The variation with image selection assumes that
$m > n$ and requires a budget $b \le n$ for the
number of images to be aligned with the paragraphs.
\vspace{0.8em}
\ourheading{Prior Work and its Inadequacy}
There is ample literature on computer support for multimodal content creation, most notably, on generating image tags and captions.
Closest to our problem is prior work on story illustration \cite{DBLP:journals/tomccap/JoshiWL06, DBLP:conf/kes/SchwarzRCGGL10}, where the task is to select illustrative images from a large pool. However, the task is quite different from ours, making prior approaches inadequate for the setting of this paper.
First, unlike in general story illustration, we need to consider the text-image alignments jointly for all pieces of a story, rather than making context-free choices one piece at a time. Second, we typically start with a pool of story-specific photos and expect high semantic coherence between each narrative paragraph and the respective image, whereas general story illustration operates with a broad pool of unspecific images that serve many topics.
Third, prior work assumes that each image in the pool has an informative caption or set of tags, by which the selection algorithm computes its choices. Our model does not depend on a pre-defined set of tags, but detects image concepts on the fly.
\vspace{0.8em}
Research on Image Tagging may be based on community input, leading to so-called ``social tagging'' \cite{DBLP:journals/sigkdd/GuptaLYH10}, or based on computer-vision methods, called ``visual tagging''. In the latter case, bounding boxes are automatically annotated with image labels, and relationships between objects may also be generated \cite{DBLP:conf/cvpr/RedmonF17, DBLP:conf/eccv/LuKBL16}. Recent works have investigated how to leverage commonsense knowledge as a background asset to further enhance such automatically computed tags \cite{DBLP:conf/wsdm/ChowdhuryTFW18}. Also, deep-learning methods have led to expressive forms of multimodal embeddings, where textual descriptions and images are projected into a joint latent space \cite{DBLP:conf/nips/FromeCSBDRM13, DBLP:conf/bmvc/FaghriFKF18} in order to compute multimodal similarities.
\vspace{0.8em}
In this paper, in addition to manual image tags where available, we harness visual tags from deep neural network based object-detection frameworks and incorporate background commonsense knowledge, as automatic steps to enrich the semantic interpretation of images. This, by itself, does not address the alignment problem, though; the alignment itself is solved by combinatorial optimization.
Our method is experimentally compared to baselines that make use of multimodal embeddings.
\vspace{0.8em}
\ourheading{Our Approach -- SANDI}
We present a framework that casts the story-images alignment task into a combinatorial optimization problem. The objective function, to be maximized, captures the semantic coherence between each paragraph and the image that is placed there. To this end, we consider a suite of features, most notably, the visual tags associated with an image
(user-defined tags as well as tags from automatic computer-vision tools),
text embeddings, and also background knowledge in the form of commonsense assertions. The optimization is constrained by the number of images that the story should be enriched with. As a solution algorithm, we devise an integer linear program (ILP) and employ the Gurobi ILP solver for computing the exact optimum. Experiments show that SANDI produces semantically coherent alignments.
\vspace{0.8em}
\ourheading{Contributions}
To the best of our knowledge, this is the first work to
address the story-images alignment problem.
Our salient contributions are:\\
\begin{enumerate}
\item We introduce and define the problem of story-images alignment.
\item We analyze two real-world datasets of stories with rich visual illustrations, and derive insights on alignment decisions and quality measures.
\item We devise relevant features, formalize the alignment task as a combinatorial optimization problem, and develop an exact-solution algorithm using integer linear programming.
\item We present experiments that compare our method against baselines that use multimodal embeddings.
\end{enumerate}
\begin{figure}[t]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=1.03\columnwidth]{figures/SANDI-overview-whitebg.png}
\caption{The story-and-images alignment problem.}
\label{fig:overview}
\end{center}
\vspace{0.2cm}
\end{figure}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\begin{tabular}{|p{3cm}|p{4.6cm}|}
\hline
Image & Ground Truth Paragraph\\
\hline
\raisebox{-\totalheight}{\includegraphics[height=20mm]{figures/article100-15702976972_5c5e6d28cf_k.jpg}}\vspace{0.5em}
& \vspace{0.25em} \ldots Table Mountain Cableway. The revolving car provides 360 degree views as you ascend this mesmerising 60-million-year-old mountain. From the upper cableway station\ldots\\
\hline
\raisebox{-\totalheight}{\includegraphics[height=20.4mm]{figures/5260483563_553d36cb25_b.jpg}}\vspace{0.5em}
& \vspace{0.25em}\ldots On the east flank of the hill is the old Muslim quarter of the Bo-Kaap; have your camera ready to capture images of the photogenic pastel-painted colonial period homes\ldots\\
\hline
\end{tabular}
\caption{Sample image and corresponding paragraph from Lonely Planet}
\label{image-para-lonelyPlanet}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\begin{tabular}{|p{3cm}|p{4.6cm}|}
\hline
Image & Ground Truth Paragraph\\
\hline
\raisebox{-\totalheight}{\includegraphics[height=20mm]{figures/rangsit-university-students-1024x683.jpg}}\vspace{0.6em}
& \vspace{0.25em} If you are just looking for some peace and quiet or hanging out with other students...library on campus, a student hangout space in the International College building\ldots.\\
\hline
\raisebox{-\totalheight}{\includegraphics[height=20mm]{figures/Airplane_travel-1024x683.jpg}}\vspace{0.7em}
& \vspace{0.25em}\ldots I was scared to travel alone. But I quickly realized that there's no need to be afraid. Leaving home and getting out of your comfort zone is an important part of growing up.\ldots\\
\hline
\end{tabular}
\caption{Sample image and corresponding paragraph from Asia Exchange}
\label{image-para-studyAbroad}
\end{subfigure}
\caption{Image-text semantic coherence in datasets.}
\end{figure*}
\section{Related Work}
Existing work that has studied associations between images and text can be categorized into the following areas.
\vspace{0.2cm}
\ourheading{Image Attribute Recognition}
High-level concepts in images lead to better results in Vision-to-Language problems~\cite{DBLP:conf/cvpr/WuSLDH16}.
Identifying image attributes is the starting point toward text-image alignment. To this end, several deep-learning based modern architectures detect concepts in images -- object recognition~\cite{DBLP:conf/nips/HoffmanGTHDGDS14, DBLP:conf/cvpr/RedmonF17, DBLP:conf/nips/RenHGS15}, scene recognition~\cite{DBLP:conf/nips/ZhouLXTO14}, activity recognition~\cite{DBLP:conf/iccv/GkioxariGM15, DBLP:conf/iccv/YaoJKLGF11, DBLP:conf/iccv/ZhaoMY17}. Since all these frameworks work with low-level image features like color, texture, and gradients, noise often creeps in, leading to incoherent or incorrect detections -- for example, a blue wall could be detected as ``ocean''. While some of the incoherence can be refined using background knowledge~\cite{DBLP:conf/wsdm/ChowdhuryTFW18}, considerable inaccuracy still remains. We leverage some frameworks from this category in our model to detect visual concepts in images.
\vspace{0.2cm}
\ourheading{Story Illustration} Existing research finds suitable images from a big image collection to illustrate personal stories~\cite{DBLP:journals/tomccap/JoshiWL06} or news posts~\cite{DBLP:conf/kes/SchwarzRCGGL10,DBLP:conf/semco/DelgadoMC10}. Traditionally, images are searched based on textual tags associated with image collections. Occasionally they use visual similarity measures to prune out images very similar to each other. More recent frameworks use deep neural networks to find suitable representative images for a story~\cite{DBLP:conf/cvpr/RaviWMSMK18}. Story Illustration only addresses the problem of image selection, whereas we solve two problems simultaneously:
image selection and image placement -- making a
joint decision on all pieces of the story.
\cite{DBLP:conf/cvpr/RaviWMSMK18} operates on small stories (5 sentences) with simple content,
and retrieves 1 image per sentence. Our stories are
much longer texts, the sentences are more complex,
and the stories refer to both general concepts and
named entities.
This makes our problem distinct.
We cannot systematically compare our full-blown model with
prior works on story illustration alone.
\vspace{0.2cm}
\ourheading{Multimodal Embeddings}
A popular method of semantically comparing images and text has been to map textual and visual features into a common space of multimodal embeddings~\cite{DBLP:conf/nips/FromeCSBDRM13, DBLP:journals/corr/VendrovKFU15, DBLP:conf/bmvc/FaghriFKF18}. Semantically similar concepts across modalities can then be made to occur nearby in the embedding space. Visual-Semantic-Embeddings (VSE) has been used for generating captions for the whole image~\cite{DBLP:conf/bmvc/FaghriFKF18}, or to associate textual cues to small image regions~\cite{DBLP:conf/cvpr/KarpathyL15} thus aligning text and visuals.
Visual features of image content -- for example color, geometry, aspect-ratio -- have also been used to align image regions to nouns (e.g. ``chair''), attributes (e.g. ``big''), and pronouns (e.g. ``it'') in corresponding explicit textual descriptions~\cite{DBLP:conf/cvpr/KongLBUF14}. However, alignment of small image regions to text snippets play little role in jointly interpreting the correlation between the whole image and a larger body of text. We focus on the latter in this work.
\vspace{0.2cm}
\ourheading{Image-text Comparison}
Visual similarity of images has been leveraged to associate single words~\cite{DBLP:journals/mta/ZhangQPF17} or commonly occurring phrases~\cite{DBLP:journals/pr/ZhouF15} to a cluster of images. While this is an effective solution for better indexing and retrieval of images, it can hardly be used for contextual text-image alignment. For example, an image with a beach scene may be aligned with either ``relaxed weekend'' or ``this is where I work best'' depending on the context of the full text.
Yet another framework combines visual features from images with verbose image descriptions to find semantically closest paragraphs in the corresponding novels~\cite{DBLP:conf/iccv/ZhuKZSUTF15}, looking at images and paragraphs in isolation. In a similar vein, \cite{DBLP:conf/ism/ChuK17} align images with one semantically closest sentence in the corresponding article for viewing on mobile devices. In contrast, we aim to generate a complete longer multimodal content to be read as a single unit. This calls for distinction between paragraphs and images, and continuity of the story-line.
\ourheading{Image Caption Generation}
Generation of natural language image descriptions is a popular problem at the intersection of computer vision, natural language processing, and artificial intelligence~\cite{DBLP:journals/jair/BernardiCEEEIKM16}.
Alignment of image regions to textual concepts is a prerequisite for generating captions. While most existing frameworks generate factual captions~\cite{DBLP:conf/icml/XuBKCCSZB15, DBLP:conf/accv/TanC16, DBLP:conf/cvpr/LuXPS17}, some of the more recent architectures venture into producing stylized captions~\cite{DBLP:conf/cvpr/GanGHGD17} and stories~\cite{DBLP:conf/iccv/ZhuKZSUTF15,DBLP:conf/cvpr/KrauseJKF17}. Methodologically they are similar to visual-semantic-embedding in that visual and textual features are mapped to the same multimodal embedding space to find semantic correlations. Encoder-decoder LSTM networks are the most common architecture used. An image caption can be considered as a precise, focused description of an image without much superfluous or contextual information. For example, the caption of an image of the Eiffel Tower would not ideally contain the author's detailed opinion of Paris. However, in a multimodal document, the paragraphs surrounding the image could contain detailed thematic descriptions. We try to capture the thematic indirection between an image and surrounding text, thus making the problem quite different from crisp caption generation.
\ourheading{Commonsense Knowledge for Story Understanding} One of the earliest applications of Commonsense Knowledge to interpret the connection between images and text is a photo agent which automatically annotated images from user's multi-modal (text and image) emails or web pages, while also inferring additional commonsense concepts~\cite{DBLP:conf/ah/LiebermanL02}. Subsequent works used commonsense reasoning to infer causality in stories~\cite{DBLP:conf/commonsense/WilliamsLW17}, especially applicable to question answering. The most commonly used database of commonsense concepts is ConceptNet~\cite{DBLP:conf/aaai/SpeerCH17}. We enhance automatically detected concepts in an image with relevant commonsense assertions. This often helps to capture more context about the image.
\section{Dataset and Problem Analysis}
\label{sec:analysis}
\subsection{Datasets}
\label{dataset}
To the best of our knowledge, there is no experimental dataset for text-image alignment, and existing datasets on image tagging
or image caption generation are not suitable in our setting.
We therefore compile and analyze
two datasets of blogs from Lonely Planet\footnote{\url{www.lonelyplanet.com/blog}} and Asia Exchange\footnote{\url{www.asiaexchange.org}}.
\squishlist
\item Lonely Planet:
2178 multimodal articles containing on average 20 paragraphs and 4.5 images per article. Most images are accompanied by captions. Figure~\ref{image-para-lonelyPlanet} shows two image-paragraph pairs from this dataset. Most of the images come from the author's personal archives and adhere strictly to the content of the article.
\item Asia Exchange: 200 articles about education opportunities in Asia, with an average of 13.5 paragraphs and 4 images per article. The images may adhere strongly to the content of the article (top image in Figure~\ref{image-para-studyAbroad}), or they may be generic stock images complying with the abstract theme (bottom image in Figure~\ref{image-para-studyAbroad}). Most images have captions.
\squishend
\ourheading{Text-Image Semantic Coherence}
To understand the specific nature of this data, we
had two annotators analyze the given placement of 50 randomly chosen images in articles from the Lonely Planet dataset. The annotators assessed whether the images were specific to the surrounding paragraphs as opposed to merely being relevant for entire articles. The annotators also determined
how many paragraphs each image specifically fit, and indicated the main reason for the given alignments. For this purpose, we defined 6 possibly overlapping meta-classes: (i) specific man-made entities such as monuments, buildings or paintings, (ii) natural objects such as lakes and mountains, (iii) general nature scenes such as fields or forest, (iv) human activities such as biking or drinking, (v) generic objects such as animals or cars, and (vi) geographic locations such as San Francisco or Rome.
\begin{table}
\begin{tabular}{p{0.2cm}p{4cm}p{2.8cm}}
& Criterion & \% of images \\ \hline
\multirow{3}{*}{\rotatebox{90}{Relevance}} & \begin{tabular}[c]{@{}l@{}}Placement specific to \\surrounding paragraphs
\end{tabular}
& 91\% \\
& Relevant text after image & 86\%\\
& Avg. \#relevant paragraphs & 1.65\\ [0.5em]
\hline
\multirow{6}{*}{\rotatebox{90}{Main reason}}
& Natural named objects & 9\%\\
& Human activities & 12\%\\
& Generic objects & 15\%\\
& General nature scenes & 20\% \\
& Man-made named objects & 21\%\\
& Geographic locations & 29\%\\ [0.5em] \hline
\vspace{-1em}
\end{tabular}
\caption{Analysis of image placement for 50 images from Lonely Planet travel blogs.}
\label{tbl:alignment-reasons}
\vspace{-0.6cm}
\end{table}
The outcome of the annotation is shown in Table~\ref{tbl:alignment-reasons}. As one can see, 91\% of the images were indeed more specifically relevant to surrounding text than to the article in general, and 86\% of these were placed before the relevant text. We therefore assume the paragraph following the image as ground truth.
As to the main reasons for this relevance, we observe quite a mix of reasons, with geographic locations being most important at 29\%, followed by man-made objects at 21\% and general nature scenes at 20\% and so on.
\begin{figure*}[t]
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}[t]{0.235\textwidth}
\includegraphics[width=\textwidth]{figures/ghandi.jpg}
\caption{\footnotesize{CV: country store, person, bench, lodge outdoor\\MAN: unassuming ashram, Mahatma Gandhi\\BD: Sabarmati Ashram}}
\end{subfigure}\hspace{0.015\textwidth}
\begin{subfigure}[t]{0.235\textwidth}
\includegraphics[width=\textwidth]{figures/shakespeare.jpg}
\caption{\footnotesize{CV: person, sunglasses, stage\\MAN: Globe Theatre, performance, Shakespeare, Spectators\\BD: Shakespeare's Globe\\CSK: show talent, attend concert, entertain audience}}
\end{subfigure}\hspace{0.015\textwidth}
\begin{subfigure}[t]{0.235\textwidth}
\includegraphics[width=\textwidth]{figures/uzes.jpg}
\caption{\footnotesize{CV: adobe brick, terra cotta, vehicle, table, village\\MAN: tiled rooftops\\BD: uzes languedoc, languedoc roussillon\\CSK: colony, small town}}
\end{subfigure}\hspace{0.015\textwidth}
\begin{subfigure}[t]{0.235\textwidth}
\includegraphics[width=\textwidth]{figures/GettyImages-162744609_full_cs-bfa5bf9b39de.jpg}
\caption{\footnotesize{CV: umbrella, beach\\MAN: white sands, Playa Ancon\\BD: ancon cuba, playa ancon\\CSK: sandy shore, vacation}}
\end{subfigure}
\vspace{-0.3cm}
\caption{Characterization of image descriptors: CV adds visual objects/scenes, MAN and BD add location details, CSK adds high-level concepts.}
\label{tagSources}
\end{figure*}
\subsection{Image Descriptors}
\label{features}
Based on the analysis in Table~\ref{tbl:alignment-reasons}, we consider the following kinds of tags for describing images:
\ourheading{Visual Tags (CV)}
State-of-the-art computer-vision methods for object and scene detection
yield visual tags from low-level image features.
We use three frameworks for this purpose.
First,
deep convolutional neural network based architectures like
LSDA~\cite{DBLP:conf/nips/HoffmanGTHDGDS14} and YOLO~\cite{DBLP:conf/cvpr/RedmonF17} are used to detect objects like \emph{person}, \emph{frisbee} or \emph{bench}. These models have been trained on ImageNet object classes and denote ``Generic objects'' from Table~\ref{tbl:alignment-reasons}. For stories, general scene descriptors like \emph{restaurant} or \emph{beach} play a major role, too. Therefore, our second
asset is scene detection, specifically from the MIT Scenes Database~\cite{DBLP:conf/nips/ZhouLXTO14}. Their publicly released pre-trained model ``Places365-CNN'', trained on 365 scene categories with about 5000 images per category, predicts scenes in images with corresponding confidence scores. We pick the most confident scene for each image. These constitute ``General nature scenes'' from Table~\ref{tbl:alignment-reasons}. Thirdly, since stories often abstract away from explicit visual concepts, a framework that incorporates generalizations and abstractions into visual detections~\cite{DBLP:conf/wsdm/ChowdhuryTFW18} is also leveraged.
For example, the concept ``hiking'' is supplemented with the concepts ``walking'' (a hypernym of ``hiking'' from WordNet) and ``fun'' (from the ConceptNet~\cite{DBLP:conf/aaai/SpeerCH17} assertion ``hiking, HasProperty, fun'').
\ourheading{User Tags (MAN)}
Owners of images often have additional knowledge about content and context -- for example,
activities or geographical information (``hiking near Lake Placid'') -- which, as Table~\ref{tbl:alignment-reasons} shows, plays a major role in text-image alignment.
In a downstream application, users would have the provision to specify tags for their images. For experimental purposes, we use the nouns and adjectives from the image captions in our datasets as a proxy for user tags.
\ourheading{Big-data Tags (BD)}
Big data and crowd knowledge allow to infer additional context that may not be visually apparent.
We utilize the Google reverse image search API\footnote{\url{www.google.com/searchbyimage}} to incorporate such tags. This API allows searching by image, and suggests tags based on those accompanying visually similar images in the vast web image repository\footnote{\label{web}The image descriptors will be made publicly available along with the dataset so that changes in web search results do not affect the reproducibility of our results.}. These tags often depict popular places and entities, such as ``Sabarmati Ashram'' or ``Mexico City insect market'', and thus constitute ``Natural named objects'', ``Man-made named objects'', as well as ``Geographic locations'' from Table~\ref{tbl:alignment-reasons}.
\ourheading{Commonsense Knowledge (CSK)}
Commonsense Knowledge can bridge the gap between visual and textual concepts~\cite{DBLP:conf/akbc/ChowdhuryTW16}.
We use the following ConceptNet relations to enrich the image tag space: \textit{used for, has property, causes, at location, located near, conceptually related to}.
As ConceptNet is somewhat noisy, subjective, and diverse, we additionally filter its concepts by \emph{informativeness} for a given image following~\cite{DBLP:conf/naacl/WuG13}. If the top-10 web search results of a CSK concept are semantically similar to the image context (detected image tags), the CSK concept is considered to be \emph{informative} for the image. For example, consider the image context ``\textit{hike, Saturday, waterproof boots}''. CSK derived from ``hike'' are \textit{outdoor activity}, and \textit{fun}. The top-10 Bing search results\textsuperscript{\ref{web}} for the concept \textit{outdoor activity} are semantically similar to the image context. However, those for the term \textit{fun} are semantically varied. Hence, \textit{outdoor activity} is more informative than \textit{fun} for this particular image. Cosine similarity between the mean vectors of the image context and the search results is used as a measure of semantic similarity.
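To make the filtering step concrete, the sketch below scores the informativeness of a CSK concept, assuming the top-10 search-result snippets for the concept have already been retrieved and tokenized, and that a word-embedding lookup (e.g.\ the word2vec model used throughout) is available. The helper names and the cut-off value are illustrative choices of this sketch, not fixed parameters of SANDI.
\begin{verbatim}
import numpy as np

def mean_vector(words, embed):
    # Average the embeddings of the in-vocabulary words; embed: word -> np.ndarray.
    vecs = [embed[w] for w in words if w in embed]
    return np.mean(vecs, axis=0) if vecs else None

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def is_informative(result_snippets, image_context, embed, cutoff=0.5):
    # result_snippets: tokenized top-10 web search results for the CSK concept.
    # image_context: the tags already detected for the image.
    ctx = mean_vector(image_context, embed)
    res = mean_vector([w for snippet in result_snippets for w in snippet], embed)
    if ctx is None or res is None:
        return False
    return cosine(ctx, res) >= cutoff
\end{verbatim}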
\vspace{0.5em}
\noindent
Figure~\ref{tagSources} shows examples for the different kinds of image tags.
\section{Model for Story-Images Alignment}
\label{models}
\noindent
Without substantial amounts of labeled
training data, there is no point in
considering machine-learning methods.
Instead, we tackle the task as
a
Combinatorial Optimization problem
in an unsupervised way.
Our \textit{story-images alignment} model constitutes an Integer Linear Program (ILP)
which jointly optimizes the placement of selected images within an article.
The main ingredient for this alignment is the pairwise similarity between images and units of text. We consider a paragraph as a text unit.
\newcommand{\srel}{\textit{srel}}
\ourheading{Text-Image Pairwise Similarity}
Given an image, each of the four kinds of descriptors of Section~\ref{features}
gives rise to a bag of features.
We use these features to compute
{\em text-image semantic relatedness scores} $\textit{srel}(i,t)$ for an image $i$ and a paragraph $t$.%
\begin{equation}
\label{srel}
srel(i, t) = cosine(\vec{i}, \vec{t})
\end{equation}
where $\vec{i}$ and $\vec{t}$ are the mean word embeddings for the image tags and the paragraph respectively. For images, we use all detected tags. For paragraphs, we consider only the top 50\% of concepts w.r.t. their TF-IDF ranking over the entire dataset. Both paragraph concepts and image tags capture unigrams as well as bigrams.
We use word embeddings from word2vec trained on Google News Corpus.
$srel(i,t)$ scores serve as weights for variables in the ILP.
Note that the model for text-image similarity is orthogonal to the combinatorial problem solved by the ILP.
The cosine similarity between concepts (as in Eq.~\ref{srel}) could easily be replaced by other similarity measures over a multimodal embedding space.
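For concreteness, a minimal sketch of Eq.~\ref{srel} is given below. It assumes a pretrained word2vec model loaded via gensim and that the paragraph has already been reduced to its top-ranked TF-IDF concepts; the file path and helper names are placeholders.
\begin{verbatim}
import numpy as np
from gensim.models import KeyedVectors

# Path to the pretrained Google News vectors is a placeholder.
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",
                                        binary=True)

def mean_vec(terms):
    # Mean embedding of unigram/bigram terms (bigrams joined with '_').
    vecs = [w2v[t] for t in terms if t in w2v]
    return np.mean(vecs, axis=0) if vecs else None

def srel(image_tags, paragraph_concepts):
    # Eq. (srel): cosine of the mean vectors of image tags and paragraph concepts.
    i_vec, t_vec = mean_vec(image_tags), mean_vec(paragraph_concepts)
    if i_vec is None or t_vec is None:
        return 0.0
    return float(np.dot(i_vec, t_vec) /
                 (np.linalg.norm(i_vec) * np.linalg.norm(t_vec)))
\end{verbatim}
The resulting scores, computed for every image--paragraph pair of a story, form the weight matrix used by the ILP below.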
\ourheading{Tasks} Our problem can be divided into
two
distinct tasks:
\squishlist
\item Image Selection -- to select relevant images from an image pool.
\item Image Placement -- to place selected images in the story.
\squishend
These two components are modelled into one ILP where Image Placement is achieved by maximizing an objective function, while the constraints dictate Image Selection. In the following subsections we discuss two flavors of our model consisting of one or both of the above tasks.
\vspace{-0.1cm}
\subsection{Complete Alignment}
\label{optimalAlignment}
{Complete Alignment} constitutes the problem of aligning {\em all images} in a given image pool with relevant text units of a story. Hence, only Image Placement is applicable.
For a story with $|T|$ text units and an associated image pool with $|I|$ images, the alignment of images $i\in I$ to text units $t\in T$ can be modeled as an ILP with the following definitions:
\vspace{0.5em}
\noindent
{\bf Decision Variables:} The following binary decision variables are introduced:\\
\smallskip
$X_{it} = 1$ if image $i$ should be aligned with text unit $t$, $0$ otherwise.
\vspace{0.5em}
\noindent
{\bf Objective:}
Select image $i$ to be aligned with text unit $t$ such that the semantic relatedness over all text-image pairs is maximized:
\begin{equation}
\label{objective}
max \bigg[ \sum\limits_{i\in I}\sum\limits_{t\in T}srel(i,t)X_{it}\bigg]
\end{equation}
where $srel(i, t)$ is the text-image semantic relatedness from Eq.~\ref{srel}.
\vspace{0.5em}
\noindent
{\bf Constraints:}
\vspace{-0.6cm}
\begin{multicols}{2}
\begin{equation}
\label{rowConstraint}
\mathlarger\sum\limits_i X_{it} \leq 1 \quad \forall t
\end{equation}
\break
\begin{equation}
\label{columnConstraint}
\mathlarger\sum\limits_t X_{it} = 1 \quad \forall i
\end{equation}
\end{multicols}
We make two
assumptions for text-image alignments:
no paragraph may be aligned with multiple images (\ref{rowConstraint}), and each image is used exactly once in the story (\ref{columnConstraint}). The former is an
observation from multimodal presentations on the web such as in blog posts
or brochures. The latter assumption is made based on the nature of our datasets, which are fairly
typical for web contents.
Both are designed as hard constraints that
a solution must satisfy.
In principle, we could relax them into
soft constraints by incorporating
violations as a loss-function penalty into the
objective function.
However, we do not pursue this further,
as typical web contents would indeed
mandate hard constraints.
Note also that the ILP has no
hyper-parameters; so it is completely
unsupervised.
\subsection{Selective Alignment}
\label{selectionAlingment}
{Selective Alignment} is the flavor of the model which {\em selects a
subset}
of thematically relevant images from a big image pool, and places them within the story. Hence, it constitutes both tasks -- Image Selection and Image Placement.
Along with the constraint in (\ref{rowConstraint}), Image Selection entails the following additional constraints:
\vspace{-0.6cm}
\begin{multicols}{2}
\begin{equation}
\label{columnConstraint_selection}
\mathlarger\sum\limits_t X_{it} \leq 1 \quad \forall i
\end{equation}
\break
\begin{equation}
\mathlarger\sum\limits_i\mathlarger\sum\limits_t X_{it} = b
\end{equation}
\end{multicols}
where $b$ is the budget for the number of images for the story.
$b$ may be trivially defined as the number of paragraphs in the story, following our assumption that each paragraph may be associated with a maximum of one image.
(\ref{columnConstraint_selection}) is an adjustment to (\ref{columnConstraint}) which implies that not all images from the image pool need to be aligned with the story.
The objective function from (\ref{objective}) rewards the
selection of best fitting images from the image pool.
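To make the optimization concrete, the following is a minimal sketch of both flavors using gurobipy, the Python interface to the Gurobi solver used in our experiments. It takes a dictionary of precomputed $\textit{srel}(i,t)$ scores; passing \texttt{budget=None} yields the Complete Alignment model of Section~\ref{optimalAlignment} with constraints (\ref{rowConstraint}) and (\ref{columnConstraint}), while an integer budget $b$ yields the Selective Alignment model with constraints (\ref{rowConstraint}), (\ref{columnConstraint_selection}) and the budget constraint. All identifiers are illustrative.
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def align(srel, budget=None):
    # srel: dict mapping (image, paragraph) -> semantic relatedness score.
    # budget=None: Complete Alignment (every image placed exactly once).
    # budget=b:    Selective Alignment (exactly b images selected and placed).
    images = sorted({i for i, _ in srel})
    paras = sorted({t for _, t in srel})
    model = gp.Model("sandi")
    x = model.addVars(images, paras, vtype=GRB.BINARY, name="x")
    # Objective: maximize total text-image semantic relatedness.
    model.setObjective(gp.quicksum(srel.get((i, t), 0.0) * x[i, t]
                                   for i in images for t in paras), GRB.MAXIMIZE)
    # At most one image per paragraph.
    model.addConstrs(x.sum('*', t) <= 1 for t in paras)
    if budget is None:
        # Every image is used exactly once.
        model.addConstrs(x.sum(i, '*') == 1 for i in images)
    else:
        # Each image is used at most once, and exactly `budget` images in total.
        model.addConstrs(x.sum(i, '*') <= 1 for i in images)
        model.addConstr(x.sum() == budget)
    model.optimize()
    return [(i, t) for i in images for t in paras if x[i, t].X > 0.5]
\end{verbatim}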
\section{Quality Measures}
\label{qualityMeasures}
In this section we define metrics for automatic evaluation of text-image alignment models. The two tasks involved -- Image Selection and Image Placement -- call for separate evaluation metrics as discussed below.
\vspace{-0.2em}
\subsection{Image Selection}
\label{image_selection_metrics}
Representative images for a story are selected from a big pool of images. There are multiple conceptually similar images in our image pool since they have been gathered from blogs of the domain ``travel''.
Hence evaluating the results on strict precision (based on exact matches between selected and ground-truth images) does not necessarily assess true quality.
We therefore define a relaxed precision metric (based on semantic similarity) in addition to the strict metric. Given a set of selected images $I$ and the set of ground truth images $J$, where $|I| = |J|$, the precision metrics are:
\begin{equation}
\label{semPrec}
Relaxed Precision = \frac{\sum\limits_{\substack{i\in I}} \max\limits_{\substack{j\in J}}(cosine(\vec{i}, \vec{j}))}{|I|}
\end{equation}%
\begin{equation}
\label{strictPrec}
Strict Precision = \frac{|I\cap J|}{|I|}
\end{equation}
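Both variants are straightforward to compute once every image is represented by the mean embedding of its tags (Section~\ref{features}); the sketch below is illustrative and the names are ours.
\begin{verbatim}
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def strict_precision(selected_ids, gt_ids):
    # Fraction of selected images that exactly match a ground-truth image.
    return len(set(selected_ids) & set(gt_ids)) / len(selected_ids)

def relaxed_precision(selected_vecs, gt_vecs):
    # Each selected image is credited with its best cosine match against any
    # ground-truth image; vectors are mean embeddings of the image tags.
    return sum(max(cosine(i, j) for j in gt_vecs)
               for i in selected_vecs) / len(selected_vecs)
\end{verbatim}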
\subsection{Image Placement}
\label{image_placement_metrics}
For each image in a multimodal story, the ground truth (GT) paragraph is assumed to be the one following the image in our datasets.
To evaluate the quality of SANDI's text-image alignments, we compare the GT paragraph and the paragraph assigned to the image by SANDI (henceforth referred to as ``aligned paragraph'').
We propose the following metrics for evaluating the quality of alignments:
\vspace{0.2cm}
\ourheading{BLEU and ROUGE} BLEU and ROUGE are classic n-gram-overlap-based metrics for evaluating machine translation and text summarization. Although known to be limited insofar as they do not recognize synonyms and semantically equivalent formulations, they are in widespread use.
We consider them as basic measures of concept overlap between GT and aligned paragraphs.
\vspace{0.2cm}
\ourheading{Semantic Similarity}
To alleviate the shortcoming of requiring exact matches, we consider a metric based on embedding similarity. We compute the similarity between two text units $t_i$ and $t_j$ by the average similarity of their word embeddings, considering all
unigrams and bigrams as words.
\begin{equation}
SemSim(t_i,t_j) = cosine(\vec{t_i}, \vec{t_j})
\label{semRel}
\vspace{0.2cm}
\end{equation}
where $\vec{x}$ is the mean vector of words in $x$.
For this calculation, we drop uninformative words
by keeping only the top 50\% with regard to their TF-IDF weights over the whole dataset.
\vspace{0.2cm}
\ourheading{Average Rank of Aligned Paragraph} We associate each paragraph in the story with a ranked list of all the paragraphs on the basis of semantic similarity (Eq.~\ref{semRel}), where rank 1 is the paragraph itself. Our goal is to produce alignments that rank as high as possible with respect to the GT paragraph. The average rank of the alignments produced by a method is computed as follows:
\begin{equation}
\mathit{ParaRank} = 1 - \bigg[ \bigg(\frac{\sum\limits_{t\in T'} rank(t)}{|I|} -1\bigg)\bigg/ \bigg(|T| - 1 \bigg)\bigg]
\vspace{0.2cm}
\end{equation}
where $|I|$ is the number of images and $|T|$ is the number of paragraphs in the article. ${T}'\subset {T}$ is the set of paragraphs aligned to images.
Scores are normalized between 0 and 1; 1 being the perfect alignment and 0 being the worst alignment.
\vspace{0.2cm}
\ourheading{Order Preservation} Most stories
follow a storyline. Images placed at meaningful spots within the story would ideally adhere to this sequence. Hence the measure of pairwise ordering provides a sense of respecting the storyline. Let us define the set of order-preserving image pairs as:
$P = \{(i,i'): i,i'\in I, i\neq i', i' \text{ follows } i$ in both GT and SANDI alignments$\}$, where $I$ is the set of images in the story. The measure is then the number of order-preserving image pairs normalized by the total number of GT ordered image pairs.
\begin{equation}
\mathit{OrderPreserve} = \frac{|P|}{ (|I| (|I| - 1)/2)}
\vspace{0.2cm}
\end{equation}
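A small sketch of the two rank- and order-based measures is given below; \texttt{sem\_sim} can be any paragraph similarity, in our case Eq.~\ref{semRel} computed with the same mean-embedding routine as the $\textit{srel}$ sketch above, and all names are illustrative.
\begin{verbatim}
def para_rank(alignments, paragraphs, sem_sim):
    # alignments: list of (gt_paragraph, aligned_paragraph) pairs, one per image.
    # paragraphs: all paragraphs of the article; sem_sim: similarity function.
    ranks = []
    for gt, aligned in alignments:
        ordered = sorted(paragraphs, key=lambda p: sem_sim(gt, p), reverse=True)
        ranks.append(ordered.index(aligned) + 1)  # rank 1 = the GT paragraph itself
    avg_rank = sum(ranks) / len(ranks)
    return 1.0 - (avg_rank - 1.0) / (len(paragraphs) - 1.0)

def order_preserve(gt_order, aligned_order):
    # Both arguments list the same images, in ground-truth and predicted order.
    pos = {img: k for k, img in enumerate(aligned_order)}
    n = len(gt_order)
    preserved = sum(1 for a in range(n) for b in range(a + 1, n)
                    if pos[gt_order[a]] < pos[gt_order[b]])
    return preserved / (n * (n - 1) / 2)
\end{verbatim}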
\begin{table*}[t]
\begin{minipage}{0.5\linewidth}
\centering
\caption{Complete Alignment on the Lonely Planet dataset.}
\begin{tabular}{l|lllll}
\hline
& \rotatebox{60}{BLEU} & \rotatebox{60}{ROUGE} & \rotatebox{60}{SemSim} & \rotatebox{60}{ParaRank} & \rotatebox{60}{\makecell{Order\\ Preserve}}\\ \hline
Random & 3.1 & 6.9 & 75.1 & 50.0 & 50.0\\
VSE++ \cite{DBLP:conf/bmvc/FaghriFKF18} & 11.0 & 9.5 & 84.6 & 59.1 & 55.2\\% & 74.6 \\
VSE++ ILP & 12.56 & 11.23 & 83.98 & 58.08 & 47.93 \\[0.2em]
\hdashline
SANDI-CV & 18.2 & 17.6 & 86.3 & 63.7 & 54.5 \\% & 83.5 \\
SANDI-MAN & \textbf{45.6} & \textbf{44.5} & \textbf{89.8} & 72.5 & \textbf{77.4} \\% & \textbf{84.0} \\
SANDI-BD & 26.6 & 25.1 & 84.7 & 61.3 & 61.2 \\%& 83.7 \\
SANDI{\LARGE $\ast$} & 44.3 & 42.9 & 89.7 & \textbf{73.2} & 76.3 \\% & 83.5 \\
\hline
\end{tabular}
\label{evaluation}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\centering
\caption{Complete Alignment on the Asia Exchange dataset.}
\begin{tabular}{l|lllll}
\hline
& \rotatebox{60}{BLEU} & \rotatebox{60}{ROUGE} & \rotatebox{60}{SemSim} & \rotatebox{60}{ParaRank} & \rotatebox{60}{\makecell{Order\\Preserve}}\\%
\hline
Random & 6.8 & 8.9 & 70.8 & 50.0 & 50.0\\
VSE++ \cite{DBLP:conf/bmvc/FaghriFKF18} & 19.4 & 17.7 & 85.7 & 51.9 & 48.0 \\
VSE++ ILP & 23.5 & 20.11 & 85.98 & 52.55 & 46.13\\[0.2em]
\hdashline
SANDI-CV & 21.5 & 20.6 & 87.8 & 58.4 & 52.0 \\
SANDI-MAN & \textbf{35.2} & \textbf{32.2} & 89.2 & 61.5 & 61.5 \\
SANDI-BD & 24.1 & 22.3 & 86.7 & 56.0 & 53.6 \\
SANDI{\LARGE $\ast$} & 33.4 & 31.5 & \textbf{89.7} & \textbf{62.4} & \textbf{62.5} \\
\hline
\end{tabular}
\label{evaluationStudyAbroad}
\end{minipage}
\end{table*}
\section{Experiments and Results}
We evaluate the two flavors of SANDI -- Complete Alignment and Selective Alignment -- based on the quality measures described in Section~\ref{qualityMeasures}.
\subsection{Setup}
\label{exp-setup}
\ourheading{Tools}
Deep convolutional neural network based architectures similar to LSDA~\cite{DBLP:conf/nips/HoffmanGTHDGDS14}, YOLO~\cite{DBLP:conf/cvpr/RedmonF17}, VISIR~\cite{DBLP:conf/wsdm/ChowdhuryTFW18}
and Places-CNN~\cite{DBLP:conf/nips/ZhouLXTO14} are used as sources of \emph{Visual tags}. Google reverse image search tag suggestions are used as \emph{Big-data tags}. We use the Gurobi Optimizer for solving the ILP.
A Word2Vec~\cite{DBLP:conf/nips/MikolovSCCD13} model trained on the Google News Corpus encompasses a large cross-section of domains, and hence is a well-suited source of word embeddings for our purposes.
\vspace{0.5em}
\ourheading{SANDI Variants}
The variants of our text-image alignment model are based on the use of image descriptors from Section~\ref{features}.%
\squishlist
\item SANDI-CV, SANDI-MAN, and SANDI-BD use CV, MAN, and BD tags as image descriptors respectively.
\item SANDI{\LARGE $\ast$} combines tags from all sources.
\item +CSK: With this setup we study the role of commonsense knowledge as a bridge between visual features and textual features.
\squishend
\ourheading{Alignment sensitivity}
The degree to which alignments are specific to certain paragraphs varies from article to article. For some articles, alignments have little specificity, for instance, when the whole article talks about a hiking trip, and images generically show forests and mountains. We measure alignment sensitivity of articles by
comparing the semantic relatedness of an image to its ground-truth paragraph
against all other paragraphs in the same article.
We use the cosine similarity between the image's vector of MAN tags and
the text vectors, for this purpose.
The alignment sensitivity of an article then is the average of these similarity scores
over all its images.
We restrict our experiments to the top-100 most alignment-sensitive articles in each dataset.
\subsection{Complete Alignment}
\label{completeAlignment}
In this section we evaluate our Complete Alignment model (defined in Section~\ref{optimalAlignment}), which places \textit{all} images from a given image pool within a story.
\ourheading{Baselines}
To the best of our knowledge, there is no existing work on story-image alignment in the literature. Hence we modify methods on joint visual-semantic-embeddings (VSE) \cite{DBLP:journals/corr/KirosSZ14, DBLP:conf/bmvc/FaghriFKF18} to serve as baselines. Our implementation of VSE is similar to ~\cite{DBLP:conf/bmvc/FaghriFKF18}, henceforth referred to as VSE++.
We compare SANDI with the following baselines:
\squishlist
\item Random: a simple baseline with random image-text alignments.
\item VSE++ Greedy or simply VSE++: for a given image, VSE++ is adapted to produce a ranked list of paragraphs from the corresponding story. The best ranked paragraph is considered as an alignment, with a greedy constraint that one paragraph can be aligned to at most one image.
\item VSE++ ILP: using cosine similarity scores between image and paragraph from the joint embedding space, we solve an ILP for the alignment with the same constraints as that of SANDI.
\squishend
Since there are no existing story-image alignment datasets, VSE++ has been trained on the MSCOCO captions dataset \cite{DBLP:conf/eccv/LinMBHPRDZ14}, which contains 330K images with 5 captions per image.
\vspace{0.5em}
\ourheading{Evaluation}
Tables~\ref{evaluation} and \ref{evaluationStudyAbroad} show the performance of SANDI variants across the different evaluation metrics (from Section~\ref{image_placement_metrics}) on the Lonely Planet and Asia Exchange datasets respectively.
On both datasets, SANDI outperforms VSE++, especially in terms of paragraph rank (+14.1\%/+10.5\%) and order preservation (+11.1\%/+14.5\%). While VSE++ looks at each image in isolation, SANDI captures context better by considering all text units of the article and all images from the corresponding album at once in a constrained optimization problem.
VSE++ ILP, although closer to SANDI in methodology, does not outperform SANDI.
The success of SANDI can also be attributed to the fact that it is less tied to a particular type of images and text, relying only on word2vec embeddings that are trained on a much larger corpus than MSCOCO.
On both datasets, SANDI-MAN is the single best configuration, while the combination, SANDI{\LARGE $\ast$} marginally outperforms it on the Asia Exchange dataset. The similarity of scores across both datasets highlights the robustness of the SANDI approach.
\vspace{0.5em}
\ourheading{Role of Commonsense Knowledge}
While in alignment-sensitive articles the connections between paragraphs and images are often immediate, this is less the case for articles with low alignment sensitivity. Table~\ref{evaluation-csk} shows the impact of adding common sense knowledge on the 100 least alignment sensitive articles from the Lonely Planet dataset. As one can see, adding CSK tags leads to a minor improvement in terms of semantic similarity (+0.1/+0.4\%), although the improvement is too small to argue that CSK is an important ingredient in text-image alignments.
\begin{table}[t]
\centering
\caption{Role of Commonsense Knowledge.}
\begin{tabular}{p{2cm}|p{2cm}p{1.5cm}p{1.5cm}}
\hline
& &{Standard} &{+CSK}\\
\hline
\multirow{2}{*}{SANDI-CV} & \emph{SemSim} & 86.2 & \textbf{86.3}\\
& \emph{ParaRank} & 59.9 & 59.7 \\[0.2em]
\hline
\multirow{2}{*}{SANDI-MAN} & \emph{SemSim} & 85.1 & \textbf{85.5}\\
& \emph{ParaRank} & 53.8 & \textbf{55.0} \\[0.1em]
\hline
\end{tabular}
\label{evaluation-csk}
\end{table}
\begin{table*}[t]
\begin{minipage}{0.5\linewidth}
\centering
\caption{Image Selection on the Lonely Planet dataset.}
\vspace{-0.2cm}
\begin{tabular}{l|l|llll}
\hline
{\makecell{Tag Space}} & Precision & {Random} & {NN} & {VSE++} & {SANDI}\\
\hline
\multirow{2}{*}{CV} & \small{$Strict$} & 0.4 & 2.0 & 1.14 & \textbf{4.18}\\
& \small{$Relaxed$} & 42.16 & 52.68 & 29.83 & \textbf{53.54}\\
\hline
\multirow{2}{*}{MAN} & \small{$Strict$} & 0.4 & 3.95 & - & \textbf{14.57}\\
& \small{$Relaxed$} & 37.14 & 42.73 & & \textbf{49.65}\\
\hline
\multirow{2}{*}{BD} & \small{$Strict$} & 0.4 & 1.75 & - & \textbf{2.71}\\
& \small{$Relaxed$} & 32.59 & 37.94 & & \textbf{38.86}\\
\hline
\multirow{2}{*}{\Huge{$\ast$}} & \small{$Strict$} & 0.4 & 4.8 & - & \textbf{11.28}\\
& \small{$Relaxed$} & 43.84 & 50.06 & & \textbf{54.34}\\
\hline
\end{tabular}
\label{storyIllustration-lonelyPlanet}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\centering
\caption{Image Selection on the Asia Exchange dataset.}
\vspace{-0.2cm}
\begin{tabular}{l|l|llll}
\hline
{\makecell{Tag Space}} & Precision & {Random} & {NN} & {VSE++} & {SANDI}\\
\hline
\multirow{2}{*}{CV} & \small{$Strict$} & 0.45 & 0.65 & 0.44 & \textbf{0.79}\\
& \small{$Relaxed$} & 55.0 & \textbf{57.64} & 30.05 & 57.2\\
\hline
\multirow{2}{*}{MAN} & \small{$Strict$} & 0.45 & 0.78 & - & \textbf{3.42}\\
& \small{$Relaxed$} & 40.24 & 52.0 & & \textbf{52.87}\\
\hline
\multirow{2}{*}{BD} & \small{$Strict$} & 0.45 & 0.82 & - & \textbf{0.87}\\
& \small{$Relaxed$} & 31.12 & \textbf{33.27} & & 33.25\\
\hline
\multirow{2}{*}{\Huge{$\ast$}} & \small{$Strict$} & 0.45 & 1.04 & - & \textbf{1.7}\\
& \small{$Relaxed$} & 55.68 & 58.1 & & \textbf{58.2}\\
\hline
\end{tabular}
\label{storyIllustration-asiaExchange}
\end{minipage}%
\vspace{-0.3cm}
\end{table*}
\begin{table}[b]
\centering
\caption{Selective Alignment on the Lonely Planet dataset.}
\begin{tabular}{l|llll}
\hline
& \rotatebox{60}{BLEU} & \rotatebox{60}{ROUGE} & \rotatebox{60}{SemSim} & \rotatebox{60}{ParaRank} \\ \hline
Random & 0.31 & 0.26 & 69.18 & 48.16 \\
VSE++ \cite{DBLP:conf/bmvc/FaghriFKF18} & 1.04 & 0.8 & 79.18 & 53.09 \\
VSE++ ILP & 1.23 & 1.03 & 79.04 & 53.96 \\[0.2em]
\hdashline
SANDI-CV & 1.70 & 1.60 & 83.76 & 61.69 \\
SANDI-MAN & \textbf{8.82} & \textbf{7.40} & 82.95 & 66.83 \\
SANDI-BD & 1.77 & 1.69 & \textbf{84.66} & \textbf{76.18} \\
SANDI{\LARGE $\ast$} & 6.82 & 6.57 & 84.50 & 75.84 \\
\hline
\end{tabular}
\label{evaluation-selectiveAlignment-lonelyPlanet}
\end{table}
\subsection{Selective Alignment}
This variation of our model, as defined in Section~\ref{selectionAlingment}, solves two problems simultaneously -- selection of representative images for the story from a big pool of images, and placement of the selected images within the story. The former sub-problem relates to the topic of ``Story Illustration''~\cite{DBLP:journals/tomccap/JoshiWL06, DBLP:conf/kes/SchwarzRCGGL10, DBLP:conf/semco/DelgadoMC10, DBLP:conf/cvpr/RaviWMSMK18}, but work along these lines
has focused on very short texts with simple content
(in contrast to the long and content-rich stories in our datasets).\\
\vspace{-0.2em}
\noindent
\textbf{\large6.3.1 \hspace{0.6em} Image Selection}
\ourheading{Setup} In addition to the setup described in Section~\ref{exp-setup}, following are the requirements for this task:
\squishlist
\item Image pool -- We pool images from all stories in the slice of the dataset we use in our experiments.
Stories from a particular domain -- e.g., travel blogs from Lonely Planet -- are largely quite similar.
This entails that images in the pool may also be very similar in content -- e.g., stories on \textit{hiking} contain images with similar tags like \textit{mountain, person, backpack}.
\item Image budget -- For each story, the number of images in the ground truth is considered as the image budget $b$ for Image Selection (Section~\ref{selectionAlingment}).
\squishend
\ourheading{Baselines}
We compare SANDI with the following baselines:
\squishlist
\item Random: a baseline of randomly selected images from the pool.
\item NN: a selection of nearest neighbors from a common embedding space of images and paragraphs.
Images are represented as centroid vectors of their tags,
and paragraphs are represented as centroid vectors of their distinctive concepts. The basic vectors are obtained from Word2Vec trained on Google News Corpus.
\item VSE++: state-of-the-art on joint visual-textual embeddings; the method presented in \cite{DBLP:conf/bmvc/FaghriFKF18} is
adapted to retrieve the top-$b$ images for a story.
\squishend
\vspace{-0.2em}
\ourheading{Evaluation}
We evaluate Image Selection by the measures in Section~\ref{image_selection_metrics}.
Table~\ref{storyIllustration-lonelyPlanet} and Table~\ref{storyIllustration-asiaExchange} show
the results for Story Illustration, that is, image selection, for SANDI and the baselines. For the Lonely Planet dataset (Table~\ref{storyIllustration-lonelyPlanet}),
a pool of 500 images was used. We study the effects of a bigger image pool (1000 images) in our experiments with the Asia Exchange dataset (Table~\ref{storyIllustration-asiaExchange}). As expected, average strict precision (exact matches with ground truth) drops. Recall from Section~\ref{dataset} that the Asia Exchange dataset often has stock images for general illustration rather than only story-specific images. Hence the average relaxed precision on image selection is higher.
The nearest-neighbor baseline (NN) and SANDI both use Word2Vec embeddings for text-image similarity. SANDI's better scores are attributed to the joint optimization over the entire story, as opposed to the greedy selection in the case of NN. VSE++ uses a joint text-image embedding space for similarity scores.
The results in the tables clearly show SANDI's advantages over the baselines.
Our evaluation metric \textit{Relaxed Precision} (Eq.~\ref{semPrec}) factors in the
semantic similarity between images, which in turn depends on the image descriptors (Section~\ref{features}). Hence we compute results on the different image tag spaces, where `{\LARGE $\ast$}' refers to the combination of CV, MAN, and BD. The baseline VSE++, however, operates only on
visual features; hence we report its performance only for CV tags.
\setfillcolor{white}
\setbordercolor{green}
\begin{figure*}[t]
\setlength\tabcolsep{2pt}
\begin{tabular}{lllllllll}
\hline
\rotatebox{90}{Story} &
\multicolumn{8}{l}{\makecell[l]{\small{Clatter into Lisbon's steep, tight-packed Alfama aboard a classic yellow tram...England. Ride a regular bus for a squeezed-in-with-the-natives view of}\\
\small{the metropolis...Venice, Italy...opting for a public vaporetto (water taxi) instead of a private punt...Hungary. Trundle alongside the Danube, with views}\\
\small{up to the spires and turrets of Castle Hill...Istanbul, Turkey...Travel between Europe and Asia...Ferries crossing the Bosphorus strait...Sail at sunset...} \\
\small{Monte Carlo's electric-powered ferry boats...The `Coast Tram' skirts Belgium's North Sea shoreline...Pretty but pricey Geneva...travel on buses,} \\
\small{trams and taxi-boats...Liverpool, England...Hop aboard Europe's oldest ferry service...just try to stop yourself bursting into song.}}} \vspace{0.1cm}\\
\hline
\rotatebox{90}{GT} &
\includegraphics[width=2.0cm]{figures/storyIllustration/GT-8.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/GT-5.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/GT-1.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/GT-3.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/GT-2.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/GT-4.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/GT-7.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/GT-6.jpg}\vspace{0.02cm}\\
\rotatebox{90}{NN} &
\includegraphics[width=2.0cm]{figures/storyIllustration/NN-7.jpg} &
\tikzmarkin{a}(0.1,-0.1)(-0.05,1.45)
\includegraphics[width=2.0cm]{figures/storyIllustration/NN-3.jpg}\tikzmarkend{a} &
\includegraphics[width=2.0cm]{figures/storyIllustration/NN-2.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/NN-4.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/NN-6.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/NN-1.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/NN-5.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/NN-8.jpg}
\vspace{0.06cm}\\
\rotatebox{90}{VSE++} &
%
\includegraphics[width=2.0cm]{figures/storyIllustration/VSE-2.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/VSE-5.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/VSE-8.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/VSE-4.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/VSE-7.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/VSE-6.jpg} &
\tikzmarkin{b}(0.1,-0.1)(-0.05,1.45)
\includegraphics[width=2.0cm]{figures/storyIllustration/VSE-1.jpg}\tikzmarkend{b} &
\includegraphics[width=2.0cm]{figures/storyIllustration/VSE-3.jpg}
\vspace{0.02cm}\\
\rotatebox{90}{SANDI} &
\tikzmarkin{c}(0.1,-0.1)(-0.05,1.45)
\includegraphics[width=2.0cm]{figures/storyIllustration/SANDI-3.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/SANDI-4.jpg}\tikzmarkend{c}&
\includegraphics[width=2.0cm]{figures/storyIllustration/SANDI-2.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/SANDI-7.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/SANDI-8.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/SANDI-6.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/SANDI-1.jpg} &
\includegraphics[width=2.0cm]{figures/storyIllustration/SANDI-5.jpg}
\vspace{0.02cm}
\\
\hline
\vspace{-1.7em}
\end{tabular}
\caption{Image Selection. Images within green boxes are exact matches with ground truth (GT). SANDI retrieves more exact matches than the baselines (NN, VSE++). SANDI's non-exact matches are also much more thematically similar.}
\label{selectiveAlignmentAnecdote}
\end{figure*}
Figure~\ref{selectiveAlignmentAnecdote} shows anecdotal image selection results for one story. The original story contains 17 paragraphs; only the main concepts from the story have been retained in the figure for readability. SANDI is able to retrieve 2
ground-truth images out of 8, while the baselines retrieve 1 each. Note that the remaining non-exact matches retrieved by SANDI are also thematically similar. This can be attributed to the wider space of concepts that SANDI explores through the different types of image descriptors described in Section~\ref{features}.
\vspace{0.5cm}
\noindent
\textbf{\large6.3.2 \hspace{0.6em} Image Placement}
\vspace{0.5em}
\noindent
Having selected thematically related images from a big image pool, SANDI places them within contextual paragraphs of the story.
Note that SANDI actually integrates the Image Selection and Image Placement
stages into joint inference on selective alignment seamlessly,
whereas the baselines operate in two sequential steps.
We evaluate the alignments by the measures from Section~\ref{image_placement_metrics}.
Note that the measure \textit{OrderPreserve} does not apply to Selective Alignment since the images are selected from a pool of mixed images which cannot be ordered.
Tables~\ref{evaluation-selectiveAlignment-lonelyPlanet} and ~\ref{evaluation-selectiveAlignment-asiaExchange} show results for the Lonely Planet and Asia Exchange datasets respectively.
We observe that SANDI outperforms the baselines by a clear
margin, harnessing its more expressive pool of tags.
This holds for all the different metrics (to various degrees).
We show anecdotal evidence of the diversity of our image descriptors in Figure~\ref{tagSources} and Table~\ref{example-studyAbroad}.%
\begin{table}[b]
\centering
\caption{Selective Alignment on the Asia Exchange dataset.}
\begin{tabular}{l|lllll}
\hline
& \rotatebox{60}{BLEU} & \rotatebox{60}{ROUGE} & \rotatebox{60}{SemSim} & \rotatebox{60}{ParaRank} \\% & \rotatebox{60}{Aesthetics} \\
\hline
Random & 2.06 & 1.37 & 53.14 & 58.28 \\
VSE++ \cite{DBLP:conf/bmvc/FaghriFKF18} & 2.66 & 1.39 & 58.00 & 64.34 \\
VSE++ ILP & 2.78 & 1.47 & 57.65 & 64.29 \\[0.2em]
\hdashline
SANDI-CV & 1.04 & 1.51 & 60.28 & 75.42 \\
SANDI-MAN & \textbf{3.49} & \textbf{2.98} & 61.11 & \textbf{82.00} \\
SANDI-BD & 1.68 & 1.52 & \textbf{76.86} & 70.41 \\
SANDI{\LARGE $\ast$} & 1.53 & 1.84 & 64.76 & 80.57 \\
\hline
\end{tabular}
\label{evaluation-selectiveAlignment-asiaExchange}
\vspace{-0.1cm}
\end{table}
\subsection{Role of Model Components}
\ourheading{Image Descriptors}
Table~\ref{example-studyAbroad} shows alignments for a section of a single story from three SANDI variants. Each of the variants captures special characteristics of the images, hence aligning to different paragraphs. The paragraphs across variants are quite semantically similar. The highlighted key concepts bring out the similarities and the justification for the alignments. The wide variety of image descriptors that SANDI leverages (CV, BD, MAN, CSK) is unavailable to VSE++, contributing to the latter's poorer performance.
\ourheading{Embeddings}
The nature of the embeddings is decisive for alignment quality. Joint visual-semantic embeddings trained on MSCOCO (used by VSE++) fall short in capturing high-level semantics between images and story. Word2Vec embeddings trained on the much larger and domain-independent Google News corpus better represent high-level image-story interpretations.
\ourheading{ILP}
Combinatorial optimization (Integer Linear Programming) wins in performance over greedy optimization approaches. In
Tables~\ref{storyIllustration-lonelyPlanet} and \ref{storyIllustration-asiaExchange}
this phenomenon can be observed between NN (greedy) and SANDI (ILP). Both approaches make use of the same embedding space, with SANDI outperforming NN.
\begin{table*}[t]
\small
\centering
\caption{Example alignments. Highlighted texts show similar concepts between image and aligned paragraphs.}%
\begin{tabular}{p{4.1cm}|p{4.1cm}|p{4.1cm}|p{4.1cm}}
Image and detected concepts & SANDI-CV & SANDI-MAN & SANDI-BD \\[0.2em]
\hline
\makecell[l]{\raisebox{-\totalheight}{\includegraphics[width=4cm]{figures/hobbit-1920x1080.jpg}}\\[-1em]
\\
CV: \hl{cottage}, flower-pot, carrier\\%
MAN: Shire, \hl{Bilbo Baggins}\\
BD: \hl{New Zealand}, \hl{hobbit house}\\}\vspace{0.2em}
& \vspace{-1.7cm}Take advantage of your stay here and visit the memorable scenes shown in the movies. Visit The Shire and experience firsthand the 44 \hl{Hobbit Holes} from which Bilbo Baggins emerged to commence his grand adventure. Tongariro National Park is home to the feared Mount Doom in The Lord of the Rings. Other famous locations that you can visit are \hl{Christchurch}, Nelson and Cromwell.
&\vspace{-1.7cm} Home to \hl{hobbits}, warriors, orcs and dragons. If you're a fan of the famous trilogies, \hl{Lord of the Rings} and The Hobbit, then choosing New Zealand should be a no-brainer.
&\vspace{-1.7cm}Take advantage of your stay here and visit the memorable scenes shown in the movies. Visit The Shire and experience firsthand the 44 \hl{Hobbit Holes} from which Bilbo Baggins emerged to commence his grand adventure. Tongariro National Park is home to the feared Mount Doom in \hl{The Lord of the Rings}. Other famous locations that you can visit are Christchurch, Nelson and Cromwell.\\[0.4em]
\hline
\makecell[l]{\raisebox{-\totalheight}{\includegraphics[width=4cm]{figures/jean-pierre-brungs-105658-unsplash-1920x1080.jpg}}\\[-1em]
\\
CV: \hl{snowy mountains}, \hl{massif}, alpine \\glacier, mountain range\\
MAN: \hl{outdoor lover}, \hl{New Zealand}, \\study destination,
BD: \hl{New Zealand}}
& \vspace{-1.9cm}New Zealand produced the first man to ever climb \hl{Mount Everest} and also the creator of the \hl{bungee-jump}. Thus, it comes as no surprise that this country is filled with adventures and adrenaline junkies.
&\vspace{-1.9cm}Moreover, the \hl{wildlife} in \hl{New Zealand} is something to behold. Try and find a Kiwi! (the bird, not the fruit). They are nocturnal creatures so it is quite a challenge. New Zealand is also home to the smallest dolphin species, Hector's Dolphin. Lastly, take the opportunity to search for the beautiful yellow-eyed penguin.
&\vspace{-1.9cm}Home to hobbits, warriors, orcs and dragons. If you're a fan of the famous trilogies, Lord of the Rings and The Hobbit, then choosing \hl{New Zealand} should be a no-brainer.\\%[2em]
\hline
\makecell[l]{\raisebox{-\totalheight}{\includegraphics[width=4cm]{figures/debby-hudson-556522-unsplash-1920x1200.jpg}}\\[-1em]
\\
CV: cup, \hl{book}, knife, art gallery,\\cigarette holder,
MAN: \hl{New Zealand},\\
\hl{educational rankings},
BD: \hl{books}}
& \vspace{-1.8cm}\hl{Study} in New Zealand and your \hl{CV} will gain an instant boost when it is time to go \hl{job} hunting.
&\vspace{-1.8cm} The land of the \hl{Kiwis} consistently tops \hl{educational rankings} and has many top ranking universities, with 5 of them in the top 300 in the world. Furthermore, teachers are highly educated and more often than not researchers themselves. Active participation, creativity and understanding of different perspectives are some of the many qualities you can pick up by studying in \hl{New Zealand}.
&\vspace{-1.8cm}New Zealand has countless \hl{student}-friendly cities. Therefore, it should hardly come as a surprise that New Zealand is constantly ranked as a top \hl{study} abroad destination. To name but a few cities: Auckland, North Palmerston, Wellington and Christchurch all have fantastic services and great \hl{universities} where you can study and live to your heart's content.\\[0.5em]
\hline
\end{tabular}%
\label{example-studyAbroad}%
\end{table*}%
\section{Conclusion}
In this paper we have introduced the problem of story-images alignment -- selecting and placing a set of representative images for a story within contextual paragraphs. We analyzed features for meaningful alignments on two real-world multimodal datasets -- Lonely Planet and Asia Exchange blogs -- and defined measures to evaluate text-image alignments. We presented SANDI, a methodology that automates such alignments by solving a constrained optimization problem maximizing semantic coherence between text-image pairs jointly for the entire story. Evaluations show that SANDI produces alignments which are semantically meaningful. Nevertheless, some follow-up questions arise.
\ourheading{Additional Features}
While our feature space covers the most natural aspects, in downstream applications additional image metadata such as GPS locations or timestamps may be available. GPS locations may provide cues for geographic named entities, reducing the reliance on user-provided tags. Timestamps might prove useful for temporal aspects of a storyline.
\ourheading{Abstract and Metaphoric Relations} Our alignments were focused largely on visual and contextual features. We do not address stylistic elements like metaphors and sarcasm in text, which would entail more challenging alignments. For example, the text ``the news was a dagger to his heart'' should not be paired with a picture of a dagger. Although user knowledge may provide some cues towards such abstract relationships,
a deeper understanding of semantic coherence is desired.
\vspace{0.3em}
The datasets used in this paper, along with the image descriptors, will be made available.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
College sports organized by the National Collegiate Athletic Association (NCAA) are very popular in the United States, and the most popular sports include football and men's basketball. According to a survey conducted by Harris Interactive, NCAA inter-college competition attracts around 47 percent of Americans. In 2016, about 31 million followers attended a college sports event. Meanwhile, this enormous following indicates a big business behind the sports events, which generated 740 million U.S. dollars in revenue from television and marketing rights in 2016.
Fans and media pay a lot of attention to head coaches' salaries and to the coaching changes each year. However, the analysis is mostly limited to unofficial news comments and informal discussion; a quantitative analysis of head coach hiring is lacking. Most head coaches used to be excellent players on their college teams. Therefore, the defensive/offensive techniques that they learned and became familiar with during their college years have an important impact on their later coaching careers. When a school $u$ hires a graduate of school $v$ as its head coach, $u$ implicitly makes a positive assessment of the quality of $v$'s sports program. By collecting this kind of pairwise assessment, we build the coach hiring networks and use network-based methods to analyze them over the years.
The contributions of our work are as follows: 1. We find high inequality in the coach hiring networks, meaning that most head coaches graduated from a small proportion of the schools. 2. Based on modularity optimization, we find geographic communities of schools; a graduate from one community is more likely to be hired as a head coach by a school in the same community. 3. Our coach production rankings show a general correlation with the authoritative Associated Press (AP) rankings, although some disparities exist. 4. We find a common within-division flow pattern in the division-level movements of coaches.
\section{Background}
The research article of A. Clauset, S. Arbesman, and D. B. Larremore, Systematic inequality and hierarchy in faculty hiring networks~\cite{clauset2015systematic}, analyzed academic faculty hiring networks across three disciplines: computer science, history, and business. We are curious whether this kind of inequality and hierarchy also exists in sports coach hiring networks. The work of Fast and Jensen~\cite{fast2006nfl} uses the NFL coaching network to identify notable coaches and to learn a model of which teams will make the playoffs in a given year; to identify notable coaches, their networks focus on the worked-under relationships between coaches. Although some papers have studied ranking sports teams based on their game results~\cite{park2005network,callaghan2007random}, none of them utilized the coach hiring network, which is in fact an assessment network built from the views of professional sports experts.
\section{Data Description}
From the official site of the NCAA~\cite{CoachData}, we collect a list of the head coaches (including retired ones) of the NCAA men's basketball and football teams, as well as their coaching career data, which include each coach's alma mater, graduation year, and the schools they worked for during their head coaching career. We only take head coaches into account because data for other positions, such as assistant coaches, are mostly incomplete, meaning that alma maters and graduation years are difficult to identify. We have also removed head coaches with missing alma mater or graduation year.
\begin{table}[h]
\centering
\begin{tabular}{|p{2.3cm}|p{1.4cm}|p{1.4cm}|}
\hline
& Football & Basketball \\
\hline
schools & 857 & 1214\\
\hline
head coach & 5744 & 6906\\
\hline
mean degree & 6.70 & 5.69\\
\hline
self-loops & 18.35\% & 18.98\%\\
\hline
mean hiring years & 6.33 & 7.03\\
\hline
data period & 1880-2012 & 1888-2013\\
\hline
\end{tabular}
\caption{Coach hiring networks data summary}
\label{table:NetData}
\end{table}
For each school $u$ that a coach has worked for, a directed edge is generated from the coach's alma mater $v$ to the school $u$. We then extract the schools connected by these edges. A brief summary of the network data is listed in Table~\ref{table:NetData}. Almost one-fifth of the head coaches eventually get a chance to serve at their alma maters, a proportion much higher than in the faculty hiring networks~\cite{clauset2015systematic}.
In addition, we collect the division attribute of each school~\cite{DivData} and the authoritative Associated Press (AP) rankings data for the two sports~\cite{APRankingData}.
\section{Experimental Methods and results}
\subsection{Head Coach Production Inequality}
We measure the inequality of the coach hiring networks since the two sports were introduced to colleges. Table~\ref{table:InequlData} summarizes the basic inequality measurements of our experiment.
\begin{table}[h]
\centering
\begin{tabular}{|p{2.3cm}|p{1.4cm}|p{1.4cm}|}
\hline
& Football & Basketball \\
\hline
vertices & 857 & 1214\\
\hline
edges & 5744 & 6906\\
\hline
50\% of coaches from & 14.24\% & 15.16\%\\
\hline
Gini, $G(k_o)$ & 0.59 & 0.58\\
\hline
Gini, $G(k_i)$ & 0.39 & 0.35\\
\hline
$k_o/k_i>1$ & 33.96\%(291) & 33.20\%(403)\\
\hline
\end{tabular}
\caption{Measures of inequality in the coach hiring networks: percentage of schools required to cover 50\% of the head coaches; Gini coefficients of production (out-degree) and hiring (in-degree); percentage of schools that produced more coaches than they hired}
\label{table:InequlData}
\end{table}
The Gini coefficient is the most commonly used measure of inequality: a Gini coefficient of zero means perfect equality, while maximal inequality leads to a Gini coefficient of one. Here we calculate the Gini coefficients of coach production (out-degree) and coach ``consumption'' (in-degree), respectively. We find that the Gini coefficients of coach production are close to 0.60, which indicates strong inequality; for comparison, the income distribution Gini index of South Africa estimated by the World Bank in 2011 is 0.63. Figure~\ref{fig:Lorenz} shows the Lorenz curve of coach production. From the curve, covering 50\% of the head coaches only requires around 15\% of the schools, which means that a small proportion of the schools produce a large share of the head coaches for all the NCAA members.
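To illustrate how these inequality measures are obtained, the following minimal sketch computes the Gini coefficient and the Lorenz curve from a list of coach production counts (out-degrees); the example list is a placeholder, not our dataset.

\begin{verbatim}
# Sketch: Gini coefficient and Lorenz curve from a degree sequence.
# The example degree list is a placeholder, not the real dataset.
import numpy as np

def gini(degrees):
    x = np.sort(np.asarray(degrees, dtype=float))
    n = x.size
    total = x.sum()
    # Standard formula: G = (2*sum_i i*x_(i))/(n*sum x) - (n+1)/n
    return (2.0 * np.sum(np.arange(1, n + 1) * x) / (n * total)
            - (n + 1.0) / n)

def lorenz(degrees):
    x = np.sort(np.asarray(degrees, dtype=float))
    cum = np.cumsum(x) / x.sum()
    # Share of schools (x-axis) vs. share of produced coaches (y-axis).
    return np.arange(1, x.size + 1) / x.size, cum

out_degrees = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40]   # placeholder counts
print(round(gini(out_degrees), 2))
\end{verbatim}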
\begin{figure}[htbp]
\centering
\includegraphics[height = 7cm, width = 8cm]{Lorenz.eps}
\caption{Coach Production Lorenz Curve}
\label{fig:Lorenz}
\end{figure}
\subsection{Community structure of Coach Hiring Networks}
Community structure is an important property of complex networks. Here we use the modularity optimization algorithm~\citep{blondel2008fast} to detect communities in the coach hiring networks.
In both the football and men's basketball coach hiring networks, the average modularity of the whole network is above 0.40, which indicates significant community structure~\cite{newman2004fast}. Both networks consist of 6 big communities, which include 97.8\% of the schools in the football network and 98.6\% of the schools in the men's basketball network.
We visualize the networks in Figure~\ref{fig:FootballGeoGraph} and Figure~\ref{fig:BasketballGeoGraph}, where each school is placed according to its longitude and latitude and the size of each node is proportional to its coach production (out-degree). In the figures, the top 6 biggest communities are assigned specific colors (to present the figures at a proper size, we remove several schools located in Hawaii, Alaska, and Puerto Rico). From the figures, we find that the community structure is indeed influenced by geography. Moreover, the biggest community in each of the two networks is located in the northeastern part of America (in purple).
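A minimal sketch of this community detection step is shown below. It assumes the hiring network is given as a list of (alma mater, hiring school) edges, collapses the directed multigraph into an undirected weighted graph (one common way to apply modularity optimization to directed hiring edges), and runs the Louvain algorithm available in recent versions of NetworkX; the edge list is a placeholder.

\begin{verbatim}
# Sketch: modularity-based communities of the coach hiring network.
# Assumes an edge list of (alma_mater, hiring_school) pairs; the
# directed multigraph is collapsed to an undirected weighted graph.
import networkx as nx

def detect_communities(edges, seed=0):
    G = nx.Graph()
    for alma_mater, school in edges:
        if G.has_edge(alma_mater, school):
            G[alma_mater][school]["weight"] += 1
        else:
            G.add_edge(alma_mater, school, weight=1)
    comms = nx.community.louvain_communities(G, weight="weight",
                                             seed=seed)
    Q = nx.community.modularity(G, comms, weight="weight")
    return sorted(comms, key=len, reverse=True), Q

# communities, Q = detect_communities(edge_list)  # edge_list: placeholder
\end{verbatim}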
\begin{figure*}[htbp]
\centering
\includegraphics[height = 9.6cm, width = 16.8cm]{1.pdf}
\caption{American Football Coach Hiring Network}
\label{fig:FootballGeoGraph}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[height = 9.6cm, width = 16.8cm]{2.pdf}
\caption{American Men's Basketball Coach Hiring Network}
\label{fig:BasketballGeoGraph}
\end{figure*}
\subsection{Correlation with the Authoritative Rankings}
\subsubsection{Temporal characteristic of coach hiring networks}
Our data set includes head coach hiring data roughly from 1880 to 2010. Figure~\ref{fig:GraDensity} shows the counts of coaches according to their graduation year. The distribution of the coach records over time is not even. In fact, the national headquarters of the NCAA was established in Kansas City, Missouri in 1952, so it is not surprising that there are many more head coach records after 1950.
\begin{figure}[htb]
\centering
\includegraphics[height = 6cm, width =8.6cm]{gradensity.eps}
\caption{Counts of Coach in Graduating Year}
\label{fig:GraDensity}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[height = 6cm, width =8.6cm]{growtime.eps}
\caption{Histogram of the time needed to become head coach}
\label{fig:GrowTime}
\end{figure}
Figure~\ref{fig:GrowTime} shows the counts of coaches by the ``growing'' time needed from graduation to a head coach position. The average time needed is 11.5 years in basketball and 14.6 years in football, which means that it takes longer for a graduate to become a football head coach than a basketball head coach. This also explains the scarcity of head coaches who graduated after 2000, since most of the potential head coaches from those cohorts have not yet grown into head coaching positions.
\subsubsection{Coach Production Ranking and Authoritative Rankings}
We want to find out whether strong teams foster potential future head coaches. We therefore generate coach production rankings from our dataset and calculate the correlation between these rankings and the authoritative rankings over the years.
Due to the temporal characteristics of the coach hiring records, we extract subnetworks from the whole coach hiring network that only include coaches who graduated within an interval $[t_s,t_e]$. Because the number of records per year is not evenly distributed, we enumerate $t_e$ from the latest graduation time backwards and calculate the corresponding $t_s$ of each interval, ensuring that each interval contains 30\% of the coaches. In total, we obtain 55 subnetworks for men's basketball and 62 subnetworks for football.
Based on these subnetworks, we try four different network-based methods to rank the schools: out-degree, MVRs~\cite{clauset2015systematic}, PageRank~\cite{brin1998anatomy}, and LeaderRank~\cite{lu2011leaders}. The simplest one is based on the out-degree (coach production) of each school. A minimum violation ranking (MVR) is a permutation $\pi$ that induces a minimum number of edges pointing ``up'' the ranking~\cite{clauset2015systematic}; this method tries to produce a ranking that minimizes the number of coaches who move down the hierarchy of schools. PageRank is a widely used node ranking method; to apply it to our dataset, we reverse the direction of the edges. The reversed edges represent ``votes'' from the hiring schools indicating which school's graduates are more welcome. More importantly, the algorithm also accounts for schools that produce only a few coaches but place them at powerful schools with high in-degree. LeaderRank is a recently proposed improvement of PageRank. The Pearson correlation coefficients between the four rankings are all above 0.84, which indicates that these rankings are highly similar to each other. Considering that PageRank can also identify important schools with low degree, we choose the traditional PageRank (PR) rankings as the coach production rankings.
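The ranking computation itself can be sketched as follows: the directed hiring graph is built from (alma mater, hiring school) edges, the edges are reversed so that hiring schools ``vote'' for the schools that trained their coaches, and standard PageRank is applied (the edge list is a placeholder).

\begin{verbatim}
# Sketch: coach production ranking via PageRank on reversed edges.
# Each edge originally points alma_mater -> hiring_school; reversing
# it lets hiring schools "vote" for the schools that trained coaches.
import networkx as nx

def coach_production_pagerank(edges, alpha=0.85):
    G = nx.DiGraph()
    for alma_mater, school in edges:
        w = G.get_edge_data(alma_mater, school, {"weight": 0})["weight"]
        G.add_edge(alma_mater, school, weight=w + 1)
    scores = nx.pagerank(G.reverse(copy=True), alpha=alpha,
                         weight="weight")
    return sorted(scores, key=scores.get, reverse=True)

# ranking = coach_production_pagerank(edge_list)  # edge_list: placeholder
\end{verbatim}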
We choose the authoritative Associated Press (AP) Poll rankings to compare with our coach production rankings because the poll's voting system is also based on the subjective opinions of experts. In addition, the AP Poll has a long history, which makes it suitable for comparison with our temporal dataset. We aggregate the AP rankings over every 20-year window using median rank aggregation~\citep{fagin2003efficient}. First, we build a list of the schools that received votes during the 20-year window. Then, for a given year of the 20-year period, an average rank of $(m + 1 + n)/2$ is assigned to the schools that did not receive votes ($m$ represents the number of schools that received votes in that year). Finally, we aggregate the 20 yearly rankings into a single ranking using median rank aggregation.
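Assuming that, for each year in the window, we already have a mapping from every school in the list to its rank (with unranked schools assigned the average rank described above), the aggregation step can be sketched as follows; the yearly inputs are placeholders.

\begin{verbatim}
# Sketch: median rank aggregation over a 20-year window. Each yearly
# ranking is a dict school -> rank, where unranked schools have been
# assigned the average rank beforehand.
from statistics import median

def median_rank_aggregation(yearly_rankings):
    schools = set().union(*yearly_rankings)
    med = {s: median(r[s] for r in yearly_rankings if s in r)
           for s in schools}
    # Break ties by school name for a deterministic ordering.
    return sorted(schools, key=lambda s: (med[s], s))

# aggregated = median_rank_aggregation(rankings_window)  # placeholder
\end{verbatim}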
\subsubsection{Results}
We use Kendall's $\tau$~\citep{kendall1938new} to measure the correlation between the coach production rankings and the aggregated AP rankings. The Kendall correlation between two variables is high when observations have similar ranks. We calculate Kendall's $\tau$ between an aggregated AP ranking $X$ and $Y$, the corresponding coach production ranks of the schools in $X$. Figure~\ref{fig:FB_Corr} and Figure~\ref{fig:BK_Corr} show the correlation results. The color of each point in the grid represents the value of Kendall's $\tau$ between an aggregated AP ranking (the $x$ coordinate) and a coach production ranking (the $y$ coordinate).
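The correlation computation itself is straightforward with SciPy; the sketch below pairs the two rankings by school before computing Kendall's $\tau$ (the input dictionaries are placeholders).

\begin{verbatim}
# Sketch: Kendall's tau between an aggregated AP ranking and a coach
# production ranking, paired by school. Inputs are placeholders.
from scipy.stats import kendalltau

def ranking_correlation(ap_rank, pr_rank):
    # ap_rank, pr_rank: dicts school -> rank position.
    common = sorted(set(ap_rank) & set(pr_rank))
    x = [ap_rank[s] for s in common]
    y = [pr_rank[s] for s in common]
    tau, p_value = kendalltau(x, y)
    return tau, p_value
\end{verbatim}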
\begin{figure}[htb]
\centering
\includegraphics[height = 6.5cm, width =8.3cm]{FB_APPR_Corr.eps}
\caption{Correlation Graph Between Aggregated Football AP and PR Rankings}
\label{fig:FB_Corr}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[height = 6.5cm, width =8.3cm]{BK_APPR_Corr.eps}
\caption{Correlation Graph Between Aggregated Men's Basketball AP and PR Rankings}
\label{fig:BK_Corr}
\end{figure}
We find that: 1. The aggregated AP rankings before 1960 and those around 1970--1980 are more correlated with the contemporary coach production rankings (with $\tau>0.5$). 2. In Figure~\ref{fig:FB_Corr}, the aggregated AP rankings before 1955 also show some correlation with the coach production rankings of more recent times, but this correlation gradually decreases over the years, probably because some historically strong teams, like Yale and the Ivy League, gradually faded from the national spotlight and finally moved down to I-AA (now the Football Championship Subdivision) starting with the 1982 season. 3. In 1973, the NCAA was divided into three legislative and competitive divisions -- I, II, and III -- and, both in football and men's basketball, there is an increased correlation between the contemporary aggregated AP rankings and the coach production rankings roughly after 1970.
\subsection{Division-level Movements}
To show the movements between different divisions, we use data on coaches who graduated after 1973, when the NCAA was first divided into three divisions. Table~\ref{table:BK_Div} and Table~\ref{table:FB_Div} show the movements from the division of the coach's graduating school (the row) to the division of the school they work for (the column).
\begin{table}[htb]
\centering
\begin{tabular}{p{1cm}|p{1cm}p{1cm}p{1cm}|p{1cm}}
& Div I & Div II & Div III & All \\
\hline
Div I & \textbf{0.263} & 0.110 & 0.104 & 0.477\\
Div II & 0.062 & \textbf{0.105} & 0.038 & 0.205\\
Div III & 0.062 & 0.044 & \textbf{0.213} & 0.318\\
\hline
All & 0.386 & 0.259 & 0.355 & \\
\end{tabular}
\caption{In NCAA men's basketball, fraction of coaches who graduated from a school in one division (row) and are hired as head coaches by a school in another division (column). Movements within one division are highlighted in bold.}
\label{table:BK_Div}
\end{table}
\begin{table}[htb]
\centering
\begin{tabular}{p{1cm}|p{0.6cm}p{0.6cm}p{0.8cm}p{0.9cm}|p{0.6cm}}
& FBS & FCS & Div II & Div III & All \\
\hline
FBS & \textbf{0.153} & 0.063 & 0.054 & 0.043 & 0.314\\
FCS & 0.040 & \textbf{0.078} & 0.046 & 0.033 & 0.198\\
Div II & 0.033 & 0.025 & \textbf{0.104} & 0.038 & 0.201\\
Div III & 0.019 & 0.026 & 0.041 & \textbf{0.200} & 0.286 \\
\hline
All & 0.246 & 0.193 & 0.246 & 0.315 & \\
\end{tabular}
\caption{In NCAA football, fraction of coaches who graduated from a school in one division (row) and are hired as head coaches by a school in another division (column). Movements within one division are highlighted in bold. Football Division I has two subdivisions, I-A and I-AA (renamed the Football Bowl Subdivision (FBS) and the Football Championship Subdivision (FCS) in 2006)}
\label{table:FB_Div}
\end{table}
Here we also take into account the movements between the two subdivisions of Division I in football and the other divisions. Generally speaking, the FBS has more funding, more scholarships, and better sports facilities than the FCS. From the tables we find that: 1. The diagonal numbers represent the fractions of coaches who graduated and got hired in the same division, and these within-division fractions are greater than the off-diagonal ones. 2. More coaches move downwards from Division I to Divisions II and III than move upwards. 3. Excluding the coaches working within their own division, more coaches move upwards to Division I than move to Division II or III.
\section{Conclusion}
In this paper, we collect a dataset containing the career and hiring data of NCAA men's basketball and football head coaches. Based on this dataset, we build coach hiring networks and use network-based methods to analyze them along four aspects: inequality, community structure, coach production rankings, and movements between divisions. The results reveal that: (1) the coach hiring market exhibits great inequality, meaning that most head coaches come from a small proportion of the NCAA members, which indicates an unequal distribution of U.S. sports education resources; (2) coaches prefer to stay in the same division and geographic region as their alma maters; (3) the coach production rankings are generally correlated with the authoritative rankings, which indicates that good teams are likely to foster future head coaches, although in specific time periods this does not hold, probably because of contemporary NCAA policies and social events.
Our future directions include: 1. We have found hierarchical organization properties on our dataset similar to those in~\cite{ravasz2003hierarchical}; we could develop a proper temporally evolving network model to predict the coach hiring market. 2. To better explain the mechanisms behind our findings, such as the inequality, a more complete and deeper understanding of the NCAA's history~\cite{smith2000brief} and of contemporary related policies will probably be of help.
\section{Introduction}
{The global sports betting market is worth an estimated
\$700 billion annually \cite{MARKET}, and association football (also known as soccer or simply football), being the world's most popular spectator sport, constitutes around $70\%$ of this ever-growing market \cite{1}. The last decade has thus seen the emergence of numerous online and offline bookmakers, offering bettors the possibility to place wagers on the results of football matches {in more than a hundred different leagues, worldwide.}\\
The sports betting industry offers a unique and very popular betting product known as an \textit{accumulator bet}. In contrast with a single bet, which consists in betting on a single event for a payout equal to the stake (i.e. the sum wagered) multiplied by the odds set by the bookmaker for that event, an accumulator bet combines more than one (and generally fewer than seven) events into a single wager that pays out only when all individual events are correctly predicted. The payout for a correct accumulator bet is the stake multiplied by the product of the odds of all its constituting wagers. However, if one of these wagers is incorrect, the entire accumulator bet loses. Thus, this product offers both significantly higher potential payouts and higher risks than single bets, and the large pool of online bookmakers, leagues, and matches that bettors can access nowadays has increased both the complexity of selecting a set of matches to place an accumulator bet on and the number of opportunities to identify winning combinations.\\
With the rise of sports analytics, a wide variety of statistical models for predicting the outcomes of football matches have been proposed, a good review of which can be found in \cite{13}. Since the classic work of \cite{MAHER}, which forecasts match outcomes by modeling both teams' scores with a Poisson distribution, more recent models for the prediction of sports match outcomes have mostly relied on Bayesian inference \cite{1, 2, 3, 4, 9} and prediction markets \cite{6, 7, 8, 10, 11}. Drawing on data from 678 to 837 of the German \textit{Bundesliga} clubs, \cite{11} showed that human tipsters are outperformed by both prediction markets and betting odds in terms of forecasting accuracy. Rule-based reasoning was used in \cite{12}, in which historical confrontations of the two opponents are modeled using fuzzy logic, and a combination of genetic and neural optimization techniques is used to fine-tune the model. Bayesian inference was combined with rule-based reasoning in \cite{3} to predict individual football match results. This combination relies on the intuition that although sports results are highly probabilistic in nature, team strategies have a deterministic aspect that can be modeled by crisp logic rules, which has the added benefit of being able to generate recommendations even if historical data are scarce. The original work presented in \cite{5} focuses on a single outcome for football matches, the two teams drawing, and proposes a single bet selection strategy relying on the Fibonacci sequence. Despite its simplicity, the proposed approach proves profitable over a tournament. However, to the best of our knowledge, existing models only offer recommendations on single bets and neglect an important combinatorial dimension of sports betting that stems from selecting accumulator bets. Section \ref{5} proposes a binary mathematical programming model to address this combinatorial problem.\\
After this introductory section, we define some relevant football betting concepts and further distinguish between the two main betting products offered by bookmakers, namely single and accumulator bets, in Section \ref{2}. The accumulator bet selection problem is mathematically formulated in Section \ref{4}, and Section \ref{5} introduces the binary mathematical programming model that we propose for its treatment. The results of an ongoing computational experiment, in which our model is applied to real data gathered from the four main football leagues in the world (Spain's \textit{LaLiga}, England's \textit{Premier League}, Germany's \textit{Bundesliga} and Italy's \textit{Serie A}) over a complete season, are presented in Section \ref{6}. Lastly, we draw conclusions regarding the results of this experiment and perspectives for further development of the proposed approach in Section \ref{7}. }
\section{{Proposed Model}}
\subsection{{Preliminary definitions}}
\label{2}
A \textit{single bet} consists in betting on a single outcome, whereas an accumulator is a bet that combines multiple single bets into a wager that only generates a payout when all parts win. This payout is equal to the product of the odds of all individual bets constituting the accumulator. Thus, the advantage of an accumulator bet is that returns are much higher than those of a single bet. This however comes at the expense of an increased risk: only a single selection needs to lose for the entire accumulator bet to lose.\\
{Let us consider, for the sake of illustration, $k$ independent single bets (i.e. no bets on conflicting outcomes for the same match). Each individual bet $i=1, \dots, k$ is a Bernoulli trial with success probability $p_i$ and success payout $o_i$. An accumulator would be another Bernoulli trial, whereas dividing a wager over these $k$ single bets would yield the sum of $k$ independent, but not identically distributed, Bernoulli trials. We compare the expected returns and variance (as a measure of risk) of these two ways to place the bets, namely:
\begin{itemize}
\item Wagering $1$ monetary unit on an accumulator bet constituted of these $k$ single bets would offer an expected return of $E(A)=\prod\limits_{i=1}^{k} o_i \cdot p_i $, with a variance of $V(A)=(\prod\limits_{i=1}^{k} o_i^2 \cdot p_i) \cdot (1-\prod\limits_{i=1}^{k}p_i)$.
\item Wagering $\frac{1}{k}$ monetary unit on each one of the $k$ single bets, would offer an expected return of $E(S)=\frac{1}{k}\sum\limits_{i=1}^{k} o_i \cdot p_i $, with a variance of $V(S)=\frac{1}{k^2}\sum\limits_{i=1}^{k} o_i^2\cdot p_i\cdot (1-p_i)$.
\end{itemize}
If we assume, for the sake of simplification, that all individual bets present the same odds and probabilities, i.e. $o_i=o$ and $p_i=p, \forall i=1, \dots, k$, then the above-described accumulator bet would present expected returns of $E(A)=(o\cdot p)^k$, with a variance of $V(A)=(o^2 \cdot p)^k\times(1-p^k)$
when the single bets approach would present expected returns of $E(S)=o\cdot p$, with a variance of $V(S)=(o^2 \cdot p)\times \frac{1-p}{k}$, in which we can note that $(1-p^k)>(1-p)>\frac{1-p}{k}$.}
{Further, assuming that all single bets are in favor of the bettor and present expected positive returns $o\cdot p>1$, it can be easily seen, in this simplified setting, that the expected returns of an accumulator bet $(o\cdot p)^k$, are exponentially larger than those of a pool of single bets $(o\cdot p)$, but their significantly higher variance of $(o^2 \cdot p)^k\times(1-p^k)$, relative to $(o^2 \cdot p)\times \frac{1-p}{k}$ for a pool of single bets, reflects their riskier nature.}
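As a numerical illustration of these formulas (with made-up odds and probabilities, not data from our experiment), the following sketch computes the expected return and variance of a one-unit accumulator and of an equal split of the same unit over the constituent single bets.

\begin{verbatim}
# Sketch: expected return and variance of a 1-unit accumulator vs.
# splitting 1/k on each single bet. Odds/probabilities are made up.
from math import prod

def accumulator_stats(odds, probs):
    e = prod(o * p for o, p in zip(odds, probs))
    v = prod(o * o * p for o, p in zip(odds, probs)) * (1 - prod(probs))
    return e, v

def split_singles_stats(odds, probs):
    k = len(odds)
    e = sum(o * p for o, p in zip(odds, probs)) / k
    v = sum(o * o * p * (1 - p) for o, p in zip(odds, probs)) / k**2
    return e, v

odds, probs = [2.0, 2.1, 1.9], [0.55, 0.52, 0.56]   # placeholders
print(accumulator_stats(odds, probs))    # higher mean, much higher variance
print(split_singles_stats(odds, probs))
\end{verbatim}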
\subsection{Problem Statement}
\label{4}
On any given match day, bettors have access to up to $100$ leagues and $50$ bookmakers proposing different odds. Thus, there exists a pool of around $10$ matches/league $\times$ $100$ leagues $\times$ $50$ bookmakers $\times$ $3$ possible outcomes/match = $150,000$ possible single bets for each match day, and the number of possible combinations of these bets into accumulator bets is astronomical.
For a given match day, we consider a set $M=\{m_1, m_2, \dots, m_n\}$ of $n$ matches, potentially spanning several leagues, in which each match can have one of three possible outcomes: \textit{Home win} (H), \textit{Draw} (D), or \textit{Away win} (A). We denote by $p_i(H)$, $p_i(D)$, and $p_i(A)$ the respective estimated probabilities of these outcomes, for each match $m_i: i\in\{1, \dots, n\}$.\\
We additionally consider a set $B=\{b^1, \dots, b^m\}$ of bookmakers, each bookmaker $b^j: j\in\{1, \dots, m\}$ offering certain odds $o_{i}^{j}(H)$, $o_{i}^{j}(D)$, and $o_{i}^{j}(A)$ for a correct individual bet on a home win, a draw, and away win, respectively, in match $m_i: i\in\{1, \dots, n\}$.\\
One seeks to determine an accumulator bet, that is to say three exclusive subsets of matches $M^*(H)=\{m^*_{h_1}, \dots, m^*_{h_k}\}$, $M^*(D)=\{m^*_{d_1}, \dots, m^*_{d_k}\}$, and $M^*(A)=\{m^*_{a_1}, \dots, m^*_{a_k}\}$, representing the matches to bet on for a home win, a draw, and an away win, respectively, and a bookmaker $b^{j^*} \in B$, such as to maximize the two following functions:
\begin{itemize}
\item The probability of a winning accumulator bet, given by: $$\displaystyle \prod_{i=h_1}^{h_k} p_i(H) \times \displaystyle \prod_{i=d_1}^{d_k} p_i(D) \times \displaystyle \prod_{i=a_1}^{a_k} p_i(A)$$
\item The total potential payout, given by: $$\displaystyle \prod_{i=h_1}^{h_k} o_{i}^{j^*}(H) \times \displaystyle \prod_{i=d_1}^{d_k} o_{i}^{j^*}(D) \times \displaystyle \prod_{i=a_1}^{a_k} o_{i}^{j^*}(A)$$
\end{itemize}
\subsection{Model Formulation}
\label{5}
We first state the accumulator bet selection problem as a bi-objective mathematical program, in which three binary variables $x_{i}^j(H)$, $x_{i}^j(D)$, $x_{i}^j(A) \in \{0,1\}$ are associated to each match $m_i \in M$ and each bookmaker $b^j \in B$. These decision variables represent whether or not one should respectively bet on a home win, a draw, or an away win, in match $m_i$, using the services of bookmaker $b^j$. \\
A generic formulation of the problem is thus given by the following bi-objective mathematical program:
\begin{alignat}{4}
\max &\displaystyle \prod_{k\in \{H, D, A\}} \prod_{i=1}^{n} \prod_{j=1}^{m} (1 - (1-o_{i}^{j}(k))\cdot x_{i}^j(k))& \\
\max & \displaystyle \prod_{k\in \{H, D, A\}} \prod_{i=1}^{n} \prod_{j=1}^{m} (1 - (1-p_{i}(k))\cdot x_{i}^j(k)) & \\
\text{s. t. }& \displaystyle \sum_{k\in \{H, D, A\}} x_{i}^j(k) \leq 1 \quad \forall i \in \{1, \dots, n\}, \forall j \in \{1, \dots, m\} &\\
& \quad \displaystyle \sum_{j=1}^{m} x_{i}^j(k) \leq 1 \quad \forall i \in \{1, \dots, n\}, \forall k \in \{H, D, A\}&\\
& x_{i}^j(k) \in \{0,1\} \quad \forall i \in \{1, \dots, n\}, \forall j \in \{1, \dots, m\}, \forall k \in \{H, D, A\}&
\end{alignat}
In this mathematical program, objective function $(1)$ seeks to maximize the total potential payout of the accumulator bet, while objective function $(2)$ seeks to maximize the estimated probability of this accumulator bet winning. These two polynomial functions have the same structure: if, for a certain triplet $(i,j,k)$, decision variable $x_{i}^j(k)$ takes value $0$ in a solution, its corresponding term in the product is equal to $1$ in each function and the remaining terms are considered, whereas $x_{i}^j(k)$ taking value $1$ results in the odds of the corresponding individual bet being taken into account in function $(1)$ and its estimated probability being taken into account in function $(2)$. Note that these two functions can theoretically be reformulated linearly, using the \textit{usual linearization} technique described in \cite{LIBERTI}. However, this would entail introducing an additional binary variable and three constraints for each pair of initial decision variables, thus resulting in $2^{3nm -1}$ additional binary variables and a proportional number of additional constraints, which would be impractical. Constraints $(3)$ state that, by definition, an accumulator bet cannot wager on two conflicting outcomes for a single match, and constraints $(4)$ impose that a single bookmaker is chosen to place the wagers constituting an accumulator bet. Finally, constraints $(5)$ ensure that all variables are binary.
\subsection{Preprocessing}
The accumulator bet selection problem can be treated separately for each individual bookmaker, since all wagers constituting the accumulator bet have to be placed with the same bookmaker for the bet to hold. The problem can thus be divided, without loss of generality, into $m$ (the number of bookmakers) sub-problems, each containing $m$ times fewer binary variables than the initial problem. In the perspective of solving the problem using a scalarizing function, finding the optimal solution of the initial problem simply consists in comparing the $m$ optimal solutions of the sub-problems, whereas in the perspective of enumerating the set of all efficient solutions, it would require testing for dominance between the Pareto sets of the sub-problems. \\
\noindent In addition to the above preprocessing step, which maintains it intact, the search space of the problem can be heuristically reduced by only considering variables $x_{i}^j(k), i \in \{1, \dots, n\}, j \in \{1, \dots, m\}, k \in \{H, D, A\}$ that correspond to single bets presenting Pareto-efficient pairs of odds and estimated probabilities $(o_{i}^{j}(k), p_{i}(k))$, according to the two following dominance tests, which may however eliminate optimal or otherwise valuable combinations of bets:
\begin{itemize}
\item Intra-bookmaker dominance test: $\forall i \in \{1, \dots, n\}, \forall j \in \{1, \dots, m\}, \forall k \in \{H, D, A\}$, eliminate variable $x_{i}^j(k)$, if $\exists i' \in \{1, \dots, n\}$ and $\exists k' \in \{H, D, A\}: o_{i}^{j}(k) \leq o_{i'}^{j}(k')$ and $p_{i}(k) \leq p_{i'}(k')$, with at least one of these two inequalities being strict. In other words, an individual bet is eliminated from consideration, if there exists another individual bet with a higher probability, for which the same bookmaker offers higher odds.
\item Inter-bookmaker dominance test: $\forall i \in \{1, \dots, n\}, \forall j \in \{1, \dots, m\}, \forall k \in \{H, D, A\}$, eliminate variable $x_{i}^j(k)$, if $\exists i' \in \{1, \dots, n\}$, $\exists j' \in \{1, \dots, m\}$ and $\exists k' \in \{H, D, A\}: o_{i}^{j}(k) \leq o_{i'}^{j'}(k')$ and $p_{i}(k) \leq p_{i'}(k')$, with at least one of these two inequalities being strict. In other words, an individual bet is eliminated from consideration, if there exists another individual bet with a higher probability, for which another bookmaker offers higher odds.
\end{itemize}
The assumption underlying the intra-bookmaker dominance test is virtually made in any single bet selection model (which all consider a single bookmaker as well). It corresponds to the idea that selecting a non-Pareto-efficient individual bet is not rational, when other bets with better probabilities and odds are available. The inter-bookmaker dominance test is a generalization of this idea, when multiple bookmakers are considered. \\
However, it can be perfectly rational to select a dominated individual bet (according to either dominance tests), if the individual bets that dominate it are also part of the accumulator bet, as long as the resulting accumulator bet is Pareto-efficient. The experiment we present in Section \ref{hypo} will allow us to empirically evaluate the influence of these simplifying assumptions on the value of the generated accumulator bets.
We have estimated that over the 2015-2016 season (38 match days), when considering four leagues (\textit{LaLiga}, \textit{Premier League}, \textit{Bundesliga}, and \textit{Serie A}) and five online bookmakers (namely \textit{Bet365}, \textit{Betway}, \textit{Gamebookers}, \textit{Interwetten}, and \textit{Ladbrokes}), the Intra-bookmaker dominance test reduces the number of variables (i.e. the number of individual wagers to consider) by an average of $64\%$, while the Inter-bookmaker dominance test reduces this number by an average of $19\%$. Figure \ref{dominance} illustrates how these two dominance tests can impact the set of possible individual bets for the first match day of the 2015-2016 season, when considering four leagues and five bookmakers.
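A possible implementation of the intra-bookmaker filter is sketched below: it simply discards every bet whose (odds, probability) pair is dominated by that of another bet offered by the same bookmaker (the input list is a placeholder).

\begin{verbatim}
# Sketch of the intra-bookmaker dominance test: keep only bets whose
# (odds, probability) pair is not dominated by another bet of the
# same bookmaker. Each bet is (match, outcome, odds, probability).
def dominates(a, b):
    return (a[2] >= b[2] and a[3] >= b[3]
            and (a[2] > b[2] or a[3] > b[3]))

def pareto_filter(bets):
    return [b for b in bets
            if not any(dominates(a, b) for a in bets if a is not b)]

# efficient_bets = pareto_filter(bookmaker_bets)  # placeholder input
\end{verbatim}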
\begin{figure}
\begin{center}
\includegraphics[width=0.75\linewidth]{dominance.png}
\caption{\label{dominance} {Individual bets (circles) eliminated by intra and inter-bookmaker dominance tests. The remaining bets (triangles) are Pareto-efficient for both tests}}
\end{center}
\end{figure}
\subsection{Scalarization}
Although it may be interesting to ``solve'' (i.e. identify and describe the Pareto set of) the accumulator bet selection problem in its bi-objective form, particularly in the perspective of implementing diversification strategies, one can more simply seek to maximize the odds of the selected accumulator for a minimum probability $p_{min}$ (set at $25\%$ in this work). Therefore, in the remainder of this paper, we will consider the maximization of function $(1)$, under constraints $(3)$, $(4)$, and $(5)$, and the following additional constraint:
\begin{alignat}{3}
&\displaystyle \prod_{k\in \{H, D, A\}} \prod_{i=1}^{n} \prod_{j=1}^{m} (1 - (1-p_{i}(k))\cdot x_{i}^j(k)) \geq p_{min}&
\end{alignat}
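Since the odds and the probability of a selected accumulator are simple products over its constituent bets, the scalarized problem can, for a single bookmaker and a small number of matches, be solved exactly by enumerating, for every match, the choice between one of its three outcomes and skipping it. The sketch below does exactly that; it is only an illustrative baseline for small instances, not the method proposed in this paper, and its inputs are placeholders.

\begin{verbatim}
# Illustrative exact baseline for one bookmaker and few matches:
# enumerate, per match, the choice H / D / A / skip, and keep the
# feasible accumulator (probability >= p_min) with the highest odds.
from itertools import product as cartesian
from math import prod

def best_accumulator(matches, p_min=0.25):
    # matches: list of dicts outcome -> (odds, probability).
    best, best_odds = None, 0.0
    for choice in cartesian(*[list(m) + [None] for m in matches]):
        picked = [(i, k) for i, k in enumerate(choice) if k is not None]
        if not picked:
            continue
        p = prod(matches[i][k][1] for i, k in picked)
        o = prod(matches[i][k][0] for i, k in picked)
        if p >= p_min and o > best_odds:
            best, best_odds = picked, o
    return best, best_odds

matches = [{"H": (2.1, 0.45), "D": (3.3, 0.28), "A": (3.6, 0.27)},
           {"H": (1.5, 0.62), "D": (4.0, 0.22), "A": (6.5, 0.16)}]
print(best_accumulator(matches))   # placeholder example
\end{verbatim}

This enumeration grows as $4^n$ and quickly becomes intractable, which is precisely what motivates the stochastic search described next.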
\subsection{Stochastic Diffusion Search Algorithm}
In this Section we propose a Stochastic Diffusion Search (SDS) \cite{SDS1} algorithm to solve the problem defined by inequations $(3)$, $(4)$, $(5)$, and $(6)$, for each match day, and separately for each bookmaker. This multi-agent search approach is based on direct, one-to-one communication between agents and has been successfully applied to global optimization problems \cite{SDS15}. SDS has been shown to possess good properties in terms of convergence to global optima, robustness and scalability with problem sizes \cite{SDS2}, as well as a linear time-complexity \cite{SDS3}.\\
\begin{algorithm}[t]
Initialize agents\\
\textbf{repeat} \\
/* Test phase */\\
\For {all agents $A_i$}
{Randomly choose an agent, $A_j$, from the population\\
\If{$Odds(A_i) \leq Odds(A_j)$ and $Prob(A_i) \leq Prob(A_j)$, \\
with at least one of these inequalities being strict}
{$Status(A_i)\leftarrow inefficient$
\textbf{else} \If{$Exp(A_i) < Exp(A_j)$}
{$Status(A_i)\leftarrow inactive$\\
\textbf{else}
$Status(A_i)\leftarrow active$
}
}
}
/* Diffusion phase */\\
\For {all agents $A_i$}
{\If{$Status(A_i)=inefficient$}
{Reinitialize $A_i$\\
\textbf{else} \If{$Status(A_i)=inactive$}
{Randomly choose an agent $A_j$ from the population\\
\If{$Status(A_j)=active$}
{Set $A_i$ in the neighborhood of $A_j$\\
\textbf{else} Reinitialize $A_i$}
}
}
}
\textbf{until} $\exists A_i: Exp(A_i) \geq min_{exp}$ or $Clock \geq max_{time}$ \\
\caption{\label{SDS} Stochastic Diffusion Search Algorithm}
\end{algorithm}
After an initialization step in which each agent is given a candidate solution to the problem, an iterative search procedure is performed in two phases: 1. a test phase in which each agent compares the quality of its solution with that of another, randomly chosen, agent and 2. a diffusion phase in which agents share information about their candidate solutions on a one-to-one communication basis. This approach is formalized in algorithm \ref{SDS}.\\
Each agent $A_i$ is initialized with a potential accumulator bet, chosen by solving a relaxed and simplified version of the problem that is constructed by setting a lower bound on function $(1)$ or function $(2)$, respectively corresponding to minimum acceptable odds or probability of a profit, and relaxing constraints $(5)$. We then solve the continuous mathematical program thus modified, for each bookmaker. For instance, a candidate solution can be generated by solving the following single-objective continuous mathematical program:
\begin{alignat*}{4}
\max &\displaystyle \prod_{k\in \{H, D, A\}} \prod_{i=1}^{n} \prod_{j=1}^{m} (1 - (1-o_{i}^{j}(k))\cdot x_{i}^j(k))& \\
\text{s. t. }& \displaystyle \prod_{k\in \{H, D, A\}} \prod_{i=1}^{n} \prod_{j=1}^{m} (1 - (1-p_{i}(k))\cdot x_{i}^j(k)) \geq 0.25 & \\
& 0\leq x_{i}^j(k) \leq 1 \quad \forall i \in \{1, \dots, n\}, \forall j \in \{1, \dots, m\}, \forall k \in \{H, D, A\}&
\end{alignat*}
The resulting fractional solutions are then rounded to the nearest binary integer, and violations of constraints $(3)$, i.e. bets on conflicting outcomes for the same match, are resolved by picking the outcome that presents the highest probability. We respectively denote $Odds(A_i)$, $Prob(A_i)$, the values of functions $(1)$, and $(2)$ for the solution carried by an agent $A_i$, and $Exp(A_i)= Odds(A_i) \times Prob(A_i)$, its expected value. \\
During the test phase, each agent $A_i$ compares the quality of its solution with that of a randomly chosen other agent $A_j$, and acquires one of three statuses, \textit{inefficient}, \textit{inactive}, or \textit{active}, depending on the outcome of this comparison:
\begin{itemize}
\item The solution carried by agent $A_i$ is dominated by that of agent $A_j$, on both functions $(1)$ and $(2)$, that is to say that the accumulator bet assigned to agent $A_j$ presents both better total odds and a better probability of a win than those of the accumulator bet assigned to agent $A_i$, with at least one strict inequality. In this case, the status of agent $A_i$ is set to inefficient.
\item The solution carried by agent $A_i$ is not dominated by that of agent $A_j$, with regards to function $(1)$ and $(2)$, but it presents a lower value on function $(1)\times(2)$, in other words the accumulator bet assigned to agent $A_j$ presents a better expected value than the accumulator bet assigned to agent $A_i$. In this case, the status of agent $A_i$ is set to inactive.
\item The solution carried by agent $A_i$ is not dominated by that of agent $A_j$, with regards to functions $(1)$ and $(2)$, and presents a higher value on function $(1)\times(2)$. In this case, the status of agent $A_i$ is set to active.
\end{itemize}
In the diffusion phase, if an agent $A_i$ is inefficient, it is reinitialized. If agent $A_i$ is inactive, a random agent $A_j$ is chosen. If $A_j$ is active, it communicates its hypothesis to $A_i$ by setting $A_i$ in the neighborhood of $A_j$. In this work, this operation is performed by replacing an individual bet in the accumulator carried by agent $A_j$ with a previously unselected, Pareto-efficient individual bet. However, if $A_j$ is inactive or inefficient, $A_i$ is reinitialized. Reinitialization is performed by selecting a random subset of three Pareto-efficient individual bets. The stopping condition of our algorithm is that an accumulator bet with at least a preset minimum expected value $min_{exp}$ is encountered. In any given match day, if no solution satisfying this minimum requirement is generated after a duration $max_{time}$, no bet is placed on that match day.\\
{As pointed out in \cite{meta}, the efficiency of meta-heuristic algorithms depends on their ability to generate new solutions that are likely to improve on existing solutions, while avoiding getting stuck in local optima. This often requires balancing two important aspects: intensification and diversification.
Similarly to the process introduced in \cite{SDS15}, during the diffusion phase our algorithm applies an intensification process around an active agent, to improve the efficient solution it carries, while a diversification process is used for inactive or inefficient agents to explore new solutions. }
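For illustration, a heavily simplified version of the test and diffusion phases is sketched below. It represents a candidate accumulator as a set of (match, outcome) picks with precomputed odds and probabilities, and it omits the initialization by continuous relaxation as well as several details of Algorithm~\ref{SDS}; it should be read as a sketch of the search dynamics, not as our exact implementation, and its inputs are placeholders.

\begin{verbatim}
# Heavily simplified SDS sketch. A solution is a frozenset of
# (match, outcome) picks; bets is a dict {(match, outcome): (odds, prob)}.
import random
from math import prod

def odds_of(sol, bets):
    return prod(bets[b][0] for b in sol)

def prob_of(sol, bets):
    return prod(bets[b][1] for b in sol)

def random_solution(bets, size=3):
    # Pick `size` distinct matches and one outcome for each of them.
    by_match = {}
    for m, k in bets:
        by_match.setdefault(m, []).append((m, k))
    picked = random.sample(sorted(by_match), min(size, len(by_match)))
    return frozenset(random.choice(by_match[m]) for m in picked)

def neighbor(sol, bets):
    # Swap one pick for a bet on a match not yet used by the solution.
    old = random.choice(sorted(sol))
    used = {m for m, _ in sol}
    pool = [b for b in bets if b[0] not in used]
    return frozenset((sol - {old}) | {random.choice(pool)}) if pool else sol

def sds(bets, n_agents=30, iters=200, min_exp=2.0, seed=0):
    random.seed(seed)
    agents = [random_solution(bets) for _ in range(n_agents)]
    exp = lambda s: odds_of(s, bets) * prob_of(s, bets)
    for _ in range(iters):
        status = []
        for a in agents:                            # test phase
            b = random.choice(agents)
            oa, pa = odds_of(a, bets), prob_of(a, bets)
            ob, pb = odds_of(b, bets), prob_of(b, bets)
            if ob >= oa and pb >= pa and (ob > oa or pb > pa):
                status.append("inefficient")        # dominated solution
            elif oa * pa >= ob * pb:
                status.append("active")
            else:
                status.append("inactive")
        for i, s in enumerate(status):              # diffusion phase
            if s == "inefficient":
                agents[i] = random_solution(bets)
            elif s == "inactive":
                j = random.randrange(n_agents)
                agents[i] = (neighbor(agents[j], bets)
                             if status[j] == "active"
                             else random_solution(bets))
        best = max(agents, key=exp)
        if exp(best) >= min_exp:                    # stopping condition
            return best, exp(best)
    return None, 0.0
\end{verbatim}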
\section{Application and experimental results}
\label{6}
\subsection{Data}
As stated in the introduction of this paper, we consider a historical dataset from the repository \url{http://www.football-data.co.uk}, containing the results of each match day of the $2015-2016$ season for four leagues (\textit{LaLiga}, \textit{Premier League}, \textit{Serie A} and \textit{Bundesliga}, the latter only being considered for possible picks up to match day 34), and the odds offered by five online bookmakers (namely \textit{Bet365}, \textit{Betway}, \textit{Gamebookers}, \textit{Interwetten}, and \textit{Ladbrokes}). Since the scope of our work is the combinatorial difficulty of the problem, we use the popular Betegy service \url{http://www.betegy.com} for the assessment of the probabilities of these matches.
\subsection{Bankroll management}
We have used a conservative Kelly strategy \cite{KELLY} for money management over a season. In this strategy, the fraction of the current bankroll to wager on any optimal accumulator bet is $f= p - \frac{1-p}{o}$, where $o$ is the total potential payout of the accumulator bet and $p$ its probability of winning. However, contrary to the conventional Kelly criterion, we do not update the value of the total bankroll after a gain, in order to limit potential losses.
\subsection{Tested hypotheses}
\label{hypo}
This experiment is mainly intended to compare the performance of accumulator bets under our model with individual betting strategies. We have implemented Rue and Salvesen's variance-adjusted approach \cite{9, 13} to generate recommendations regarding the latter type of bets. This approach maximizes the difference between the expected profit and its variance for each individual bet. If one invests a sum $c$ on a given individual bet with probability $p$ and odds $o$, this difference is given by $p\cdot o\cdot c - p(1 - p)(oc)^2$, and is maximal for $c=\frac{1}{2 \cdot o\cdot(1-p)}$. This amount is thus wagered on each individual Pareto-efficient bet, and the cumulative profit over a season using this individual betting strategy was compared to that of our accumulator bet selection approach, combined with the conservative Kelly criterion we have adopted. The remaining two combinations of bet selection and bankroll management methods (i.e. accumulators with the variance-adjusted approach and single bets with the Kelly criterion) have been additionally considered for the sake of completeness, although they are relatively inefficient, as shown in Section \ref{res}. \\
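The two staking rules can be compared directly. The sketch below evaluates them at the average odds and probabilities later reported in Table~\ref{results}; the functions follow the formulas given above.

\begin{verbatim}
# Sketch: Kelly vs. variance-adjusted stakes, evaluated at the average
# odds/probabilities of single and accumulator bets reported in the
# results table.
def kelly_fraction(o, p):
    # Fraction of the bankroll wagered under the Kelly strategy used here.
    return p - (1.0 - p) / o

def variance_adjusted_stake(o, p):
    # Stake maximizing expected profit minus variance: c = 1/(2*o*(1-p)).
    return 1.0 / (2.0 * o * (1.0 - p))

for label, o, p in [("single", 2.87, 0.36), ("accumulator", 83.1, 0.047)]:
    print(label, round(kelly_fraction(o, p), 3),
          round(variance_adjusted_stake(o, p), 3))
\end{verbatim}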
Moreover, we have evaluated the opportunity losses due to simplifying our model using the dominance tests. Thus, each of our experiments was performed using both the initial model defined by constraints $(3)$ to $(6)$ and the simplified versions in which non-Pareto-efficient individual bets (according to the intra- and inter-bookmaker dominance tests) have been eliminated.
{\noindent In this experiment, a minimum expected value $min_{exp}$ equal to $2$ has been empirically found to generate the best overall returns over a season. Lower values resulted in accumulator bets that offer no added value in terms of returns, compared to single bets, despite losing more frequently, while values higher than have been found to either generate accumulators with a large number of bets that are overly risky and unprofitable, or fail to generate any solution. Additionally, the maximum time limit $max_{time}$ has been arbitrarily set at $600$ seconds.}
\subsection{Results}
\label{res}
\begin{table}[!ht]
\hspace*{-.1\linewidth}
\begin{center}
\hspace*{-.1\linewidth}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\mbox{\textbf{\small{Model}}} & \mbox{\textbf{\small{Preprocessing}}} & \mbox{\textbf{\small{Average odds}}} &\mbox{\textbf{\small{Average }}} &\mbox{\textbf{\small{Average stakes}}} & \mbox{\textbf{\small{Total Gains}}} \\
& & &\mbox{\textbf{\small{probability}}}&\mbox{\textbf{\small{per matchday}}} & \mbox{\textbf{\small{}}} \\
\hline
\mbox{Single betting} & \mbox{N/A} & 2.87 &36\% & 27.3\% &30.1\% \\
\mbox{(variance adjusted)} & \mbox{N/A} & & & & \\
\cline{1-6}
\mbox{Accumulator betting} & \mbox{None} &83.1 &4.7\% & 3.02\% &
37.6\% \\
\mbox{(conservative Kelly)} & \mbox{Intra-bookmaker} & 83.1 & 4.7\% & 3.02\% & 37.6\% \\
& \mbox{Inter-bookmaker} & 90.2 & 4.2\% & 4.1\% & 12.9\%\\
\cline{1-3}
\hline
\end{tabular}
\caption{\label{results} Gains over the $2015-2016$ season using the four models}
\end{center}
\end{table}
The results for the four betting strategies over the $38$ match days of the $2015-2016$ season are given in Table \ref{results}. Gains over the full simulation are represented as a percentage of the bettor's total bankroll. The first observation is that the initial binary optimization model and its simplified version using the intra-bookmaker dominance test produce exactly the same recommendations and thus the same results. We can see that the influence of the simplifying hypothesis we have made with the intra-bookmaker dominance test is negligible. Eliminating individual bets that are dominated according to the inter-bookmaker dominance test may however be limiting, as we can observe a decrease in value compared to the initial accumulator betting model. \\
\begin{figure}[!h]
\begin{center}
\centering
\hspace*{-.15\linewidth}
\includegraphics[width=1.3\linewidth]{graph.png}
\caption{\label{results2} Cumulative gains per match day, over the $2015-2016$ season}
\end{center}
\end{figure}
Table \ref{results} additionally presents the average odds and probabilities of the single and accumulator betting models considered in this experiment, which highlights their markedly different nature. The generated accumulator bets with the intra-bookmaker dominance test present average odds of $83.1$ and an average probability of $4.7\%$, while single bets have average odds of $2.87$ and an average probability of $36\%$. This translates into average stakes per match day of markedly different orders for the two approaches ($27.3\%$ of the bettor's bankroll for the single betting model and $3.02\%$ of this capital for the accumulator betting model).
\begin{figure}[!h]
\begin{center}
\centering
\hspace*{-.15\linewidth}
\includegraphics[width=1.3\linewidth]{functions.png}
\caption{\label{functions} Kelly and variance-adjusted stakes for the average probabilities of single and accumulator bets, as functions of the odds}
\end{center}
\end{figure}
These differences result in different behaviors for the two types of models. Given its much lower odds, for a single bet selection model to be profitable, relatively high stakes need to be wagered every match day, and the model seeks to maximize the number of winning bets among these wagers, whereas a successful accumulator betting strategy seeks to limit the extent of its frequent losses to ensure the survival of the bettor until the rare match days in which the model is right. The differences of orders of magnitude between the average odds and probabilities of the considered accumulators and those of single bets also explain why the former are more efficient when combined with the Kelly criterion, while the latter seem to be more adapted to a variance-adjusted bankroll management strategy, as can be seen in Figure \ref{results2}, detailing the variation of the bankroll of our bettor over the $38$ match days constituting the season. Indeed, for probabilities and odds characteristic of single bets, the Kelly criterion consistently wagers lower stakes than the variance-adjusted function, whereas this order is reversed for probabilities and odds characteristic of accumulator bets, where the variance-adjusted function wagers significantly lower stakes than the Kelly criterion. Figure \ref{functions} represents the respective stakes $f$ and $c$ of the Kelly and variance-adjusted bankroll management strategies as a function of the odds of a bet (single or accumulator), for the average estimated probability of the two types of bets in our experiment (respectively at $p=.36$ and $p=.047$). This figure additionally emphasizes the stakes given by the two functions for odds of values around the average odds of each type of bet (respectively at $o=2.87$ and $o=83.1$, for single bets and accumulators). Over a season, these differences result in the less efficient combination of approaches being a smoothed version of the dominant one, as can be observed in Figure \ref{results2}. In the case of accumulator bets, the Kelly bankroll management strategy in which the bankroll is not updated after gains is already very conservative, with average stakes per match day at $3.02\%$; further restricting these stakes through the use of the variance-adjusted function would unnecessarily limit potential gains. The same effect has been observed, in this experiment, for the Kelly strategy combined with single bets. However, since the stakes function here is applied to each individual bet, the relationship between the two gains curves is less systematic than in the case of accumulators. It is, for instance, possible for the Kelly criterion to result in an overall gain on a given match day, while the variance-adjusted criterion results in an overall loss. \\
For any given bankroll management strategy, we can observe that the growth in net returns using the single bets model is more regular than when using the accumulator bets model. The latter strategy typically loses money for several weeks in a row, before experiencing growth in spurts, corresponding to a low-probability/high-odds accumulator bet winning. Losses are however limited by an adequate bankroll management strategy, and in the most profitable version of this model, only four winning accumulator bets over the season were sufficient to ensure a $37.6\%$ total return. Comparing the two most profitable strategies in this experiment, and although we have considered a limited number of leagues and bookmakers, the accumulator betting model with preprocessing through the intra-bookmaker dominance test, combined with a Kelly bankroll management strategy, has outperformed the single bet model combined with a variance-adjusted money management strategy over the $2015-2016$ season, and can be a profitable betting approach.
\section{Conclusion}
\label{7}
The relatively under-studied problem of accumulator bet selection presents a rather uncommon combinatorial structure, due to the multiplicative nature of its returns. This work proposed a model and a solving approach for the treatment of this problem. Preliminary experiments showed promising results in terms of payouts relative to traditional single bet selection approaches. The model can be used to create commercial automated recommendation programs for bookmaker offices. However, the proposed betting strategy requires the decision-maker to be willing to accept losses over multiple periods of time before a gain, which may render the model inadequate for certain types of users. There is thus a twofold challenge in designing such a strategy: the decision of when not to bet (or what limit to put on the sum wagered) is as important as that of placing a bet. In our approach, this aspect is taken into account at the micro level of a match day, by setting a minimum expected gain requirement as a stopping condition for the stochastic diffusion search, and at the macro level of money management over a season, by following a conservative Kelly strategy. {Regarding the former aspect, the choice of the value of the minimum expected gain certainly warrants further experimentation with different datasets, given how crucial it is for this betting strategy.} Further refinements of the model are also required regarding the money management strategy over a season. Lastly, solving the problem in its original bi-criteria form and generating the Pareto front of non-dominated accumulator bets can constitute a sufficient recommendation for more expert bettors to make a choice from. It could thus prove fruitful to design algorithms for the treatment of the problem in this form.
\bibliographystyle{apalike}
\section{Introduction}
This demo file is intended to serve as a ``starter file''
for IEEE conference papers produced under \LaTeX\ using
IEEEtran.cls version 1.7 and later.
I wish you the best of success.
\hfill mds
\hfill January 11, 2007
\subsection{Subsection Heading Here}
Subsection text here.
\subsubsection{Subsubsection Heading Here}
Subsubsection text here.
\section{The History of the National Hockey League}
From
http://en.wikipedia.org/.
The Original Six era of the National Hockey League (NHL) began in 1942
with the demise of the Brooklyn Americans, reducing the league to six
teams. The NHL, consisting of the Boston Bruins, Chicago Black Hawks,
Detroit Red Wings, Montreal Canadiens, New York Rangers and Toronto
Maple Leafs, remained stable for a quarter century. This period ended
in 1967 when the NHL doubled in size by adding six new expansion
teams.
Maurice Richard became the first player to score 50 goals in a season
in 1944--45. In 1955, Richard was suspended for assaulting a linesman,
leading to the Richard Riot. Gordie Howe made his debut in 1946. He
retired 32 years later as the NHL's all-time leader in goals and
points. Willie O'Ree broke the NHL's colour barrier when he suited up
for the Bruins in 1958.
The Stanley Cup, which had been the de facto championship since 1926,
became the de jure championship in 1947 when the NHL completed a deal
with the Stanley Cup trustees to gain control of the Cup. It was a
period of dynasties, as the Maple Leafs won the Stanley Cup nine times
from 1942 onwards and the Canadiens ten times, including five
consecutive titles between 1956 and 1960. However, the 1967
championship is the last Maple Leafs title to date.
The NHL continued to develop throughout the era. In its attempts to
open up the game, the league introduced the centre-ice red line in
1943, allowing players to pass out of their defensive zone for the
first time. In 1959, Jacques Plante became the first goaltender to
regularly use a mask for protection. Off the ice, the business of
hockey was changing as well. The first amateur draft was held in 1963
as part of efforts to balance talent distribution within the
league. The National Hockey League Players Association was formed in
1967, ten years after Ted Lindsay's attempts at unionization failed.
\subsection{Post-war period}
World War II had ravaged the rosters of many teams to such an extent
that by the 1943--44 season, teams were battling each other for
players. In need of a goaltender, the Bruins won a fight with the
Canadiens over the services of Bert Gardiner. Meanwhile, the Rangers were
forced to lend forward Phil Watson to the Canadiens in exchange for
two players as Watson was required to be in Montreal for a war job,
and was refused permission to play in New York.[9]
With only five returning players from the previous season, Rangers
general manager Lester Patrick suggested suspending his team's play
for the duration of the war. Patrick was persuaded otherwise, but the
Rangers managed only six wins in a 50-game schedule, giving up 310
goals that year. The Rangers were so desperate for players that
42-year old coach Frank Boucher made a brief comeback, recording four
goals and ten assists in 15 games.[9] The Canadiens, on the other
hand, dominated the league that season, finishing with a 38--5--7
record; five losses remains a league record for the fewest in one
season while the Canadiens did not lose a game on home ice.[10] Their
1944 Stanley Cup victory was the team's first in 14 seasons.[11] The
Canadiens again dominated in 1944--45, finishing with a 38--8--4
record. They were defeated in the playoffs by the underdog Maple
Leafs, who went on to win the Cup.[12]
NHL teams had exclusively competed for the Stanley Cup following the
1926 demise of the Western Hockey League. Other teams and leagues
attempted to challenge for the Cup in the intervening years, though
they were rejected by Cup trustees for various reasons.[13] In 1947,
the NHL reached an agreement with trustees P. D. Ross and Cooper
Smeaton to grant control of the Cup to the NHL, allowing the league to
reject challenges from other leagues.[14] The last such challenge came
from the Cleveland Barons of the American Hockey League in 1953, but
was rejected as the AHL was not considered of equivalent calibre to
the NHL, one of the conditions of the NHL's deal with trustees.
The Hockey Hall of Fame was established in 1943 under the leadership
of James T. Sutherland, a former President of the Canadian Amateur
Hockey Association (CAHA). The Hall of Fame was established as a joint
venture between the NHL and the CAHA in Kingston, Ontario, considered
by Sutherland to be the birthplace of hockey. Originally called the
"International Hockey Hall of Fame", its mandate was to honour great
hockey players and to raise funds for a permanent location. The first
eleven honoured members were inducted on April 30, 1945.[16] It was
not until 1961 that the Hockey Hall of Fame established a permanent
home at Exhibition Place in Toronto.[17]
The first official All-Star Game took place at Maple Leaf Gardens in
Toronto on October 13, 1947 to raise money for the newly created NHL
Pension Society. The NHL All-Stars defeated the Toronto Maple Leafs
4--3 and raised C\$25,000 for the pension fund. The All-Star Game has
since become an annual tradition.[18]
\section{Conclusion}
The conclusion goes here.
\section*{Acknowledgment}
The authors would like to thank...
\section{Introduction}
Sharding, an auspicious solution to the scalability issue of blockchains, has become one of the most actively studied research topics in recent years \cite{luu2016secure,zamani2018rapidchain,kokoris2018omniledger,wang2019monoxide,nguyen2019optchain}. In the context of blockchain, sharding is the approach of partitioning the set of nodes (or validators) into multiple smaller groups of nodes, called shards, that operate in parallel on disjoint sets of transactions and maintain disjoint ledgers. By parallelizing the consensus work and storage, sharding drastically reduces the storage, computation, and communication costs placed on a single node, thereby scaling the system throughput proportionally to the number of shards. Previous studies \cite{zamani2018rapidchain,kokoris2018omniledger,nguyen2019optchain} show that sharding could potentially improve blockchain's throughput to thousands of transactions per second (whereas the current Bitcoin system only handles up to 7 transactions per second and requires a 60-minute confirmation time for each transaction).
Despite the incredible results in improving scalability, blockchain sharding is still vulnerable to some severe security problems. The root of those problems is that, with partitioning, the honest majority of mining power or stake share is dispersed into individual shards. This significantly reduces the size of the honest majority in each shard, which in turn dramatically lowers the bar for attacking a specific shard. Hence, a blockchain sharding system must have mechanisms to prevent adversaries from gaining the majority of validators of a single shard; this is commonly referred to as a single-shard takeover attack \cite{ethereum}.
In this paper, we take a novel approach by exploiting the inter-shard consensus to identify a new vulnerability of blockchain sharding. One intrinsic attribute of blockchain sharding is the existence of cross-shard transactions, that is, transactions that involve multiple shards. These transactions require the involved shards to perform an inter-shard consensus mechanism to confirm their validity. Hence, intuitively, if we could perform a Denial-of-Service (DoS) attack on one shard, it would also affect the performance of other shards via the cross-shard transactions. Furthermore, both theoretical and empirical analyses \cite{zamani2018rapidchain,nguyen2019optchain} show that most existing sharding protocols have about 99\% cross-shard transactions. This implies that an attack on one shard could potentially impact the performance of the entire blockchain. In addition, with this type of attack, the attacker is a client of the blockchain system; hence, the attack can be conducted even when we can guarantee an honest majority in every shard.
Existing work does include some variants of flooding attacks that try to overwhelm the entire blockchain by having the attacker generate an excessive amount of dust transactions; however, it is unclear how such an attack could be conducted in a sharding system. In fact, we emphasize that a conventional transaction flooding attack on the entire blockchain (as opposed to a single shard) would not be effective for two reasons. First, blockchain sharding has high throughput; hence, the cost of the attack would be enormous, as the attacker must generate a huge amount of dust transactions at a rate sufficiently greater than the system throughput. Second, and more importantly, since the sharding system scales with the number of shards, it can easily tolerate such attacks by adding more shards to increase throughput.
To bridge this gap, we propose a single-shard flooding attack that exploits this DoS vulnerability of blockchain sharding. Instead of overwhelming the entire blockchain, the attacker strategically places a tremendous amount of transactions into one single shard in order to reduce the performance of that shard, as the throughput of one shard is not scalable. The essence of our attack comes from the fact that most sharding proposals use hash-based transaction sharding \cite{kokoris2018omniledger,zamani2018rapidchain,luu2016secure}: a transaction's hash value is used to determine which shard the transaction is placed in (i.e., its output shard). Since the hash value (e.g., SHA-256) of a transaction is indistinguishable from that of a random function, this mechanism efficiently distributes the transactions evenly among the shards and is thus widely adopted.
Therefore, an attacker can manipulate transaction hashes to direct an excessive number of transactions to one shard.
Since we argue that using hash values to determine the output shard is not secure, we propose a countermeasure that efficiently eliminates the attack for the sharding system. By delegating the task of determining the output shard to the validators, without using the transaction's hash value or any other attribute of the transaction, the adversary can no longer carry out this DoS attack. However, this raises two main challenges: (1) what basis can be used to determine the output shard of a transaction, and (2) how honest validators can agree on the output of (1). For the first challenge, we need a \textit{transaction sharding algorithm} to decide the output shard for each transaction. OptChain \cite{nguyen2019optchain} is an example of such an algorithm: it aims to minimize the number of cross-shard transactions while balancing the load among the shards. For the second challenge, a naive solution is to have the validators reach on-chain consensus on the output shard of every transaction. However, that would be very costly and would defeat the main concept of sharding, namely that each validator only processes a subset of transactions so as to parallelize the consensus work and storage.
To overcome the aforementioned challenge, we establish a system for executing the transaction sharding algorithm off-chain and attesting to the correctness of the execution. As blockchain validators are untrusted, we need to guarantee that the execution of the transaction sharding algorithm is tamper-proof. To accomplish this, we leverage the Trusted Execution Environment (TEE) to isolate the execution of the algorithm inside a TEE module, shielding it from potentially malicious hosts. With this approach, we impose no significant on-chain computation overhead compared to hash-based transaction sharding while maintaining the security properties of the blockchain. Moreover, this solution can be easily integrated into existing blockchain sharding proposals, and as modern Intel CPUs from 2014 onwards support TEE, the proposed countermeasure is compatible with current blockchain systems.
{\bf Contribution.} Our main contributions are as follows:
\begin{itemize}
\item We identify a new attack on blockchain sharding, namely the single-shard flooding attack, that exploits the loophole of hash-based transaction sharding.
\item To evaluate the potential impact of this attack, we develop a discrete-event simulator for blockchain sharding that can be used to observe how sharding performance changes when the system is under attack. Beyond our attack analysis, this simulator can also assist the research community in evaluating the performance of a sharding system without having to set up multiple computing nodes.
\item We propose a countermeasure to the single-shard flooding attack by executing transaction sharding algorithms using TEE. Specifically, we provide a formal specification of the system and formally analyze its security properties in the Universal Composability (UC) framework with a strong adversarial model.
\item To validate our proposed countermeasure, we develop a proof-of-concept implementation of the system and provide a performance analysis to demonstrate its feasibility.
\end{itemize}
{\bf Organization.} The rest of the paper is structured as follows. Some background and related work are summarized in \Cref{sec:relate}. \Cref{sec:Attack} describes in detail the single-shard flooding attack with some preliminary analysis to demonstrate its practicality. In \Cref{sec:analyze}, we present the construction of our simulator and conduct some experiments to demonstrate the damage of the attack. The countermeasure is discussed in \Cref{sec:countermeasure} with a formal specification of the system. \Cref{sec:sec-eval} gives a security analysis of the countermeasure along with a performance evaluation on the proof-of-concept implementation. Finally, \Cref{sec:con} concludes our paper.
\section{Background and Related Work} \label{sec:relate}
In this section, we establish some background on the blockchain sharding as well as examine some prior work that is related to ours.
{\bf Blockchain sharding.}
Several solutions \cite{ethereum,zamani2018rapidchain,nguyen2019optchain,luu2016secure,kokoris2018omniledger,wang2019monoxide,manuskin2019ostraka} suggest partitioning the blockchain into shards to address the scalability issue. Typically, with sharding, the blockchain's state is divided into multiple shards, each with its own independent state and transactions, managed by the shard's validators. By having multiple shards that each process a disjoint set of transactions, the computation power is parallelized, and sharding in turn boosts the system throughput proportionally to the number of shards. With the exception of Ethereum sharding \cite{ethereum}, most existing sharding protocols are developed on top of the Bitcoin blockchain. The main challenges of a sharding protocol include (1) how to securely assign validators to shards, (2) intra-shard consensus, (3) assigning transactions to shards, and (4) processing cross-shard transactions. Our proposed attack exploits the third and fourth challenges, which deal with transactions in a sharding system.
Simply put, a transaction is cross-shard if it requires confirmations from more than one shard. In the Unspent Transaction Outputs (UTXO) model used by Bitcoin \cite{nakamoto2008bitcoin}, each transaction has multiple outputs and inputs, where an output dictates the amount of money that is sent to a Bitcoin address. Each of the outputs can be used as an input to another transaction. To prevent double-spending, an output can be used only once. Denote by $tx$ a transaction with two inputs $tx_1$ and $tx_2$; this means $tx$ uses one or more outputs of transactions $tx_1$ and $tx_2$. Let $S_{1}, S_2,$ and $S_3$ be the shards containing $tx_1, tx_2,$ and $tx$, respectively; we refer to $S_1$ and $S_2$ as the \textit{input shards} of $tx$, and to $S_3$ as the \textit{output shard}. If these three shards are the same, $tx$ is an in-shard transaction; otherwise $tx$ is cross-shard.
To determine the output shard of a transaction, most sharding protocols use the hash value of the transaction to calculate the id of the output shard. By leveraging the hash value, the transactions are effectively assigned to shards in a uniformly random manner. However, in this work, we shall show that this mechanism can be manipulated to perform a DoS attack.
To process cross-shard transactions, several cross-shard validation mechanisms have been proposed \cite{kokoris2018omniledger,zamani2018rapidchain,wang2019monoxide}. A cross-shard validation mechanism determines how input and output shards coordinate to validate a cross-shard transaction. This makes the process of validating cross-shard transactions particularly expensive, since the transaction must wait for confirmations from all of its input shards before it can be validated in the output shard. \Cref{fig:crosstx} illustrates this process. Our attack takes advantage of this mechanism to cause a cascading effect that negatively impacts shards that are not being attacked.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{crosstx.png}
\caption{Processing cross-shard transaction $tx$. $tx$ has input shards $S_1$, $S_2$, and output shard $S_3$. $tx$ has to wait to be confirmed in the input shards before it can be validated in the output shard}
\label{fig:crosstx}
\end{figure}
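To make this waiting dependency explicit, the following simplified Python sketch models an output shard that buffers a cross-shard transaction until acceptance proofs have arrived from all of its input shards; it is an illustrative abstraction of an OmniLedger-style confirmation flow, not the protocol's actual implementation.
\begin{verbatim}
# Simplified model of cross-shard validation (illustrative only).  The output
# shard buffers a cross-shard transaction until proofs of acceptance arrive
# from *all* of its input shards; a congested input shard therefore delays the
# transaction in every other shard it touches.

class OutputShard:
    def __init__(self, shard_id):
        self.shard_id = shard_id
        self.waiting = {}   # tx_hash -> input-shard ids still missing a proof
        self.ready = []     # transactions that can now be validated locally

    def receive_tx(self, tx_hash, input_shards):
        missing = set(input_shards) - {self.shard_id}
        if missing:
            self.waiting[tx_hash] = missing
        else:
            self.ready.append(tx_hash)          # purely in-shard transaction

    def receive_proof(self, tx_hash, from_shard):
        missing = self.waiting.get(tx_hash)
        if missing is None:
            return
        missing.discard(from_shard)
        if not missing:                         # all input shards confirmed
            self.ready.append(tx_hash)
            del self.waiting[tx_hash]

# Example: tx with input shards S1 and S2 and output shard S3.
s3 = OutputShard(3)
s3.receive_tx("tx", input_shards=[1, 2])
s3.receive_proof("tx", 1)
s3.receive_proof("tx", 2)
assert "tx" in s3.ready
\end{verbatim}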
{\bf Flooding attacks.}
Over the years, we have observed the economic impact of this attack as Bitcoin has been flooded multiple times with dust transactions by malicious users to make legitimate users pay higher mining fees. In November 2017, the Bitcoin mempool size exceeded 115,000 unconfirmed transactions; the value of these unconfirmed transactions summed up to 110,611 BTC, worth over 900 million US dollars, weighing about 36 MB \cite{ccn.com_2017}. In June 2018, Bitcoin's mempool was flooded with 4,500 unconfirmed dust transactions, which eventually increased the mempool size to 45 MB. As a result, the mining fee was greatly increased, and legitimate users had to pay higher fees to get their transactions confirmed \cite{btc_2018}. Therefore, it is extremely critical to study this attack rigorously.
There exist some variants of the flooding attack that aim to overwhelm an entire blockchain system \cite{saad2019mempool,baqer2016stressing}, not a single shard. The main concept of the attack is to send a huge amount of transactions to overwhelm the mempool, fill blocks to their maximum size, and effectively delay other transactions. In a typical blockchain system, unconfirmed transactions are stored in the mempools managed by blockchain validators. In contrast to the limited block size, the mempool has no size limit.
This kind of attack requires the attacker to flood the blockchain system at a rate that is much greater than the system throughput. Intuitively, such an attack is not effective on a sharding system because its throughput is exceedingly high. In fact, a simple defense against this flooding attack is to add more shards to increase the overall throughput, since the system scales with the number of shards. In contrast, the throughput of one shard is not scalable; thus, it is more reasonable to attack a single shard and make it become the performance bottleneck of the whole system. In this paper, we show how attackers can manipulate a transaction's hash to overwhelm a single shard, thereby damaging the entire blockchain through the cascading effects caused by cross-shard transactions.
{\bf Blockchain on Trusted Execution Environment (TEE).}
A key building block of our countermeasure is TEE. Memory regions in TEE are transparently encrypted and integrity-protected with keys that are only available to the processor. TEE's memory is also isolated by the CPU hardware from the rest of the host's system, including high-privilege system software. Thus the operating system, hypervisor, and other users cannot access the TEE's memory. There have been multiple available implementations of TEE including Intel SGX \cite{intel2014software}, ARM TrustZone \cite{armltd}, and Keystone \cite{team}. Intel SGX also supports generating remote attestations that can be used to prove the correct execution of a program running inside a TEE.
There has been a recent growth in adopting TEEs to improve blockchains \cite{matetic2019bite,lind2019teechain,cheng2019ekiden,das2019fastkitten}, but not sharding systems. Teechain \cite{lind2019teechain} proposes an improvement over off-chain payment networks in Bitcoin, using TEE to enable asynchronous blockchain access. BITE \cite{matetic2019bite} leverages TEE to further enhance the privacy of Bitcoin clients. In \cite{das2019fastkitten,cheng2019ekiden}, the authors develop secure and efficient smart contract platforms on Bitcoin and Ethereum, respectively, using TEE as a module to execute the contracts' code.
We argue that TEE can be used to develop an efficient countermeasure for the single-shard flooding attack in which transaction sharding algorithms are securely executed inside a TEE module. However, since existing solutions are designed to address specific issues such as smart contracts or payment networks, applying them to blockchain {\em sharding} systems is not straightforward.
\section{Single-shard flooding attack}\label{sec:Attack}
In this section, we describe our proposed single-shard flooding attack on blockchain sharding, starting with the threat model and the details of how to perform the attack. Then, we present some preliminary analysis of the attack to illustrate its potential impact and practicality.
\subsection{Threat Model} \label{ssec:threat}
{\bf Attacker.} We use Bitcoin-based sharding systems, such as OmniLedger, RapidChain, and Elastico, as the attacker's target. We consider an attacker who is a client of the blockchain system such that:
\begin{enumerate}
\item The attacker possesses enough Bitcoin addresses to perform the attack. In practice, Bitcoin addresses can be generated at no cost.
\item The attacker has spendable bitcoins in its wallet, and the balance is large enough to issue multiple transactions between its addresses for this attack. Each of the transactions is able to pay the minimum relay fee $minRelayTxFee$. We discuss the detailed cost in the next section.
\item The attacker is equipped with software capable of generating transactions at a rate higher than a shard's throughput, as discussed in the subsequent section.
\item Since this is a type of DoS attack, the attacker can originate the attack from multiple sources to avoid being blocked by the blockchain network.
\end{enumerate}
{\bf Goals of attacks.}
By employing the concept of the flooding attack, the main goal of the single-shard flooding attack is to overwhelm a single shard by sending a huge amount of transactions to that shard. The impact of such flooding has been widely studied on non-sharded blockchains like Bitcoin: it reduces system performance by delaying the verification of legitimate transactions and eventually increases transaction fees.
Furthermore, because of cross-shard transactions, where each transaction requires confirmation from multiple shards, an attack that overwhelms one shard can affect the performance of other shards and reduce the system's performance as a whole. For example, in \Cref{fig:crosstx}, if $S_1$ were under attack, the transaction validation would also be delayed in $S_3$. Analysis from previous work \cite{zamani2018rapidchain} shows that placing transactions using their hash values could result in 99.98\% cross-shard transactions. Since the throughput of one shard is limited, under our attack it could effectively become the performance bottleneck of the whole system. Therefore, to make the most out of this scheme, the attacker would target the shard that has the lowest throughput in the system.
{\bf How to perform the attack.}
In most Bitcoin-based sharding systems, such as OmniLedger, RapidChain, and Elastico, the hash of a transaction determines which shard the transaction is placed in. Specifically, the ending bits of the hash value indicate the output shard id. The main idea of our attack is to have the attacker manipulate the transaction's hash in order to place the transaction into the shard they want to overwhelm. To accomplish this, we conduct a brute-force generation of transactions by altering the output addresses of a transaction until we find an appropriate hash value.
Let $T$ be the target shard which the attacker wants to overwhelm. We define a ``malicious transaction'' as a transaction whose hash was manipulated so that it is placed in shard $T$. Denote by $tx$ a transaction, by $tx.in$ its set of input addresses, and by $tx.out$ its set of output addresses. We further denote by $\mathcal{O}$ the set of the attacker's addresses and by $\mathcal{I} \subseteq \mathcal{O}$ the set of the attacker's addresses that hold some bitcoins. Let $\mathcal{H}(\cdot)$ be the SHA-256 hash function (its output is indistinguishable from that of a random function). \Cref{algo:tx} describes how to generate a malicious transaction in a system of $2^N$ shards.
Starting with a raw transaction $tx$, the algorithm randomly samples a set of input addresses for $tx.in$ from $\mathcal{I}$ such that the balance of those addresses is greater than the minimum relay fee. It then randomly samples a set of output addresses for $tx.out$ from $\mathcal{O}$ and sets the values of $tx.out$ so that $tx$ can pay the minimum relay fee. The hash value of the transaction is determined by double hashing the transaction's data with the SHA-256 function. The algorithm checks whether the final $N$ bits indicate $T$ ($\mathbin{\&}$ denotes a bitwise AND); if so, it outputs the malicious $tx$ that will be placed into shard $T$. Otherwise, it re-samples another set of output addresses for $tx.out$ from $\mathcal{O}$.
\begin{algorithm}
\caption{Generate a malicious transaction}
\label{algo:tx}
\hspace*{\algorithmicindent} \textbf{Input:} $\mathcal{I}, \mathcal{O}, N, T$ \\
\hspace*{\algorithmicindent} \textbf{Output:} A malicious transaction $tx$
\begin{algorithmic}[1]
\State $tx \gets$ raw transaction
\State $tx.in {\:\stackrel{{\scriptscriptstyle \hspace{0.2em}\$}} {\leftarrow}\:} \mathcal{I}$
\While {$\mathcal{H}(\mathcal{H}(tx)) \mathbin{\&} (\mathbin{1}^{256-N} \:\|\: \mathbin{0}^{N}) \neq T$}
\State $tx.out {\:\stackrel{{\scriptscriptstyle \hspace{0.2em}\$}} {\leftarrow}\:} \mathcal{O}$
\State Set values for $tx.out$ to satisfy the $minRelayTxFee$.
\EndWhile
\State \textbf{Ret} $tx$
\end{algorithmic}
\end{algorithm}
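A minimal Python rendition of \Cref{algo:tx} is given below. The random 500-byte payload and the re-randomized trailing bytes are stand-ins for building a raw transaction and for re-sampling $tx.out$ from $\mathcal{O}$; the exact serialization is not essential to the brute-force loop.
\begin{verbatim}
# Sketch of Algorithm 1: brute-force a transaction whose double SHA-256 hash
# lands in target shard T.  The random bytes stand in for a raw transaction
# whose output addresses/values are re-sampled on every iteration.
import hashlib
import os

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def output_shard(tx_bytes: bytes, n_bits: int) -> int:
    # Hash-based transaction sharding: shard id = last n_bits of H(H(tx)).
    h = int.from_bytes(double_sha256(tx_bytes), "big")
    return h & ((1 << n_bits) - 1)

def generate_malicious_tx(target_shard: int, n_bits: int) -> bytes:
    tx = bytearray(os.urandom(500))   # stand-in for a raw ~500-byte transaction
    while output_shard(bytes(tx), n_bits) != target_shard:
        tx[-32:] = os.urandom(32)     # stand-in for re-sampling tx.out
    return bytes(tx)

# Example: 64 shards (N = 6 ending bits); force the transaction into shard 0.
tx = generate_malicious_tx(target_shard=0, n_bits=6)
assert output_shard(tx, 6) == 0       # ~2^6 = 64 attempts expected
\end{verbatim}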
\subsection{Preliminary Analysis} \label{ssec:analysis}
\subsubsection{Capability of generating malicious transactions}
In this section, we demonstrate the practicality of the attack by assessing the capability of generating malicious transactions on a real machine. Suppose we have $2^N$ shards; then the $N$ ending bits of $\mathcal{H}(tx)$ determine the shard of a transaction $tx$. Suppose we want to put all transactions into shard 0; we then need to generate malicious transactions $tx$ such that the last $N$ bits of $\mathcal{H}(tx)$ are 0. We calculate the probability of generating a malicious transaction as follows. As a SHA-256 hash has 256 bits, the probability of generating a hash with $N$ ending zero bits is $\frac{2^{256-N}}{2^{256}}=\frac{1}{2^N}$. Therefore, we expect to obtain 1 malicious transaction per $2^N$ generated transactions. That means if we have 16 shards, we can obtain 1 malicious transaction (i.e., the last 4 bits are zero) per 16 generated transactions.
To assess the capability of generating malicious transactions, we conduct an experiment on an 8th generation Intel Core i7 laptop. The program to generate transactions is written in C++ and run with 8 threads. When the number of shards is 64, the program can generate up to 1,644,736 transaction hashes per second, of which 26,939 are malicious transactions (i.e., the 6 ending bits are zero), consistent with the expected rate of $1{,}644{,}736/2^{6} \approx 25{,}700$. In short, within 1 second, a laptop can generate about 26,939 malicious transactions, which is potentially much more than the throughput of one shard. \Cref{tab:maltx} shows the number of malicious transactions generated per second with respect to the number of shards. Note that, in practice, an attacker can easily produce much higher numbers by using a more capable machine with a faster CPU.
\begin{table}[]
\centering
\caption{Capability of generating malicious tx with respect to the number of shards using an Intel Core i7 laptop with 8 threads.}
\label{tab:maltx}
\begin{tabular}{@{}cc@{}}
\toprule
\multicolumn{1}{l}{No. of shards} & \multicolumn{1}{l}{No. of malicious tx per sec} \\ \midrule
2 & 823,512 \\
4 & 412,543 \\
8 & 205,978 \\
16 & 103,246 \\
32 & 52,361 \\
64 & 26,939 \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Cost of attacks} The default value of $minRelayTxFee$ in Bitcoin is 1,000 satoshi per kB, which is about \$0.10 (as of February 2020). Taking into account that the average transaction size is 500 bytes, each transaction needs to pay about \$0.05 as the $minRelayTxFee$. Our experiments below show that generating 2,500 malicious transactions per second is enough to limit the throughput of the whole system to that of the attacked shard. Hence, the attacker needs a balance of about \$125 to perform the attack effectively. Furthermore, by paying the minimum relay fee without paying the minimum transaction fee, the malicious transactions are still relayed to the attacked shard's mempool but are not confirmed, so the attacker retains its starting balance.
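Spelling out this arithmetic (the dollar conversion assumes a BTC price of roughly \$10{,}000 in February 2020):
\[
0.5\,\text{kB} \times 1{,}000\,\text{satoshi/kB} = 500\,\text{satoshi} \approx \$0.05,
\qquad
2{,}500 \times \$0.05 \approx \$125.
\]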
\subsubsection{Cascading effect of the single-shard flooding attack}
We estimate the portion of transactions that are affected by the attack. A transaction is affected if one of its input shards or its output shard is the attacked shard. In \cite{manuskin2019ostraka}, the authors calculate the ratio of transactions that could be affected when one shard is under attack. Specifically, for a system with $n$ shards and transactions with $m$ inputs, the probability of a transaction being affected by the attack is $1-(\frac{n-1}{n})^{m+1}$. The result is illustrated in \Cref{fig:impact}. As can be seen, a typical transaction with 2 or 3 inputs has up to a 70\% chance of being affected with 4 shards, while with 16 shards about 20\% of the transactions are still affected. Note that this number only represents the transactions that are ``directly'' affected by the attack; the actual number is higher when considering transactions that depend on the delayed transactions.
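This expression can be checked directly; for instance, the short Python snippet below evaluates it for typical 2- and 3-input transactions.
\begin{verbatim}
# Probability that a transaction with m inputs touches the attacked shard in a
# system of n shards, under hash-based (uniform) transaction sharding.

def p_affected(n_shards: int, m_inputs: int) -> float:
    return 1.0 - ((n_shards - 1) / n_shards) ** (m_inputs + 1)

for n in (4, 16):
    for m in (2, 3):
        print(f"n={n:2d} shards, m={m} inputs: {p_affected(n, m):.1%}")
# n= 4 shards, m=2 inputs: 57.8%
# n= 4 shards, m=3 inputs: 68.4%
# n=16 shards, m=2 inputs: 17.6%
# n=16 shards, m=3 inputs: 22.8%
\end{verbatim}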
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{impact}
\caption{Affected transactions by the single-shard flooding attack}
\label{fig:impact}
\end{figure}
Even though the analysis shows that the number of affected transactions is lower at 16 shards than at 4 shards, the attack in fact does much more damage to the 16-shard system. The intuition is that as we increase the number of shards, we also increase the number of input shards per transaction. Since a transaction has to wait for the confirmations from all of its input shards, an affected transaction in the 16-shard system takes more time to be validated than one in the 4-shard system. The experiments in the next section illustrate this impact in more detail.
\section{Analyzing the Attack's Impacts} \label{sec:analyze}
In this section, we present a detailed analysis of our attack, especially how it impacts the system performance as a whole. Before that, we describe the design of our simulator that is developed to analyze the performance of a blockchain sharding system.
\subsection{Simulator}
Our implementation is based on SimBlock \cite{aoki2019simblock}, a discrete-event Bitcoin simulator that was designed to test the performance of the Bitcoin network. SimBlock is able to simulate the geographical distribution of Bitcoin nodes across six regions (North America, South America, Europe, Asia, Japan, Australia), whose bandwidth and propagation delay are set to reproduce the actual behavior of the Bitcoin network. Nevertheless, SimBlock fails to capture the role of transactions in the simulation, which is essential for evaluating the performance of blockchain sharding systems.
Our work improves SimBlock by taking Bitcoin transactions into consideration and simulating the behavior of sharding. \Cref{fig:class} shows a simple UML class diagram depicting the relations between the components of our simulator. As can be seen later, our simulator can easily be used to evaluate the performance of any existing or future sharding protocol.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{class.png}
\caption{UML class diagram of the simulator}
\label{fig:class}
\end{figure}
\subsubsection{Transactions} The sole purpose of SimBlock was to show block propagation, so the authors did not consider transactions. To represent Bitcoin transactions, we adopt the Transaction-as-Nodes (TaN) network proposed in \cite{nguyen2019optchain}. Each transaction is abstracted as a node in the TaN network, and there is a directed edge $(u,v)$ if transaction $u$ uses transaction $v$ as an input. In our simulator, each transaction is an instance of a Transaction class and can be obtained directly from the Bitcoin dataset. At the beginning of the simulation, a Client instance loads each transaction from the dataset and sends it to the network to be confirmed by Nodes. Depending on the transaction sharding algorithm, each transaction could be associated with one or more shards.
Furthermore, our simulator can also emulate real Bitcoin transactions in case we need more transactions than what we have in the dataset or we want to test the system with a different set of transactions. With regard to sharding, the two important factors of a transaction are its degree and its input shards. From the Bitcoin dataset of more than 300 million real Bitcoin transactions, we fit the degree distribution with a power-law function as in \Cref{fig:degree} (black dots are the data, the blue line is the resulting power-law function). The resulting function is $y = 10^{6.7}x^{-2.3}$. \Cref{fig:inputshards} shows the distribution of the number of input shards with 16 shards (using hash-based transaction sharding), which is also fitted with a power-law function; the resulting function is $y = 10^{7.2}x^{-2.2}$. Hence, the Client can use these distributions to sample the degree and input shards when generating transactions that resemble the distribution of the real dataset.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Degree.png}
\caption{Degree distribution of the Bitcoin transactions. The black dots are the data, the blue line shows a fitted power-law $y = 10^{6.7}x^{-2.3}$.}
\label{fig:degree}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{inshard.png}
\caption{Distribution of Bitcoin transactions' number of input shards. The black dots are the data, the blue line shows a fitted power-law $y = 10^{7.2}x^{-2.2}$.}
\label{fig:inputshards}
\end{figure}
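When emulating transactions, degrees and input-shard counts can be drawn from these fitted power laws. One simple way to do so, shown below, is inverse-transform sampling from a power law with the fitted exponent; this is an illustrative sampling choice, not necessarily the simulator's exact procedure.
\begin{verbatim}
# Inverse-transform sampling from a power law with exponent alpha and minimum
# value x_min (an illustrative emulation choice).  With alpha = 2.3 this
# approximates the fitted degree distribution y ~ x^(-2.3) of the TaN network.
import random

def sample_power_law(alpha: float, x_min: int = 1) -> int:
    u = random.random()                      # u in [0, 1)
    return int(x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0)))

random.seed(0)
degrees = [sample_power_law(alpha=2.3) for _ in range(10)]
print(degrees)   # mostly 1s and 2s, with occasional heavy-tailed outliers
\end{verbatim}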
\subsubsection{Sharding} After the simulator generates Node instances, each of them is distributed into an instance of Shard. All nodes in a shard share the same ledger and a mempool of unconfirmed/pending transactions. In the class Node, we implement a cross-shard validation algorithm that decides how nodes in different shards can communicate and confirm cross-shard transactions. In the current implementation, we use the mechanism proposed in \cite{kokoris2018omniledger} to process cross-shard transactions.
Upon receiving a transaction, each Node instance relays it to the destination shard. When the transaction reaches the shard, it is stored in the mempool of the Node instances in that shard. Each transaction in the mempool is then validated using an intra-shard consensus protocol.
\subsubsection{Use cases of the simulator} Besides being used to test the impact of our proposed attack, the simulator can also be used by researchers to evaluate the performance of multiple blockchain sharding systems. So far, most experiments on blockchain sharding have had to be run on numerous rented virtual machines \cite{zamani2018rapidchain,kokoris2018omniledger,luu2016secure,wang2019monoxide}, which is notably costly and complicated to set up. Without having to build the whole blockchain system, our simulator is particularly useful when researchers need to test various algorithms and system configurations long before deploying the real system.
By using simulation, various setups can be easily evaluated and compared, thereby making it possible to recognize and resolve problems without the need of performing potentially expensive field tests. By exploiting a pluggable design, the simulator can be easily reconfigured to work with different algorithms on transaction sharding, cross-shard validation, validators assignment, and intra-shard consensus protocol.
\subsection{Experimental Evaluations}
Our experiments are conducted on 10 million real Bitcoin transactions by injecting them into the simulator at some fixed rates. We generate 4000 validator nodes and randomly distribute them into shards. In the current Bitcoin setting, the block size limit is 1 MB, the average size of a transaction is 500 bytes, hence, each block contains approximately 2000 transactions. We evaluate the system performance with 4, 8, 12, and 16 shards, which are the number of shards that were used in previous studies \cite{kokoris2018omniledger,zamani2018rapidchain,nguyen2019optchain}.
\subsubsection{Throughput} The experiment in this section illustrates how malicious transactions affect the system throughput. In order to find the best throughput of the system, we gradually increase the transactions rate (i.e., the rate at which transactions are injected into the system) and observe the final throughput until it stops increasing. At 16 shards, the best throughput is about 4000 tps, which is achieved when the transactions rate is about 5000 tps. For this experiment, we fix the transactions rate at 5000 tps so that the system is always at its best throughput with respect to the number of shards.
To perform the attack, the attacker runs \Cref{algo:tx} to direct some portion of malicious transactions into shard 0. For example, if 10\% of the transactions are malicious, then at each second, 500 transactions are put into shard 0, and the remaining 4,500 transactions are distributed among the shards according to their hash values. The results are shown in \Cref{fig:thru-shard}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{thru-shard-10m.png}
\caption{Impact on system throughput}
\label{fig:thru-shard}
\end{figure}
At 0\%, the system is not under attack and achieves its best throughput with respect to the number of shards. The horizontal dashed line illustrates the throughput of 1 shard, which is the lower bound of the system throughput. As can be seen, when we increase the number of malicious transactions, the system throughput rapidly decreases. This behavior can be explained by the many cross-shard transactions associated with the attacked shard: their delays produce a severe cascading effect that ends up hampering the performance of the other shards. Thus, the throughput as a whole is diminished. Moreover, we can observe that higher numbers of shards are more vulnerable to the attack. With 16 shards, the performance drops sharply as we increase the portion of malicious transactions toward 50\%. Specifically, with only 20\% malicious transactions, the throughput is reduced by more than half. This behavior essentially confirms our preliminary analysis.
Another interesting observation is that at 50\% malicious transactions, the throughput nearly reaches its lower bound; hence, the sharding system would not be any faster than using only 1 shard. At this point, the attacker has accomplished its goal of making the attacked shard the bottleneck of the whole system. 50\% malicious transactions translates to 2,500 malicious transactions sent to the system each second, while the analysis in \Cref{tab:maltx} shows that a normal laptop can easily generate more than 100,000 malicious transactions per second.
\subsubsection{Latency} In this experiment, we analyze the impact of the single-shard flooding attack on the system latency, which is the average amount of time needed to validate a transaction. To avoid backlog when the system is not under attack, for each number of shards, the transactions rate is set to match the best system throughput. The results are shown in \Cref{fig:lat-shard}. In the same manner as the previous experiment, the attack effectively increases the latency of the system as a whole. We shall see in the next experiment that the attack creates a serious backlog in the mempools of the shards, thereby increasing the waiting time of transactions and eventually raising the average latency.
This experiment also corroborates the throughput experiment in that greater numbers of shards are more vulnerable to the attack. With 16 shards, the system provides the fastest transaction processing when not under attack, yet it becomes the slowest one with only 10\% malicious transactions. Additionally, when the attacker generates 20\% malicious transactions, the latency is increased by more than 10 times. Therefore, we can conclude that although adding more shards helps improve the system performance, under our attack the system becomes more vulnerable.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{lat-shard-10m.png}
\caption{Impact on system latency}
\label{fig:lat-shard}
\end{figure}
\subsubsection{Queue/Mempool size}
In the following experiments, we investigate the impact of the single-shard flooding attack on the queue (or mempool) size of shard 0, which is the shard under attack. This gives us insights into how malicious transactions cause backlog in the shard. First, we fix the number of shards and vary the portion of malicious transactions. \Cref{fig:queue-ratio} illustrates the queue size over time of shard 0 in a system of 16 shards, where each line represents a portion of malicious transactions. When the system is not under attack, the queue size is stable with fewer than 15,000 transactions at any point in time. With only 10\% malicious transactions, the queue size reaches more than 2 million transactions.
Note that under our attack, the transactions are injected into shard 0 at a rate that is much higher than its throughput; thus, the queue keeps growing until all transactions have been injected. At that point, transactions are no longer added to the shard while the shard is still processing transactions from the queue, hence the queue size decreases. This explains why the lines (i.e., queue sizes) go down towards the end of the simulation.
The result also demonstrates that the congestion gets worse as we increase the malicious transactions. Due to the extreme backlog, transactions have to wait in the mempool for a significant amount of time, thereby increasing their waiting time. This explains the negative impact of malicious transactions on system throughput and latency.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{queue-ratio.png}
\caption{Impact on the attacked shard's queue size at 16 shards}
\label{fig:queue-ratio}
\end{figure}
Next, we observe the queue size with different numbers of shards. \Cref{fig:queue-shard} presents the impact of the attack with 20\% malicious transactions for different numbers of shards. As can be seen, when we increase the number of shards, the backlog of transactions builds up much faster and larger because there are more cross-shard transactions. This result confirms our previous claim that greater numbers of shards are more vulnerable to the attack.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{queue-shard.png}
\caption{Impact on the attacked shard's queue size with 20\% malicious transactions}
\label{fig:queue-shard}
\end{figure}
\subsubsection{Summary} The experiments presented in this section have shown that our attack effectively reduces the performance of the whole system by attacking only a single shard. By generating malicious transactions according to \Cref{algo:tx}, the attacker easily achieves its goal of limiting the system performance to the throughput of one shard. Our preliminary analysis in \Cref{ssec:analysis} shows that the attacker is fully capable of generating an excessive amount of malicious transactions at low cost, thereby demonstrating the practicality of the attack.
\section{Countermeasure}\label{sec:countermeasure}
As we have argued that using hash values to determine the output shards is susceptible to the single-shard flooding attack, we instead delegate the task of determining the output shards, without using hash values, to the validators. To achieve this, we consider the validators running a deterministic transaction sharding algorithm. The program takes the form $S_{out} = txsharding(tx, st)$: it ingests as inputs a blockchain state $st$ and the transaction $tx$, and generates the output shard id $S_{out}$ of $tx$ calculated at state $st$. Moreover, the algorithm $txsharding$ is made public. The minimum requirement for an efficient $txsharding$ is that it includes a load-balancing mechanism to balance the load among the shards. Finally, we assume that $txsharding$ does not use a transaction's hash as the basis for determining its output shard. At the time of writing, OptChain \cite{nguyen2019optchain} is a suitable algorithm for $txsharding$.
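The interface assumed of such an algorithm can be summarized by the following Python sketch; the least-loaded-input-shard rule shown here is only a placeholder illustrating the required signature and the load-balancing requirement, and is not the OptChain algorithm itself.
\begin{verbatim}
# Interface of a deterministic transaction sharding algorithm
# S_out = txsharding(tx, st).  The least-loaded-input-shard rule is a
# placeholder, NOT the OptChain algorithm.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Transaction:
    tx_hash: str
    input_shards: List[int]      # shards holding the UTXOs this tx spends

@dataclass
class ChainState:
    shard_loads: Dict[int, int]  # e.g., current mempool size of each shard

def txsharding(tx: Transaction, st: ChainState) -> int:
    """Deterministically pick an output shard without using tx_hash."""
    candidates = tx.input_shards or list(st.shard_loads)
    # Tie-break by shard id so every honest executor returns the same answer
    # for the same (tx, st) pair.
    return min(candidates, key=lambda s: (st.shard_loads[s], s))

# Example: a transaction spending UTXOs held in shards 1 and 2.
st = ChainState(shard_loads={0: 120, 1: 40, 2: 75, 3: 10})
tx = Transaction(tx_hash="abc", input_shards=[1, 2])
print(txsharding(tx, st))   # -> 1 (the least-loaded input shard)
\end{verbatim}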
However, by the nature of blockchain, the validators are untrusted. A straw-man approach is to run $txsharding$ on-chain and let the validators reach consensus on its output; the validators would then have to reach consensus on every single transaction. This alone dismisses the original idea of sharding, which is to improve blockchain by parallelizing the consensus work and storage, i.e., each validator only handles a disjoint subset of transactions. To avoid costly on-chain consensus on the output of $txsharding$, we need to execute the algorithm off-chain while ensuring that the operation is tamper-proof in the presence of malicious validators. To tackle this challenge, we leverage the Trusted Execution Environment (TEE) to securely execute the transaction sharding algorithm on the validators.
TEE in a computer system is realized as a module that performs verifiable executions in such a way that no other applications, not even the OS, can interfere. Simply speaking, a TEE module is a trusted component within an untrusted system. An important feature of TEE is the ability to issue remote attestations, which are digital signatures over the TEE's code and the execution's output. These signatures are produced with private keys that are known only to the TEE hardware. Intel SGX \cite{intel2014software} is a widely used implementation of TEE in which remote attestation is well supported. However, TEE does not offer satisfactory availability guarantees, as the enclave could be arbitrarily terminated by a malicious host or simply by a loss of power.
In this work, our goal is to use TEE as a computing unit installed in the validators that assists the clients in determining the transactions' output shard with an attestation to prove the correctness of execution. Our overall concept of using TEE as a countermeasure for the attack is as follows:
\begin{itemize}
\item Validators are equipped with TEE modules (most modern Intel CPUs from 2014 support Intel SGX). A transaction sharding algorithm such as OptChain \cite{nguyen2019optchain} is installed in the TEE module.
\item When a client issues a transaction to a validator, the validator will run its TEE module to get the output shard of that transaction, together with an \textit{attestation} to prove the code's integrity as well as the correctness of the execution.
\item The client can verify the computation using the attestation and then send the transaction together with the attestation to the blockchain system in the same manner as issuing an ordinary transaction.
\item The blockchain validators, upon receiving that data from the client, verify the attestation before relaying the transaction to the input and output shards.
\end{itemize}
With this concept, we can rest assured that an attacker cannot manipulate transactions to overwhelm a single shard. Additionally, we do not need the whole set of blockchain validators to reach consensus on a transaction's output shard; this computation is instead done off-chain by one or a small number of validators. However, there are some technical challenges when using TEE in an untrusted network:
\begin{itemize}
\item A malicious validator can terminate the TEE at its discretion, which results in losing its state. The TEE module must be designed to tolerate such failures.
\item Although the computation inside the TEE is trusted and verifiable via attestation, a malicious validator can deceive the TEE module by feeding it fraudulent data.
\end{itemize}
To overcome these challenges, we design a \textit{stateless} TEE module in which any persistent state is stored in the blockchain. To obtain the state, the TEE module acts as a blockchain client that queries the block headers, thereby ensuring the correctness of the data (this is how we exploit the immutability of the blockchain to overcome the pitfalls of TEE modules). With this design, even when some TEE modules are arbitrarily shut down, the security properties of the protocol are not affected.
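The sketch below illustrates, under simplifying assumptions, how a stateless TEE module can check that the block headers supplied by its untrusted host form a consistent hash chain; the field names are illustrative, and a real implementation would additionally verify proof-of-work and anchor the first header to a hard-coded checkpoint.
\begin{verbatim}
# Sketch: a stateless TEE checks that host-supplied block headers chain
# correctly.  Field names are illustrative; proof-of-work checks and a trusted
# genesis/checkpoint are omitted.
import hashlib
from dataclasses import dataclass

@dataclass
class BlockHeader:
    prev_hash: bytes
    merkle_root: bytes

def header_hash(h: BlockHeader) -> bytes:
    data = h.prev_hash + h.merkle_root
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_chain(headers: list) -> bool:
    """Each header must commit to the hash of the one before it."""
    return all(headers[i].prev_hash == header_hash(headers[i - 1])
               for i in range(1, len(headers)))
\end{verbatim}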
\subsection{System Overview and Security Goals}
In this section, we present an overview of our system for the countermeasure and establish some security goals.
\subsubsection{System overview}
Our system considers two types of entities: clients and validators.
\begin{itemize}
\item \textit{Clients} are the end-users of the system who are responsible for generating transactions. The clients are not required to be equipped with a TEE-enabled platform. In fact, the clients in our system are extremely lightweight.
\item \textit{Validators} in each shard maintain a distributed append-only ledger, i.e., a blockchain, of that shard with an intra-shard consensus protocol. Validators require a TEE-enabled platform to run the transaction sharding algorithm.
\end{itemize}
For simplicity, we assume that a client has a list of TEE-enabled validators and it can send requests to multiple validators to tolerate certain failures. Each TEE-enabled validator has $txsharding$ installed in its TEE module. We also assume that the TEE module in each of the validators constantly monitors the blockchain from each shard, so that it always has the latest state of the shards.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{workflow.png}
\caption{System overview}
\label{fig:workflow}
\end{figure}
Denoting $ENC_k(m)$ as the encryption of message $m$ under key $k$ and $DEC_k(c)$ as the decryption of ciphertext $c$ under key $k$, the steps for computing the output shard of a transaction are as follows (\Cref{fig:workflow}):
\begin{enumerate}
\item Client $C$ sends the transaction $tx$ to a TEE-enabled validator: $C$ obtains the public key $pk_{TEE}$ of the validator, computes $inp = ENC_{pk_{TEE}}(tx)$, and sends $inp$ to the validator.
\item The validator loads $inp$ into its TEE module and starts the execution of $txsharding$ in TEE.
\item The TEE decrypts $inp$ using its private key and executes $txsharding$ on $tx$ and the current state $st$ of the blockchain. The output $S_{out}$ is then generated together with the state $st$ upon which $S_{out}$ was determined, and a signature $\sigma_{TEE}$ proving the correct execution.
\item The validator then sends $(S_{out}, st, \sigma_{TEE}, h_{tx})$ to $C$, where $h_{tx}$ is the hash of the transaction. $C$ verifies $\sigma_{TEE}$ before sending $(\sigma_{TEE},tx)$ to the blockchain network for final validation. If $C$ sends requests to more than one validator, it chooses the $S_{out}$ that reflects the latest state.
Note that $C$ could choose an outdated $S_{out}$; however, other entities can verify whether a pair $(S_{out}, st)$ is indeed the output of a TEE, and the blockchain system can simply reject transactions whose $S_{out}$ was computed based on an outdated $st$.
\item Upon receiving $(\sigma_{TEE},tx)$, the validators again verify $\sigma_{TEE}$ before proceeding with relaying and processing the transaction.
\end{enumerate}
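On the client and validator side, steps 4 and 5 reduce to a signature check over the TEE's output. The sketch below assumes that $\sigma_{TEE}$ is an ECDSA signature over the serialized pair $(prog, outp)$, as in the functionality $\mathcal{F}_{TEE}$ defined later; the concatenation-based serialization and the distribution of $pk_{TEE}$ are deployment-specific assumptions.
\begin{verbatim}
# Verifying sigma_TEE (steps 4-5), assuming an ECDSA signature over the
# serialized (prog, outp) pair.  The concatenation-based serialization is an
# assumption for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_attestation(pk_tee, prog_hash: bytes, outp: bytes,
                       sigma_tee: bytes) -> bool:
    try:
        pk_tee.verify(sigma_tee, prog_hash + outp, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    sk = ec.generate_private_key(ec.SECP256K1())   # stands in for the TEE key
    prog_hash, outp = b"hash-of-txsharding", b"S_out=3||st||h_tx"
    sigma = sk.sign(prog_hash + outp, ec.ECDSA(hashes.SHA256()))
    assert verify_attestation(sk.public_key(), prog_hash, outp, sigma)
\end{verbatim}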
In practice, TEE platforms like Intel SGX perform the remote attestation as follows. The attestation for a correct computation takes the form of a signature $\pi$ over the output of the TEE. Suppose Intel SGX is the TEE implementation and the execution in the TEE results in an output $S_{out}$ and an attestation $\sigma_{TEE}$; as indicated in \cite{cheng2019ekiden}, the validator sends $\sigma_{TEE}$ to the Intel Attestation Service (IAS) provided by Intel. The IAS then verifies $\sigma_{TEE}$ and replies with $\pi = (b, \sigma_{TEE}, \sigma_{IAS})$, where $b$ indicates whether $\sigma_{TEE}$ is valid and $\sigma_{IAS}$ is a signature over $b$ and $\sigma_{TEE}$ by the IAS. Since $\pi$ is basically a signature, it can be verified without using a TEE or having to contact the IAS.
\subsubsection{Adversarial model and Security goals}\label{sec:countermeasurethreat}
In the threat model in \Cref{ssec:threat}, the attacker only plays the role of a client; however, we stress that the countermeasure must not violate the adversarial model of blockchain, which assumes malicious validators. Thus, in designing the countermeasure, we extend the previous threat model as follows.
In the same manner as previous work on TEE-enabled blockchains \cite{lind2019teechain,cheng2019ekiden}, we consider an adversary who controls the operating system and any other high-privilege software on the validators. The adversary may drop, interfere with, or send arbitrary messages at any time during execution. We assume that the adversary cannot break the hardware security enforcement of TEE: it cannot access processor-specific keys (e.g., the attestation and sealing keys) and it cannot access TEE memory that is encrypted and integrity-protected by the CPU.
The adversary can also corrupt an arbitrary number of clients. Clients are lightweight; they only send requests to validators to get the output shard of a transaction, and they can verify the computation without a TEE. We assume honest clients trust their own platforms and software, but not those of others. We consider that the blockchain performs the prescribed computation correctly and is always available.
With respect to the adversarial model, we define the security notions of interest as follows:
\begin{enumerate}
\item Correct execution: the output of a TEE module must reflect the correct execution of $txsharding$ with respect to inputs $tx$ and $st$, despite malicious host.
\item The system is secure against the aforementioned single-shard flooding attack.
\item Stateless TEE: the TEE module does not need to retain information regarding previous states or computation.
\end{enumerate}
\subsubsection{Blockchain sharding configuration}
For ease of presentation, we assume a sharding system that resembles the OmniLedger blockchain \cite{kokoris2018omniledger}. Suppose the sharding system has $n$ shards; for each $i \in \{1, 2, ..., n\}$, we denote by $BC_i$ and $BH_i$ the whole ledger and the block headers of shard $S_i$, respectively. Furthermore, we assume that each shard $S_i$ keeps track of its UTXO database, denoted by $U_i$. Given a shard $S_i$, each validator in $S_i$ monitors the following databases: $BC_i$, $BH_{j\neq i}$, and $U_{1,2, ..., n}$.
As we use an OmniLedger-like blockchain, each shard elects a leader who is responsible for accepting new transactions into the shard. For simplicity, we consider that the sharding system provides an API $validate(tx)$ that takes a transaction $tx$ as input and performs the transaction validation mechanism on $tx$. $validate(tx)$ returns true if $tx$ is successfully committed to the blockchain; otherwise it returns false.
Additionally, we consider the system uses a signature scheme $\Sigma(G, Sig, Vf)$ that is assumed to be EU-CMA secure (Existential Unforgeability under a Chosen Message Attack). ECDSA is a suitable signature scheme in practice \cite{johnson2001elliptic}. Moreover, the hash function $\mathcal{H}(\cdot)$ used by the system is also assumed to be collision resistant: there exists no efficient algorithm that can find two inputs $a \neq b$ such that $\mathcal{H}(a) = \mathcal{H}(b)$.
Finally, we assume that each TEE generates a public/secret key-pair and the public key is publicly available to all entities in the network. In practice, the public keys could be stored in a global identity blockchain.
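As a rough illustration of these assumptions, the following Python sketch instantiates $\Sigma = (G, Sig, Vf)$ with ECDSA and $\mathcal{H}(\cdot)$ with SHA-256 using the \texttt{cryptography} package; it is an illustrative stand-in rather than part of our implementation.
\begin{verbatim}
# Illustrative instantiation of Sigma = (G, Sig, Vf) and H(.)
# with ECDSA over secp256k1 and SHA-256.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def keygen():                      # G: generate a key pair (sk, pk)
    sk = ec.generate_private_key(ec.SECP256K1())
    return sk, sk.public_key()

def sig(sk, msg: bytes) -> bytes:  # Sig: sign msg under sk
    return sk.sign(msg, ec.ECDSA(hashes.SHA256()))

def vf(pk, msg: bytes, signature: bytes) -> bool:  # Vf: verify
    try:
        pk.verify(signature, msg, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

def H(data: bytes) -> str:         # collision-resistant hash
    return hashlib.sha256(data).hexdigest()

sk_tee, pk_tee = keygen()
tx = b"example transaction"
assert vf(pk_tee, tx, sig(sk_tee, tx)) and len(H(tx)) == 64
\end{verbatim}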
\subsection{Modeling Functionality of Blockchain and TEE}
We specify the ideal blockchain $\mathcal{F}_{blockchain}$ as an append-only decentralized database as in \cref{fig:fb}. $\mathcal{F}_{blockchain}$ stores $\mathbf{DB} = \{DB_i | i \in \{1, 2, ..., n\}\}$, which represents the set of blockchains held by shards $1, 2, ..., n$, respectively. Each blockchain $DB_i$ is indexed by the transactions' hash value $h_{tx}$. We assume that by writing to the blockchain of a shard, all validators of the shard reach consensus on that operation.
\begin{figure}
\noindent\fbox{%
\parbox{\columnwidth}{%
\begin{center}
Functionality $\mathcal{F}_{blockchain}$
\end{center}
Store $\mathbf{DB} = \{DB_i | i \in \{1, 2, ..., n\}\}$, where each $DB_i$ is an append-only database indexed by $h_{tx}$.
\begin{itemize}
\item On \textbf{input} $read(id, h_{tx})$ from $P_i$: return $DB_{id}[h_{tx}]$ or $\bot$ if it does not exist.
\end{itemize}
\begin{itemize}
\item On \textbf{input} $write(id, tx)$ from $P_i$:
\begin{enumerate}
\item If $validate(tx) = 0$ then \textbf{output} $reject()$.
\item Otherwise, $DB_{id} = DB_{id} \:\|\: tx$ and \textbf{output} $accept()$
\end{enumerate}
\end{itemize}
}
}
\caption{Ideal blockchain $\mathcal{F}_{blockchain}$}
\label{fig:fb}
\end{figure}
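For concreteness, a minimal Python sketch of $\mathcal{F}_{blockchain}$ is given below; the validation rule passed to it is a placeholder for the sharding system's actual $validate(tx)$ mechanism.
\begin{verbatim}
# Minimal sketch of F_blockchain: one append-only database per shard,
# indexed by the transaction hash; validate() is a placeholder.
from typing import Callable, Dict, Optional

class FBlockchain:
    def __init__(self, n_shards: int, validate: Callable[[dict], bool]):
        self.db: Dict[int, Dict[str, dict]] = {
            i: {} for i in range(1, n_shards + 1)}
        self.validate = validate

    def read(self, shard_id: int, h_tx: str) -> Optional[dict]:
        # return DB_id[h_tx], or None if it does not exist
        return self.db[shard_id].get(h_tx)

    def write(self, shard_id: int, tx: dict) -> str:
        # reject invalid transactions, otherwise append and accept
        if not self.validate(tx):
            return "reject"
        self.db[shard_id][tx["hash"]] = tx
        return "accept"

# toy usage with a trivial validation rule
chain = FBlockchain(n_shards=4, validate=lambda tx: "hash" in tx)
chain.write(2, {"hash": "abc", "inputs": [], "outputs": []})
assert chain.read(2, "abc") is not None and chain.read(1, "abc") is None
\end{verbatim}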
We also specify the ideal TEE $\mathcal{F}_{TEE}$ that models a TEE module in \Cref{fig:ftee}, following the formal abstraction in \cite{pass2017formal}. On startup, $\mathcal{F}_{TEE}$ generates a public/secret key pair, and only the public key is accessible by other parties. With the public key, other entities in the network are able to verify messages signed by $\mathcal{F}_{TEE}$'s secret key. This represents the attestation service in a TEE-enabled platform. A TEE module is an isolated software container that is installed with a program which, in this work, is a transaction sharding algorithm. $\mathcal{F}_{TEE}$ abstracts a TEE module as a trusted third party for confidentiality, execution, and authenticity with respect to any entity that is part of the system. $prog$ is a program that is installed to run in a TEE module; the input and output of $prog$ are denoted by $inp$ and $outp$, respectively.
\begin{figure}
\noindent\fbox{%
\parbox{\columnwidth}{%
\begin{center}
Functionality $\mathcal{F}_{TEE}$
\end{center}
\begin{itemize}
\item On \textbf{initialization}
\begin{itemize}
\item Generate $(pk_{TEE}, sk_{TEE})$
\end{itemize}
\end{itemize}
\begin{itemize}
\item On \textbf{input} $install(prog)$ from $P_k$:
\begin{itemize}
\item if $prog$ is not stored then $store(prog)$
\end{itemize}
\end{itemize}
\begin{itemize}
\item On \textbf{input} $resume(inp_c)$ from $P_k$:
\begin{enumerate}
\item If $prog$ is not stored then return $\bot$.
\item $outp = prog(inp_c)$
\item $\sigma = \Sigma.Sig(sk_{TEE}, (prog, outp))$
\item Return $(\sigma, outp)$ to $P_k$
\end{enumerate}
\end{itemize}
}
}
\caption{Ideal TEE $\mathcal{F}_{TEE}$}
\label{fig:ftee}
\end{figure}
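A toy Python sketch of $\mathcal{F}_{TEE}$ follows; an HMAC under a key generated at start-up stands in for the processor's attestation key and is used for illustration only.
\begin{verbatim}
# Toy sketch of F_TEE: generate keys on start-up, install a program,
# and sign (prog, outp) on resume. HMAC stands in for attestation.
import hashlib, hmac, os, pickle
from typing import Any, Callable, Optional, Tuple

class FTee:
    def __init__(self):
        self.sk_tee = os.urandom(32)      # stand-in for (pk_TEE, sk_TEE)
        self.prog: Optional[Callable] = None

    def install(self, prog: Callable) -> None:
        if self.prog is None:             # store prog if not stored yet
            self.prog = prog

    def resume(self, inp: Any) -> Tuple[bytes, Any]:
        if self.prog is None:             # no program installed
            raise RuntimeError("no program installed")
        outp = self.prog(inp)
        msg = pickle.dumps((self.prog.__name__, outp))
        sigma = hmac.new(self.sk_tee, msg, hashlib.sha256).digest()
        return sigma, outp
\end{verbatim}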
\begin{figure}
\noindent\fbox{%
\parbox{\columnwidth}{%
\begin{center}
Program $prog$ run in the TEE module
\end{center}
\textbf{Input:} $inp_c$
\begin{enumerate}
\item Request the current state $st$ from the sealed database
\item $tx \gets DEC_{sk_{TEE}}(inp_c)$
\item $S_{out} = txsharding(tx, st)$
\item Return $(S_{out}, st, \mathcal{H}(tx))$
\end{enumerate}
}
}
\caption{Program $prog$ run in the TEE module}
\label{fig:prog}
\end{figure}
On initialization, the TEE module needs to download and monitor $U_i$ for $i \in \{1,2,\ldots,n\}$. These data are encrypted using the TEE's secret key and then stored in the host storage, which is also referred to as sealing. In this way, the TEE ensures that its data in secondary storage cannot be tampered with by a malicious host. To ensure that the TEE always uses the latest version of the sealed UTXO database, rollback-protection systems such as ROTE \cite{matetic2017rote} can be used.
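The sketch below illustrates the sealing idea, with authenticated encryption (AES-GCM from the \texttt{cryptography} package) standing in for SGX's sealing facility; key handling is simplified for illustration.
\begin{verbatim}
# Sealing sketch: the enclave encrypts and authenticates the UTXO
# database before writing it to untrusted host storage, so the host
# can neither read nor undetectably modify it.
import os, pickle
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

sealing_key = AESGCM.generate_key(bit_length=128)  # never leaves the enclave

def seal(utxo_db: dict) -> bytes:
    nonce = os.urandom(12)
    ct = AESGCM(sealing_key).encrypt(nonce, pickle.dumps(utxo_db), None)
    return nonce + ct

def unseal(blob: bytes) -> dict:
    nonce, ct = blob[:12], blob[12:]
    return pickle.loads(AESGCM(sealing_key).decrypt(nonce, ct, None))

assert unseal(seal({"txid:0": 5.0})) == {"txid:0": 5.0}
\end{verbatim}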
\Cref{fig:prog} defines the program $prog$ that is installed in the TEE module to be used in this work. As can be seen, the program decrypts the encrypted input $inp_c$ using the secret key $sk_{TEE}$. $txsharding$ is implemented inside $prog$ to securely execute the transaction sharding algorithm. The program returns the output shard $S_{out}$, the state $st$ upon which $S_{out}$ was computed, and the hash of the transaction $h_{tx}$. This hash value is used to prevent malicious hosts from feeding the TEE module with fake transactions, which will be discussed in more detail in the next subsection.
Upon running $prog$ with the input $inp_c$, the TEE module computes the signature $\sigma_{TEE}$ over the output of $prog$ and the code of $prog$ using its secret key. Finally, the TEE module returns $\sigma_{TEE}$ and the output $outp$ of $prog$ to the host validator.
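A compact Python sketch of $prog$ is given below; the decryption routine, the state lookup, and the sharding algorithm are passed in as placeholders, since they depend on the concrete TEE platform and on the choice of $txsharding$ (OptChain in our proof-of-concept).
\begin{verbatim}
# Sketch of prog: decrypt the transaction, fetch the current state,
# run txsharding, and return (S_out, st, H(tx)).
import hashlib

def prog(inp_c, dec, get_state, txsharding, n_shards):
    tx = dec(inp_c)                       # tx <- DEC_sk_TEE(inp_c)
    st = get_state(tx)                    # state from the sealed database
    s_out = txsharding(tx, st, n_shards)  # output shard from tx and st
    h_tx = hashlib.sha256(tx).hexdigest()
    return s_out, st, h_tx

# toy usage: identity "decryption", empty state, load-unaware rule
out = prog(b"raw tx", dec=lambda c: c, get_state=lambda tx: {},
           txsharding=lambda tx, st, n: 0, n_shards=4)
\end{verbatim}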
\subsection{Formal Specification of the Protocol}
Our proposed system supports two main APIs for the end-users: (1) $newtx(tx)$ handles the secure computation of a transaction $tx$, and (2) $read(id, h_{tx})$ returns the transaction that has the hash value $h_{tx}$ from shard $id$.
The protocol for validators is formally defined in \cref{fig:prot}, which relies on $\mathcal{F}_{TEE}$ and $\mathcal{F}_{blockchain}$. The validator accepts two function calls from the clients: $request(tx_c)$ and $process(S_{out}, \sigma_{TEE}, tx)$. $request(tx_c)$ takes as input a transaction that is encrypted with the TEE's public key and sends $tx_c$ to the TEE module. For simplicity, we assume that the validator is TEE-enabled; if not, the validator simply discards the $request(tx_c)$ call. Since $tx_c$ is encrypted under $pk_{TEE}$, a malicious host cannot tamper with the transaction. The validator waits until the TEE returns an output and relays that output to the function's caller.
Note that as the output of the TEE includes the transaction's hash $h_{tx}$, the client can check that the TEE indeed processed the correct transaction $tx$ originated from the client. This is possible because of the end-to-end encryption of $tx$ between the client and the TEE. Furthermore, since $\sigma_{TEE}$ protects the integrity of $S_{out}$, the client can verify that $S_{out}$ was not modified by a malicious validator.
The function $process(S_{out}, \sigma_{TEE}, tx)$ receives as input the transaction $tx$, the attestation $\sigma_{TEE}$, and the output shard $S_{out}$ of $tx$. The validator verifies $\sigma_{TEE}$ before making a call to $\mathcal{F}_{blockchain}$ to start the transaction validation for $tx$.
\begin{figure}
\noindent\fbox{%
\parbox{\columnwidth}{%
\begin{center}
Protocol for validators
\end{center}
\begin{itemize}
\item On \textbf{input} $request(tx_c)$ from $C_i$:
\begin{enumerate}
\item Send $resume(tx_c)$ to $\mathcal{F}_{TEE}$
\item Wait to receive $(\sigma_{TEE}, (S_{out}, st, h_{tx}))$ from $\mathcal{F}_{TEE}$
\item Return $(st, S_{out}, \sigma_{TEE}, h_{tx})$
\end{enumerate}
\end{itemize}
\begin{itemize}
\item On \textbf{input} $process(S_{out}, \sigma_{TEE}, tx)$ from $C_i$:
\begin{enumerate}
\item Assert that $\sigma_{TEE}$ is valid; if it is not, return $\bot$
\item Send $write(S_{out}, tx)$ to $\mathcal{F}_{blockchain}$
\item Wait to receive output from $\mathcal{F}_{blockchain}$
\item Return the received output to $C_i$
\end{enumerate}
\end{itemize}
}
}
\caption{Protocol for validators}
\label{fig:prot}
\end{figure}
\Cref{fig:client} illustrates the protocol for the clients. To determine the output shard of a transaction $tx$, a client invokes the API $newtx(tx)$. First, to ensure the integrity of $tx$, the client encrypts $tx$ using $pk_{TEE}$ and sends $tx_c$ to a TEE-enabled validator $P_k$. Upon receiving $(st, S_{out}, \sigma_{TEE}, h_{tx})$ from $P_k$, the client checks whether the hash of $tx$ is equal to $h_{tx}$. This prevents a malicious validator from feeding a fake transaction to the TEE module to manipulate $S_{out}$. The client also verifies that the attestation $\sigma_{TEE}$ is correct. Afterwards, the client sends the transaction together with $S_{out}$ and the attestation to the validators of the transaction's input and output shards for final validation. $newtx(tx)$ finally outputs any data received from the validators. The API $read(id, h_{tx})$ can be called when the client wants to obtain the transaction information from the blockchain. The function returns any data received from $\mathcal{F}_{blockchain}$.
\begin{figure}
\noindent\fbox{%
\parbox{\columnwidth}{%
\begin{center}
Protocol for clients
\end{center}
\begin{itemize}
\item On \textbf{input} $newtx(tx)$ from environment $\mathcal{Z}$:
\begin{enumerate}
\item $tx_c \gets ENC_{pk_{TEE}}(tx)$
\item Send $request(tx_c)$ to validator $P_k$
\item Receive $(st, S_{out}, \sigma_{TEE}, h_{tx})$
\item Assert $\mathcal{H}(tx) = h_{tx}$
\item Assert $\sigma_{TEE}$ against $(st, S_{out}, txsharding, \mathcal{H}(tx))$; if this fails, return $\bot$
\item Send $process(S_{out}, \sigma_{TEE}, tx)$ to $S_{out}$
\item Wait to receive $accept()$ or $\bot$ from $P_k$
\item Forward the received data to $\mathcal{Z}$
\end{enumerate}
\end{itemize}
\begin{itemize}
\item On \textbf{input} $read(id, h_{tx})$ from environment $\mathcal{Z}$:
\begin{enumerate}
\item Send $read(id, h_{tx})$ to $\mathcal{F}_{blockchain}$ and relay the output to $\mathcal{Z}$
\end{enumerate}
\end{itemize}
}
}
\caption{Protocol for clients}
\label{fig:client}
\end{figure}
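Putting the two protocols together, the following Python sketch traces one $newtx(tx)$ call end to end; the building blocks (encryption, attestation verification, consensus) are simplified stand-ins rather than the actual implementation.
\begin{verbatim}
# End-to-end sketch of newtx(tx): the client encrypts tx for the TEE,
# a validator resumes the TEE, the client checks h_tx and the
# attestation, and the output shard validates and commits tx.
import hashlib

def validator_request(tx_c, tee):
    sigma, (s_out, st, h_tx) = tee.resume(tx_c)
    return st, s_out, sigma, h_tx

def client_newtx(tx, enc, verify_attestation, tee, blockchain):
    tx_c = enc(tx)                        # end-to-end encryption to the TEE
    st, s_out, sigma, h_tx = validator_request(tx_c, tee)
    assert hashlib.sha256(tx).hexdigest() == h_tx        # correct transaction
    assert verify_attestation(sigma, (s_out, st, h_tx))  # S_out untouched
    # process(): the output shard validates and commits the transaction
    return blockchain.write(s_out, {"hash": h_tx, "raw": tx})
\end{verbatim}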
\section{Security Analysis and Performance Evaluation} \label{sec:sec-eval}
This section presents a detailed security analysis of the proposed countermeasure under the UC model and evaluates the performance of the proof-of-concept implementation.
\subsection{Security Analysis}
We first prove that the proposed protocol (1) only requires a stateless TEE and (2) is secure against the single-shard flooding attack, and then prove correct execution security. By design, the $txsharding$ program installed in the TEE bases its computation solely on the transaction $tx$ and the state $st$ of the blockchain. As $tx$ is the input and $st$ can be obtained by querying the UTXO database, the TEE does not need to keep any previous states or computations; thus, it is stateless.
When the system decides on the output shard of a transaction, it relies on the $txsharding$ program, which is assumed not to base its calculation on the transaction's hash value. Therefore, as $txsharding$ also balances the load among the shards, no attacker can manipulate transactions to overwhelm a single shard; hence, the countermeasure is secure against the single-shard flooding attack.
The correct execution security of our system is proven in the Universal Composability (UC) framework \cite{canetti2001universally}. We refer the readers to \cref{app:proof} for our proof.
\subsection{Performance Evaluation}
This section presents our proof-of-concept implementation as well as some experiments to evaluate its performance. As our countermeasure is immune to the single-shard flooding attack, our goal is to evaluate the overhead of integrating this solution into sharding. We implement the proof-of-concept using Intel SGX, which is available on most modern Intel CPUs. With SGX, each implementation of the TEE module is referred to as an \textit{enclave}. The proof-of-concept was developed on Linux machines, using the Linux Intel SGX SDK 2.1. We implement and test the protocol for validators on a machine equipped with an Intel Core i7-6700, 16GB RAM, and an SSD drive.
As we want to demonstrate the practicality of the countermeasure, the focus of this evaluation is three-fold: processing time, communication cost, and storage. The processing time includes the time needed for the enclave to monitor the block headers as well as to determine the output shard of a transaction requested by a client. The communication cost represents the network overhead incurred by the interaction between clients and validators to determine the output shards. Additionally, we measure the amount of storage needed when running the enclave.
In our proof-of-concept implementation, we use OptChain \cite{nguyen2019optchain} as $txsharding$. As OptChain determines the output shard based on a transaction's inputs, when obtaining the state from the UTXO database, we only need to load the transaction's inputs from the database. Our proof-of-concept uses Bitcoin as the blockchain platform, and the enclave is connected to the Bitcoin mainnet.
\subsubsection{Processing time}
We calculate the processing time for determining the output shard by invoking the enclave with 10 million Bitcoin transactions (encrypted with the TEE public key) and measuring the time needed to receive the output from the enclave. This latency includes (1) decrypting the transaction, (2) obtaining the latest state from the UTXO database in the host storage, and (3) running $txsharding$. We observe that the highest latency recorded is only about 214 ms, and it does not vary much across transactions. Considering that the average latency of processing a transaction in sharding is about 10 seconds \cite{nguyen2019optchain}, our countermeasure only adds about 0.2 seconds for determining the output shard.
For a more detailed picture, we measure the latency of each stage separately. The running time of $txsharding$ is negligible, as the highest running time recorded is about 0.13 ms when processing a 10-input transaction with 16 shards. Decrypting the transaction takes about 0.579 ms when using 2048-bit RSA, and a query to the UTXO database to fetch the current state takes about 213.2 ms. Hence, fetching the state from the UTXO database dominates the running time, owing to the fact that the database is stored in the host storage.
Another processing time that we take into consideration is the time needed to update the UTXO database when a new block is added to the blockchain. With an average of 2,000 transactions per block, each update takes about 65.7 seconds. However, this latency does not affect the performance of the system, since we can set up two enclaves running in parallel: one for running $txsharding$ and one for maintaining the UTXO database. Therefore, the time needed to run $txsharding$ is independent of updating the database.
\subsubsection{Communication cost}
Our countermeasure imposes some communication overhead over hash-based transaction sharding, since the client has to communicate with the validator to determine the output shard. Specifically, the overhead includes sending the encrypted transaction to the validator and receiving the response. The response comprises the state (represented as the block number), the output shard id, the attestation, and the hash value of the transaction. The communication overhead sums up to about 601 bytes for the client to obtain the output shard of a transaction. If we consider a communication bandwidth of 10 Mbps, the transmission time would be less than half a millisecond.
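As a quick sanity check, the stated transmission time follows directly from these numbers:
\begin{verbatim}
# about 601 bytes of response at 10 Mbps
overhead_bits = 601 * 8
print(overhead_bits / 10e6 * 1000)  # roughly 0.48 ms
\end{verbatim}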
\subsubsection{Storage}
In the proposed countermeasure, the enclave needs to store the UTXO database in the host storage, which is essentially the storage of the validator. As of February 2020, the size of Bitcoin's UTXO set is about 3.67 GB \cite{utxo}, which means that an additional 3.67 GB is needed in the validator's storage. However, considering that the validator's storage is large enough to hold the whole ledger, which is about 263 GB as of February 2020 \cite{blockchain.com_2020}, the extra data accounts for only about 1.4\% of it.
\subsubsection{Summary}
The evaluation of our proof-of-concept implementation shows that our system imposes negligible overhead compared to hash-based transaction sharding, which is susceptible to the single-shard flooding attack. In particular, by incurring insignificant processing time, communication cost, and storage, the proposed countermeasure demonstrates its practicality, as it can be integrated into existing sharding solutions without affecting system performance.
\section{Conclusions}\label{sec:con}
In this paper, we have identified a new attack on existing sharding solutions. Due to the use of hash-based transaction sharding, an attacker can manipulate the hash value of a transaction to conduct the single-shard flooding attack, which is essentially a DoS attack that overwhelms one shard with an excessive number of transactions. We have thoroughly investigated the attack with multiple analyses and experiments to illustrate its damage and practicality. Most importantly, our work has shown that by overwhelming a single shard, the attack creates a cascading effect that reduces the performance of the whole system.
We have also proposed a countermeasure based on TEEs that efficiently eliminates the single-shard flooding attack. The security properties of the countermeasure have been proven in the UC framework. Finally, with a proof-of-concept implementation, we have demonstrated that our countermeasure imposes negligible overhead and can be integrated into existing sharding solutions.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{s:intro}
In several areas of society, it remains an open question whether a ``hot hand'' effect exists, according to which humans may temporarily enter a state during which they perform better than on average.
While this concept may occur in different fields, such as among hedge fund managers and artists \citep{jagannathan2010hot,liu2018hot}, it is most prominent in sports.
Sports commentators and fans --- especially in basketball --- often refer to players as having a ``hot hand'', and being ``on fire'' or ``in the zone'' when they show a (successful) streak in performance.
In the academic literature, the hot hand has gained great interest since the seminal paper by \citet{gilovich1985hot}, who investigated a potential hot hand effect in basketball.
They found no evidence for its existence and attributed the hot hand to a cognitive illusion, much to the disapproval of many athletes and fans.
Still, the results provided by \citet{gilovich1985hot} have often been used as a primary example for showing that humans over-interpret patterns of success and failure in random sequences (see, e.g., \citealp{thaler2009nudge,kahneman2011thinking}).
During the last decades, many studies have attempted to replicate or refute the results of \citet{gilovich1985hot}, analysing sports such as, for example, volleyball, baseball, golf, and especially basketball.
\citet{bar2006twenty} provide an overview of 24 studies on the hot hand, 11 of which were in favour of the hot hand phenomenon.
Several more recent studies, often based on large data sets, also provide evidence for the existence of the hot hand (see, e.g., \citealp{raab2012hot,green2017hot,miller2016surprised,chang2019predictive}).
Notably, \citet{miller2016surprised} show that the original study from \citet{gilovich1985hot} suffers from a selection bias.
Using the same data as in the original study by \citet{gilovich1985hot}, \citet{miller2016surprised} account for that bias, and their results do reveal a hot hand effect.
However, there are also recent studies which provide mixed results (see, e.g., \citealp{wetzels2016bayesian}) or which do not find evidence for the hot hand, such as \citet{morgulev2020searching}.
Thus, more than 30 years after the study of \citet{gilovich1985hot}, the existence of the hot hand remains highly disputed.
Moreover, the literature does not provide a universally accepted definition of the hot hand effect.
While some studies regard it as serial correlation in \textit{outcomes} (see, e.g., \citealp{gilovich1985hot,dorsey2004bowlers,miller2016surprised}), others consider it as serial correlation in \textit{success probabilities} (see, e.g., \citealp{albert1993statistical,wetzels2016bayesian,otting2018hot}).
The latter definition translates into a latent (state) process underlying the observed performance --- intuitively speaking, a measure for a player's form --- which can be elevated without the player necessarily being successful in every attempt.
In our analysis, we follow this approach and hence consider state-space models (SSMs) to investigate the hot hand effect in basketball.
Specifically, we analyse free throws from more than 9,000 games played in the National Basketball Association (NBA), totalling in 110,513 observations.
In contrast, \citet{gilovich1985hot} use data on 2,600 attempts in their controlled shooting experiment.
Free throws in basketball, or similar events in sports with game clocks, occur at unevenly spaced time points.
These varying time lengths between consecutive attempts may affect inference on the hot hand effect if the model formulation does not account for the temporal irregularity of the observations.
As an illustrative example, consider an irregular sequence of throws with intervals ranging from, say, two seconds to 15 minutes.
For intervals between attempts that are fairly short (such as a few seconds), a player will most likely be able to retain his form from the last shot.
On the other hand, if several minutes elapse before a player takes his next shot, it becomes less likely that he is able to retain his form from the last attempt.
However, we found that existing studies on the hot hand do not account for different interval lengths between attempts.
In particular, studies investigating serial correlation in success probabilities usually consider discrete-time models that require the data to follow a regular sampling scheme and thus, cannot (directly) be applied to irregularly sampled data.
In our contribution, we overcome this limitation by formulating our model in continuous time to explicitly account for irregular time intervals between free throws in basketball.
Specifically, we consider a stochastic differential equation (SDE) as latent state process, namely the Ornstein-Uhlenbeck (OU) process, which represents the underlying form of a player fluctuating continuously around his average performance.
In the following, Section \ref{s:data} presents our data set and covers some descriptive statistics.
Subsequently, in Section \ref{s:model}, the continuous-time state-space model (SSM) formulation for the analysis of the hot hand effect is introduced, while its results are presented in Section \ref{s:results}.
We conclude our paper with a discussion in Section \ref{s:discussion}.
\section{Data}
\label{s:data}
We extracted data on all basketball games played in the NBA between October 2012 and June 2019 from \url{https://www.basketball-reference.com/}, covering both regular seasons and playoff games.
For our analysis, we consider data only on free throw attempts as these constitute a highly standardised setting without any interaction between players, which is usually hard to account for when modelling field goals in basketball.
We further included all players who took at least 2,000 free throws in the period considered, totalling 110,513 free throws from 44 players.
For each player, we included only those games in which he attempted at least four free throws to ensure that throws did not only follow successively (as players receive up to three free throws if they are fouled).
A single sequence of free throw attempts consists of all throws taken by one player in a given game, totalling 15,075 throwing sequences with a median number of 6 free throws per game (min: 4; max: 39).
As free throws occur irregularly within a basketball game, the information on whether an attempt was successful needs to be supplemented by its time point $t_k, k = 1,\ldots,T$, where $0 \leq t_1 \leq t_2 \leq \ldots \leq t_T$, corresponding to the time already played (in minutes) as indicated by the game clock.
For each player $p$ in his $n$-th game, we thus consider an irregular sequence of binary variables $Y_{t_1}^{p,n}, Y_{t_2}^{p,n}, \ldots, Y_{t_{T_{p,n}}}^{p,n}$, with
$$
Y_{t_k}^{p,n} = \begin{cases}
1 & \text{if free throw attempt at time $t_k$ is successful;} \\
0 & \text{otherwise}.
\end{cases}
$$
In our sample, the proportion of successful free throw attempts is obtained as 0.784.
However, there is considerable heterogeneity in the players' throwing success as the corresponding empirical proportions range from 0.451 (Andre Drummond) to 0.906 (Stephen Curry).
Players can receive up to three free throws (depending on the foul) in the NBA, which are then thrown in quick succession, and the proportion of successful free throws differs substantially between the three attempts, with 0.769, 0.8, and 0.883 obtained for the first, second, and third free throw, respectively.
To account for the position of the throw in a player's set of (at most) three free throws, we hence include the dummy variables \textit{ft2} and \textit{ft3} in our analysis.
In our sample, 54.5\% of all free throws correspond to the first, 43.7\% to the second, and only 1.8\% to the third attempt in a set (cf.\ Table~\ref{tab:descr}).
Furthermore, as the outcome of a free throw is likely affected by intermediate information on the game --- such as a close game leading to pressure situations --- we consider several further covariates, which were also used in previous studies (see, e.g., \citealp{toma2017missed,morgulev2020searching}).
Specifically, we consider the current score difference (\textit{scorediff}), a home dummy (\textit{home}), and a dummy indicating whether the free throw occurred in the last 30 seconds of the quarter (\textit{last30}).
Corresponding summary statistics are shown in Table~\ref{tab:descr}.
\begin{table}[!b] \centering
\caption{Descriptive statistics of the covariates.}
\label{tab:descr}
\begin{tabular}{@{\extracolsep{5pt}}lccccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& \multicolumn{1}{c}{mean} & \multicolumn{1}{c}{st.\ dev.} & \multicolumn{1}{c}{min.} & \multicolumn{1}{c}{max.} \\
\hline \\[-1.8ex]
\textit{scorediff} & 0.576 & 9.860 & $-$45 & 49 \\
\textit{home} & 0.514 & -- & 0 & 1 \\
\textit{last30} & 0.093 & -- & 0 & 1 \\
\textit{ft2} & 0.437 & -- & 0 & 1 \\
\textit{ft3} & 0.018 & -- & 0 & 1 \\
\hline \\[-1.8ex]
\end{tabular}
\end{table}
In Table~\ref{tab:data}, example throwing sequences used in our analysis are shown for free throws taken by LeBron James in five NBA games.
These throwing sequences illustrate that free throw attempts often appear in clusters of two or three attempts at the same time (depending on the foul), followed by a time period without any free throws.
Therefore, it is important to take into account the different lengths of the time intervals between consecutive attempts as the time elapsed between attempts affects a player's underlying form.
\begin{table}[!t]
\centering
\caption{Throwing sequences of LeBron James.}
\label{tab:data}
\scalebox{0.75}{
\begin{tabular}{llllllllllllll}
\hline
\multicolumn{13}{c}{Miami Heat @ Houston Rockets, November 12, 2012} \\
$y_{t_k}^{\text{James},1}$ & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & - & - & - & - & - \\
$t_k$ (in min.) & 11.01 & 11.01 & 23.48 & 23.48 & 46.68 & 46.68 & 47.09 & 47.09 & - & - & - & - & - \\
\hline
\multicolumn{13}{c}{Miami Heat @ Los Angeles Clippers, November 14, 2012} \\
$y_{t_k}^{\text{James},2}$ & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & - & - & - & - \\
$t_k$ (in min.) & 9.47 & 23.08 & 23.08 & 24.73 & 36.95 & 36.95 & 41 & 41 & 42.77 & - & - & - & - \\
\hline
\multicolumn{13}{c}{Milwaukee Bucks @ Miami Heat, November 21, 2012} \\
$y_{t_k}^{\text{James},3}$ & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & - & - & - & - & - \\
$t_k$ (in min.) & 7.5 & 7.5 & 10.02 & 10.02 & 35.62 & 35.62 & 42.68 & 42.68 & - & - & - & - & - \\
\hline
\multicolumn{13}{c}{Cleveland Cavaliers @ Miami Heat, November 24, 2012} \\
$y_{t_k}^{\text{James},4}$ & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & - & - & - & - \\
$t_k$ (in min.) & 11.62 & 21.07 & 21.07 & 21.38 & 21.38 & 31.95 & 31.95 & 32.68 & 32.68 & - & - & - & - \\
\hline
\multicolumn{13}{c}{Los Angeles Lakers @ Miami Heat, January 23, 2014} \\
$y_{t_k}^{\text{James},5}$ & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\
$t_k$ (in min.) & 9.28 & 9.28 & 20.53 & 25.97 & 25.97 & 31.57 & 31.57 & 33.43 & 33.43 & 42.1 & 42.1 & 47.62 & 47.62 \\
\hline
\end{tabular}}
\end{table}
\section{Continuous-time modelling of the hot hand}
\label{s:model}
\subsection{State-space model specification}
Following the idea that the throwing success depends on a player's current (latent) form \citep[see, e.g.,][]{albert1993statistical,wetzels2016bayesian,otting2018hot}, we model the observed free throw attempts using a state-space model formulation as represented in Figure~\ref{fig:hmm_bild}.
The observation process corresponds to the binary sequence of a player's throwing success, while the state process can be interpreted as a player's underlying form (or ``hotness'').
We further include the covariates introduced in Section~\ref{s:data} in the model, which possibly affect a player's throwing success.
In particular, we model the binary response of throwing success $Y_{t_k}^{p,n}$ using a Bernoulli distribution with the associated success probability $\pi_{t_k}^{p,n}$ being a function of the player's current state $S_{t_k}^{p,n}$ and the covariates.
Dropping the superscripts $p$ and $n$ for notational simplicity from now on, we thus have
\begin{equation}
\label{eq:observations}
\begin{split}
Y_{t_k} \sim \text{Bern}(\pi_{t_k}), \quad
\text{logit}(\pi_{t_k}) = S_{t_k} &+ \beta_{0,p} + \beta_1 \textit{home} + \beta_2 \textit{scorediff} \\ &+ \beta_3 \textit{last30} + \beta_4 \textit{ft2} + \beta_5 \textit{ft3},
\end{split}
\end{equation}
where $\beta_{0,p}$ is a player-specific intercept to account for differences between players' average throwing success.
To address the temporal irregularity of the free throw attempts, we formulate the stochastic process $\{S_t\}_{t \geq 0}$ in continuous time.
Furthermore, we require the state process to be continuous-valued to allow for gradual changes in a player's form, rather than assuming a finite number of discrete states \citep[e.g.\ three states interpreted as cold vs.\ normal vs.\ hot; cf.][]{wetzels2016bayesian, green2017hot}.
In addition, the state process ought to be stationary such that in the long-run a player returns to his average form.
A natural candidate for a corresponding stationary, continuous-valued and continuous-time process is the OU process, which is described by the following SDE:
\begin{equation}
d S_t = \theta (\mu - S_t) dt + \sigma d B_t, \quad S_0=s_0,
\label{eq:OUprocess}
\end{equation}
where $\theta > 0$ is the drift term indicating the strength of reversion to the long-term mean $\mu \in \mathbb{R}$, while $\sigma > 0$ is the diffusion parameter controlling the strength of fluctuations, and $B_t$ denotes the Brownian motion.
We further specify $\mu = 0$ such that the state process fluctuates around a player's average form, given the current covariate values.
Specifically, positive values of the state process indicate higher success probabilities, whereas negative values indicate decreased throwing success given the player's average ability and the current game characteristics.
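To illustrate the dynamics implied by this specification, the following Python sketch simulates one state trajectory with the Euler-Maruyama scheme and maps it to success probabilities via the logit link; the parameter values and the intercept are purely illustrative.
\begin{verbatim}
# Euler-Maruyama simulation of dS_t = -theta * S_t dt + sigma dB_t
# (mu = 0) and mapping to success probabilities via the logit link.
import numpy as np

def simulate_ou(theta, sigma, t_end=48.0, dt=0.01, s0=0.0, seed=1):
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    s = np.empty(n + 1)
    s[0] = s0
    for k in range(n):
        s[k + 1] = (s[k] - theta * s[k] * dt
                    + sigma * np.sqrt(dt) * rng.normal())
    return s

def success_prob(state, beta0):
    # logit(pi) = S_t + beta0, all other covariates fixed at zero
    return 1.0 / (1.0 + np.exp(-(state + beta0)))

states = simulate_ou(theta=0.05, sigma=0.1)   # illustrative values
probs = success_prob(states, beta0=1.3)       # beta0 close to logit(0.78)
print(round(probs.min(), 3), round(probs.max(), 3))
\end{verbatim}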
\begin{figure}[!htb]
\centering
\begin{tikzpicture}[scale=0.75, transform shape]
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (A) at (2, -5) {$S_{t_1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (B) at (4.4, -5) {$S_{t_2}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C) at (7.5, -5) {$S_{t_3}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C1) at (10, -5) {...};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y1) at (2, -2.5) {$Y_{t_1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y2) at (4.4, -2.5) {$Y_{t_2}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y3) at (7.5, -2.5) {$Y_{t_3}$};
\node[text width=5cm,font=\itshape] at (14, -2.5)
{throwing success (observed)};
\node[text width=5cm,font=\itshape] at (14, -5)
{player's form (hidden)};
\draw[-{Latex[scale=2]}] (A)--(B);
\draw[-{Latex[scale=2]}] (B)--(C);
\draw[-{Latex[scale=2]}] (C)--(C1);
\draw[-{Latex[scale=2]}] (A)--(Y1);
\draw[-{Latex[scale=2]}] (B)--(Y2);
\draw[-{Latex[scale=2]}] (C)--(Y3);
\end{tikzpicture}
\caption{Dependence structure of our SSM: the throwing success $Y_{t_k}$ is assumed to be driven by the underlying (latent) form of a player. To explicitly account for the irregular time intervals between observations, we formulate our model in continuous time.}
\label{fig:hmm_bild}
\end{figure}
As shown in Figure~\ref{fig:hmm_bild}, we model the hot hand effect as serial correlation in success probabilities as induced by the state process:
while the observed free throw attempts are conditionally independent, given the underlying states, the unobserved state process induces correlation in the observation process.
Regarding the hot hand effect, the drift parameter $\theta$ of the OU process is thus of main interest as it governs the speed of reversion (to the average form).
The smaller $\theta$, the longer it takes for the OU process to return to its mean
and hence the higher the serial correlation.
To assess whether a model including serial dependence (i.e.\ an SSM) is actually needed to describe the structure in the data, we additionally fit a benchmark model without the underlying state variable $S_{t_k}$ in Equation (\ref{eq:observations}).
Consequently, the benchmark model corresponds to the absence of any hot hand effect, i.e.\ a standard logistic regression model.
\subsection{Statistical inference}
\label{s:statInf}
The likelihood of the continuous-time SSM given by Equations (\ref{eq:observations}) and (\ref{eq:OUprocess}) involves integration over all possible realisations of the continuous-valued state $S_{t_k}$, at each observation time $t_1, t_2, \ldots, t_T$.
For simplicity of notation, let the integer $\tau = 1, 2, \ldots, T$ denote the \textit{number} of the observation in the time series, such that $Y_{t_\tau}$ shortens to $Y_\tau$ and $S_{t_\tau}$ shortens to $S_\tau$.
Further, $t_\tau$ represents the \textit{time} at which the observation $\tau$ was collected.
Then the likelihood of a single throwing sequence $y_1, \ldots, y_T$ is given by
\begin{equation}
\begin{split}
\mathcal{L}_T &= \int \ldots \int \text{Pr}(y_1, \ldots, y_T, s_1, \ldots, s_T) ds_T \ldots ds_1 \\
&= \int \ldots \int \text{Pr}(s_1) \text{Pr}(y_1|s_1) \prod_{\tau=2}^T \text{Pr}(s_\tau | s_{\tau-1}) \text{Pr}(y_\tau|s_\tau) ds_T \ldots ds_1, \\
\label{eq:llk}
\end{split}
\end{equation}
where we assume that each player starts a game in his stationary distribution $S_1 \sim \mathcal{N} \left(0, \frac{\sigma^2}{2\theta} \right)$, i.e.\ the stationary distribution of the OU process.
Further, we assume $Y_\tau$ to be Bernoulli distributed with corresponding state-dependent probabilities $\text{Pr}(y_\tau|s_\tau)$ additionally depending on the current covariate values (cf.\ Equation~(\ref{eq:observations})),
while the probabilities of transitioning between the states $\text{Pr}(s_\tau | s_{\tau-1})$ are normally distributed as determined by the conditional distribution of the OU process:
\begin{equation}
S_{\tau} | S_{\tau-1} = s \sim \mathcal{N}\left( \text{e}^{-\theta \Delta_\tau} s, \quad
\frac{\sigma^2}{2\theta} \bigl(1- \text{e}^{-2\theta \Delta_\tau}\bigr) \right),
\label{eq:condDistOU}
\end{equation}
where $\Delta_\tau = t_{\tau} - t_{\tau-1}$ denotes the time difference between consecutive observations.
Due to the $T$ integrals in Equation (\ref{eq:llk}), the likelihood calculation is intractable.
To render its evaluation feasible, we approximate the multiple integral by finely discretising the continuous-valued state space as first suggested by \citet{kitagawa1987}.
The discretisation of the state space can effectively be seen as a reframing of the model as a continuous-time hidden Markov model (HMM) with a large but finite number of states, enabling us to apply the corresponding efficient machinery.
In particular, we use the forward algorithm to calculate the likelihood, defining the possible range of state values as $[-2, 2]$, which we divide into 100 intervals.
For details on the approximation of the likelihood via state discretisation, see \citet{otting2018hot} for discrete-time and \citet{mews2020} for continuous-time SSMs.
Assuming single throwing sequences of players to be mutually independent, conditional on the model parameters, the likelihood over all games and players is simply calculated as the product of the individual likelihoods.
The model parameters, i.e.\ the drift parameter and the diffusion coefficient of the OU process as well as the regression coefficients, are then estimated by numerically maximising the (approximate) joint likelihood.
The resulting parameter estimators are unbiased and consistent --- corresponding simulation experiments are shown in \citet{mews2020}.
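For illustration, a minimal Python sketch of this likelihood approximation for a single throwing sequence is given below; the parameter values in the usage example are illustrative, and the small lower bound on $\Delta_\tau$ is a simplification to handle free throws sharing the same game-clock stamp.
\begin{verbatim}
# Forward algorithm on a finely discretised state space [-2, 2]:
# approximate log-likelihood of one throwing sequence.
import numpy as np
from scipy.stats import norm

def approx_loglik(y, times, eta, theta, sigma, m=100, lo=-2.0, hi=2.0):
    width = (hi - lo) / m
    mids = lo + width * (np.arange(m) + 0.5)  # interval midpoints b_i
    stat_sd = sigma / np.sqrt(2.0 * theta)    # stationary sd of the OU process
    alpha = norm.pdf(mids, 0.0, stat_sd) * width

    def obs_prob(tau):                        # P(y_tau | state), logit link
        p = 1.0 / (1.0 + np.exp(-(mids + eta[tau])))
        return p if y[tau] == 1 else 1.0 - p

    alpha = alpha * obs_prob(0)
    c = alpha.sum(); loglik = np.log(c); alpha = alpha / c
    for tau in range(1, len(y)):
        dt = max(times[tau] - times[tau - 1], 1e-3)
        mean = np.exp(-theta * dt) * mids
        sd = np.sqrt(sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * dt)))
        gamma = norm.pdf(mids[None, :], mean[:, None], sd) * width
        alpha = (alpha @ gamma) * obs_prob(tau)
        c = alpha.sum(); loglik += np.log(c); alpha = alpha / c
    return loglik

# usage on one of the example sequences, with illustrative parameters
y = [0, 1, 0, 1, 0, 1, 1, 0]
t = [7.5, 7.5, 10.02, 10.02, 35.62, 35.62, 42.68, 42.68]
print(approx_loglik(y, t, eta=np.full(len(y), 1.3), theta=0.05, sigma=0.1))
\end{verbatim}
The total log-likelihood is then the sum of such terms over all games and players and can be handed to a numerical optimiser.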
\section{Results}
\label{s:results}
\begin{table}[!b] \centering
\caption{\label{tab:results} Parameter estimates with 95\% confidence intervals.}
\medskip
\begin{tabular}{llcc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
\multicolumn{2}{l}{parameter} & estimate & 95\% CI \\
\hline \\[-1.8ex]
$\theta$ & (drift) & 0.042 & [0.016; 0.109] \\
$\sigma$ & (diffusion) & 0.101 & [0.055; 0.185] \\
$\beta_1$ & (\textit{home}) & 0.023 & [-0.009; 0.055] \\
$\beta_2$ & (\textit{scorediff}) & 0.030 & [0.011; 0.048] \\
$\beta_3$ & (\textit{last30}) & 0.003 & [-0.051; 0.058] \\
$\beta_4$ & (\textit{ft2}) & 0.223 & [0.192; 0.254] \\
$\beta_5$ & (\textit{ft3}) & 0.421 & [0.279; 0.563] \\
\hline \\[-1.8ex]
\end{tabular}
\end{table}
According to the information criteria AIC and BIC, the continuous-time model formulation including a potential hot hand effect is clearly favoured over the benchmark model without any underlying state process ($\Delta$AIC = 61.20, $\Delta$BIC = 41.97).
The parameter estimates of the OU process, which represents the underlying form of a player, as well as the estimated regression coefficients are shown in Table~\ref{tab:results}.
In particular, the estimate for the drift parameter $\theta$ of the OU process is fairly small, thus indicating serial correlation in the state process over time.
However, when assessing the magnitude of the hot hand effect, we also observe a fairly small estimate for the diffusion coefficient $\sigma$.
The corresponding stationary distribution is thus estimated as $\mathcal{N} \left(0, 0.348^2 \right)$, indicating a rather small range of the state process, which becomes apparent also when simulating state trajectories based on the parameter estimates of the OU process (cf.\ Figure~\ref{fig:OUresults}).
Still, the associated success probabilities, given that all covariate values are fixed to zero, vary considerably during the time of an NBA game (cf.\ right y-axis of Figure~\ref{fig:OUresults}).
While the state process and hence the resulting success probabilities slowly fluctuate around the average throwing success (given the covariates), the simulated state trajectories reflect the temporal persistence of the players' underlying form.
Thus, our results suggest that players can temporarily enter a state in which their success probability is considerably higher than their average performance, which provides evidence for a hot hand effect.
\begin{figure}[!t] \centering
\includegraphics[width=0.9\textwidth]{SimOUresults.pdf}
\caption{\label{fig:OUresults} Simulation of possible state trajectories for the length of an NBA game based on the estimated parameters of the OU process. The red dashed line indicates the intercept (here: the median throwing success over all players), around which the processes fluctuate. The right y-axis shows the success probabilities resulting from the current state (left y-axis), given that the explanatory variables equal 0. The graphs were obtained by application of the Euler-Maruyama scheme with initial value 0 and step length 0.01.}
\end{figure}
Regarding the estimated regression coefficients, the player-specific intercepts $\hat{\beta}_{0,p}$ range from -0.311 to 2.192 (on the logit scale), reflecting the heterogeneity in players' throwing success.
The estimates for $\beta_1$ to $\beta_5$ are displayed in Table \ref{tab:results} together with their 95\% confidence intervals.
The chance of making a free throw is slightly increased if the game is played at home ($\beta_1$) or if a free throw occurs in the last 30 seconds of a quarter ($\beta_3$), but both corresponding confidence intervals include the zero.
In contrast, the confidence interval for the score difference ($\beta_2$) does not include the zero and its effect is positive but small, indicating that the higher the lead, the higher is, on average, the chance to make a free throw.
The position of the throw, i.e.\ whether it is the first, second ($\beta_4$), or third ($\beta_5$) attempt in a row, has the largest effect of all covariates considered:
compared to the first free throw, the chance of a hit is considerably increased if it is the second or, in particular, the third attempt, which was already indicated by the descriptive analysis presented in Section \ref{s:data}.
However, this strong effect on the success probabilities is probably caused by the fact that three free throws in a row are only awarded if a player is fouled while shooting a three-point field goal, which, in turn, is more often attempted by players who regularly perform well at free throws.
To further investigate how the hot hand may evolve during a game, we compute the most likely state sequences, corresponding to the underlying form of a player.
Specifically, we seek
$$
(s_{1}^*,\ldots,s_{T}^*) = \underset{s_{1},\ldots,s_{T}}{\operatorname{argmax}} \; \Pr ( s_{1},\ldots,s_{T} | y_{1},\ldots, y_{T} ),
$$
where $s_{1}^*,\ldots,s_{T}^*$ denotes the most likely state sequence given the observations.
As we transferred our continuous-time SSM to an HMM framework by finely discretising the state space (cf.\ Section \ref{s:statInf}), we can use the Viterbi algorithm to calculate such sequences at low computational cost \citep{zucchini2016hidden}.
Figure~\ref{fig:decStates} shows the most likely states underlying the throwing sequences presented in Table~\ref{tab:data}.
While the decoded state processes fluctuate around zero (i.e.\ a player's average throwing success), the state values vary slightly over the time of an NBA game.
Over all players, the decoded states range from -0.42 to 0.46, again indicating that the hot hand effect as modelled by the state process is rather small.
\begin{figure}[!htb] \centering
\includegraphics[width=0.95\textwidth]{decStates_new.pdf}
\caption{\label{fig:decStates} Decoded states underlying the throwing sequences of LeBron James shown in Table~\ref{tab:data}. Successful free throws are shown in yellow, missed shots in black.}
\end{figure}
The decoded state sequences in Figure \ref{fig:decStates} further allow to illustrate the advantages and the main idea of our continuous-time modelling approach.
For example, consider the throwing sequence in the second match shown, where LeBron James only made a single free throw of his first four attempts.
The decoded state at throw number 3 is -0.092 (cf.\ Figure~\ref{fig:decStates}) and the time passed between throw number 3 and 4 is 1.65 minutes (cf.\ Table~\ref{tab:data}).
Thus, the value of the state process at throw number 4 is drawn from a normal distribution, given the decoded state of the previous attempt, with mean $\text{e}^{-0.042 \cdot 1.65} (-0.092) = -0.086$ and variance $\frac{0.101^2}{2\cdot 0.042} (1 - \text{e}^{-2\cdot 0.042 \cdot 1.65}) = 0.016$ (cf.\ Equation~(\ref{eq:condDistOU})).
Accordingly, the value of the state process for throw number 5 is drawn from a normal distribution with mean $-0.050$ and variance $0.078$, conditional on the decoded state of -0.084 at throw number 4 and a relatively long time interval of 12.22 minutes.
As highlighted by these example calculations, the conditional distribution of the state process takes into account the interval length between consecutive attempts:
the more time elapses, the higher the variance in the state process and hence, the less likely is a player to retain his form, with a tendency to return to his average performance.
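These quantities follow directly from Equation~(\ref{eq:condDistOU}); the short Python check below, using the estimates $\hat{\theta} = 0.042$ and $\hat{\sigma} = 0.101$, reproduces them.
\begin{verbatim}
# Conditional mean and variance of the OU state given the previously
# decoded state s_prev and the elapsed time dt (in minutes).
import numpy as np

theta, sigma = 0.042, 0.101

def cond_moments(s_prev, dt):
    mean = np.exp(-theta * dt) * s_prev
    var = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * dt))
    return mean, var

print(cond_moments(-0.092, 1.65))   # approx. (-0.086, 0.016)
print(cond_moments(-0.084, 12.22))  # approx. (-0.050, 0.078)
\end{verbatim}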
\section{Discussion}
\label{s:discussion}
In our analysis of the hot hand, we used SSMs formulated in continuous time to model throwing success in basketball.
Focusing on free throws taken in the NBA, our results provide evidence for a hot hand effect as the underlying state process exhibits some persistence over time.
In particular, the model including a hot hand effect is preferred over the benchmark model without any underlying state process by information criteria.
Although we provide evidence for the existence of a hot hand, the magnitude of the hot hand effect is rather small as the underlying success probabilities are only elevated by a few percentage points (cf.\ Figures~\ref{fig:OUresults} and \ref{fig:decStates}).
A minor drawback of the analysis arises from the fact that there is no universally accepted definition of the hot hand.
In our setting, we use the OU process to model players' continuously varying form and it is thus not clear which values of the drift parameter $\theta$ correspond to the existence of the hot hand.
While lower values of $\theta$ refer to a slower reversion of a player's form to his average performance, a further quantification of the magnitude of the hot hand effect is not possible.
In particular, a comparison of our results to other studies on the hot hand effect proves difficult as these studies apply different methods to investigate the hot hand.
In general, the modelling framework considered provides great flexibility with regard to distributional assumptions.
In particular, the response variable is not restricted to be Bernoulli distributed (or Gaussian, as is often the case when making inference on continuous-time SSMs), such that other types of response variables used in hot hand analyses (e.g.\ Poisson) can be implemented by changing just a few lines of code.
Our continuous-time SSM can thus easily be applied to other sports, and the measure for success does not have to be binary as considered here.
For readers interested in adapting our code to fit their own hot hand model, the authors can provide the data and code used for the analysis.
\section*{Acknowledgements}
We thank Roland Langrock, Christian Deutscher, and Houda Yaqine for stimulating discussions and helpful comments.
\newpage
\input{main.bbl}
\end{spacing}
\end{document}
We further specify $\mu = 0$ such that the state process fluctuates around a player's average form, given the current covariate values.
Specifically, positive values of the state process indicate higher success probabilities, whereas negative values indicate decreased throwing success given the player's average ability and the current game characteristics.
\begin{figure}[!htb]
\centering
\begin{tikzpicture}[scale=0.75, transform shape]
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (A) at (2, -5) {$S_{t_1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (B) at (4.4, -5) {$S_{t_2}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C) at (7.5, -5) {$S_{t_3}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C1) at (10, -5) {...};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y1) at (2, -2.5) {$Y_{t_1}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y2) at (4.4, -2.5) {$Y_{t_2}$};
\node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y3) at (7.5, -2.5) {$Y_{t_3}$};
\node[text width=5cm,font=\itshape] at (14, -2.5)
{throwing success (observed)};
\node[text width=5cm,font=\itshape] at (14, -5)
{player's form (hidden)};
\draw[-{Latex[scale=2]}] (A)--(B);
\draw[-{Latex[scale=2]}] (B)--(C);
\draw[-{Latex[scale=2]}] (C)--(C1);
\draw[-{Latex[scale=2]}] (A)--(Y1);
\draw[-{Latex[scale=2]}] (B)--(Y2);
\draw[-{Latex[scale=2]}] (C)--(Y3);
\end{tikzpicture}
\caption{Dependence structure of our SSM: the throwing success $Y_{t_k}$ is assumed to be driven by the underlying (latent) form of a player. To explicitly account for the irregular time intervals between observations, we formulate our model in continuous time.}
\label{fig:hmm_bild}
\end{figure}
As shown in Figure~\ref{fig:hmm_bild}, we model the hot hand effect as serial correlation in success probabilities as induced by the state process:
while the observed free throw attempts are conditionally independent, given the underlying states, the unobserved state process induces correlation in the observation process.
Regarding the hot hand effect, the drift parameter $\theta$ of the OU process is thus of main interest as it governs the speed of reversion (to the average form).
The smaller $\theta$, the longer it takes for the OU process to return to its mean
and hence the higher the serial correlation.
To assess whether a model including serial dependence (i.e.\ an SSM) is actually needed to describe the structure in the data, we additionally fit a benchmark model without the underlying state variable $S_{t_k}$ in Equation (\ref{eq:observations}).
Consequently, the benchmark model corresponds to the absence of any hot hand effect, i.e.\ a standard logistic regression model.
\subsection{Statistical inference}
\label{s:statInf}
The likelihood of the continuous-time SSM given by Equations (\ref{eq:observations}) and (\ref{eq:OUprocess}) involves integration over all possible realisations of the continuous-valued state $S_{t_k}$, at each observation time $t_1, t_2, \ldots, t_T$.
For simplicity of notation, let the integer $\tau = 1, 2, \ldots, T$ denote the \textit{number} of the observation in the time series, such that $Y_{t_\tau}$ shortens to $Y_\tau$ and $S_{t_\tau}$ shortens to $S_\tau$.
Further, $t_\tau$ represents the \textit{time} at which the observation $\tau$ was collected.
Then the likelihood of a single throwing sequence $y_1, \ldots, y_T$ is given by
\begin{equation}
\begin{split}
\mathcal{L}_T &= \int \ldots \int \text{Pr}(y_1, \ldots, y_T, s_1, \ldots, s_T) ds_T \ldots ds_1 \\
&= \int \ldots \int \text{Pr}(s_1) \text{Pr}(y_1|s_1) \prod_{\tau=2}^T \text{Pr}(s_\tau | s_{\tau-1}) \text{Pr}(y_\tau|s_\tau) ds_T \ldots ds_1, \\
\label{eq:llk}
\end{split}
\end{equation}
where we assume that each player starts a game in his stationary distribution $S_1 \sim \mathcal{N} \left(0, \frac{\sigma^2}{2\theta} \right)$, i.e.\ the stationary distribution of the OU process.
Further, we assume $Y_\tau$ to be Bernoulli distributed with corresponding state-dependent probabilities $\text{Pr}(y_\tau|s_\tau)$ additionally depending on the current covariate values (cf.\ Equation~(\ref{eq:observations})),
while the probabilities of transitioning between the states $\text{Pr}(s_\tau | s_{\tau-1})$ are normally distributed as determined by the conditional distribution of the OU process:
\begin{equation}
S_{\tau} | S_{\tau-1} = s \sim \mathcal{N}\left( \text{e}^{-\theta \Delta_\tau} s, \quad
\frac{\sigma^2}{2\theta} \bigl(1- \text{e}^{-2\theta \Delta_\tau}\bigr) \right),
\label{eq:condDistOU}
\end{equation}
where $\Delta_\tau = t_{\tau} - t_{\tau-1}$ denotes the time difference between consecutive observations.
Due to the $T$ integrals in Equation (\ref{eq:llk}), the likelihood calculation is intractable.
To render its evaluation feasible, we approximate the multiple integral by finely discretising the continuous-valued state space as first suggested by \citet{kitagawa1987}.
The discretisation of the state space can effectively be seen as a reframing of the model as a continuous-time hidden Markov model (HMM) with a large but finite number of states, enabling us to apply the corresponding efficient machinery.
In particular, we use the forward algorithm to calculate the likelihood, defining the possible range of state values as $[-2, 2]$, which we divide into 100 intervals.
For details on the approximation of the likelihood via state discretisation, see \citet{otting2018hot} for discrete-time and \citet{mews2020} for continuous-time SSMs.
Assuming single throwing sequences of players to be mutually independent, conditional on the model parameters, the likelihood over all games and players is simply calculated as the product of the individual likelihoods.
The model parameters, i.e.\ the drift parameter and the diffusion coefficient of the OU process as well as the regression coefficients, are then estimated by numerically maximising the (approximate) joint likelihood.
The resulting parameter estimators are unbiased and consistent --- corresponding simulation experiments are shown in \citet{mews2020}.
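
To make the state-discretisation approach concrete, the following Python snippet sketches the forward-algorithm approximation of the likelihood of a single throwing sequence, using the grid described above. It is only an illustration rather than the implementation used for our analysis: the function \texttt{pi\_fun}, which maps a (vector of) state values to success probabilities with the intercept and covariate effects already folded in, is a placeholder.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def neg_log_lik_sequence(y, dt, pi_fun, theta, sigma, m=100, lo=-2.0, hi=2.0):
    """Approximate negative log-likelihood of one throwing sequence via the
    forward algorithm on a discretised state space [lo, hi] with m intervals.
    y: 0/1 outcomes; dt: time gaps to the previous attempt (dt[0] is unused);
    pi_fun(s): vectorised success probability for state values s."""
    grid = np.linspace(lo, hi, m + 1)
    mid = 0.5 * (grid[:-1] + grid[1:])              # interval midpoints
    sd_stat = sigma / np.sqrt(2.0 * theta)          # stationary st. dev. of the OU process
    p_hit = pi_fun(mid)
    # initial distribution: stationary N(0, sigma^2 / (2 theta)) on the grid
    alpha = np.diff(norm.cdf(grid, 0.0, sd_stat)) * np.where(y[0] == 1, p_hit, 1.0 - p_hit)
    llk = np.log(alpha.sum()); alpha = alpha / alpha.sum()   # scaling avoids underflow
    for k in range(1, len(y)):
        if dt[k] == 0.0:                            # attempts within the same set: state unchanged
            gamma = np.eye(m)
        else:
            mean_k = np.exp(-theta * dt[k]) * mid
            sd_k = sd_stat * np.sqrt(1.0 - np.exp(-2.0 * theta * dt[k]))
            # transition matrix of the discretised OU process (rows: from, cols: to)
            gamma = (norm.cdf(grid[None, 1:], mean_k[:, None], sd_k)
                     - norm.cdf(grid[None, :-1], mean_k[:, None], sd_k))
        alpha = (alpha @ gamma) * np.where(y[k] == 1, p_hit, 1.0 - p_hit)
        llk += np.log(alpha.sum()); alpha = alpha / alpha.sum()
    return -llk
\end{verbatim}
The joint negative log-likelihood is then simply the sum of such terms over all sequences, which could be passed to a numerical optimiser such as \texttt{scipy.optimize.minimize}.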
\section{Results}
\label{s:results}
\begin{table}[!b] \centering
\caption{\label{tab:results} Parameter estimates with 95\% confidence intervals.}
\medskip
\begin{tabular}{llcc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
\multicolumn{2}{l}{parameter} & estimate & 95\% CI \\
\hline \\[-1.8ex]
$\theta$ & (drift) & 0.042 & [0.016; 0.109] \\
$\sigma$ & (diffusion) & 0.101 & [0.055; 0.185] \\
$\beta_1$ & (\textit{home}) & 0.023 & [-0.009; 0.055] \\
$\beta_2$ & (\textit{scorediff}) & 0.030 & [0.011; 0.048] \\
$\beta_3$ & (\textit{last30}) & 0.003 & [-0.051; 0.058] \\
$\beta_4$ & (\textit{ft2}) & 0.223 & [0.192; 0.254] \\
$\beta_5$ & (\textit{ft3}) & 0.421 & [0.279; 0.563] \\
\hline \\[-1.8ex]
\end{tabular}
\end{table}
According to the information criteria AIC and BIC, the continuous-time model formulation including a potential hot hand effect is clearly favoured over the benchmark model without any underlying state process ($\Delta$AIC = 61.20, $\Delta$BIC = 41.97).
The parameter estimates of the OU process, which represents the underlying form of a player, as well as the estimated regression coefficients are shown in Table~\ref{tab:results}.
In particular, the estimate for the drift parameter $\theta$ of the OU process is fairly small, thus indicating serial correlation in the state process over time.
However, when assessing the magnitude of the hot hand effect, we also observe a fairly small estimate for the diffusion coefficient $\sigma$.
The corresponding stationary distribution is thus estimated as $\mathcal{N} \left(0, 0.348^2 \right)$, indicating a rather small range of the state process, which becomes apparent also when simulating state trajectories based on the parameter estimates of the OU process (cf.\ Figure~\ref{fig:OUresults}).
Still, the associated success probabilities, given that all covariate values are fixed to zero, vary considerably during the time of an NBA game (cf.\ right y-axis of Figure~\ref{fig:OUresults}).
While the state process and hence the resulting success probabilities slowly fluctuate around the average throwing success (given the covariates), the simulated state trajectories reflect the temporal persistence of the players' underlying form.
Thus, our results suggest that players can temporarily enter a state in which their success probability is considerably higher than their average performance, which provides evidence for a hot hand effect.
\begin{figure}[!t] \centering
\includegraphics[width=0.9\textwidth]{SimOUresults.pdf}
\caption{\label{fig:OUresults} Simulation of possible state trajectories for the length of an NBA game based on the estimated parameters of the OU process. The red dashed line indicates the intercept (here: the median throwing success over all players), around which the processes fluctuate. The right y-axis shows the success probabilities resulting from the current state (left y-axis), given that the explanatory variables equal 0. The graphs were obtained by application of the Euler-Maruyama scheme with initial value 0 and step length 0.01.}
\end{figure}
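
For illustration, state trajectories like those shown in Figure~\ref{fig:OUresults} can be generated with a few lines of Python using the Euler-Maruyama scheme; the drift and diffusion values below are the point estimates from Table~\ref{tab:results}, while the intercept on the logit scale is only a placeholder.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 0.042, 0.101        # point estimates from the results table
dt, t_max = 0.01, 48.0             # Euler-Maruyama step length and game length (minutes)
n = int(t_max / dt)
s = np.zeros(n + 1)                # initial value s_0 = 0 (average form)
for k in range(n):
    s[k + 1] = s[k] - theta * s[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
beta0 = 1.3                        # placeholder intercept on the logit scale
prob = 1.0 / (1.0 + np.exp(-(beta0 + s)))   # implied success probability, covariates at 0
\end{verbatim}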
Regarding the estimated regression coefficients, the player-specific intercepts $\hat{\beta}_{0,p}$ range from -0.311 to 2.192 (on the logit scale), reflecting the heterogeneity in players' throwing success.
The estimates for $\beta_1$ to $\beta_5$ are displayed in Table \ref{tab:results} together with their 95\% confidence intervals.
The chance of making a free throw is slightly increased if the game is played at home ($\beta_1$) or if a free throw occurs in the last 30 seconds of a quarter ($\beta_3$), but both corresponding confidence intervals include zero.
In contrast, the confidence interval for the score difference ($\beta_2$) does not include zero and its effect is positive but small, indicating that the higher the lead, the higher, on average, the chance to make a free throw.
The position of the throw, i.e.\ whether it is the first, second ($\beta_4$), or third ($\beta_5$) attempt in a row, has the largest effect of all covariates considered:
compared to the first free throw, the chance of a hit is considerably increased if it is the second or, in particular, the third attempt, which was already indicated by the descriptive analysis presented in Section \ref{s:data}.
However, this strong effect on the success probabilities is probably caused by the fact that three free throws in a row are only awarded if a player is fouled while shooting a three-point field goal, which, in turn, is more often attempted by players who regularly perform well at free throws.
To further investigate how the hot hand may evolve during a game, we compute the most likely state sequences, corresponding to the underlying form of a player.
Specifically, we seek
$$
(s_{1}^*,\ldots,s_{T}^*) = \underset{s_{1},\ldots,s_{T}}{\operatorname{argmax}} \; \Pr ( s_{1},\ldots,s_{T} | y_{1},\ldots, y_{T} ),
$$
where $s_{1}^*,\ldots,s_{T}^*$ denotes the most likely state sequence given the observations.
As we transferred our continuous-time SSM to an HMM framework by finely discretising the state space (cf.\ Section \ref{s:statInf}), we can use the Viterbi algorithm to calculate such sequences at low computational cost \citep{zucchini2016hidden}.
Figure~\ref{fig:decStates} shows the most likely states underlying the throwing sequences presented in Table~\ref{tab:data}.
While the decoded state processes fluctuate around zero (i.e.\ a player's average throwing success), the state values vary slightly over the time of an NBA game.
Over all players, the decoded states range from -0.42 to 0.46, again indicating that the hot hand effect as modelled by the state process is rather small.
\begin{figure}[!htb] \centering
\includegraphics[width=0.95\textwidth]{decStates_new.pdf}
\caption{\label{fig:decStates} Decoded states underlying the throwing sequences of LeBron James shown in Table~\ref{tab:data}. Successful free throws are shown in yellow, missed shots in black.}
\end{figure}
The decoded state sequences in Figure \ref{fig:decStates} further allow us to illustrate the advantages and the main idea of our continuous-time modelling approach.
For example, consider the throwing sequence in the second match shown, where LeBron James only made a single free throw of his first four attempts.
The decoded state at throw number 3 is -0.092 (cf.\ Figure~\ref{fig:decStates}) and the time passed between throw number 3 and 4 is 1.65 minutes (cf.\ Table~\ref{tab:data}).
Thus, the value of the state process at throw number 4 is drawn from a normal distribution, given the decoded state of the previous attempt, with mean $\text{e}^{-0.042 \cdot 1.65} (-0.092) = -0.086$ and variance $\frac{0.101^2}{2\cdot 0.042} (1 - \text{e}^{-2\cdot 0.042 \cdot 1.65}) = 0.016$ (cf.\ Equation~(\ref{eq:condDistOU})).
Accordingly, the value of the state process for throw number 5 is drawn from a normal distribution with mean $-0.050$ and variance $0.078$, conditional on the decoded state of -0.084 at throw number 4 and a relatively long time interval of 12.22 minutes.
As highlighted by these example calculations, the conditional distribution of the state process takes into account the interval length between consecutive attempts:
the more time elapses, the higher the variance in the state process and hence, the less likely is a player to retain his form, with a tendency to return to his average performance.
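
These example calculations can be verified directly from Equation~(\ref{eq:condDistOU}); the following short Python check reproduces the numbers quoted above.
\begin{verbatim}
import numpy as np

theta, sigma = 0.042, 0.101
for s_prev, delta in [(-0.092, 1.65), (-0.084, 12.22)]:
    mean = np.exp(-theta * delta) * s_prev
    var = sigma ** 2 / (2 * theta) * (1 - np.exp(-2 * theta * delta))
    print(round(mean, 3), round(var, 3))  # approx. -0.086, 0.016 and -0.050, 0.078
\end{verbatim}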
\section{Discussion}
\label{s:discussion}
In our analysis of the hot hand, we used SSMs formulated in continuous time to model throwing success in basketball.
Focusing on free throws taken in the NBA, our results provide evidence for a hot hand effect as the underlying state process exhibits some persistence over time.
In particular, the model including a hot hand effect is preferred over the benchmark model without any underlying state process by information criteria.
Although we provide evidence for the existence of a hot hand, the magnitude of the hot hand effect is rather small as the underlying success probabilities are only elevated by a few percentage points (cf.\ Figures~\ref{fig:OUresults} and \ref{fig:decStates}).
A minor drawback of the analysis arises from the fact that there is no universally accepted definition of the hot hand.
In our setting, we use the OU process to model players' continuously varying form and it is thus not clear which values of the drift parameter $\theta$ correspond to the existence of the hot hand.
While lower values of $\theta$ refer to a slower reversion of a player's form to his average performance, a further quantification of the magnitude of the hot hand effect is not possible.
In particular, a comparison of our results to other studies on the hot hand effect proves difficult as these studies apply different methods to investigate the hot hand.
In general, the modelling framework considered provides great flexibility with regard to distributional assumptions.
In particular, the response variable is not restricted to be Bernoulli distributed (or Gaussian, as is often the case when making inference on continuous-time SSMs), such that other types of response variables used in hot hand analyses (e.g.\ Poisson) can be implemented by changing just a few lines of code.
Our continuous-time SSM can thus easily be applied to other sports, and the measure for success does not have to be binary as considered here.
For readers interested in adopting our code to fit their own hot hand model, the authors can provide the data and code used for the analysis.
\section*{Acknowledgements}
We thank Roland Langrock, Christian Deutscher, and Houda Yaqine for stimulating discussions and helpful comments.
\newpage
\input{main.bbl}
\end{spacing}
\end{document}
\section{Introduction}
Numerous sports fans as well as the majority of professional players tend to believe in ``hot hand'' and ``winning streak'' phenomena.
These terms refer to the belief that the performance of a player or a team might become significantly better in comparison to their average \cite{Gilovich1984,Gilovich1985CogPsy,BarEli2006}.
During the last three decades there has been a significant number of attempts to find strong statistical evidence for these beliefs as well as to explain why people believe in these phenomena \cite{Ayton2004MemCog,BarEli2006,Gilden1995CogPsy}.
Unsurprisingly, most of the statistical approaches have failed to find any evidence for these beliefs, and those that did find evidence were shown to be inconclusive \cite{Vergin2000,BarEli2006,Green2016,Ferreira2018PhysA}.
Cognitive psychologists tend to explain these beliefs by the inability to recognize that small samples are not a good representation of the population as a whole \cite{Gilden1995CogPsy,Ayton2004MemCog}.
There are, however, a few recent papers criticizing some of the most well established analyses in the field; e.g., in \cite{Miller2016SSRN} it is argued that the analysis carried out in \cite{Gilovich1985CogPsy} suffers from selection bias and that, after correcting for this bias, the conclusions of the original analysis reverse: evidence confirming the ``hot hand'' phenomenon is found.
There is still no consensus in the field and the debate on this apparently simple topic is still going on.
At this time the majority of the literature on the topic is dedicated to individual performance in team sports (mostly basketball and baseball), while papers on team performance are significantly rarer.
In the last few decades the study of complex systems has developed a variety of tools to detect anomalous features inherent to complex systems.
One of the most thoroughly studied features of complex systems is persistence, which is assumed to indicate the presence of memory in the time series.
The ``hot hand'' and ``winning streak'' phenomena are exactly about the temporal persistence of good (or bad) results of a player or a team.
Hence it is tempting to check whether sports time series exhibit such persistence.
Detecting persistence could potentially serve as evidence for the existence of these phenomena as well as have other implications outside of scientific study.
As in \cite{Ferreira2018PhysA}, we will also rely on the methodology known as detrended fluctuation analysis (DFA), which is used to detect persistence in various time series.
Using the DFA technique it was shown that various economic, social and natural systems exhibit some degree of persistence \cite{Ivanova1999PhysA,Kantelhardt2001PhysA,Ausloos2001PhysRevE,Kantelhardt2002PhysA,Ausloos2002CompPhysComm,Peng2004PhysRevE,Kwapien2005PhysA,Lim2007PhysA,Shao2012,Rotundo2015PhysA,Chiang2016PlosOne}.
Showing that sports time series also exhibit persistence could lead to better forecasting in sports as well as a better understanding of the persistence phenomenon in general.
To supplement the DFA results we also employ first passage time methodology and study the autocorrelation functions (ACFs) of the series.
The data itself invites the use of first passage time methodology, as the lengths of the streaks are by definition identical to the first passage times, while ACFs are another way to explore persistence in the data series.
We have organized the paper as follows: in Section~\ref{sec:analysis-original}
we analyze the original data, in Section~\ref{sec:analysis-shuffle}
we extend the previous analysis by considering the shuffled data as
well as data generated by the random models, while conclusions are
given in Section~\ref{sec:Conclusion}.
\section{Analysis of the original data\label{sec:analysis-original}}
As in \cite{Ferreira2018PhysA}, we have downloaded the NBA team regular season (from 1995 to 2018) win records of $28$ NBA teams from the \href{http://www.landofbasketball.com}{landofbasketball.com} website.
We have excluded NOP (New Orleans Pelicans) and CHA (Charlotte Hornets/Bobcats) from the analysis, because these teams did not participate in some of the considered seasons.
Yet the games NOP and CHA played against other teams are present in the other teams' records.
Hence all $28$ teams under consideration have played $1838$ games.
We have coded their victories as $+1$ and losses as $-1$, while a single game between BOS (Boston Celtics) and IND (Indiana Pacers), which was cancelled due to the Boston marathon bombings, was coded as $0$.
As an example of what the empirical series look like, in Fig.~\ref{fig:record-example} we have plotted the win record series of SAC (Sacramento Kings).
Additional curves in the figure show examples of shuffled series.
We will refer back to this figure when discussing the shuffling algorithms in more detail in the later sections of this paper.
\begin{figure}
\begin{centering}
\includegraphics[width=0.4\linewidth]{fig1}
\par\end{centering}
\caption{(color online) Win record for SAC (red curve), single example of the
fully shuffled win record for SAC (green curve) and single example
of the in-season shuffled win record for SAC (blue curve).\label{fig:record-example}}
\end{figure}
After running the original data through DFA, we have obtained results similar to those reported in \cite{Ferreira2018PhysA}.
We have found that $26$ of the $28$ teams exhibit persistent behavior, as their $H>1/2$.
Only DAL (Dallas Mavericks) and OKC (Oklahoma City Thunder/Seattle Supersonics) are within the margin of error of $H=1/2$.
For the detailed results see Table~\ref{tab:dfa-original}.
\begin{table}
\caption{Hurst exponents estimated for the win record time series of $28$
NBA teams in regular seasons 1995-2018 \label{tab:dfa-original}}
\centering{}%
\begin{tabular}{|c|c|c|c|c|c|}
\hline
team & $H$ & team & $H$ & team & $H$\tabularnewline
\hline
\hline
ATL & $0.5223\pm0.0057$ & IND & $0.528\pm0.006$ & PHI & $0.567\pm0.005$\tabularnewline
\hline
BKN & $0.5485\pm0.0056$ & LAC & $0.5455\pm0.0057$ & PHX & $0.5589\pm0.0066$\tabularnewline
\hline
BOS & $0.5311\pm0.0067$ & LAL & $0.5365\pm0.0078$ & POR & $0.5936\pm0.0072$\tabularnewline
\hline
CHI & $0.5456\pm0.0077$ & MEM & $0.5919\pm0.0059$ & SAC & $0.5242\pm0.0073$\tabularnewline
\hline
CLE & $0.561\pm0.006$ & MIA & $0.5771\pm0.0083$ & SAS & $0.5505\pm0.0078$\tabularnewline
\hline
DAL & $0.5016\pm0.0051$ & MIL & $0.5265\pm0.0066$ & TOR & $0.559\pm0.005$\tabularnewline
\hline
DEN & $0.5172\pm0.006$ & MIN & $0.5578\pm0.0066$ & UTA & $0.583\pm0.007$\tabularnewline
\hline
DET & $0.5186\pm0.0073$ & NYK & $0.5484\pm0.0066$ & WAS & $0.5354\pm0.0065$\tabularnewline
\hline
GSW & $0.5323\pm0.0066$ & OKC & $0.4960\pm0.0051$ & & \tabularnewline
\hline
HOU & $0.5583\pm0.0052$ & ORL & $0.5687\pm0.0068$ & & \tabularnewline
\hline
\end{tabular}
\end{table}
Note that for most of the teams our $H$ estimates are smaller than those reported in \cite{Ferreira2018PhysA}.
This is because we have fitted $F\left(s\right)$ for $s\in\left[5,70\right]$, which is a narrower interval than the one used in \cite{Ferreira2018PhysA}.
We prefer this narrower interval because it is still comparatively broad (it spans a bit over one order of magnitude) and because the smallest $R^{2}$ over all teams in the data set ($R^{2}=0.988$) is one of the largest obtained among the alternative intervals we considered.
The $R^{2}$ reported in \cite{Ferreira2018PhysA} is smaller, which indicates that our fits are somewhat better.
A simple visual comparison also makes it clear that the fits obtained in \cite{Ferreira2018PhysA} are not very good (compare Fig.~\ref{fig:original-extreme} in this paper and Fig.~1 in \cite{Ferreira2018PhysA}).
The slopes reported in \cite{Ferreira2018PhysA} are somewhat larger than the ones reported by us, because their fitting interval is broader and includes $s>70$, while for most of the teams $F\left(s\right)$ starts to curve upwards for $s>70$, which inflates the $H$ estimates.
We believe that this curving upwards might indicate the presence of another scaling regime, possibly related to the long-term decisions made on the season time scale ($82$ games).
Alternatively, it might be an artifact of the small sample size, or it might be due to season-to-season persistence.
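
For reference, the following Python sketch implements standard DFA with local linear detrending and the fit over $s\in[5,70]$ used here; \texttt{win\_record} is a placeholder for a team's $\pm1$ win--loss series, and the snippet is only illustrative rather than the code from our repository \cite{Kononovicius2018GitNBA}.
\begin{verbatim}
import numpy as np

def dfa_fluctuation(x, scales):
    """Standard DFA: fluctuation function F(s) with local linear detrending."""
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    F = []
    for s in scales:
        n_win = len(y) // s                       # non-overlapping windows of length s
        t = np.arange(s)
        msq = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            msq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

scales = np.arange(5, 71)                         # fit interval s in [5, 70]
# win_record: +1/-1 (win/loss) series of one team, 1838 games long
# F = dfa_fluctuation(win_record, scales)
# H = np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope of log F(s) vs log s
\end{verbatim}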
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{fig2}
\par\end{centering}
\caption{(color online) Quality of the fits, $F(s)\sim s^{H}$, provided by
the reported Hurst exponents, Table~\ref{tab:dfa-original}, in the
six selected cases: MEM and POR have the highest $H$ estimates, DAL
and OKC have the lowest $H$ estimates, while CHI and UTA were used
as examples in \cite{Ferreira2018PhysA}.\label{fig:original-extreme}}
\end{figure}
\section{Checking the empirical results against random models\label{sec:analysis-shuffle}}
For small data sets, such as this one, estimation of the true underlying Hurst exponent using DFA (or other similar techniques) is not very reliable.
E.g., in \cite{Drozdz2009EPL} MF-DFA, a generalization of DFA, was used to estimate $H$ of synthetic series of various lengths, and it was found that as few as $10^{4}$ points might be sufficient in some cases.
\cite{Drozdz2009EPL} also lists numerous sophisticated bootstrap techniques, which preserve the multifractal properties of the series.
Yet our goals are simpler, and for this paper it seems sufficient to use simple shuffling algorithms and to compare the $H$ estimates against some simple random models.
This section is dedicated to these tests.
In all of the following tests our null hypothesis is that the data is neither persistent nor anti-persistent.
First of all, let us shuffle all the games each team has played.
This type of shuffling should remove persistence effects across all time scales.
As can be seen in Fig.~\ref{fig:record-example}, the resulting shuffled win record series (green curve) is nothing like the original series (red curve).
We have shuffled the team records $10^{4}$ times and have estimated $95\%$ confidence intervals (CIs) for the range of $H$ values that could reasonably be expected from a similar, yet non-persistent, series.
If an estimated $H$ value is outside the obtained CI, then for that case we can reject the null hypothesis.
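
A minimal sketch of this procedure is given below; it reuses the \texttt{dfa\_fluctuation} helper sketched earlier, and the number of shuffles and the percentile levels match the description in the text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
H_null = np.empty(10_000)
for i in range(10_000):
    shuffled = rng.permutation(win_record)          # full shuffle removes any persistence
    F = dfa_fluctuation(shuffled, scales)
    H_null[i] = np.polyfit(np.log(scales), np.log(F), 1)[0]
ci_low, ci_high = np.percentile(H_null, [2.5, 97.5])  # 95% CI under the null
# reject the null for this team if its estimated H falls outside [ci_low, ci_high]
\end{verbatim}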
\begin{table}
\caption{Comparing $H$ estimates against CIs obtained from the fully shuffled
data\label{tab:full-shuffle}}
\centering{}%
\begin{tabular}{|c|c|c|c|}
\hline
Team & $H$ & Shuffle CI & Reject the null?\tabularnewline
\hline
\hline
ATL & $0.5223\pm0.0057$ & $[0.4667,0.5628]$ & No\tabularnewline
\hline
BKN & $0.5485\pm0.0056$ & $[0.4676,0.5640]$ & No\tabularnewline
\hline
BOS & $0.5311\pm0.0067$ & $[0.4675,0.5624]$ & No\tabularnewline
\hline
CHI & $0.5456\pm0.0077$ & $[0.4665,0.5639]$ & No\tabularnewline
\hline
CLE & $0.561\pm0.006$ & $[0.4657,0.5625]$ & No\tabularnewline
\hline
DAL & $0.5016\pm0.0051$ & $[0.4670,0.5630]$ & No\tabularnewline
\hline
DEN & $0.5172\pm0.006$ & $[0.4663,0.5637]$ & No\tabularnewline
\hline
DET & $0.5186\pm0.0073$ & $[0.4666,0.5637]$ & No\tabularnewline
\hline
GSW & $0.5323\pm0.0066$ & $[0.4673,0.5635]$ & No\tabularnewline
\hline
HOU & $0.5583\pm0.0052$ & $[0.4669,0.5626]$ & No\tabularnewline
\hline
IND & $0.528\pm0.006$ & $[0.4658,0.5640]$ & No\tabularnewline
\hline
LAC & $0.5455\pm0.0057$ & $[0.4661,0.5632]$ & No\tabularnewline
\hline
LAL & $0.5365\pm0.0078$ & $[0.4664,0.5640]$ & No\tabularnewline
\hline
MEM & $0.5919\pm0.0059$ & $[0.4675,0.5631]$ & \textbf{Yes}\tabularnewline
\hline
MIA & $0.5771\pm0.0083$ & $[0.4671,0.5631]$ & \textbf{Yes}\tabularnewline
\hline
MIL & $0.5265\pm0.0066$ & $[0.4660,0.5622]$ & No\tabularnewline
\hline
MIN & $0.5578\pm0.0066$ & $[0.4667,0.5633]$ & No\tabularnewline
\hline
NYK & $0.5484\pm0.0066$ & $[0.4672,0.5639]$ & No\tabularnewline
\hline
OKC & $0.4960\pm0.0051$ & $[0.4674,0.5630]$ & No\tabularnewline
\hline
ORL & $0.5687\pm0.0068$ & $[0.4674,0.5640]$ & No\tabularnewline
\hline
PHI & $0.567\pm0.005$ & $[0.4668,0.5635]$ & No\tabularnewline
\hline
PHX & $0.5589\pm0.0066$ & $[0.4658,0.5628]$ & No\tabularnewline
\hline
POR & $0.5936\pm0.0072$ & $[0.4674,0.5626]$ & \textbf{Yes}\tabularnewline
\hline
SAC & $0.5242\pm0.0073$ & $[0.4661,0.5639]$ & No\tabularnewline
\hline
SAS & $0.5505\pm0.0078$ & $[0.4684,0.5641]$ & No\tabularnewline
\hline
TOR & $0.559\pm0.005$ & $[0.4667,0.5627]$ & No\tabularnewline
\hline
UTA & $0.583\pm0.007$ & $[0.4676,0.5636]$ & \textbf{Yes}\tabularnewline
\hline
WAS & $0.5354\pm0.0065$ & $[0.4666,0.5631]$ & No\tabularnewline
\hline
\end{tabular}
\end{table}
As we can see in Table~\ref{tab:full-shuffle}, we are able to reject
the null hypothesis in $4$ cases: for MEM (Vancouver/Memphis Grizzlies),
MIA (Miami Heat), POR (Portland Trail Blazers) and UTA (Utah Jazz)
the estimated $H$ is larger than would be expected if the series
were non-persistent. When analyzing $28$ cases at the $5\%$ significance
level we could reasonably expect $1$ or $2$ false rejections.
However, this expectation does not account for the fact that the empirical
series under consideration are not mutually independent, as a win
for one team is a loss for another. In this context obtaining $4$
rejections no longer seems unexpected. Thus we consider the
obtained evidence questionable.
We can also check whether the streak lengths (first passage times)
of the original and the shuffled series come from different distributions.
Note that for this test we use not the win record series (shown in
Fig.~\ref{fig:record-example}), but the binary won-lost series.
For the sake of consistency, for this test we use only one shuffled
time series per team. Here we use the Kolmogorov-Smirnov test \cite{Lopes2011Springer}
with significance level $\alpha=0.05$. Considering both winning and losing
streaks, we were able to reject the null hypothesis (that the two sets of
streaks come from the same distribution) only for the winning streaks
of MEM. As this result is within the expected false rejection count,
we consider the obtained evidence negligible.
We can perform another test using the streak length distribution:
we check whether the streak lengths from the original data are inconsistent
with an independent randomness model. For this test we assume the
following model: the team has probability $p_{w}$ of winning each game
independently of the other games, and $p_{w}$ is estimated based on
the overall win-loss record of the team. Once again we have used
only one randomly generated series per team. This time, using the Kolmogorov-Smirnov
test ($\alpha=0.05$), we were able to reject the null hypothesis only
for the winning streaks of LAL (Los Angeles Lakers). We consider this
evidence to be negligible, as the rejection count is within the expected
false rejection count.
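
The streak-length comparison itself is straightforward to reproduce; the sketch below extracts the winning streaks of a single team and compares them, via the two-sample Kolmogorov-Smirnov test, with streaks generated by the independent randomness model. Variable names such as \texttt{win\_record} are placeholders, and this is only an illustration of the test.
\begin{verbatim}
from itertools import groupby
from scipy.stats import ks_2samp
import numpy as np

def streak_lengths(outcomes, value):
    """Lengths of consecutive runs of `value` (+1 for wins, -1 for losses)."""
    return [len(list(run)) for v, run in groupby(outcomes) if v == value]

rng = np.random.default_rng(0)
emp_wins = streak_lengths(win_record, +1)           # empirical winning streaks of one team
p_w = np.mean(np.asarray(win_record) == 1)          # estimated single-game win probability
sim = rng.choice([1, -1], size=len(win_record), p=[p_w, 1 - p_w])
sim_wins = streak_lengths(sim, +1)                  # streaks under the independence model
stat, p_value = ks_2samp(emp_wins, sim_wins)        # reject the null if p_value < 0.05
\end{verbatim}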
This independent randomness model could be further simplified by ignoring
the overall win-loss record (setting $p_{w}=0.5$ for all teams).
But in this case the number of rejections is significantly higher:
$4$ rejections for the winning streaks and $5$ rejections for the
losing streaks. The increased rejection count is expected for the
teams which won noticeably more or fewer than half of their games. What
is striking is that the fully random model is consistent with the records
of most teams, which could indicate that the technique is not
sensitive enough to capture small differences in the probability of
winning a single game.
To further our discussion, let us try a slightly different shuffling
algorithm. This time we shuffle the games inside each season,
but preserve the ordering of the seasons. The in-season algorithm
destroys the possible game-to-game persistence, but retains season-to-season
persistence. An example of a series shuffled using this algorithm
(blue curve) is shown in Fig.~\ref{fig:record-example}. As can easily be
seen, the blue curve repeats the general shape of the original
(red) curve. Once again we have obtained $10^{4}$ shuffles and have
estimated $95\%$ CIs for the $H$ estimates. As we can see in Table~\ref{tab:other-shuffle},
we are able to reject the null hypothesis in just two cases (POR and
UTA). As this is within the expected false rejection count, we
consider the obtained evidence negligible.
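
The in-season shuffle itself can be sketched as a single helper function, assuming that the $\pm1$ win--loss series of a team is stored season by season.
\begin{verbatim}
import numpy as np

def in_season_shuffle(seasons, rng):
    """Shuffle games within each season, but keep the order of the seasons.
    `seasons` is a list of arrays, one +/-1 win-loss array per season."""
    return np.concatenate([rng.permutation(season) for season in seasons])
\end{verbatim}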
\begin{table}
\caption{Comparing $H$ estimates against CIs obtained from the in-season shuffled
data\label{tab:other-shuffle}}
\centering{}%
\begin{tabular}{|c|c|c|c|}
\hline
team & $H$ & In-season shuffle CI & Reject the null?\tabularnewline
\hline
\hline
ATL & $0.5223\pm0.0057$ & $[0.4925,0.5825]$ & No\tabularnewline
\hline
BKN & $0.5485\pm0.0056$ & $[0.5067,0.5947]$ & No\tabularnewline
\hline
BOS & $0.5311\pm0.0067$ & $[0.5187,0.6043]$ & No\tabularnewline
\hline
CHI & $0.5456\pm0.0077$ & $[0.5284,0.6129]$ & No\tabularnewline
\hline
CLE & $0.561\pm0.006$ & $[0.5194,0.6056]$ & No\tabularnewline
\hline
DAL & $0.5016\pm0.0051$ & $[0.4773,0.5707]$ & No\tabularnewline
\hline
DEN & $0.5172\pm0.006$ & $[0.4916,0.5839]$ & No\tabularnewline
\hline
DET & $0.5186\pm0.0073$ & $[0.4834,0.5736]$ & No\tabularnewline
\hline
GSW & $0.5323\pm0.0066$ & $[0.4975,0.5859]$ & No\tabularnewline
\hline
HOU & $0.5583\pm0.0052$ & $[0.4968,0.5853]$ & No\tabularnewline
\hline
IND & $0.528\pm0.006$ & $[0.4853,0.5776]$ & No\tabularnewline
\hline
LAC & $0.5455\pm0.0057$ & $[0.4885,0.5835]$ & No\tabularnewline
\hline
LAL & $0.5365\pm0.0078$ & $[0.4910,0.5820]$ & No\tabularnewline
\hline
MEM & $0.5919\pm0.0059$ & $[0.4963,0.5867]$ & No\tabularnewline
\hline
MIA & $0.5771\pm0.0083$ & $[0.5125,0.5995]$ & No\tabularnewline
\hline
MIL & $0.5265\pm0.0066$ & $[0.4892,0.5793]$ & No\tabularnewline
\hline
MIN & $0.5578\pm0.0066$ & $[0.4881,0.5795]$ & No\tabularnewline
\hline
NYK & $0.5484\pm0.0066$ & $[0.4886,0.5800]$ & No\tabularnewline
\hline
OKC & $0.4960\pm0.0051$ & $[0.4917,0.5810]$ & No\tabularnewline
\hline
ORL & $0.5687\pm0.0068$ & $[0.4951,0.5858]$ & No\tabularnewline
\hline
PHI & $0.567\pm0.005$ & $[0.4995,0.5895]$ & No\tabularnewline
\hline
PHX & $0.5589\pm0.0066$ & $[0.5147,0.5994]$ & No\tabularnewline
\hline
POR & $0.5936\pm0.0072$ & $[0.4828,0.5768]$ & \textbf{Yes}\tabularnewline
\hline
SAC & $0.5242\pm0.0073$ & $[0.4752,0.5688]$ & No\tabularnewline
\hline
SAS & $0.5505\pm0.0078$ & $[0.5028,0.5899]$ & No\tabularnewline
\hline
TOR & $0.559\pm0.005$ & $[0.5000,0.5898]$ & No\tabularnewline
\hline
UTA & $0.583\pm0.007$ & $[0.4814,0.5744]$ & \textbf{Yes}\tabularnewline
\hline
WAS & $0.5354\pm0.0065$ & $[0.4893,0.5781]$ & No\tabularnewline
\hline
\end{tabular}
\end{table}
Another way to look at temporal persistence is to examine the ACF of the
series. In Fig.~\ref{fig:acf-total} we see that most of the teams
have weakly positive ACFs for lags of up to $82$ games (one season).
In quite a few cases, most notably CHI (Chicago Bulls) and GSW (Golden
State Warriors), the ACF is outside the range one would reasonably expect (gray
area) if the series were random (the comparison is made against the
fully shuffled series). Yet the excess autocorrelation is well explained
by taking season-to-season variations into account. In Fig.~\ref{fig:acf-inseason}
the comparison is made against the in-season shuffled data, and as
one can see the empirical ACFs are now within the reasonably expected
range (gray area).
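
The shuffle-based confidence bands for the ACFs can be sketched as follows; \texttt{win\_record} again stands for a team's $\pm1$ win--loss series, and the in-season variant would be obtained by replacing the full permutation with the in-season shuffle sketched earlier.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def acf(x, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

emp_acf = acf(win_record, 82)                         # lags up to one season
null_acfs = np.array([acf(rng.permutation(win_record), 82) for _ in range(1000)])
band = np.percentile(null_acfs, [2.5, 97.5], axis=0)  # pointwise 95% band under full shuffle
\end{verbatim}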
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{fig3}
\par\end{centering}
\caption{(color online) Comparison between the autocorrelation functions of
the empirical series (red curves) and the $95\%$ confidence intervals
obtained by fully shuffling the respective empirical series (gray
areas). Embedded subplots provide a zoom in on the x-axis (to $[1,20]$
games interval).\label{fig:acf-total}}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{fig4}
\par\end{centering}
\caption{(color online) Comparison between the autocorrelation functions of
the empirical series (red curves) and the $95\%$ confidence intervals
obtained by shuffling the respective empirical series in-season (gray
areas). Embedded subplots provide a zoom in on the x-axis (to $[1,20]$
games interval).\label{fig:acf-inseason}}
\end{figure}
\section{Conclusion\label{sec:Conclusion}}
In our exploration of the NBA regular season data set from the period
1995-2018 we have not found enough evidence supporting the ``winning streak'' phenomenon.
Although for most of the teams we have obtained Hurst exponents larger than $0.5$
(these results are somewhat consistent with the ones reported in \cite{Ferreira2018PhysA}),
which would indicate the presence of persistence, these results are mostly consistent
with the considered random models (full data shuffle, in-season data shuffle and
independent randomness model) at a significance level of $5\%$.
It appears that the observed persistence is simply an illusion caused by the limited
amount of data under consideration. Nevertheless DFA, as suggested by \cite{Ferreira2018PhysA},
seems to be an interesting tool to apply to sports time series, and its extensive
application to long time series could potentially provide arguments for the ongoing
``hot hand'' debate. Furthermore, it seems that DFA could also be used to answer
different questions related to the different managerial strategies teams use;
e.g., in \cite{Rotundo2015PhysA} it was examined whether religious practices have
an impact on baby birth times.
We have backed the results obtained using the DFA technique by analyzing
streak length distributions. Namely, using the Kolmogorov-Smirnov test
($\alpha=0.05$), we were unable to find any evidence indicating that
the empirical streak lengths would be inconsistent with the independent
randomness model. These results also suggest that there is no evidence
for persistence in the winning record, i.e., for the ``winning streak''
phenomenon.
The analysis carried out using ACFs paints a more sophisticated picture.
The empirical ACFs are not fully consistent with the ACFs one would expect
if there were no persistence. Yet these inconsistencies are well explained
by retaining season-to-season variations. This points to the rather obvious
fact that long-term managerial decisions have an impact on a team's
performance, while short-term decisions and effects such as ``winning streaks''
do not seem to have effects that would be inconsistent with random models.
Finally, we would like to advise against gambling by relying on
persistence. Not only did we not find sufficient evidence
for the game-to-game persistence phenomenon in the NBA time series,
but even in the cases where persistence has been detected (e.g.,
financial time series), there are still no reliable forecasting algorithms
relying on persistence alone. Gambling, on the other hand, could
also influence the outcome of the games, but we doubt it has such an
effect in as reputable a league as the NBA. It is likely that DFA or
simple random models (similar to the ones considered here) could be used
to detect betting fraud in less reputable sports leagues.
We have made all of the scripts as well as the data used
in this analysis freely available on GitHub \cite{Kononovicius2018GitNBA}.
\section{Introduction}
Due to the great increase in the number of sports game videos, it has become feasible to analyze the techniques and movements of players using the collected video data. For example, \cite{Bertasius} tries to evaluate a basketball player's performance from his/her first-person video. Many works have been devoted to vision-based sports action recognition in recent years \cite{Mora, Soomro}.
Several public video datasets such as UCF-Sport \cite{Soomro} and Sports-1M \cite{Karpathy} are provided for this problem. Tennis has received wide attention from all over the world. However, due to the lack of data, there is little literature analyzing tennis actions. This paper focuses on automatically recognizing different tennis actions using RGB videos. Zhu et al. \cite{Zhu1,Zhu2} used a support vector machine to classify tennis videos into left-swing and right-swing actions with an optical flow based descriptor.
Farajidavar et al. \cite{Farajidavar} employed transfer learning to classify tennis videos into non-hit, hit and serve actions.
However, they analyzed only a few of the actions involved in playing tennis.
Luckily, Gourgari et al. \cite{Gourgari} have presented a tennis action database called the THETIS dataset.
It consists of 12 fine-grained tennis shots acted by 55 different subjects multiple times in different scenes with dynamic backgrounds. The objective of analyzing the THETIS dataset is to classify the videos into the 12 pre-defined tennis actions from raw video data.
Based on this database, Mora et al. \cite{Mora} have proposed a deep learning model for domain-specific tennis action recognition using RGB video content.
They used long short-term memory networks (LSTMs) to describe each action, since LSTMs can remember information for long periods of time.
Recently, deep learning based methods have achieved promising performance for action recognition. Li et al. \cite{Li} presented a skeleton-based action recognition method using an LSTM and a convolutional neural network (CNN).
Cheron et al. \cite{Cheron} proposed a pose-based convolutional neural network feature for action recognition.
Gammulle et al. \cite{Gammulle} have presented a deep fusion framework for human action recognition, which employs a convolutional neural network to extract salient spatial features and LSTMs to model temporal relationships.
Zhang et al. \cite{Zhang} used multi-layer LSTM networks to learn geometric features from skeleton information for action recognition. Liu et al. \cite{Liu} proposed a spatio-temporal LSTM with trust gates for 3D human action recognition. Zhu et al. \cite{Zhu} presented co-occurrence skeleton feature learning based on regularized deep LSTM networks for human action recognition. Lee et al. \cite{Lee} proposed ensemble learned temporal sliding LSTM networks for skeleton-based action recognition. Tsunoda et al. \cite{Tsunoda} used a hierarchical LSTM model for football action recognition. Song et al. \cite{Song} presented spatio-temporal attention-based LSTM networks for recognizing and detecting 3D actions.
Liu \cite{Liu2} proposed global context-aware attention LSTM networks for 3D action recognition.
At least two important aspects influence the performance of action recognition: spatial representation and temporal modeling. Hence, in this paper, we propose a framework using convolutional neural networks with historical Long Short-Term Memory networks for tennis action recognition. First, we extract a deep spatial representation for each frame with Inception \cite{szegedy2016rethinking}, a well-known convolutional neural network pre-trained on the ImageNet dataset. Most importantly, historical Long Short-Term Memory units are proposed to model the temporal cues.
In the THETIS dataset, there are 12 fine-grained tennis shots which are similar to each other, so it is a challenge to distinguish the different shots. For each action, its historical evolution over time is important.
Long Short-Term Memory is widely used for sequence modeling.
The original Long Short-Term Memory only models the local representation of sequences; a global or holistic representation, such as historical information, cannot be described by the LSTM.
Therefore, we propose a recurrent neural network (RNN) architecture that uses an extra layer to describe the historical information for action recognition.
Besides, for a video clip, some frames may be critical to action recognition, but the original Long Short-Term Memory model cannot automatically focus on those key frames. Therefore, we present a score weighting scheme that takes the output state at time $t$ and the historical information at time $t-1$ to generate the feature vector describing the historical information of the human action. In our framework, we can automatically pay more attention to those important frames and omit the insignificant clips.
Additionally, the update duration of the historical state may be very long, so that large errors may accumulate in the iteration process.
We therefore propose an error truncation technique to re-initialize the historical state at the current time and drop the accumulated errors.
In order to extract temporal features, we developed a five-layer deep historical Long Short-Term Memory network to learn the representation of the spatial feature sequences.
In conclusion, the CNN model and the deep historical LSTM model are adopted to map the raw RGB video into a vector space, generating a spatial-temporal semantic description and classifying the action video content.
Experiments on the benchmarks demonstrate that our method outperforms the state-of-the-art baselines for tennis shot recognition using only raw RGB video.
The main contributions of our work are listed as follows.
(1) We propose an RNN architecture based on Long Short-Term Memory units with an extra historical layer, called historical Long Short-Term Memory, to model holistic information and build a high-level representation of visual sequences.
(2) In our proposed historical Long Short-Term Memory framework, a historical layer is introduced to model the historical information of the hidden state.
Thus the proposed model can learn global and holistic features from the original human action sequence.
(3) The proposed historical Long Short-Term Memory uses a weighting scheme to enhance, during the update of the historical state, the influence of key frames that are important to action recognition.
This helps the pattern classifier to automatically pay more attention to the important frames and omit the insignificant clips.
(4) Since the update duration of the historical state is long and large errors may accumulate in the update process, we propose an error truncation technique to re-initialize the historical state at the current time and drop the accumulated errors.
(5) We propose a framework based on a CNN and deep historical Long Short-Term Memory networks for tennis action recognition.
First, we use the pre-trained Inception network to build spatial features from raw RGB frames.
Then we propose a five-layer historical Long Short-Term Memory network for action video recognition.
The sequences of spatial feature vectors are fed into this deep historical Long Short-Term Memory model to build the temporal features of the videos.
The rest of the paper is organized as follows. Section 2 describes the proposed method. Section 3 shows the experimental results and analysis. Finally, the last section gives the conclusion and future work of the paper.
\section{Method}
\subsection{Framework}
In this paper, we adopt an improved Long Short-Term Memory network together with a convolutional neural network for tennis action recognition.
The whole framework is illustrated in Figure 1. First, each frame of the original RGB video is fed to the Inception V3 model to capture appearance and spatial information as the local feature. This model is pre-trained on the Large Scale Visual Recognition Challenge 2012 (ILSVRC-2012) ImageNet dataset.
Then a Long Short-Term Memory model is adopted as a complement to the CNN model to capture the contextual information in the temporal domain.
The spatial features extracted from the Inception model are fed to the LSTM model as its input. The CNN features are suitable inputs for the LSTM model because they provide rich spatial information. Specifically, we introduce an improved version of the Long Short-Term Memory, called historical Long Short-Term Memory, to describe the historical information as a global feature.
It employs a score weighting scheme to generate the iterated feature vector using the output state at time $t$ and the historical state at time $t-1$.
Then a five-layer deep historical LSTM network is built. Each layer has 30 historical LSTM units.
Finally, we use the output of the deep historical LSTM network as the spatial-temporal semantic description of the visual sequence. It is fed to a soft-max layer for classifying the action video content.
\begin{figure}[htbp]
\centering
\includegraphics[height=12cm,width=8cm]{figure1.eps}
\caption{The proposed framework for tennis action recognition}
\label{Fig1}
\end{figure}
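
As an illustration of the spatial feature extraction step, the frame-level Inception features could be obtained with a few lines of Keras code as sketched below; the exact preprocessing and the layer whose output is used may differ in practice, so this is only an assumed setup rather than our exact pipeline.
\begin{verbatim}
import numpy as np
import tensorflow as tf

# ImageNet-pretrained Inception V3 with global average pooling as the frame encoder
encoder = tf.keras.applications.InceptionV3(weights="imagenet",
                                            include_top=False, pooling="avg")

def frame_features(frames):
    """frames: array of shape (T, 299, 299, 3) with RGB values in [0, 255].
    Returns a (T, 2048) sequence of spatial feature vectors."""
    x = tf.keras.applications.inception_v3.preprocess_input(frames.astype(np.float32))
    return encoder.predict(x, verbose=0)
\end{verbatim}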
\subsection{RNN and LSTM}
RNN is a powerful sequence model widely used in speech recognition and natural language processing.
For a typical RNN model, the update equation at time $t$ is described as follows.
\begin{equation} \label{eq1}
\begin{aligned}
h_t=g(b+Wh_{t-1}+Ux_t)
\end{aligned}
\end{equation}
\begin{equation} \label{eq1}
\begin{aligned}
\hat{y}_{t}=softmax(c+Vh_t)
\end{aligned}
\end{equation}
where $h_t$ is the state of the $t$-th neuron, and $x_t$ is the observation at time $t$ which is the input of the $t$-th neuron.
In a typical RNN model, the state response $h_t$ of the $t$-th neuron is determined by the previous neuron state $h_{t-1}$ and the input $x_t$ of the $t$-th neuron.
$g(\cdot)$ is the activation function.
Usually, the sigmoid function $\sigma(\cdot)$ or the hyperbolic tangent function $\tanh(\cdot)$ can be employed as the activation function.
$\hat{y}_t$ denotes the prediction of label $y$ at time $t$.
It is estimated by the output response $h_t$ of state neuron at $t$ time using a softmax function.
$U$, $W$ and $V$ are the weight matrices.
$b$ and $c$ are the bias vectors.
Traditional RNNs are suitable for modeling short-term dynamics but are unable to capture long-term relations.
LSTM is an improved version of the RNN architecture for learning long-range dependencies and resolving the ``vanishing gradient'' problem.
In the typical LSTM model, the $t$-th hidden neuron is a cell unit containing an input gate $i_t$, a forget gate $f_t$, an output gate $o_t$, an internal memory cell state $c_t$ and an output response $h_t$.
The transition equations of the LSTM at time $t$ can be written as follows.
\begin{equation} \label{eq1}
\begin{aligned}
i_t=\sigma(U^{i}x_t+W^{i}h_{t-1}+P^{i}c_{t-1}+b^i)
\end{aligned}
\end{equation}
\begin{equation} \label{eq1}
\begin{aligned}
f_t=\sigma(U^{f}x_t+W^{f}h_{t-1}+P^{f}c_{t-1}+b^f)
\end{aligned}
\end{equation}
\begin{equation} \label{eq1}
\begin{aligned}
c_t=f_t \odot c_{t-1} +i_t \odot tanh(U^{c}x_t+W^{c}h_{t-1}+b^c)
\end{aligned}
\end{equation}
\begin{equation} \label{eq1}
\begin{aligned}
o_t=\sigma(U^{o}x_t+W^{o}h_{t-1}+P^{o}c_{t}+b^o)
\end{aligned}
\end{equation}
\begin{equation} \label{eq1}
\begin{aligned}
h_t=o_t \odot tanh(c_{t})
\end{aligned}
\end{equation}
where $U^{i}$, $U^{f}$, $U^{c}$, $U^{o}$, $W^{i}$, $W^{f}$, $W^{c}$, $W^{o}$, $P^{i}$, $P^{f}$ and $P^{o}$ are the weight matrices.
$b^i$, $b^f$, $b^c$ and $b^o$ are the bias vectors.
Operator $\odot$ indicates element-wise product.
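
For clarity, a single time step of this cell, including the peephole terms $P^{i}$, $P^{f}$ and $P^{o}$, can be written directly in NumPy as sketched below; the dictionary \texttt{W} of weight matrices and bias vectors is a placeholder.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W):
    """One step of the peephole LSTM defined by the transition equations above."""
    i = sigmoid(W["Ui"] @ x_t + W["Wi"] @ h_prev + W["Pi"] @ c_prev + W["bi"])
    f = sigmoid(W["Uf"] @ x_t + W["Wf"] @ h_prev + W["Pf"] @ c_prev + W["bf"])
    c = f * c_prev + i * np.tanh(W["Uc"] @ x_t + W["Wc"] @ h_prev + W["bc"])
    o = sigmoid(W["Uo"] @ x_t + W["Wo"] @ h_prev + W["Po"] @ c + W["bo"])
    h = o * np.tanh(c)
    return h, c
\end{verbatim}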
\subsection{Historical LSTM}
The typical LSTM uses its final output to encode the input sequence.
When it is adopted for action video recognition on image sequences, the last frame of the video therefore has the most influence on the action class recognition result, while the earlier frames have a minimal impact on the classification due to the forgetting effect.
However, a human action is often defined by the whole movement process.
To better describe the human action, we embed the historical information of the human movement into the LSTM model to build a global feature.
We propose an improved version of the Long Short-Term Memory network, called historical Long Short-Term Memory, to describe this historical information.
The structure of the model is illustrated in Figure 2.
The historical Long Short-Term Memory network is an RNN architecture based on LSTM units with an added extra layer, called the historical layer, which describes the holistic information of the human movement.
In the historical Long Short-Term Memory model, a historical state $l_t$ is introduced at time $t$.
It is generated by a score weighting scheme using the response state $h_t$ at time $t$ and the historical state at time $t-1$.
If the classification loss of the historical state at time $t-1$ is smaller than that of the response state at time $t$, then we generate the new historical state at time $t$ as the weighted sum of the current response state at time $t$ and the historical state at time $t-1$.
The weight of the response state or the historical state is computed according to its classification loss.
Using such a weighting scheme, the influence of key frames which are important to action class recognition is enhanced during the updating process of the historical state.
This helps the pattern classifier to automatically pay more attention to the important frames and omit the insignificant clips.
However, if the classification loss of the historical state at time $t-1$ is larger than that of the response state at time $t$, which means that too many errors have accumulated in the long process of the historical state iteration, then we recalculate the current historical state at time $t$ using the response states from time $t-\tau$ to time $t$.
Here $\tau$ is a parameter controlling the length of the state sequence used to re-initialize the historical state and drop the previous historical state with larger errors.
It determines when to forget the previous response states if they correspond to large errors.
The update equation of the historical state $l_t$ is expressed as follows.
\begin{figure}[htbp]
\centering
\includegraphics[height=6cm,width=8cm]{figure2.eps}
\caption{The structure of historical Long Short-Term Memory model}
\label{Fig1}
\end{figure}
\begin{equation} \label{eq:historical_state}
l_t = \left\{ \begin{array}{ll}
\alpha_{t} h_{t} +(1-\alpha_{t}) l_{t-1}, & \text{if } \epsilon_{h_t} \geq \epsilon_{l_{t-1}} \\
\sum_{k=1}^{t}\omega_{k}^{t}h_{k}, & \text{if } \epsilon_{h_t}<\epsilon_{l_{t-1}}
\end{array} \right.
\end{equation}
where $\alpha_t$ is the weight controlling a balance between the response $h_t$ and the last historical state $l_{t-1}$.
It is calculated by the following formula:
\begin{eqnarray}
\label{linear_scheme}
\alpha_t=\frac{1}{2}\ln(\frac{\epsilon_{l_{t-1}}}{\epsilon_{h_t}})
\end{eqnarray}
where $\epsilon_{h_t}$ denotes the loss between the training label $y_t$ at time $t$ and the estimated label $\hat{y}_t$ at time $t$, which is produced by the softmax function on $c+Vh_t$. In this paper, the training label $y_t$ at time $t$ is set as the action label of the training video.
$\omega_{k}^{t}$ denotes the weight of response $h_k$.
It is calculated by:
\begin{equation} \label{eq:response_weight}
\omega_{k}^{t} = \left\{ \begin{array}{ll}
0, & \text{if } k \leq \tau \\
\frac{1}{t-\tau}, & \text{if } k > \tau
\end{array} \right.
\end{equation}
where $\tau$ is the parameter controlling the forgetting effect.
Finally, the last historical state $l_T$ is used as the output of the historical LSTM model.
It describes the holistic information of the human movement history and can be used for action video recognition.
For example, a softmax layer can be employed to provide the estimated label $\hat{y}$ based on the last historical state $l_T$:
\begin{equation} \label{eq:softmax_output}
\hat{y}=\mathrm{softmax}(d+Ql_T)
\end{equation}
where $d$ and $Q$ are the bias vector and weight matrix of the softmax layer, respectively.
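
The update rule above can be summarized by the following illustrative NumPy sketch (variable names are our own; \texttt{h\_seq} stores the responses $h_1,\dots,h_t$, and \texttt{eps\_h}, \texttt{eps\_l} denote the classification losses of $h_t$ and $l_{t-1}$):

\begin{verbatim}
import numpy as np

def update_historical_state(h_seq, l_prev, eps_h, eps_l, t, tau):
    # h_seq: list of response vectors h_1, ..., h_t (0-indexed in Python).
    if eps_h >= eps_l:
        # Historical state is still reliable: blend it with the new response.
        alpha = 0.5 * np.log(eps_l / eps_h)
        return alpha * h_seq[t - 1] + (1.0 - alpha) * l_prev
    # Too many errors accumulated: re-initialize from later responses only,
    # using the weights omega_k^t defined above.
    weights = np.array([0.0 if k <= tau else 1.0 / (t - tau)
                        for k in range(1, t + 1)])
    return np.sum(weights[:, None] * np.stack(h_seq[:t]), axis=0)
\end{verbatim}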
\subsection{Implementation Details}
TensorFlow and Python are employed as the deep learning platform.
An NVIDIA GTX1080Ti GPU is adopted to run the experiments.
We use a five-layer stacked historical LSTM network.
Each layer has 30 units.
The dropout probability is set to 0.5.
The initial learning rate of the historical LSTM model is set to 0.001.
The learning rate decays exponentially with a base of 0.96 every 100,000 steps during training.
The regularization value of the historical LSTM model is set to 0.004.
The batch size fed to the model is set to 32.
The Adam optimizer is used to train the historical LSTM network.
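
As a rough guide, these optimizer settings correspond to a TensorFlow~2 configuration similar to the following sketch (the historical-LSTM cell itself is our custom module and is omitted here; the variable names are illustrative):

\begin{verbatim}
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001,  # initial learning rate
    decay_steps=100000,           # decay every 100,000 steps
    decay_rate=0.96)              # exponential decay base
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
l2_regularizer = tf.keras.regularizers.l2(0.004)  # weight regularization
BATCH_SIZE = 32
DROPOUT_RATE = 0.5
NUM_LAYERS, UNITS_PER_LAYER = 5, 30  # stacked historical-LSTM configuration
\end{verbatim}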
\section{Experimental results}
\subsection{General action recognition}
\subsubsection{Dataset}
We use the HMDB51 dataset \cite{Kuehne} to show the effectiveness of the proposed historical LSTM for general action recognition tasks.
The HMDB51 dataset is a commonly used dataset containing 6849 videos.
It consists of 51 human action categories including facial actions and body movement.
The original RGB videos of HMDB51 dataset are used to test the proposed framework.
\subsubsection{Results}
Five-fold cross-validation is used to split the dataset into training and test sets.
We compared the proposed historical LSTM to the typical LSTM.
The parameter $\tau$ of the historical LSTM is fine-tuned from 2 to 5.
Table 1 shows the accuracy values of the methods.
As shown in the table, the historical LSTM model achieves its best average prediction accuracy of $0.73$ when the parameter $\tau$ is set to 2.
Furthermore, the average accuracy of the historical LSTM model is $0.62$ when the parameter $\tau$ is set to 3.
When the parameter $\tau$ is set to 4, the historical LSTM model achieves an accuracy of $0.63$.
However, when $\tau$ is set to 5, the accuracy of the historical LSTM declines to $0.62$.
These results indicate that the parameter $\tau$ should be chosen according to the time period of the actions in the historical LSTM model.
Meanwhile, the average accuracy of the LSTM model is $0.47$.
The experimental results show that the proposed historical LSTM model outperforms the LSTM model.
We also compared the proposed model to other state-of-the-art models using RGB data, such as Spatial stream ConvNet \cite{Simonyan}, Soft attention model \cite{Sharma}, Composite LSTM \cite{Soomro} and Mora's model \cite{Mora}.
As shown in Table 1, the correct recognition rate of our method exceeds that of the other models using RGB data.
We also compared our model using RGB data to other methods using different information.
They are the two-stream ConvNet using optical flow information \cite{Simonyan}, Wang's model using
Fisher Vectors \cite{Wang}, and Wang's model using a combination of HOG, HOF
and MBH \cite{Wang}.
The experimental results show that our model achieves high accuracy using only simple RGB data and outperforms the compared models.
Better results would likely be obtained if additional information, such as optical flow, skeleton data, or depth, were added to the input of our proposed model.
\begin{table}[htbp]
\centering
\caption{Accuracy comparison of methods on HMDB51 dataset}
\begin{tabular}{cc}
\toprule
Method & Accuracy \\
\midrule
Historical LSTM ($\tau=2$) & 0.73\\
Historical LSTM ($\tau=3$) & 0.62 \\
Historical LSTM ($\tau=4$) & 0.63 \\
Historical LSTM ($\tau=5$) & 0.62 \\
LSTM & 0.47 \\
Spatial stream ConvNet \cite{Simonyan} & 0.41 \\
Soft attention model \cite{Sharma} & 0.41 \\
Composite LSTM \cite{Soomro} & 0.44 \\
Mora et al. \cite{Mora} & 0.43 \\
Two-stream ConvNet using optical flow \cite{Simonyan} & 0.59 \\
Wang et al. (using Fisher Vectors) \cite{Wang} & 0.53 \\
Wang et al. (using a combination of HOG, HOF and MBH) \cite{Wang} & 0.60 \\
\bottomrule
\end{tabular}%
\label{tab:recresult_hmdb}%
\end{table}%
\subsection{Tennis action recognition}
\subsubsection{Dataset}
The Three Dimensional Tennis Shots (THETIS) dataset \cite{Gourgari} is used for evaluating the proposed method.
This dataset contains 12 basic tennis actions, each of which is performed repeatedly by 31 amateurs and 24 experienced players.
The 12 tennis actions performed by actors are: Backhand with two hands, Backhand, Backhand slice, Backhand volley, Forehand flat, Forehand open stands, Forehand slice, Forehand volley, Service flat, Service kick, Service slice and Smash.
Each actor repeats each tennis action 3 to 4 times.
There are 8734 videos in total in the AVI format, with a total duration of 7 hours and 15 minutes.
Specifically, a set of 1980 RGB videos in the AVI format is provided.
There are two different indoor backgrounds.
The backgrounds contain different scenes in which multiple persons pass or play basketball.
The length of video sequences also varies.
Although the depth, skeleton and silhouettes videos are also given, we only use the RGB videos to perform the tennis action recognition experiments.
\subsubsection{Results}
Five-fold cross-validation is used to split the dataset into training and test sets.
We compared the proposed historical LSTM to the typical LSTM.
The parameter $\tau$ of the historical LSTM is fine-tuned from 2 to 5.
Table 2 shows the recognition accuracy values of the methods.
As shown in the table, when the parameter $\tau$ is set to 3, the historical LSTM model achieves its best average accuracy of $0.74$.
Furthermore, the average accuracy of the historical LSTM model is $0.70$ when the parameter $\tau$ is set to 2.
When the parameter $\tau$ is set to 4, the historical LSTM model achieves an accuracy of $0.71$.
However, when $\tau$ is set to 5, the accuracy of the historical LSTM declines to $0.63$.
These results suggest that setting the parameter $\tau$ to a medium value, in accordance with the time period of the actions, works best for the historical LSTM model.
Meanwhile, the average accuracy of the LSTM model is $0.56$.
The experimental results show that the proposed historical LSTM model outperforms the LSTM model in prediction accuracy.
\begin{table}[htbp]
\centering
\caption{Accuracy comparison of methods on THETIS dataset}
\begin{tabular}{cc}
\toprule
Method & Accuracy \\
\midrule
Historical LSTM ($\tau=2$) & 0.70 \\
Historical LSTM ($\tau=3$) & 0.74 \\
Historical LSTM ($\tau=4$) & 0.71 \\
Historical LSTM ($\tau=5$) & 0.63 \\
LSTM & 0.56 \\
Mora et al. \cite{Mora} & 0.47 \\
Gourgari et al. (using depth videos)\cite{Gourgari} & 0.60 \\
Gourgari et al. (using 3D skeletons)\cite{Gourgari} & 0.54 \\
\bottomrule
\end{tabular}%
\label{tab:recresult_thetis}%
\end{table}%
In \cite{Mora}, the authors performed experiments using the RGB videos of the THETIS dataset and obtained an average accuracy of 0.47 for tennis action recognition using a leave-one-out strategy.
Compared to their experiments, we employ fewer training samples and obtain better performance under a more difficult condition.
In \cite{Gourgari}, the authors performed tennis action classification experiments using depth videos and 3D skeletons separately on the THETIS dataset and obtained accuracies of 0.60 and 0.54, respectively.
Compared to their results, our experiments use the raw RGB images alone, which is a more challenging task.
The experimental results show that our proposed method is competitive and outperforms these state-of-the-art methods.
\section{Conclusion}
This paper proposed a historical Long Short-Term Memory network for modeling tennis action image sequences.
A score weighting strategy was introduced into the Long Short-Term Memory network to model the historical information.
The proposed historical LSTM was used for tennis action recognition on the CNN features of local frame images of videos.
Experimental results on the tennis action datasets demonstrate that our method is effective.
Future work will include improving the weighting strategy and conducting more experiments on real tennis game videos.
\section*{Acknowledgment}
This work was supported by the Xiamen University of Technology High Level
Talents Project (No.YKJ15018R).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
\label{sec:intro}
Sports have been an integral part of human history as they have served as an important venue for humans to push their athletic and mental abilities. Sports transcend culture~\cite{taylor_2013,Seippel_cross_national_sports_importance} and provide strong physiological and social benefits~\cite{malm2019physical}. The physical and mental training from sports is widely applicable to multiple aspects of life~\cite{Eime2013,Malm2019-fv}. Developing robotic systems for competitive sports can increase participation in sports and be used for sports training~\cite{Reinkensmeyer2009-ex,biomechanics_sports}, thus benefiting society by promoting a healthier lifestyle and increased economic activity. Furthermore, sports serve as a promising domain for developing new robotic systems through the exploration of high-speed athletic behaviors and human-robot collaboration.
Researchers in robotics have sought to develop autonomous systems for playing various sports such as soccer and table tennis. RoboGames~\cite{calkins2011overview} and RoboCup~\cite{carbonell_robocup_1998} have inspired the next generation of roboticists to solve challenges in sensing, navigation, and control. However, many of the advancements or techniques used in solving these obstacles have been restricted to overly miniaturized platforms (e.g., Robo-Soccer~\cite{cass2001robosoccer}, Robo-sumo~\cite{robot_sumo}), relatively stationary systems~\cite{muelling2010learning}, or highly idealized/controlled environments~\cite{hattori-humanoidtennisswing2020,terasawa-humanoidtennisswing2016,robot_sumo}.
The hardware is often specifically engineered to the sport~\cite{buchler2022learning}, making re-purposing designs and components for other applications difficult.
We address these limitations of prior work by proposing an autonomous system for regulation wheelchair tennis that leverages only commercial off-the-shelf (COTS) hardware. Our system is named ESTHER (\underline{E}xperimental \underline{S}port \underline{T}ennis w\underline{HE}elchair \underline{R}obot) after arguably the greatest wheelchair tennis player, Esther Vergeer.
\begin{figure}
\centering
\includegraphics[width=0.975\linewidth, height = 7 cm]{fig/main_pic_unsqueezed.png}
\vspace{-4pt}
\caption {ESTHER hitting a tennis ground stroke.}
\label{fig:wheelchair-robot}
\vspace{-8pt}
\end{figure}
Tennis is a challenging sport requiring players to exhibit accurate trajectory estimation, strategic positioning, tactical shot selection, and dynamic racket swings. The design of a system that meets these demands comes with a multitude of complications to address: precise perception, fast planning, low-drift control, and highly-responsive actuation. These challenges must be resolved in a framework efficient enough to respond in fractions of a second~\cite{tennis_timepressure}.
In this paper, we present the system design and an empirical analysis to enable ESTHER (Fig.~\ref{fig:wheelchair-robot}) to address these challenges. Our design adheres to ITF tennis regulations by utilizing a legal tennis wheelchair and an anthropomorphic robot arm, which were required to obtain permission to field ESTHER on an NCAA Division 1 indoor tennis facility.
We contribute to the science of robotic systems by (1) demonstrating a fully mobile setup that includes a decentralized low-latency vision system and a light-weight (190 lbs) mobile manipulator that can be easily set up on any tennis court within 30 minutes, (2) designing a motorized wheelchair base that is capable of meeting the athletic demands of tennis under the full load of the manipulator, onboard computer, and batteries, and (3) demonstrating that our planning and control algorithm is capable of hitting powerful ground strokes that traverse the court.
We open-sourced our designs and software on our project website: \url{https://core-robotics-lab.github.io/Wheelchair-Tennis-Robot/}
\begin{figure}
\centering
\includegraphics[width=0.82\linewidth]{fig/chain_drive.png}
\vspace{-7.5pt}
\caption{The schematic (Fig. a) and images (Fig. b-c) of the mechanical assembly of the chain drive system.}
\label{fig:wheelchair-mechanical}
\end{figure}
\section{Related Work}
Systems that play tennis must be able to rapidly traverse the court, control the racket at high speeds with precision, and deliver sufficient force to withstand the impulse from ball contact. Existing systems that attempt to play other racket sports~\cite{mori-badminton-pneumatic2019,buchler2022learning} typically use pneumatically driven manipulators for quick and powerful control; however, these systems are stationary and are thus unsuitable.
Other works~\cite{janssens-badmintonrail2012,liu-badmintonlearning2013,gao-deeprlrailtabletennis2020,zhang-humanoidrail2014,miyazaki-railtabletennis2006,ji-railtabletennis2021} have mounted robotic manipulators on rails, but these systems do not support full maneuverability. Furthermore, these robot environments are heavily modified to suit the robot, and thus, are rigid and unsuitable for human proximity.
There have been several recent works in creating agile mobile manipulators~\cite{dong_catch_2020,yang-varsm2022}. For example,~\cite{dong_catch_2020} proposes a UR10 arm with 6 degrees of freedom (DoF) mounted on a 3-DoF omni-directional base for catching balls. This system works well for low-power, precision tasks, but lacks the force required for a tennis swing. VaRSM~\cite{yang-varsm2022} is a system designed to play racket sports including tennis and is most similar to our work. The system proposed uses a custom 6-DoF arm and custom swerve-drive platform as its base. While this system appears effective, it is difficult to reproduce given that the hardware is custom-made. Further, the system was developed by a company that has not open-sourced its implementation, and experimental details are lacking to afford a proper baseline. In contrast, ESTHER is a human-scale robot constructed from a COTS 7-DoF robot manipulator arm and a regulation wheelchair with an open-sourced design and code-base. This enables our system to serve as a baseline for human-scale athletic robots while being reproducible and adaptable for performing mobile manipulation tasks in a variety of settings, such as healthcare~\cite{tele-nursing,humanassistance_mobilemanipulator,King2011-kt}, manufacturing~\cite{industrial_mobilemanipulation,omnivil,survey_mobilemanipulator}, and more~\cite{service_robots}.
\begin{figure}
\includegraphics[width=\linewidth]{fig/wheelchair_embedded_control.png}
\caption {Electrical and communication diagram for ESTHER's mobile base and robotic arm.}
\label{fig:wheelchair-electrical}
\end{figure}
\section{METHODS}
\label{sec:methods}
We detail ESTHER's hardware (Sec.~\ref{subsec:hardware}), perception (Sec.~\ref{subsubsec:sensing}), planning (Sec.~\ref{subsubsec:planning}), and control (Sec.~\ref{subsubsec:control}) components. ESTHER's design meets the athletic demands of tennis and is easy and quick to set up.
\subsection{Hardware}
\label{subsec:hardware}
We mounted a 7-DoF high-speed Barrett WAM on a motorized Top End Pro Tennis Wheelchair (Fig.~\ref{fig:wheelchair-robot}).
\textbf{Wheelchair} --
We designed a chain-drive system to deliver power from the motors to the wheels. The schematic of the system's mechanical assembly is illustrated in Fig.~\ref{fig:wheelchair-mechanical}. Power from the battery is delivered through an ODrive motor controller to two ODrive D6374 motors. The motors for both wheels are coupled to a 1:10 ratio speed-reducer planetary gearbox. The gearbox output shaft is coupled with the wheel through a chain and sprocket system that provides an additional 1:2 speed reduction to give a total reduction of 1:20.
With the motors rotating at maximum speed, the wheelchair can achieve linear velocities of up to \SI{10}{\meter\per\second} and an in-place angular yaw velocity of up to \SI{20}{\radian\per\second}. A chain drive system offers higher durability and torque capacity, larger tolerances, and a simpler design than belt-drive or friction-drive systems.
\begin{figure}
\includegraphics[width=\linewidth]{fig/wtr_diagram.drawio.png}
\caption{ESTHER's System Architecture.}
\label{fig:architecture}
\end{figure}
The electrical system to power and control the mobile wheelchair base consists of a DC battery, motors, a motor controller, a wireless radio controller and receiver pair, and a microcontroller as illustrated in Fig.~\ref{fig:wheelchair-electrical}. The radio controller allows the user to remotely swap between idle, manual, and autonomous modes. In idle mode, motors are de-energized. In manual mode, a human can remotely control the motion of the wheelchair. In autonomous mode, the wheelchair's motion is controlled by the onboard computer that communicates the desired wheel velocities to the microcontroller. The radio controller also has an emergency stop (E-stop) button. If the E-stop button is pressed, the microcontroller commands both the motor speeds to be zero and turns off the power to the arm. Maintaining a remote E-stop enables the whole system to be tether-free and safely operable from a distance.
\textbf{Barrett WAM} --
We mount a ``HEAD Graphene Instinct Power'' tennis racket at the end of the WAM arm using a 3D-printed connector (Fig.~\ref{fig:wheelchair-robot}). The design and visualization of potential failure modes are available on the project website. The arm is powered by a \SI{500}{\watt} mobile power station that outputs AC current as illustrated in Fig.~\ref{fig:wheelchair-electrical}. The power station is used to power the onboard computer, the LIDAR, and the WAM. The computer requires up to \SI{300}{\watt} when charging and running the full code stack. The LIDAR requires \SI{30}{\watt} when running at maximum frequency. The WAM requires \SI{50}{\watt} when static and up to \SI{150}{\watt} while swinging.
\subsection{System Design}
\label{subsec:architecture}
We decompose ESTHER into three main components: Sensing and State Estimation, Planning, and Low-Level Controllers (Fig.~\ref{fig:architecture}). We leverage ROS \cite{ros, ros-guide} to build an interconnected modular software stack using a combination of open-source packages \cite{ros-guide} and custom packages.
\subsubsection{Sensing and State Estimation}
\label{subsubsec:sensing}
To track a tennis ball as it moves across the court, we built six ball detection rigs (Fig.~\ref{fig:ball-detection-module}). Each rig consists of a Stereolabs ZED2 stereo camera, which is connected to an NVIDIA Jetson Nano that communicates to a central onboard computer over Wi-Fi.
Each rig is mounted on a 13-foot tripod stand, is placed to maximize coverage of the court (Fig.~\ref{fig:court-setup}), and is calibrated using an AprilTag~\cite{apriltag2016} box.
\begin{figure}
\centering
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=\linewidth, height = 4cm]{fig/detection_module_colored.png}
\caption {}
\label{fig:ball-detection-module}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\linewidth}
\centering
\includegraphics[width=\linewidth, height = 4cm]{fig/results/ball_detection_variance.png}
\caption{}
\label{fig:ball_detect_variance}
\end{subfigure}
\vspace{2pt}
\caption{Fig.~\ref{fig:ball-detection-module} depicts the ball detection rig, and Fig.~\ref{fig:ball_detect_variance} depicts the measured vs. actual distance of a ball.}
\label{fig:combined}
\end{figure}
Leveraging our camera rigs, we then employ multiple computer vision techniques to detect an airborne tennis ball: color thresholding, background subtraction, and noise removal to locate the pixel center of the largest moving colored tennis ball. Orange tennis balls were used since orange provided a strong contrast against the green/blue tennis courts. The ball coordinates are computed relative to the ball detection rig's reference frame by applying epipolar geometry on the pixel centers from the rig. The ball's pose was then mapped to the world frame.
The positional error covariance was modeled through a quadratic relationship (i.e., $aD^2 + bD + c$) where $D$ is the ball's distance to the camera and $a$, $b$, and $c$ are tuned via experiments of ball rolling along a linear rail (Fig.~\ref{fig:ball_detect_variance}). A quadratic expression is utilized as it provides a simple and accurate approximation of the measurement error. By pursuing a distributed vision approach, image data could be efficiently processed locally on each Jetson Nano and sent to the onboard computer via Wi-Fi. These detection modules are able to process the stereo camera's 1080p images, produce a positional estimate, and transmit the measurement to the central computer within \SI{100}{\milli\second} at a frequency of \SI{25}{\hertz}. The vision system is able to achieve up to \SI{150}{\text{estimates}\per\second}.
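
A simplified OpenCV sketch of the per-frame detection step is given below; the HSV thresholds and morphology parameters are illustrative rather than our tuned values, and the deployed code additionally handles stereo triangulation and timing:

\begin{verbatim}
import cv2
import numpy as np

bg_sub = cv2.createBackgroundSubtractorMOG2()  # models the static court background

def detect_ball(frame_bgr, lower_hsv=(5, 120, 120), upper_hsv=(20, 255, 255)):
    """Return the pixel center of the largest moving orange blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    motion_mask = bg_sub.apply(frame_bgr)           # background subtraction
    mask = cv2.bitwise_and(color_mask, motion_mask)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((3, 3), np.uint8))  # noise removal
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)          # largest moving blob
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (u, v) pixel center
\end{verbatim}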
\paragraph{Ball EKF and Roll-out Prediction}
\label{paragraph:ball-ekf-rollout}
The detection rigs' ball position estimates are fused through a continuous-time Extended Kalman Filter (EKF)~\cite{sorenson1985kalman} to produce a single pose estimate. The EKF is based on the {\sffamily robot\_localization} package~\cite{MooreStouchKeneralizedEkf2014}, which is augmented to incorporate ballistic trajectory, inelastic bouncing, court friction, and ball-air interactions~\cite{airballinteractions} into the state-estimate prediction. Our localization approach allows the ball detection errors to be merged into a single covariance estimate, informing how much confidence should be placed on the ball's predicted trajectory. To enhance the EKF predictions, the state estimation is reset when a tennis ball is first detected, and any measurement delays are handled by reverting the EKF to a specified lag time and re-applying all measurements to the present. By rolling out the EKF's state predictions forward in time, it is possible to estimate the tennis ball's future trajectory on the court, enabling the wheelchair to move to the appropriate position.
If the confidence in the predicted trajectory is high, ESTHER acts upon the prediction. Otherwise, the trajectory estimate is ignored until the EKF converges to higher confidence. The EKF and rollout trajectory estimator run at \SI{100}{\hertz} to ensure the high-level planners can incorporate timely estimates.
\indent \indent Our decentralized, modular approach to ball state estimation and trajectory prediction enables us to create a low-cost vision system that can be quickly set up on any tennis court. While we can achieve high-speed ball detection at up to \SI{150}{\hertz}, unfortunately, this rate is not enough to infer precisely the spin on the tennis ball (i.e., Magnus effect forces). We attempt to infer this effect by incorporating indirect estimation methods such as Adaptive EKF (AEKF)~\cite{AdaptiveEKF_mobilerobot}, and trajectory fitting~\cite{jonas-tabletenniskukaspin2020} into our EKF.
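
Conceptually, the roll-out integrates the EKF mean state forward under a ballistic-plus-drag model with an inelastic bounce on the court; the sketch below uses illustrative drag and restitution constants rather than our tuned values, and omits the court-friction and spin terms:

\begin{verbatim}
import numpy as np

def rollout(p, v, dt=0.005, horizon=2.0, k_drag=0.1, e=0.75):
    """Forward-simulate position p and velocity v (3-vectors) to predict
    the ball trajectory; returns a list of (t, x, y, z) predictions."""
    p, v = np.asarray(p, dtype=float), np.asarray(v, dtype=float)
    g = np.array([0.0, 0.0, -9.81])
    traj = []
    for i in range(int(horizon / dt)):
        a = g - k_drag * np.linalg.norm(v) * v   # gravity + quadratic drag
        v = v + a * dt
        p = p + v * dt
        if p[2] < 0.0:                            # inelastic bounce on z = 0
            p[2] = 0.0
            v[2] = -e * v[2]   # the full model also damps the horizontal
                               # velocity here to account for court friction
        traj.append((i * dt, *p))
    return traj
\end{verbatim}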
\paragraph{Wheelchair Localization}
\label{paragraph:wheelchair-localization}
The motion of ESTHER's wheelchair base can be modeled as a differential drive base that is equipped with three different sensors to determine the robot's state in the world as displayed in Fig.~\ref{fig:architecture}. 8192 Counts Per Revolution (CPR) motor encoders provide the velocity and position of each wheel at \SI{250}{\hertz}. A ZED2 Inertial Measurement Unit (IMU) is used to obtain the linear velocity, angular velocity, and orientation of the motion base at \SI{400}{\hertz}. A Velodyne Puck LiDAR sensor is used to obtain an egocentric \SI{360}{\degree} point cloud with a \SI{30}{\degree} vertical field of view at \SI{20}{\hertz}. For localizing the wheelchair, we first create a map of the environment offline by recording IMU data and 3D point cloud data while manually driving the wheelchair around at slow speeds. We use the {\sffamily hdl\_graph\_slam}~\cite{hdl_localization} package to create a Point Cloud Data (PCD) map from this recorded data for use online. Afterward, we use a point cloud scan matching algorithm~\cite{hdl_localization} to obtain an odometry estimate based on LIDAR and IMU readings. A differential drive controller gives a second odometry estimate using encoder data from wheels and IMU readings. We follow guidance from~\cite{MooreStouchKeneralizedEkf2014} for fusing these estimates to get the current state of the wheelchair.
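
For reference, the wheel-encoder odometry component reduces to the standard differential-drive update sketched below (a simplified sketch with illustrative parameter names; in practice this estimate is fused with the IMU and LiDAR scan-matching odometry as described above):

\begin{verbatim}
import math

def diff_drive_odometry(x, y, theta, v_l, v_r, dt, track_width):
    """One odometry update for a differential-drive base from the left and
    right wheel linear velocities v_l, v_r over a time step dt."""
    v = 0.5 * (v_l + v_r)               # forward velocity of the base
    omega = (v_r - v_l) / track_width   # yaw rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
\end{verbatim}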
\begin{figure}
\includegraphics[width=\linewidth]{fig/results/trapezoidal_traj.png}
\caption {Trapezoidal velocity profiles for individual joints.}
\label{fig:trapezoidal-profile}
\end{figure}
\subsubsection{Planning}
\label{subsubsec:planning}
Given the predicted ball trajectory and current position of the wheelchair, the ``strategizer'', a behavior orchestrator, selects the desired interception point. Then, the strategizer determines the required wheelchair pose, approximate stroke trajectory, and stroke timing. The strategizer locates the point where the ball crosses a hitting plane chosen such that the ball is at a hittable height when passing the robot. The strategizer ensures that the interception point is not too close to the ground and that the corresponding joint positions are within limits.
Next, the strategizer finds the stroke parameters and the corresponding wheelchair placement for that stroke. The stroke is parameterized by three points: a start point (i.e., the joints at the swing's beginning), a contact point (i.e., the joints at ball contact), and an end point (i.e., the joints at the swing's completion). These points are determined such that each joint is traveling at the maximum possible speed at the contact point while staying within each individual joint's position, velocity, and acceleration limits. Using the contact point and the robot's kinematics, the wheelchair's desired position is geometrically determined. Lastly, the strategizer identifies the exact time to trigger the stroke by combining the stroke duration and the time the ball takes to reach the interception point.
The strategizer continuously updates the interception point as the vision system updates its estimate of the ball's predicted trajectory, allowing us to adjust the stroke parameters and the wheelchair position until a few milliseconds before the ball crosses the interception plane, thus increasing the chances of a successful hit.
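
The core of the strategizer's interception-point selection can be sketched as follows (variable names and the plane-crossing convention are illustrative; the real implementation also checks joint limits and only acts when the trajectory covariance indicates sufficient confidence):

\begin{verbatim}
def select_interception(rollout_traj, hit_plane_x, min_height, time_to_contact):
    """Pick the point where the predicted trajectory crosses the hitting
    plane and the time at which the stroke must be triggered.
    rollout_traj is a list of (t, x, y, z) predictions; time_to_contact is
    the duration from swing start to the stroke's contact point."""
    for t, x, y, z in rollout_traj:
        if x <= hit_plane_x:          # ball reaches the hitting plane
            if z < min_height:        # too close to the ground -> reject
                return None
            trigger_time = t - time_to_contact  # start the swing early enough
            return (x, y, z), trigger_time
    return None
\end{verbatim}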
\subsubsection{Low-Level Control/Execution}
\label{subsubsec:control}
We next describe the interaction between the strategizer and the low-level controllers.
\paragraph{Wheelchair Control}
\label{paragraph:wheelchair-control}
Given the robot's position and state in the world, we use ROS's {\sffamily move\_base} package \cite{ros-guide} to command the robot to go to the desired position. We use the {\sffamily move\_base's} default global planner (i.e., Dijkstra's algorithm) as our global planner and the Timed Elastic Band (TEB) planner \cite{teb_1,teb_2} as our local planner. The global planner finds a path between the wheelchair's current pose $p_{curr}$ and the desired pose $p_{dest}$ given a map and obstacles detected by the LiDAR. The local planner tries to follow this path as closely as possible while performing real-time collision checking and obstacle avoidance. The plan from the local planner is translated to wheel velocities through a differential drive controller. The velocities are then communicated from the onboard computer to the microcontroller over serial and are executed on the wheelchair. The motor controller uses PID control to command the wheels at desired velocities.
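
The last step, mapping the local planner's velocity command to wheel speeds, is the standard differential-drive inverse kinematics, sketched below with illustrative parameter names:

\begin{verbatim}
def cmd_vel_to_wheel_speeds(v, omega, track_width, wheel_radius):
    """Convert a commanded (v, omega) of the base into left/right wheel
    angular velocities for the PID wheel controllers."""
    v_l = v - 0.5 * omega * track_width   # left wheel linear velocity
    v_r = v + 0.5 * omega * track_width   # right wheel linear velocity
    return v_l / wheel_radius, v_r / wheel_radius
\end{verbatim}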
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth,height=3.3cm]{fig/simulation_rviz.png}
\caption {ESTHER simulation in RViz. }
\label{fig:simulation-rviz}
\end{figure}
\paragraph{Arm Control}
\label{paragraph:arm-control}
The arm-control subsystem deals with generating joint trajectories for the joints and executing them on the actual arm. The Barrett high-speed WAM has 7 joints in total: base yaw, shoulder pitch, shoulder yaw, elbow pitch, wrist pitch, wrist yaw, and palm yaw. The swing utilizes base yaw, shoulder pitch, and elbow pitch joints to generate speed, and the rest of the joints are held at constant positions during the swing so the racket is in the right orientation when making contact with the ball.
\indent \indent To hit a successful return, humans generate high racket head speeds by merging contributions from multiple body joints~\cite{groundstroke_biodynamics}. Inspired by this ``summation of speed principle''~\cite{Landlinger2010-iy}, we created a ``Fully-Extended'' Ground stroke (FEG) that maximizes the racket speed when it makes contact with the ball. The base yaw and elbow pitch joint provide the velocity component in the direction we want to return the ball, and the shoulder pitch joint is used to adjust for the ball height and hit the ball upward (Fig.~\ref{fig:swing-capability}). Each of these joints follows a trapezoidal velocity profile (Fig.~\ref{fig:trapezoidal-profile}).
The joints' trajectories are sent from the onboard computer to the WAM computer which uses a PID controller to execute the trajectory on the arm.
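
A minimal single-joint version of the trapezoidal velocity profile used for the swing is sketched below; the actual limits come from the WAM joint specifications and the stroke parameterization described above:

\begin{verbatim}
import numpy as np

def trapezoidal_profile(q0, q1, v_max, a_max, dt=0.002):
    """Sample joint positions for a trapezoidal velocity profile from q0 to q1,
    falling back to a triangular profile when v_max cannot be reached."""
    d = abs(q1 - q0)
    sign = 1.0 if q1 >= q0 else -1.0
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc > d:                    # triangular: never reaches v_max
        t_acc = np.sqrt(d / a_max)
        t_flat = 0.0
    else:
        t_flat = (d - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_flat
    ts = np.arange(0.0, t_total, dt)
    qs = []
    for t in ts:
        if t < t_acc:                                 # accelerate
            s = 0.5 * a_max * t ** 2
        elif t < t_acc + t_flat:                      # cruise at v_max
            s = d_acc + v_max * (t - t_acc)
        else:                                         # decelerate
            td = t_total - t
            s = d - 0.5 * a_max * td ** 2
        qs.append(q0 + sign * s)
    return ts, np.array(qs)
\end{verbatim}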
\subsection{Simulation}
\label{subsec:simulation}
Playing tennis involves high-speed motion, which can be hazardous. As such, we set up a kinematic simulator in RViz \cite{rviz} (Fig.~\ref{fig:simulation-rviz}) for testing behavior interactions.
\section{Experiments and Results}
\label{sec:exp-and-results}
We evaluate ESTHER and its components to serve as a baseline for future research on athletic robots for tennis. In Section~\ref{subsec:exp_setup}, we describe the details of our experimental setup. In Section~\ref{subsec:system-capabilities}, we evaluate the capabilities of individual subsystems and the system as a whole. Finally, we report the results of our experiments in Section~\ref{subsec:court-lab-experiments}.
\begin{figure}
\centering
\includegraphics[height = 5.7cm]{fig/court_setup.png}
\caption {Schematic showing the court setup }
\label{fig:court-setup}
\end{figure}
\subsection{Experimental Setup}
\label{subsec:exp_setup}
We have two evaluation settings for our system: one at an NCAA Division 1 indoor tennis court facility and the other within a lab space. The tennis court setup is equal to the size of a regulation tennis court, i.e., \SI{23.77}{\meter} by \SI{10.97}{\meter}, and the lab setup is \SI{11}{\meter} by \SI{4.57}{\meter}.
Fig.~\ref{fig:court-setup} displays the world coordinate frame and on-court setup. The ball is launched by a ball launcher (Lobster Sports - Elite Two Tennis Ball Launcher) or a human towards the robot from the other side of the court (\SI{11.9}{\meter} $< x <$ \SI{23.8}{\meter}).
The lab setup is similar to the court setup except the lab covers a smaller area, has a different surface texture affecting ball bounce and wheel traction, does not have court lines, and has different lighting conditions. Overall, the lab setup is easier for hitting balls.
\subsection{Sub-System Experiments}
\label{subsec:system-capabilities}
In this section, we overview the capabilities of our current system and the results of experiments conducted in the tennis court and lab based on the setup described in Section~\ref{subsec:exp_setup}.
\subsubsection{Wheelchair}
\label{subsec:wheelchair-capability}
In manual mode, we safely drove the wheelchair under load at a linear speed of \SI{4.34}{\meter\per\second} and at an angular speed of \SI{5.8}{\radian\per\second}, less than half of the maximum possible speeds achievable with our system. In autonomous mode testing with the whole stack, we achieve accelerations of \SI{1.42}{\meter\second\tothe{-2}} and decelerations of \SI{1.60}{\meter\second\tothe{-2}}. These acceleration and deceleration values exceed the average side-to-side acceleration and deceleration values of $\approx$\SI{1.00}{\meter\second\tothe{-2}} achieved by professional human players~\cite{Filipcic2017-iy}.
\subsubsection{Stroke Speed}
\label{subsec:arm-swing-capability}
ESTHER reaches pre-impact racket head speeds (Fig.~\ref{fig:swing-capability}) of \SI{10}{\meter\per\second}. This speed is on the same order of magnitude as that of professional players (\SI{17}-\SI{36}{\meter\per\second} \cite{Allen2016}).
\begin{figure}
\includegraphics[width=0.95\linewidth, height = 5.5cm]{fig/results/swing_wtr.png}
\caption {Racket head speed during a FEG stroke.}
\label{fig:swing-capability}
\end{figure}
\subsubsection{Sensing and State Estimation}
\label{subsec:sensing-state-estimation-capabilities}
An important consideration for playing a sport like tennis is to anticipate the ball trajectory early so the robot can get in position to return the ball. To measure the performance of our ball trajectory prediction system, we measure the error between the ball's predicted \(x, y, z\) position and its ground-truth position as it crosses the interception plane as a function of the fraction of time passed to reach the interception plane. We report the average interception point prediction error over 10 trajectories in our court setup in Fig.~\ref{fig:rollout_error}. Each trajectory is roughly \SI{2}{\second} long, and the ball was launched at roughly \SI{8}{\meter\per\second}. In the court setup, the ball travels along the \(x\)-axis, so the prediction error in \(x\) affects the timing of the stroke. Similarly, prediction error in \(y\) affects wheelchair positioning, and \(z\) affects the height at which the arm swings. As the FEG stroke takes about \SI{0.5}{\second} to go from the start to the interception point, we must be certain about the \(x\) coordinate of the interception point before 75\% of the total trajectory has elapsed. Defining acceptable error margins for racket positions at the interception plane to be equal to the racket head width, we can observe from Fig.~\ref{fig:rollout_error} that the \(y\) and \(z\) predictions converge to be within the acceptable error margin after 50\% of the trajectory is seen. Given that the wheelchair is able to accelerate at \SI{1.42}{\meter\second\tothe{-2}}, we assess that ESTHER is capable of moving up to \SI{0.71}{\meter} (approximately $\frac{1}{2}at^2$ with $a=$ \SI{1.42}{\meter\second\tothe{-2}} over the remaining $\sim$\SI{1}{\second}) from a standing start after we establish confidence in the predicted interception point. For larger distances, we need accurate predictions sooner.
\begin{figure}
\centering
\includegraphics[width=\linewidth, height = 4.5cm]{fig/results/rollout_error.png}
\caption{Error in ball's predicted vs.~actual position at desired intercept. Dashed lines represent range of acceptable error.}
\label{fig:rollout_error}
\end{figure}
\subsubsection{Reachability}
\label{subsec:reachability}
Researchers found that in professional tennis, 80\% of all strokes require players to move less than \SI{2.5}{\meter}~\cite{tennis-lateral-movement}. To benchmark our system's ability to reach balls that are in this range, we conduct a reachability study. We started launching balls toward the robot in our court setup and recorded the results. After every 10 balls, we increased the distance between the wheelchair and the ball's average position at the interception plane by \SI{0.3}{\meter}. In this test, to mitigate the latency introduced by the time the vision system requires to converge to an accurate estimate, we start moving the wheelchair towards the ball's average detected \(y\) position as soon as the ball is detected for the first time after launch. The position of the wheelchair is later fine-tuned once the prediction from the vision system has converged. Fig.~\ref{fig:system-reachibility} shows a histogram of the success rate as a function of the distance the wheelchair needed to move. As expected, the success rate decreases as the distance the wheelchair needs to move increases. However, we still see successes even when the wheelchair movement approaches \SI{2}{\meter}, demonstrating our system's agility for tennis.
\subsection{Whole-System Experiments}
\label{subsec:court-lab-experiments}
We perform 15 trials in each scenario and report the details of the hit and return rate in Table~\ref{tab:results}. A returned ball over the net landing inside the singles lines of the court is marked as successful. Fig.~\ref{fig:frame-by-frame} illustrates a frame-by-frame depiction of our system in action on the court. For tests inside the lab, a ball that goes over the net height (\SI{1.07}{\meter}) while crossing the position from which the ball was launched is marked successful.
\begin{table*}[ht]
\begin{center}
\begin{tabular}{lcccccc}
\hline
&
\multicolumn{2}{c}{\bf Ball Launch} & \multicolumn{2}{c}{\bf Interception Point Lateral ($y$) spread} &
\multicolumn{2}{c}{\bf Success Metrics}
\\ \cline{2-7}
{\bf Setup} & {\bf \Centerstack{Distance to \\ Interception Plane (\SI{}{\meter})}} & {\bf \Centerstack{Average Launch\\Speed (\SI{}{\meter\per\second})}} & {\bf IQR (\SI{}{\meter})} & {\bf Std. Dev (\SI{}{\meter})} & {\bf Hit Rate} & {\bf Success Rate} \\
\hline
\shortstack{Court (Ball Launcher)} & 7.9 & 8.01 & 0.31 & 0.23 & 73\% & 66\% \\
\shortstack{Court (Ball Launcher)} & 12.8 & 12.64 & 0.26 & 0.29 & 60\% & 53\% \\
\shortstack{Lab (Ball Launcher)} & 7.5 & 6.79 & 0.28 & 0.20 & 93\% & 80\% \\
\shortstack{Lab (Human)} & 7.5 & 6.56 & 0.54 & 0.52 & 40\% & 33\% \\ \hline
\end{tabular}
\caption{Experiment results for 15 consecutive trials in different scenarios}
\label{tab:results}
\end{center}
\end{table*}
To further evaluate the variance in ball launches, we conducted a contiguous trial of 50 launches with a ball launcher in the court setting and plotted the histogram of the time it takes to reach the pre-specified interception plane \SI{8}{\meter} away from the launcher in Fig.~\ref{fig:intercept_time_hist}.
In Fig.~\ref{fig:ball_hit_scatter}, we visualize the position of the balls at the interception plane and the configurations that were reachable without wheelchair movement. 65\% of the launched balls were in a configuration that required wheelchair movement, showcasing the importance of a mobile system to play a sport like tennis. The experiment was repeated in the lab setting as well. The results are marginally better in the lab setting due to the closer proximity of the cameras and more controlled conditions. These results along with the data in Table~\ref{tab:results} serve as a good baseline for future works as beating them would require faster perception, increased agility, and improved planning and control algorithms.
\begin{figure}
\centering
\includegraphics{fig/results/frame_by_frame.png}
\caption{Outline of ESTHER in action. Orange box highlights ball in image.}
\label{fig:frame-by-frame}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[height = 4cm]{fig/results/court_near_and_slow_wc_move/reach_hist_4.png}
\caption{}
\label{fig:system-reachibility}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[height = 4cm]{fig/results/court_near_and_slow/interception_time_hist_2.png}
\caption{}
\label{fig:intercept_time_hist}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[height = 4cm]{fig/results/court_near_and_slow/ball_hit_scatter_2.png}
\caption{}
\label{fig:ball_hit_scatter}
\end{subfigure}
\vspace{5pt}
\caption{Fig.~\ref{fig:system-reachibility} shows a histogram of successful returns as a function of the distance the wheelchair needed to move to hit the ball. Fig.~\ref{fig:intercept_time_hist} shows a histogram for times of ball arrival at interception point from the time of the first detection with the court setup. Fig.~\ref{fig:ball_hit_scatter} shows a scatter plot of ball positions in the interception plane with the court setup.}
\label{fig:combined_new}
\end{figure*}
\section{Discussion and Key Challenges}
\label{sec:discussion}
We have experimentally demonstrated that ESTHER is fast, agile, and powerful enough to successfully hit ground strokes on a regulation tennis court. Our vision for the system is to be able to rally and play with a human player. To realize that goal, we lay down a set of key challenges to address.
Section~\ref{subsec:reachability} showcased that ESTHER is able to successfully return balls that require the wheelchair to move up to \SI{2}{\meter} given that the movement starts right after the ball is launched. However, as discussed in Section~\ref{subsec:sensing-state-estimation-capabilities}, our vision system needs some time to converge to an accurate prediction. To get an accurate, early estimate of the ball's pose, we will need a higher fps vision setup that utilizes cameras to triangulate a ball from multiple angles~\cite{jonas-tabletenniskukaspin2020}. To make the ball's predicted trajectory more precise, the system should also be able to estimate the spin of the ball. Directly estimating spin on the ball is a hard problem~\cite{jonas-tabletenniskukaspin2020}. However, the ball's spin can be estimated indirectly using methods such as trajectory fitting~\cite{jonas-tabletenniskukaspin2020} or an AEKF~\cite{AdaptiveEKF_mobilerobot}. An interesting research direction would be to accurately estimate the ball's state~\cite{pose_trajectoryprediction} by observing how the ball was hit on the previous shot, e.g., a racket moving from low to high indicates topspin, while a racket moving from high to low indicates backspin. The ultimate challenge is to perform this sensing with only onboard mechanisms.
We currently plan the motion of the wheelchair and the arm separately. However, the two components are dynamically coupled, and an ideal planner should consider both simultaneously. In future work, our system serves as an ideal test-bed for the development of kinodynamic motion planners for athletic mobile manipulators as current dynamical planners are not good enough for agile movement~\cite{ros_giralt_2021}. Such a planner would also help us reach higher racket head velocities, as we would be able to utilize the wheelchair's established potential to rotate at high angular speeds, much as professional human players rotate their trunk while hitting a ground stroke. Moreover, the planner will need to be safe and fast for live human-robot play.
\section{Applications and Future Work}
\label{sec:applications}
The system opens up numerous exciting opportunities for future research such as learning strokes from expert demonstrations via imitation \& reinforcement learning, incorporating game knowledge to return at strategic points, and effective human-robot teaming for playing doubles. We briefly describe our vision for some of these exciting future directions:
\textbf{Interactive Robot Learning} --
Interactive Robot Learning (i.e., Learning from Demonstration (LfD)) focuses on extracting skills from expert demonstrations or instructions for how to perform a task.
It is quite challenging to manually encode various strategies, tactics, and low-level stroke styles for a tennis-playing robot, and ESTHER serves as a great platform to develop and deploy LfD algorithms that learn from expert gameplay. Prior work has shown how kinesthetic teaching can be employed to teach robots various styles of playing table tennis~\cite{chen-irltabletennis2020,gomez-promptabletennis2016}, but these techniques are demonstrated on arms that are mounted to a stationary base and require much lower racket head speeds. Therefore, we challenge the LfD community to leverage our platform to explore a more challenging robotics domain.
\textbf{Human-Robot Teaming} --
Having agile robots that collaborate with humans to perform well on a shared task is a challenging problem as it requires an accurate perception of human intention, clear communication of robot intention, and collaborative planning and execution. We believe that our system is a great benchmark to study these problems as doubles play in tennis often has many collaborative strategies that are based on a good rapport that players develop over many practice sessions. Having robots similarly collaborate with humans and build trust and personalized strategies over time in a highly competitive and dynamic environment is an important research problem more broadly~\cite{hri_survey1}.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present ESTHER, a fully autonomous system built with general-purpose robotic hardware. Our work contributes to the existing research in three main ways: introduction of a low-cost, fast, decentralized perception system that can be easily set up anywhere for accurate ball tracking; design of a chain drive system that can be used to motorize a regulation sports wheelchair into an agile mobile base for any robotic manipulator; and planning and control of an agile mobile manipulator to exhibit athletic behaviors. We experimentally determined the capabilities of individual subsystems and verified that the system is able to perform at a human scale. We also demonstrated that our system can successfully hit ground strokes on a regulation tennis court. Avenues of future research include improvements to the perception system, development of kinodynamic planners for athletic mobile manipulators, learning game strategies, court positioning, and strokes from human play, as well as collaboration with a human partner to successfully play doubles tennis.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Competitive sports video understanding has become a hot research topic in the computer vision community. As one of the key techniques of understanding sports action, Action Quality Assessment (AQA) has attracted growing attention in recent years. In the 2020 Tokyo Olympic Games, the AI scoring system in gymnastics acted as a judge for assessing the athlete's score performance and providing feedback for improving the athlete's competitive skill, which reduces the controversies in many subjective scoring events, e.g., diving and gymnastics.
AQA is a task to assess how well an action is performed by estimating a score after analyzing the performance.
Unlike conventional action recognition \cite{schuldt2004recognizing,wang2013action,oneata2013action,ji20123d,simonyan2014two,tran2015learning,feichtenhofer2016convolutional,carreira2017quo,varol2017long,wang2018non,li2019action,yang2020temporal,shao2020finegym} and detection \cite{yeung2016end,montes2016temporal,zhao2017temporal,lin2019bmn}, AQA is more challenging since an action can be recognized from just one or a few images while the judges need to go through the entire action sequence to assess the action performance. Most existing AQA methods \cite{zhang2014relative,pirsiavash2014assessing,parmar2017learning,li2018end,doughty2018s,parmar2019action,mtlaqa,xu2019learning,pan2019action,tang2020uncertainty,bertasius2017baller,doughty2019pros,gattupalli2017cognilearn} regress on the deep features of videos to learn the diverse scores, which is difficult for actions with small discrepancies that occur in similar backgrounds. Diving events are usually filmed in similar environments (i.e., aquatics centers), and all the videos contain the same action routine, that is, ``take-off'', ``flight'', and ``entry'', while the subtle differences mainly appear in the numbers of somersaults and twists, the flight positions, as well as their executed qualities.
Capturing these subtle differences requires the AQA method not only to parse the steps of diving action but also to explicitly quantify the executed qualities of these steps. If we judge the action quality only via regressing a score on the deep features of the whole video, it would be a confusing and non-transparent assessment of the action quality, since we cannot explain the final score via analyzing the performances of action steps.
Cognitive science \cite{schmidt1976understanding,meyer2011assessing} shows that humans learn to assess the action quality by introducing fine-grained annotations and reliable comparisons. Inspired by this, we introduce these two concepts into AQA, which is challenging since existing AQA datasets lack fine-grained annotations of action procedures and cannot make reliable comparisons. If we judge the action quality using coarse-grained labels, we cannot trace the final action quality score back to a convincing explanation. It is urgent to construct a fine-grained sports video dataset to encourage a more transparent and reliable scoring approach for AQA.
To address these challenges, we construct a new competitive sports video dataset, ``FineDiving'' (short for Fine-grained Diving), focusing on various diving events, which is the first fine-grained sports video dataset for assessing action quality. FineDiving has several characteristics (Figure \ref{top}, the top half): (1) Two-level semantic structure. All videos are annotated with semantic labels at two levels, namely, action type and sub-action type, where a combination of the presented sub-action types produces an action type. (2) Two-level temporal structure. The temporal boundaries of actions in each video are annotated, where each action is manually decomposed into consecutive steps according to a well-defined lexicon. (3) Official action scores, judges' scores, and difficulty degrees are collected from FINA.
We further propose a procedure-aware approach for assessing action quality on FineDiving (Figure \ref{top}, the bottom half), inspired by the recently proposed CoRe \cite{yu2021group}. The proposed framework learns procedure-aware embeddings with a new Temporal Segmentation Attention module (referred to as TSA) to predict accurate scores with better interpretability. Specifically, TSA first parses the action into consecutive steps with semantic and temporal correspondences, which serve procedure-aware cross-attention learning. The consecutive steps of the query action serve as queries and the steps of the exemplar action serve as keys and values. Then TSA inputs pairwise query and exemplar steps into the transformer and obtains procedure-aware embeddings via cross-attention learning. Finally, TSA performs fine-grained contrastive regression on the procedure-aware embeddings to quantify step-wise quality differences between query and exemplar, and predict the action score.
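
At its core, the procedure-aware attention is a cross-attention between step-level features; the following minimal single-head sketch (with random, untrained projections) illustrates the computation rather than the full learned TSA module:

\begin{verbatim}
import numpy as np

def procedure_aware_cross_attention(query_steps, exemplar_steps, d_k):
    """Scaled dot-product cross-attention: query-action steps attend to
    exemplar-action steps (keys/values). Inputs are (num_steps, feat_dim)."""
    rng = np.random.default_rng(0)
    Wq = rng.standard_normal((query_steps.shape[-1], d_k))
    Wk = rng.standard_normal((exemplar_steps.shape[-1], d_k))
    Wv = rng.standard_normal((exemplar_steps.shape[-1], d_k))
    Q, K, V = query_steps @ Wq, exemplar_steps @ Wk, exemplar_steps @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                  # step-to-step affinities
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)         # softmax over exemplar steps
    return attn @ V                                   # procedure-aware embeddings
\end{verbatim}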
The contributions of this work are summarized as:
(1) We construct the first fine-grained sports video dataset for action quality assessment, which contains rich semantics and diverse temporal structures. (2) We propose a procedure-aware approach for action quality assessment, which is learned by a new temporal segmentation attention module and quantifies quality differences between query and exemplar in a fine-grained way. (3) Extensive experiments illustrate that our procedure-aware approach obtains substantial improvements and achieves the state-of-the-art.
\section{Related Work}
\noindent\textbf{Sports Video Datasets.}
Action understanding in sports videos is a hot research topic in the computer vision community, which is more challenging than understanding actions in the general video datasets, e.g., HMDB \cite{kuehne2011hmdb}, UCF-101 \cite{soomro2012ucf101}, Kinetics \cite{carreira2017quo}, AVA \cite{gu2018ava}, ActivityNet \cite{caba2015activitynet}, THUMOS \cite{THUMOS15}, Moments in Time \cite{monfortmoments} or HACS \cite{zhao2019hacs}, due to the low inter-class variance in motions and environments.
Competitive sports video action understanding relies heavily on available sports datasets.
Early on, Niebles \textit{et al.} \cite{niebles2010modeling} introduced the Olympic sports dataset for modeling motions. Karpathy \textit{et al.} \cite{karpathy2014large} provided the large-scale Sports1M dataset and achieved significant performance gains over strong feature-based baselines. Pirsiavash \textit{et al.} \cite{pirsiavash2014assessing} released the first Olympic judging dataset comprising Diving and Figure Skating. Parmar \textit{et al.} \cite{parmar2017learning} released a new dataset including Diving, Gymnastic Vault, and Figure Skating to better support AQA. Bertasius \textit{et al.} \cite{bertasius2017baller} proposed a first-person basketball dataset for assessing the performance of basketball players. Li \textit{et al.} \cite{li2018resound} built the Diving48 dataset annotated by combinations of 4 attributes (i.e., back, somersault, twist, and free). Xu \textit{et al.} \cite{xu2019learning} extended the existing MIT-Figure Skating dataset to 500 samples. Parmar \textit{et al.} \cite{mtlaqa,parmar2019action} presented the MTL-AQA dataset that exploits multi-task networks to assess the motion. Shao \textit{et al.} \cite{shao2020finegym} proposed the FineGym dataset that provides coarse-to-fine annotations both temporally and semantically for facilitating action recognition.
Recently, Li \textit{et al.} \cite{li2021multisports} developed a large-scale dataset MultiSports with fine-grained action categories with dense annotations in both spatial and temporal domains for spatio-temporal action detection. Hong \textit{et al.} \cite{hong2021video} provided a figure skating dataset VPD for facilitating fine-grained sports action understanding. Chen \textit{et al.} \cite{chen2021sportscap} proposed the SMART dataset with fine-grained semantic labels, 2D and 3D annotated poses, and assessment information.
Unlike the above datasets, FineDiving is the first fine-grained sports video dataset for AQA, which guides the model to understand action procedures via detailed annotations towards more reliable action quality assessment.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{semantic.pdf}
\caption{Two-level semantic structure. Action type indicates an action routine described by a dive number. Sub-action type is a component of action type, where each combination of the sub-action types can produce an action type and different action types can share the same sub-action type. The green branch denotes different kinds of take-offs. The purple, yellow, and red branches respectively represent the somersaults with three positions (i.e., straight, pike, and tuck) in the flight, where each branch contains different somersault turns. The orange branch indicates different twist turns interspersed in the process of somersaults. The light blue denotes entering the water. (Best viewed in color.)}
\label{semantic}
\vspace{-10pt}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{temporal.pdf}
\caption{Two-level temporal structure. The action-level labels describe temporal boundaries of valid action routines, while the step-level labels provide the starting frames of consecutive steps in the procedure. (Best viewed in color.)}
\label{temporal}
\vspace{-10pt}
\end{figure}
\noindent\textbf{Action Quality Assessment.}
Most existing methods formulate AQA as a regression over various video representations supervised by judges' scores. In early pioneering work, Pirsiavash \textit{et al.} \cite{pirsiavash2014assessing} first formulated AQA and proposed to map pose-based features, spatio-temporal interest points, and hierarchical convolutional features to scores using SVR. Parmar \textit{et al.} \cite{parmar2017learning} utilized spatio-temporal features to estimate scores and demonstrated their effectiveness on actions like Diving, Gymnastic Vault, and Figure Skating. Bertasius \textit{et al.} \cite{bertasius2017baller} proposed a learning-based approach to estimate the motion, behavior, and performance of basketball players. Li \textit{et al.} \cite{li2018end} combined network modifications with a ranking loss to improve AQA performance. Doughty \textit{et al.} \cite{doughty2019pros} assessed the relative overall level of skill in a long video based on video-level pairwise annotations via high-skill and low-skill attention modules. Parmar \textit{et al.} \cite{parmar2019action} introduced shared concepts of action quality among actions. Parmar \textit{et al.} \cite{mtlaqa} further reformulated AQA as a multi-task learning problem in an end-to-end fashion. Besides, Xu \textit{et al.} \cite{xu2019learning} proposed self-attentive and multi-scale skip convolutional LSTMs to aggregate information from individual clips, achieving the best performance on the assessment of Figure Skating samples. Pan \textit{et al.} \cite{pan2019action} assessed the performance of actions visually from videos via graph-based joint relation modeling.
Recently, Tang \textit{et al.} \cite{tang2020uncertainty} proposed to reduce the underlying ambiguity of the action score labels from human judges via an uncertainty-aware score distribution learning (USDL). Yu \textit{et al.} \cite{yu2021group} constructed a contrastive regression framework (CoRe) based on the video-level features to rank videos and predict accurate scores.
Different from previous methods, our approach understands action procedures and mines procedure-aware attention between query and exemplar to achieve a more transparent action assessment.
\section{The FineDiving Dataset}\label{findiving}
In this section, we introduce FineDiving, a new fine-grained competitive sports video dataset. We describe its construction and statistics below.
\subsection{Dataset Construction}
\noindent\textbf{Collection.}
We search for diving events in the Olympics, World Cup, World Championships, and European Aquatics Championships on YouTube, and download high-resolution competition videos. Each official video provides rich content, including diving records of all athletes and slow playbacks from different viewpoints.
\noindent\textbf{Lexicon.}
We construct a fine-grained video dataset organized by both semantic and temporal structures, where each structure contains two-level annotations, as shown in Figures \ref{semantic} and \ref{temporal}. Herein, we employ three professional athletes from the diving association, whose domain knowledge helps us construct a lexicon for the subsequent annotation.
For the semantic structure in Figure \ref{semantic}, the action-level labels describe the action types of athletes and the step-level labels depict the sub-action types of consecutive steps in the procedure, where adjacent steps in each action procedure belong to different sub-action types. A combination of sub-action types produces an action type. For instance, for the action type ``5255B'', the steps belonging to the sub-action types ``Back'', ``2.5 Somersaults Pike'', and ``2.5 Twists'' are executed sequentially.
In the temporal structure, the action-level labels locate the temporal boundary of a complete action instance performed by an athlete. During this annotation process, we discard all incomplete action instances and filter out the slow playbacks. The step-level labels are the starting frames of consecutive steps in the action procedure. For example, for an action belonging to the type ``5152B'', the starting frames of consecutive steps are 18930, 18943, 18957, 18967, and 18978, as shown in Figure \ref{temporal}.
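To make the two-level annotation concrete, the following minimal sketch shows how one annotated ``5152B'' instance could be represented in memory. The field names, the end frame, and the score are hypothetical; only the action type, the step names, and the step starting frames come from the description above, and the actual released format may differ.
\begin{verbatim}
# Illustrative Python record for one annotated instance of type "5152B".
annotation = {
    "action_type": "5152B",               # coarse-grained label (dive number)
    "score": 58.5,                        # hypothetical judged score
    "temporal_boundary": (18930, 18990),  # start frame / hypothetical end frame
    "steps": [                            # fine-grained step-level labels
        {"start_frame": 18930, "sub_action": "Forward"},
        {"start_frame": 18943, "sub_action": "2.5 Soms.Pike"},
        {"start_frame": 18957, "sub_action": "1 Twist"},
        {"start_frame": 18967, "sub_action": "2.5 Soms.Pike"},
        {"start_frame": 18978, "sub_action": "Entry"},
    ],
}
\end{verbatim}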
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{statistic.pdf}
\caption{Statistics of FineDiving. (a) The action-type distribution of action instances. (b) The sub-action type distribution of action instances. (c) The difficulty degree distribution of action instances.}
\label{statistic}
\vspace{-10pt}
\end{figure}
\noindent\textbf{Annotation.}
Given a raw diving video, annotators use the defined lexicon to label each action and its procedure in two stages, from coarse- to fine-grained. The coarse-grained stage labels the action type of each action instance and its temporal boundary, accompanied by the official score. The fine-grained stage labels the sub-action type of each step in the action procedure and records the starting frame of each step. Both stages adopt a cross-validating labeling method. Specifically, we employ six workers with prior knowledge of the diving domain and divide the data into six non-overlapping parts. The annotation results of one worker are checked and adjusted by another, so that all annotations are double-checked. To improve annotation efficiency, we utilize an effective toolbox \cite{tang2019coin} in the fine-grained stage. Under this pipeline, the whole annotation process takes about 120 hours.
\subsection{Dataset Statistics}
The FineDiving dataset consists of 3000 video samples, covering 52 action types, 29 sub-action types, and 23 difficulty degree types, which are shown in Figure \ref{statistic}. These statistics can help design competition strategies and bring athletes' strengths into full play. Table \ref{table:dataset} reports more detailed information on our dataset and compares it with existing AQA datasets as well as other fine-grained sports datasets. Our dataset differs from existing AQA datasets in annotation type and scale. For instance, MIT-Dive, UNLV, and AQA-7-Dive only provide action scores, while our dataset provides fine-grained annotations including action types, sub-action types, coarse- and fine-grained temporal boundaries, and action scores. MTL-AQA provides coarse-grained annotations, i.e., action types and temporal boundaries. Other fine-grained sports datasets cannot be used for assessing action quality due to the lack of action scores. FineDiving is thus the first fine-grained sports video dataset for the AQA task, filling the void of fine-grained annotations in AQA.
\begin{table}[tb]
\renewcommand\tabcolsep{0.5pt}
\footnotesize
\centering
\caption{Comparison of existing sports video datasets and FineDiving. \textit{Score} indicates the score annotations; \textit{Step} is fine-grained class and temporal boundary; \textit{Action} is coarse-grained class and temporal boundary; \textit{Tube} contains fine-grained class, temporal boundary, and spatial localization.}
\vspace{-5pt}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Localization & \#Samples & \#Events & \#Act. Clas. & Avg.Dur. & Anno.Type \\
\hline
TAPOS\cite{shao2020intra} & 16294 & / & 21 & 9.4s & Step \\
FineGym\cite{shao2020finegym} & 32697 & 10 & 530 & 1.7s& Step \\
MultiSports\cite{li2021multisports} & 37701 & 247 & 66 & 1.0s & Tube \\
\hline
Assessment & \#Samples & \#Events & \#Sub-act.Typ. & Avg.Dur. & Anno.Type \\
\hline
MIT Dive\cite{pirsiavash2014assessing} & 159 & 1 & / & 6.0s & Score \\
UNLV Dive\cite{parmar2017learning} & 370 & 1 & / & 3.8s & Score \\
AQA-7-Dive\cite{parmar2019action} & 549 & 6 & / & 4.1s & Score \\
MTL-AQA\cite{mtlaqa} & 1412 & 16 & / & 4.1s & Action,Score\\
\hline
\textbf{FineDiving} & 3000 & 30 & 29 & 4.2s & Step,Score\\
\hline
\end{tabular}
\label{table:dataset}
\vspace{-10pt}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=0.96\linewidth]{framework.pdf}
\caption{The architecture of the proposed procedure-aware action quality assessment. Given a pairwise query and exemplar instances, we extract spatial-temporal visual features with I3D and propose a Temporal Segmentation Attention module to assess action quality via successively accomplishing procedure segmentation, procedure-aware cross-attention learning, and fine-grained contrastive regression. The temporal segmentation attention module is supervised by step transition labels and action score labels, which guides the model to focus on exemplar regions that are consistent with the query step and quantify their differences to predict reliable action scores.}
\label{framework}
\vspace{-10pt}
\end{figure*}
\section{Approach}\label{our_approach}
In this section, we systematically introduce our approach, whose main idea is to construct a new temporal segmentation attention module for reliable and transparent action quality assessment. The overall architecture of our approach is illustrated in Figure \ref{framework}.
\subsection{Problem Formulation}
Given pairwise query $X$ and exemplar $Z$ instances, our procedure-aware approach is formulated as a regression problem that predicts the action quality score of the query video via learning a new Temporal Segmentation Attention module (abbreviated as TSA). It can be represented as:
\begin{equation}
\hat{y}_X=\mathcal{P}(X,Z|\Theta)+y_Z
\end{equation}
where $\mathcal{P}\!=\!\{\mathcal{F},\mathcal{T}\}$ denotes the overall framework containing I3D \cite{carreira2017quo} backbone $\mathcal{F}$ and TSA module $\mathcal{T}$; $\Theta$ indicates the learnable parameters of $\mathcal{P}$; $\hat{y}_X$ is the predicted score of $X$ and $y_Z$ is the ground-truth score of $Z$.
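As a minimal sketch of this formulation (not the released implementation), the prediction can be computed as follows, assuming \texttt{backbone} and \texttt{tsa} are callables standing in for $\mathcal{F}$ and $\mathcal{T}$:
\begin{verbatim}
def predict_query_score(backbone, tsa, x_query, z_exemplar, y_exemplar):
    """Predict the query score as the exemplar score plus a learned offset."""
    feat_x = backbone(x_query)      # I3D features of the query video X
    feat_z = backbone(z_exemplar)   # I3D features of the exemplar video Z
    delta = tsa(feat_x, feat_z)     # relative score predicted by the TSA module
    return delta + y_exemplar       # hat{y}_X = P(X, Z | Theta) + y_Z
\end{verbatim}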
\subsection{Temporal Segmentation Attention}
There are three components in TSA, that is, procedure segmentation, procedure-aware cross-attention learning, and fine-grained contrastive regression.
\noindent\textbf{Procedure Segmentation.} To parse pairwise query and exemplar actions into consecutive steps with semantic and temporal correspondences, we first propose to segment the action procedure by identifying the transitions in time at which the action switches from one sub-action type to another.
Suppose that $L$ step transitions need to be identified; the procedure segmentation component $\mathcal{S}$ predicts the probability of each step transition occurring at the $t$-th frame by computing:
\begin{align}
&[\hat{p}_1,\cdots,\hat{p}_L]=\mathcal{S}(\mathcal{F}(X)), \label{PS}\\
&\hat{t}_k=\underset{\frac{T}{L}(k-1)< t\leq\frac{T}{L}k}{\arg\max}\ \hat{p}_k(t) \label{pred_t}
\end{align}
where $\hat{p}_k\in\mathbb{R}^{T}$ is the predicted probability distribution of the $k$-th step transition; $\hat{p}_k(t)$ denotes the predicted probability of the $k$-th step transiting at the $t$-th frame; $\hat{t}_k$ is the prediction of the $k$-th step transition.
In Equation (\ref{PS}), the component $\mathcal{S}$ is composed of two blocks, namely ``down-up'' ($b_1$) and ``linear'' ($b_2$).
Specifically, the $b_1$ block consists of four ``down-$m$-up-$n$'' sub-blocks, where $m$ and $n$ denote the specified output dimensions along the spatial and temporal axes, respectively. Each sub-block contains two consecutive convolution layers and one max-pooling layer. The $b_1$ block increases the temporal length of the I3D features $\mathcal{F}(X)$ using the convolution layers and reduces their spatial dimension using the max-pooling layers, so that deeper layers capture both richer spatial and longer temporal views for procedure segmentation. The ``linear'' block further encodes the output of the $b_1$ block to generate the $L$ probability distributions $\{\hat{p}_k\}_{k=1}^L$ of the $L$ step transitions in an action procedure. Besides, the constraint in Equation (\ref{pred_t}) ensures that the predicted transitions are ordered, i.e., $\hat{t}_1\leq\cdots\leq\hat{t}_L$.
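A minimal sketch of the constrained decoding in Equation (\ref{pred_t}) is given below (with 0-indexed frames); it only illustrates the per-bin $\arg\max$ and is not the authors' released code.
\begin{verbatim}
import torch

def decode_step_transitions(probs):
    """probs: tensor of shape (L, T) with per-frame transition probabilities.

    The k-th transition is searched only inside its own temporal bin of
    length T/L, which guarantees ordered predictions t_1 <= ... <= t_L."""
    L, T = probs.shape
    bin_size = T // L
    transitions = []
    for k in range(L):
        lo, hi = k * bin_size, (k + 1) * bin_size
        transitions.append(lo + int(torch.argmax(probs[k, lo:hi])))
    return transitions
\end{verbatim}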
Given the ground-truth of the $k$-th step transition, i.e., $t_k$, it can be encoded as a binary distribution $p_k$, where $p_k(t_k)\!=\!1$ and $p_k(t_s)|_{s\neq k}\!=\!0$. With the prediction $\hat{p}_k$ and ground-truth $p_k$, the procedure segmentation problem can be converted to a dense classification problem, which predicts the probability of whether each frame is the $k$-th step transition.
We calculate the binary cross-entropy loss between $\hat{p}_k$ and $p_k$ to optimize $\mathcal{S}$ and find the frame with the greatest probability of being the $k$-th step transition. The objective function can be written as:
\begin{equation}
\begin{split}
\mathcal{L}_{\text{BCE}}(\hat{p}_k, p_k)&=-\textstyle\sum_t(p_k(t)\log \hat{p}_k(t) \\
&+(1-p_k(t))\log (1-\hat{p}_k(t))).
\end{split}
\end{equation}
Minimizing $\mathcal{L}_{\text{BCE}}$ makes distributions $\hat{p}_k$ and $p_k$ closer.
\vspace{1pt}
\noindent\textbf{Procedure-aware Cross-Attention.}
Through procedure segmentation, we obtain $L\!+\!1$ consecutive steps with semantic and temporal correspondences in each action procedure based on $L$ step transition predictions. We leverage the sequence-to-sequence representation ability of the transformer for learning procedure-aware embedding of pairwise query and exemplar steps via cross-attention.
Based on $\mathcal{S}$, the query and exemplar action instances are divided into $L\!+\!1$ consecutive steps, denoted as $\{S^X_l,S^Z_l\}_{l=1}^{L+1}$. Considering that the lengths of $S^X_l$ and $S^Z_l$ may differ, we fix them to a given size via downsampling or upsampling, so that the dimensions of ``query'' and ``key'' are the same in the attention model. We then propose procedure-aware cross-attention learning to discover the spatial and temporal correspondences between the pairwise steps $S^X_l$ and $S^Z_l$ and to generate new features for both of them. The pairwise steps complement each other and guide the model to focus on the regions in $S^Z_l$ that are consistent with $S^X_l$. Here, $S^Z_l$ preserves some spatial information from the feature map (intuitively represented by patch extraction in Figure \ref{framework}). The above procedure-aware cross-attention learning can be represented as:
\begin{align}
S_l^{r'}&\!=\!\text{MCA}(\text{LN}(S_l^{r-1},S_l^Z))\!+\!S_l^{r-1},&r\!=\!1\!\cdots\!R\\
S_l^r&\!=\!\text{MLP}(\text{LN}(S_l^{r'}))\!+\!S_l^{r'},&r\!=\!1\!\cdots\!R
\end{align}
where $S_l^0\!=\!S_l^X$ and $S_l\!=\!S_l^R$. The transformer decoder \cite{dosovitskiy2020image} consists of alternating Multi-head Cross-Attention (MCA) and MLP blocks, where LayerNorm (LN) and residual connections are applied before and after every block, respectively, and the MLP block contains two layers with a GELU non-linearity.
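For clarity, one pre-norm cross-attention layer described above can be sketched in PyTorch as follows; the hidden size, number of heads, and MLP ratio are assumptions rather than the authors' exact configuration.
\begin{verbatim}
import torch.nn as nn

class ProcedureAwareCrossAttention(nn.Module):
    """One decoder layer: the query step attends to the exemplar step
    (multi-head cross-attention), followed by an MLP, both with
    pre-LayerNorm and residual connections."""

    def __init__(self, dim=512, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.mca = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim))

    def forward(self, s_query, s_exemplar):
        # Queries come from the query step S_l^{r-1}; keys and values
        # come from the exemplar step S_l^Z.
        q = self.norm_q(s_query)
        kv = self.norm_kv(s_exemplar)
        attn, _ = self.mca(q, kv, kv)
        s = s_query + attn
        return s + self.mlp(self.norm2(s))
\end{verbatim}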
\noindent\textbf{Fine-grained Contrastive Regression.}
Based on the learned procedure-aware embedding $S_l$, we quantify the step deviations between query and exemplar by learning the relative scores of pairwise steps, which guides the TSA module to assess action quality via learning fine-grained contrastive regression component $\mathcal{R}$.
It is formulated as:
\begin{equation}
\hat{y}_X = \frac{1}{L+1}\sum_{l=1}^{L+1}\mathcal{R}(S_l) + y_Z
\end{equation}
where $y_Z$ is the exemplar score label from the training set.
We optimize $\mathcal{R}$ by computing the mean squared error between the ground truth $y_X$ and prediction $\hat{y}_X$, that is:
\begin{equation}
\mathcal{L}_{\text{MSE}}=\|\hat{y}_X-y_X\|^2.
\end{equation}
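A compact sketch of this component is given below; the internal pooling of $\mathcal{R}$ and the exact tensor shapes are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fine_grained_contrastive_regression(reg_head, steps, y_exemplar):
    """steps: list of L+1 procedure-aware step embeddings S_l.
    reg_head: regression component R mapping one step embedding to a scalar
    relative score (its internal pooling is assumed here)."""
    relative = torch.stack([reg_head(s) for s in steps])  # shape (L+1,)
    return relative.mean() + y_exemplar

# Training supervises the predicted query score with an MSE loss, e.g.:
#   loss = F.mse_loss(y_hat_query, y_query)
\end{verbatim}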
\subsection{Optimization and Inference}
During training, for pairwise query and exemplar $(X,Z)$ in the training set with step transition labels and action score labels,
the final objective function for the video pair is:
\begin{equation}
J=\sum_{k=1}^L\mathcal{L}_{\text{BCE}}(\hat{p}_k, p_k)+\mathcal{L}_{\text{MSE}}.
\end{equation}
During testing, for a test video $X_{\text{test}}$, we adopt a multi-exemplar voting strategy \cite{yu2021group} to select $M$ exemplars from the training set and then construct $M$ video pairs $\{(X_{\text{test}},Z_j)\}_{j=1}^M$ with exemplar score labels $\{y_{Z_j}\}_{j=1}^M$.
The process of multi-exemplar voting can be written as:
\begin{equation}
\hat{y}_{X_{\text{test}}}=\frac{1}{M}\textstyle\sum_{j=1}^M(\mathcal{P}(X_{\text{test}},Z_j|\Theta)+y_{Z_j}).
\end{equation}
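At inference time, the voting strategy can be sketched as follows, where \texttt{model} wraps $\mathcal{P}(\cdot,\cdot|\Theta)$ and \texttt{exemplars} is a list of $(Z_j, y_{Z_j})$ pairs drawn from the training set:
\begin{verbatim}
def multi_exemplar_voting(model, x_test, exemplars):
    """Average the per-exemplar predictions over the M selected exemplars."""
    preds = [model(x_test, z_j) + y_zj for (z_j, y_zj) in exemplars]
    return sum(preds) / len(preds)
\end{verbatim}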
\section{Experiments}
\subsection{Evaluation Metrics}
We comprehensively evaluate our approach on two aspects, namely procedure segmentation and action quality assessment, and compute the following three metrics.
\noindent\textbf{Average Intersection over Union.}
When the procedure segmentation is finished, a set of predicted step transitions are obtained for each video sample.
We rewrite these step transition predictions as a set of 1D bounding boxes, denoted as $\mathcal{B}_p\!=\!\{\hat{t}_{k+1}\!-\!\hat{t}_k\}_{k=1}^{L-1}$. Suppose that the ground-truth bounding boxes can be written as $\mathcal{B}_g\!=\!\{t_{k+1}\!-\!t_k\}_{k=1}^{L-1}$; we calculate the average Intersection over Union (AIoU) between the two bounding boxes (i.e., $\hat{t}_{k+1}\!-\!\hat{t}_k$ and $t_{k+1}\!-\!t_k$) and count a prediction as correct if IoU$_i$ is larger than a certain threshold $d$.
We describe the above operation as a metric AIoU$@d$ for evaluating our approach:
\begin{align}
\text{AIoU}@d&=\frac{1}{N}\textstyle\sum_{i=1}^N\mathcal{I}(\text{IoU}_i\geq d)\\
\text{IoU}_i&=|\mathcal{B}_p\cap\mathcal{B}_g|/|\mathcal{B}_p\cup\mathcal{B}_g|
\end{align}
where $\text{IoU}_i$ indicates the Intersection over Union for the $i$-th sample and $\mathcal{I}(\cdot)$ is an indicator that outputs $1$ if $\text{IoU}_i\!\geq\!d$ and $0$ otherwise. The higher the AIoU$@d$, the better the procedure segmentation.
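Under our reading of this metric, AIoU$@d$ can be computed as in the following sketch, where consecutive transition frames define the 1D segments that are compared:
\begin{verbatim}
import numpy as np

def interval_iou(pred, gt):
    """IoU between two 1D intervals given as (start, end) frame indices."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def aiou_at_d(pred_transitions, gt_transitions, d):
    """pred_transitions, gt_transitions: (N, L) arrays of per-video transition
    frames; consecutive transitions define the compared 1D segments."""
    hits = []
    for p, g in zip(pred_transitions, gt_transitions):
        ious = [interval_iou((p[k], p[k + 1]), (g[k], g[k + 1]))
                for k in range(len(p) - 1)]
        hits.append(float(np.mean(ious) >= d))
    return float(np.mean(hits))
\end{verbatim}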
\noindent\textbf{Spearman's rank correlation.} Following the previous work \cite{parmar2019action,pan2019action,mtlaqa,tang2020uncertainty,yu2021group}, we adopt Spearman's rank correlation ($\rho$) to measure the AQA performance of our approach. $\rho$ is defined as:
\begin{equation}
\rho=\frac{\textstyle\sum_{i=1}^N(y_i-\bar{y})(\hat{y}_i-\bar{\hat{y}})}{\sqrt{\textstyle\sum_{i=1}^N(y_i-\bar{y})^2\textstyle\sum_{i=1}^N(\hat{y}_i-\bar{\hat{y}})^2}}
\end{equation}
where $\boldsymbol{y}$ and $\boldsymbol{\hat{y}}$ denote the rankings of the two series, respectively. The higher the $\rho$, the better the performance.
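In practice, $\rho$ can be obtained directly from SciPy, as in this small sketch:
\begin{verbatim}
from scipy.stats import spearmanr

def spearman_correlation(y_true, y_pred):
    """Spearman's rank correlation between ground-truth and predicted scores."""
    rho, _ = spearmanr(y_true, y_pred)
    return rho
\end{verbatim}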
\noindent\textbf{Relative $\ell_2$-distance.} Following \cite{yu2021group}, we also utilize relative $\ell_2$-distance (R-$\ell_2$) to measure the AQA performance of our approach. Given the highest and lowest scores of an action, namely $y_{\max}$ and $y_{\min}$, R-$\ell_2$ can be defined as:
\begin{equation}
\text{R-}\ell_2=\frac{1}{N}\sum_{i=1}^N
\left(\frac{|y_i-\hat{y}_i|}{y_{\max}-y_{\min}}\right)
\end{equation}
where $y_i$ and $\hat{y}_i$ indicate the ground-truth and predicted scores for the $i$-th sample, respectively. The lower the R-$\ell_2$, the better the performance.
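A direct implementation of this metric is sketched below:
\begin{verbatim}
import numpy as np

def relative_l2_distance(y_true, y_pred, y_max, y_min):
    """Mean absolute score error normalized by the score range of the action."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred) / (y_max - y_min)))
\end{verbatim}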
\subsection{Implementation Details}
\noindent\textbf{Experiment Settings.}
We adopted the I3D model pre-trained on the Kinetics \cite{carreira2017quo} dataset as $\mathcal{F}$ with an initial learning rate of 10$^{-4}$, and set the initial learning rate of $\mathcal{T}$ to 10$^{-3}$. We utilized the Adam \cite{kingma2014adam} optimizer with weight decay set to 0. Similar to \cite{tang2020uncertainty,yu2021group}, we extracted 96 frames for each video, split them into 9 snippets, and then fed them into I3D, where each snippet contains 16 continuous frames and consecutive snippets are offset by a stride of 10 frames. Following the experiment settings in \cite{mtlaqa,tang2020uncertainty,yu2021group}, we selected 75 percent of the samples for training and 25 percent for testing in all experiments. We also specified the network parameters of the two blocks $b_1$ and $b_2$ in $\mathcal{S}$. In the block $b_1$, ($m,n$) in the four sub-blocks are set to (1024, 12), (512, 24), (256, 48), and (128, 96), respectively. The block $b_2$ is a three-layer MLP.
Furthermore, we set $M$ as 10 in the multi-exemplar voting strategy and set the number of step transitions $L$ as 2. More details about the criterion of selecting exemplars and the number of step transitions can be found in the supplementary materials.
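The snippet sampling described above can be reproduced with the following sketch; the list \texttt{frames} is assumed to hold the 96 extracted frames of a video.
\begin{verbatim}
def split_into_snippets(frames, num_snippets=9, snippet_len=16, stride=10):
    """Split the 96 extracted frames into 9 overlapping snippets of 16 frames
    with a stride of 10; snippet i covers frames [10*i, 10*i + 16)."""
    assert len(frames) >= (num_snippets - 1) * stride + snippet_len
    return [frames[i * stride: i * stride + snippet_len]
            for i in range(num_snippets)]
\end{verbatim}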
\begin{table}[t]
\caption{Comparisons of performance with existing AQA methods on FineDiving. (w/o DN) indicates selecting exemplars randomly; (w/ DN) indicates using dive numbers to select exemplars; / indicates without procedure segmentation.}
\vspace{-5pt}
\footnotesize
\renewcommand\tabcolsep{6pt}
\label{table:results}
\centering
\begin{tabular}{l|cc|c|c}
\toprule
\multirow{2}{*}{Method (w/o DN)} &\multicolumn{2}{c|}{AIoU@} & \multirow{2}{*}{$\rho$} & \multirow{2}{*}{R-$\ell_2(\times100)$} \\
\cline{2-3}
& 0.5 & 0.75 & & \\
\midrule
USDL \cite{tang2020uncertainty} & / & / & 0.8302 & 0.5927\\
MUSDL \cite{tang2020uncertainty} & / & / & 0.8427 & 0.5733\\
CoRe \cite{yu2021group} & / & / & 0.8631 & 0.5565\\
\midrule
\textbf{TSA} & \textbf{80.71} & \textbf{30.17} & \textbf{0.8925} & \textbf{0.4782} \\
\bottomrule
\multirow{2}{*}{Method (w/ DN)} &\multicolumn{2}{c|}{AIoU@} & \multirow{2}{*}{$\rho$} & \multirow{2}{*}{R-$\ell_2(\times100)$} \\
\cline{2-3}
& 0.5 & 0.75 & & \\
\midrule
USDL \cite{tang2020uncertainty} & / & / & 0.8913 & 0.3822 \\
MUSDL \cite{tang2020uncertainty} & / & / & 0.8978 & 0.3704\\
CoRe \cite{yu2021group} & / & / & 0.9061 & 0.3615 \\
\midrule
\textbf{TSA} & \textbf{82.51} & \textbf{34.31} & \textbf{0.9203} & \textbf{0.3420}\\
\bottomrule
\end{tabular}
\vspace{-10pt}
\end{table}
\noindent\textbf{Compared Methods.}
We reported the performance of the following methods including baseline and different versions of our approach:
$\bullet$ $\mathcal{F}$+$\mathcal{R}$ (Baseline), $\mathcal{F}$+$\mathcal{R}^\star$, and $\mathcal{F}$+$\mathcal{R}^\sharp$: The baseline uses I3D to extract visual features for each input video and predicts the score through a three-layer MLP with ReLU non-linearity, which is optimized by the MSE loss between the prediction and the ground truth.
$\star$ indicates the baseline adopting the asymmetric training strategy and $\sharp$ denotes the baseline concatenating dive numbers to features.
$\bullet$ $\mathcal{F}$+$\mathcal{S}$+$\mathcal{R}$: The procedure segmentation component is introduced into the baseline, which is optimized by the combination of MSE and BCE losses.
$\bullet$ TSA, TSA$^\dagger$: The approach proposed in Section \ref{our_approach}. $\dagger$ indicates parsing the action procedure using the ground-truth step transition labels instead of the procedure segmentation predictions, which can be seen as an oracle for TSA.
\subsection{Results and Analysis}
\noindent\textbf{Comparisons with the State-of-the-art Methods.}
Table \ref{table:results} shows the experimental results of our approach and other AQA methods, trained and evaluated on the FineDiving dataset. Our approach achieves state-of-the-art performance. Specifically, compared with the methods (USDL, MUSDL, and CoRe) that do not use dive numbers to select exemplars (i.e., w/o DN), our approach obtained 6.23\%, 4.98\%, and 2.94\% improvements on Spearman's rank correlation, respectively. Meanwhile, our approach also achieved 0.1145, 0.0951, and 0.0783 improvements on Relative $\ell_2$-distance compared to those methods, respectively.
Similarly, compared with USDL, MUSDL, and CoRe that use dive numbers to select exemplars (i.e., w/ DN), our approach also obtained different improvements on both Spearman's rank correlation and Relative $\ell_2$-distance.
\begin{table}[t]
\caption{Ablation studies on FineDiving. / indicates the methods without segmentation and $\checkmark$ denotes the method using the ground-truth step transition labels.}
\vspace{-5pt}
\footnotesize
\renewcommand\tabcolsep{6pt}
\label{table:ablation}
\centering
\begin{tabular}{l|cc|c|c}
\toprule
\multirow{2}{*}{Method (w/o DN)} & \multicolumn{2}{c|}{AIoU@} & \multirow{2}{*}{$\rho$} & \multirow{2}{*}{R-$\ell_2(\times100)$}\\
\cline{2-3}
& 0.5 & 0.75 & & \\
\midrule
$\mathcal{F}$+$\mathcal{R}$ & / & / & 0.8504 & 0.5837\\
$\mathcal{F}$+$\mathcal{R}^\star$ & / & / & 0.8452 & 0.6022\\
$\mathcal{F}$+$\mathcal{R}^\sharp$ & / & / & 0.8516 & 0.5736\\
$\mathcal{F}$+$\mathcal{S}$+$\mathcal{R}$ & 77.44 & 26.36 & 0.8602 & 0.5687\\
\textbf{TSA} & \textbf{80.71} & \textbf{30.17} & \textbf{0.8925} & \textbf{0.4782} \\
TSA$^\dagger$ & $\checkmark$ & $\checkmark$ & 0.9029 & 0.4536 \\
\bottomrule
\multirow{2}{*}{Method (w/ DN)} & \multicolumn{2}{c|}{AIoU@} & \multirow{2}{*}{$\rho$} & \multirow{2}{*}{R-$\ell_2(\times100)$}\\
\cline{2-3}
& 0.5 & 0.75 & & \\
\midrule
$\mathcal{F}$+$\mathcal{R}$ & / & / & 0.8576 & 0.5695\\
$\mathcal{F}$+$\mathcal{R}^\star$ & / & / & 0.8563 & 0.5770\\
$\mathcal{F}$+$\mathcal{R}^\sharp$ & / & / & 0.8721 & 0.5435\\
$\mathcal{F}$+$\mathcal{S}$+$\mathcal{R}$ & 78.64 & 29.37 & 0.8793 & 0.5428 \\
\textbf{TSA} & \textbf{82.51} & \textbf{34.31} & \textbf{0.9203} & \textbf{0.3420}\\
TSA$^\dagger$ & $\checkmark$ & $\checkmark$ & 0.9310 & 0.3260\\
\bottomrule
\end{tabular}
\vspace{-10pt}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.96\linewidth]{visualization.png}
\vspace{-3pt}
\caption{The visualization of procedure-aware cross attention between pairwise query and exemplar procedures. Our approach can focus on the exemplar regions that are consistent with the query step, which makes step-wise quality differences quantifying reliable. The presented pairwise query and exemplar contain the same action and sub-action types. (Best viewed in color.)}
\label{visualization}
\vspace{-12pt}
\end{figure*}
\noindent\textbf{Ablation studies.} We conducted analysis experiments under the two method settings to study the effects of the dive number, the asymmetric training strategy, and procedure-aware cross-attention learning. As shown in Table \ref{table:ablation}, the performance of $\mathcal{F}$+$\mathcal{R}$ was slightly better than that of $\mathcal{F}$+$\mathcal{R}^\star$, which verifies the effectiveness of the symmetric training strategy. Compared with $\mathcal{F}$+$\mathcal{R}$, $\mathcal{F}$+$\mathcal{R}^\sharp$ obtained 0.12\% and 1.45\% improvements on Spearman's rank correlation and 0.0101 and 0.026 improvements on Relative $\ell_2$-distance under the w/o DN and w/ DN settings, respectively. This demonstrates that concatenating dive numbers with I3D features has a positive impact. Compared with $\mathcal{F}$+$\mathcal{R}$ (w/ DN), $\mathcal{F}$+$\mathcal{S}$+$\mathcal{R}$ (w/ DN) improved Spearman's rank correlation from 85.76\% to 87.93\% by introducing procedure segmentation.
Compared with $\mathcal{F}$+$\mathcal{S}$+$\mathcal{R}$ (w/ DN), TSA (w/ DN) further improved the performance from 87.93\% to 92.03\% on Spearman's rank correlation, which demonstrates the superiority of learning procedure-aware cross-attention between query and exemplar procedures.
\begin{table}[t]
\caption{Effects of the number of exemplars for voting.}
\vspace{-5pt}
\footnotesize
\renewcommand\tabcolsep{10pt}
\label{table:vote_num}
\centering
\begin{tabular}{c|cc|c|c}
\toprule
\multirow{2}{*}{$M$} &\multicolumn{2}{c|}{AIoU@} & \multirow{2}{*}{$\rho$} & \multirow{2}{*}{R-$\ell_2(\times100)$} \\
\cline{2-3}
& 0.5 & 0.75 & & \\
\midrule
1 & 76.01 & 26.56 & 0.9085 & 0.4020 \\
5 & 80.64 & 31.78 & 0.9154 & 0.3658 \\
\textbf{10} & \textbf{82.51} & \textbf{34.31} & \textbf{0.9203} & \textbf{0.3420}\\
15 & 82.52 & 34.31 & 0.9204 & 0.3419 \\
\bottomrule
\end{tabular}
\vspace{-10pt}
\end{table}
Besides, for the multi-exemplar voting used in inference, the number of exemplars $M$ is an important hyper-parameter that trades off performance against computational cost. In Table \ref{table:vote_num}, we conducted experiments to study the impact of $M$ on our approach TSA (w/ DN). It can be seen that as $M$ increases, the performance improves while the computational cost grows. The improvement on Spearman's rank correlation becomes less significant when $M\!>\!10$, and a similar trend on Relative $\ell_2$-distance can also be found in Table \ref{table:vote_num}.
\subsection{Visualization}
We visualize the procedure-aware cross-attention between query and exemplar on FineDiving, as shown in Figure \ref{visualization}. It can be seen that our approach highlights semantic, spatial, and temporal correspondent regions in the exemplar steps consistent with the query step, which makes the relative scores between query and exemplar procedures learned from fine-grained contrastive regression more interpretable.
\section{Conclusion and Discussion}
In this paper, we have constructed the first fine-grained sports video dataset for assessing action quality, namely FineDiving. On FineDiving, we have proposed a procedure-aware action quality assessment approach built on a new temporal segmentation attention module, which learns semantically, spatially, and temporally consistent regions between pairwise steps of the query and exemplar procedures, makes the inference process more interpretable, and achieves substantial improvements over existing AQA methods.
\textbf{Limitations \& Potential Negative Impact.} The proposed method assumes that the number of step transitions in the action procedure is known. Moreover, the fine-grained annotations need to be manually decomposed and professionally labeled.
\textbf{Existing Assets and Personal Data.} This work contributes a new dataset on the diving sport, where all the data is collected from the YouTube and bilibili websites. We are actively contacting the creators to ensure that appropriate consent has been obtained.
\begin{appendix}
\section{More Experiment Settings}
\subsection{The Criterion of Selecting Exemplars}
For our approach TSA (w/ DN), we selected exemplars from the training set based on the action types. In training, for each instance, we randomly selected an exemplar from the remaining training instances with the same action type. In inference, we adopted the multi-exemplar voting strategy, i.e., we randomly selected 10 exemplars from the training instances with the same action type.
For our approach TSA (w/o DN), we randomly selected exemplars from the training set in both training and inference.
\subsection{The Number of Step Transitions}
In FineDiving, 60\% of the action types (e.g., 307C and 614B) consist of 3 sub-action types, corresponding to 3 steps; 30\% (e.g., 5251B and 6241B) consist of 4 sub-action types, corresponding to 4 steps; and 10\% (e.g., 5152B and 5156B) consist of 4 sub-action types, one of which is performed twice, corresponding to 5 steps. We provide full annotations of all 5 possible steps to support future research on utilizing fine-grained annotations for AQA.
In our experiments, we keep $L$ constant and equal to 2, i.e., a dive action is segmented into 3 steps separated by 2 step transitions.
The reasons are as follows.
A dive action consists of \emph{take-off}, \emph{flight}, and \emph{entry}. If \emph{flight} is accomplished by one sub-action type, a dive action is divided into 3 steps, e.g., 307C containing 3 steps: Reverse, 3.5 Soms.Tuck, and Entry. If \emph{flight} is accomplished by two sub-action types successively, a dive action is divided into 4 steps, e.g., 6241B containing 4 steps: Arm.Back, 0.5 Twist, 2 Soms.Pike, and Entry. If \emph{flight} is accomplished by two sub-action types and one sub-action type is interspersed in another, a dive action is divided into 5 steps, e.g., 5152B containing 5 steps: Forward, 2.5 Soms.Pike, 1 Twist, 2.5 Soms.Pike, and Entry, where ``1 Twist'' is interspersed in ``2.5 Soms.Pike''.
The two cases where \emph{flight} comprises two sub-action types can be regarded as special cases of \emph{flight} comprising one sub-action type. Therefore, setting $L\!=\!2$ is reasonable.
\section{More Ablation Study}
We summarize the hyper-parameters for training as follows: the number of frames in each step is fixed to 5 before being fed into the Multi-head Cross-Attention (MCA); all configurations are based on a transformer decoder with 3 layers and 8 heads.
\begin{table}[ht]
\caption{Effects of the number of frames in each step in the procedure-aware cross-attention model.}
\vspace{-6pt}
\footnotesize
\renewcommand\tabcolsep{10pt}
\label{table:step_length}
\centering
\begin{tabular}{c|cc|c|c}
\toprule
\multirow{2}{*}{$L_{\text{step}}$} &\multicolumn{2}{c|}{AIoU@} & \multirow{2}{*}{$\rho$} & \multirow{2}{*}{R-$\ell_2(\times100)$} \\
\cline{2-3}
& 0.5 & 0.75 & & \\
\midrule
3 & 76.49 & 24.17 & 0.9081 & 0.4003 \\
\textbf{5} & \textbf{82.51} & \textbf{34.31} & \textbf{0.9203} & \textbf{0.3420}\\
8 & 80.65 & 33.37 & 0.9198 & 0.3501 \\
10 & 77.97 & 30.44 & 0.9174 & 0.3588 \\
15 & 77.96 & 30.32 & 0.9149 & 0.3675 \\
\bottomrule
\end{tabular}
\end{table}
To investigate the effect of the number of frames in each step (denoted as $L_{\text{step}}$) on the performance of action quality assessment, we conduct several experiments on the FineDiving dataset. Table~\ref{table:step_length} summarizes the performance with different $L_{\text{step}}$, namely 3, 5, 8, 10, and 15. We observe that when $L_{\text{step}}$ increases from 3 to 5, the action quality assessment performance improves by 1.22\% and 0.0583 on Spearman's rank correlation and Relative $\ell_2$-distance, respectively. Beyond $L_{\text{step}}$=5, the performance tends to be flat with a slight decrease. Specifically, compared with $L_{\text{step}}$=5, $L_{\text{step}}$=8 degrades by 0.05\% and 0.0081 on Spearman's rank correlation and Relative $\ell_2$-distance, respectively.
The experimental results illustrate that the action quality assessment performance is not proportional to the number of frames per step: too few frames cannot make full use of intra-step information, while too many frames may introduce noisy information.
\begin{table}[ht]
\caption{Effects of the number of transformer decoder layers.}
\vspace{-6pt}
\footnotesize
\renewcommand\tabcolsep{10pt}
\label{table:layer_num}
\centering
\begin{tabular}{c|cc|c|c}
\toprule
\multirow{2}{*}{$R$} &\multicolumn{2}{c|}{AIoU@} & \multirow{2}{*}{$\rho$} & \multirow{2}{*}{R-$\ell_2(\times100)$} \\
\cline{2-3}
& 0.5 & 0.75 & & \\
\midrule
1 & 81.71 & 33.24 & 0.9168 & 0.3612 \\
\textbf{3} & \textbf{82.51} & \textbf{34.31} & \textbf{0.9203} & \textbf{0.3420}\\
5 & 79.83 & 31.84 & 0.9202 & 0.3483 \\
10 & 79.57 & 31.78 & 0.9171 & 0.3534 \\
\bottomrule
\end{tabular}
\end{table}
To explore the effect of the number of transformer decoder layers (denoted as $R$) on the performance of action quality assessment, we conduct several experiments on the FineDiving dataset. Table~\ref{table:layer_num} summarizes the performance with different $R$, namely 1, 3, 5, and 10. It can be seen that when $R$ equals 3, the action quality assessment performance peaks at 0.9203 on Spearman's rank correlation and 0.3420 on Relative $\ell_2$-distance. For larger $R$, the performance tends to be flat with a slight decrease, since too many transformer decoder layers may lead to overfitting given that each step contains only 5 frames ($L_{\text{step}}$=5).
\section{Code}
We provide the FineDiving dataset and code of our approach\footnote{\url{https://github.com/xujinglin/FineDiving}}, including the training and inference phases.
\begin{table*}
\caption{The detailed descriptions of action and sub-action types.}
\vspace{-5pt}
\footnotesize
\renewcommand\tabcolsep{6pt}
\renewcommand\arraystretch{0.9}
\label{dataset_info}
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c}
\toprule
\multirow{2}{*}{Action Type} & \multicolumn{5}{c|}{Sub-action Type} & \multirow{2}{*}{Action Type} & \multicolumn{5}{c}{Sub-action Type}\\
\cline{2-6}\cline{8-12}
& Take-off & \multicolumn{3}{c|}{Flight} & Entry & & Take-off & \multicolumn{3}{c|}{Flight} & Entry\\
\midrule
101B & Forward & \multicolumn{3}{c|}{0.5 Som.Pike} & \multirow{25}{*}{Entry} & 612B & Arm.Fwd & \multicolumn{3}{c|}{1 Som.Pike} & \multirow{5}{*}{Entry}\\
103B & Forward & \multicolumn{3}{c|}{1.5 Soms.Pike} & & 614B & Arm.Fwd & \multicolumn{3}{c|}{2 Soms.Pike} & \\
105B & Forward & \multicolumn{3}{c|}{2.5 Soms.Pike} & & 626B & Arm.Back & \multicolumn{3}{c|}{3 Soms.Pike} & \\
107B & Forward & \multicolumn{3}{c|}{3.5 Soms.Pike} & & 626C & Arm.Back & \multicolumn{3}{c|}{3 Soms.Tuck} & \\
109B & Forward & \multicolumn{3}{c|}{4.5 Soms.Pike} & & 636C & Arm.Reverse & \multicolumn{3}{c|}{3 Soms.Tuck} & \\\cline{7-12}
107C & Forward & \multicolumn{3}{c|}{3.5 Soms.Tuck} & & 5231D & Back & 0.5 Twist & \multicolumn{2}{c|}{1.5 Soms.Pike} & \multirow{16}{*}{Entry}\\
109C & Forward & \multicolumn{3}{c|}{4.5 Soms.Tuck} & & 5233D & Back & 1.5 Twists & \multicolumn{2}{c|}{1.5 Soms.Pike} & \\
201A & Back & \multicolumn{3}{c|}{0.5 Som.Straight} & & 5235D & Back & 2.5 Twists & \multicolumn{2}{c|}{1.5 Soms.Pike} & \\
201B & Back & \multicolumn{3}{c|}{0.5 Som.Pike} & & 5237D & Back & 3.5 Twists & \multicolumn{2}{c|}{1.5 Soms.Pike} & \\
201C & Back & \multicolumn{3}{c|}{0.5 Som.Tuck} & & 5251B & Back & 0.5 Twist & \multicolumn{2}{c|}{2.5 Soms.Pike} & \\
205B & Back & \multicolumn{3}{c|}{2.5 Soms.Pike} & & 5253B & Back & 1.5 Twists & \multicolumn{2}{c|}{2.5 Soms.Pike} & \\
207B & Back & \multicolumn{3}{c|}{3.5 Soms.Pike} & & 5255B & Back & 2.5 Twists & \multicolumn{2}{c|}{2.5 Soms.Pike} & \\
205C & Back & \multicolumn{3}{c|}{2.5 Soms.Tuck} & & 5331D & Reverse & 0.5 Twist & \multicolumn{2}{c|}{1.5 Soms.Pike} & \\
207C & Back & \multicolumn{3}{c|}{3.5 Soms.Tuck} & & 5335D & Reverse & 2.5 Twists & \multicolumn{2}{c|}{1.5 Soms.Pike} & \\
301B & Reverse & \multicolumn{3}{c|}{0.5 Som.Pike} & & 5337D & Reverse & 3.5 Twists & \multicolumn{2}{c|}{1.5 Soms.Pike} & \\
305B & Reverse & \multicolumn{3}{c|}{2.5 Soms.Pike} & & 5353B & Reverse & 1.5 Twists & \multicolumn{2}{c|}{2.5 Soms.Pike} & \\
303C & Reverse & \multicolumn{3}{c|}{1.5 Soms.Tuck} & & 5355B & Reverse & 2.5 Twists & \multicolumn{2}{c|}{2.5 Soms.Pike} & \\
305C & Reverse & \multicolumn{3}{c|}{2.5 Soms.Tuck} & & 6142D & Arm.Fwd & 1 Twist & \multicolumn{2}{c|}{2 Soms.Pike} & \\
307C & Reverse & \multicolumn{3}{c|}{3.5 Soms.Tuck} & & 6241B & Arm.Back & 0.5 Twist & \multicolumn{2}{c|}{2 Soms.Pike} & \\
401B & Inward & \multicolumn{3}{c|}{0.5 Som.Pike} & & 6243D & Arm.Back & 1.5 Twists & \multicolumn{2}{c|}{2 Soms.Pike} & \\
403B & Inward & \multicolumn{3}{c|}{1.5 Soms.Pike} & & 6245D & Arm.Back & 2.5 Twists & \multicolumn{2}{c|}{2 Soms.Pike} & \\ \cline{7-12}
405B & Inward & \multicolumn{3}{c|}{2.5 Soms.Pike} & & 5132D & Forward & 1.5 Soms.Pike & 1 Twist & 1.5 Soms.Pike & \multirow{5}{*}{Entry}\\
407B & Inward & \multicolumn{3}{c|}{3.5 Soms.Pike} & & 5152B & Forward & 2.5 Soms.Pike & 1 Twist & 2.5 Soms.Pike & \\
405C & Inward & \multicolumn{3}{c|}{2.5 Soms.Tuck} & & 5154B & Forward & 2.5 Soms.Pike & 2 Twists & 2.5 Soms.Pike & \\
407C & Inward & \multicolumn{3}{c|}{3.5 Soms.Tuck} & & 5156B & Forward & 2.5 Soms.Pike & 3 Twists & 2.5 Soms.Pike & \\
409C & Inward & \multicolumn{3}{c|}{4.5 Soms.Tuck} & & 5172B & Forward & 3.5 Soms.Pike & 1 Twist & 3.5 Soms.Pike & \\
\bottomrule
\end{tabular}
\end{table*}
\section{The FineDiving Dataset Details}
We will release the FineDiving dataset to promote future research on action quality assessment.
\subsection{Descriptions of Action and Sub-action Types}
Table \ref{dataset_info} shows the detailed descriptions of the action and sub-action types in FineDiving, as defined in the lexicon of the Dataset Construction subsection of Section \ref{findiving}. A combination of sub-action types from the three phases (namely take-off, flight, and entry) generates an action type. Specifically, the take-off phase is annotated with one of seven sub-action types: ``Forward'', ``Back'', ``Reverse'', ``Inward'', ``Armstand Forward'', ``Armstand Back'', and ``Armstand Reverse''. The entry phase is annotated with the sub-action type ``Entry''. The flight phase is labeled with one or two sub-action types describing the somersault process, where the number of sub-action types depends on whether the somersault process contains a twist or not.
As shown in Table \ref{dataset_info}, if the somersault process contains a twist, the flight phase is annotated with two sub-action types: one annotates the number of twist turns and the other the number of somersault turns in the somersault process. The former sub-action type is part of the latter and is not performed independently of it. For different action types, the twist may occur at different locations in the somersault process. When the twist is performed at the beginning of the somersault process, we first annotate the number of twist turns and then the number of somersault turns, as for the action types 5231D and 6245D in Table \ref{dataset_info}. When the twist is performed in the middle of the somersault process, we first annotate the number of somersault turns, then the number of twist turns, and finally the number of somersault turns again, as for 5132D and 5172B in Table \ref{dataset_info}. Note that the first and last annotations refer to the same somersault process but are separated by the twist.
If the somersault process does not contain the twist, the flight phase is annotated by one sub-action type, that is, the number of somersault turns, such as 205B and 626C in Table \ref{dataset_info}.
\subsection{Annotation Tool}
In the fine-grained annotation stage, the durations of sub-action types differ across action instances, so labeling the FineDiving dataset with a conventional annotation tool would require a huge workload. To improve annotation efficiency, we utilize a publicly available annotation toolbox \cite{tang2019coin} (mentioned in the Annotation paragraph of the Dataset Construction subsection) to generate frame-wise labels for the various sub-action types, which ensures high efficiency, accuracy, and consistency of our annotation results. Figure \ref{tool} shows an example interface of the annotation tool, which annotates the frames extracted from an action instance.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{tool.pdf}
\caption{The interface for sub-action type annotation tool. The left part represents the video frames to be annotated by the pre-defined sub-action types. The right part shows the pre-defined sub-action types using different colors.}
\vspace{-10pt}
\label{tool}
\end{figure*}
\end{appendix}
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\label{sec:intro}
Soccer is perhaps the most popular sport in the world: it is officially played in 187 countries~\cite{FPFD2019FIFAReport} and, as an example, it gathered an estimated live audience of 1.12 billion people for the 2018 World Cup final~\cite{FIFA2018AudienceSummary}, a figure close to fifteen percent of the world population. Moreover, this interest in broadcast soccer matches has become one of the major sources of revenue for soccer clubs in leagues such as the major European ones \cite{KPMG2020EuropeanChampions}. Accordingly, there is a growing interest in analyzing soccer videos to provide tactical information \cite{Anzer2022TacticalPatterns}, quantitative statistics and visualizations \cite{Arbues2020Always}, camera selection in broadcasting \cite{Chen2018CameraSelection}, or video summaries \cite{Giancola2018SoccerNetv1} that aim to highlight the main events of a match. In all these tasks, the automatic spotting of key actions such as goal, corner or red card (among others) stands as a fundamental tool.
At the present time, locating events in soccer videos is manually performed by trained operators aided by algorithmic tools. These operators not only rely on recorded broadcast videos, but also on event metadata coming from logs, GPS data, and video-tracking data \cite{Pappalardo2019PublicData}. This is a laborious task considering that a soccer match lasts around 90 minutes but can contain more than 1,500 events \cite{Sanabria2020Profiling}, taking about 8 hours to annotate \cite{Giancola2018SoccerNetv1,Zhou2021FeatureCombination}.
As defined in \cite{Giancola2018SoccerNetv1}, the action spotting task aims at locating the anchor time that identifies an event. Current approaches for action spotting in soccer videos are based on modeling the dynamics of the game, the complexity of the events, and the variation of the video sequences. Several methods have focused on the latter \cite{Sanabria2019Deep,Giancola2018SoccerNetv1,Cioppa2020context,Zhou2021FeatureCombination}, as they exploit the similar visual features of \emph{standard} shots of video sequences. For instance, the shots of a standard \emph{goal scoring} video sequence could consist of a goal-view of a player shooting and scoring, a close-up of fans rejoicing, and a zoom-in of the team celebrating. Other methods have also considered modeling the narration given by the broadcast commentators \cite{Vanderplaetse2020Improved}. In contrast, only a few methods \cite{Cioppa2021CameraCalibration} have modeled the visual dynamics of the game, as it entails understanding the interactions of the players according to the rules of the game. Moreover, videos of broadcast sport events present an additional difficulty since they usually do not cover the whole field, making the location of the players unknown at a given time. Besides, broadcast video sequences are not always showing the main action of the game, as they also capture the audience in the stands, the coaches on the bench, and replays.
\begin{figure*}[!t]
\centering
\includegraphics[height=8cm]{overview.pdf}
\caption{Overview of our approach. Given a sequence of frames from a soccer match, we calculate a motion vector for every person on the field and classify them into \emph{referees}, \emph{players}, and \emph{goalkeepers}. Using this information and their location provided by a camera calibration model, we create a graph representation of the seen scene. These graphs are later processed by our proposed architecture to spot an action.}
\label{fig:overview}
\end{figure*}
In this work, our aim is to model the interactions of soccer players rather than using global generic features to perform action spotting in soccer videos. Consequently, we propose a model that represents the players in the field as nodes linked by their distance in a graph. The information about each player contained in the nodes is their position on the field, a calculated motion vector, and their predicted player class. A schematic overview of our approach is shown in Figure \ref{fig:overview}.
\paragraph{Contributions} Our contributions are summarized as follows: ({\em i}\/)\xspace We propose a novel graph-based architecture for action spotting in soccer videos that models the players and their dynamics on the field. The representation of the players relies on their extracted localization, predicted classification, and calculated motion. ({\em ii}\/)\xspace In creating this representation, we also propose an unsupervised method for player classification that is robust to different sample sizes. This method is based on the segmentation and location of the players on the field. ({\em iii}\/)\xspace We provide an extensive evaluation of the graph representation detailing the contribution of each individual input. ({\em iv}\/)\xspace We show how the results of the graph-based model can be boosted when combined with audiovisual features extracted from the video. Our evaluation shows a better performance than similar methods.
\section{Related Work}
\label{sec:related}
\paragraph{Action spotting and video summarization in sports} The objective of video summarization in sports is to highlight key moments of a game, for instance, when a team scores or when fouls are committed. Relevant work on soccer video summarization includes \cite{Coimbra2018Shape,Sanabria2019Deep}. In \cite{Coimbra2018Shape}, visual highlights of a soccer match are extracted by calculating importance scores from audio and metadata. In \cite{Sanabria2019Deep}, a soccer match is summarized by using a hierarchical recursive neural network over sequences of \emph{event} proposals extracted from frames. In \cite{Giancola2018SoccerNetv1}, the task of sports video summarization is cast as spotting actions in video sequences, a task similar to action detection. Moreover, they introduced SoccerNet, the largest dataset of soccer matches for video summarization. The authors propose a neural network architecture that extracts visual features from sequences of frames and spots actions by using a NetVLAD \cite{Arandjelovic16} pooling layer. This architecture was further extended by combining audiovisual modalities \cite{Vanderplaetse2020Improved} and by preserving the temporal context of the pooling layer independently \cite{Giancola2021NetVLADPlusPlus}. Another work on the same dataset was presented in \cite{Cioppa2020context}. They introduce a context-aware loss function along with a deep architecture for video summarization that consists of feature segmentation and action spotting modules. A work closer to ours was presented in \cite{Cioppa2021CameraCalibration}. They model the players in a graph using their estimated location on the soccer pitch obtained with a camera calibration approach \cite{Sha2020CameraCalibration}. Their architecture fuses the output of a graph neural network \cite{Li2021DeepGCNs} with their previous model \cite{Cioppa2020context}. Besides using a different spotting mechanism, our graph representation is more robust since it classifies the players and incorporates their motion vectors. Finally, \cite{Zhou2021FeatureCombination} presents a transformer-based architecture that uses as input features from five action recognition methods. In contrast, our approach models the dynamics of the soccer players rather than using global classification features from the frames, and it is not as computationally demanding.
\paragraph{Automatic player classification} A source of information for semantic understanding of sports videos comes from player classification. It has been exploited in other tasks including player detection \cite{Manafifard2016MultiplayerDetection}, player tracking \cite{Tong2011LabelingTracking}, team activity recognition \cite{Bialkowski2013TeamActivities}. Traditional methods for automatic player classification rely on the combination of color histograms (RGB~\cite{Mazzeo2012football}, L$^*$a$^*$b$^*$~\cite{Bialkowski2013TeamActivities}, L$^*$u$^*$v$^*$~\cite{Tong2011LabelingTracking}) and clustering algorithms ($k$-means~\cite{Tran2012LongView}, Gaussian Mixture Models (GMMs)~\cite{Montanes2012RealTime}, Basic Sequential Algorithmic Scheme (BSAS)~\cite{DOrazio2009BSAS}). A more recent supervised approach \cite{Istasse2019Associative} performs a semantic segmentation of players that learns a separation embedding between the teams in the field. In comparison with this method, our method does not rely on pixel-level annotations for training the network. Another recent approach \cite{Koshkina2021contrastive} trains an embedding network for team player discrimination using a triplet loss for contrastive learning. Contrary to this approach, our method works on partial views of the field and not only clusters the two majority classes (\emph{players from team 1} and \emph{team 2}) but also considers minority classes (\emph{referees} and \emph{goalkeepers}).
\paragraph{Graphs for sports analytics} Graphs have been widely used for modeling team ball sports, as they represent the players as nodes of a network and their interactions as edges, for example in water polo~\cite{Passos2010Networks}, volleyball~\cite{Qi2020stagNet}, soccer~\cite{Anzer2022TacticalPatterns,Stockl2021Offensive}, etc. Graphs can capture patterns of a game at the individual, group, and global level \cite{Baldu2019Using}. As an example of the global level, \cite{Qi2020stagNet} presents a deep architecture that models volleyball players and their individual actions to predict team activities using recursive networks and spatio-temporal attention mechanisms. At the group level, \cite{Anzer2022TacticalPatterns} proposes a graph model that detects tactical patterns from videos of soccer games by tracking each player and using variational autoencoders. At the individual level, \cite{Stockl2021Offensive} measures the defensive quality of individual soccer players by using pitch-player passing graph networks. In contrast with \cite{Anzer2022TacticalPatterns}, our graph-based model relies only on visual information, so our approach involves locating the players on the pitch and classifying them. Moreover, our graph representation only covers the players visible in the scene, since the pitch is not fully visible in broadcast soccer matches, unlike the volleyball games in \cite{Qi2020stagNet}.
\section{Proposed Approach}
\label{sec:method}
Our approach to spotting actions in soccer videos relies on representing the players seen in a frame as nodes of a graph. These nodes encode the location of the players, their motion vector, and their player class. In this representation, a player shares an edge with another player according to their approximate physical distance. All this information is extracted by first segmenting the players and locating them on the field using a camera calibration method. Next, the five categories of players are automatically classified by finding appropriate field views and using color histograms computed from their segmentation masks. The motion vector of a player is calculated as the dominant motion vector of their segmented optical flow mask. Once all the graphs of a sequence of frames are extracted, they are processed by our graph neural network, as illustrated in Figure \ref{fig:overview}. All the steps of our approach are thoroughly detailed in the following sections.
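As an illustration of the graph construction (a sketch under our own assumptions: a 5-metre edge threshold and a one-hot encoding of the five player classes, neither of which is specified above), a per-frame graph could be built as follows:
\begin{verbatim}
import numpy as np

def build_player_graph(positions, motions, classes, dist_threshold=5.0):
    """positions: (N, 2) locations on the 2D pitch template; motions: (N, 2)
    dominant motion vectors; classes: (N,) integer labels over the five
    categories. The 5-metre edge threshold is an illustrative assumption."""
    node_features = np.concatenate(
        [positions, motions, np.eye(5)[classes]], axis=1)  # (N, 9) node matrix
    edges = [(i, j)
             for i in range(len(positions))
             for j in range(i + 1, len(positions))
             if np.linalg.norm(positions[i] - positions[j]) <= dist_threshold]
    return node_features, edges
\end{verbatim}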
\subsection{Players Segmentation and Localization}
\label{sec:playerSegmentationLocalization}
\paragraph{Players Segmentation} Our approach relies on the detection and segmentation of players and referees because their pixel regions allow us to extract useful information and help to disambiguate players in the case of occlusions. Specifically, we leverage player regions to improve their uniform clustering and to estimate the motion of each player from the estimated optical flow in those regions, thus disregarding the motion of background pixels close to a certain player. We use PointRend~\cite{Kirillov2020PointRend} to perform a semantic segmentation of the soccer videos and filter out all the categories other than \emph{person}.
\paragraph{Players Localization} Our method also relies on the position of the players and referees inside the soccer pitch. Their position is not only used inside our action spotting model but also in the player uniform clustering, as further explained in Section \ref{sec:playerClassification}. We consider that a player/referee is \emph{touching} the ground at the midpoint of the bottom edge of their bounding box. Their position inside the soccer pitch is calculated by projecting this point of the image plane onto a 2D template of the soccer pitch using the field homography matrix. This matrix is previously obtained using the camera calibration model proposed by \cite{Cioppa2021CameraCalibration,Sha2020CameraCalibration}.
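A minimal sketch of this projection using OpenCV is shown below; \texttt{H} is assumed to be the 3$\times$3 image-to-template homography returned by the calibration model.
\begin{verbatim}
import cv2
import numpy as np

def project_to_pitch(bbox, H):
    """Project a person's ground contact point onto the 2D pitch template.

    bbox: (x1, y1, x2, y2) in image coordinates; the contact point is the
    midpoint of the bottom edge of the bounding box."""
    x1, y1, x2, y2 = bbox
    foot = np.array([[[(x1 + x2) / 2.0, y2]]], dtype=np.float32)  # shape (1,1,2)
    return cv2.perspectiveTransform(foot, H)[0, 0]  # (x, y) on the pitch
\end{verbatim}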
\subsection{Unsupervised Player Classification}
\label{sec:playerClassification}
We propose an automatic player classification method that exploits the different field views of a broadcast match to find samples of the five considered categories: \emph{player from team 1} or \emph{team 2}, \emph{goalkeeper A} or \emph{B}, and \emph{referee}. Our proposed method is composed of four main steps that use the information coming from the semantic segmentation and the camera calibration. First, we filter out the frames that have a low camera calibration confidence or that show close-ups of people. Additionally, we discard detected people with a low detection score or that are visually too small for clustering/classification. Second, we determine which frames correspond to the midfield and cluster the people in them into three groups: \emph{referee}, \emph{player team 1}, and \emph{player team 2}. Third, we determine the frames belonging to each goal and consider the person closest to it as being either \emph{goalkeeper A} or \emph{goalkeeper B}. Fourth, we train a convolutional neural network (CNN) using the clusters formed in the two previous steps. This CNN takes as input a masked person and outputs its corresponding category among the five classes. All these steps are fully detailed in the following paragraphs.
\begin{figure}[!t]
\input{figExtractionRules.tex}
\caption[]{Examples of the selection of \emph{players}, \emph{referees}, and \emph{goalkeepers} using their location on the field. The first column shows a midfield view with \emph{players from both teams} and a \emph{referee} away from the goals by more than 30\% of the length of the field. The second column shows a \emph{goalkeeper} within a distance of 4\% of the length of the field from the goal, with the other players at least 8\% away.}
\label{fig:extractionDistances}
\end{figure}
\paragraph{Frame and people filtering} We filter out the frames that have a low camera calibration confidence score ($\leq0.85$), since the field homography matrix must be accurate in order to obtain the midfield and goal views. Moreover, we also remove frames that show close-ups of people, who might be players or the audience. We consider that a frame shows a close-up when the bounding box area of at least one person in the scene is greater than a maximum area threshold.
In addition, we discard segmented people that have a low detection score, as they could have been confused with other objects in the scene or correspond to a group of occluded persons. We also remove segmented people with a very small bounding box area because they are not useful for clustering/classification, as their uniform colors are not easily distinguishable.
\paragraph{Referee and player teams clustering} We assume that a person in the center of the soccer pitch (midfield) is either a \emph{player from team 1}, a \emph{player from team 2}, or the \emph{referee}. This assumption simplifies the clustering problem, since goalkeepers rarely appear in the center of the field. In order to get samples of the three considered classes, we first find the frames of the match that show a midfield view. For each frame, we calculate the position of every person on the field using their bounding box from the semantic segmentation and the homography matrix of the field. If all persons appearing in a frame are at least 30\% of the field length away from the center of both goals, the frame is considered to show a view of the midfield. An example of this rule for midfield frames is shown in the first column of Figure \ref{fig:extractionDistances}. Although the midfield view avoids photographers and/or team supporters close to the goal or field corners, it might show cameramen, team staff, and players warming up on the sidelines. Therefore, we discard detected people who are close to the sidelines using a distance threshold.
The next step is to calculate a color histogram for each person detected in the midfield view frames, as their uniform is their main discriminating characteristic. Each player is represented using a normalized histogram of 64 bins from an 8$\times$8 grid of the a$^*$b$^*$ channels of the L$^*$a$^*$b$^*$ color space, which proved to be more robust to illumination changes. This histogram is calculated using the pixels inside the (eroded) mask of the semantic segmentation of a person. The purpose of applying an erosion to the segmentation mask is to remove, from the contour of a person, background pixels belonging to the field or to other people.
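A minimal sketch of this histogram computation with OpenCV could look as follows (the erosion kernel size is an illustrative choice, not a value taken from our implementation):
\begin{verbatim}
import numpy as np
import cv2

def ab_histogram(bgr_patch, mask, erosion_kernel=5):
    """Normalized 8x8 a*b* histogram of a person patch, restricted to its
    (eroded) segmentation mask. The kernel size is an illustrative choice."""
    kernel = np.ones((erosion_kernel, erosion_kernel), np.uint8)
    eroded = cv2.erode(mask.astype(np.uint8), kernel)
    lab = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2LAB)
    # channels 1 and 2 are a* and b*; 8 bins each -> 64-bin histogram
    hist = cv2.calcHist([lab], [1, 2], eroded, [8, 8], [0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-8)
\end{verbatim}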
The main problem of clustering the people in the midfield without imposing constraints is that it sometimes leads to the minority class (i.e., the \emph{referee}) being absorbed by the predominant classes. In order to correctly recover the actual three classes, we first cluster all players into $n>3$ \emph{prototypes} using $k$-means. The value of $n$ should be large enough to form at least one cluster for the \emph{referee} class but not so large as to produce several small clusters of outliers. From these cluster prototypes we combinatorially search for the triplet that best represents the \emph{players} and the \emph{referee} classes. Specifically, for each possible triplet we calculate the area of the triangle formed by their centroids, where the length of each side of the triangle is the Bhattacharyya distance between the corresponding centroids. We choose the triplet of prototypes with the largest triangular area as the one containing the three different classes. Finally, we grow these prototypes by modelling each of them with a GMM of $m$ components and iteratively adding samples from the remaining prototypes. This is accomplished in two steps, considering that the distance $d$ between a sample and a prototype is the median Bhattacharyya distance between the sample and the $m$ components of the prototype. In the first step, a sample is added to a prototype if the distance $d$ between both is close to zero and smaller than the distances $d_{1}$ and $d_{2}$ to the other pair of prototypes by a margin $\lambda$, i.e., $d+\lambda<d_{1}$ and $d+\lambda<d_{2}$. In the second step, the GMM of each prototype is recomputed to take the new additions into account. Finally, the enlarged prototypes are used to calculate the class boundaries by training a support vector machine. The final clusters are formed conservatively according to the distance between samples and the class boundaries.
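The triplet search can be sketched as follows, using OpenCV's Bhattacharyya histogram distance and Heron's formula for the triangle area; the \texttt{centroids} argument is assumed to hold the $n$ prototype centroid histograms:
\begin{verbatim}
import itertools
import math
import numpy as np
import cv2

def bhattacharyya(h1, h2):
    # OpenCV's Bhattacharyya distance between two (float32) histograms
    return cv2.compareHist(h1.astype(np.float32), h2.astype(np.float32),
                           cv2.HISTCMP_BHATTACHARYYA)

def best_triplet(centroids):
    """Pick the triplet of prototype centroids spanning the largest triangle,
    where the side lengths are Bhattacharyya distances (Heron's formula)."""
    best, best_area = None, -1.0
    for i, j, k in itertools.combinations(range(len(centroids)), 3):
        a = bhattacharyya(centroids[i], centroids[j])
        b = bhattacharyya(centroids[j], centroids[k])
        c = bhattacharyya(centroids[i], centroids[k])
        s = (a + b + c) / 2.0
        area2 = s * (s - a) * (s - b) * (s - c)
        area = math.sqrt(area2) if area2 > 0 else 0.0
        if area > best_area:
            best, best_area = (i, j, k), area
    return best
\end{verbatim}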
\paragraph{Goalkeepers extraction} We assume that the person closest to a goal is \emph{goalkeeper A} or \emph{goalkeeper B}, depending on which side of the field the goal is located. As in the previous case, we compute the location of each player on the field for all frames using their bounding boxes from the semantic segmentation and the homography matrix of the field. For each goal, we look for frames that show scenes meeting two specific conditions. The first condition is that there must be a person at a distance of no more than 4\% of the width of the field from the center of the goal. We consider this person to be a goalkeeper but, to make sure that this is the case, the second condition is that the rest of the people are even further away from the goal, at a distance of at least 8\% of the width of the field. To illustrate this, the second column of Figure \ref{fig:extractionDistances} shows a potential goalkeeper that fulfills both conditions. Since these conditions do not guarantee exact groups of goalkeepers, each group is refined in two steps. First, we compute the a$^*$b$^*$ color histograms from the masked patches of the goalkeepers, as described in the previous case. Second, we calculate the median color histogram for each \emph{goalkeeper} group and filter out masked patches whose Bhattacharyya distance to it is larger than 0.4. Since the camera calibration is not always accurate, in some games the assistant referee is grouped as a goalkeeper and the subsequent filtering results in empty clusters. In this case, the samples that are considered referees are removed from the goalkeeper groups and the filtering is performed again. A sample is considered to be a referee if its Bhattacharyya distance to the referee cluster is lower than a threshold.
\paragraph{Player classification} We train a CNN using as training examples the masked players and referees from the five clusters extracted as described above. Specifically, we train a large MobileNetV3 \cite{Howard2019MobileNetV3} from scratch using the Adam optimizer \cite{Kingma2015AdamOptimizer}. The training parameters are a starting learning rate $\gamma=10^{-3}$, $\beta_{1}=0.9$, $\beta_{2}=0.999$, and a batch size of 96. The network is trained using a plateau schedule with a patience of 7 epochs for a maximum of 100 epochs. Once the training is finished, all the people in the match are labeled using the CNN.
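A minimal PyTorch sketch of this training setup is given below; the data loaders and the validation routine are placeholders, and the \texttt{torchvision} call assumes a recent version of the library:
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

# 5 classes: team 1, team 2, goalkeeper A, goalkeeper B, referee
model = models.mobilenet_v3_large(weights=None, num_classes=5)  # trained from scratch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=7)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):                          # maximum of 100 epochs
    model.train()
    for masked_patches, labels in train_loader:   # placeholder loader over the clusters
        optimizer.zero_grad()
        loss = criterion(model(masked_patches), labels)
        loss.backward()
        optimizer.step()
    val_loss = evaluate(model, val_loader)        # placeholder validation routine
    scheduler.step(val_loss)
\end{verbatim}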
\begin{figure*}[!t]
\input{figMotionVectors.tex}
\caption[]{Example of player motion vector extraction from a frame. The left column shows the calculated segmentation masks for \emph{fixed regions} (green), \emph{field} (blue), and the \emph{players} (red). The middle column shows the obtained optical flow using FlowNet 2.0 CSS \cite{Ilg2017FlowNet2, Reda2017Flownet2Implementation}. The right column shows the extracted motion vectors for each player in yellow.}
\label{fig:motionVectors}
\end{figure*}
\subsection{Player Motion Vector}
\label{sec:playerMotionVector}
Another source of information for our model is the motion vector of each player. It is calculated in four steps. First, we compute the optical flow for all frames of all matches using FlowNet 2.0 CSS \cite{Ilg2017FlowNet2, Reda2017Flownet2Implementation}. In order to reduce the effect of camera movement, and thus the influence of the optical flow of the whole scene on the motion of each player, we calculate the dominant motion vector of the field. This is accomplished by detecting and removing fixed regions of the broadcast, such as the scoreboard, and by segmenting the field, in the second and third steps, respectively. In the last step, we calculate the dominant motion vector of each player using the masks obtained from the semantic segmentation. An example of the player motion extraction is shown in Figure \ref{fig:motionVectors}. The next paragraphs describe in detail the last three steps of our method.
\paragraph{Fixed-regions detection} For each half-match, we obtain the masks of fixed regions by first calculating the mean optical flow over 10\% of the frames, uniformly sampled. We expect the mean optical flow to contain fixed and moving regions with values close to and far from zero, respectively. Then, we perform a $k$-means clustering~\cite{Johnson2022Faiss} with $k=4$ and take as fixed-region masks the clusters whose centroid has a norm close to zero. Subsequently, for each fixed region we compute its mean RGB patch using its corresponding mask. The detection of fixed regions in a frame is then a two-step process: for each previously extracted region, ({\em i}\/)\xspace we obtain a patch at its expected location in the frame and ({\em ii}\/)\xspace calculate the Bhattacharyya distance between this patch and the mean RGB patch, marking the region as present when this distance is small.
\paragraph{Field Segmentation} Our field segmentation is based on a color quantization of the frame. We first mask out, in black, the segmented players and the detected fixed regions. Next, we cluster the colors of the pixels using $k$-means~\cite{Johnson2022Faiss} with $k=5$ and consider the largest cluster to be the field. Other clusters are merged with the field if the mean squared error between their centroids is close to zero. The merged field pixels form the true values of the binary field mask. A convex hull is obtained from this field mask after performing a binary opening operation and an area-size filtering of its regions. The final field mask is calculated by performing a logical \emph{and} operation between the convex hull and the original field mask.
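The following Python sketch illustrates one possible realisation of this field segmentation (the merging tolerance and kernel size are illustrative, and the area-size filtering of regions is omitted for brevity):
\begin{verbatim}
import numpy as np
import cv2
from sklearn.cluster import KMeans

def field_mask(frame_bgr, ignore_mask, merge_tol=10.0):
    """Rough field segmentation by color quantization (k=5).
    ignore_mask marks players and fixed regions (blacked out before clustering);
    merge_tol is an illustrative threshold on the centroid MSE."""
    img = frame_bgr.copy()
    img[ignore_mask] = 0
    pixels = img.reshape(-1, 3).astype(np.float32)   # frames can be downscaled for speed
    labels = KMeans(n_clusters=5, n_init=4).fit_predict(pixels)
    field_label = np.bincount(labels, minlength=5).argmax()  # largest cluster = field
    centers = np.array([pixels[labels == i].mean(0) for i in range(5)])
    mse = ((centers - centers[field_label]) ** 2).mean(1)
    mask = np.isin(labels, np.where(mse < merge_tol)[0]).reshape(img.shape[:2])
    mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))        # area filtering omitted
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return mask.astype(bool)
    hull = cv2.convexHull(np.stack([xs, ys], axis=1).astype(np.int32))
    hull_mask = np.zeros_like(mask)
    cv2.fillConvexPoly(hull_mask, hull, 1)
    return np.logical_and(hull_mask, mask)                    # final field mask
\end{verbatim}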
\paragraph{Motion vector} We calculate the motion vector of each segmented player and of the field by applying their masks to the optical flow, in two steps. In order to avoid cancellations of opposing optical flow vectors, we first compute the magnitude of the dominant optical flow from the second moment matrix, as described in \cite{bigun1991multidimensional,Ballester1998Affine}, and resolve its direction by computing a histogram of flow directions over the four quadrants. In the second step, we subtract the dominant optical flow of the field from that of each player.
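One plausible realisation of this estimator is sketched below; the exact formulation in \cite{bigun1991multidimensional,Ballester1998Affine} may differ, so the code should be read as an illustration of the idea rather than as our implementation:
\begin{verbatim}
import numpy as np

def dominant_motion(flow, mask):
    """Illustrative dominant-motion estimate: principal eigenvector of the
    second moment matrix of the masked flow, with the sign resolved by a
    four-quadrant histogram of flow directions."""
    uv = flow[mask > 0].reshape(-1, 2)              # (N, 2) flow vectors
    if uv.shape[0] == 0:
        return np.zeros(2)
    M = uv.T @ uv / uv.shape[0]                     # 2x2 second moment matrix
    eigvals, eigvecs = np.linalg.eigh(M)
    direction = eigvecs[:, -1]                      # dominant orientation (sign-ambiguous)
    magnitude = np.sqrt(eigvals[-1])
    angles = np.arctan2(uv[:, 1], uv[:, 0])
    quadrant = ((angles + np.pi) // (np.pi / 2)).astype(int) % 4
    dom_angle = (np.bincount(quadrant, minlength=4).argmax() + 0.5) * (np.pi / 2) - np.pi
    if np.dot(direction, [np.cos(dom_angle), np.sin(dom_angle)]) < 0:
        direction = -direction
    return magnitude * direction
\end{verbatim}
The field's dominant vector, estimated in the same way over the field mask, is then subtracted from each player's vector.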
\subsection{Players Graph}
\label{sec:playerGraph}
We encode the players appearing in a frame as nodes of a graph. The features of each node are the five-dimensional player classification vector from Section \ref{sec:playerClassification}, its estimated motion vector from Section \ref{sec:playerMotionVector}, the area of its bounding box, its position on the field, the projected center of the frame on the field, and the camera calibration score. In comparison with \cite{Cioppa2021CameraCalibration}, we consider that two players are linked in the graph if they are at most 5 meters apart.
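A sketch of the per-frame graph construction is shown below; the dictionary keys are illustrative, and the resulting tensors can be wrapped in, e.g., a PyTorch Geometric \texttt{Data} object:
\begin{verbatim}
import numpy as np
import torch

def build_player_graph(players, link_dist=5.0):
    """Build one frame graph. `players` is a list of dicts with keys:
    'cls' (5-dim), 'motion' (2-dim), 'area' (scalar), 'xy' (2-dim pitch position
    in meters), 'frame_center' (2-dim), 'calib_score' (scalar). Names are ours."""
    feats = [np.concatenate([p['cls'], p['motion'], [p['area']],
                             p['xy'], p['frame_center'], [p['calib_score']]])
             for p in players]
    x = torch.tensor(np.stack(feats), dtype=torch.float)
    xy = np.stack([p['xy'] for p in players])
    # undirected edges between players at most `link_dist` meters apart
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    src, dst = np.nonzero((dist <= link_dist) & ~np.eye(len(players), dtype=bool))
    edge_index = torch.tensor(np.stack([src, dst]), dtype=torch.long)
    return x, edge_index
\end{verbatim}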
Our model processes the graph information in a time window of 15 seconds sampled at 2 \emph{fps}. First, each graph is individually processed by four consecutive dynamic edge graph CNN \cite{Wang2019DynamicGCNN} blocks. Then, the model divides the outputs into two equal parts that serve as input to two independent NetVLAD pooling layers~\cite{Arandjelovic16}. The purpose of this strategy is to preserve the temporal context before and after an action \cite{Giancola2021NetVLADPlusPlus}. The vocabulary size of each NetVLAD layer is $k=64$ clusters. Finally, both streams are joined by a fully connected layer that predicts the action.
\section{Experiments}
\label{sec:experiments}
\subsection{Dataset}
\label{sec:dataset}
Our experiments were done using the SoccerNet-V2 dataset~\cite{Deliege2020SoccerNetv2}. Currently, this dataset is the largest of its kind, containing 500 soccer broadcast videos that amount to more than 750 hours. For the task of action spotting, all its videos were annotated with 17 different action categories (for example, \emph{penalty}, \emph{yellow card}, \emph{corner}, etc.). Each action annotation is temporally bound to a single timestamp, in other words, to a specific minute and second of the match. Moreover, these actions are further divided into \emph{visible} (explicitly shown) or \emph{unshown} events in the video.
Additionally, camera calibration and player detection data at a sampling rate of 2 \emph{fps} were supplied by \cite{Cioppa2021CameraCalibration}. They trained a student network based on \cite{Sha2020CameraCalibration} to compute the camera calibration. For the player detections, they provided bounding boxes extracted using Mask R-CNN~\cite{He2017MaskRCNN}.
\subsection{Player Classification Evaluation}
\label{sec:playerClassificationExperiment}
Since automatic player classification consists of ({\em i}\/)\xspace clustering players into one of five categories and ({\em ii}\/)\xspace training a classifier using the clustered samples as training/validation splits, the objective of this experiment is to measure the performance of both subtasks. On the one hand, we measure the purity of the samples obtained for each category. On the other hand, we evaluate the classification performance of our method when trained on these splits.
\paragraph{Data} We use a single match, since the SoccerNet-V2 dataset lacks team-level player annotations. For the player clustering subtask we use the ground truth originally provided for this match. Specifically, following the description in Section \ref{sec:playerClassification}, we cluster the players from the provided 10,800 frames containing 126,287 bounding boxes of detected people. For the player classification subtask we only consider the frames from the second half of the match. Following the filtering process of Section \ref{sec:playerClassification}, we discard frames with a low camera calibration confidence score and frames showing close-ups of people. We manually label the resulting frames into the five considered categories. We did not take into account people not playing on the field, for example, players warming up on the side of the field, coaches, photographers, etc. Additionally, we label players not detected in the ground truth. We obtain a total of 3,740 annotated frames and 50,256 bounding boxes of the five different classes.
\paragraph{Players clustering evaluation} In order to assess the quality of the data collected in an unsupervised manner for the training/validation splits, we calculate the purity of the data for each class. In particular, we visually inspect the obtained samples of each class and count the number of samples that correspond to another category or that show more than one person.
\paragraph{Player classification evaluation} We use the annotated bounding boxes from the second half-match as our test split. Since some players were manually added and some people were removed during the annotation process, we only use the bounding boxes with the highest intersection over union (IoU) with respect to the ground truth data. This results in 45,566 samples for the test set. The distribution of the resulting data splits for this experiment is shown in Figure \ref{fig:trainingValidationSplits}. To measure the performance we use the classification accuracy.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\columnwidth]{training_validation_splits.pdf}
\caption{The distribution of the data splits for measuring player classification performance. Note that the training and validation splits were obtained in an unsupervised manner by our method.}
\label{fig:trainingValidationSplits}
\end{figure}
\begin{figure}[!t]
\centering
\input{figUnsupervisedErrors.tex}
\caption{Examples of errors in the unsupervised clustered data.}
\label{fig:unsupervisedErrors}
\end{figure}
\paragraph{Results} The performance results for this experiment are shown in Table \ref{tab:playersClassificationPerformance}. For the unsupervised clustering of the training/validation data, the overall purity score is 98.5\%. Some examples of errors in the unsupervised collected data are shown in Figure \ref{fig:unsupervisedErrors}; these examples include not only overlapping players but also other segmented objects near the players. For the classification subtask, we obtain an accuracy of 97.72\% over all classes. The results indicate that the natural class imbalance affects the performance of individual classes, specifically the \emph{goalkeeper} classes.
\subsection{Node Information and Edges Representation}
\label{sec:ablation}
This experiment serves as an ablation study of the graph representation of the players. Its purpose is to measure the effect on performance of ({\em i}\/)\xspace removing the player classification and motion information from the nodes, and ({\em ii}\/)\xspace varying the distance below which two players are considered linked in the graph.
\begin{table}[!t]
\caption{Performance results for automatic player classification and class purity.}
\label{tab:playersClassificationPerformance}
\centering
\input{tabPlayersResults.tex}
\end{table}
\begin{table}[!t]
\caption{Performance of using different player information on nodes of the graph.}
\label{tab:nodeInformationResults}
\centering
\input{tabNodeInformationResults.tex}
\end{table}
\begin{table}[!t]
\caption{Performance of considering distinct distances between players for edge connections.}
\label{tab:distanceResults}
\centering
\input{tabDistanceResults.tex}
\end{table}
\paragraph{Data} For the ablation study on the node information, we set as baseline information (\textbf{Position}) the location of the players on the soccer pitch, the projected center of the frame on the field, and the camera calibration score. We add to this baseline the calculated motion vector of the player (\textbf{Motion vector+Position}), its predicted team class (\textbf{Player classification+Position}), and both of them (\textbf{All}). In all of these combinations, the linking distance between players was 5 meters.
For the ablation study on the edges, we evaluate linking distances between players of $5, 10, \ldots, 25$ meters. These values cover short to long ranges of the soccer pitch, as a radius of 25 meters around a player covers roughly half of the field.
\paragraph{Evaluation} In order to have results comparable with previous works, we use the same public annotated data partitions as presented in \cite{Deliege2020SoccerNetv2}. Moreover, we evaluate the performance with the action spotting metric introduced in \cite{Giancola2018SoccerNetv1}. Therefore, we calculate the mean average precision (mAP) across all classes given a temporal Intersection-over-Union (tIoU) of an action, with a time tolerance $\sigma$ equal to 60 seconds.
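For illustration, a simplified sketch of the tolerance-based matching underlying this metric (for a single class and a single tolerance) is given below; the official SoccerNet evaluation code should be used when reporting results:
\begin{verbatim}
import numpy as np

def spotting_precision_recall(pred_times, pred_scores, gt_times, tol=60.0):
    """Greedy matching of predicted spots to ground-truth timestamps of one
    class within a tolerance (seconds); returns precision/recall points that
    can be integrated into an AP. Simplified illustration only."""
    order = np.argsort(-np.asarray(pred_scores))
    matched = np.zeros(len(gt_times), dtype=bool)
    tp = np.zeros(len(order))
    for rank, i in enumerate(order):
        dists = np.abs(np.asarray(gt_times) - pred_times[i])
        dists[matched] = np.inf
        j = dists.argmin() if len(gt_times) else -1
        if j >= 0 and dists[j] <= tol:
            matched[j] = True
            tp[rank] = 1
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(order)) + 1)
    recall = cum_tp / max(len(gt_times), 1)
    return precision, recall
\end{verbatim}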
\paragraph{Results} The results of using different node information about the players are shown in Table \ref{tab:nodeInformationResults}. They show that the player classification yields a slightly better action spotting performance than the motion vector information, and that combining them preserves their contributions, reaching an average mAP of 43.29\%. Furthermore, the results of varying the linking distance between the players of the graph are presented in Table \ref{tab:distanceResults}. They show that increasing the distance between players reduces the action spotting performance. Thus, 5 meters is an adequate distance to link the players in the graph.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{images/sota_late_fusion.pdf}
\caption{A schematic overview of our multimodal architecture that integrates audiovisual streams.}
\label{fig:lateFusionArchitecture}
\end{figure}
\subsection{Different Graph Neural Network Backbones}
\label{sec:gcnBackbones}
The goal of this experiment is to evaluate the performance of different graph neural network backbones in our architecture. These backbones are placed immediately after the graph input and before the pooling mechanism of our architecture, viz. before the NetVLAD pooling layers. In particular, we consider substituting our dynamic edge convolutional (\textbf{DynamicEdgeConv}) graph layers \cite{Wang2019DynamicGCNN} with blocks of Graph Convolutional Network (\textbf{GCN}) \cite{Kipf2016GCN} and edge convolutional (\textbf{EdgeConv}) \cite{Wang2019DynamicGCNN} graph layers. Additionally, we also test the graph architecture described in \cite{Cioppa2021CameraCalibration}, which uses 14 consecutive deep graph convolutional (\textbf{DeeperGCN}) layers introduced by \cite{Li2021DeepGCNs}.
\paragraph{Data} For all tested architectures we use the same node information described throughout Section \ref{sec:method}, that is, the location, player classification, and motion vector of each player. Moreover, we consider that two players share an edge in the graph if they are at most 5 meters apart.
\paragraph{Evaluation} We use the same data partitions and action spotting metric described in the experiment of Section \ref{sec:ablation}.
\paragraph{Results} The results for this experiment are shown in Table \ref{tab:backboneResults}. They indicate that the best configuration uses DynamicEdgeConv layers as the backbone. This configuration reaches an action spotting score of 43.29\% and is followed by DeeperGCN \cite{Li2021DeepGCNs}, the backbone used in \cite{Cioppa2021CameraCalibration}.
\begin{table}[!t]
\caption{Performance of using different graph neural network backbones.}
\label{tab:backboneResults}
\centering
\input{tabBackboneResults.tex}
\end{table}
\begin{table}[!t]
\caption{Performance of combining different modalities to our graph-based model.}
\label{tab:multimodalResults}
\centering
\input{tabMultimodalResults.tex}
\end{table}
\input{tabResults.tex}
\subsection{Comparison with the State of the Art}
\label{sec:comparison}
In this experiment, we compare our model with state-of-the-art methods on the same dataset. Since our model only considers local scene information, we also test combining it with global information features. These features are combined with our architecture using a late fusion approach \cite{Karpathy2014LargeScale}, as shown in Figure \ref{fig:lateFusionArchitecture}. In this architecture all the weights are frozen except those of the last fully connected (FC) layer. We consider features extracted from the audio and the frames of the video sequence as additional modalities for our graph-based model, and we measure the performance of all combinations of these audiovisual modalities.
\paragraph{Data} For our graph-based stream (\textbf{Graph}), we use all the player information detailed in Section \ref{sec:method}. For the audio stream (\textbf{Audio}), the short-time Fourier transform (STFT) spectrogram is computed from the audio signal resampled at 16 kHz with a window size of 25 ms, a window hop of 10 ms, and a periodic Hann window. A stabilized log-mel spectrogram is calculated by mapping the spectrogram to 64 mel bins covering the range 125--7500 Hz. This final spectrogram is used as input to the VGGish network~\cite{Gemmeke2017AudioSet}, a CNN pretrained on AudioSet. An audio feature vector covers a non-overlapping half-second chunk and corresponds to the 512-dimensional vector of the last fully connected (FC) layer. These features were originally extracted from the SoccerNet-V2 dataset by \cite{Vanderplaetse2020Improved}. For the visual stream (\textbf{RGB}), we use the features provided by \cite{Deliege2020SoccerNetv2}. These features are obtained by first using a pretrained ResNet-152~\cite{He2016ResNet} network and, subsequently, applying principal component analysis (PCA) to its FC layer, which reduces the dimension from 2,048 to 512. The sampling frequency for all modalities is 2 frames per second.
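A minimal sketch of this late fusion head is given below; the three stream networks are placeholders for the already-trained graph, audio, and RGB models, and the number of output classes is left as a parameter:
\begin{verbatim}
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Concatenate frozen per-stream outputs and learn only the final FC layer.
    graph_net / audio_net / rgb_net are placeholder, already-trained streams."""
    def __init__(self, graph_net, audio_net, rgb_net, dim_sum, n_classes):
        super().__init__()
        self.streams = nn.ModuleList([graph_net, audio_net, rgb_net])
        for p in self.streams.parameters():
            p.requires_grad = False               # all weights frozen ...
        self.fc = nn.Linear(dim_sum, n_classes)   # ... except the last FC layer

    def forward(self, graph_in, audio_in, rgb_in):
        with torch.no_grad():
            feats = [net(x) for net, x in zip(self.streams,
                                              (graph_in, audio_in, rgb_in))]
        return self.fc(torch.cat(feats, dim=-1))
\end{verbatim}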
\paragraph{Evaluation} In order to compare our method with other published results, we use the same public splits and action spotting metric described in the experiment of Section \ref{sec:ablation}.
\paragraph{Results} The results of using different input modalities are shown in Table \ref{tab:multimodalResults}. They show that our graph model performs better than the standalone audio stream but not better than the RGB stream. This may be because broadcast videos contain interleaved shots of field views, replays, and close-ups of the players and the audience, while a graph is only created from field views. Furthermore, the performance increases when the three streams (audio, RGB, and graph) are combined. Finally, the comparison with the state-of-the-art methods is shown in Table \ref{tab:results}. We obtain an overall action spotting performance of $57.83$\% average-mAP. This result indicates that our method outperforms a similar graph-based method for action spotting, namely CameraCalibration~\cite{Cioppa2021CameraCalibration}. Additionally, our method has results comparable to more computationally intensive state-of-the-art methods such as Vidpress Sports~\cite{Zhou2021FeatureCombination}, as it ranks second (or first) in seven categories out of seventeen (e.g., \emph{substitution} or \emph{ball-out}).
\section{Conclusions}
\label{sec:conclusions}
We present a method for soccer action spotting based on a graph representation of the players on the field. In this graph, a player is represented as a node that encodes its location, motion vector, and player classification. Players share an edge in the graph according to the physical distance between them. All the node information is extracted from the segmentation of the players and their location given by the camera calibration information. In particular, we propose an automatic player classification method that considers five different classes: \emph{player from team 1} or \emph{team 2}, \emph{goalkeeper A} or \emph{B}, and \emph{referee}. This method relies on color histograms of the segmented players and their position on the field. We extensively evaluate our methods on the SoccerNet-V2 dataset, the largest collection of broadcast soccer videos. Our player classification method obtains an accuracy of $97.72$\% on our annotated benchmark. Furthermore, we obtain an overall action spotting performance of $57.83$\% average-mAP by combining our method with audiovisual modalities. Our method outperforms similar graph-based approaches and has results comparable with the state of the art.
\paragraph{Acknowledgments} The authors acknowledge support by MICINN/FEDER UE project, ref. PID2021-127643NB-I00, H2020-MSCA-RISE-2017 project, ref. 777826 NoMADS, and ReAViPeRo network, ref. RED2018-102511-T.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\subsection{Background}
When visiting an unfamiliar place, we search for a hotel that is close to a set of preferences and plan accordingly so that we can explore many places in a short time. The preferences may be a river, a museum, a shopping mall, a lake, a cafe, a hospital, or even a movie theatre. There are many possible hotels, but we want one that is close to every preference, and we need the result quickly, without manually researching the place. Here, we treat each preference as a class. Intuition would tell us to search Google Maps to select such a hotel, but that approach is not always reliable and takes a lot of time. The main aim of designing the Wisp~\cite{R1}~\cite{R2} application is therefore to visualise a place according to user preferences in real time, without the burden of manual research. We introduce an algorithm for finding the cluster (the boundary location), named JJCluster (which stands for Jimut-Jisnoo Cluster). The software module takes a list of places, each with its class, name, latitude, and longitude, and displays a map of the cluster formed in real time. This algorithm can be used in a variety of research areas involving general clustering, where we need to select a unique closest node from each class.
JJCluster marks a boundary around the location that best matches those preferences; a coordinate inside that boundary is likely to be one of the best places to stay. Other applications of this software include finding developed areas according to a certain set of preferences. The major factors that boosted the development of the Wisp application are discussed below.
\subsection{Availability of data}
Cheap storage~\cite{R3} and sensors have made it possible to collect a variety of data in huge quantities, and users generate large amounts of data on social networking platforms. The Wisp application used Marker Cluster as its core algorithm until the design of JJCluster. Beilschmidt et al. (2017)~\cite{R4} proposed the Marker Cluster algorithm to cluster huge datasets on a map as circles. A large number of data points may lead to visual clutter, and most browsers cannot render so many points on a map, which results in occlusion and loss of information. The circles in Marker Cluster change dynamically when zooming in or out of the map, which helps optimise the available resources and visualise any number of points. However, with Marker Cluster we could not determine whether a place has all of the preference classes or only some of them in large quantities, so it may lead to wrong visualisation and selection of hotspots.
Similarly, location-based social networks (LBSNs) have recently gained popularity in helping targeted users~\cite{R5} meet their needs. The use of local experts, the natives, to review and suggest areas of interest to users with similar behavioural patterns has been the most advanced way of recommending places. Semantic similarities and geographic correlations between users are captured simultaneously by the concept of local experts. This technique is widely used for collecting data for application programming interfaces (APIs), which helps in the design of the Wisp application.
The following section discusses the related work done by researchers in various fields of data visualisation (both in 2D and 3D), dynamic programming for resource allocation, and the design of recommendation systems for targeted users.
\section{Related Works}
Visualisations can help humans recognise patterns~\cite{R6} and areas of concern in a dataset, which may contain hidden relations, and they give ideas for processing the data further. They may help policy makers~\cite{R7}, volunteers, programme managers, donors, and non-governmental organisations to invest money in critical places by extracting information from visualisations. In the domain of big data, scientists use statistical and data mining techniques~\cite{R3} to find the information in which they are interested. When the data are huge, scientists use partitioning techniques together with clustering methods to visualise them. Big data analysis provides the opportunity to test hypotheses with new data, which has the potential to yield new insights into classical research questions across multiple disciplines~\cite{R8}. Old methods such as surveys are being replaced by sensor data, which need a lot of pre-processing to extract valuable and accurate information that was not available with the older methods. There has been constant concern about breaches of privacy when crowd-sourced data are collected. Quality measures and pre-processing~\cite{R9} can help to minimise such privacy risks; this includes the basic steps of formatting, cleaning, validating, and anonymising the data before publishing. Recent years have boosted the popularity of location-based services, where users generate huge amounts of data through social media. Such data can be used for recommending important places~\cite{R5} to classes of users. Geodemographics~\cite{R10} may help to find social similarities among variables within a dataset using cluster analysis and profile matching, which are indicators of the social, economic, and demographic conditions of a neighbourhood. These form the basis for building crowd-sourced platforms for data collection, which are necessary for visualising data at large scale.
Researchers built a cyber-infrastructure~\cite{R11} solution named PolarGlobe, which enables comprehensive collaboration and analysis. Their solution does not require the installation of plug-ins, since it is a web-based framework. State-of-the-art tools used for the measurement of climate change risk, adaptation~\cite{R12} and vulnerability assessment use the web as their common infrastructure. Data-driven analysis now dominates transportation research thanks to the increasing amounts of data collected from accurate transport-system sensors; such quantities were earlier estimated with the help of mathematical equations due to the scarcity of data. In this way, data are collected to gain insights that improve existing systems.
\subsection{Dynamic programming algorithms}
Dynamic programming algorithms are useful in fields ranging from telecommunications, transportation, electrical power systems, and water supply to emergency services, which provide critical human resources to certain areas. In designing an optimal route for network flow there are constraints, such as the topological characteristics of the networked infrastructure, under which we choose the best feasible path. Computer scientists, economists, statisticians, and cartographic researchers have made significant contributions to finding optimal paths and solutions for a given set of problems. Optimisation can even help in designing cities by placing areas of critical need, such as hospitals~\cite{R13}, in locations that are accessible very quickly. The allocation of charging stations for electric vehicles~\cite{R14} is a promising solution to tackle climate change impacts, improve air quality, and enhance growth sustainability. The management and location of underground assets~\cite{R15}, such as the depth of cables and pipelines, are of great concern in many countries; misinformation about them can even result in tragic accidents.
Bellman was one of the earliest scientists to introduce dynamic programming~\cite{R16}~\cite{R17}. He worked on the dynamic programming problem of finding the shortest path between two points in a graph of interconnected roads with varying traffic density. This marked the beginning of a new field of optimisation known as dynamic programming, and scientists subsequently worked on optimising various features of a problem. The most famous such problem is the Travelling Salesman Problem (TSP), for which the question of whether we can do better than existing algorithms remains extremely interesting. The Floyd--Warshall algorithm finds the shortest path between any two vertices in a graph and can help to find the distance between any two points of interest. Scientists can also use optimisation when there are interdiction scenarios in a network.
\subsection{Cartographic visualizations for pattern recognition}
Dodge et al.~\cite{R18} used a two-phased approach to study movement patterns in a more systematic and comprehensive way. They developed a conceptual framework for the movement behaviour of different moving objects and then classified it. They devised a toolbox for the study of data mining algorithms for the movement of animals, humans, traffic, industrial goods, bees, etc., which follow generic spatiotemporal patterns.
Delort~\cite{R19} examined a technique for visualising large spatial datasets in web mapping systems. The algorithm created a cluster tree, which helped to visualise the clusters on a map, without clutter, at any given scale. Though the algorithm was slow for its time, it was later improved by many researchers and is widely used to date; it is, coincidentally, the basis of the idea behind JJCluster, but here we focus on quality (classes) over quantity. Phan, Xiao, Yeh, Hanrahan, and Winograd~\cite{R20} designed an algorithm using binary hierarchical clustering to formulate flow maps, which are used extensively in studying and visualising the migration of birds and people, the flow of traffic in a network, the flow of rivers, etc. Moreover, Stachoň, Šašinka, Čeněk, Angsüsser, Kubíček, Štěrba, and Bilíková~\cite{R21} studied the selection of graphic variables, offering a new perspective on the use of graphics in cartographic visualisations. We use circle markers in the visualisation of this application, along with dynamically generated legends.
Zhu and Xu~\cite{R22} reduced the influence of location ambiguity using geotags and geo-clusters to exploit the semantic features of geo-tagged photographs through clustering. This method measures semantic correlation, capturing synonymous tag representations rather than only identical tags. Delmelle, Zhu, Tang, and Casas~\cite{R23} proposed a web-based open-source geospatial toolkit, which is particularly needed in developing countries lacking GIS software. It provides a user-friendly and interactive web-based framework for analysing the outbreak of vector-borne diseases; they analysed dengue outbreaks for the efficient and appropriate allocation of human resources to contain them. Van Etten~\cite{R24} provided an R package for calculating routes and distances over heterogeneous geographic spaces represented as grids, using circuit theory and geographical analysis to compute various distances between routes.
\subsection{Recommendation Systems to targeted users}
D'Ulizia, Ferri, Formica, and Grifoni~\cite{R25} enabled the relaxation of three kinds of query constraints, namely semantic, structural, and topological, by using GeoPQL, which translates natural language queries into map visualisations. They used graph-theoretic procedures for query approximation, helping users who are unaware of exactly what they are looking for due to the difficulty of finding and locating data points with naive tags. Cai, Lee, and Lee~\cite{R26} studied the movement of humans to obtain better semantic knowledge about a place with the help of GPS and social media. Dareshiri, Farnaghi, and Sahelgozin~\cite{R27} improved the existing recommendation capabilities of geoportals; in their approach, the geoportal analyses the user's activity and recommends geospatial resources based on the user's desires and preferences. Zhao, Qin, Ye, Wang, and Chen~\cite{R28} identified cluster centres on a map using real-time taxi traffic data. These data were further used to study the popularity of a place over a certain duration of time, revealing crowded areas that urban transportation and planning authorities need to know about in order to reduce congestion.
\subsection{Visualising geospatial data in 3D}
Visualisation in 3D helps to find information that is almost impossible to discover by looking at the raw data. Three-dimensional representations of geographic information on computers are known as geospatial virtual environments (GeoVEs)~\cite{R29}. These environments can engage participants in exploring virtual landscapes and knowledge-scapes while providing different images of reality. Azri, Anton, Ujang, Mioc, and Rahman~\cite{R30} studied various issues that need to be resolved for using geospatial data infrastructures. They optimised storage space, data access time, the data model, and the spatial access method for utilising and managing high-precision laser data, which can occupy 63 GB of disk space. The analysis of 3D visual data with existing 3D software tools such as Autodesk Maya can help to find related patterns in the data, which researchers can carefully use to provide solutions to existing problems. The approach contributes to an intuitive comprehension of complex geospatial information for decision makers in security agencies, as well as for urban planning authorities interested in correlations between urban features.
\section{Materials and Methods}
\subsection{Working of the algorithm}
The algorithm is very simple once we write it down with pen and paper. We consider each point on the map as a node of a graph, and the aim is to select, from each individual class, the node with the shortest total distance. The general working of the algorithm is described in Figs.~\ref{fig1},~\ref{fig2},~\ref{fig3},~\ref{fig4} and~\ref{fig5}. Firstly, we start by choosing the elements of the first class in the graph, as shown in Fig.~\ref{fig1} (the nodes in the list S are coloured black in the graph at every step). We choose element 1 from class A and calculate its distance from all the items of the other classes, i.e., from node 1 to B's node 1, B's node 2, C, and D. We then sum these up, i.e., 6+8+12+13 = 39, and store the total in a list. We do the same for node 2 of class A, obtaining 9+7+6+5 = 27, and add it to the list. Since there is no other element in the first class, we choose the minimum from this list, i.e., point 2 of class A, as shown in Fig.~\ref{fig2}, and store it in the list of selected nodes, called S. We can now say that this point in S is nearest to all the other points of every other class, so the remaining selections are made relative to it. Next, we calculate the distance of class B's node 1 from point 2 of class A (which is the only element present in the list S) and store it in an empty list; the value obtained is 9. We do the same for the second element of class B and store 7 in the list. We then select the point nearest to the list S, i.e., node 2 from class B, and add it to S, as shown in Fig.~\ref{fig3}. We now find the distance of the next class's elements from each element of list S, add them up, and select the element with the minimum total distance. Since only one element of class C is present in the dataset, we store node 1 of class C, after calculating the distance 5+8 = 13, in the list S, as shown in Fig.~\ref{fig4}. Now S contains \{2 from class A, 2 from class B, 1 from class C\}. We continue this process for the next class by calculating the distance of node 1 of class D from every element of S, which is 6+6+3 = 15. The final list S therefore contains the points \{2 from class A, 2 from class B, 1 from class C, 1 from class D\}, as shown in Fig.~\ref{fig5}. We draw lines to separate these points from the other points and, later in this paper, visualise the cluster on a map for real-world data.
If there is no member of a class in an area, that particular class is not considered and its data are skipped. All classes (preferences) are treated as equally important by the algorithm.
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[totalheight=6cm]{fig1.jpg}
\caption{The original graph in the beginning.}
\label{fig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[totalheight=6cm]{fig2.jpg}
\caption{The nearest point from the first class is selected.}
\label{fig2}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[totalheight=4cm]{fig3.jpg}
\caption{The closest point from the second class to the first class is selected.}
\label{fig3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[totalheight=4cm]{fig4.jpg}
\caption{The third point (which is closest since it is only present once) from the third class to the first and second chosen points of their classes is selected.}
\label{fig4}
\end{subfigure}
\caption{A figure containing the structure of graphs formed by selecting the nodes from different classes.}
\label{fig:nodes}
\end{figure}
\begin{figure}
\centering
\includegraphics[totalheight=6.5cm]{fig5.jpg}
\caption{The fourth point from the fourth class (which is closest since it is only present once) from all the selected classes is chosen.}
\label{fig5}
\end{figure}
\subsection{Collecting data for Wisp application}
The algorithm for JJCluster is implemented using Python 3. It uses the Folium~\cite{R31}~\cite{R32} library to visualise geospatial data. Social media generate large amounts of data which, after careful analysis, can be used for mapping and understanding the evolving human landscape~\cite{R33}. These data provide ambient geospatial information, such as momentary social hotspots according to majority preferences, and give a better understanding of those features of a place that cannot be estimated easily.
The data are collected through the Foursquare~\cite{R34}~\cite{R35} API, a social location service allowing users to explore the world around them. It is a general social media platform where users can connect to friends who have a Foursquare account by downloading the app on a mobile device. The Foursquare API offers RESTful services that help developers create applications based on the data, as shown in Fig.~\ref{fig6}. Foursquare is available 24$\times$7 and is maintained by a rich community of users; there are 150,000 developers currently building location-aware experiences with Foursquare technology and data~\cite{R29}. We have chosen the Foursquare API over other APIs for this application because it lets us focus on a given location and retrieve concise data.
The cleaned JSON dump is shown in Fig.~\ref{fig7}. The data are loaded into a pandas data frame and fed to the JJCluster module for further processing and visualisation. This instance shows pre-processed data for the location Tokyo. The data are divided into class, name, latitude, and longitude, and form the raw input for the generic clustering module; any data following this structure can be passed as input to the JJCluster module.
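For illustration, the data collection step can be sketched as follows, assuming the Foursquare v2 venue search endpoint and standard credentials; the endpoint parameters and field names follow that API version and should be adapted if the API changes:
\begin{verbatim}
import requests
import pandas as pd

def fetch_preference_tree(lat, lon, radius_m, preferences,
                          client_id, client_secret,
                          version='20180323', limit=100):
    """Query one venue search per preference (class) and build the
    class/name/latitude/longitude table used by JJCluster."""
    rows = []
    for pref in preferences:
        params = dict(client_id=client_id, client_secret=client_secret,
                      v=version, ll=f'{lat},{lon}', query=pref,
                      radius=radius_m, limit=limit)
        venues = requests.get('https://api.foursquare.com/v2/venues/search',
                              params=params).json()['response']['venues']
        for v in venues:
            rows.append({'class': pref, 'name': v['name'],
                         'latitude': v['location']['lat'],
                         'longitude': v['location']['lng']})
    return pd.DataFrame(rows, columns=['class', 'name', 'latitude', 'longitude'])
\end{verbatim}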
\begin{figure}[tph!]
\begin{center}
\includegraphics[totalheight=8cm]{fig6.jpg}
\caption{RESTful JSON dump of the Foursquare API used for pre-processing}
\label{fig6}
\end{center}
\end{figure}
\begin{figure}[tph!]
\centering
\includegraphics[totalheight=3cm,width=8.5cm]{fig7.jpg}
\caption{The formatted, structured data that is passed as input to the JJCluster module}
\label{fig7}
\end{figure}
The user interface is intuitive: the necessary data are passed as console inputs for visualisation. The user gives the radius, the location, and the preferences as input, as shown in Fig.~\ref{fig8}. The latitude and longitude of the location are looked up from a remote server using the Geocoder API. The Foursquare credentials are read from an external file and the preferences are requested one by one. The data are received as JSON dumps, cleaned, and arranged into a formatted preference tree that is passed as input to the JJCluster module. The distances are then optimised using the algorithm, and the Folium map is generated along with the preference legend. The map is generated and visualised in real time using a custom-made HTTP server and can be saved for later reference. The whole working process of the Wisp application is shown in Fig.~\ref{fig8}. Wisp is a stand-alone, loosely coupled, semi web-based application, which can be integrated with a web-based framework to serve a large audience in the future.
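A minimal Folium sketch of the visualisation step is given below; the polygon is drawn directly through the selected points, so in practice the points may need to be ordered (e.g., by their convex hull) to avoid a self-intersecting boundary:
\begin{verbatim}
import folium

def draw_map(selected, location, zoom=13, tiles='OpenStreetMap'):
    """Plot the nodes chosen by JJCluster and the boundary polygon they form.
    `selected` is a list of (class, name, lat, lon) tuples."""
    fmap = folium.Map(location=location, zoom_start=zoom, tiles=tiles)
    for cls, name, lat, lon in selected:
        folium.CircleMarker(location=[lat, lon], radius=6, fill=True,
                            popup=f'{cls}: {name}').add_to(fmap)
    # boundary through the selected points (order them for a clean zone)
    boundary = [[lat, lon] for _, _, lat, lon in selected]
    folium.Polygon(locations=boundary, color='red', fill=True,
                   fill_opacity=0.1).add_to(fmap)
    fmap.save('wisp_map.html')   # served afterwards by the custom HTTP server
    return fmap
\end{verbatim}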
\begin{figure}[tph!]
\begin{center}
\includegraphics[totalheight=6cm]{fig8.jpg}
\caption{The internal structure of the Wisp application}
\label{fig8}
\end{center}
\end{figure}
After successfully fetching and cleaning the data with the help of the Foursquare credentials, we use JJCluster to find the cluster. The algorithm is discussed below.
\subsection{Pseudo-code for the JJCluster algorithm in Wisp}
The pseudo-code for the JJCluster module is given in Algorithm 1.
\begin{algorithm}
\SetAlgoLined
\KwResult{Returns a list of unique nodes from each class }
\textbf{full\_tree} = [
$[class1, [[ name11, lat11, lon11], [name12, lat12, lon12],$
$[name13, lat13, lon13], ...]$,
$[class2, [[ name21, lat21, lon21], [name22, lat22, lon22], $
$ [name23, lat23, lon23], ...]$,
$[class3, [[ name31, lat31, lon31], [name32, lat32, lon32], $
$[name33, lat33, lon33], ...]$,
$...$
]\;
\textbf{S} = $\phi$\;
\For{ \textbf{class\_list} in \textbf{full\_tree} }{
\textbf{distance\_list} = $\phi$ \;
\textbf{this\_class} = select class from \textbf{class\_list}\;
\eIf{\textbf{first iteration}}{
\For{\textbf{item} in \textbf{full\_tree}}{
\textbf{item\_class} = select class from \textbf{item}\;
\If{ \textbf{item\_class} $\neq$ \textbf{this\_class}}{
\textbf{distance} = \textbf{haversine$($} coordinates of \textbf{item}, coordinates of \textbf{item} in \textbf{class\_list} \textbf{$)$}\;
\textbf{distance\_list} = \textbf{distance\_list} $\cup$ \textbf{distance}\;
}
}
}{
\For{\textbf{item} in \textbf{this\_class}}{
\textbf{distance} = \textbf{haversine$($} coordinates of \textbf{item}, coordinates of \textbf{item} in \textbf{S} \textbf{$)$}\;
\textbf{distance\_list} = \textbf{distance\_list} $\cup$ \textbf{distance}\;
}
}
\textbf{s} = find minimum \textbf{distance} from \textbf{distance\_list}\;
\textbf{S} = \textbf{S} $\cup$ \textbf{s}\;
}
\textbf{S} = contains all the unique nodes from each class of the list \textbf{full\_tree}\;
\caption{JJCluster}
\end{algorithm}
Here, the great-circle distance between two latitude--longitude pairs is calculated with the haversine~\cite{R35} formula, which gives an accurate distance between two coordinates on a map. The algorithm results in the successful formation of the best preference cluster and visualises it on a Folium map. This is the general cartographic algorithm behind Wisp, and it can be used in different fields of scientific study for finding optimal points by selecting a unique node from each class. We will now discuss the visualisations generated using the Wisp application.
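For completeness, a compact Python sketch of Algorithm 1, together with the haversine helper, is given below; variable names are illustrative:
\begin{verbatim}
import math

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two coordinates."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def jj_cluster(full_tree):
    """full_tree: list of [class_name, [[name, lat, lon], ...]] entries.
    Returns one selected [class_name, name, lat, lon] per class."""
    selected = []
    for class_name, items in full_tree:
        best, best_dist = None, float('inf')
        for name, lat, lon in items:
            if not selected:   # first class: distance to every node of the other classes
                others = [(la, lo) for c, it in full_tree if c != class_name
                          for _, la, lo in it]
            else:              # later classes: distance to the nodes already selected
                others = [(la, lo) for _, _, la, lo in selected]
            dist = sum(haversine(lat, lon, la, lo) for la, lo in others)
            if dist < best_dist:
                best, best_dist = [class_name, name, lat, lon], dist
        if best is not None:
            selected.append(best)
    return selected
\end{verbatim}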
\section{Results and Discussions}
\subsection{Visualisations generated in Wisp}
Wisp is a small application of the JJCluster algorithm. It takes the Foursquare secret, Foursquare ID, name of the location, number of preferences, preference list, and the type of map as input, and displays the map through a custom-made HTTP server in real time, as shown in Fig.~\ref{fig6}. The source code of the Wisp application is uploaded to GitHub. The quality and correctness of the visualisation depend on the data. Since Foursquare is community maintained and trusted by developers, it should provide accurate information about a place. However, certain places are not listed by the Foursquare API, or their number of reviews is too low; such places are usually underdeveloped, and the owners of such establishments probably do not list themselves on the Foursquare platform. Porting this algorithm to Google Maps data may lead to more accurate results, since Google Maps lists almost all the places present in any given location. As there is an easy premium integration of Google services, this clustering algorithm could be used with Google Maps data to give highly accurate map results. The Wisp application is free, as is the regular data fetched from the Foursquare API. Foursquare offers a premium package that costs money, but using premium Google services to get pinpoint locations of places would be a better option for obtaining accurate results from this clustering algorithm. The maps obtained using the free Foursquare services and implemented using Python 3 are shown in Figs.~\ref{fig9},~\ref{fig10},~\ref{fig11}, and~\ref{fig12}. The preferences are plotted in a legend on the left side of each map, and each preference forms a unique class. Each data point on the map is treated as an individual node in JJCluster. The minimum-distance nodes are selected and plotted on the Folium maps.
One of the many visualisations generated by the Wisp application is shown in Fig.~\ref{fig9}, targeting Tokyo, Japan, with 15 preferences within a radius of 9 km; it forms the largest polygon among all the maps. Since the number of preferences is high, the zonal boundary of the area widens, because all the preferred places may not be present within a small area. The map of Kolkata, India is shown in Fig.~\ref{fig10}. It has eight preferences, i.e., restaurant, gym, park, ice cream, movie theatre, hospital, river, and books, which are treated as a unique set of classes on the map. The map is visualised in a browser, and the details of each data point can be seen with the help of pop-ups. The total number of data points indicates the popularity of Foursquare locations in that place. The whole map covers a radius of 10 km, but only a certain part of it is shown. We can also infer the high market value of property, such as houses and flats, in places that are close to many preferences. Similarly, in Fig.~\ref{fig11} we see the small part of Moscow, Russia that is clustered and zoned (out of a much larger set of data points) according to the algorithm. We studied a more diverse set of classes for New York in Fig.~\ref{fig12}. Different shapes and sizes of zonal polygons may be found in a city for different preferences. It may even happen that the data are updated in the Foursquare API and the shape of the polygon changes over time; since this is a community-maintained API, it helps users get the most recent data according to their needs.
\begin{figure}
\begin{subfigure}{.5\textwidth}
\includegraphics[totalheight=12.5cm, width=7.5cm]{fig9.jpg}
\caption{Open Street map of \href{https://jimut123.github.io/blogs/JJC_WISP/Tokyo_15.html}{Tokyo} with 15 preferences in a radius of 9 Km (above: Zoomed out, below: Zoomed in).}
\label{fig9}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[totalheight=12.5cm, width=7.5cm]{fig10.jpg}
\caption{Open Street map of \href{https://jimut123.github.io/blogs/JJC_WISP/Kolkata_8.html}{Kolkata} with 8 preferences in a radius of 9 Km (above: Zoomed out, below: Zoomed in)}
\label{fig10}
\end{subfigure}
\caption{A figure containing the maps formed by the Wisp application for Tokyo (left) and Kolkata (right).}
\label{fig:maps}
\end{figure}
\begin{figure}[h!]
\begin{subfigure}[t]{.5\textwidth}
\includegraphics[totalheight=12.5cm, width=7.5cm]{fig11.jpg}
\caption{Stamen Terrain map of \href{https://jimut123.github.io/blogs/JJC_WISP/Moscow_6.html}{Moscow} with 6 preferences in a radius of 7 Km (above: Zoomed out, below: Zoomed in).}
\label{fig11}
\end{subfigure}\hfill
\begin{subfigure}[t]{.5\textwidth}
\includegraphics[totalheight=12.5cm, width=7.5cm]{fig12.jpg}
\caption{Open Street map of \href{https://jimut123.github.io/blogs/JJC_WISP/New_York_7.html}{New York} with 7 preferences in a radius of 10 Km (above: Zoomed out, below: Zoomed in).}
\label{fig12}
\end{subfigure}
\caption{A figure containing the maps formed by the Wisp application for Moscow (left) and New York (right).}
\label{fig:maps1}
\end{figure}
\subsection{Optimisation matrix of the visualisations}
The names of the places are treated as unique nodes; the minimum distance is found by selecting one node from each class and adding it to the list of selected nodes S. We optimised the distance by applying the JJCluster algorithm. We define certain terms related to the optimisation matrix below.
\textbf{D}: The distance of the considered node to all the nodes in the list S.
\textbf{T}: The total distance of all the nodes in a particular class to all the nodes in the list S.
\textbf{k}: $\frac{T}{D}$, the optimization factor.
The optimisation matrix for Tokyo is shown in Table 1. Here we denote each entry of the list S by its class name, as space in the table is limited; each class name stands for the point selected for that class. As per the algorithm, the list S accumulates the selected minimum-distance points, and hence the size of the list increases after each iteration.
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Sl. No.} & \textbf{Name} & \textbf{Class} & \textbf{List S} & \textbf{Latitude} & \textbf{Longitude} & \textbf{Distance (D)} & \textbf{Total Distance (T)} & \textbf{Optimization Factor (k)} \\ \hline
1 & restaurant prunier & restaurant & {[}restaurant{]} & 35.677701 & 139.761121 & 1058.644523 & 39840.316139 & 37.633328 \\ \hline
2 & Chen Mapo Tofu & tofu & {[}restaurant, tofu{]} & 35.677815 & 139.736694 & 2.207082 & 68.494887 & 31.034142 \\ \hline
3 & Gym & gym & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym{]}\end{tabular} & 35.675640 & 139.758585 & 2.316799 & 266.431336 & 114.999777 \\ \hline
4 & HMV and Books & books & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books{]}\end{tabular} & 35.673216 & 139.759761 & 2.949521 & 378.860547 & 128.448172 \\ \hline
5 & river friends & river & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river{]}\end{tabular} & 35.667960 & 139.761283 & 5.050077 & 448.085848 & 88.728517 \\ \hline
6 & Jansem Museum & museum & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum{]}\end{tabular} & 35.671814 & 139.761493 & 4.143950 & 616.359580 & 148.737205 \\ \hline
7 & Tabitus Station & station & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station{]}\end{tabular} & 35.674524 & 139.761528 & 4.170173 & 457.917897 & 109.807887 \\ \hline
8 & Peter: The Bar & bar & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station, bar{]}\end{tabular} & 35.674652 & 139.760642 & 4.081305 & 289.851438 & 71.019295 \\ \hline
9 & \begin{tabular}[c]{@{}c@{}}Kozanro Chinese \\ restaurant\end{tabular} & chinese & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station, bar,\\ chinese{]}\end{tabular} & 35.673575 & 139.762887 & 4.907414 & 726.208372 & 147.981895 \\ \hline
10 & \begin{tabular}[c]{@{}c@{}}Tokyo FM Ginza \\ Park\end{tabular} & park & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station, bar,\\ chinese, park{]}\end{tabular} & 35.672573 & 139.763209 & 5.351536 & 813.479012 & 152.008497 \\ \hline
11 & TOHO Cinemas & movie & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station, bar,\\ chinese, park, movie{]}\end{tabular} & 35.660054 & 139.729657 & 31.137526 & 120.719115 & 3.876965 \\ \hline
12 & \begin{tabular}[c]{@{}c@{}}Toranomon \\ Hospital\end{tabular} & hospital & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station, bar,\\ chinese, park, movie, hospital{]}\end{tabular} & 35.668780 & 139.746678 & 16.129829 & 1061.389711 & 66.802913 \\ \hline
13 & Rock Fish & fish & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station, bar,\\ chinese, park, movie, hospital,\\ fish{]}\end{tabular} & 35.670040 & 139.759960 & 10.701534 & 1154.081905 & 107.842662 \\ \hline
14 & \begin{tabular}[c]{@{}c@{}}Alpha note \\ stationary\end{tabular} & stationary & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station, bar,\\ chinese, park, movie, hospital,\\ fish, stationary {]}\end{tabular} & 35.670040 & 139.759960 & 31.636799 & 180.066801 & 5.691688 \\ \hline
15 & \begin{tabular}[c]{@{}c@{}}Kawajun Hardware\\ Showroom\end{tabular} & hardware & \begin{tabular}[c]{@{}c@{}}{[}restaurant, tofu,\\ gym, books, river,\\ museum, station, bar,\\ chinese, park, movie, hospital,\\ fish, stationary, hardware {]}\end{tabular} & 35.681928 & 139.787323 & 45.477805 & 156.733348 & 3.446370 \\ \hline
\end{tabular}%
}
\caption{Optimisation matrix for Tokyo using JJCluster}
\end{table*}
\begin{table*}[!tph]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Sl. No.} & \textbf{Name} & \textbf{Class} & \textbf{List S} & \textbf{Latitude} & \textbf{Longitude} & \textbf{Distance (D)} & \textbf{Total Distance (T)} & \textbf{Optimization Factor (k)} \\ \hline
1 & \begin{tabular}[c]{@{}c@{}}Oasis Restaurant,\\ Park Street\end{tabular} & restaurant & {[}restaurant{]} & 22.553118 & 88.352491 & 390.372423 & 14127.761798 & 36.181202 \\ \hline
2 & \begin{tabular}[c]{@{}c@{}}Aura Gym,\\ Park Street\end{tabular} & gym & {[}restaurant, gym{]} & 22.554730 & 88.352216 & 0.181536 & 3.617280 & 19.925980 \\ \hline
3 & \begin{tabular}[c]{@{}c@{}}Elliot Park ,\\ Park Street\end{tabular} & park & {[}restaurant, gym, park{]} & 22.553883 & 88.352672 & 0.229147 & 121.066560 & 528.335549 \\ \hline
4 & \begin{tabular}[c]{@{}c@{}}Metro Ice Cream,\\ Park Street\end{tabular} & ice cream & \begin{tabular}[c]{@{}c@{}}{[}restaurant, gym, park,\\ ice cream{]}\end{tabular} & 22.553568 & 88.352151 & 0.250870 & 304.604499 & 1214.192773 \\ \hline
5 & \begin{tabular}[c]{@{}c@{}}UFO Moviez India\\ Ltd., 68, Purna Das \\ Rd, Triangular Park\end{tabular} & movie & \begin{tabular}[c]{@{}c@{}}{[}restaurant, gym, park,\\ ice cream, movie{]}\end{tabular} & 22.517512 & 88.358810 & 16.387831 & 33.253821 & 2.029178 \\ \hline
6 & \begin{tabular}[c]{@{}c@{}}Nightangle Hospital,\\ Shakespeare Sarani\end{tabular} & hospital & \begin{tabular}[c]{@{}c@{}}{[}restaurant, gym, park,\\ ice cream, movie, \\ hospital{]}\end{tabular} & 22.545964 & 88.351471 & 6.763714 & 545.871007 & 80.705805 \\ \hline
7 & River Ploice Jetty & river & \begin{tabular}[c]{@{}c@{}}{[}restaurant, gym, park,\\ ice cream, movie, \\ hospital, river{]}\end{tabular} & 22.564127 & 88.338234 & 15.359777 & 196.605708 & 12.800037 \\ \hline
8 & \begin{tabular}[c]{@{}c@{}}Oxford Bookstore,\\ Park Street\end{tabular} & books & \begin{tabular}[c]{@{}c@{}}{[}restaurant, gym, park,\\ ice cream, movie, \\ hospital, river,books{]}\end{tabular} & 22.553652 & 88.351732 & 7.049999 & 653.233020 & 92.657174 \\ \hline
\end{tabular}%
}
\caption{Optimisation matrix for Kolkata using JJCluster}
\end{table*}
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Sl. No.} & \textbf{Name} & \textbf{Class} & \textbf{List S} & \textbf{Latitude} & \textbf{Longitude} & \textbf{Distance (D)} & \textbf{Total Distance (T)} & \textbf{Optimization Factor (k)} \\ \hline
1 & Coffee and Books & books & {[}books{]} & 55.754403 & 37.622445 & 354.560790 & 10618.142223 & 29.947311 \\ \hline
2 & Bosco Cafe & cyber cafe & {[}books,cyber cafe{]} & 55.754878 & 37.620674 & 0.122773 & 61.572129 & 501.513267 \\ \hline
3 & Ice Cave & ice cream & \begin{tabular}[c]{@{}c@{}}{[}books,cyber cafe,\\ ice cream{]}\end{tabular} & 55.751477 & 37.628756 & 1.143593 & 146.870961 & 128.429429 \\ \hline
4 & Moscow Tea & tea & \begin{tabular}[c]{@{}c@{}}{[}books,cyber cafe,\\ ice cream, tea{]}\end{tabular} & 55.752457 & 37.626123 & 0.948188 & 195.201380 & 205.867904 \\ \hline
5 & \begin{tabular}[c]{@{}c@{}}Mini Gym Hotel \\ Metropol\end{tabular} & gym & \begin{tabular}[c]{@{}c@{}}{[}books,cyber cafe,\\ ice cream, tea, gym{]}\end{tabular} & 55.758079 & 37.620857 & 2.369243 & 393.457266 & 166.068773 \\ \hline
6 & \begin{tabular}[c]{@{}c@{}}Royal Chinese \\ Restaurant\end{tabular} & \begin{tabular}[c]{@{}c@{}}chinese \\ food\end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}books,cyber cafe,\\ ice cream, tea, gym,\\ chinese food{]}\end{tabular} & 55.754480 & 37.625580 & 1.620614 & 284.816591 & 175.746076 \\ \hline
\end{tabular}%
}
\caption{Optimisation matrix obtained for Moscow using JJCluster}
\end{table*}
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Sl. No.} & \textbf{Name} & \textbf{Class} & \textbf{List S} & \textbf{Latitude} & \textbf{Longitude} & \textbf{Distance (D)} & \textbf{Total Distance (T)} & \textbf{Optimization Factor (k)} \\ \hline
1 & \begin{tabular}[c]{@{}c@{}}Museum of Chinese\\ in America\end{tabular} & chinese & {[}chinese{]} & 40.719361 & -73.999086 & 448.537714 & 15335.590138 & 34.190191 \\ \hline
2 & Soho Thai & thai & {[}chinese, thai{]} & 40.720117 & -73.999504 & 0.091115 & 70.532027 & 774.101656 \\ \hline
3 & Kaigo Coffee Room & coffee & \begin{tabular}[c]{@{}c@{}}{[}chinese, thai, \\ coffee{]}\end{tabular} & 40.718633 & -74.000367 & 0.315488 & 105.182689 & 333.396911 \\ \hline
4 & \begin{tabular}[c]{@{}c@{}}Indigo Books HQ\\ Design Studio\end{tabular} & books & \begin{tabular}[c]{@{}c@{}}{[}chinese, thai, \\ coffee, books{]}\end{tabular} & 40.719422 & -73.997885 & 0.485303 & 223.256302 & 460.035156 \\ \hline
5 & \begin{tabular}[c]{@{}c@{}}Tribeca Soho \\ Animal Hospital\end{tabular} & hospital & \begin{tabular}[c]{@{}c@{}}{[}chinese, thai, \\ coffee, books, hospital{]}\end{tabular} & 40.720578 & -74.004869 & 2.000487 & 586.780255 & 293.318670 \\ \hline
6 & Columbus Park & park & \begin{tabular}[c]{@{}c@{}}{[}chinese, thai, \\ coffee, books, hospital,\\ park{]}\end{tabular} & 40.715603 & -73.999884 & 2.417840 & 360.109107 & 148.938377 \\ \hline
7 & \begin{tabular}[c]{@{}c@{}}Onieal's Grand Street\\ Bar and Restaurant\end{tabular} & bar & \begin{tabular}[c]{@{}c@{}}{[}chinese, thai, \\ coffee, books, hospital,\\ park, bar{]}\end{tabular} & 40.719593 & -73.997966 & 1.553863 & 196.666538 & 126.566175 \\ \hline
\end{tabular}%
}
\caption{Optimisation matrix for New York using JJCluster}
\end{table*}
The optimisation matrices correspond to actual data fetched from the foursquare API and scaled on the map accordingly. The list of selected nodes grows with every iteration. Similarly, Tables 2, 3 and 4 contain the optimisation matrices obtained for Kolkata, Moscow and New York respectively. The minimum distance may increase or decrease from one iteration to the next, depending on the overall closeness of a point to the current list. The algorithm has a complexity of $\Theta(n^{2})$, which we may improve in future work. The visualisations can be viewed in a web browser and on a smartphone. Our next target is to build an application serving these maps on the web, possibly alongside mobile applications.
The main objective of designing the Wisp application is to eliminate tedious and error-prone manual search. The clustering algorithm merely needs data in a particular format, so hacking and tweaking it for other uses is encouraged, since this is an open-sourced project.
\section{Conclusions and Future Scope}
We have successfully created a real-time visualisation application using the JJCluster algorithm. We studied an application domain of the JJCluster algorithm, selecting the best places on a map according to user preferences, through the Wisp application. The main objective of the JJCluster algorithm is to select one data point per class in such a way that the total distance between the nearest nodes of different classes is minimised. The algorithm can be used by researchers of different fields, such as statisticians, economists, cartographers and other scientists, who are interested in finding hotspots according to classes. It may also be used to decide where to deploy drone swarms in locations where critical action is needed, since the algorithm minimises the total distance between the chosen locations. Such applications may further attract interdisciplinary researchers to evolve this algorithm according to their needs. For example, an oceanographer studying a place may face too many classes of objects in their research; the objects of unique classes may be scattered over a wide area, but using this algorithm they can find the places that contain all of the unique classes within the smallest span of the map. This helps to reduce fuel, time and resources and to focus on the areas of concern that contain all the resources they might want to study.
Wisp is built as a map-based system and is open sourced; scientists can use this module by passing suitably formatted data to obtain visualisations in real time. The real fuel here is data: if the data is accurate, the results will be good, but certain places may lack data, in which case the use of premium Google services may be preferable to the free foursquare services for the Wisp application. The visualisation is done in real time and saves users from manually searching a place. Since foursquare is a community-maintained API, some labelling of the data may be wrong, as users have the freedom to add and manipulate data. We may use Google APIs in future for their popularity and correctness. In conclusion, we just need to pass the necessary formatted data to obtain visualisations in real time with the help of the Wisp application, in which the nearest unique nodes from different classes are clustered using the JJCluster algorithm.
\section{Acknowledgements}
I am thankful to my father, Dr. Jadab Kumar Pal, Deputy Chief Executive, Indian Statistical Institute, Kolkata, for constant motivation and support for completion of this project. I dedicate this clustering algorithm to my brother Jisnoo Dev Pal. I also acknowledge Prof Shalabh Agarwal, former Head of the department of Computer Science, St. Xavier's College, Kolkata for his support. Finally, I am thankful to all my well-wishers who supported me in every part of this project. The source code for Wisp using JJCluster can be found here : \href{https://github.com/Jimut123/wisp}{https://github.com/Jimut123/wisp}.
\bibliographystyle{plain}
\section*{Abstract}
We demonstrate in this paper the use of tools of complex network theory to describe the strategy of Australia and England in the recently concluded Ashes $2013$ Test series.
Using partnership data made available by cricinfo during the Ashes $2013$ Test series, we generate a batting partnership network (BPN) for each team, in which nodes correspond to batsmen and links represent runs scored in partnerships between batsmen. The resulting networks provide a visual summary of the pattern of run-scoring by each team, which helps us identify potential weaknesses in a batting order. We use different centrality scores to quantify the performance and relative importance of a player and the effect of removing him from the team. We observe that England was an extremely well connected team, in which lower-order batsmen consistently contributed significantly to the team score. Contrary to this, Australia showed a strong dependence on its top-order batsmen.
\section*{Cricket as a complex network}
\begin{figure*}
\begin{center}
\includegraphics[width=14.5cm]{BPN_1stTest_Ashes_2013.eps}
\caption{ \label{fig:directed} Batting partnership networks for the first Ashes Test between England and Australia. Visually, we observe the difference in scoring pattern between the two teams. For Australia, the dependence on BJ Haddin, PJ Hughes and SPD Smith is evident. The situation is different for England, where IR Bell formed regular partnerships with top-order players like AN Cook as well as lower-order players like GP Swann. England displayed this team work throughout the Ashes 2013, while for Australia the scoring depended mostly on the top-order players. }
\end{center}
\end{figure*}
The tools of social network analysis have previously been applied to sports. For example, a network approach was developed to quantify the performance of individual players in soccer \cite{duch10} and football \cite{pena}. Network analysis tools have been applied to football \cite{girvan02}, Brazilian soccer players \cite{onody04} and Asian Go players \cite{xinping}. Successful and unsuccessful performances in water polo have been quantified using a network-based approach \cite{mendes2011}. Head-to-head matchups between Major League Baseball pitchers and batters were studied as a bipartite network \cite{saavedra09}. More recently, network-based approaches were developed to rank US college football teams \cite{newman2005}, tennis players \cite{radicchi11} and cricket teams and captains \cite{mukherjee2012}.
The central goal of network analysis is to capture the interactions of individuals within a team. Teams are defined as groups of individuals collaborating with each other with the common goal of winning a game. Within a team, every member co-ordinates across different roles and subsequently influences the success of the team. In the game of cricket, two teams compete with each other. Although the success or failure of a team depends on the combined effort of the team members, the performance or interactive role enacted by individuals in the team is an area of interest for ICC officials and fans alike. We apply network analysis to capture the importance of individuals in the team. The game of cricket is based on a series of interactions between batsmen when they bat in partnership, or between a batsman and the bowler he is facing. A connected network among batsmen thus arises from these interactions.
In cricket two batsmen always bat in partnership, although only one is on strike at any time. The partnership of two batsmen comes to an end when one of them is dismissed or when the innings ends. There are $11$ players in each team, and the batting pair who start the innings is referred to as the opening pair. For example, two opening batsmen $A$ and $B$ start the innings for their team. In network terminology, this can be visualized as a network with two nodes $A$ and $B$, the link representing the partnership between the two players. Now, if batsman $A$ is dismissed by a bowler, then a new batsman $C$ arrives to form a new partnership with batsman $B$. This batsman $C$ thus bats at position number three. Batsmen who bat at positions three and four are called top-order batsmen. Those who bat at positions five to seven are called middle-order batsmen. Finally, the batsmen batting at positions eight to eleven are referred to as tail-enders.
Thus a new node $C$ gets linked with node $B$. In this way one can generate the entire batting partnership network of an innings. The innings comes to an end when the batting team has no wickets left or when the duration of play runs out. The score of a team is the sum of all the runs scored during its batting partnerships. In our work we consider the batting partnership networks of Australia and England during the Ashes $2013$ Test series.
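A sketch of how such a directed, weighted batting partnership network can be assembled with networkx is given below; the partnership records used as input are hypothetical and only illustrate the construction described above.
\begin{verbatim}
# Sketch: build a directed, weighted batting partnership network (BPN) from
# partnership records (batsman_a, runs_a, batsman_b, runs_b) -- assumed format.
import networkx as nx

partnerships = [
    ("AN Cook", 20, "IR Bell", 35),
    ("IR Bell", 40, "GP Swann", 25),
]

G = nx.DiGraph()
for a, runs_a, b, runs_b in partnerships:
    total = runs_a + runs_b
    if total == 0:
        continue
    # Directed link of weight runs_a/total from B to A (A's share of the
    # stand) and runs_b/total from A to B, accumulated over repeated stands.
    w_ba = G[b][a]["weight"] if G.has_edge(b, a) else 0.0
    w_ab = G[a][b]["weight"] if G.has_edge(a, b) else 0.0
    G.add_edge(b, a, weight=w_ba + runs_a / total)
    G.add_edge(a, b, weight=w_ab + runs_b / total)

# In-strength of each batsman: sum of incoming link weights
print(dict(G.in_degree(weight="weight")))
\end{verbatim}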
\subsection*{Performance Index}
We generate weighted and directed batting partnership networks for all teams, where the weight of a link is equal to the fraction of runs scored by a batsman out of the total runs scored in a partnership with another batsman. Thus, if two batsmen $A$ and $B$ score $n$ runs between them, where the individual contributions are $n_A$ and $n_B$, then a directed link of weight $\frac{n_{A}}{n}$ is drawn from $B$ to $A$. In Figure~\ref{fig:directed} we show an example of the weighted and directed batting partnership networks of the two teams, Australia and England. We quantify the batting performance of individual players within a team by analyzing four centrality scores: in-strength, PageRank score, betweenness centrality and closeness centrality.
For the weighted network the in-strength $s_{i}^{in}$ is defined as
\begin{equation}
s_{i}^{in} = \displaystyle\sum_{j \ne i} \omega_{ji}
\end{equation}
where $\omega_{ji}$ is given by the weight of the directed link.
We quantify the importance or `popularity' of a player with the use of a complex network approach and evaluating the PageRank score. Mathematically, the process is described by the system of coupled equations
\begin{equation}
p_i = \left(1-q\right) \sum_j \, p_j \, \frac{{\omega}_{ij}}{s_j^{\textrm{out}}}
+ \frac{q}{N} + \frac{1-q}{N} \sum_j \, p_j \, \delta \left(s_j^{\textrm{out}}\right) \;\; ,
\label{eq:pg}
\end{equation}
where ${\omega}_{ij}$ is the weight of a link and $s_{j}^{\textrm{out}} = \sum_{i} {\omega}_{ij}$ is the out-strength of node $j$. $p_i$ is the PageRank score assigned to player $i$ and represents the fraction of the overall ``influence'' sitting in the steady state of the diffusion process on vertex $i$ (\cite{radicchi11}). $q \in \left[0,1\right]$ is a control parameter that awards a `free' popularity to each player and $N$ is the total number of players in the network.
The term $ \left(1-q\right) \, \sum_j \, p_j \, \frac{{\omega}_{ij}}{s_j^{\textrm{out}}}$ represents the portion of the score received by node $i$ in the diffusion process, obeying the hypothesis that nodes redistribute their entire credit to neighboring nodes. The term $\frac{q}{N}$ stands for a uniform redistribution of credit among all nodes. The term $\frac{1-q}{N} \, \sum_j \, p_j \, \delta\left(s_j^{\textrm{out}}\right)$ serves as a correction in the case of the existence of nodes with null out-degree, which would otherwise behave as sinks in the diffusion process. Note that the PageRank score of a player depends on the scores of all other players, so all scores need to be evaluated at the same time. To implement the PageRank algorithm on the directed and weighted network, we start with a uniform probability density equal to $\frac{1}{N}$ at each node of the network. Next, we iterate Eq.~(\ref{eq:pg}) and obtain a steady-state set of PageRank scores for the nodes of the network. Finally, the PageRank scores are sorted to determine the rank of each player. According to tradition, we use a uniform value of $q=0.15$; this choice of $q$ ensures a higher value of the PageRank scores \cite{radicchi11}.
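A sketch of this iteration, read directly from Eq.~(\ref{eq:pg}), is given below; the toy network and the fixed number of iterations are illustrative choices rather than the exact implementation used for Table~\ref{tableS3}.
\begin{verbatim}
# Sketch: power iteration for the PageRank scores defined above on a
# weighted, directed batting partnership network (q = 0.15, start at 1/N).
import networkx as nx

def pagerank_scores(G, q=0.15, iters=200):
    nodes = list(G.nodes)
    N = len(nodes)
    p = dict.fromkeys(nodes, 1.0 / N)
    out_s = {v: sum(d["weight"] for _, _, d in G.out_edges(v, data=True))
             for v in nodes}
    for _ in range(iters):
        dangling = sum(p[v] for v in nodes if out_s[v] == 0)
        new_p = {}
        for i in nodes:
            diffusion = sum(p[j] * d["weight"] / out_s[j]
                            for j, _, d in G.in_edges(i, data=True)
                            if out_s[j] > 0)
            new_p[i] = (1 - q) * diffusion + q / N + (1 - q) * dangling / N
        p = new_p
    return p

# Toy BPN: weight = fraction of partnership runs credited to the target node
G = nx.DiGraph()
G.add_edge("AN Cook", "IR Bell", weight=0.6)
G.add_edge("IR Bell", "AN Cook", weight=0.4)
G.add_edge("GP Swann", "IR Bell", weight=0.7)
G.add_edge("IR Bell", "GP Swann", weight=0.3)
print(sorted(pagerank_scores(G).items(), key=lambda kv: -kv[1]))
\end{verbatim}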
Another performance index is betweenness centrality, which measures the extent to which a node lies on paths between other nodes. In cricketing terms, betweenness centrality measures how much of the run scoring during batting partnerships flows through a player. Batsmen with high betweenness centrality are crucial for the team, since they keep the scoring going without losing their wicket. These batsmen are important because their dismissal has a huge impact on the structure of the network. A single player with a very high betweenness centrality is therefore also a weakness, since the entire team is vulnerable to the loss of his wicket. In an ideal case, every team would seek a combination of players in which the betweenness scores are uniformly distributed, while the opponent team would seek to remove the player with the highest betweenness centrality.
Closeness centrality measures how easy it is to reach a given node in the network \cite{wasserman,peay}. In cricketing terms, it measures how well connected a player is in the team. Batsmen with high closeness allow the option for changing the batting order depending on the nature of the pitch or match situation.
In Table~\ref{tableS3} we compare the performance of players for different teams.
In the first Test, according to the PageRank, in-strength, betweenness and closeness measures, {\it IR Bell} was the most successful batsman for England. The betweenness centrality of {\it IR Bell} was the highest among all the players. This is also supported by the fact that in the second innings of the first Test at Trent Bridge, he scored $109$ when England were $121$ for $3$. He added $138$ crucial runs for the seventh wicket with Broad, and helped England post a target of $311$. According to the centrality scores, {\it IR Bell} was the most deserving candidate for the Player of the match award.
In the second Test, England's {\it JE Root}, with the highest in-strength, betweenness and closeness, emerges as the most successful of all batsmen. Even his PageRank score is the third best among all the batsmen, with Australia's {\it UT Khawaja} and {\it MJ Clarke} occupying the first and second spots. Interestingly, {\it JE Root} was declared the Player of the match. In the third Test at Old Trafford, Australia dominated the match. This dominance is reflected in the performance of {\it MJ Clarke} (highest PageRank, in-strength, betweenness and closeness among all batsmen), who quite deservedly received the Player of the match award. In the fourth Test we observe that for Australia the centrality scores are not evenly distributed. For example, {\it CJL Rogers} has the highest PageRank, in-strength, betweenness and closeness, while the other players have much lower centrality scores; the pattern of run scoring depends heavily on the top-order batsmen. On the contrary, for England we observe that a lower-order batsman like {\it TT Bresnan} has the highest PageRank, indicating strong contributions from the lower-order batsmen. Similarly, the betweenness scores are much more uniformly distributed $-$ a sign of a well-balanced batting unit. Lastly, in the final Test, we again observe a strong dependence on top-order batsmen like {\it SR Watson} and {\it SPD Smith} in the Australian team; {\it SPD Smith} and {\it SR Watson} are the two most central players, while the remaining batsmen have either small or zero betweenness scores. Looking at the scores of England, we observe that the middle-order batsman {\it IR Bell} is the most central player, {\it MJ Prior} has the highest closeness, the top-order batsman {\it IJL Trott} has the highest in-strength and, finally, the lower-order batsman {\it GP Swann} has the highest PageRank. Thus there is no single predominant role in the English batting line-up, indicating a well connected team compared to the Australian batting line-up. We also observe that, in terms of centrality scores, England's IR Bell performed consistently in most of the Test matches and duly shared the Player of the series award with {\it RJ Harris} of Australia.
Thus we see that the tools of social network analysis are able to capture the consensus opinion of cricket experts. The details of the procedure are available online \cite{satyamacs} and have been accepted for publication in Advances in Complex Systems.
\begin{table}[ht!]
\centering
\caption{{Performance of batsmen for England and Australia for Ashes $2013$ } The top five performers are ranked according to their PageRank score and their performance is compared with In-strength, betweenness centrality and closeness centrality.}
\begin{tabular}{cccccc}
\hline
\multirow{2}{*}{}
{Country} & {Players} & {PageRank} & {In-strength}& {Betweenness} & {Closeness}\\
\hline
\multirow{3}{*}{{Australia}}
& \multirow{6}{*}{}
BJ Haddin & 0.2327 & 2.51818 & 0.4666 & 0.4761\\
& CJL Rogers & 0.1296 & 1.4099 & 0.4555 & 0.4545\\
Test $1$ & PJ Hughes & 0.1087 & 1.1771 & 0.5555 & 0.5882\\
&SPD Smith & 0.1045 & 1.7468 & 0.5333 & 0.5555 \\
& PM Siddle & 0.0967 & 1.55 & 0.0 & 0.3333 \\
\hline
\multirow{3}{*}{{England}}
& \multirow{6}{*}{}
IR Bell & 0.1924 & 2.6740 & 0.7555 & 0.7142\\
& GP Swann &0.1208 & 1.4666 & 0.3777 & 0.5263 \\
Test $1$ & IJL Trott & 0.1043 & 1.4240 & 0.1666 & 0.5263\\
& JM Bairstow & 0.1038 & 1.1257 & 0.0111 & 0.4761 \\
& AN Cook & 0.0840 & 1.0473 & 0.1666 & 0.5263 \\
\hline
\hline
\multirow{3}{*}{{Australia}}
& \multirow{6}{*}{}
UT Khawaja & 0.2188 & 1.5897 & 0.3444 & 0.4166\\
& MJ Clarke & 0.2088 & 2.2086 & 0.3666 & 0.2777\\
Test $2$& SPD Smith & 0.1355 & 1.1176 & 0.0 & 0.1923\\
&CJL Rogers & 0.0846 & 0.5549 & 0.1444 & 0.3125\\
&BJ Haddin & 0.0754 & 0.8179 & 0.3333 & 0.4545 \\
\hline
\multirow{3}{*}{{England}}
& \multirow{6}{*}{}
JE Root & 0.1882 & 2.4721 & 0.6111 & 0.625\\
& SCJ Broad & 0.1306 & 0.9583 & 0.2000 & 0.3333\\
Test $2$& TT Bresnan & 0.0946 & 1.2727 & 0.4666 & 0.5555\\
& KP Pietersen & 0.0908 & 1.625 & 0.0 & 0.4 \\
& JM Anderson & 0.0862 & 1.0 & 0.3555 & 0.4347 \\
\hline
\hline
\multirow{3}{*}{{Australia}}
& \multirow{6}{*}{}
MJ Clarke & 0.2513 & 2.9611 & 0.7142 & 0.8\\
&CJL Rogers & 0.1590 & 2.5731 & 0.1071 & 0.6153\\
Test $3$& BJ Haddin & 0.1235 & 1.4841 & 0.25 & 0.5714\\
& SR Watson & 0.1113 & 1.6 & 0.1785 & 0.5714 \\
& MA Starc & 0.1042 & 1.2304 & 0.0 & 0.5333 \\
\hline
\multirow{3}{*}{{England}}
& \multirow{6}{*}{}
MJ Prior & 0.1787 & 1.5252 & 0.5333 & 0.5555\\
& KP Pietersen & 0.1757 & 2.6446 & 0.6777 & 0.6666\\
Test $3$& AN Cook & 0.1537 & 2.2147 & 0.2777 & 0.5263\\
& JE Root & 0.0970 & 1.3035 & 0.1 & 0.5263 \\
& GP Swann & 0.0749 & 0.7333 & 0.0 & 0.3703 \\
\hline
\hline
\multirow{3}{*}{{Australia}}
& \multirow{6}{*}{}
CJL Rogers& 0.2195 & 3.1408 & 0.7 & 0.7142\\
&BJ Haddin & 0.1523 & 1.9342 & 0.2666 & 0.4545\\
Test $4$& SR Watson & 0.1423 & 1.7771 & 0.0 & 0.4545\\
& DA Warner & 0.0849 & 1.0184 & 0.2777 & 0.5 \\
& MJ Clarke & 0.0797 & 1.6383 & 0.2777 & 0.5 \\
\hline
\multirow{3}{*}{{England}}
& \multirow{6}{*}{}
TT Bresnan & 0.2146 & 2.0091 & 0.5333 & 0.5263\\
& IR Bell & 0.1422 & 2.6809 & 0.5333 & 0.5555\\
Test $4$& JM Bairstow & 0.1291 & 1.2647 & 0.2 & 0.4761\\
& KP Pietersen & 0.0906 & 1.1675 & 0.3111 & 0.4761 \\
& AN Cook & 0.0902 & 1.2164 & 0.2 & 0.5 \\
\hline
\hline
\multirow{3}{*}{{Australia}}
& \multirow{6}{*}{}
SPD Smith & 0.2155 & 3.6472 & 0.5222 & 0.7363\\
& SR Watson & 0.1734 & 2.9791 & 0.3333 & 0.6231\\
Test $5$& MJ Clarke & 0.1576 & 2.1258 & 0.1 & 0.6231\\
& JP Faulkner & 0.1313 & 2.7275 & 0.0 & 0.54 \\
& RJ Harris & 0.0706 & 1.2333 & 0.0 & 0.45 \\
\hline
\multirow{3}{*}{{England}}
& \multirow{6}{*}{}
GP Swann & 0.1777 & 1.6097 & 0.3778 & 0.4166\\
& MJ Prior & 0.1367 & 1.5 & 0.6 & 0.5263\\
Test $5$& IJL Trott & 0.1310 & 2.0510 & 0.3555 & 0.4761\\
& JM Anderson & 0.1091 & 0.8 & 0.0 & 0.4761 \\
& IR Bell & 0.0945 & 1.1711 & 0.6444 & 0.3030 \\
\hline
\end{tabular}
\label{tableS3}
\end{table}
\section*{Conclusion}
To summarize, we investigated the structural properties of batting partnership networks (BPNs) in the Ashes 2013 Test series. Our study reveals that network analysis is able to examine individual-level network properties among players in cricket.
The batting partnership networks not only provide a visual summary of the proceedings of matches for the two teams, but can also be used to analyze the importance or popularity of a player in the team. We identify the pattern of play for Australia and England and the potential weaknesses in their batting line-ups. Identifying the `central' player in a batting line-up is always crucial for the home team as well as the opponent team. We observe that Australia depended more on its top batsmen, while England played a more collective team game.
There are some additional features which could be incorporated in our analysis. The networks in our study are static, and we assumed all batsmen to be equally athletic in the field. One could add an ``athletic index'' as an attribute to each batsman. Adding the fielders as additional nodes in the networks could also provide a truer picture of the difficulty faced by a batsman while scoring.
In real life many networks display community structure: subsets of nodes having dense node-node connections, but between which few links exist. Identifying community structure in real-world networks could help us understand and exploit these networks more effectively. Our study leaves a wide range of open questions which could stimulate further research in other team sports as well.
\section*{Acknowledgements}
The author thanks the cricinfo website for public availability of data.
\clearpage
\section{Introduction}
{Agoda}~\footnote{{Agoda}.com} is a global online travel agency for
hotels, vacation rentals, flights and airport transfers.
Millions of guests find their accommodations and
millions of accommodation providers list their properties in {Agoda}.
Among these millions of properties listed in {Agoda},
many have prices that are fetched through third-party suppliers.
These third-party suppliers do not synchronize hotel prices with {Agoda}:
every time {Agoda}\ needs a hotel price from one of these suppliers,
it must make one HTTP request to the supplier to
fetch the corresponding hotel price.
However, due to the sheer volume of the search requests received from users,
it is impossible to forward every request to the supplier.
Hence, a cache database which temporarily stores the hotel prices is built.
For each hotel price received from the supplier,
{Agoda}\ stores it into this cache database for some amount of time and evicts the price from the cache once it expires.
Figure~\ref{fig:flow2} abstracts the system flow.
Every time a user searches for a hotel in {Agoda},
{Agoda}\ first reads from the cache.
If there is a hotel price for this search in the cache,
it is a `hit' and we serve the user with the cached price.
Otherwise, it is a `miss' and the user will not see a price for that hotel.
For every `miss',
{Agoda}\ sends a request to the supplier to get the price for that hotel
and puts the returned price into the cache,
so that subsequent users can benefit from the cached price.
However, every supplier limits the number of requests we can send per second; once we reach the limit, subsequent requests are ignored. Hence, this design poses four challenges.
\begin{figure}[t]
\centering
\includegraphics[width = 4.3in]{flow.png}
\caption{System flow of third-party supplier price serving.
If a cached price exists, {Agoda}\ serves the cached price to the user.
Otherwise, {Agoda}, on a best-effort basis,
sends a request to the supplier to fetch the hotel price and puts it in the cache.}
\label{fig:flow2}
\end{figure}
\textbf{Challenge 1: Time-to-live (TTL) determination}.
For a hotel price fetched from the supplier,
how long should we keep such a hotel price in the cache before expiring it?
We call this duration the time-to-live (TTL).
The larger the TTL, the longer the hotel prices stay in the cache database.
As presented in Figure~\ref{fig:ttl_role}, the TTL plays three roles:
\begin{figure}[t!]
\centering
\includegraphics[height=1.9in]{challenge1.png}
\caption{TTL v.s. cache hit, QPS and price accuracy.}
\label{fig:ttl_role}
\end{figure}
\begin{itemize}
\item \textbf{Cache Hit}.
With a larger TTL, hotel prices are cached in the database for a longer period of time and hence, more hotel prices will remain in the database. When we receive a search from our users, there is a higher chance of getting a hit in the database. This enhances our ability to serve our users with more hotel prices from the third party suppliers.
\item \textbf{QPS}.
As we have limited QPS to each supplier, a larger TTL allows more hotel prices to be cached in database. Instead of spending QPS on repeated queries, we can better utilise the QPS to serve a wider range of user requests.
\item \textbf{Price Accuracy}.
As the hotel prices from suppliers changes from time to time, a larger TTL means that the hotel prices in our cache database are more likely to be inaccurate. Hence, we will not be able to serve the users with the most updated hotel price.
\end{itemize}
There is a trade-off between cache hit and price accuracy, and we need to choose a TTL that balances the two.
To the best of our knowledge,
most online travel agencies (OTAs) typically pick
a small TTL ranging from 15 to 30 minutes.
However, this is not optimal.
\textbf{Challenge 2: Cross data centre QPS management}.
\begin{figure}
\centering
\includegraphics[height = 1.9in]{qps_management.png}
\caption{Cross data centre QPS management limitation.
Data centre A peaks around $50\%$ QPS around \texttt{18:00} and
data centre B peaks around $50\%$ QPS around \texttt{04:00}. }
\label{fig:qps-multi-dc}
\end{figure}
{Agoda}\ has several data centres globally to handle the user requests.
For each supplier,
we need to set a maximum number of QPS that each data centre is allowed to send.
However, each data centre has its own traffic pattern.
Figure~\ref{fig:qps-multi-dc} presents an example of
the number of QPS sent to a supplier from
two data centres A and B.
For data centre A, it peaks around $50\%$ QPS around \texttt{18:00}.
At the same time, data centre B peaks around $50\%$ QPS around \texttt{04:00}.
If we evenly distribute this $100\%$ QPS between data centre A and data centre B,
then we are not fully utilizing the $100\%$ QPS.
If we allocate more than $50\%$ QPS to each data centre,
how can we make sure that
data centre A and data centre B never exceed $100\%$ QPS in total?
Note that the impact of breaching the QPS limit could be catastrophic for the supplier,
and might even bring the supplier offline.
\textbf{Challenge 3: Single data centre QPS utilization}.
\begin{figure}
\centering
\includegraphics[height = 1.9in]{qps_unutilize.png}
\caption{Un-utilized QPS}
\label{fig:qps-utilization}
\end{figure}
As mentioned in the previous section,
each data centre has its own traffic pattern:
there are peak periods when we send the largest number of requests to the supplier,
and off-peak periods when we send far fewer.
As demonstrated in Figure~\ref{fig:qps-utilization},
this data centre sends less than $40\%$ QPS to the supplier around \texttt{08:00}.
Hence, similar to the example above, $100\% - 40\% = 60\%$ of this data centre's QPS allowance is not utilized.
\textbf{Challenge 4: Cache hit ceiling}.
The passive system flow presented in Figure~\ref{fig:flow2}
has an intrinsic limitation on how much the cache hit can be improved.
Note that this design sends a request to the supplier to fetch a price
only when there is a miss; it is passive.
Hence, a cache hit only happens if
the same hotel search happened previously and the TTL is larger than the
time difference between the current and previous searches.
We cannot set the TTL to be arbitrarily large, as this lowers the price accuracy as explained in Challenge 1, and as long as the TTL of a specific search is finite, the cached price will expire and the next request for this search will be a miss.
Moreover, even if we could set the TTL arbitrarily large,
hotel searches that have never happened before would always be misses.
For example, if more than 20\% of the requests are new hotel searches,
then it is inevitable that the cache hit stays below 80\%, regardless of how large the TTL is set.
To overcome the four challenges mentioned above,
we propose ${PriceAggregator}$, an intelligent system for hotel price fetching.
As presented in Figure~\ref{fig:superAgg_flow},
before every price is written to the cache (Price DB),
it always goes through a TTL service,
which assigns different TTLs to different hotel searches.
This TTL service is built on historical data and
optimizes the trade-off between cache hit and price accuracy,
which addresses Challenge 1.
Apart from passively sending requests to the supplier to fetch hotel prices,
${PriceAggregator}$ re-invents the process by
adding an aggressive service which pro-actively
sends requests to the supplier to fetch hotel prices
at a constant QPS.
With a constant QPS,
Challenge 2 and Challenge 3 can be addressed easily.
Moreover, this aggressive service does not wait for a hotel search
to appear before sending requests to the supplier;
therefore, it can increase the cache hit and hence addresses Challenge 4.
In summary, we make the following contributions in the paper:
\begin{enumerate}
\item We propose ${PriceAggregator}$,
an intelligent system which maximizes the bookings for a limited QPS.
To the best of our knowledge,
this is the first productionised intelligent system which optimises the utilization of QPS.
\item We present a TTL service,
SmartTTL which optimizes the trade-off between cache hit and price accuracy.
\item Extensive A/B experiments were conducted to show that ${PriceAggregator}$ is effective and increases {Agoda}'s revenue significantly.
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[height=1.5in]{system_flow.png}
\caption{${PriceAggregator}$ system flow. }
\label{fig:superAgg_flow}
\end{figure}
The rest of the paper is organized as follows.
Section~\ref{sec:definition} presents the necessary definitions before the TTL service, {SmartTTL }, is presented in Section 3.
In Section 4, we present the aggressive model.
In Section 5, we present the experimental results and analysis.
Section 6 presents the related work
before we conclude the paper in Section 7.
\section{Preliminary and Definition}
\label{sec:definition}
In this section, we give the necessary definitions.
Figure~\ref{fig:bookinig_flow} presents the major steps in the hotel booking process.
In stage 1, a user requests a hotel price.
In stage 2, if the hotel price already exists in the cache, the user is presented with the cached price; otherwise, the user will not be able to see the hotel price.
In stage 3, if the user is happy with the hotel price, the user clicks to book.
In stage 4, {Agoda}\ confirms with the hotel whether the price is eligible to sell. If it is, {Agoda}\ confirms the booking in stage 5.
\begin{figure}[t]
\centering
\includegraphics[height = 0.65in]{booking_flow.png}
\caption{{Agoda}\ booking flow}
\label{fig:bookinig_flow}
\end{figure}
\begin{definition}
Let $U=\{u_1, u_2, \dots, u_{|U|}\}$
be the set of users requesting for hotels in {Agoda}.
Let $H=\{h_1, h_2, \dots, h_{|H|}\}$
be the set of hotels that {Agoda}\ have.
Let $C=\{c_1,c_2,\dots, c_{|C|}\}$
be the set of search criteria that {Agoda}\ receives,
and each $c_i$ is in the form of
$ \langle \texttt{checkin,checkout,adults,children,}$
$\texttt{rooms} \rangle$.
\end{definition}
In the definition above, $U$ and $H$ are self-explanatory.
For $C$, $\langle \texttt{2020-05-01,}$
$\texttt{2020-05-02, 2,0,1} \rangle$
means a search criteria having the
checkin date as $\texttt{2020-05}$
$\texttt{-01}$,
the checkout date as $\texttt{2020-05-02}$,
the number of adults as $\texttt{2}$,
the number of children as $\texttt{0}$
and the number of room as $\texttt{1}$.
Therefore, we can define the itinerary request and the user search as follows.
\begin{definition}
Let $R = \{r_1, r_2, \dots, r_{|R|}\}$ be the set of itinerary requests that {Agoda}\ sends to the suppliers, where $r_i \in H \times C$.
Let $S = \{s_1, s_2, \dots, s_{|S|}\}$ be the set of searches that {Agoda}\ receives from the users, where $s_i \in U \times H \times C$.
\end{definition}
For example, an itinerary request $r_i = \langle \texttt{Hilton Amsterdam}$, $\texttt{2020-06-01}$, $\texttt{2020-06-02,1,0,1} \rangle$ means that {Agoda}\ sends a request to the supplier to
fetch the price of hotel $\texttt{Hilton Amsterdam}$ for
$\texttt{checkin=2020-06-01}$,
$\texttt{checkout=2020-06-02}$,
$\texttt{adults=1}$,
$\texttt{children=0}$,
$\texttt{rooms=1}$.
Similarly, a user search
$s_i= \langle \texttt{Alex}$,
$\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01}$,
$\texttt{2020-}$
$\texttt{05-02}$,
$\texttt{2}$,$\texttt{0,1} \rangle$
means $\texttt{Alex}$ searched on hotel $\texttt{Hilton Amsterdam}$ for price
on $\texttt{checkin=2020-05-01}$,
$\texttt{checkout=2020-05-02}$,
$\texttt{adults=2}$,
$\texttt{children=0}$,
$\texttt{rooms=1}$.
Note that, if $\texttt{Alex}$ makes the same searches on
$\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01}$,
$\texttt{2020-05-02}$,
$\texttt{2}$,$\texttt{0,1}$
multiple times in a day, it is considered as multiple user searches. Therefore, S here is a multi-set.
\begin{definition}
$P_D(s_i)$ is the probability that a user search $s_i$ hits a hotel price in the cache.
\end{definition}
For example, if $\texttt{Alex}$ makes the 10 searches on
$\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01}$,
$\texttt{2020-05-02}$,
$\texttt{2}$,
$\texttt{0,1}$, and 8 out of these 10 searches hit on the price cached.
Then, $P_D(\langle \texttt{Alex},
\texttt{Hilton Amsterdam},
\texttt{2020-05-01},
\texttt{2020-05-02},
\texttt{2},
\texttt{0,1} \rangle )
= \frac{8}{10} = 0.8$
\begin{definition}
$P_B(s_i)$ is the probability that a user search $s_i$ ends up with a booking attempt, given that the hotel price is in the cache.
\end{definition}
Following the above example, for $\langle \texttt{Alex}$,
$\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01}$,
$\texttt{2020-05-02},
\texttt{2},
\texttt{0,1} \rangle$,
$\texttt{Alex}$ has 8 searches returned prices.
And out of these 8 searches,
$\texttt{Alex}$ makes 2 booking attempts.
Then, $P_B(\langle \texttt{Alex},
\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01},
\texttt{2020-05-02},
\texttt{2},
\texttt{0,1} \rangle) = \frac{2}{8} = 0.25$
\begin{definition}
$P_A(s_i)$ is the probability that the hotel price is accurate when the user makes a booking attempt on search $s_i$.
\end{definition}
Continuing the example above,
out of the 2 booking attempts,
1 booking attempt succeeds.
Hence, $P_A(\langle \texttt{Alex},
\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01},
\texttt{2020-05-02}$,
$\texttt{2},\texttt{0,1} \rangle)$ = $\frac{1}{2} = 0.5$.
Therefore, we can formulate the number of bookings expected as follows.
\begin{definition}
The expected number of bookings is the following
\begin{equation}
K = \sum_{s_i} P_D(s_i) \times P_B(s_i) \times P_A(s_i)
\label{eqn:booking}
\end{equation}
\end{definition}
Therefore, our goal is to maximize $K$.
To maximize $K$,
we would like $P_D(s_i)$, $P_B(s_i)$ and $P_A(s_i)$ to be as high as possible.
$P_B(s_i)$ reflects user behaviour;
as a hotel price fetching system,
we cannot control it,
but we can learn $P_B$ from historical data.
However, $P_D(s_i)$ and $P_A(s_i)$ can be tuned by adjusting the TTL.
As illustrated by Figure~\ref{fig:ttl_role},
to increase $P_D(s_i)$ one can simply increase the TTL;
similarly, to increase $P_A(s_i)$ one just needs to decrease the TTL.
We discuss how to set the TTL to optimize the bookings in Section~\ref{sec:smartTTL}.
\section{SmartTTL}
\label{sec:smartTTL}
In this section,
we explain how we build the SmartTTL service,
which assigns an itinerary-specific TTL to each request in order to optimize bookings.
There are three major steps: price-duration extraction,
price-duration clustering and TTL assignment.
\subsection{Price-Duration Extraction}
Price-duration refers to how long each price stays unchanged. It is approximated by the time difference between two consecutive requests for the same itinerary that {Agoda}\ sends to the supplier. Figure~\ref{fig:ttl_extract} presents an example of extracting the price-duration distribution from empirical data for hotel $\texttt{Hilton}$
$\texttt{Amsterdam}$ and search criteria
$\langle \texttt{2019-10-01,2019-10-02}$, $\texttt{1,0,1} \rangle$.
{Agoda}\ first sends a request to supplier at $\texttt{13:00}$ to fetch for price,
and that's the first time we fetch price for such itinerary.
So, there is no price change and no price-duration extracted.
Later, at $\texttt{13:31}$,
{Agoda}\ sends the second request to the supplier to fetch for price,
and observes that the price has changed.
Hence, the price-duration for the previous price is 31 minutes
(the time difference between \texttt{13:00} and \texttt{13:31}).
Similarly, at $\texttt{14:03}$,
{Agoda}\ sends the third request to the supplier to fetch for price,
and again, observes that the price has changed.
Hence, the price-duration for the second price is 32 minutes.
Therefore, for each itinerary, we can extract an empirical price-duration distribution.
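A sketch of this extraction step is given below, under the assumption that for each itinerary we keep a time-ordered log of (fetch time, price) observations from past supplier responses.
\begin{verbatim}
# Sketch: extract empirical price-durations (minutes) for one itinerary
# from a time-ordered list of (fetch_time, price) supplier observations.
from datetime import datetime

observations = [  # hypothetical data for one itinerary
    (datetime(2019, 10, 1, 13, 0), 120.0),
    (datetime(2019, 10, 1, 13, 31), 125.0),  # price changed -> 31 minutes
    (datetime(2019, 10, 1, 14, 3), 118.0),   # price changed -> 32 minutes
]

def price_durations(obs):
    durations = []
    for (t_prev, p_prev), (t_curr, p_curr) in zip(obs, obs[1:]):
        if p_curr != p_prev:
            durations.append((t_curr - t_prev).total_seconds() / 60.0)
    return durations

print(price_durations(observations))  # [31.0, 32.0]
\end{verbatim}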
\begin{figure}
\centering
\includegraphics[height = 1.8in]{ttl_extraction.png}
\caption{Price duration extraction from empirical data}
\label{fig:ttl_extract}
\end{figure}
\subsection{Price-Duration Clustering}
In {Agoda}, we receive billions of such user searches every day.
It is practically intractable, and unnecessary,
to store a TTL for this volume of search criteria in an in-memory cache, e.g. Redis or Memcached.
Therefore, we need to reduce the cardinality of the user searches,
and we do so through clustering.
Figure~\ref{fig:ttl_cluster} presents the price-duration clustering process:
user searches are clustered together to reduce the cardinality.
In ${PriceAggregator}$, we used ${XGBoost}$~\cite{xgboost}
to rank the clustering features,
and the significant features are $\texttt{checkin}$ and $\texttt{price\_availability}$.
We observe that itinerary requests with the same $\texttt{checkin}$ and $\texttt{price\_availability}$ (whether the hotel is sold out or not)
have similar price-duration distributions.
Hence, we group all supplier requests with the same $\texttt{checkin}$ and $\texttt{price\_availability}$ into the same cluster, and use the aggregated price-duration distribution to represent the cluster.
By doing this, we dramatically reduce the cardinality to $\sim1000$ clusters,
which can easily be stored in any in-memory data structure.
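A sketch of this grouping step is given below; the record layout is an assumption, and pooling the observed price-durations per (\texttt{checkin}, \texttt{price\_availability}) key is the simplest version of the clustering described above.
\begin{verbatim}
# Sketch: pool price-durations into clusters keyed by
# (checkin, price_availability).
from collections import defaultdict

# Hypothetical records: (checkin, price_availability, price_duration_minutes)
records = [
    ("2020-06-01", "available", 31.0),
    ("2020-06-01", "available", 32.0),
    ("2020-06-01", "sold_out", 240.0),
    ("2020-06-02", "available", 45.0),
]

clusters = defaultdict(list)
for checkin, availability, duration in records:
    clusters[(checkin, availability)].append(duration)

for key, durations in clusters.items():
    print(key, "n =", len(durations),
          "mean duration =", sum(durations) / len(durations))
\end{verbatim}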
\begin{figure}[h]
\centering
\includegraphics[height = 1.4in]{TTL_Cluster.png}
\caption{Similar supplier requests are clustered together}
\label{fig:ttl_cluster}
\end{figure}
\subsection{TTL Assignment}
Having completed the clustering,
we next need to assign a TTL to each cluster.
Note that we want to optimize the bookings as expressed in Equation~\ref{eqn:booking},
and the TTL affects both the cache hit ($P_D$ in Equation~\ref{eqn:booking})
and the booking price accuracy ($P_A$ in Equation~\ref{eqn:booking}).
Hence, we want to assign to each cluster the TTL for which Equation~\ref{eqn:booking} is optimised.
For the cache hit, we can easily approximate the cache miss ratio curve~\cite{WWW2020} using the Cumulative Distribution Function (CDF) of the gap time (the time difference between the current request and the previous request for the same itinerary search).
Figure~\ref{fig:cdf} presents the CDF
of the gap time, where the x-axis is the gap time
and the y-axis is the portion of requests whose gap time is $\leq$ a given value.
For example, in Figure~\ref{fig:cdf}, $80\%$ of the requests have a gap time $\leq$ 120 minutes;
hence, by setting the TTL to $120$ minutes, we can achieve an $80\%$ cache hit.
Therefore, the cache miss ratio curve as a function of the TTL can easily be found,
and we know the approximate cache hit rate for each TTL we choose.
For booking accuracy of a cluster $C$, this can be approximated by
$$
\frac{\sum_{r_i \in C } \min(1,\frac{TTL_{r_i}}{TTL_{assigned}} )}{|C|}
$$
For example, in a specific cluster,
if the empirical price-durations observed are $120$ minutes and $100$ minutes and the assigned TTL is $150$ minutes,
then the cached price is accurate for $120$ out of $150$ minutes and $100$ out of $150$ minutes respectively.
Hence, the accuracy is $(\frac{120}{150} + \frac{100}{150})/2 = \frac{11}{15}$.
Hence, to optimize the bookings as expressed in Equation~\ref{eqn:booking},
we simply enumerate candidate TTLs for each cluster and pick the one that optimises this trade-off.
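A sketch of this per-cluster search is given below: for each candidate TTL we read the expected hit rate off the empirical gap-time CDF, approximate the booking accuracy from the observed price-durations, and keep the TTL with the largest product. The candidate grid and the hit-times-accuracy objective are simplifying assumptions standing in for Equation~\ref{eqn:booking}, in which $P_B$ does not depend on the TTL.
\begin{verbatim}
# Sketch: choose a TTL per cluster by enumerating candidates and maximizing
# (expected cache-hit rate) x (expected booking-price accuracy).
def hit_rate(gap_times_min, ttl):
    # Fraction of repeat searches whose gap time is <= TTL (empirical CDF).
    return sum(g <= ttl for g in gap_times_min) / len(gap_times_min)

def accuracy(price_durations_min, ttl):
    # Average fraction of the TTL during which the cached price is correct.
    return sum(min(1.0, d / ttl) for d in price_durations_min) \
        / len(price_durations_min)

def best_ttl(gaps, durations, candidates=range(15, 1441, 15)):
    return max(candidates,
               key=lambda ttl: hit_rate(gaps, ttl) * accuracy(durations, ttl))

# Hypothetical cluster data (minutes)
gaps = [10, 30, 45, 90, 200, 600]
durations = [31, 32, 120, 240]
print(best_ttl(gaps, durations))
\end{verbatim}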
So far, we have completed the major steps in SmartTTL.
\begin{figure}[h]
\centering
\includegraphics[height=2in]{gap_time.png}
\caption{CDF of gap time. x-axis is the gap time in minutes.}
\label{fig:cdf}
\end{figure}
\section{From Passive Model to Aggressive Model}
As mentioned in Section 1, {SmartTTL } addresses Challenge 1, but three more challenges remain.
For Challenge 2 and Challenge 3,
we can resolve them by guaranteeing that each data centre sends requests to the suppliers at a constant rate $\mu$.
Whenever the passive model sends $\mu_{passive}$ requests to the suppliers,
where $\mu_{passive} < \mu$,
we proactively send an extra $\mu - \mu_{passive}$ requests to the supplier.
The question is how to generate these $\mu - \mu_{passive}$ requests.
Next,
we present one alternative for generating them.
\subsection{Aggressive Model with LRU Cache}
In this section,
we describe an aggressive model which pro-actively sends requests to the supplier to fetch hotel prices.
These requests are generated from an auxiliary cache ${\mathcal{C}}_{LRU}$. There are two major steps:
\textbf{Cache building}.
The auxiliary cache ${\mathcal{C}}_{LRU}$ is built from historical user searches.
Every user search $s_i$ is admitted into ${\mathcal{C}}_{LRU}$.
Once ${\mathcal{C}}_{LRU}$ reaches its specified maximum capacity,
it evicts the user search which was Least Recently Used (LRU).
\textbf{Request pulling}.
At every second $t_i$,
the passive model needs to send $\mu_{passive}$ requests to the supplier,
while the supplier allows us to send $\mu$ requests per second.
Hence,
the aggressive model sends $\mu_{aggressive} = \mu - \mu_{passive}$ requests to the supplier.
To generate these $\mu_{aggressive}$ requests,
{Agoda}\ pulls from ${\mathcal{C}}_{LRU}$ the $\mu_{aggressive}$ requests which are closest to expiry (starting from the requests nearest to expiry, until the $\mu_{aggressive}$ budget is used up).
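A minimal sketch of the auxiliary cache and the pulling step is given below, using an ordered dictionary for LRU eviction and an expiry timestamp per entry; the capacity, TTLs and key format are illustrative assumptions.
\begin{verbatim}
# Sketch: LRU auxiliary cache of recent user searches plus a pulling step
# that returns the entries closest to expiry, up to the spare QPS budget.
from collections import OrderedDict
import time

class LRUSearchCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # search -> expiry timestamp (seconds)

    def admit(self, search, ttl_seconds):
        if search in self.entries:
            self.entries.move_to_end(search)     # most recently used
        self.entries[search] = time.time() + ttl_seconds
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict least recently used

    def pull(self, budget):
        # Requests closest to expiry are refreshed first, up to the budget.
        return sorted(self.entries, key=self.entries.get)[:budget]

cache = LRUSearchCache(capacity=100000)
cache.admit(("Hilton Amsterdam", "2020-06-01", "2020-06-02", 1, 0, 1),
            ttl_seconds=3600)
print(cache.pull(budget=10))
\end{verbatim}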
It is obvious that the above approach solves Challenge 2 and Challenge 3.
Moreover, it can also help improve the cache hit by requesting hotel prices
before a user searches for them.
However, it is not optimal.
For example,
a specific hotel could be very popular,
but if the hotel is not price competitive,
then {Agoda}\ does not need to waste QPS
pulling its price from this supplier.
In the next section, we introduce an aggressive model which optimizes the bookings.
\subsection{Aggressive model with SmartScheduler}
As mentioned, the aggressive model with the LRU cache is not optimal.
Moreover, so far the passive model always has the highest priority,
meaning the aggressive model only sends requests to the supplier if there is extra QPS left;
this, again, is not optimal.
In this section, we present an aggressive model which optimizes the bookings.
It has 5 major steps.
\textbf{Itinerary frequency calculation}. This describes how many times an itinerary needs to be requested to ensure that it is always available in the database.
If we want a high cache hit rate, we want an itinerary $r_i$ to be always available in the database,
which means that the itinerary $r_i$ must be fetched again before its cached price expires.
Moreover, for each $r_i$,
we have the generated $TTL_{r_{i}}$.
Hence, to make sure an itinerary $r_i$ is always available in database $D$
for 24 hours (1440 minutes),
we need to send $f_{r_i}$ requests to the supplier, where $f_{r_i}$ is
\begin{equation}
f_{r_i}=\left \lceil \frac{1440}{TTL_{r_i}} \right \rceil
\label{eqn:itinerary_frequency}
\end{equation}
\textbf{Itinerary value evaluation}. This evaluates the value of an itinerary by the probability of a booking coming from this itinerary.
With the above itinerary frequency calculation, we can assume that an itinerary is always a `hit' in the database. Hence, in this step, we evaluate the itinerary value given that the itinerary is always available in our Price DB.
That is, every user search $s_i$ on the itinerary $r_i$, denoted $s_i \succ r_i$,
is always a cache hit, i.e. ${P_D(s_i)}=1$.
Recalling Equation~\ref{eqn:booking},
for each itinerary request $r_i$
we now have the expected number of bookings as
\begin{equation}
E[K_{r_i}] = \sum_{s_i \in r_i} {P_D(s_i)} \times {P_B(s_i)} \times {P_A(s_i)} = \sum_{s_i \in r_i} {P_B(s_i)} \times {P_A(s_i)}
\label{eqn:itinerary_value}
\end{equation}
\textbf{Request value evaluation}. This evaluates the value of a single supplier request by the expected number of bookings it contributes.
From Equation~\ref{eqn:itinerary_value} and Equation~\ref{eqn:itinerary_frequency},
we obtain the expected bookings per supplier request as
\begin{equation}
\frac{E[K_{r_i}]}{f_{r_i}}
\label{eqn:bookingperrequest}
\end{equation}
\textbf{Top request generation}. This generates the top requests we want to select according to their values.
Within a day, for a specific supplier,
we are allowed to send $M=\mu \times 60 \times 60 \times 24$ requests to the supplier.
Therefore, using Equation~\ref{eqn:bookingperrequest},
we can order the supplier requests and pick the most valuable $M$ requests.
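A sketch of this selection is given below; the per-itinerary inputs (expected bookings per day and TTL) are assumed to come from the historical estimates described above, and the greedy budget accounting is a simplification.
\begin{verbatim}
# Sketch: pick the most valuable requests for a daily budget M, ranking
# itineraries by expected bookings per request E[K_r] / f_r and expanding
# each selected itinerary into its f_r requests.
from math import ceil

# Hypothetical stats: itinerary -> (expected_bookings_per_day, ttl_minutes)
itineraries = {"r1": (4.0, 360), "r2": (0.5, 60), "r3": (2.0, 1440)}

M = 10  # daily request budget for this supplier
scored = []
for name, (expected_bookings, ttl) in itineraries.items():
    freq = ceil(1440 / ttl)            # requests/day to keep it cached
    scored.append((expected_bookings / freq, name, freq))

scored.sort(reverse=True)              # highest value per request first
selected, spent = [], 0
for value, name, freq in scored:
    if spent + freq > M:
        continue                       # not enough budget left for all f_r
    selected.append((name, freq))
    spent += freq
print(selected, "requests used:", spent)
\end{verbatim}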
\textbf{Top request scheduling}. This describes how to schedule the top requests we selected.
Given the $M$ requests that need to be sent to the supplier,
we need to make sure that (1) each request is sent to the supplier
before the previous request for the same itinerary expires,
and (2) at every second, we send exactly $\mu$ requests to the supplier.
We group the itinerary requests by their frequency,
where $G(f_i) = [r_1, r_2, r_3, \dots, r_k]$ contains the itinerary requests $r_1, r_2, r_3, \dots, r_k$ that all have frequency $f_i$ and the same $TTL_{i}$. This means that each request $r_j$, $j = 1, 2, \dots, k$, is to be sent $f_i$ times during the day, and all $k$ itinerary requests are to be sent within every period of $TTL_{i}$.
To ensure that all $k$ itinerary requests are sent within a period of $TTL_{i}$, we simply distribute $r_1, r_2, r_3, \dots, r_k$ evenly over the seconds in $TTL_{i}$, i.e. we schedule $\frac{k}{TTL_i}$ requests per second within $TTL_i$, and then repeat the same set of requests every $TTL_i$, $f_i$ times in total. For example, if $G(4)=[r_1, r_2, \dots, r_{43200}]$,
then we have 43200 itinerary requests with frequency $4$ and $TTL$ = $6$ hours, which is 21600 seconds. That means that in every 6 hours we need to schedule 43200 itinerary requests, i.e. $\frac{43200}{21600} = 2$ requests per second. Ignoring the ordering of the 43200 requests, we send $r_1$ and $r_2$ in the 1st second, $r_3$ and $r_4$ in the 2nd second, and so on until $r_{43199}$ and $r_{43200}$ in the 21600th second. In the 21601st second, $r_1$ and $r_2$ are sent again, and so on. In this way each of the 43200 itinerary requests is scheduled to be sent 4 times over the day.
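A sketch of the spreading step for one frequency group is given below; the mapping of itineraries to seconds is a simplification of the round-robin layout described above.
\begin{verbatim}
# Sketch: spread the k itineraries of one frequency group evenly over their
# TTL window (in seconds) and repeat the window f_i times to cover the day.
def schedule_group(itins, ttl_seconds, frequency):
    per_second = len(itins) / ttl_seconds   # e.g. 43200 / 21600 = 2 per sec
    plan = {}                               # second-of-day -> itineraries
    for idx, itin in enumerate(itins):
        slot = int(idx / per_second)        # second within the TTL window
        for repeat in range(frequency):
            plan.setdefault(slot + repeat * ttl_seconds, []).append(itin)
    return plan

plan = schedule_group(["r%d" % i for i in range(1, 43201)],
                      ttl_seconds=21600, frequency=4)
print(len(plan), max(len(v) for v in plan.values()))  # 86400 slots, 2/sec
\end{verbatim}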
With the above 5 steps, the most valuable $M$ requests are sent by SmartScheduler, which maximizes the bookings.
\section{Experiment and Analysis}
The aggressive model with SmartScheduler has been deployed in production at {Agoda}.
The deployment has yielded significant gains in bookings and other system metrics.
Before the deployment,
we conducted extensive online A/B experiments to evaluate the effectiveness of the model.
In the following sections, we present the experiments conducted in 2019.
As {Agoda}\ is a publicly listed company,
we cannot reveal the exact number of bookings due to data sensitivity,
but we try to be as informative as possible.
Overall, the aggressive model with SmartScheduler outperforms the other baseline algorithms by $10\%$ to $30\%$.
\subsection{Experimentation suppliers}
There are two types of suppliers {Agoda}\ has experimented with:
\begin{enumerate}
\item \textbf{Retailers}.
Retailers are those suppliers whose market managers from each OTA deal with hotels directly and sell hotel rooms online.
\item \textbf{Wholesalers}. Wholesalers are those suppliers that sell hotels, hotel rooms, and other products in large quantities at a lower price (package rate), selling mainly to B2B partners or retailers rather than directly to consumers.
\end{enumerate}
In this paper, we present the results from 5 suppliers.
\textit{ \textbf{Supplier A} } is a Wholesaler supplier which operates in Europe.
\textit{ \textbf{Supplier B} } is a Wholesaler supplier which operates worldwide.
\textit{ \textbf{Supplier C} } is a Wholesaler supplier which operates worldwide.
\textit{ \textbf{Supplier D} } is a Retailer supplier which operates in Japan.
\textit{ \textbf{Supplier E} } is a Retailer supplier which operates in Korea.
In this section, all the experiments were conducted as online A/B experiments over 14 days,
where half of the allocated users experience algorithm A and
the other half experience algorithm B.
Moreover, for all the plots in this section,
\begin{itemize}
\item the x-axis is the $n$-th day of the experiment.
\item the bar plot represents the bookings and the line plot represents the cache hit.
\end{itemize}
\subsection{Fixed TTL vs. SmartTTL}
In this section, we compare the performance between the passive model with Fixed TTL (A) and the passive model with SmartTTL (B).
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierA.png}
\caption{A/B Experiment on Supplier A}
\label{fig:expa}
\end{figure}
Figure~\ref{fig:expa} presents the results on Supplier A,
and we can see that the B variant wins over the A variant by a small margin.
Overall, the B variant wins by $2$--$4\%$ for cache hit and $\sim2\%$ for bookings. This is expected, as SmartTTL only addresses Challenge 1.
\subsection{SmartTTL vs. Aggressive Model with SmartScheduler}
In this section, we compare the performance between the passive model with SmartTTL (A) and the aggressive model with SmartScheduler (B).
We present the A/B experiment results for
Supplier C and Supplier E.
Figure~\ref{fig:expc} presents the results on Supplier C,
and we can see that the B variant wins over the A variant significantly in terms of bookings and cache hit ratio.
For both cache hit and bookings, the B variant wins consistently.
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierC.png}
\caption{A/B Experiment on Supplier C}
\label{fig:expc}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierE.png}
\caption{A/B Experiment on Supplier E }
\label{fig:expe}
\end{figure}
Figure~\ref{fig:expe} presents the results on Supplier E,
and we can see that the B variant wins over the A variant significantly in terms of bookings and cache hit ratio.
For cache hit, the B variant wins consistently.
For bookings, the B variant never loses to the A variant on any single day.
\subsection{Aggressive Model with LRU Cache vs. Aggressive Model with SmartScheduler}
In this section, we compare the performance between the aggressive model with LRU cache (A) and the aggressive model with SmartScheduler (B).
We present the A/B experiment results for
Supplier B and Supplier D.
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierB.png}
\caption{A/B Experiment on Supplier B}
\label{fig:expb}
\end{figure}
Figure~\ref{fig:expb} presents the results on Supplier B,
and we can see that the B variant wins over the A variant significantly in terms of bookings and cache hit ratio.
For cache hit, the B variant wins consistently.
It is worth noting that overall bookings decline along the x-axis;
this could be caused by many factors, such as promotions from competitors, seasonality, etc.
Nevertheless, the B variant still wins over the A variant with a consistent trend.
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierD.png}
\caption{A/B Experiment on Supplier D}
\label{fig:expd}
\end{figure}
Figure~\ref{fig:expd} presents the results on Supplier D,
and we can see that the B variant wins over the A variant significantly in terms of bookings and cache hit ratio.
For cache hit, the B variant wins consistently.
For bookings, the B variant consistently wins over the A variant by more than $10\%$,
and on certain days, e.g. day 5, it wins by more than $50\%$.
\section{Related Work}
The growth of the travel industry has attracted substantial academic attention~\cite{airbnb,bcom,europricing}.
To increase revenue, much effort has been spent on enhancing pricing strategies.
Aziz et al. proposed a revenue management system framework based on price decisions which optimizes revenue~\cite{roompricing}.
The authors of~\cite{airbnb} proposed Smart Price, which improves room bookings by guiding hosts in pricing their rooms on Airbnb.
As long-term stays are becoming more common, Ling et al.~\cite{long_stay} derived the optimal pricing strategy for long-term stays, which benefits both the hotel and its customers.
Similar efforts to increase revenue through pricing strategies can be seen in~\cite{noone2016pricing,dynamicpricing}.
Apart from pricing strategies,
some effort has been devoted to overbooking~\cite{toh2002hotel,koide}.
For example, Antonio et al.~\cite{ANTONIO2017} built models for predicting booking cancellations in order to mitigate the revenue loss derived from them.
Nevertheless, none of the existing work has studied hotel price fetching strategies.
To the best of our knowledge, we are the first to deploy an optimized price fetching strategy which increases revenue by a large margin.
\section{Conclusion and Future Work}
In this paper, we presented ${PriceAggregator}$,
an intelligent hotel price fetching system which optimizes bookings.
To the best of our knowledge,
${PriceAggregator}$ is the first productionized system which addresses the 4 challenges mentioned in Section 1.
It differs from most existing OTA systems by having SmartTTL, which determines itinerary-specific TTLs.
Moreover, instead of passively sending requests to suppliers,
${PriceAggregator}$ aggressively fetches the most valuable hotel prices from suppliers, which optimizes bookings.
Extensive online experiments show that
${PriceAggregator}$ is not only effective in improving system metrics like cache hit,
but also grows the company's revenue significantly.
We believe that ${PriceAggregator}$ is a rewarding direction for the application of data science in OTAs.
One of the factors which drives bookings is pricing.
In the future, we will explore how to optimize bookings through a hybrid of pricing strategy and price fetching strategy.
\tiny
\printbibliography
\end{document}
\section{Introduction}
\label{sec:intro}
Studying the interaction between players on the court, in relation to team performance, is one of the most important issues in Sport Science. In recent years, thanks to the advent of Information Technology Systems (ITS), it has become possible to collect a large amount of different types of spatio-temporal data, which are, basically, of two kinds. On the one hand, play-by-play data report a sequence of relevant events that occur during a match. Events can be broadly categorized as player events, such as passes and shots, and technical events, for example fouls and time-outs. Carpita et al. \cite{carpita2013,carpita2015} used cluster analysis and principal component analysis in order to identify the drivers that affect the probability of winning a football match. Social network analysis has also been used to capture the interactions between players \cite{wasserman1994social}; Passos et al. \cite{passos2011networks} used centrality measures with the aim of identifying central players in water polo. On the other hand, object trajectories capture the movement of players or the ball. Trajectories are captured using optical- or device-tracking and processing systems. Optical systems use cameras; the images are then processed to compute the trajectories \cite{bradley2007reliability}, which are commercially supplied to professional teams or leagues \cite{Tracab,Impire}. Device systems rely on devices that infer location based on Global Positioning Systems (GPS) and are attached to the players' clothing \cite{Catapult}. The adoption of this technology and the availability of data are driven by various factors, particularly commercial and technical ones. Even once trajectory data become available, explaining movement patterns remains a complex task, as the trajectory of a single player depends on a large number of factors. The trajectory of a player depends on the trajectories of all other players on the court, both teammates and rivals. Because of these interdependencies, a player's action causes a reaction. A promising niche of the Sport Science literature, borrowing from the concept of Physical Psychology \cite{turvey1995toward}, describes players on the court as agents that interact with external factors \cite{travassos2013performance,araujo2016team}. In addition, there are typically certain role definitions in a sports team that influence movement. Predefined plays are used in many team sports to achieve specific objectives; moreover, teammates who are familiar with each other's playing style may develop productive interactions that are used repeatedly. Experts want to explain why, when and how specific movement behavior is expressed as a result of tactical behavior, and to retrieve explanations of observed cooperative movement patterns. A common way to deal with this complexity in team sport analysis consists of segmenting a match into phases, as it facilitates the retrieval of significant moments of the game. For example, Perin et al. \cite{perin2013soccerstories} developed a system for visual exploration of phases in football. Using a basketball case study, and having available the spatio-temporal trajectories extracted from GPS tracking systems, this paper aims at characterizing the spatial pattern of the players on the court by defining different game phases by means of a cluster analysis. We identify different game phases, each consisting of moments that are homogeneous in terms of the spacing among players. First, we characterize each cluster in terms of the players' positions on the court. 
Then, we define whether each cluster corresponds to defensive or offensive actions and compute the transition matrices in order to examine the probability of switching to another group from time $t$ to time $t+1$.
\section{Data and Methods}
\label{sec:meth}
Basketball is a sport generally played by two teams of five players each on a rectangular court ($28m \times 15m$). The match, according to International Basketball Federation (FIBA) rules, lasts $40$ minutes and is divided into four periods of $10$ minutes each. The objective is to shoot a ball through a hoop $46 cm$ in diameter, mounted at a height of $3.05m$ on backboards at each end of the court. The data we used in the analyses that follow refer to a friendly match played on March 22nd, 2016 by a team based in the city of Pavia (Italy). This team played the 2015-2016 season in the C-gold league, the fourth league in Italy. In total, six players took part in the friendly match. All of these players wore a microchip in their clothing. The microchip collects the position (in pixels of 1 $m^2$) along both the $x$-axis and the $y$-axis, as well as along the $z$-axis (i.e. how much the player jumps). The position of the players has been detected at the millisecond level. Considering all six players, the system recorded a total of $133,662$ space-time observations ordered in time. On average, the system collects positions about $37$ times every second. Considering that six players are on the court at the same time, the position of each single player is collected, on average, every $162$ milliseconds. The $x$-axis (length) and $y$-axis (width) coordinates have been filtered with a Kalman approach. The Kalman filter is a recursive algorithm that estimates the state of a system from its previous states and noisy measurements, in order to produce more precise estimates. We cleaned the dataset by dropping the pre-match, half-time break and post-match periods. Then, the dataset was completed by adding all the milliseconds for which no position had been detected; to these moments we assigned the coordinates of the most recent previous instant with non-missing values. We then reshaped the dataset so that its rows are uniquely identified by the millisecond and each player's variables are displayed in specific columns. The final dataset contains $3,485,147$ rows. We applied a $k$-means cluster analysis in order to group a set of objects. Cluster analysis is a method of grouping a set of objects in such a way that objects in the same group (cluster) are more similar to each other than to those in other groups. In our case, the objects are the time instants, expressed in milliseconds, while the similarity is expressed in terms of the distances among players\footnote{In the analyses that follow, for the sake of simplicity, we only consider the period where player 3 was on the bench.}. Based on the value of the between
deviance (BD) / total deviance (TD) ratio and the increments of this value
obtained by increasing the number of clusters by one, we chose $k$=8 (BD/TD=50\%, with relatively low increments for $k \ge 8$).
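As an illustration of this step, the sketch below (Python; the array \texttt{xy} holding the cleaned per-millisecond positions of the five players is assumed to be already loaded) builds the vector of the ten pairwise distances at every time instant and monitors the BD/TD ratio for increasing $k$.
\begin{verbatim}
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

# xy has shape (T, 5, 2): x-y position of the five players at each of T instants
def pairwise_distance_features(xy):
    pairs = list(combinations(range(5), 2))          # 10 player pairs
    return np.stack([np.linalg.norm(xy[:, i] - xy[:, j], axis=1)
                     for i, j in pairs], axis=1)

def bd_td_ratio(features, labels):
    # between deviance / total deviance of a given partition
    total = ((features - features.mean(axis=0)) ** 2).sum()
    within = sum(((features[labels == c] - features[labels == c].mean(axis=0)) ** 2).sum()
                 for c in np.unique(labels))
    return (total - within) / total

features = pairwise_distance_features(xy)
for k in range(2, 12):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    print(k, round(bd_td_ratio(features, labels), 3))    # gains level off around k = 8
\end{verbatim}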
\section{Results}
\label{sec:res}
The first cluster (C1) contains 13.56\% of the observations (i.e. 13.56\% of the total game time). The other clusters, named C2, ..., C8, have sizes of 4.59\%, 14.96\%, 3.52\%, 5.63\%, 35.33\%, 5.00\% and 17.41\% of the total sample size, respectively. We used Multidimensional Scaling (MDS) in order to plot the differences between the groups in terms of positioning on the court. With the MDS algorithm we aim to place each player in an $N$-dimensional space such that the between-player average distances are preserved as well as possible. Each player is then assigned coordinates in each of the $N$ dimensions. We choose $N$=2 and draw the related scatterplots as in Figure \ref{fig:mds}. We observe strong differences in the positioning patterns among groups. In C1 and C5 players are equally spaced along the court. C6 also displays an equally spaced structure, but the five players are closer together. In the other clusters we can see a spatial concentration: for example, in C2 players 1, 5 and 6 are close together, while in C8 this is the case for players 1, 2 and 6.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.6]{mds_map}
\caption{Map representing, for each of the 8 clusters, the average position in the $x-y$ axes of the five players, using MDS.}
\label{fig:mds}
\end{figure}
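The maps in Figure \ref{fig:mds} can be reproduced, in spirit, by applying MDS to the matrix of average pairwise distances within each cluster; a hedged sketch, reusing \texttt{features} and \texttt{labels} from the previous snippet, follows.
\begin{verbatim}
import numpy as np
from itertools import combinations
from sklearn.manifold import MDS

def cluster_player_map(features, labels, cluster_id):
    # 2-D configuration of the five players for one cluster, obtained from
    # their average pairwise distances in that cluster.
    avg = features[labels == cluster_id].mean(axis=0)    # 10 average distances
    D = np.zeros((5, 5))
    for (i, j), d in zip(combinations(range(5), 2), avg):
        D[i, j] = D[j, i] = d
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(D)                          # (5, 2) player coordinates
\end{verbatim}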
Figure \ref{fig:profplot} reports the cluster profile plots and helps us to better interpret the spacing structure in Figure \ref{fig:mds}, characterizing groups in terms of the average distances among players. The profile plot for C6 confirms that the players are closer together; in fact, all the distances are smaller than the average distance. In the same way, C2 presents distances among players 1, 5 and 6 that are smaller than the average.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.6]{profile_plot}
\caption{Profile plots representing, for each of the 8 clusters, the average distance among each pair of players.}
\label{fig:profplot}
\end{figure}
After having determined whether each moment corresponds to an offensive or a defensive action, by looking at the average coordinates of the five players on the court, we also found that some clusters represent offensive rather than defensive actions. More precisely, we found that clusters C1, C2, C3 and C4 mainly correspond to offensive actions (respectively, for 85.88\%, 85.91\%, 73.93\% and 84.62\% of the time in each cluster) and C6 strongly corresponds to defensive actions (85.07\%). Figure \ref{fig:transmat_ad} shows the transition matrix, which reports the relative frequency with which subsequent moments in time exhibit a switch from one cluster to a different one. It emerges that, for 31.54\% of the times C1 switches to a new cluster, it switches to C3, another offensive cluster. C2 switches to C3 42.85\% of the times. When the defensive cluster (C6) switches to a new cluster, it switches to C8 56.25\% of the times.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.8]{trans_mat_AD}
\caption{Transition matrix reporting the relative frequency with which subsequent moments ($t$, $t + 1$) exhibit a switch from one group to a different one.}
\label{fig:transmat_ad}
\end{figure}
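The transition matrix of Figure \ref{fig:transmat_ad} only counts instants at which the cluster label actually changes; a small sketch of this computation, assuming the per-instant label sequence produced by the clustering step, is shown below.
\begin{verbatim}
import numpy as np

def switch_transition_matrix(labels, n_clusters=8):
    # relative frequency of switching from cluster a to cluster b, counted
    # only over consecutive instants whose labels differ
    counts = np.zeros((n_clusters, n_clusters))
    for a, b in zip(labels[:-1], labels[1:]):
        if a != b:
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)
\end{verbatim}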
\section{Conclusions and future research}
\label{sec:conc}
In recent years, the availability of ``big data'' in Sport Science has increased the possibility of extracting insights from games that are useful for coaches, who are interested in improving their team's performance. In particular, with the advent of Information Technology Systems, the availability of players' trajectories permits the analysis of space-time patterns with a variety of approaches: Metulini \cite{metulini2016motion}, for example, adopted motion charts as a visual tool in order to facilitate the interpretation of results. Among the existing variety of methods, in this paper we used a cluster analysis approach based on trajectory data in order to identify specific patterns of movement. We segmented the game into phases of play and characterized each phase in terms of the spacing structure among players, their relative distances and whether the phase represents an offensive or a defensive action, finding substantial differences among phases. These results shed light on the potential of data-mining methods for trajectory analysis in team sports, so in future research we aim to i) extend the analysis to multiple matches, and ii) match the play-by-play data with trajectories in order to extract insights on the relationship between particular spatial patterns and the team's performance.
\section*{Acknowledgments}
Research carried out in collaboration with the Big\&Open Data Innovation Laboratory (BODaI-Lab), University of Brescia (project nr. 03-2016, title Big Data Analytics in Sports, www.bodai.unibs.it/BDSports/), granted by Fondazione Cariplo and Regione Lombardia. Authors would like to thank MYagonism (https://www.myagonism.com/) for having provided the data.
\input{REF_Sensor_Fire_REV}
\end{document}
\section{Introduction}
It is estimated that golf is played by 80 million people worldwide \cite{hsbc2020golf}. The sport is most popular in North America, where 54\% of the world's golf facilities reside~\cite{ra2017golf}. In the United States, the total economic impact of the golf industry is estimated to be \$191.9 billion~\cite{golf2020}. In Canada, golf has had the highest participation rate of any sport since 1998~\cite{gc2013sport}. It would be reasonably contended that many golfers are drawn to the sport through the gratification of continuous improvement. The golf swing is a complex full-body movement requiring considerable coordination. As such, it can take years of practice and instruction to develop a repeatable and reliable golf swing. For this reason, golfers routinely scrutinize their golf swing and make frequent adjustments to their golf swing mechanics.
Several methods exist for analyzing golf swings. In scientific studies, researchers distill golf swing insights using optical cameras to track reflective markers placed on the golfer~\cite{m1998hip, lephart2007eight, myers2008role, chu2010relationship}. Often, these insights relate to kinematic variables at various \textit{events} in the golf swing. For example, in \cite{myers2008role} it was found that torso--pelvic separation at the top of the swing, referred to as the X-factor in the golf community, is strongly correlated with ball speed. Similarly, in \cite{chu2010relationship} it was found that positive lateral bending of the trunk at impact (away from the target) was also correlated with ball speed, as it potentially promotes the upward angle of the clubhead path and more efficient impact dynamics~\cite{mcnally2018dynamic}. Yet, examining a golf swing using motion capture requires special equipment and is very time consuming, making it impractical for the everyday golfer.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figs/Tiger-Woods-swing-sequence-panel1.jpg}
\includegraphics[width=1.0\linewidth]{figs/Tiger-Woods-swing-sequence-panel2.jpg}
\caption{Eight events in a golf swing sequence. Top: face-on view. Bottom: down-the-line view. The names of the events from left to right are \textit{Address}, \textit{Toe-up}, \textit{Mid-backswing}, \textit{Top}, \textit{Mid-downswing}, \textit{Impact}, \textit{Mid-follow-through}, and \textit{Finish}. Images used with kind permission from Golf Digest~\cite{golfdigest}.}
\label{fig:teaser}
\vspace{-15pt}
\end{figure}
Traditionally, professional golf instructors provide instant feedback to amateurs using the naked eye. Still, the underlying problem is not always immediately apparent due to the speedy nature of the golf swing. Consequently, slow-motion video has become a popular medium for dissecting the intricacies of the golf swing~\cite{guadagnoli2002efficacy}. Moreover, slow-motion video is readily available to the common golfer using the advanced optical cameras in today's mobile devices, which are capable of recording high-definition (HD) video at upwards of 240 frames per second (fps). Given a slow-motion recording of a golf swing, a golfer or golf instructor may scrub through the video to analyze the subject's biomechanics at various key events. These events comprise a golf swing \textit{sequence}~\cite{golfdigest}. For example, in the golf swing sequence of Tiger Woods depicted in Fig.~\ref{fig:teaser}, the X-Factor at the top of the swing and lateral bending of the trunk at impact, two strong indicators of a powerful golf swing, are easily identifiable in the face-on view. Still, scrubbing through a video to identify these events is time consuming and impractical because only one event can be viewed at a time.
In computer vision, deep convolutional neural networks (CNNs) have recently been shown to be highly proficient at video recognition tasks such as video representation~\cite{carreira2017quo}, action recognition~\cite{mcnally2019starnet}, temporal action detection~\cite{zhao2017temporal}, and spatio-temporal action localization~\cite{el2018real}. Following this line of research, CNNs adapted for video may be leveraged to facilitate golf swing analysis through the autonomous extraction of event frames in golf swing videos. To this end, we introduce \textbf{GolfDB}, a benchmark video database for the novel task of \textit{golf swing sequencing}. A total of 1400 HD golf swing videos of male and female professional golfers, comprising various native frame-rates and over 390k frames, were gathered from YouTube. Each video sample was manually annotated with eight event labels (event classes shown in Fig.~\ref{fig:teaser}). Furthermore, the dataset also contains bounding boxes and labels for club type (\textit{e.g.}, driver, iron, wedge), view type (face-on, down-the-line, or other), and player name and sex. With this supplemental data, GolfDB creates opportunities for future research relating to general recognition tasks in the sport of golf. Finally, we advocate mobile deployment by proposing a lightweight baseline network called \textbf{SwingNet} that correctly detects golf swing events at a rate of 76.1\% on GolfDB.
\section{Related Work}
\subsection{Computer Vision in Golf}
Arguably the most well-known use of computer vision in golf deals with the real-time tracing of ball flights in golf broadcasts~\cite{toptracer}. The technology uses difference images between consecutive frames and a ball selection algorithm to artificially trace the path of a moving ball. In a different light, radar vision is used in the TrackMan launch monitor to precisely track the 3D position of a golf ball in flight and measure its spin magnitude~\cite{trackman}.
Computer vision algorithms have also been implemented for analyzing golf swings. Gehrig \textit{et al.}~\cite{gehrig2003visual} developed an algorithm that robustly fit a global swing trajectory model to club location hypotheses obtained from single frames. Fleet \textit{et al.}~\cite{urtasun2005monocular} proposed incorporating dynamic models with human body tracking, and Park \textit{et al.}~\cite{park2017accurate} developed a prototype system to investigate the feasibility of pose analysis in golf using depth cameras. In line with these research directions, GolfDB may be conveniently extended in the future to include various keypoint annotations to support human pose and golf club tracking. Moreover, automated golf swing sequencing using SwingNet is complementary to pose-based golf swing analysis.
\subsection{Action Detection}
In the domain of action detection, there exist several sub-problems that correspond to increasingly complex tasks.
Action recognition is the highest-level task and corresponds to predicting a single action for a video. The first use of modern CNNs for action recognition was by Karpathy \textit{et al.} \cite{karpathy2014large}, wherein they investigated multiple methods of fusing temporal information from image sequences. Simonyan and Zisserman \cite{simonyan2014two} followed this work by incorporating a second stream of optical flow information to their CNN. A different approach to combine temporal information is to use 3D CNNs to perform convolution operations over an entire video volume. First implemented for action recognition by Baccouche \textit{et al.}\ \cite{baccouche2011sequential}, 3D CNNs showed state-of-the-art performance in the form of the C3D architecture on several benchmarks when trained on the Sports-1M dataset \cite{tran2015learning}. Combining two-stream networks and 3D CNNs, Carreira and Zisserman~\cite{carreira2017quo} created the I3D architecture. Recurrent neural networks (RNNs) provide a different approach to combining temporal information. RNNs with long short-term memory (LSTM) cells are well suited to capture long-term temporal dependencies in data \cite{hochreiter1997long} and these networks were first used for action recognition by Donahue \textit{et al.}~\cite{donahue2015long} in the long-term recurrent convolutional network (LRCN), whereby features extracted from a 2D CNN were passed to an LSTM network. A similar method to Donahue \textit{et al.}\ is adopted in this work.
Temporal action detection is a mid-level task wherein the start and end frames of actions are predicted in untrimmed videos. Shou \textit{et al.}~\cite{shou2016temporal} proposed the Segment-CNN (S-CNN) in which they trained three networks based on the C3D architecture. In a different approach, Yeung \textit{et al.}~\cite{yeung2016end} built an RNN that took features from a CNN and utilized reinforcement learning to learn a policy for identifying the start and end of events, allowing for the observation of only a fraction of all video frames.
Spatio-temporal action localization is the lowest level and most complex task in action detection. Both the frame boundaries and the localized area within each frame corresponding to an action are predicted. Several works in this domain approach the problem by combining 3D CNNs with object detection models, such as in \cite{girdhar2018better}, where the I3D model is combined with Faster R-CNN, and in \cite{hou2017tube}, where the authors generalize the R-CNN from 2D image regions to 3D video \textit{tubes} to create Tube-CNN (T-CNN).
\subsection{Event Spotting}
After asking Amazon Mechanical Turk workers to re-annotate the temporal extents of human actions in the Charades~\cite{sigurdsson2016hollywood} and MultiTHUMOS~\cite{yeung2018every} datasets, Sigurdsson \textit{et al.\ }\cite{sigurdsson2017actions} demonstrated that the temporal extents of actions in video are highly ambiguous; the average agreement in terms of temporal Intersection-over-Union (tIOU) was only 72.5\% and 58.7\%, respectively. This raised concerns over the inherent error associated with temporal action detection.
Considering the uncertainty surrounding the temporal extents of actions, Giancola \textit{et al.}~\cite{giancola2018soccernet} proposed the task of \textit{event spotting} within the context of soccer videos, arguing that in contrast to actions, events are anchored to a single time instance and are defined using a specific set of rules. As opposed to predicting the temporal bounds of an action, \textit{spotting} consists of finding the time instance (or \textit{spot}) when well-defined events occur. We consider event \textit{spotting} and event \textit{detection} as equivalent terminology, and use the latter moving forward.
\section{Golf Swing Sequencing}
Drawing inspiration from Giancola \textit{et al.}~\cite{giancola2018soccernet}, we describe the task of golf swing sequencing as the detection of events in \textit{trimmed} videos containing a single golf swing. The reasoning behind using trimmed golf swing videos is three-fold:
\begin{enumerate}
\item We speculate that the most compelling use-case for detecting golf swing events is to obtain instant biomechanical feedback in the field, as opposed to localizing golf swings in broadcast video. Although GolfDB contains the necessary data to perform spatio-temporal localization of full golf swings in untrimmed video, we consider this a separate task and a potential avenue for future research.
\item Golfers or golf instructors who wish to view a golf swing sequence in the field can simply record a constrained video of a single golf swing on a mobile device, ensuring that the subject is centered in the frame. This eliminates the need for spatio-temporal localization.
\item A video sample containing a single golf swing instance will consist of a specific number of events occurring in a specific order. This information can be leveraged to improve detection performance.
\end{enumerate}
\noindent\textbf{Golf Swing Events.} In \cite{giancola2018soccernet}, soccer events were resolved at a one-second resolution. In contrast, golf swing events can be localized to a single frame using strict event definitions. Although various golf swing events have been proposed in the literature~\cite{kwon2013validity}, we define the eight contiguous events comprising a golf swing sequence as follows:
\begin{enumerate}
\item \textit{Address (A).} The moment just before the takeaway begins, \textit{i.e.}, the frame before movement in the backswing is noticeable.
\item \textit{Toe-up (TU).} Shaft parallel with the ground during the backswing.
\item \textit{Mid-backswing (MB).} Arm parallel with the ground during the backswing.
\item \textit{Top (T).} The moment the golf club changes directions at the transition from backswing to downswing.
\item \textit{Mid-downswing (MD).} Arm parallel with the ground during the downswing.
\item \textit{Impact (I).} The moment the clubhead touches the golf ball.
\item \textit{Mid-follow-through (MFT).} Shaft parallel with the ground during the follow-through.
\item \textit{Finish (F).} The moment just before the golfer's final pose is relaxed.
\end{enumerate}
The above definitions do not always isolate a single frame. For example, a golfer may not hold a finishing pose at all, making the selection of the \textit{Finish} event frame subjective. These issues are discussed further in the next section.
\section{GolfDB}
In this section we introduce GolfDB, a high-quality video dataset created for general recognition applications in golf, and specifically for the novel task of golf swing sequencing. Comprising 1400 golf swing video samples and over 390k frames, GolfDB is relatively large for a specific domain. To our best knowledge, GolfDB is the first substantial dataset dedicated to computer vision applications in the sport of golf.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth, trim={0 0 0 1.4cm},clip]{figs/tempo.pdf}
\vspace{-20pt}
\caption{Average event timings from \textit{Address} to \textit{Impact} for 5 female (top 5) and male (bottom 5) professional golfers using a driver or fairway wood. Event timings normalized from \textit{Address} to \textit{Impact}. The diamond indicates the \textit{Top} event. Blue represents the backswing, red represents the downswing. Average golf swing tempos (backswing duration$/$downswing duration) shown on the right.}
\label{fig:tempo}
\vspace{-5pt}
\end{figure*}
\subsection{Video Collection}
A collection of 580 YouTube videos containing real-time and slow-motion golf swings was manually compiled. For the task of golf swing sequencing, it is important that the shaft remains visible at all times. To alleviate obscurities caused by motion blur, only high quality videos were considered. The YouTube videos primarily consist of professional golfers from the PGA, LPGA and Champions Tours, totalling 248 individuals with diverse golf swings. The videos were captured from a variety of camera angles, and a variety of locations on various golf courses, including the driving range, tee boxes, fairways, and sand traps. The significant variance in overall appearance, taking into consideration the different players, clubs, views, lighting conditions, surroundings, and native frame-rates, benefits the generalization ability of computer vision models trained on this dataset. The YouTube videos were sampled at 30 fps and 720p resolution.
\subsection{Annotation}
\label{sec:annotation}
A total of 1400 trimmed golf swing video samples were extracted from the collection of YouTube videos using an in-house MATLAB code that was distributed to four annotators. For each YouTube video, the annotators were asked to identify full golf swings (\textit{i.e.}, excluding pitch shots, chips, and putts) and label 10 frames for each: the start of the sample, eight golf swing events, and the end of the sample. The number of frames between the start of the sample and \textit{Address}, and similarly, between \textit{Finish} and the end of the sample, was naturally random, and the beginning of samples occasionally included practice swings. Depending on the native frame-rate of the sample, it was not always possible to label events precisely. For example, in real-time samples with a native frame-rate of 30 fps, it was rare that the precise moment of impact was captured. The annotators were advised to chose the frame closest to when the event occurred at their own discretion.
\begin{figure}
\vspace{-5pt}
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\textwidth,trim={3.2cm 3.5cm 3cm 3.5cm},clip]{figs/p1.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\textwidth,trim={3.2cm 3.5cm 3cm 3.5cm},clip]{figs/p3.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\textwidth,trim={3.2cm 3.5cm 3cm 3.5cm},clip]{figs/p4.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\textwidth,trim={3.2cm 3.5cm 3cm 3.5cm},clip]{figs/p2.pdf}
\end{subfigure}
\caption{Distribution of GolfDB. The total number of frames in real-time and slow-motion samples was roughly equal ($\approx$195k each). The event densities for real-time and slow-motion samples were $3.072\times10^{-2}$ and $2.628\times10^{-2}$ events$/$frame, respectively.}
\label{fig:stats}
\vspace{-10pt}
\end{figure}
Besides labeling events, the annotators were asked to draw bounding boxes, enter the club and view type, and indicate whether the sample was in real-time or slow-motion (considering a playback speed of 30 fps). The bounding boxes were drawn to include the clubhead and golf ball through the full duration of the swing. Player names were extracted from the video titles, and sex was determined by cross-referencing the player name with information available online. The annotators were briefed on domain-specific knowledge before the annotation process, and the dataset was verified for quality by an experienced golfer. Fig.~\ref{fig:tempo} provides the average timing of events from \textit{Address} to \textit{Impact} for five male and female golfers using a driver or fairway wood, and Fig.~\ref{fig:stats} illustrates the distribution of the dataset.
\subsection{Evaluation Metric and Experimental Protocol}
\label{sec:protocol}
In a similar fashion to Giancola \textit{et al.}~\cite{giancola2018soccernet}, we introduce a tolerance $\delta$ on the number of frames within which an event is considered to be correctly detected. In the real-time samples, if an event was thought to occur between two frames, it was at the discretion of the annotator to select the event frame. Given the inherent variability of the annotator's selection, we consider a tolerance $\delta = 1$ for real-time videos sampled at 30 fps. For the slow-motion videos, the tolerance should be scaled based on the native frame-rate, but the native-frame rates are unknown. We therefore propose a sample-dependent tolerance based on the number of frames between \textit{Address} and \textit{Impact}. This value was approximately 30 frames on average for the real-time videos, matching the frame-rate of 30 fps (\textit{i.e.}, the average duration of a golf swing from \textit{Address} to \textit{Impact} is roughly 1s). Thus, we define the sample-dependent tolerance as
\begin{equation}
\delta = \max(\nint*{\frac{n}{f}}, 1)
\end{equation}
where $n$ is the number of frames from \textit{Address} to \textit{Impact}, $f$ is the sampling frequency, and $\nint{x}$ rounds $x$ to the nearest integer.
Drawing inspiration from the field of human pose estimation, we introduce the PCE evaluation metric as the ``Percentage of Correct Events'' within tolerance. PCE is the temporal equivalent to the popular PCKh metric used in human pose estimation~\cite{andriluka14cvpr}, which scales a spatial detection tolerance using head segment length. For the experimental protocol, four random splits were generated for cross-validation, ensuring that all samples from the same YouTube video were placed in the same split. PCE averaged over the 4 splits is used as the primary evaluation metric.
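A compact sketch of the PCE computation with this sample-dependent tolerance is given below (Python; the array layout, one 8-vector of ground-truth and predicted event frames per sample, is our assumption).
\begin{verbatim}
import numpy as np

def tolerance(true_events, fps=30):
    # delta = max(round(n / f), 1), n = frames from Address (index 0) to Impact (index 5)
    n = true_events[5] - true_events[0]
    return max(int(round(n / fps)), 1)

def pce(pred_events, true_events, fps=30):
    # Percentage of Correct Events within the sample-dependent tolerance
    correct, total = 0, 0
    for pred, true in zip(pred_events, true_events):
        delta = tolerance(true, fps)
        correct += int(np.sum(np.abs(np.asarray(pred) - np.asarray(true)) <= delta))
        total += len(true)
    return 100.0 * correct / total
\end{verbatim}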
\section{SwingNet: A Swing Sequencing Network}
In this section, we describe SwingNet, a network architecture designed specifically for the task of golf swing sequencing, but generalizable to the swings present in various sports, such as baseball, tennis and cricket. Additionally, the implementation details are discussed.
\subsection{Network Architecture}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{figs/arch.pdf}
\caption{The network architecture of SwingNet, a deep hybrid convolutional and recurrent network for swing sequencing. In an end-to-end architectural framework, SwingNet maps a sequence of RGB images $\mathbf{I}$ to a corresponding sequence of event probabilities $\mathbf{e}$. The sequence of feature vectors $\mathbf{f}$ generated by MobileNetV2 are input to a bidirectional LSTM. At each frame $t$, the LSTM output is fed into a fully-connected layer, and a softmax is applied to obtain the event probabilities.}
\label{fig:arch}
\vspace{-10pt}
\end{figure*}
MobileNetV2 is a CNN based on an inverted residual structure and makes liberal use of lightweight depthwise convolutions~\cite{howard2017mobilenets}. As such, it is well suited for mobile vision applications. Furthermore, MobileNetV2 includes a ``width multiplier'' that scales the number of channels in each layer, providing a convenient trade-off for network complexity and speed. For the task of image classification, it runs at 75ms per frame on a single core of the Google Pixel using an input size of 224$\times$224 and width multiplier of 1~\cite{Sandler_2018_CVPR}. Placing emphasis on mobile deployment, we employ MobileNetV2~\cite{Sandler_2018_CVPR} as the backbone CNN in an end-to-end network architecture that maps a sequence of $d \times d$ RGB images $\mathbf{I} = (\mathbf{I_1}, \mathbf{I_2}, ..., \mathbf{I_T} : \mathbf{I_t} \in \mathbb{R}^{d \times d \times 3})$ to a corresponding sequence of event probabilities $\mathbf{e} = (\mathbf{e_1}, \mathbf{e_2}, ..., \mathbf{e_T} : \mathbf{e_t} \in \mathbb{R}^C)$, where $T$ is the sequence length and $C$ is the number of event classes. For the task of golf swing sequencing, there are 9 event classes: eight golf swing events and one \textit{No-event} class.
Detecting golf swing events using a single frame would likely be a difficult task. Precisely identifying \textit{Address} requires knowledge of when the \textit{actual} golf swing commences. Without this contextual information, \textit{Address} may be falsely detected during the pre-shot routine, which often includes full or partial practice swings, and frequent clubhead waggling. In a similar manner, precisely identifying \textit{Top} is generally not possible using a single frame, based on its event definition. Moreover, \textit{Mid-backswing} and \textit{Mid-downswing} are relatively similar in appearance. For these reasons, temporal context is likely a critical component in the task of golf swing sequencing. To capture temporal information, the sequence of feature vectors $\mathbf{f} = (\mathbf{f_1}, \mathbf{f_2}, ..., \mathbf{f_T} : \mathbf{f_t} \in \mathbb{R}^{1280})$ obtained by applying global average pooling to the final feature map in MobileNetV2 is used as input to an $N$-layer bidirectional LSTM~\cite{hochreiter1997long} with $H$ hidden units in each layer. At each frame $t$, the $H$-dimensional output of the LSTM is fed through a final fully-connected layer, and a softmax is applied to obtain the class probabilities $\mathbf{e}$. The weights of the fully-connected layer are shared across frames. The overall architecture is illustrated in Fig.~\ref{fig:arch}. In Section~\ref{sec:hp_search}, an ablation study is performed to identify a suitable model configuration.
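A condensed PyTorch sketch of this architecture is shown below. It is not the authors' exact implementation: the torchvision MobileNetV2 stands in for the third-party backbone referenced in the paper, torchvision API details (e.g. the pretrained/weights argument) vary across versions, and, being bidirectional, the LSTM output fed to the shared fully-connected layer is $2H$-dimensional.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class SwingNet(nn.Module):
    def __init__(self, hidden=256, layers=1, num_classes=9):
        super().__init__()
        self.cnn = mobilenet_v2(pretrained=True).features   # 1280-channel feature map
        self.pool = nn.AdaptiveAvgPool2d(1)                  # global average pooling
        self.rnn = nn.LSTM(1280, hidden, num_layers=layers,
                           batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)         # shared across frames

    def forward(self, images):                               # images: (B, T, 3, d, d)
        b, t = images.shape[:2]
        feats = self.pool(self.cnn(images.flatten(0, 1))).flatten(1)   # (B*T, 1280)
        out, _ = self.rnn(feats.view(b, t, -1))                        # (B, T, 2*hidden)
        return self.fc(out)                                            # per-frame logits
\end{verbatim}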
\subsection{Implementation Details}
\label{sec:implementation}
The frames were cropped using the golf swing bounding boxes. They were then resized using bilinear interpolation such that the longest dimension was equal to $d$, and padded using the ImageNet~\cite{imagenet_cvpr09} pixel means to reach an input size of $d \times d$. Finally, all frames were normalized by subtracting the ImageNet means and dividing by the ImageNet standard deviation.
The network was implemented using PyTorch version 1.0. Each convolutional layer in MobileNetV2\footnote{PyTorch implementation of MobileNetV2 available at \url{https://github.com/tonylins/pytorch-mobilenet-v2}} is followed by batch normalization~\cite{ioffe2015batch}. Unique to MobileNetV2, ReLU is used after batch normalization everywhere except for the final convolution in the inverted residual modules~\cite{Sandler_2018_CVPR}, where no non-linearity is used. The running batch norm parameters were updated using a momentum of 0.1. The MobileNetV2 backbone was initialized with weights pre-trained on ImageNet, and the weights in the fully-connected layer were initialized following Xavier Initialization~\cite{glorot2010understanding}. Given the significant class imbalance between the golf swing events and the \textit{No-event} class (roughly 35:1), a weighted cross-entropy loss was used, where the golf swing events were each given a weight of 1, and the \textit{No-event} class was given a weight of 0.1.
Training samples were drawn from the dataset by randomly selecting a start frame and, if the number of frames remaining in the sample was less than $T$, the sample was looped to the beginning. Randomly selecting the start frame serves as a form of data augmentation and is commonly used in action recognition applications~\cite{carreira2017quo, mcnally2019starnet}. Other forms of data augmentation used included random horizontal flipping, and slight random affine transformations ($-5^\circ$ to $+5^\circ$ shear and rotation). The intent of the horizontal flipping was to even out the imbalance between left- and right-handed golfers, and the intent of the affine transformations was to capture variations in camera angle. To enable training on batches of image sequences, multiple sequences of length $T$ were concatenated along the channel dimension before being input to the network. The output features $\mathbf{f}$ were reshaped to (batch size, $T$, 1280) before being passed to the LSTM. Various batch sizes and sequence lengths $T$ were explored in the experiments; they are explicitly declared in Section~\ref{sec:exp}. In all training configurations, the Adam optimizer~\cite{kingma2014adam} was used with an initial learning rate of 0.001. The number of training iterations and learning rate schedules are discussed in Section~\ref{sec:exp}. All training was performed on a single NVIDIA Titan Xp GPU.
At test time, a sliding window approach is used over the full-length golf swing video samples. The sliding window has a size $T$ and, to minimize the computational load, no overlap was used. Knowing that each sample contains exactly eight events, event frames were selected using the maximum confidence for each event class. The exploration of alternative selection criteria that leverages event order is encouraged in future research.
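The test-time procedure amounts to running non-overlapping windows of length $T$ through the network and, for each event class, keeping the frame with maximum confidence; a hedged sketch (assuming the \texttt{SwingNet} module sketched earlier and that the \textit{No-event} class occupies the last output index) follows.
\begin{verbatim}
import torch

@torch.no_grad()
def sequence_events(model, frames, seq_length=64, num_events=8):
    # frames: (N, 3, d, d) preprocessed video; returns one frame index per event class
    model.eval()
    probs = []
    for start in range(0, frames.shape[0], seq_length):        # non-overlapping windows
        clip = frames[start:start + seq_length].unsqueeze(0)   # (1, T', 3, d, d)
        probs.append(torch.softmax(model(clip), dim=-1).squeeze(0))
    probs = torch.cat(probs, dim=0)                            # (N, 9)
    return probs[:, :num_events].argmax(dim=0)                 # most confident frame per event
\end{verbatim}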
\section{Experiments}
\label{sec:exp}
In this section, an extensive ablation study is performed to determine suitable hyper-parameters. Following the ablation study, a final baseline SwingNet is proposed and evaluated.
\subsection{Ablation Study}
\label{sec:hp_search}
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Config. & \makecell{Input\\Size\\($d$)} & \makecell{Seq.\\Length\\($T$)} & \makecell{Batch\\Size} & \makecell{ImageNet\\Weights} & \makecell{LSTM\\Layers\\($N$)} & \makecell{LSTM\\Hidden\\($H$)} & Bidirect. & \makecell{Params\\($10^6$)} & \makecell{FLOPs\\($10^9$)} & \makecell{PCE\\(10k iter.)} \\
\hline\hline
0 & \textbf{224} & 32 & 6 & \textbf{Yes} & \textbf{2} & \textbf{128} & \textbf{Yes} & 4.07 & 10.32 & 66.8\\
1 & 224 & 32 & 6 & \textbf{No} & 2 & 128 & Yes & 4.07 & 10.32 & 1.5\\
2 & 224 & 32 & 6 & Yes & 2 & 128 & \textbf{No} & 3.08 & 10.26 & 54.7\\
3 & \textbf{192} & 32 & 6 & Yes & 2 & 128 & Yes & 4.07 & 7.62 & 45.7\\
4 & \textbf{160} & \textbf{32} & \textbf{6} & Yes & 2 & 128 & Yes & 4.07 & 5.33 & 62.4\\
5 & \textbf{128} & 32 & 6 & Yes & 2 & 128 & Yes & 4.07 & 3.45 & 57.7\\
6 & 160 & \textbf{64} & 6 & Yes & 2 & 128 & Yes & 4.07 & 10.65 & 71.1\\
7 & 160 & 32 & \textbf{12} & Yes & 2 & 128 & Yes & 4.07 & 5.33 & 70.1\\
8 & 224 & 32 & 6 & Yes & \textbf{1} & 128 & Yes & 3.67 & 10.39 & 69.4\\
9 & 224 & 32 & 6 & Yes & 2 & \textbf{64} & Yes & 3.01 & 10.26 & 66.9\\
10 & 224 & 32 & 6 & Yes & 2 & \textbf{256} & Yes & 6.96 & 10.51 & 69.3\\
\hline
\end{tabular}
\end{center}
\vspace{-10pt}
\caption{Hyper-parameter search for the proposed SwingNet. Each configuration was trained for 10k iterations on the first split of GolfDB, providing a proxy for final performance. Parameters used
in comparison are in \textbf{bold}.}
\vspace{-10pt}
\label{tab:ablation}
\end{table*}
The hyper-parameters of interest are the input size ($d$), sequence length ($T$), batch size, number of LSTM layers ($N$), and number of hidden units in each LSTM layer ($H$). It was also of interest to see whether initializing with pre-trained ImageNet weights was advantageous as opposed to training from scratch, and if LSTM bidirectionality had an impact. The goal was to identify a computationally efficient configuration to maximize performance with limited compute resources. Normally, MobileNetV2's width multiplier would be a key hyper-parameter to include in the ablation study; however, for reasons to be discussed, using a width multiplier other than 1 was not feasible. Table~\ref{tab:ablation} provides the PCEs of 11 model configurations, along with the number of parameters and floating point operations (FLOPs) for each. Each configuration was trained for 10k iterations on the first split of GolfDB, providing a proxy for overall performance. For the ablation study, no affine transformations were used, and the learning rate was held constant at 0.001. Hyper-parameters used in comparison are shown in bold.
Remarkably, it was found that the model did not train at all unless the pre-trained ImageNet weights were used. Recent research from Facebook AI~\cite{he2018rethinking} found that initializing with pre-trained ImageNet weights did not improve performance for object detection on the COCO dataset~\cite{lin2014microsoft}. COCO is a large-scale dataset that likely has a similar distribution to ImageNet. Thus, we speculate that using pre-trained weights is critical in domain-specific tasks, where the variation in overall appearance is minimal. Because the pre-trained weights were only available for a width multiplier of 1, adjusting the width multiplier was not feasible.
Another interesting finding was that the correlation between input size and performance did not behave as expected. With increasing input size, one would expect a monotonically increasing PCE. However, it was found that input sizes of 160 and 128 outperformed 192 by a large margin, and the PCE using an input size of 160 was only 4.4 points worse than that of 224.
The sequence length $T$ had a significant impact on performance, supporting the importance of temporal context. Additionally, increasing the batch size improved performance dramatically. Still, it is difficult to fairly assess these hyper-parameters as they may have had a significant impact on convergence speed. Regarding the LSTM, bidirectionality led to a 12.1 point improvement in PCE. The single-layer LSTM outperformed the two-layer LSTM, and 256 hidden units performed better than 64 and 128.
\subsection{Frozen Layers}
The results of the ablation study revealed that increasing the sequence length and batch size provided large performance gains. On a single GPU, the sequence length and batch size are severely limited. Knowing that the model relies heavily on pre-trained ImageNet weights, we hypothesized that some of the ImageNet weights could be frozen without a significant loss in performance. Thus, we experimented with freezing layers in MobileNetV2 prior to training to create space in GPU memory for larger sequence lengths and batch sizes. Fig.~\ref{fig:frozen} plots the results of freezing the first $k$ layers using configuration 4 from Table~\ref{tab:ablation}. Despite a few outliers, the PCE increased until $k = 10$ and then began to decrease. We leverage this finding in the next section to train a baseline SwingNet using a larger sequence length and batch size.
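Freezing the first $k$ layers can be implemented by disabling gradients for the corresponding backbone modules; the short sketch below indexes the torchvision MobileNetV2 feature blocks, which may not match the layer numbering of the implementation used here.
\begin{verbatim}
def freeze_backbone_layers(model, k=10):
    # disable gradient updates for the first k feature blocks of the CNN backbone
    for block in list(model.cnn.children())[:k]:
        for param in block.parameters():
            param.requires_grad = False
    # hand only the still-trainable parameters to the optimizer
    return [p for p in model.parameters() if p.requires_grad]
\end{verbatim}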
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figs/frozen.pdf}
\caption{PCE after 10k iterations using configuration 4 from Table~\ref{tab:ablation} and freezing the ImageNet weights in the first $k$ layers. Freezing the first 10 layers provided optimal results.}
\label{fig:frozen}
\vspace{-10pt}
\end{figure}
\subsection{Baseline SwingNet}
\begin{table*}[t]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
Model & A & TU & MB & T & MD & I & MFT & F & PCE\\
\hline\hline
SwingNet-160 (slow-motion) & 23.5 & 80.7 & 84.7 & 75.7 & 97.8 & 98.3 & 98.0 & 21.5 & 72.5 \\
SwingNet-160 (real-time) & 38.7 & 87.2 & 92.1 & 90.8 & 98.3 & 98.4 & 97.2 & 30.7 & 79.2 \\
\hline
SwingNet-160 & 31.7 & 84.2 & 88.7 & 83.9 & 98.1 & 98.4 & 97.6 & 26.5 & 76.1 \\
\hline
\end{tabular}
\end{center}
\vspace{-5pt}
\caption{Event-wise and overall PCE averaged over the 4 splits for the proposed baseline SwingNet. Configuration: bidirectional LSTM, $d=160$, $T=64$, $L=1$, $N=256$, $k=10$. This configuration has $5.38 \times 10^6$ parameters and $10.92 \times 10^9$ FLOPs.}
\vspace{-5pt}
\label{tab:baseline}
\end{table*}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\makecell{Seq. Length ($T$)} & FLOPs ($10^9$) & CPU (ms)$^*$ & PCE\\
\hline\hline
64 & 10.92 & 10.6 & 76.2\\
32 & 5.41 & 10.8 & 74.0\\
16 & 2.70 & 11.5 & 71.0\\
8 & 1.35 & 12.0 & 66.0\\
4 & 0.68 & 13.8 & 63.1\\
\hline
\end{tabular}
\end{center}
\vspace{-5pt}
\caption{SwingNet-160 performance and CPU runtime on GolfDB split 1 using various sequence lengths at test time. $^*$Effective processing time for a single frame, excluding I/O, using an Intel i7-8700K processor.}
\label{tab:seq_length}
\vspace{-10pt}
\end{table}
\begin{figure*}[b!]
\centering
\includegraphics[width=1.0\linewidth]{figs/inference_frames.pdf}
\vspace{-20pt}
\caption{Using SwingNet to infer event probabilities in the slow-motion golf swing of LPGA Tour player Michelle Wie. Six out of eight events were correctly detected within a 5-frame tolerance. \textit{Address} and \textit{Finish} were missed by 10 and 7 frames, respectively. Best viewed in color. Video available at \url{https://youtu.be/QlKodM7RhH4?t=36}. }
\label{fig:inference}
\end{figure*}
The results in Table~\ref{tab:ablation} suggest that an input size of 160 is more cost effective than an input size of 224; the latter requires double the computation for a relative increase in PCE of just 7\%. Taking this into consideration, as well as the comparative results of the other hyper-parameters, we propose a baseline SwingNet with an input size of 160, sequence length of 64, and a single-layer bidirectional LSTM with 256 hidden units. By initializing with pre-trained ImageNet weights and freezing 10 layers, the model can be trained using a batch size of 24 on a single GPU with 12GB memory. With this relatively large batch size, the model converges faster and does not need to be trained for as many iterations. Thus, the baseline SwingNet was trained for 7k iterations on each split of GolfDB, and the learning rate was reduced by an order of magnitude after 5k iterations. Affine transformations were used to augment the training data (see Section~\ref{sec:implementation}).
Table~\ref{tab:baseline} provides the event-wise and overall PCE averaged over the 4 splits of GolfDB. Overall, SwingNet correctly detects events at a rate of 76.1\%. As expected, relatively poor detection rates were observed for the \textit{Address} and \textit{Finish} events. This was likely caused by the compounding factors of subjective labeling and the inherent difficulty associated with precisely localizing these events temporally. These factors may have also played a role in detecting the \textit{Top} event, which was detected in real-time videos more consistently than in slow-motion videos; in slow-motion, the exact frame where the club changes directions is difficult to interpret because the transition is more fluid. Moreover, the detection rate was generally lower in slow-motion videos for the backswing events. This was likely due to the fact that the backswing is much slower than the downswing, so there are more frames in the backswing that are similar in appearance to the ground-truth event frames.
\textit{Impact} was the event detected the most proficiently. This result is intuitive because the model simply needs to detect when the clubhead is nearest the golf ball. Interpreting when the arm or shaft is parallel with the ground, which is required for events like \textit{Toe-up} and \textit{Mid-backswing}, requires more abstract intuition. Disregarding the \textit{Address} and \textit{Finish} events, the overall PCE was 91.8\%. Fig.~\ref{fig:inference} illustrates the inferred event probabilities for a slow-motion swing. Within a 5-frame tolerance, SwingNet correctly detected all but the \textit{Address} and \textit{Finish} events, which were off by 10 and 7 frames from their respective ground-truth frames.
Table~\ref{tab:seq_length} demonstrates that, after being trained using a sequence length of 64, smaller sequence lengths can be used at test time with only a modest decrease in performance. This has implications to mobile deployment, where smaller sequence lengths may be leveraged to reduce the memory requirements of the network.
\section{Conclusion}
This paper introduced the task of golf swing sequencing as the detection of key events in trimmed golf swing videos. The purpose of golf swing sequencing is to facilitate golf swing analysis by providing instant feedback in the field through the automatic extraction of key frames on mobile devices. To this end, a golf swing video database (GolfDB) was established to support the task of golf swing sequencing. Advocating mobile deployment, we introduce SwingNet, a lightweight baseline network with a deep hybrid convolutional and recurrent network architecture. Experimental results showed that SwingNet detected eight golf swing events at an average rate of 76.1\%, and six out of eight events at a rate of 91.8\%. Besides event labels, GolfDB also contains annotations for golf swing bounding boxes, player name and sex, club type, and view type. We provide this data with the intent of promoting future computer vision research in golf, such as the spatio-temporal localization of golf swings in untrimmed broadcast video, and view type recognition.
\noindent\textbf{Acknowledgments.} We acknowledge the NVIDIA GPU Grant Program, and financial support from the Canada Research Chairs program and the Natural Sciences and Engineering Research Council of Canada (NSERC).
{\small
\bibliographystyle{ieee}
\section{Introduction}
Soccer is one of the most popular sports in the world because of its simple rules. The three major tournaments of the game are the World Cup, the Euro Cup and the Copa America, of which the World Cup is the most popular. Following \cite{santos2014}, we know that since the early $1990$s the F\'ed\'eration Internationale de Football Association (FIFA) has been worried that teams are becoming defensive in terms of scoring goals. As a result, the total number of goals has been falling, which eventually leads to a fall in interest in the game. If we consider the $2006$ men's soccer World champion Italy and the $2010$ champion Spain, they only allowed two goals in their entire seven matches in the tournament \citep{santos2014}. On March $17$, $1994$, in USA Today, FIFA clearly specified its objective of encouraging attacking, high-scoring matches. Since then we have seen some changes in the objectives of some teams, such as in the $2014$ World Cup semifinal between Brazil and Germany, where Germany scored five goals in the first half. Furthermore, in the $2018$ men's soccer World Cup, France chose some offensive strategies. If we look at the French national team of the $2018$ men's World Cup, there were no big names (like Lionel Messi, Cristiano Ronaldo, Romelu Lukaku, etc.) and its average age was lower compared to other big teams. Therefore, all the players played without any pressure to win the World Cup; moreover, if a team has a big name, that player has been in the game for a long time and opposition teams already have strategies to stop him from scoring goals.
In this paper we present a new theoretical approach to finding an optimal weight associated with a soccer player in the presence of stochastic goal dynamics by using the Feynman path integral method, where the actions of every player on both teams lie on a $\sqrt{8/3}$-Liouville Quantum Gravity (LQG) surface \citep{feynman1949,fujiwara2017,pramanik2020,pramanik2020opt,pramanik2021consensus,pramanik2021effects,pramanik2021s}. This surface is continuous but not differentiable and, at $\sqrt{8/3}$, it behaves like a Brownian surface \citep{miller2015,miller2016,hua2019,pramanik2019,pramanik2020motivation,polansky2021,pramanik2021,pramanik2021e}. Furthermore, this approach can be used to obtain a solution for the stability of an economy after a pandemic crisis \citep{ahamed2021} and to determine optimal bank profitability \citep{islamrole,alamanalyzing,mohammad2010,alam2013,hossain2015,pramanik2016,alam2018,ahamed2021d,alam2021,alam2021output}. It can also be used in population dynamics problems such as \cite{minar2018}, \cite{minar2019}, \cite{minar2021} and \cite{minarrevisiting}. In a very competitive tournament like a soccer World Cup, all standard strategies for scoring goals are known to the opposition teams. In this environment, if a player's actions are stochastic in nature he has a comparative advantage: the opposition team knows the actions are stochastic, but it knows neither which type of stochastic action will be taken nor the player's complete stochastic action profile. Apart from that, conditions such as uncertainty due to rain, the dribbling and passing skill of a player, the type of match (i.e., a day match or a day-night match), and the advantage of having a home crowd are treated as the stochastic component of the goal dynamics on the way to determining the optimal weight.
The recent literature discusses whether a soccer team should choose offensive or defensive strategies \citep{santos2014}. Some studies find that the relatively new ``Three point'' rule and ``Golden goal'' rule do not necessarily help break a tie or produce more goals \citep{brocas2004, santos2014}, and that if the asymmetry between two teams is large these two rules induce the weaker team to play more defensively \citep{guedes2002}. Other studies, taken together, give mixed results on these two rules \citep{dilger2009,garicano2005, moschini2010, santos2014}. Therefore, we do not include these two rules in our analysis. On the other hand, since scoring a goal under the given conditions of a match is purely stochastic, a discounted reward in a player's dynamic objective function can give them more incentive to score, which is a common dynamic reward phenomenon in the animal kingdom \citep{kappen2007}.
\section{Construction of the problem}
In this section we construct forward stochastic goal dynamics on a Liouville-like quantum gravity action space together with a conditional expected dynamic objective function. The objective function gives the expected number of goals in a match based on the total number of goals scored by a team at the beginning of each time interval. For example, at the beginning of a match both teams start with $0$ goals; therefore, the initial condition is $Z_0=0$, where $Z_0$ represents the total number of goals scored by a team at time $0$ of the interval $[0,t]$. The objective of player $i\in I$ at the beginning of the $(M+1)^{th}$ game is:
\begin{multline}\label{f0}
\mathbf {OB}_\a^i:\overline{\mathbf Z}_\a^i(\mathbf W,s)\\=h_0^{i*}+\max_{W_i\in W} \E_0\left\{\int_{0}^t\sum_{i=1}^I\sum_{m=1}^M\exp(-\rho_s^im)\a^iW_i(s)h_0^i[s,w(s),z(s)]\,ds\bigg|\mathcal F_0^Z\right\},
\end{multline}
where $W_i$ is the strategy of player $i$ (the control variable), $\a^i\in\mathbb R$ is a constant weight, $I$ is the total number of players in a team including those on the reserve bench, $\rho_s^i\in(0,1)$ is a stochastic discount rate for player $i$, $w\in\mathbb R_+^{I'}$ and $z\in\mathbb R_+^{I'}$ are the time-$s$ ($s\geq 0$) dependent sets of all possible controls and goals available to them, $h_0^{i*}\geq 0$ is the initial condition of the function $h_0^i$, and $\mathcal F_0^Z$ is the filtration of the goal dynamics at the beginning of the game. Therefore, for player $i$ the difference between $W_i$ and $w$ is that the strategy $W_i$ is a subset of all possible strategies $w$ available to them at time $s$ before the $(M+1)^{th}$ game starts. Furthermore, we assume $h_0^i$ is an objective function known to player $i$, and partly unknown to the opposition team because of their incomplete and imperfect information.
Suppose, the stochastic differential equation corresponding to goal dynamics is
\begin{equation}\label{f1}
d\mathbf Z(s,\mathbf W)=\bm\mu[s,\mathbf W(s),\mathbf{Z}(s,\mathbf W)]ds+\bm\sigma[s,\hat{\bm{\sigma}},\mathbf W(s),\mathbf{Z}(s,\mathbf W)]d\mathbf B(s),
\end{equation}
where $\mathbf W_{I\times I'}(s)\subseteq\mathcal W\subset\mathbb R_+^{I\times I'}$ is the control space and $\mathbf Z_{I\times I'}(s)\subseteq\mathcal Z\subset\mathbb R_+^{I\times I'}$ is space of scoring goals under the soccer rules such that $z\in\mathbf Z$, $\mathbf B_{p\times 1}(s)$ is a $p$-dimensional Brownian motion, $\bm\mu_{I\times 1}>0$ is the drift coefficient and the positive semidefinite matrix $\bm\sigma_{I\times p}\geq 0$ is the diffusion coefficient such that
\[
\lim_{s\uparrow \infty} \E\,\bm\mu[s,\mathbf W(s),\mathbf{Z}(s,\mathbf W)]=\mathbf Z^*\geq0.
\]
The above condition states that, if enough time were allowed for a match, then $\mathbf Z^*$ goals would be achieved, which turns out to be a stable solution of this system. Finally,
\begin{equation}\label{f2}
\bm\sigma[s,\hat{\bm{\sigma}},\mathbf W(s),\mathbf{Z}(s,\mathbf W)]=\gamma\hat{\bm{\sigma}}+\bm\sigma^*[s,\mathbf W(s),\mathbf{Z}(s,\mathbf W)],
\end{equation}
where $\hat{\bm\sigma}>0$ comes from the strategies of the opposition team, with coefficient $\gamma>0$, and $\bm\sigma^*>0$ captures the weather conditions, the venue, and the popularity of the club or team before the start of the $(M+1)^{th}$ game. The forward stochastic differential Equation (\ref{f1}), together with the diffusion decomposition (\ref{f2}), is the core of our analysis; it incorporates all the important conditions present during a game.
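As a purely illustrative numerical sketch (not part of the formal development), the goal dynamics of Equation (\ref{f1}) with the diffusion split of Equation (\ref{f2}) can be simulated with an Euler--Maruyama scheme in one dimension; the specific drift and diffusion functions below are placeholder assumptions.
\begin{verbatim}
import numpy as np

def simulate_goals(mu, sigma_star, sigma_hat, gamma, t=90.0, n_steps=900,
                   z0=0.0, seed=0):
    """Euler-Maruyama sketch of Eqs. (f1)-(f2):
    dZ = mu(s, Z) ds + (gamma * sigma_hat + sigma_star(s, Z)) dB."""
    rng = np.random.default_rng(seed)
    ds = t / n_steps
    z = np.empty(n_steps + 1)
    z[0] = z0                                # Z_0 = 0 goals at kick-off
    for k in range(n_steps):
        s = k * ds
        drift = mu(s, z[k])
        diffusion = gamma * sigma_hat + sigma_star(s, z[k])
        z[k + 1] = z[k] + drift * ds + diffusion * rng.normal(0.0, np.sqrt(ds))
    return z

# Assumed functional forms: drift pulling the score toward a long-run level 2.5.
path = simulate_goals(mu=lambda s, z: 0.03 * (2.5 - z),
                      sigma_star=lambda s, z: 0.05,
                      sigma_hat=0.1, gamma=0.5)
\end{verbatim}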
\section{Definitions and Assumptions}
\begin{as}\label{asf0}
For $t>0$, let ${\bm{\mu}}(s,\mathbf{W},\mathbf{Z}):[0,t]\times \mathbb{R}^{I\times I'}\times \mathbb{R}^{I\times I'} \ra\mathbb{R}^{I\times I'}$ and $\bm{\sigma}(s,\hat{\bm{\sigma}},\mathbf{W},\mathbf{Z}):[0,t]\times \mathbb{S}^{(I\times I')\times t}\times \mathbb{R}^{I\times I'}\times \mathbb{R}^{I\times I'} \ra\mathbb{R}^{I\times I'}$ be measurable functions, with $(I\times I')\times t$-dimensional two-sphere $\mathbb{S}^{(I\times I')\times t}$, such that, for some positive constant $K_1$ and for all $\mathbf{W}\in\mathbb{R}^{I\times I'}$ and $\mathbf{Z}\in\mathbb{R}^{I\times I'}$, we have the linear growth condition
\[
|{\bm{\mu}}(s,\mathbf{W},\mathbf{Z})|+
|\bm{\sigma}(s,\hat{\bm{\sigma}},\mathbf{W},\mathbf{Z})|\leq
K_1(1+|\mathbf{Z}|),
\]
and there exists another positive, finite constant $K_2$ such that, for any other score vector
$\widetilde{\mathbf{Z}}_{(I\times I')\times 1}$, the Lipschitz condition
\[
|{\bm{\mu}}(s,\mathbf{W},\mathbf{Z})-
{\bm{\mu}}(s,\mathbf{W},\widetilde{\mathbf{Z}})|+|\bm{\sigma}(s,\hat{\bm{\sigma}},
\mathbf{W},\mathbf{Z})-\bm{\sigma}(s,\hat{\bm{\sigma}},\mathbf{W},\widetilde{\mathbf{Z}})|
\leq K_2\ |\mathbf{Z}-\widetilde{\mathbf{Z}}|,\notag
\]
is satisfied for all $\widetilde{\mathbf{Z}}\in\mathbb{R}^{I\times I'}$, and
\[
|{\bm{\mu}}(s,\mathbf{W},\mathbf{Z})|^2+
\|\bm{\sigma}(s,\hat{\bm{\sigma}},\mathbf{W},\mathbf{Z})\|^2\leq K_2^2
(1+|\mathbf{Z}|^2),
\]
where
$\|\bm{\sigma}(s,\hat{\bm{\sigma}},\mathbf{W},\mathbf{Z})\|^2=
\sum_{i=1}^I \sum_{j=1}^p|{\sigma^{ij}}(s,\hat{\bm{\sigma}},\mathbf{W},\mathbf{Z})|^2$.
\end{as}
\begin{as}\label{asf1}
There exists a probability space $(\Omega,\mathcal{F}_s^{\mathbf Z},\mathcal{P})$ with sample space $\Omega$, filtration at time $s$ of goal ${\mathbf{Z}}$ given by $\{\mathcal{F}_s^{\mathbf{Z}}\}\subset\mathcal{F}_s$, a probability measure $\mathcal{P}$ and a $p$-dimensional $\{\mathcal{F}_s\}$ Brownian motion $\mathbf{B}$, where the measure of valuation of players $\mathbf{W}$ is an $\{\mathcal{F}_s^{\mathbf{Z}}\}$-adapted process such that Assumption \ref{asf0} holds. Moreover, for the feedback control measure of players there exists a measurable function $h:[0,t]\times C([0,t];\mathbb{R}^{I\times I'})\ra\mathbf{W}$ for which $\mathbf{W}(s)=h[\mathbf{Z}(s,w)]$, so that Equation (\ref{f1}) has a unique strong solution \citep{ross2008}.
\end{as}
\begin{as}\label{asf3}
(i). $\mathcal Z\subset\mathbb R^{I\times I'}$ is such that a soccer player $i$ cannot go beyond the set $\mathcal Z_i\subset \mathcal Z$ because of the limitations of their skills. This immediately implies that the set $\mathcal Z_i$ differs across players.\\
(ii). The function $h_0^i:[0,t]\times\mathbb R^{2I'}\ra\mathbb R^{I'}$. Therefore, all players in a team at the beginning of the $(M+1)^{th}$ match have the objective function $h_0:[0,t]\times\mathbb R^{I\times I'}\times \mathbb R^{I\times I'}\ra\mathbb R^{I\times I'}$ such that $h_0^i\subset h_0$ in functional spaces, and both of them are concave, which is equivalent to the Slater condition \citep{marcet2019}.\\
(iii). There exists an $\bm\epsilon>0$ such that, for all $(\mathbf W,\mathbf Z)$ and $i=1,2,...,I$,
\[
\E_0\left\{\int_{0}^t\sum_{i=1}^I\sum_{m=1}^M\exp(-\rho_s^im)\a^iW_i(s)h_0^i[s,w(s),z(s)]\,ds\bigg|\mathcal F_0^Z\right\}\geq\bm\epsilon.
\]
\end{as}
\begin{definition}\label{def0}
Suppose $\mathbf{Z}(s,\mathbf W)$ is a non-homogeneous Fellerian semigroup on time in $\mathbb{R}^{I\times I'}$. The infinitesimal generator $A$ of $\mathbf{Z}(s,\mathbf W)$ is defined by,
\[
Ah(z)=\lim_{s\downarrow 0}\frac{\E_s[h(\mathbf{Z}(s,\mathbf W))]-h(\mathbf Z(\mathbf W))}{s},
\]
for $\mathbf Z\in\mathbb{R}_+^{I\times I'}$, where $h:\mathbb{R}_+^{I\times I'}\ra\mathbb{R}_+$ is a $C_0^2(\mathbb{R}_+^{I\times I'})$ function with compact support, the limit exists at $\mathbf Z(\mathbf W)>\bm 0$, and $\E_s$ represents the soccer team's conditional expectation of scoring goals $\mathbf{Z}$ at time $s$. Furthermore, if the above Fellerian semigroup is homogeneous in time, then $Ah$ is the Laplace operator.
\end{definition}
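Definition \ref{def0} can be illustrated numerically: for a small time step, $Ah(z)$ is approximated by the normalized difference between a simulated conditional expectation of $h$ one step ahead and $h(z)$ itself. The drift, diffusion and test function in the sketch below are illustrative assumptions, and the estimate is noisy because of the division by the small step.
\begin{verbatim}
import numpy as np

def generator_estimate(h, z0, mu, sigma, s=0.01, n_paths=500_000, seed=1):
    """Monte Carlo estimate of A h(z0) ~ (E[h(Z_s)] - h(z0)) / s using one
    Euler step Z_s = z0 + mu(z0) s + sigma(z0) sqrt(s) N(0, 1)."""
    rng = np.random.default_rng(seed)
    z_s = z0 + mu(z0) * s + sigma(z0) * np.sqrt(s) * rng.standard_normal(n_paths)
    return (h(z_s).mean() - h(z0)) / s

# Sanity check against A h = mu h' + (1/2) sigma^2 h'' for h(z) = z^2.
mu = lambda z: 0.03 * (2.5 - z)
sigma = lambda z: 0.2
approx = generator_estimate(lambda z: z ** 2, z0=1.0, mu=mu, sigma=sigma)
exact = 2.0 * 1.0 * mu(1.0) + sigma(1.0) ** 2
\end{verbatim}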
\begin{defn}\label{def1}
For a Fellerian semigroup $\mathbf{Z}(s,\mathbf W)$, for all $\epsilon>0$ and the time interval $[s,s+\epsilon]$ with $\epsilon\downarrow 0$, a characteristic-like quantum operator starting at time $s$ is defined as
\[
\mathcal{A} h(\mathbf Z)=\lim_{\epsilon\downarrow 0}
\frac{\log\E_s[\epsilon^2\ h(\mathbf{Z}(s,\mathbf W))]-\log[\epsilon^2h(\mathbf Z(\mathbf W))]}{\log\E_s(\epsilon^2)},
\]
for $\mathbf Z\in\mathbb{R}_+^{I\times I'}$, where $h:\mathbb{R}^{I\times I'}\ra\mathbb{R}$ is a $C_0^2\left(\mathbb{R}_+^{I\times I'}\right)$ function, $\E_s$ represents the conditional expectation of goal dynamics $\mathbf{Z}$ at time $s$, for $\epsilon>0$ and a fixed $h$ we have the sets of all open balls of the form $B_\epsilon(h)$ contained in $\mathcal{B}$ (set of all open balls) and as $\epsilon\downarrow 0$ then $\log\E_s(\epsilon^2)\ra\infty$.
\end{defn}
\begin{defn}\label{def2}
Following \cite{frick2019} a dynamic conditional expected objective function explained in Equation (\ref{f0}) on the goal dynamics $\mathbf Z\in\{\mathbf Z_0,\mathbf Z_1,...,\mathbf Z_t\}$ is a tuple $\left(s,\bm\a^i,\{\mathbf{OB}_\a^i(W_i),\bm\tau_{W_i}\}_{W_i\in w}\right)$ where\\
(i). $w$ is a finite strategy space where player $i$ can choose strategy $W_i$ and $\bm\a^i$ is all probabilities available to them from where they can choose $\a^i$.\\
(ii). For each strategy $W_i\in w$, $\mathbf{OB}_\a^i\in\mathbb R^{\mathbf Z}$ is constrained objective function of soccer player $i$ such that Definition \ref{def1} holds.\\
(iii). For each strategy $W_i\in w$, let $\bm\tau_{W_i}$ denote the rain or other environmental random factors that lead to a stoppage or termination of game $M+1$ at time $s$; it is a proper, finitely-additive probability measure on the Borel $\sigma$-algebra on $\mathbb R^{\mathbf Z}$.
\end{defn}
Following \cite{hellman2019}, two main types of logic are used in game theory: First-order Logic and Infinitary Logic. First-order mathematical logic is built on a finite base language with connective symbols such as conjunction, negation, conversion, inversion and contrapositivity; a countable collection of variables; quantifier symbols; constant symbols; predicate symbols; and function symbols \citep{hellman2019}. Quantum formulae are of the form $h(0,...,t)$ such that the game operates in a quantum field with the characteristic-like quantum generator defined in Definition \ref{def1}. Comparing this statement with the interpretation of Atomic formulae in \cite{hellman2019}, we can say that $h$ is a functional predicate symbol on the terms $0,...,t$. Our Quantum formulae are a more general version of Atomic formulae in the sense that Quantum formulae allow for improper differentiability of the continuous strategies on the $2$-sphere available to all the players on both teams.
The problem with constructing a First-order logic is that it can only handle expressions with finitely many variables. Dealing with infinitely many variables in the strategy space is a primary objective of our paper. When player $i$ tries to score a goal, they change their actions based on the strategies of the opposition team and on their own skills. If player $i$ is a senior player, then the opposition team has more information about that player's strengths and weaknesses, which is also known to player $i$. Therefore, at time $s$ of match $M+1$, player $i$'s action is mixed. Furthermore, we assume each player's strategy set is a convex polygon in which each side has length unity. The reason is that the probability of choosing a strategy lies between $0$ and $1$. Therefore, if a player has three strategies, their strategy set is an equilateral triangle with each side of length unity.
To obtain a more general result we extend the standard First-order logic to Infinitary logic \citep{hellman2019}. This logic considers mixed actions with infinitely many possible strategies, so that each player is able to play a combination of infinitely many strategies at infinitely many states. For example, a striker $i$ gets the ball at time $s$ either from a team mate or as the result of a missed pass by an opponent. As striker $i$'s objective is to kick the ball into the goal, their strategy depends on the total number of opponents between them and the goal. Therefore, striker $i$ plays a mixed action, or places weight $\a^i$ on scoring strategy $a^i$ given the total number of goals scored $\mathbf Z_s$ at time $s$. For a countable collection of objective functions $\{{\bf{OB}}_\a^i\}_{i=1}^\infty$ such that $\bigwedge_{i=1}^\infty{\bf{OB}}_\a^i$ and $\bigvee_{i=1}^\infty{\bf{OB}}_\a^i$ exist, convergence to $\overline{\mathbf Z}_\a$ in the real numbers is expressed via the formula
\[
{\bf{OB}}_\a\left(\{\overline{\mathbf Z}_\a^i\}_{i=1}^I, \overline{\mathbf Z}_\a\right)=\forall\varepsilon\left[\varepsilon>0\ra\bigvee_{I\in\mathbb N}\bigwedge_{i>I}\left(\overline{\mathbf Z}_\a^i-\overline{\mathbf Z}_\a\right)^2<\varepsilon^2\right]
\]
or, without the quantifier the above statement becomes
\[
{\bf{OB}}_\a\left(\{\overline{\mathbf Z}_\a^i\}_{i=1}^I, \overline{\mathbf Z}_\a\right)=\bigwedge_{\mathcal K\in\mathbb N}\bigvee_{I\in\mathbb N}\bigwedge_{i>I}\mathcal K^2\left(\overline{\mathbf Z}_\a^i-\overline{\mathbf Z}_\a\right)^2<1.
\]
\subsection{$\sqrt{8/3}$ Liouville quantum gravity surface}
Following \cite{gwynne2016}, a Liouville quantum gravity (LQG) surface is a random Riemann surface parameterized by a domain $\mathbb D\subset\mathbb S^{(I\times I')\times t}$ with Riemann metric tensor $e^{\gamma k(l)} d\mathbf{Z}\otimes d\widehat{\mathbf{Z}}$, where $\gamma\in(0,2)$, $k$ is some variant of the Gaussian free field (GFF) on $\mathbb D$, $l$ is a point of the $2$-sphere $\mathbb S^{(I\times I')\times t}$ and $d\mathbf{Z}\otimes d\widehat{\mathbf{Z}}$ is the Euclidean metric tensor. In this paper we consider the case $\gamma=\sqrt{8/3}$ because it corresponds to uniformly random planar maps. Because of incomplete and imperfect information, the strategy space is quantum in nature and each player's decision is a point on a dynamic convex strategy polygon of that quantum strategy space. Furthermore, $k:\mathbb S^{(I\times I')\times t}\ra \mathbb R^{I\times I'}$ is a distribution such that each player's action can represent different shots at goal, including dribbling and passing. Although the strategy set is deterministic, the action on this space is stochastic.
\begin{defn}
An equivalence relation $\mathcal E$ on $\mathbb S$ is smooth if there is another $2$-sphere $\mathbb S'$ and a distribution function with a conformal map $k':\mathbb S\ra\mathbb S'$ such that for all $l_1,l_2\in\mathbb S$, we have $l_1\mathcal E l_2\iff k(l_1)=k(l_2)$.
\end{defn}
Now if $\mathcal E$ is the equivalence relation of each player's action, the smooth distribution function $k'$ is an auxiliary tool which helps determining whether $l_1$ and $l_2$ are in the same action component which occurs iff $k(l_1)=k(l_2)$.
\begin{exm}
Suppose $\mathbb S^I=\mathbb C^I$, where $\mathbb C^I$ represents a complex space. The relation given by $l_1\sim_{\mathcal E} l_2$ if and only if $l_1-l_2\in\mathbb S'$ is smooth, as the distribution $k:\mathbb C^I\ra[0,1)^I$ defined by $k(l_1,...,l_I)=(l_1-\lfloor l_1\rfloor,...,l_I-\lfloor l_I\rfloor )$, where $\lfloor l_m\rfloor=\max\{l^*\in\mathbb S'\,|\,l^*\leq l_m\}$ is the integer part of $l_m$, satisfies $k(l_1)=k(l_2)$ iff $l_1\mathcal E l_2$.
\end{exm}
Furthermore, $\sqrt{8/3}$-LQG surface is an equivalence class of action on $2$-sphere $(D,k)$ such that $D\subset\mathbb S^{(I\times I')\times t}$ is open and $k$ is a distribution function which is some variant of a GFF \citep{gwynne2016}. Action pairs $(D,k)$ and $(\widetilde D,\tilde k)$ are equivalent if there exists a conformal map $\zeta:\widetilde D\ra D$ such that, $\tilde k=k\circ\zeta+Q\log|\zeta'|$, where $Q=2/\gamma+\gamma/2=\sqrt{3/2}+\sqrt{2/3}$ \citep{gwynne2016}.
Suppose $I$ is a non-empty finite set of players, $\mathcal F_s^{\bf Z}$ is the filtration of goal $\bf Z$, and $\Omega$ is a sample space. For each player $i\in I$ there is an equivalence relationship $\mathcal E_i\in\mathcal E$ on the $2$-sphere, called player $i$'s quantum knowledge. Therefore, the $\sqrt{8/3}$-LQG player knowledge space at time $s$ is $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,\mathcal E)$. Given a $\sqrt{8/3}$-LQG player knowledge space $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,\mathcal E)$, the equivalence relationship $\mathcal E$ is the transitive closure of $\bigcup_{i\in I}\mathcal E_i$.
\begin{defn}
A knowledge space $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,\mathcal E)$ such that $i\in I$, each equivalent class of $\mathcal E_i$ with Riemann metric tensor $e^{\sqrt{8/3}k(l)}d\bf Z\otimes d\widehat{\mathbf{Z}}$ is finite, countably infinite or uncountable is defined as purely $\sqrt{8/3}$-LQG knowledge space which is purely quantum in nature (for detailed discussion about purely atomic knowledge see \cite{hellman2019}).
\end{defn}
\begin{defn}
For a fixed quantum knowledge space $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,\mathcal E)$, for player $i$ a dribbling and passing function $p^i$ is a mapping $p^i:\Omega\times\mathbb S\ra\Delta(\Omega\times \mathbb S)$ which is $\sigma$-measurable and the equivalence relationship has some measure in $2$-sphere.
\end{defn}
Therefore, the dribbling and passing space is a tuple $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,p)$, which is a type of $\sqrt{8/3}$-LQG knowledge space. There are other skills needed to score a goal, such as power, speed, agility, shielding, tackling, trapping and shooting, but we assume that only the dribbling and passing function is directly related to the quantum knowledge. The rest of the uncertainties come from the stochastic part of the goal dynamics. The dribbling and passing space of player $i$ implicitly defines the quantum knowledge relationship $\mathcal E_i$ of dribbling and passing functions. Hence, $(z,l)\mathcal E_i(z',l')$ iff $p_{z,l}^i=p_{z',l'}^i$, where $z$ is the goal situation from player $i$'s perspective and $l$ is player $i$'s action on the $2$-sphere at time $s$.
\begin{defn}
For player $i\in I$ and for all $z\in\Omega$, $l\in\mathbb S$ dribbling and passing space tuple $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,p)$ has a function $p_{z,l}^i$ which is purely quantum. Therefore, this space is purely quantum dribbling and passing space.
\end{defn}
\begin{definition}
For $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,p)$, if $p_{z,l}^i[z,l]>0$ for all $i\in I$, $z\in\Omega$ and $l\in\mathbb S$, then it is positive. Furthermore, if the dribbling and passing space is purely quantum and positive, then the knowledge space is $\sqrt{8/3}$-LQG.
\end{definition}
\begin{defn}
A purely quantum space $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,p)$ is smooth if $\Omega$, the $2$-sphere $\mathbb S$ and the common quantum knowledge equivalence relation $\mathcal E$ are smooth. As $k$ is a version of a GFF, the quantum space is not smooth in the vicinity of the singularity, and for the time being we exclude those points to make the dribbling and passing space smooth.
\end{defn}
\begin{example}
Suppose a male soccer team has three strikers A, B and C such that player A is a left wing, B is a center-forward and C is a right wing. Furthermore, at time $s$ player B faces an opposition central defensive midfielder (CDM) together with at least one of the center backs (left or right) and decides to pass the ball to either of players A and C, as they are in relatively unmarked positions. Player A knows that, if B passes to him, then based on the goal condition $z_1$ and his own judgment he will attempt to score either by taking a long-distance shot with conditional probability $u(u_1|z_1)$ or by running with the ball closer to the goal post with probability $u(u_2|z_1)$, where for $k=1,2$, $u_k$ takes the value $1$ when player A is able to score using either of the two approaches. As the dribbling and passing space is purely quantum, player A's expected payoff to score is $u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}$, where $l_k$ is the $k^{th}$ point observed on player A's $2$-sphere $\mathbb S$. Similarly, player C's payoff of scoring a goal is $u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}$, where $u_3$ and $u_4$ are the probabilities of taking a direct shot at goal and of running closer to the goal post and scoring. However, player B's decision tends to be somewhat scattered, as he has to choose between passing to A or C, or mixing up the calculation and deciding to score himself. Suppose player B mixes his strategy and decides to score himself with probability $v$, or passes to either of A and C with probability $(1-v)$, which is known to players A and C.
Let us model the quantum knowledge space of these three players as $\Omega=\mathbb{R}_+\times \mathbb{R}_+\times \mathbb{R}_+$ and $\mathbb S=\mathbb{R}_+\times \mathbb{R}_+\times \mathbb{R}_+$. Once the payoffs of left wing A and right wing C have been revealed, player B calculates the ratio of the two expected payoffs. Therefore, player A knows that player B is giving him a pass if
\[
\frac{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}>1
\]
with probability $(1-v)$, and with probability $v$ player B decides to score himself if the ratio is less than or equal to $1$. As both wing players calculate the ratio of their expected payoffs, it is enough to define $\Omega=\mathbb R_+\times \mathbb R_+$ and $\mathbb S=\mathbb R_+\times \mathbb R_+$ for the two wing players without losing any information. The quantum equivalence class $\mathcal E^A$, which is player A's knowledge, can be represented as
\begin{align*}
&\left[\left(0,\frac{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}>1\right),\right.\\&\left.\left(0,\frac{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}\leq 1\right)\right]_{(z_k\in\mathbb R_+,l_k\in \mathbb R_+,\ \text{for all $k=1,2$})},
\end{align*}
and player C's knowledge corresponding to the equivalence class $\mathcal E^C$ is
\begin{align*}
&\left[\left(\frac{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}>1,0\right),\right.
\\ &\left.\left(\frac{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}\leq 1,0\right)\right]_{(z_k\in\mathbb R_+,l_k\in \mathbb R_+,\ \text{for all $k=1,2$})}.
\end{align*}
Therefore, the belief of player A is
\begin{align*}
p_{z,l}^A[z,l]&=p_{[u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)},u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}]}^A[z,l]\\&=\begin{cases}
1-v & \text{if}\ \frac{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}>1,\\
v & \text{if}\ \frac{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}\leq 1,\\
0 & \text{otherwise},
\end{cases}
\end{align*}
and similarly for the right wing C,
\begin{align*}
p_{z,l}^C[z,l]&=p_{[u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)},u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}]}^C[z,l]\\&=\begin{cases}
1-v & \text{if}\ \frac{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}>1,\\
v & \text{if}\ \frac{u(u_3|z_2)e^{\sqrt{8/3}k_2(l_3)}+u(u_4|z_2)e^{\sqrt{8/3}k_2(l_4)}}{u(u_1|z_1)e^{\sqrt{8/3}k_1(l_1)}+u(u_2|z_1)e^{\sqrt{8/3}k_1(l_2)}}\leq 1,\\
0 &\text{otherwise}.
\end{cases}
\end{align*}
\end{example}
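A hedged numerical sketch of this example: given assumed conditional scoring probabilities and sampled field values $k_j(l)$, the two expected payoffs, their ratio, and player A's belief can be computed directly.
\begin{verbatim}
import numpy as np

GAMMA = np.sqrt(8.0 / 3.0)

def expected_payoff(probs, field_vals):
    """sum_k u(u_k | z) * exp(sqrt(8/3) * k(l_k)) for one wing player."""
    return sum(p * np.exp(GAMMA * k) for p, k in zip(probs, field_vals))

# Assumed inputs: (long shot, run-in) probabilities and GFF-type field values.
payoff_A = expected_payoff([0.30, 0.45], [0.10, -0.05])
payoff_C = expected_payoff([0.25, 0.40], [0.20, 0.00])
ratio = payoff_A / payoff_C

v = 0.2                                  # probability that B shoots himself
belief_A = (1 - v) if ratio > 1 else v   # player A's belief of receiving the pass
\end{verbatim}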
\subsection{$\sqrt{8/3}$ Liouville quantum gravity metric}
For a domain $\mathbb D\subset \mathbb S^{(I\times I')\times t}$ and a variant $k$ of the GFF, suppose $(\mathbb D, k)$ is a $\sqrt{8/3}$-LQG action surface. As $k$ is a variant of the GFF, it induces a metric $\bm \omega_k$ on $\mathbb{D}$; furthermore, if $(\mathbb D, k)$ is a quantum sphere, the metric space $(\mathbb D, \bm\omega_k)$ is isometric to the Brownian map, with two additional properties: a Brownian disk is obtained if $(\mathbb D, k)$ is a quantum disk, and a Brownian plane is obtained if $(\mathbb D, k)$ is a $\sqrt{8/3}$-quantum cone \citep{bettinelli2017,curien2014,gwynne2016}. Following \cite{gwynne2016}, a $\sqrt{8/3}$-LQG surface can be represented as a Brownian surface with a conformal structure. We assume the strategy space on which the action is taken has the properties of a $\sqrt{8/3}$-LQG surface because each soccer player has a radius $r$ around themselves such that, if an opponent comes inside this radius, he would be able to tackle. Furthermore, as $r\ra 0$, the player has complete control over the opponent, assuming no mistakes and the same skill level. Therefore, the strategy space close to the player (i.e., near $r=0$) bends towards him in such a way that the surface can be approximated by a surface on a $2$-sphere; furthermore, as the movement on this space is stochastic in nature, it behaves like a Brownian surface whose convex strategy polygon changes shape at every time point based on the condition of the game. At $r=0$ the surface hits an essential singularity and the player has infinite power to control the ball.
The $\sqrt{8/3}$-LQG metric will be constructed on a $2$-sphere $(\mathbb S,k)$ which is also quantum in nature. Based on the distribution function $k$, suppose $C_i$ is the collection of i.i.d. locations of player $i$ on the strategy space, sampled uniformly from the area of their dynamic convex polygon $\be_{k_i}$. Furthermore, consider that inside a polygon with area measure $\be_{k_i}$, player $i$'s action at time $s$ is $a_{s}^i$ and at time $\tau$ is $a_\tau^i$, where for $\varepsilon>0$ we define $\tau=s+\varepsilon$. Therefore, for $a_s^i,a_\tau^i\in C_i$ there is a Quantum Loewner Evolution growth process $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ starting from $a_s^i$ and ending at $a_\tau^i$ for all $\tilde s\in[s,\tau]$.
Let there be two growth processes $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ and $\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$, both of which start at $a_s^i$ and end at $a_\tau^i$. We will show that on this surface they are homotopic. Suppose $\{p_i^0,p_i^1,...,p_i^M\}$ are the vertices of an $M$-partition of player $i$'s finite convex strategy polygon $\be_{k_i}$ on $\mathbb S^{(I\times I')\times t}$. Hence, each $\be_k\in\be_{k_i}$ can be uniquely represented by $\be_k=\sum_{n=0}^M\theta_n(\be_k)p_i^n$, where $\theta_n(\be_k)\in[0,1]$ for all $n=0,...,M$ and $\sum_{n=0}^M\theta_n(\be_k)=1$. Thus,
\[
\theta_n(\be_k)=\begin{cases}
\text{barycentric coordinate of $\be_k$ relative to $p_i^n$ if $p_i^n$ is a vertex of $\be_{k_i}$,}\\ 0 \ \ \text{otherwise.}
\end{cases}
\]
Therefore, the nonzero $\theta_n(\be_k)$ are the barycentric coordinates of $\be_k$. Furthermore, as $\theta_n$ is either identically equal to zero or the barycentric function of $\be_{k_i}$ relative to the vertex $p_i^n$, the map $\theta_n:\be_{k_i}\ra \mathbf I$ is continuous, where $\mathbf I$ is the unit interval of values between $0$ and $1$. Now assume the growth process has the map $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}:\mathbf Y_i\ra\be_{k_i}$, where $\mathbf Y_i$ is an arbitrary topological space, with the unique expression $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}(\mathbf Y_i)=\sum_{n=0}^M\theta_n\circ p_i^n\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$. As this growth process is a Quantum Loewner Evolution, it is continuous. Therefore, $\theta_n\circ \{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}:\mathbf Y_i\ra \mathbf I$ is continuous.
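For a triangular strategy polygon, the barycentric representation used above can be computed explicitly; the minimal sketch below (with the unit-side equilateral triangle of three strategies and an arbitrary interior point as assumptions) illustrates the computation.
\begin{verbatim}
import numpy as np

def barycentric(point, p0, p1, p2):
    """Barycentric coordinates (theta_0, theta_1, theta_2) of `point` relative
    to the triangle with vertices p0, p1, p2; they sum to 1 and are nonnegative
    whenever the point lies inside the triangle."""
    T = np.column_stack((p1 - p0, p2 - p0))
    th = np.linalg.solve(T, point - p0)
    return np.array([1.0 - th.sum(), th[0], th[1]])

# Unit-side equilateral triangle (three strategies) and a mixed action inside it.
p0, p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
theta = barycentric(np.array([0.5, 0.3]), p0, p1, p2)
reconstructed = theta[0] * p0 + theta[1] * p1 + theta[2] * p2  # recovers the point
\end{verbatim}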
\begin{lem}
Let $\be_{k_i}$ be a finite convex strategy polygon of player $i$ in $\mathbb S^{(I\times I')\times t}$ such that the vertices are $\{p_i^0,...,p_i^M\}$. Suppose $\mathbf Y_i$ be an arbitrary topological space and let $\{\mathcal G_{n,\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}:\mathbf Y_i\ra\mathbf I$ be a family of growth processes, one for each vertex $p_i^n$, with $\sum_{n=0}^Mp_i^n\{\mathcal G_{n,\tilde s}^{a_s^i,a_\tau^i}(y_i)\}_{\tilde s\geq 0}\in\be_{k_i}$ for each $y_i\in\mathbf Y_i$. Then $y_i\mapsto\sum_{n=0}^Mp_i^n\{\mathcal G_{n,\tilde s}^{a_s^i,a_\tau^i}(y_i)\}_{\tilde s\geq 0}\in\be_{k_i}$ is a continuous map $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}:\mathbf Y_i\ra\be_{k_i}$ as $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ is a Quantum Loewner Evolution.
\end{lem}
\begin{proof}
As $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ is a Quantum Loewner Evolution, it is continuous. Furthermore, as $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}(\mathbf Y_i)\subset\be_{k_i}\subset\mathbb S^{(I\times I')\times t}$, the continuity of $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ as a map of $\mathbf Y_i$ into $\be_{k_i}$ follows.
\end{proof}
\begin{lem}
Let $\be_{k_i}$ be a finite convex strategy polygon of player $i$ in $\mathbb S^{(I\times I')\times t}$. Suppose, $\mathbf Y_i$ be an arbitrary space such that, $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0},\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}:\mathbf Y_i\ra\be_{k_i} $ is a Quantum Loewner Evolution. For each $y_i\in\mathbf Y_i$ suppose there is a smaller finite convex strategy polygon $\rho\in\be_{k_i}$ containing both $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}(y_i)\}_{\tilde s\geq 0}$ and $\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}(y_i)\}_{\tilde s\geq 0}$. Then these two growth processes $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ and $\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ are homotopic.
\end{lem}
\begin{proof}
For each $y_i\in\mathbf Y_i$ the line joining the two growth processes $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}(y_i)\}_{\tilde s\geq 0}$ and $\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}(y_i)\}_{\tilde s\geq 0}$ lies in $\rho$. Therefore, this line segment is certainly inside $\be_{k_i}$. The representation of this segment is
\begin{align*}
\mathcal S(y_i,U)=\sum_{n=0}^M p_i^n\left[U\theta_n\left(\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}(y_i)\}_{\tilde s\geq 0}\right)+(1-U)\theta_n\left(\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}(y_i)\}_{\tilde s\geq 0}\right)\right].
\end{align*}
Furthermore, as each mapping $U\theta_n\circ \{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}+(1-U)\theta_n\circ \{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}:\mathbf Y_i\times\mathbf I\ra\mathbf I$ is continuous, the function $\mathcal S:\mathbf Y_i\times\mathbf I\ra\be_{k_i}$ is continuous. This is enough to show homotopy.
\end{proof}
\begin{cor}
Let $\be_{k_i}$ be a finite convex strategy polygon of player $i$ in $\mathbb S^{(I\times I')\times t}$. Suppose, $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0},\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}:\be_{k_i}\ra\be_{k_i} $ is a Quantum Loewner Evolution. For each $\be_k\in\be_{k_i}$ suppose there is a smaller finite convex strategy polygon $\rho\in\be_{k_i}$ containing both $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}(\be_k)\}_{\tilde s\geq 0}$ and $\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}(\be_k)\}_{\tilde s\geq 0}$. Then these two growth processes $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ and $\{\mathcal G_{*\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ are homotopic.
\end{cor}
From the above Corollary it is clear that any two growth processes on the interval $[a_s^i,a_\tau^i]$ inside the finite strategy polygon $\be_{k_i}$ are homotopic. Therefore, without any further restriction, a soccer player can follow any such path without hampering their payoff.
\begin{defn}
A collection $k=\{k_{(I\times I')\times t}\}$ which are assumed to be homomorphisms $k_{(I\times I')\times t}:\mathbb C^{(I\times I')\times t}(\be_{k_i})\ra\mathbb C^{(I\times I')\times t}(\hat\be_{k_i})$, one for each dimension $(I\times I')\times t\geq 0$, and such that $\partial_{[(I+1)\times(I'+1)]\times (t+1)}\circ k_{[(I+1)\times(I'+1)]\times (t+1)}=k_{(I\times I')\times t}\circ\ \partial_{[(I+1)\times(I'+1)]\times (t+1)}$ is called a chain transformation or chain map on $\mathbb S$ where, $\hat \be_{k_i}$ is a convex strategy polygon of player $i$ other than $\be_{k_i}$.
\end{defn}
For a finite convex polygon $\be_{k_i}$ we take the chains over $2$-sphere $\mathbb S$ and the chain transformation $k$ of $\mathbb C_*(\be_{k_i};\mathbb S)$ into itself. Then the following relationship of trace is going to hold.
\begin{lem}
(Hopf trace theorem, \cite{granas2003}) Suppose player $i$'s strategy polygon $\be_{k_i}$ has dimension $(I\times I')\times t$, and let the collection of distribution functions
\[
k_*:\mathbb C_*(\be_{k_i},\mathbb S)\ra \mathbb C_*(\be_{k_i},\mathbb S),
\]
be any chain transformation. Then
\[
\sum_{m=0}^{(I\times I')\times t}(-1)^m\text{tr}(k_m)=\sum_{m=0}^{(I\times I')\times t}(-1)^m\text{tr}(k_{m*}),
\]
where $\text{tr}(.)$ represents trace of the argument.
\end{lem}
For a finite convex strategy polygon $\be_{k_i}$ in $\sqrt{8/3}$-LQG, consider the map $\hat k:\be_{k_i}\ra\be_{k_i}$. Using the rationals $\mathbb Q$ as coefficients, each induced homomorphism $\hat k_*^{(I\times I')\times t}:\mathbb H_{(I\times I')\times t}(\be_{k_i};\mathbb Q)\ra \mathbb H_{(I\times I')\times t}(\be_{k_i};\mathbb Q)$ is an endomorphism, where $\hat k_*$ is a linear transformation of the vector space. Following \cite{granas2003}, each $\hat k_*^{(I\times I')\times t}$ has a trace $\text{tr}\left[\hat k_*^{(I\times I')\times t}\right]$.
\begin{defn}
For the strategy polygon of player $i$ with $\dim({\be_{k_i}})\leq(I\times I')\times t$ and the endomorphic map $\hat k:\be_{k_i}\ra\be_{k_i}$, the Lefschetz number $\o(\hat k)$ is
\[
\o(\hat k)=\sum_{m=0}^{(I\times I')\times t}(-1)^m\text{tr}[\hat k_{m*},\mathbb H_m(\be_{k_i};\mathbb Q)].
\]
\end{defn}
\begin{lem}
\citep{granas2003} For a strategy polygon $\be_{k_i}$, if $\hat k:\be_{k_i}\ra\be_{k_i}$ is continuous, then $\o(\hat k)$ depends only on the homotopy class of $\hat k$. Furthermore, the Lefschetz number $\o(\hat k)$ is an integer irrespective of any $\sqrt{8/3}$-LQG field characteristics.
\end{lem}
\begin{prop}\label{fp0}
(Lefschetz-Hopf fixed point theorem on a $\sqrt{8/3}$-LQG surface) If player $i$ has a finite strategy polygon $\be_{k_i}$ on $\mathbb S$ with a map $\hat k:\be_{k_i}\ra\be_{k_i}$, then $\hat k$ has a fixed point whenever the Lefschetz number $\o(\hat k)\neq0$.
\end{prop}
\begin{proof}
We prove this theorem by contradiction. Let $\hat k$ have no fixed points. As player $i$'s strategy polygon $\be_{k_i}$ is compact, there exists $\varepsilon>0$ such that the measure $d[\hat k(\be_k),\be_k]\geq\varepsilon$ for all $\be_k\in\be_{k_i}$. In this proof a repeated barycentric subdivision of $\be_{k_i}$ is used, with a fixed quadragulation of mesh $<\varepsilon/n_i$, where $n_i\in\mathbb N$.
Let $\be_{k_i}^{(m)}$ be the $m^{th}$ barycentric subdivision of $\be_{k_i}$ and let the mapping $\psi:\be_{k_i}^{(m)}\ra\be_{k_i}$ be a simplicial approximation of $\hat k$. Define the chain map induced by the $m^{th}$ barycentric subdivision as $\text{sub}^m:\mathbb C_*(\be_{k_i})\ra\mathbb C_*(\be_{k_i}^{(m)})$. It is enough to determine the trace of $\psi\,\text{sub}^m:\mathbb C_q(\be_{k_i};\mathbb Q)\ra\mathbb C_q(\be_{k_i};\mathbb Q)$ for each oriented $q$-sided smaller finite strategy polygon inside $\be_{k_i}$, where $\mathbb C_q(\be_{k_i};\mathbb Q)$ is the chain group on the $q$-sided smaller strategy polygons. To determine the trace we use the Hopf trace theorem stated above. Suppose, for each $\mathbb C_q(\be_{k_i};\mathbb Q)$, there exists a basis $\{\rho_l^q\}$ of all oriented $q$-sided smaller strategy polygons in $\be_{k_i}$. Expressing $\psi\,\text{sub}^m$ in terms of this basis gives
\[
\text{sub}^m\rho_l^q=\sum\a_{ll'}k_{l'}^q,\ \a_{ll'}=0,\pm 1,\ k_{l'}^q\in\be_{k_i}^{(m)}, \ k_{l'}^q\subset\rho_l^q.
\]
Hence,
\[
\psi\text{sub}^m\rho_l^q=\sum\a_{ll'}\psi(k_{l'}^q)=\sum \gamma_{ll'}\rho_{l'}^q,
\]
where the $\gamma_{ll'}$ are the coefficients in this basis. Now suppose $\nu$ is a vertex of some $k_{l'}^q\subset\rho_l^q$. Define $\nu_*:=\be_{k_i}\setminus\bigcup\{\rho\in\be_{k_i}|\nu\notin\rho\}$. Then for the vertex $\nu$ we have $\hat k(\nu)\in\hat k(\nu_*)\subset\psi_*(\nu)$ such that $d[\hat k(\nu),\psi(\nu)]<\varepsilon/n_i$, where $\psi_*:=\be_{k_i}\setminus\{\rho\in\be_{k_i}|\psi\notin\rho\}$. Clearly,
\[
d[\nu,\psi(\nu)]\geq d[\nu,\hat k(\nu)]-d[\hat k(\nu),\psi(\nu)]\geq 2\varepsilon/n_i,
\]
so that if $\psi(\nu)$ belongs to $\rho_l^q$, then $\delta(\rho_l^q)\geq 2\varepsilon/n_i$, which contradicts our assumption that the mesh of the quadragulation is $<\varepsilon/n_i$. Therefore, $\text{tr}[\psi\,\text{sub}^m,\mathbb C_q(\be_{k_i};\mathbb Q)]=0$ for each $q$-sided finite smaller strategy space. Finally, by the Hopf trace theorem the Lefschetz number $\o(\hat k)=0$. This completes the proof.
\end{proof}
According to \cite{gwynne2016}, it is a continuum analog of first passage percolation on a random planar map. For $\tau>0$, let $\eta_s^\tau$ be a whole-plane Schramm-Loewner Evolution with parameter $6$ ($SLE_6$) from $a_s^i$ to $a_\tau^i$, sampled independently of $k$. The curve $\eta_s^\tau$ is run up to the terminal time $\tau$, for all $\varepsilon>0$, and is determined by $k$. For $\varepsilon>0$ and $\tilde s\in[s,\tau]$, suppose $\mathcal G_{\tilde s}^{a_s^i,a_\tau^i,\tau}:=\eta_s^\tau([s,\tau\wedge h_s^\tau])$, where $h_s^\tau$ is the first time $\eta_s^\tau$ hits $a_\tau^i$ when player $i$ starts the process at $a_s^i$ and moves towards $a_\tau^i$ \citep{gwynne2016}. Following \cite{miller2015}, for $\varepsilon\downarrow 0$ a growing family of sets $\{\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}\}_{\tilde s\geq 0}$ in the action interval $[a_s^i,a_\tau^i]$ can be found by taking almost sure limits of an appropriately chosen subsequence, which \cite{gwynne2016} calls the Quantum Loewner Evolution defined on the $\sqrt{8/3}$-quantum sphere (QLE($8/3,0$)). For $\varepsilon\geq0$, let $\mathcal M_{\tilde s}^{a_s^i,a_\tau^i}(\bf Z, \bf W)$ be the length of the boundary of the connected component of $\mathbb S\setminus\mathcal G_{\tilde s}^{a_s^i,a_\tau^i}$ that contains the terminal action $a_\tau^i$, where $\bf Z$ denotes all possible goals and $\bf W$ all possible control strategies available to player $i$ at time $s$. Furthermore, assuming the stopping time to reach action $a_\tau^i$ is $\sigma_{\mathcal M}^{a_s^i,a_\tau^i}>0$, define a measure $\mathcal M\geq 0$ such that
\[
\mathcal M({\bf Z},{\bf W})=\int_s^{\sigma_{\mathcal M}^{a_s^i,a_\tau^i}}\frac{1}{\mathcal M_{\tilde s}^{a_s^i,a_\tau^i}({\bf Z}, {\bf W})}d\tilde s.
\]
Define $\widehat{\mathcal G_{\mathcal M}^{a_s^i,a_\tau^i}}:=\mathcal G_{\sigma_{\mathcal M}^{a_s^i,a_\tau^i}}^{a_s^i,a_\tau^i}$. Then, following \cite{gwynne2016}, the $\sqrt{8/3}$-LQG distance between $a_s^i$ and $a_\tau^i$ is defined as
\[
{\bm\omega}_k(a_s^i,a_\tau^i):=\inf\left\{\mathcal M({\bf Z},{\bf W})\geq 0;\ a_\tau^i\in\widehat{\mathcal G_{\mathcal M}^{a_s^i,a_\tau^i}} \right\}.
\]
Finally, by \cite{gwynne2016} and \cite{miller2016}, for all $\hat r_i\in\mathbb R$ the construction of this metric for $k+\hat r_i$ yields a scaling property such as
\[
\bm{\omega}_{k+\hat r_i}(a_s^i,a_\tau^i)=e^{\left(\sqrt{8/3}\right)\hat r_i/4}\bm\omega_k(a_s^i,a_\tau^i).
\]
\subsection{Interpretation of the diffusion part of goal dynamics}
Equation (\ref{f2}) has two components in the diffusion part: $\hat{\bm\sigma}$, which comes from the strategies of the opposition team, and $\bm{\sigma}^*$, which consists of the venue, the percentage of home-crowd attendance, the type of match (i.e., a day or day-night match), the amount of dew on the field and the wind speed (which benefits free kickers who like to swing the ball in the wind).
Firstly, consider the situation where a team is playing abroad. In this case the players have a harder time scoring a goal than in their home environment. For example, if an Argentine or a Brazilian player plays in a European club tournament, it is extremely hard to score in the European environment, as the playing style is very different from that of their native Latin America. The same is true for the World Cup, where the Brazilian men's team is the only team from Latin America to have won the cup in Europe (the $1958$ FIFA World Cup in Sweden). As playing abroad creates extra mental pressure on the players, assume that this pressure is a non-negative $C^2$ function $\mathbf{p}(s,\mathbf{Z}):[0,t]\times\mathbb{R}^{(I\times I')\times t}\ra\mathbb{R}_+^{(I\times I')\times t}$ at match $M+1$ such that, if $\mathbf{Z}_{s-1}<\E_{s-1}(\mathbf{Z})$, then $\mathbf{p}$ takes a very high positive value. In other words, if the actual number of goals at time $s-1$ (i.e., $\mathbf{Z}_{s-1}$) is less than the expected number of goals at that time, then the pressure to score a goal at $s$ is very high.
Secondly, the percentage of the home crowd in the total crowd of the stadium matters in the sense that, if it is a home match for a team, the players get extra support from their fans and are motivated to score more. If a team has a mega-star, it will attract a bigger crowd; mega-stars generally have fans all over the world, and therefore they might draw more of the crowd than their team mates, even from the opposition's supporters. Define a positive, finite $C^2$ function $\mathbf{A}(s,\mathbf{W}):[0,t]\times\mathbb{R}^{(I\times I')\times t}\ra\mathbb{R}_+^{(I\times I')\times t}$ with $\partial \mathbf{A}/\partial\mathbf{W}>0$ and $\partial \mathbf{A}/\partial s\gtreqless 0$, depending on whether, at time $s$, player $i$ with valuation $W_i\in\mathbf{W}$ is still playing or is off the field due to injury or other reasons.
Thirdly, if the match is a day match, then both teams have the advantage of better visibility due to the sun. On the other hand, if the match is a day-night match, then a team's objective is to choose the side of the field in such a way that it gets better visibility and gains an advantage by scoring more goals. Therefore, in this game a team that loses the toss clearly has a disadvantage in terms of field position. Furthermore, because of the dew the ball becomes wet and heavier at night: it is harder to grip from a goal-keeper's point of view and harder to move and score with from a striker's point of view. Therefore, if the toss-winning team scores some goals in the first half, that team surely has some comparative advantage towards winning the game. Hence, winning the toss is important. Furthermore, if a team loses the toss, then its decision to score in either of the two halves depends on its opposition. Hence, define a function $\mathfrak{B}(\mathbf{Z})\in\mathbb{R}_+^{(I\times I')\times t}$ such that
\begin{align}\label{f3}
\mathfrak{B}(\mathbf{Z})=\mbox{$\frac{1}{2}$}
[\mbox{$\frac{1}{2}$} \E_0(\mathbf{Z}_D^2)+ \mbox{$\frac{1}{2}$}\ \E_0(\mathbf{Z}_{DN}^1)]+
\mbox{$\frac{1}{2}$}
[\mbox{$\frac{1}{2}$} \E_0(\mathbf{Z}_D^1)+\mbox{$\frac{1}{2}$} \E_0(\mathbf{Z}_{DN}^2)],
\end{align}
where, for $i=1,2$, $\E_0(\mathbf{Z}_D^i)$ is the conditional expectation, before the start of the day match $M+1$, of the total number of goals $\mathbf{Z}_D^i$ scored by a team in the $i^{th}$ half, and $\E_0(\mathbf{Z}_{DN}^i)$ is the corresponding conditional expectation before the start of a day-night match $M+1$. Furthermore, if a team wins the toss, it goes for the payoff $\mbox{$\frac{1}{2}$} \left[\mbox{$\frac{1}{2}$} \E_0(\mathbf{Z}_D^2)+ \mbox{$\frac{1}{2}$} \E_0(\mathbf{Z}_{DN}^1)\right]$, and for the latter part of Equation (\ref{f3}) otherwise.
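As a hedged numerical illustration of Equation (\ref{f3}), suppose the pre-match conditional expectations of goals in each half take the assumed values below; the toss-winning team takes the first bracketed payoff, the toss-losing team the second, and $\mathfrak{B}(\mathbf Z)$ is their sum.
\begin{verbatim}
# Assumed pre-match conditional expectations of goals per half.
E_ZD1, E_ZD2 = 0.9, 0.7     # day match: first and second half
E_ZDN1, E_ZDN2 = 1.0, 0.6   # day-night match: first and second half

toss_win_payoff = 0.5 * (0.5 * E_ZD2 + 0.5 * E_ZDN1)    # first bracket of Eq. (f3)
toss_lose_payoff = 0.5 * (0.5 * E_ZD1 + 0.5 * E_ZDN2)   # second bracket of Eq. (f3)
B_of_Z = toss_win_payoff + toss_lose_payoff             # the value of B(Z)
\end{verbatim}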
Finally, we consider the dew point measure and the wind speed at time $s$ as important factors in scoring a goal. As these two are natural phenomena and exhibit ergodic behavior, we assume they can be represented by a Weierstrass function $\mathbf{Z}_e:[0,t]\ra\mathbb{R}$ \citep{falconer2004} defined as
\begin{equation}\label{f4}
\mathbf{Z}_e(s)=\sum_{\a=1}^{\infty}(\lambda_1+\lambda_2)^{(s-2)\a} \sin\left[(\lambda_1+\lambda_2)^\a u\right],
\end{equation}
where $s\in(1,2)$ is a penalization constant of the weather over $u$, $\lambda_1$ is the dew point measure defined by the vapor pressure $0.6108\exp\left\{\frac{17.27T_d}{T_d+237.3}\right\}$, where the dew point temperature $T_d$ is measured in degrees Celsius, and $\lambda_2$ is the wind speed, such that $(\lambda_1+\lambda_2)>1$.
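A truncated version of the Weierstrass-type weather term in Equation (\ref{f4}) can be evaluated directly; in the sketch below the dew-point temperature and wind speed are assumed values, and the series is truncated after finitely many terms (the neglected tail is geometrically small since $s-2<0$).
\begin{verbatim}
import numpy as np

def weather_term(u, s, T_d, wind_speed, n_terms=30):
    """Truncated series of Eq. (f4):
    Z_e = sum_a (l1 + l2)^((s - 2) a) * sin((l1 + l2)^a * u), with s in (1, 2),
    l1 the vapor pressure at dew point T_d (deg C) and l2 the wind speed."""
    l1 = 0.6108 * np.exp(17.27 * T_d / (T_d + 237.3))   # formula given in the text
    lam = l1 + wind_speed
    assert lam > 1.0, "the construction requires lambda_1 + lambda_2 > 1"
    a = np.arange(1, n_terms + 1)
    return np.sum(lam ** ((s - 2.0) * a) * np.sin(lam ** a * u))

z_e = weather_term(u=15.0, s=1.5, T_d=12.0, wind_speed=3.0)   # assumed inputs
\end{verbatim}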
\begin{as}\label{asf4}
$\bm{\sigma}^*(s,\mathbf{W},\mathbf{Z})$ is a positive, finite part of the diffusion component in Equation (\ref{f1}) which satisfies Assumptions \ref{asf0} and \ref{asf1} and is defined as
\begin{multline}\label{f5}
\bm{\sigma}^*(s,\mathbf{W},\mathbf{Z})=\mathbf{p}(s,\mathbf{Z})+\mathbf{A}(s,\mathbf{W})+
\mathfrak{B}(\mathbf{Z})+\mathbf{Z}_e(s)\\
+\rho_1 \mathbf{p}^T(s,\mathbf{Z})\mathbf{A}(s,\mathbf{W})+
\rho_2\mathbf{A}^T(s,\mathbf{W})\mathfrak{B}(\mathbf{Z})+
\rho_3\mathfrak{B}^T(\mathbf{Z})\mathbf{p}(s,\mathbf{Z}),
\end{multline}
where $\rho_j\in(-1,1)$ is the $j^{th}$ correlation coefficient for $j=1,2,3$, and $\mathbf{A}^T,\mathfrak{B}^T$ and $\mathbf{p}^T$ are the transposes of $\mathbf{A},\mathfrak{B}$ and $\mathbf{p}$, which satisfy all the conditions in Equations (\ref{f3}) and (\ref{f4}). As the ergodic function $\mathbf{Z}_e$ comes from nature, a team has no control over it, and its correlation coefficients with the other terms in Equation (\ref{f5}) are assumed to be zero.
\end{as}
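The composite diffusion term of Equation (\ref{f5}) can be assembled from its components once they have been evaluated at $(s,\mathbf W,\mathbf Z)$; the scalar sketch below (in which the transposed products reduce to ordinary products) uses assumed component values and correlation coefficients.
\begin{verbatim}
def sigma_star(p, A, B, Z_e, rho1, rho2, rho3):
    """Scalar version of Eq. (f5):
    sigma* = p + A + B + Z_e + rho1*p*A + rho2*A*B + rho3*B*p."""
    return p + A + B + Z_e + rho1 * p * A + rho2 * A * B + rho3 * B * p

# Assumed component values: pressure, crowd support, toss payoff, weather term.
value = sigma_star(p=0.4, A=0.6, B=0.8, Z_e=0.05, rho1=0.2, rho2=-0.1, rho3=0.3)
\end{verbatim}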
The randomness $\hat{\bm{\sigma}}$ in Equation (\ref{f2}) comes from the type of skill of a soccer star on the opposition team. There are mainly three types of players: free kickers, dribblers and tacklers. Free kickers have two components, the speed of the ball after their kick, $s\in\mathbb{R}_+^{(I\times I')\times t}$, in miles per hour, and the curvature of the ball path, measured by the dispersion $x\in\mathbb{R}_+^{(I\times I')\times t}$ in inches from the straight line connecting the goal keeper and the striker. Define a payoff function $A_1(s,x,G):\mathbb{R}_+^{(I\times I')\times t}\times \mathbb{R}_+^{(I\times I')\times t}\times[0,1]\ra \mathbb{R}^{2(I\times I')\times t}$ such that at time $s$ the expected payoff after guessing a ball correctly is $\E_s A_1(s,x,G)$, where $G$ is a guess function such that, if player $i$ guesses the curvature and speed of the ball after an opposition player kicks, then $G=1$; if player $i$ does not, then $G=0$; and if player $i$ partially guesses, then $G\in(0,1)$.
On the other hand, there is a payoff function $A_2$ for a dribbler such that $A_2(s,x,\theta_1,G):\mathbb{R}_+^{(I\times I')\times t}\times \mathbb{R}_+^{(I\times I')\times t}\times[-k\pi,k\pi]\times[0,1]\ra\mathbb{R}^{2(I\times I')\times t}$, where $\theta_1$ is the angle between the beginning and end points when an opposition player starts dribbling and ends it after player $i$ gets the ball, and $k\in\mathbb N$. The expected payoff of scoring a goal at time $s$ when the opposition player is a dribbler is $\E_s A_2(s,x,\theta_1,G)$. Finally, a tackler's payoff function is $A_3(s,x,\theta_1,\theta_2,G):\mathbb{R}_+^{(I\times I')\times t}\times \mathbb{R}_+^{(I\times I')\times t}\times[-k\pi,k\pi]\times[-k\pi,k\pi]\times[0,1]\ra\mathbb{R}^{2(I\times I')\times t}$, where $\theta_2$ is the allowable tackling movement, in terms of angle, when they perform a block, poke or slide tackle, and the corresponding expected payoff at time $s$ is $\E_s A_3(s,x,\theta_1,\theta_2,G)$. If $\theta_2$ is more than $k\pi$, the opposition player concedes a foul or receives a yellow card. As player $i$ does not know what type of opponent they are going to face at a given time during a game, their total expected payoff function at time $s$ is $\mathcal{A}(s,x,\theta_1,\theta_2,G)=\wp_1 \E_s A_1(s,x,G)+\wp_2 \E_s A_2(s,x,\theta_1,G)+\wp_3 \E_s A_3(s,x,\theta_1,\theta_2,G)$, where, for $j\in\{1,2,3\}$, $\wp_j$ is the probability of facing the corresponding type of player, with $\wp_1+\wp_2+\wp_3=1$. Therefore, $\mathcal{A}(s,x,\theta_1,\theta_2,G):\mathbb{R}_+^{(I\times I')\times t}\times \mathbb{R}_+^{(I\times I')\times t}\times[-k\pi,k\pi]\times[-k\pi,k\pi]\times[0,1]\ra\mathbb{R}^{2(I\times I')\times t}$.
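Since player $i$ does not observe the opponent's type, the total expected payoff $\mathcal A$ is a probability mixture of the three type-specific expected payoffs; the short sketch below uses assumed probabilities and expected payoff values.
\begin{verbatim}
def mixed_payoff(E_A1, E_A2, E_A3, wp1, wp2, wp3):
    """A = wp1*E[A1] + wp2*E[A2] + wp3*E[A3], with wp1 + wp2 + wp3 = 1."""
    assert abs(wp1 + wp2 + wp3 - 1.0) < 1e-12
    return wp1 * E_A1 + wp2 * E_A2 + wp3 * E_A3

# Assumed type probabilities and expected payoffs for the three opponent types.
total = mixed_payoff(E_A1=0.35, E_A2=0.50, E_A3=0.20, wp1=0.3, wp2=0.4, wp3=0.3)
\end{verbatim}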
\section{Main results}
The components of a stochastic differential game under $\sqrt{8/3}$-LQG with a continuum of states, with the dribbling and passing function and finite actions, are as follows:
\begin{itemize}
\item $I$ is a non-empty finite set of players and $\mathcal F_s^{\mathbf Z}$ is the filtration of goal $\mathbf{Z}$ on the sample space $\Omega$.
\item A finite set of actions at time $s$ for player $i$ such that $a_s^i\in A^i$ for all $i\in I$.
\item A discount rate $\rho_s^i\in(0,1)$ for player $i\in I$ with the constant weight $\a^i\in\mathbb R$.
\item The bounded objective function $\mathbf{OB}_\a^i$ expressed in Equation (\ref{f0}) is Borel measurable. Furthermore, the system has the goal dynamics expressed in Equation (\ref{f1}).
\item The game must be on a $\sqrt{8/3}$-LQG surface on $2$-sphere $\mathbb{S}^{(I\times I')\times t}$ with the Riemann metric tensor $e^{\sqrt{8/3}k(l)}d\mathbf {Z}\otimes d\widehat{\mathbf{Z}}$, where $k$ is some variant of GFF such that $k:\mathbb{S}^{(I\times I')\times t}\ra\mathbb R^{I\times I'}$.
\item For $\varepsilon>0$ there exists a transition function from time $s$ to $s+\varepsilon$ expressed as $\Psi_{s,s+\varepsilon}^i(\mathbf Z):\Omega\times\mathbb{S}\times\mathbb R_+^{I\times I'}\times\prod_i A^i\ra\Delta(\Omega\times \mathbb S\times\mathbb R_+^{I\times I'})$ which is Borel-measurable.
\item For a fixed quantum knowledge space $(\Omega,\mathcal F_s^{\bf Z},\mathbb S, I,\mathcal E)$, for player $i$ a dribbling and passing function $p^i$ is a mapping $p^i:\Omega\times\mathbb S\ra\Delta(\Omega\times \mathbb S)$ which is $\sigma$-measurable and the equivalence relationship has some measure in $2$-sphere.
\end{itemize}
The game is played in continuous time. If $\mathbf{Z}_{I\times I'}\subset \mathcal Z\subset(\Omega\times \mathbb S)$ is the goal condition after the start of the $(M+1)^{th}$ game and player $i$ selects an action profile at time $s$ such that $a_s^i\in\prod_i A^i$, then, for $\varepsilon>0$, $\Psi_{s,s+\varepsilon}(\mathbf Z,a_s^i)$ is the conditional probability distribution of the next stage of the game. A stable strategy for soccer player $i$ is a behavioral strategy that depends on the goal condition at time $s$; that is, it is a Borel measurable mapping that associates with each goal $\mathbf Z\subset\Omega$ a probability distribution on the set $A^i$.
\begin{definition}
A stochastic differential game is purely quantum if it has countable orbits on $2$-sphere. In other words,
\begin{itemize}
\item For each goal condition $\mathbf{Z}_{I\times I'}\subset \mathcal Z\subset(\Omega\times \mathbb S)$ and every action profile of player $i$ starting at time $s$, $a_s^i\in A^i$, with the $\sqrt{8/3}$-LQG measure of a very small region on $\mathbb S$ defined as $e^{\sqrt{8/3}k(l)}$, the $i^{th}$ player's transition function $\Psi_{s,s+\varepsilon}^i(\mathbf Z)$ is a purely quantum measure. Define
\[
\mathcal Q(\mathbf Z):=\left\{\mathbf Z'\in(\Omega\times\mathbb S)\big|\exists a_s^i\in A^i,e^{\sqrt{8/3}k(l)}\neq 0, \Psi_{s,s+\varepsilon}(\mathbf Z'|\mathbf Z,a_s^i)>0 \right\}.
\]
\item For each goal condition $\mathbf{Z}_{I\times I'}\subset \mathcal Z\subset(\Omega\times \mathbb S)$, the set
\[
\mathcal Q^{-1}(\mathbf Z):=\left\{\mathbf Z'\in(\Omega\times\mathbb S)\big|\exists a_s^i\in A^i,e^{\sqrt{8/3}k(l)}\neq 0, \Psi_{s,s+\varepsilon}(\mathbf Z|\mathbf Z',a_s^i)>0 \right\}
\]
is countable.
\end{itemize}
\end{definition}
For purely atomic games see \cite{hellman2019}.
\begin{prop}\label{fp1}
A purely quantum game on $\mathbb S$ with the objective function in Equation (\ref{f0}), subject to the goal dynamics in Equation (\ref{f1}), in which the orbit equivalence relation is smooth, admits a measurable stable equilibrium.
\end{prop}
We know that a soccer game stops if rainfall is so heavy that it restricts the players' vision, making play dangerous, or if the pitch becomes waterlogged due to the heavy rain. After a period of stoppage, the officials determine the conditions with tests using the ball while the players wait off the field. Suppose a match stops at time $\tilde t$ because of rain. After that there are two possibilities: first, if the rain is heavy, the game does not resume; secondly, if the rain is not so heavy and stops after a certain point of time, then, after the water has been removed from the field, the match might be resumed. Based on the severity of the rain and the equipment used to remove the water from the field, the match resumes over $(\tilde t,t-\varepsilon]$, where $\varepsilon\geq 0$. The role of $\varepsilon$ is that, if the rain is very heavy, $\varepsilon=t-\tilde t$, and, in the case of very moderate rain, $\varepsilon=0$. Therefore, $\varepsilon\in[0,t-\tilde t]$.
\begin{definition}\label{fde3}
Consider a probability space $(\Omega, \mathcal{F}_s^{\mathbf{Z}},\mathcal{P})$ with sample space $\Omega$, filtration at time $s$ of the goal condition ${\mathbf{Z}}$ given by $\{\mathcal{F}_s^{\mathbf{Z}}\}\subset\mathcal{F}_s$, a probability measure $\mathcal{P}$, and a Brownian motion for rain $\mathbf{B}_s$ such that $\mathbf{B}_s^{-1}(E)\in\mathcal F_s^{\mathbf Z}$ for every Borel set $E\subseteq\mathbb{R}$ and $s\in[\tilde t,t-\varepsilon]$. If $\tilde{t}$ is the game stopping time because of rain and ${b}\in \mathbb{R}$ is a measure of rain in millimeters, then $\tilde t:=\inf\{s\geq 0|\ \mathbf{B}_s>{b}\}$.
\end{definition}
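To make the role of the threshold $b$ more concrete, the following numerical sketch approximates the first-passage time $\tilde t=\inf\{s\geq 0\,|\,\mathbf B_s>b\}$ on a discrete time grid. It is only an illustration of the definition; the step width, the diffusion coefficient, the match length, and the threshold value are arbitrary choices and not part of the model.
\begin{verbatim}
import numpy as np

def first_passage_time(b=2.0, dt=1/60, t_max=90.0, sigma=1.0, seed=0):
    """Approximate the stopping time inf{s >= 0 : B_s > b} on a grid.

    b      -- rainfall threshold in millimetres (illustrative value)
    dt     -- grid width in minutes
    t_max  -- length of the match in minutes
    sigma  -- diffusion coefficient of the rainfall process
    """
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    increments = rng.normal(0.0, sigma * np.sqrt(dt), size=n)
    path = np.cumsum(increments)          # discretized Brownian path B_s
    hit = np.argmax(path > b)             # first index with B_s > b
    if path[hit] <= b:                    # threshold never crossed
        return None                       # the match is not interrupted
    return (hit + 1) * dt                 # approximate stopping time (minutes)

print(first_passage_time())
\end{verbatim}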
\begin{defn}\label{fde4}
Let $\delta_{\mathfrak s}:[\tilde t,t-\varepsilon] \ra(0,\infty)$ be a $C^2(s\in[\tilde t,t-\varepsilon])$ time-process of a match such that it replaces the stochastic process by It\^o's Lemma. Then $\delta_{\mathfrak s}$ is a stochastic gauge of that match if $\tilde t:=\mathfrak s+\delta_{\mathfrak s}$ is a stopping time for each $\mathfrak s\in[\tilde t,t-\varepsilon]$ and $\mathbf B_{\mathfrak s}>b$, where $\mathfrak{s}$ is the new time after resampling the stochastic interval $[\tilde t,t-\varepsilon]$ on the $\sqrt{8/3}$-LQG surface.
\end{defn}
\begin{defn}\label{fde5}
Given a stochastic time interval of the soccer game $\hat I=[\tilde t,t-\varepsilon]\subset\mathbb{R}$, a stochastic tagged partition of that match is a finite set of ordered pairs $\mathcal{D}=\{(\mathfrak s_i,\hat I_i):\ i=1,2,...,p\}$ such that $\hat I_i=[x_{i-1},x_i]\subset[\tilde t,t-\varepsilon]$, $\mathfrak s_i\in \hat I_i$, $\cup_{i=1}^p \hat I_i=[\tilde t,t-\varepsilon]$ and for $i\neq j$ we have $\hat I_i\cap \hat I_j=\emptyset$. The point $\mathfrak s_i$ is the tag of the stochastic time-interval $\hat I_i$ of the game.
\end{defn}
\begin{defn}\label{fde6}
If $\mathcal{D}=\{(\mathfrak s_i,\hat I_i): i=1,2,...,p\}$ is a tagged partition of the stochastic time-interval of the match $\hat I$ and $\delta_{\mathfrak s}$ is a stochastic gauge on $\hat I$, then $\mathcal D$ is stochastic $\delta$-fine if $\hat I_i\subset \delta_{\mathfrak s}(\mathfrak s_i)$ for all $i=1,2,...,p$, where $\delta(\mathfrak s)=(\mathfrak s-\delta_{\mathfrak s}(\mathfrak s),\mathfrak s+\delta_{\mathfrak s}(\mathfrak s))$.
\end{defn}
For a tagged partition $\mathcal D$ in a stochastic time-interval $\hat I$, as defined in Definitions \ref{fde5} and \ref{fde6}, and a function $\tilde f:[\tilde t,t-\varepsilon]\times \mathbb{R}^{2(I\times I')\times \hat t}\times\Omega\times\mathbb S\ra\mathbb{R}^{(I\times I')\times \hat t}$, the Riemann sum of $\mathcal D$ is defined as
\[
S(\tilde f,\mathcal D)=(\mathcal D_\delta)\sum \tilde
f(\mathfrak s,\hat I,\mathbf W,\mathbf Z)=
\sum_{i=1}^p \tilde f(\mathfrak s_i,\hat I_i,\mathbf W,\mathbf Z),
\]
where $\mathcal{D}_\delta$ is a $\delta$-fine division of
$\mathbb{R}^{(I\times I')\times \hat t}$
with point-cell function
$\tilde f(\mathfrak s_i,\hat I_i,\mathbf W,\mathbf Z)=
\tilde f(\mathfrak s_i,\mathbf W,\mathbf Z)\ell(\hat I_i)$,
where $\ell$ is the length of the over-interval and
$\hat{t}=(t-\varepsilon)-\tilde{t}$
\citep{kurtz2004,muldowney2012}.
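As a purely numerical illustration of this Riemann sum, the sketch below partitions a deterministic interval, draws a tag in every cell, and accumulates $\sum_i \tilde f(\mathfrak s_i)\,\ell(\hat I_i)$ for a scalar integrand; the uniform grid, the integrand, and the random choice of tags are simplifications of the stochastic objects used in the text.
\begin{verbatim}
import numpy as np

def riemann_sum(f, a, b, n=1000, rng=None):
    """Riemann sum of f over a tagged partition of [a, b].

    The interval is split into n cells [x_{i-1}, x_i]; the tag s_i is
    drawn uniformly from each cell, mimicking a delta-fine selection.
    """
    rng = rng or np.random.default_rng(0)
    edges = np.linspace(a, b, n + 1)
    lengths = np.diff(edges)                       # ell(I_i)
    tags = rng.uniform(edges[:-1], edges[1:])      # s_i in I_i
    return float(np.sum(f(tags) * lengths))

# Example: a smooth integrand over a resumed-match interval.
print(riemann_sum(np.sin, 0.0, np.pi))             # approximately 2.0
\end{verbatim}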
\begin{defn}\label{fde7}
A function $\tilde f(\mathfrak s,\hat I,\mathbf W,\mathbf Z)$ on $\mathbb{R}^{(I\times I')\times\hat t}$ is stochastic Henstock-Kurzweil type integrable on $\hat I$, with integral
\[
\mathbf a=\int_{\tilde t}^{t-\varepsilon}\tilde{f}(\mathfrak s, \hat I,\mathbf W,\mathbf Z),
\]
if, for a given vector $\hat{\bm\epsilon}>0$, there exists a stochastic $\delta$-gauge in $[\tilde t,t-\varepsilon]$ such that for each stochastic $\delta$-fine partition $\mathcal D_\delta$ in $\mathbb R^{(I\times I')\times \hat t}$ we have,
\[
\E_{\mathfrak s}\left\{\left|\mathbf a-(\mathcal D_\delta)\sum \tilde f(\mathfrak s,\hat I,\mathbf W,\mathbf Z)\right|\right\}<\hat{\bm\epsilon},
\]
where $\E_{\mathfrak s}$ is the conditional expectation on goal $\mathbf{Z}$ at sample time $\mathfrak s\in[\tilde t,t-\varepsilon]$ of a non-negative function $\tilde f$ after the rain stops.
\end{defn}
\begin{prop}\label{fp2}
Define
\[
\mathfrak h=\exp\left\{-
\tilde\epsilon\E_{\mathfrak s}\left[\int_{\mathfrak s}^{\mathfrak s+
\tilde\epsilon}\tilde f(\mathfrak s,\hat I,\mathbf W,\mathbf Z)\right]\right\}
\Psi_{\mathfrak s}(\mathbf{Z})d\mathbf Z.
\]
If for a small sample time interval $[\mathfrak s,\mathfrak s+\tilde\epsilon]$,
\[
\frac{1}{N_{\mathfrak s}}\int_{\mathbb R^{2(I\times I')\times\widehat t\times I}}\mathfrak h
\]
exists for a conditional gauge $\gamma=[\delta,\omega(\delta)]$, then the indefinite integral of $\mathfrak h$,
\[
\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})=\frac{1}{N_{\mathfrak s}}\int_{\mathbb R^{2(I\times I')\times\hat t\times I}}\mathfrak h
\]
exists as a Stieltjes function on $ \mathbf E([\mathfrak s,\mathfrak s+\tilde\epsilon]
\times \mathbb R^{2(I\times I')\times\hat t}\times \Omega\times\mathbb S\times\mathbb R^I)$ for all $N_{\mathfrak s}>0$.
\end{prop}
\begin{coro}\label{fc0}
If $\mathfrak h$ is integrable on $\mathbb R^{2(I\times I')\times\hat t\times I}$ as in Proposition \ref{fp2}, then for a given small continuous sample time interval $[\mathfrak s,\mathfrak s']$ with
$\tilde{\epsilon}=\mathfrak s'-\mathfrak s>0$,
there exists a $\gamma$-fine division $\mathcal D_\gamma$ in $\mathbb R^{2(I\times I')\times\hat t\times I}$ such that,
\[
|(\mathcal D_\gamma)\mathfrak h[\mathfrak s,\hat I,\hat I(\mathbf Z),\mathbf W,\mathbf Z]-\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})|\leq\mbox{$\frac{1}{2}$}|\mathfrak s-\mathfrak s'|<\tilde{\epsilon},
\]
where $\hat I(\mathbf Z)$ is the interval of goal $\mathbf Z$ in $\mathbb R^{2(I\times I')\times\hat t\times I}$. This integral is a stochastic It\^o-Henstock-Kurzweil-McShane-Feynman-Liouville type path integral in the goal dynamics of a sample time after the beginning of a match after a rain interruption.
\end{coro}
After the rain stops, the environment of the field changes, giving rise to a different probability space $(\Omega,\mathcal F_{\mathfrak{s}}^{\hat{ \mathbf Z}},\mathcal P)$, where $\mathcal F_{\mathfrak{s}}^{\hat{ \mathbf Z}}$ is the new filtration process at time $\mathfrak s$ after the rain. The objective function after the rain becomes,
\begin{multline}\label{f8}
\overline{\mathbf {OB}}_\a^i:\widehat{\mathbf Z}_\a^i(\mathbf W,\mathfrak s)\\=h_{\tilde t}^{i*}+\max_{W_i\in W} \E_{\tilde t}\left\{\int_{\tilde t}^{t-\varepsilon}\sum_{i=1}^I\sum_{m=1}^M\exp(-\rho_{\mathfrak s}^im)\a^iW_i(\mathfrak s)h_0^i[s,w(\mathfrak s),z(\mathfrak s)]\bigg|\mathcal F_{\mathfrak s}^{\hat{ \mathbf Z}}\right\}d\mathfrak s.
\end{multline}
Furthermore, if the match resumes at time $\tilde t$ after the stoppage due to rain, then the Liouville-like action function on the goal dynamics after the resumption is
\begin{multline}\label{f9}
\mathcal{L}_{\tilde{t},t-\varepsilon}(\mathbf{Z})=
\int_{\tilde{t}}^{t-\varepsilon}\E_{\mathfrak{s}} \left\{\sum_{i=1}^{I}\sum_{m=1}^M
\exp(-\rho_{\mathfrak s}^im)\a^iW_i(\mathfrak s)h_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)] \right.\\
+\lambda_1[\Delta\mathbf Z(\mathfrak s,\mathbf W)-\bm\mu[\mathfrak s,\mathbf W(\mathfrak s),\mathbf Z(\mathfrak s,\mathbf W)]d\mathfrak s-\bm\sigma[\mathfrak s,\hat{\bm\sigma},\mathbf W(\mathfrak s),\mathbf Z(\mathfrak s,\mathbf W)]d\mathbf B(\mathfrak s)]\\
\left.\phantom{\int}
+\lambda_2 e^{\sqrt{8/3}k(l(\mathfrak s))}d\mathfrak s
\right\}.
\end{multline}
The stochastic part of the Equation (\ref{f1}) becomes ${\bm{\sigma}}$ as $\hat{\lambda}_1>\lambda_1$. Equation (\ref{f9}) follows Definition \ref{fde7} such that $\mathbf a=\mathcal{L}_{\tilde t, t-\varepsilon}(\mathbf Z)$ and it is integrable according to Corollary \ref{fc0}.
\begin{prop}\label{fp3}
If a team's objective is to maximize Equation (\ref{f8}) subject to the goal dynamics expressed in the Equation (\ref{f1}) on the $\sqrt{8/3}$-LQG surface, such that Assumptions \ref{asf0}--\ref{asf4} hold with Propositions \ref{fp0}--\ref{fp2} and Corollary \ref{fc0}, then after a rain stoppage under a continuous sample time, the weight of player $i$ is found by solving
\begin{multline*}
\sum_{i=1}^{I}\sum_{m=1}^M\exp(-\rho_{\mathfrak s}^im)\a^ih_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)]\\
+g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
\frac{\partial
\{\bm\mu[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)]\}
}{\partial \mathbf{W}}
\frac{\partial \mathbf{W}}{\partial W_i }\\
+\mbox{$\frac{1}{2}$}
\sum_{i=1}^I\sum_{j=1}^I
\frac{\partial
{\bm\sigma}^{ij}[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)] }
{\partial \mathbf{W}}
\frac{\partial \mathbf{W}}{\partial W_i }
g_{Z_iZ_j}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]=0,
\end{multline*}
with respect to $\a_i$, where the initial condition before the first kick on the soccer ball after the rain stops is $\mathbf{Z}_{\tilde s}$. Furthermore, when $\a_i=\a_j=\a^*$ for all $i\neq j$ we get a closed form solution of the player weight as
\begin{eqnarray*}
\a^* & = &
-\left[\sum_{i=1}^{I}\sum_{m=1}^M\exp(-\rho_{\mathfrak s}^im)\a^ih_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)]\right]^{-1}\\
& & \times\left[\frac{\partial g[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]}{\partial{\mathbf{Z}}}
\frac{\partial
\{\bm{\mu}[\mathfrak s,\mathbf{W}(\mathfrak{s}),\mathbf{Z}(\mathfrak s,\mathbf W)]\} }{\partial \mathbf{W}}
\frac{\partial \mathbf{W}}{\partial W_i } \right.\\
& & \left.
+\mbox{$\frac{1}{2}$}
\sum_{i=1}^I\sum_{j=1}^I
\frac{\partial {\bm\sigma}^{ij}[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),
\mathbf{Z}(\mathfrak s,\mathbf W)]}{\partial \mathbf{W}}
\frac{\partial \mathbf{W}}{\partial W_i }
\frac{\partial^2 g[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]}{\partial Z_i\partial Z_j} \right],
\end{eqnarray*}
where the function $g\left[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)\right]\in C_0^2\left([\tilde t,t-\varepsilon]\times \mathbb R^{2(I\times I')\times \hat t}\times \mathbb{R}^I\right)$ with $\mathbf{Y}(\mathfrak s)=g\left[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)\right]$ is a positive, non-decreasing penalization function vanishing at infinity which substitutes for the goal dynamics such that $\mathbf{Y}(\mathfrak s)$ is an It\^o process.
\end{prop}
\section{Proofs}
\subsection{Proof of Proposition \ref{fp1}}
Consider a stochastic differential game $\left[\Omega\times\mathbb S,k,(\a^i)_{i\in I},(\rho_s^i)_{i\in I}, \left(\Psi_{s,s+\varepsilon}^i\right)_{i\in I},\right.\\\left.(p^i)_{i\in I}\right]$. Let us define the goal space $\mathcal Z$ as a countable set such that all common knowledge equivalence relations can be represented cardinally in $\mathcal Z$. First we define a quantifier-free formula whose free variables are beliefs, payoffs of a player, and actions based on the goal condition at time $s$ of the stochastic differential game on the $2$-sphere with goal space $\mathcal Z$, such that at time $s$, for a given goal condition $\mathbf Z_s$ and beliefs, the strategy polygon $\be_{k_i}$ has a Lefschetz-Hopf fixed point on the $\sqrt{8/3}$-LQG action space. Assume $z_1,z_2,z_3,z_4$ are indices in $\mathcal Z$, $i,j$ are players in $I$, $a_s^i$ is player $i$'s action at time $s$ in $A^i$, and the action profile at time $s$ is defined as $a_s:=\left(a_1^s(p^i), a_2^s(p^i),...,a_\kappa^s(p^i)\right)$ in $\prod_iA^i$, where $p^i$ is the dribbling and passing function.
For $j\in I$, $p^j:\Omega\times\mathbb S\ra\Delta(\Omega\times \mathbb S)$ and $z_3,z_4\in\mathcal Z$, the variable $\eta_{z_3,z_4,p^j}^j(s)$ is player $j$'s belief about the goal condition $z_3$ at time $s+\varepsilon$ while at the goal condition $z_4$ at time $s$. For $j\in I$, $z_2\in\mathcal Z$, $\mathfrak B(\mathbf Z)\in\mathbb R_+^{(I\times I')\times t}$ and $a_s\in\prod_j A^j$, the variable $\omega_{z_2,a_s,\mathfrak B}^j(s)$ is defined as player $j$'s payoff at time $s$ with goal condition $z_2$, given their action profile and the expectation of goals based on whether their team is playing a day match or a day-night match, defined by the function $\mathfrak B$ in the Equation (\ref{f3}) in the previous section. Finally, for $j\in I$, $z_2\in\mathcal Z$, $\mathbf{Z}_e:[0,t]\ra\mathbb{R}$ and $a_s^j\in A^j$, the variable $\a_{z_2,a_s^j,\mathbf{Z}_e}^j(s)$ is the weight that player $j$ puts at time $s$ on their action $a_s^j$ when the goal condition is $z_2$ and the dew condition on the field is $\mathbf{Z}_e$, defined in the Equation (\ref{f4}).
For $i\in I$ and for a mixed action $\a_{a_s^i}\in\Delta A^i$ define a function
\[
\tau^i(\a_{a_s^i})=\left(\sum_{a_s^i\in A^i}\a_{a_s^i}=1\right)\bigwedge_{a_s^i\in A^i}(\a_{a_s^i}\geq 0),
\]
and for $z_1,z_2\in\mathcal Z$ define
\[
\gamma_{z_1,z_2,p^j}^j\left[\left(\eta_{z_3,z_4,p^j}^j(s)\right)_{z_3,z_4,p^j}\right]=\bigwedge_{z_3,p^j}\left[\eta_{z_3,z_1,p^j}^j(s)=\eta_{z_3,z_2,p^j}^j(s)\right],
\]
where the above argument means that, for a given dribbling and passing function $p^j$, player $j$ has the same belief at both goal conditions $z_1$ and $z_2$. Therefore, the function
\begin{align*}
&\tau\left\{\left[\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s)\right],\left[\eta_{z_3,z_4,p^i}^i(s)\right]\right\}\\&=\left\{\bigwedge_i\bigwedge_{z_2}\tau^i\left[\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s)\right]\right\}\bigwedge\left\{\bigwedge_i\bigwedge_{z_1,z_2}\gamma_{z_1,z_2,p^i}^i\left[\left(\eta_{z_3,z_4,p^j}^j(s)\right)_{z_3,z_4,p^j}\right]\right.\\&\hspace{1cm}\left.\ra\bigwedge_{a_s^i}\left[\a_{z_1,a_s^i,\mathbf{Z}_e}^i(s)=\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s)\right]\right\},
\end{align*}
exists iff mixed actions are utilized at every goal condition (i.e. $\bigwedge_i\bigwedge_{z_2}\tau^i[\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s)]$), and strategies are measurable with respect to player $i$'s knowledge of the game. Now consider a transition function of player $i$ in the time interval $[s,s+\varepsilon]$ defined as $\Psi_{s,s+\varepsilon}^i(\mathbf Z)$, with their payoff at the beginning of time $s$ being $v_s^i\geq 0$. For $k$ on $\mathbb S$ define a state function
\begin{align}\label{f6}
&\widehat{\mathbf Z}\left\{v_s^i,\omega_{z_2,a_s,\mathfrak B}^i(s),\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s),\Psi_{s,s+\varepsilon}^i(\mathbf Z)\right\}\notag\\&=\bigwedge_i\bigwedge_{z_1}\left[v_s^i=\sum_{a_s}\left(\prod_{i\in I}\a_{z_1,a_s^i,\mathbf{Z}_e}^i(s)\right)\left(\omega_{z_2,a_s,\mathfrak B}^i(s)+(1-\theta)\Psi_{s,s+\varepsilon}^i(\mathbf Z) v_{s+\varepsilon}^i\right)\right],
\end{align}
such that for the goal condition $z_1\in\mathcal Z$, the payoff under the mixed action profile of the game is $v_s^i$, where $\theta\in(0,1)$ is a discount factor of this game. Furthermore, if we consider the objective function expressed in the Equation (\ref{f0}) subject to the goal dynamics expressed in the Equation (\ref{f1}) on the $\sqrt{8/3}$-LQG action space, the function defined in Equation (\ref{f6}) in the time interval $[s,s+\varepsilon]$ would be
\begin{align*}
&\widehat{\mathbf Z}_{s,s+\varepsilon}\left\{v_s^i,\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s),\Psi_{s,s+\varepsilon}^i(\mathbf Z)\right\}\notag\\&=\bigwedge_i\bigwedge_{z_1}\biggr\{v_s^i=\sum_{a_s}\left(\prod_{i\in I}\a_{z_1,a_s^i,\mathbf{Z}_e}^i(s)\right)\\&\left[\E_s\int_s^{s+\varepsilon}\bigg(\sum_{i=1}^I\sum_{m=1}^M\exp(-\rho_s^im)\a^iW_i(s)h_0^i[s,w(s),z(s)]\right.\\&\left.+\lambda_1[\Delta\mathbf Z(\nu,\mathbf W)-\bm\mu[\nu,\mathbf W(\nu),\mathbf Z(\nu,\mathbf W)]d\nu-\bm\sigma[\nu,\hat{\bm\sigma},\mathbf W(\nu),\mathbf Z(\nu,\mathbf W)]d\mathbf B(\nu)]\right.\\&\hspace{4cm}\left.+\lambda_2e^{\sqrt{8/3}k(l)}d\nu\bigg)\right]\biggr\},
\end{align*}
where the expression inside the bracket $[.]$ is the quantum Lagrangian with two non-negative, time-independent Lagrange multipliers $\lambda_1$ and $\lambda_2$, where the second corresponds to the $\sqrt{8/3}$-LQG surface. Define
\begin{align}\label{f7}
&\widetilde{\mathbf Z}\left\{v_s^i,\omega_{z_2,a_s,\mathfrak B}^i(s),\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s),\Psi_{s,s+\varepsilon}^i(\mathbf Z)\right\}\notag\\&=\bigwedge_i\bigwedge_{z_1}\bigwedge_{d\in A^i}\left[v_s^i\geq\sum_{z_2}\sum_{a_s}\left(\prod_{j\neq i}\a_{z_1,a_s^j,\mathbf{Z}_e}^j(s)\right)\right.\notag\\&\left.\times\left(\omega_{z_2,d,a_s,\mathfrak B}^j(s)+(1-\theta)\Psi_{s,s+\varepsilon}^j(\mathbf Z) v_{s+\varepsilon}^j\right)\right],
\end{align}
for a goal condition $z_1\in\mathcal Z$ such that, with no deviation in the game, the continuation payoff $v_{s+\varepsilon}^j$ yields a payoff no more than the payoff at time $s$, namely $v_s^i$. Therefore, for the time interval $[s,s+\varepsilon]$ the goal condition should be,
\begin{align*}
&\widetilde{\mathbf Z}_{s,s+\varepsilon}\left\{v_s^i,\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s),\Psi_{s,s+\varepsilon}^i(\mathbf Z)\right\}\notag\\&=\bigwedge_i\bigwedge_{z_1}\bigwedge_{d\in A^i}\biggr\{v_s^i\geq\sum_{z_2}\sum_{a_s}\left(\prod_{j\neq i}\a_{z_1,d,a_s^j,\mathbf{Z}_e}^j(s)\right)\\&\left[\E_s\int_s^{s+\varepsilon}\bigg(\sum_{i=1}^I\sum_{m=1}^M\exp(-\rho_s^jm)\a^jW_j(s)h_0^j[s,w(s),z(s)]\right.\\&\left.+\lambda_1[\Delta\mathbf Z(\nu,\mathbf W)-\bm\mu[\nu,\mathbf W(\nu),\mathbf Z(\nu,\mathbf W)]d\nu-\bm\sigma[\nu,\hat{\bm\sigma},\mathbf W(\nu),\mathbf Z(\nu,\mathbf W)]d\mathbf B(\nu)]\right.\\&\hspace{4cm}\left.+\lambda_2e^{\sqrt{8/3}k(l)}d\nu\bigg)\right]\biggr\}.
\end{align*}
Finally, define
\begin{align*}
&\widetilde{\mathbf Z}_{s,s+\varepsilon}^*\left\{v_s^i,\omega_{z_2,a_s,\mathfrak B}^i(s),\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s),\Psi_{s,s+\varepsilon}^i(\mathbf Z)\right\}\\&=\bigwedge_{j\in I}\bigwedge_{z_1\in\mathcal Z}\tau^i\left[\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s)\right]\wedge\widehat{\mathbf Z}_{s,s+\varepsilon}\left\{v_s^i,\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s),\Psi_{s,s+\varepsilon}^i(\mathbf Z)\right\}\\&\hspace{1cm}\wedge \widetilde{\mathbf Z}_{s,s+\varepsilon}\left\{v_s^i,\a_{z_2,a_s^i,\mathbf{Z}_e}^i(s),\Psi_{s,s+\varepsilon}^i(\mathbf Z)\right\},
\end{align*}
such that each player is playing a probability distribution in each goal condition subject to the goal dynamics, the dew condition of the field, and whether the match is a day or day-night match, and the strategy profiles in goal condition $z_1\in\mathcal Z$ give the right payoff using equilibrium strategies on the $\sqrt{8/3}$-LQG action space. By \cite{parthasarathy1972} we know that a stochastic game with countable states has a stationary equilibrium, and by Theorem $4.7$ of \cite{hellman2019} we conclude that the system admits a stable equilibrium on the $2$-sphere. $\square$
\subsection{Proof of Proposition \ref{fp2}}
Define a gauge $\gamma=[\delta,\omega(\delta)]$ for all possible combinations of a $\delta$-gauge in $[\tilde t,t-\varepsilon]\times \mathbb R^{2(I\times I')\times\hat t}\times \Omega$ and an $\omega(\delta)$-gauge in $\mathbb R^{2(I\times I')\times\hat t\times I}$ such that it is a cell in $[\tilde t,t-\varepsilon]\times \mathbb R^{2(I\times I')\times\hat t}\times \Omega\times\mathbb S\times\mathbb R^{2(I\times I')\times\hat t\times I}$, where $\omega(\delta):\mathbb{R}^{2(I\times I')\times\hat t\times I}\ra(0,\infty)^{2(I\times I')\times\hat t\times I}$ is at least a $C^1$ function. The reason for considering $\omega(\delta)$ as a function of $\delta$ is that, after the rain stops, if the match proceeds at time $s$ then we can get a corresponding sample time $\mathfrak{s}$ and a player has the opportunity to score a goal. Let $\mathcal D_\gamma$ be a stochastic $\gamma$-fine in a cell $\mathbf E$ in $[\tilde t,t-\varepsilon]\times \mathbb R^{2(I\times I')\times\hat t}\times \Omega\times\mathbb S\times\mathbb R^I$. For any $\bm{\varepsilon}>0$ and for a $\delta$-gauge in $[\tilde t,t-\varepsilon]\times \mathbb R^{2(I\times I')\times\hat t}\times \Omega\times\mathbb S$ and an $\omega(\delta)$-gauge in $\mathbb R^{2(I\times I')\times\hat t\times I}$, choose a $\gamma$ so that
\[
\left|\frac{1}{N_{\mathfrak s}}(\mathcal D_\gamma)\sum \mathfrak h-\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})\right|<\mbox{$\frac{1}{2}$}|\mathfrak s-\mathfrak s'|,
\]
where $\mathfrak s'=\mathfrak s+\tilde{\epsilon}$. Assume two disjoint sets $E^a$ and $E^b=[\mathfrak s,\mathfrak s+\tilde{\epsilon}]\times \mathbb R^{2(I\times I')\times\hat t}\times \Omega\times\mathbb S\times\{\mathbb R^I\setminus E^a\}$ such that $E^a\cup E^b=E$. As the domain of $\tilde f$ is a $2$-sphere, Theorem $3$ in \cite{muldowney2012} implies there is a gauge $\gamma_a$ for the set $E^a$ and a gauge $\gamma_b$ for the set $E^b$ with $\gamma_a\prec\gamma$ and $\gamma_b\prec\gamma$, so that both gauges conform in their respective sets. For every $\delta$-fine in $[\mathfrak s,\mathfrak s']\times\mathbb{R}^{2(I\times I')\times\hat t}\times\Omega\times\mathbb S$ and a positive
$\tilde{\epsilon}=|\mathfrak s-\mathfrak s'|$,
if a $\gamma_a$-fine division $\mathcal D_{\gamma_a}$ is of the set $E^a$ and $\gamma_b$-fine division $\mathcal D_{\gamma_b}$ is of the set $E^b$, then by the restriction axiom we know that $\mathcal D_{\gamma_a}\cup\mathcal D_{\gamma_b}$ is a $\gamma$-fine division of $E$. Furthermore, as $E^a\cap E^b=\emptyset$
\begin{align}
\mbox{$\frac{1}{N_{\mathfrak s}}$}\left(\mathcal D_{\gamma_a}\cup\mathcal D_{\gamma_b}\right)\sum\mathfrak h=\mbox{$\frac{1}{N_{\mathfrak s}}$}\left[(\mathcal D_{\gamma_a})\sum \mathfrak h+(\mathcal D_{\gamma_b})\sum\mathfrak h\right]=\a+\be.\notag
\end{align}
Let us assume that for every $\delta$-fine we can subdivide the set $E^b$ into two disjoint subsets $E_1^b$ and $E_2^b$ with their $\gamma_b$-fine divisions given by $\mathcal D_{\gamma_b}^1$ and $\mathcal D_{\gamma_b}^2$, respectively. Therefore, their Riemann sum can be written as $\be_1=\frac{1}{N_{\mathfrak s}}(\mathcal D_{\gamma_b}^1)\sum\mathfrak h$ and $\be_2=\frac{1}{N_{\mathfrak s}}(\mathcal D_{\gamma_b}^2)\sum\mathfrak h$, respectively. Hence, for a small sample time interval $[\mathfrak s,\mathfrak s']$,
\[
\big|\a+\be_1-\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})\big|\leq\mbox{$\frac{1}{2}$}|\mathfrak s-\mathfrak s'|
\]
and
\[
\big|\a+\be_2-\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})\big|\leq\mbox{$\frac{1}{2}$}|\mathfrak s-\mathfrak s'|.
\]
Therefore,
\begin{eqnarray}\label{fCauchy}
|\be_1-\be_2| & = &
\left|\left[\a+\be_1-\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})\right]-
\left[\a+\be_2-\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})\right]\right|\notag \\
& \leq & \left|\a+\be_1-\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})\right|+
\left|\a+\be_2-\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})\right| \notag \\
& \leq & |\mathfrak s-\mathfrak s'|.
\end{eqnarray}
Equation (\ref{fCauchy}) implies that the Cauchy integrability of $\mathfrak h$ is satisfied, and
\[
\mathbf H(\mathbb R^{2(I\times I')\times\hat t\times I})=
\frac{1}{N_{\mathfrak s}}\int_{\mathbb R^{2(I\times I')\times\hat t\times I}}\mathfrak h.
\]
Now consider two disjoint sets $M^1$ and $M^2$ in $\mathbb R^{2(I\times I')\times\hat t\times I}$ such that $M=M^1\cup M^2$, with their corresponding integrals $\mathbf H(M^1), \mathbf H(M^2)$, and $\mathbf H(M)$. Suppose $\gamma$-fine divisions of $M^1$ and $M^2$ are given by $\mathcal D_{\gamma_1}$ and $\mathcal D_{\gamma_2}$, respectively, with their Riemann sums for $\mathfrak h$ being $m_1$ and $m_2$. Equation (\ref{fCauchy}) implies $\big|m_1-\mathbf H(M^1)\big|\leq\big|\mathfrak s-\mathfrak s'\big|$ and $\big|m_2-\mathbf H(M^2)\big|\leq\big|\mathfrak s-\mathfrak s'\big|$. Hence, $\mathcal D_{\gamma_1}\cup\mathcal D_{\gamma_2}$ is a $\gamma$-fine division of $M$. Let $m=m_1+m_2$; then Equation (\ref{fCauchy}) implies $\big|m-\mathbf H(M)\big|\leq |\mathfrak s-\mathfrak s'|$ and
\begin{eqnarray*}
|[\mathbf H(M^1)+\mathbf H(M^2)]-\mathbf H(M)| & \leq &
|m-\mathbf{H}(M)|+|m_1-\mathbf H(M^1)|+ \\
& & |m_2-\mathbf{H}(M^2)| \\
& \leq & 3|\mathfrak s-\mathfrak s'|.
\end{eqnarray*}
Therefore, $\mathbf H(M)=\mathbf H(M^1)+\mathbf H(M^2)$ and it is Stieltjes. $\square$
\subsection{Proof of Proposition \ref{fp3}}
For positive Lagrange multipliers $\lambda_1$ and $\lambda_2$, with initial goal condition $\mathbf Z_{\tilde t}$, the goal dynamics are expressed in Equation (\ref{f1}) such that Definition \ref{fde7}, Propositions \ref{fp0}-\ref{fp2} and Corollary \ref{fc0} hold. Subdivide $[\tilde t,t-\varepsilon]$ into $n$ equally distanced small time-intervals $[\mathfrak s,\mathfrak s']$ such that $\tilde\epsilon\downarrow 0$, where $\mathfrak s'=\mathfrak s+\tilde\epsilon$. For any positive $\tilde\epsilon$ and normalizing constant $N_{\mathfrak s}>0$, define a goal transition function as
\[
\Psi_{\mathfrak s,\mathfrak s+\tilde\epsilon}(\mathbf{Z})=
\frac{1}{N_{\mathfrak s}}
\int_{\mathbb{R}^{2(I\times I')\times\hat t\times I}} \exp\bigg\{-\tilde\epsilon \mathcal{L}_{\mathfrak s,\mathfrak s+\tilde\epsilon}(\mathbf{Z}) \bigg\} \Psi_{\mathfrak s}(\mathbf{Z})d\mathbf{Z},
\]
where $\Psi_{\mathfrak s}(\mathbf{Z})$ is the goal transition function at the beginning of sample time $\mathfrak s$, ${N_{\mathfrak s}}^{-1} d\mathbf{Z}$ is a finite Riemann measure which satisfies Proposition \ref{fp2}, and for the $k^{th}$ sample time interval the goal transition function of $[\tilde t,t-\varepsilon]$ is,
\[
\Psi_{\tilde t,t-\varepsilon}(\mathbf{Z})=
\frac{1}{(N_{\mathfrak s})^n}
\int_{\mathbb{R}^{2(I\times I')\times\hat t\times I\times n}} \exp\bigg\{-\tilde\epsilon \sum_{k=1}^n\mathcal{L}_{\mathfrak s,\mathfrak s+\tilde\epsilon}^k(\mathbf{Z}) \bigg\}\Psi_0(\mathbf{Z}) \prod_{k=1}^n d\mathbf{Z}^k,
\]
with finite measure $\left(N_{\mathfrak s}\right)^{-n}\prod_{k=1}^{n}d\mathbf Z^k$ satisfying Corollary \ref{fc0}, with its initial goal transition function after the rain stops being $\Psi_{\tilde t}(\mathbf Z)>0$ for all $n\in\mathbb N$. Define $\Delta \mathbf{Z}(\nu)=\mathbf{Z}(\nu+d\nu)-\mathbf{Z}(\nu)$; then Fubini's Theorem for the small interval of time $[\mathfrak s,\mathfrak s']$ with $\tilde\epsilon\downarrow 0$ yields,
\begin{multline*}
\mathcal{L}_{\mathfrak s,\mathfrak s'}(\mathbf{Z})
=\int_{\mathfrak s}^{\mathfrak s'}\E_{\nu} \left\{\sum_{i=1}^{I}\sum_{m=1}^M
\exp(-\rho_{\nu}^im)\a^iW_i(\nu)h_0^i[\nu,w(\nu),z(\nu)] \right.\\
+\lambda_1[\Delta\mathbf Z(\nu,\mathbf W)-\bm\mu[\nu,\mathbf W(\nu),\mathbf Z(\nu,\mathbf W)]d\nu-\bm\sigma[\nu,\hat{\bm\sigma},\mathbf W(\nu),\mathbf Z(\nu,\mathbf W)]d\mathbf B(\nu)]\\
\left.\phantom{\int}
+\lambda_2 e^{\sqrt{8/3}k(l(\nu))}d\nu
\right\}.
\end{multline*}
As we assume the goal dynamics have drift and diffusion parts, $\mathbf{Z} (\nu,\mathbf W)$ is an It\^o process and $\mathbf{W}$ is a Markov control measure of the players. Therefore, there exists a smooth function $g[\nu,\mathbf{Z}(\nu,\mathbf W)]\in C^2\left([\tilde t,t-\varepsilon]\times\mathbb R^{2(I\times I')\times\hat t}\times \mathbb{R}^I\right)$ such that $\mathbf{Y}(\nu)=g[\nu,\mathbf{Z}(\nu,\mathbf W)]$ with $\mathbf{Y}(\nu)$ being an It\^o process. Assume
\begin{multline*}
g\left[\nu+\Delta\nu,\mathbf Z(\nu,\mathbf W)+\Delta\mathbf Z(\nu,\mathbf W)\right]=\\
\lambda_1[\Delta\mathbf Z(\nu,\mathbf W)-\bm\mu[\nu,\mathbf W(\nu),\mathbf Z(\nu,\mathbf W)]d\nu-\bm\sigma[\nu,\hat{\bm\sigma},\mathbf W(\nu),\mathbf Z(\nu,\mathbf W)]d\mathbf B(\nu)]\\
+\lambda_2 e^{\sqrt{8/3}k(l(\nu))}d\nu.
\end{multline*}
For a very small sample over-interval around $\mathfrak s$ with
$\tilde\epsilon\downarrow 0$ generalized It\^o's Lemma gives,
\begin{multline*}
\tilde\epsilon\mathcal{L}_{\mathfrak s,\mathfrak s'}(\mathbf{Z})=
\E_{\mathfrak s}\left\{\sum_{i=1}^{I}\sum_{m=1}^M
\tilde\epsilon\exp(-\rho_{\mathfrak s}^im)\a^iW_i(\mathfrak s)h_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)] \right.\\ +
\tilde\epsilon g[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
+\tilde\epsilon g_{\mathfrak s}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]+
\tilde\epsilon g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
\{\bm{\mu}[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)]\} \\
+ \tilde\epsilon g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
{\bm{\sigma}}\left[\mathfrak s,\hat{\bm{\sigma}},
\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)\right]
\Delta\mathbf{B}(\mathfrak s) \\
\left.\phantom{\sum}
+\mbox{$\frac{1}{2}$}\tilde\epsilon\sum_{i=1}^I\sum_{j=1}^I
{\bm\sigma}^{ij}
[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)]
g_{Z_iZ_j}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]+o(\tilde\epsilon)\right\},
\end{multline*}
where ${\bm\sigma}^{ij}\left[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)\right]$ represents $\{i,j\}^{th}$ component of the variance-covariance matrix, $g_{\mathfrak s}=\partial g/\partial\mathfrak s$,
$g_{\mathbf{Z}}=\partial g/\partial \mathbf{Z}$,
$g_{Z_iZ_j}=\partial^2 g/(\partial Z_i\partial Z_j)$,
$\Delta B_i\Delta B_j=\delta^{ij}\tilde\epsilon$,
$\Delta B_i\tilde\epsilon=\tilde\epsilon\Delta B_i=0$, and
$\Delta Z_i(\mathfrak s)\Delta Z_j(\mathfrak s)=\tilde\epsilon$,
where $\delta^{ij}$ is the Kronecker delta function.
As $\E_{\mathfrak s}[\Delta \mathbf{B}(\mathfrak s)]=0$, $\E_{\mathfrak s}[o(\tilde\epsilon)]/\tilde\epsilon\ra 0$ and for $\tilde\epsilon\downarrow 0$,
\begin{multline*}
\mathcal{L}_{\mathfrak s,\mathfrak s'}(\mathbf{Z})=
\sum_{i=1}^{I}\sum_{m=1}^M
\exp(-\rho_{\mathfrak s}^im)\a^iW_i(\mathfrak s)h_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)] \\ +
g[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
+ g_{\mathfrak s}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]+
g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
\{\bm{\mu}[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)]\} \\
+ g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
{\bm{\sigma}}\left[\mathfrak s,\hat{\bm{\sigma}},
\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)\right]
\Delta\mathbf{B}(\mathfrak s) \\
\phantom{\sum}
+\mbox{$\frac{1}{2}$}\sum_{i=1}^I\sum_{j=1}^I
{\bm\sigma}^{ij}
[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)]
g_{Z_iZ_j}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]+o(1).
\end{multline*}
There exists a vector $\mathbf{\xi}_{(2(I\times I')\times\hat t\times I)\times 1}$ so that
\[
\mathbf{Z}(\mathfrak s,\mathbf W)_{(2(I\times I')\times\hat t\times I)\times 1}=\mathbf{Z}(\mathfrak s',\mathbf W)_{(2(I\times I')\times\hat t\times I)\times 1}+\xi_{(2(I\times I')\times\hat t\times I)\times 1}.
\]
Assume $|\xi|\leq\eta\tilde\epsilon [\mathbf{Z}^T(\mathfrak s,\mathbf W)]^{-1}$, then
\begin{multline*}
\Psi_{\mathfrak s}(\mathbf{Z})+
\tilde\epsilon\frac{\partial \Psi_{\mathfrak s}(\mathbf{Z})}{\partial \mathfrak s}+o(\tilde\epsilon)=
\frac{1}{N_{\mathfrak{s}}}
\int_{\mathbb{R}^{2(I\times I')\times \hat t\times I}}
\left[\Psi_{\mathfrak s}(\mathbf{Z})+
\xi\frac{\partial \Psi_{\mathfrak s}(\mathbf{Z})}{\partial \mathbf{Z}}+
o(\tilde\epsilon)\right] \\\times
\exp\left\{-\tilde\epsilon\left[\sum_{i=1}^{I}\sum_{m=1}^M\exp(-\rho_{\mathfrak s}^im)\a^iW_i(\mathfrak s)h_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)+\xi]\right.\right. \\
+g[\mathfrak s,\mathbf{Z}(\mathfrak s',\mathbf W)+\xi]+
g_{\mathfrak s}[\mathfrak s,\mathbf{Z}(\mathfrak s',\mathbf W)+\xi] \\
+g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s',\mathbf W)+\xi]
\{\bm{\mu}[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)+\xi] \}\\
\left.\left.
+\mbox{$\frac{1}{2}$}\sum_{i=1}^I\sum_{j=1}^I
{\bm\sigma}^{ij}[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),
\mathbf{Z}(\mathfrak s',\mathbf W)+\xi]
g_{Z_iZ_j}[\mathfrak s,\mathbf{Z}(\mathfrak s',\mathbf W)+\xi]\right]\right\}d\xi+
o(\tilde\epsilon^{1/2}).
\end{multline*}
Define a $C^2$ function
\begin{eqnarray*}
f[\mathfrak s,\mathbf W(\mathfrak s),\xi]
& = & \sum_{i=1}^{I}\sum_{m=1}^M\exp(-\rho_{\mathfrak s}^im)\a^iW_i(\mathfrak s)h_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)+\xi] \\
& & +g[\mathfrak s,\mathbf{Z}(\mathfrak s',\mathbf W)+\xi]+
g_{\mathfrak s}[\mathfrak s,\mathbf{Z}(\mathfrak s',\mathbf W)+\xi] \\
& & +g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s',\mathbf W)+\xi]
\{\bm{\mu}[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)+\xi]\} \\
& & +\mbox{$\frac{1}{2}$}\sum_{i=1}^I\sum_{j=1}^I
{\bm\sigma}^{ij}[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),\mathbf{Z}
(\mathfrak s',\mathbf W)+\xi] \\
& & \times g_{Z_iZ_j}[\mathfrak s,\mathbf{Z}(\mathfrak s',\mathbf W)+\xi].
\end{eqnarray*}
Hence,
\begin{eqnarray*}
\Psi_{\mathfrak s}(\mathbf{Z})+
\tilde\epsilon\frac{\partial \Psi_{\mathfrak s}(\mathbf{Z})}{\partial \mathfrak s} & = &
\frac{\Psi_{\mathfrak s}(\mathbf{Z})}{N_{\mathfrak s}}
\int_{\mathbb{R}^{2(I\times I')\times \hat t\times I}}
\exp\{-\tilde\epsilon f[\mathfrak s,\mathbf{W}(\mathfrak s),\xi]\}d\xi+ \\
& &
\hspace{-1cm}
\frac{1}{N_{\mathfrak s}}\frac{\partial \Psi_{\mathfrak s}(\mathbf{Z})}{\partial \mathbf{Z}}
\int_{\mathbb{R}^{2(I\times I')\times\hat{t}\times I}}
\xi\exp\{-\tilde\epsilon f[\mathfrak{s},\mathbf{W}(\mathfrak{s}),\xi]\}d\xi+
o(\tilde\epsilon^{1/2}).
\end{eqnarray*}
For $\tilde\epsilon\downarrow0$ and $\Delta \mathbf{Z}\downarrow0$,
\begin{eqnarray*}
f[\mathfrak s,\mathbf{W}(\mathfrak s),\xi] & = &
f[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]\\
& &+ \sum_{i=1}^{I}f_{Z_i}[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)][\xi_i-Z_i(\mathfrak s,\mathbf W)]\\
& &
\hspace{-2.5cm}
+\mbox{$\frac{1}{2}$}\sum_{i=1}^{I}\sum_{j=1}^{I}
f_{Z_iZ_j}[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]
[\xi_i-Z_i(\mathfrak s',\mathbf W)][\xi_j-Z_j(\mathfrak s',\mathbf W)]+o(\tilde\epsilon).
\end{eqnarray*}
There exists a symmetric, positive definite and non-singular Hessian matrix $\mathbf{\Theta}_{[2(I\times I')\times\hat t\times I]\times [2(I\times I')\times\hat t\times I]}$ and a vector $\mathbf{R}_{(2(I\times I')\times\hat t\times I)\times 1}$ such that,
\begin{multline*}
\int_{\mathbb{R}^{2(I\times I')\times\hat t\times I}}
\exp\{-\tilde\epsilon f[\mathfrak s,\mathbf{W}(\mathfrak s),\xi]\}d\xi=\\
\sqrt{\frac{(2\pi)^{2(I\times I')\times\hat t\times I}}{\tilde\epsilon |\mathbf{\Theta}|}}
\exp\left\{-\tilde\epsilon f[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]
+\mbox{$\frac{1}{2}$}\tilde\epsilon\mathbf{R}^T\mathbf{\Theta}^{-1}\mathbf{R}\right\},
\end{multline*}
where
\begin{multline*}
\int_{\mathbb{R}^{2(I\times I')\times\hat t\times I}} \xi
\exp\{-\tilde\epsilon f[\mathfrak s,\mathbf{W}(\mathfrak s),\xi]\} d\xi \\
=\sqrt{\frac{(2\pi)^{2(I\times I')\times\hat t\times I}}{\tilde\epsilon |\mathbf{\Theta}|}}
\exp\{-\tilde\epsilon f[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]+
\mbox{$\frac{1}{2}$}\tilde\epsilon \mathbf{R}^T
\mathbf{\Theta}^{-1}\mathbf{R}\}\\\times
[\mathbf{Z}(\mathfrak s',\mathbf W)+\mbox{$\frac{1}{2}$} (\mathbf{\Theta}^{-1}\mathbf{R})].
\end{multline*}
Therefore
\begin{multline*}
\Psi_{\mathfrak s}(\mathbf{Z})+\tilde\epsilon
\frac{\partial \Psi_{\mathfrak s}(\mathbf{Z})}{\partial \mathfrak{s}}=
\frac{1}{N_{\mathfrak s}}
\sqrt{\frac{(2\pi)^{2(I\times I')\times\hat t\times I}}{\tilde\epsilon |\mathbf{\Theta}|}} \\\times
\exp\{-\tilde\epsilon f[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]+
\mbox{$\frac{1}{2}$}\tilde\epsilon\mathbf{R}^T\mathbf{\Theta}^{-1}\mathbf{R}\}\\
\times\left\{\Psi_{\mathfrak s}(\mathbf{Z})+
[\mathbf{Z}(\mathfrak s',\mathbf W)+\mbox{$\frac{1}{2}$}(\mathbf{\Theta}^{-1} \mathbf{R})]
\frac{\partial \Psi_{\mathfrak{s}}(\mathbf{Z})}{\partial \mathbf{Z}}
\right\}+o(\tilde\epsilon^{1/2}).
\end{multline*}
Assuming $N_{\mathfrak s}=\sqrt{(2\pi)^{2(I\times I')\times\hat t\times I}/\left(\tilde\epsilon |\mathbf{\Theta}|\right)}>0$,
we get a Wick-rotated Schr\"odinger-type equation,
\begin{multline*}
\Psi_{\mathfrak s}(\mathbf{Z})+
\tilde\epsilon\frac{\partial \Psi_{\mathfrak s}(\mathbf{Z})}{\partial\mathfrak s}
=\left\{1-\tilde\epsilon f[\mathfrak s,\mathbf{W}(s),\mathbf{Z}(\mathfrak s',\mathbf W)]+
\mbox{$\frac{1}{2}$}\tilde\epsilon\mathbf{R}^T\mathbf{\Theta}^{-1}\mathbf{R}\right\} \\\times
\left[\Psi_{\mathfrak s}(\mathbf{Z})+
[\mathbf{Z}(\mathfrak s',\mathbf W)+\mbox{$\frac{1}{2}$}(\mathbf{\Theta}^{-1} \mathbf{R})]
\frac{\partial \Psi_{\mathfrak{s}}(\mathbf{Z})}{\partial \mathbf{Z}}
\right]+o(\tilde\epsilon^{1/2}).
\end{multline*}
As $\mathbf{Z}(\mathfrak s,\mathbf W)\leq\eta\tilde\epsilon|\xi^T|^{-1}$, there exists $|\mathbf{\Theta}^{-1}\mathbf{R}|\leq 2 \eta\tilde\epsilon|1-\xi^T|^{-1}$ such that for
$\tilde\epsilon\downarrow 0$ we have $\big|\mathbf{Z}(\mathfrak s',\mathbf W)+\mbox{$\frac{1}{2}$}\left(\mathbf{\Theta}^{-1}\ \mathbf{R}\right)\big|\leq\eta\tilde\epsilon$ and hence,
\[
\frac{\partial \Psi_{\mathfrak s}(\mathbf{Z})}{\partial \mathfrak s}=
\big[- f[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]+
\mbox{$\frac{1}{2}$}\mathbf{R}^T\mathbf{\Theta}^{-1}\mathbf{R}\big]\Psi_{\mathfrak s}(\mathbf{Z}).
\]
For $|\mathbf{\Theta}^{-1}\mathbf{R}|\leq 2 \eta\tilde\epsilon|1-\xi^T|^{-1}$
and at $\tilde\epsilon\downarrow 0$,
\[
\frac{\partial \Psi_{\mathfrak s}(\mathbf{Z})}{\partial \mathfrak s}=
-f[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]
\Psi_{\mathfrak s}(\mathbf{Z}),
\]
and
\begin{equation}\label{f10}
-\frac{\partial }{\partial W_i}
f[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]
\Psi_{\mathfrak s}(\mathbf{Z})=0.
\end{equation}
In Equation (\ref{f10}),
$\Psi_{\mathfrak s}(\mathbf{Z})$ is the transition wave function and cannot be zero; therefore,
\[
\frac{\partial }{\partial W_i}
f[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s',\mathbf W)]=0.
\]
We know that $\mathbf{Z}(\mathfrak s',\mathbf W)=\mathbf{Z}(\mathfrak s,\mathbf W)-\xi$, and we let $\xi\downarrow 0$ as we are looking for a stable solution. Hence, $\mathbf{Z}(\mathfrak s',\mathbf W)$ can be replaced by $\mathbf{Z}(\mathfrak s,\mathbf W)$, and
\begin{multline*}
f[\mathfrak s,\mathbf W(\mathfrak s),\mathbf Z(\mathfrak s,\mathbf W)]=
\sum_{i=1}^{I}\sum_{m=1}^M\exp(-\rho_{\mathfrak s}^im)\a^iW_i(\mathfrak s)h_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)] \\
+g[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]+
g_{\mathfrak s}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)] \\
+g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
\{\bm{\mu}[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)]\} \\
+\mbox{$\frac{1}{2}$}
\sum_{i=1}^I\sum_{j=1}^I{\bm\sigma}^{ij}
[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)]\ g_{Z_iZ_j}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)],
\end{multline*}
so that
\begin{multline}\label{f11}
\sum_{i=1}^{I}\sum_{m=1}^M\exp(-\rho_{\mathfrak s}^im)\a^ih_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)]\\
+g_{\mathbf{Z}}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]
\frac{\partial
\{\bm\mu[\mathfrak s,\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)]\}
}{\partial \mathbf{W}}
\frac{\partial \mathbf{W}}{\partial W_i }\\
+\mbox{$\frac{1}{2}$}
\sum_{i=1}^I\sum_{j=1}^I
\frac{\partial
{\bm\sigma}^{ij}[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),\mathbf{Z}(\mathfrak s,\mathbf W)] }
{\partial \mathbf{W}}
\frac{\partial \mathbf{W}}{\partial W_i }
g_{Z_iZ_j}[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]=0.
\end{multline}
If in Equation (\ref{f11}) we solve for $\a_i$, we can get a solution for the weight attached to $W_i(\mathfrak s)$. In order to get a closed-form solution we have to assume $\a_i=\a_j=\a^*$ for all $i\neq j$, which yields,
\begin{eqnarray}\label{f12}
\a^* & = &
-\left[\sum_{i=1}^{I}\sum_{m=1}^M\exp(-\rho_{\mathfrak s}^im)\a^ih_0^i[\mathfrak s,w(\mathfrak s),z(\mathfrak s)]\right]^{-1}
\notag \\
& & \times\left[\frac{\partial g[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]}{\partial{\mathbf{Z}}}
\frac{\partial
\{\bm{\mu}[\mathfrak s,\mathbf{W}(\mathfrak{s}),\mathbf{Z}(\mathfrak s,\mathbf W)]\} }{\partial \mathbf{W}}
\frac{\partial \mathbf{W}}{\partial W_i } \right. \notag \\
& & \left.
+\mbox{$\frac{1}{2}$}
\sum_{i=1}^I\sum_{j=1}^I
\frac{\partial {\bm\sigma}^{ij}[\mathfrak s,\hat{\bm{\sigma}},\mathbf{W}(\mathfrak s),
\mathbf{Z}(\mathfrak s,\mathbf W)]}{\partial \mathbf{W}}
\frac{\partial \mathbf{W}}{\partial W_i }
\frac{\partial^2 g[\mathfrak s,\mathbf{Z}(\mathfrak s,\mathbf W)]}{\partial Z_i\partial Z_j} \right].
\end{eqnarray}
The expression in Equation (\ref{f12}) is a unique closed-form solution.
The sign of $\a^*$ varies with the signs of all the partial derivatives. $\square$
\section{Discussion}
In this paper we obtain a weight $\a^*$ for a soccer match with rain interruption. This coefficient tells us how to select a player to score goals at a certain position based on the condition of that match. It is a common practice to hide a new talented player until the last $15$ minutes of the game. As the player is new, the opposition team has less information about them, and as they enter the game for the first time, they have more energy than a player who has been playing for the last $75$ minutes. Our model determines the weight associated with these players under more generalized and realistic conditions of the game.
We use a Feynman path integral technique to calculate $\a^*$. In the later part of the paper we focus on the more volatile environment after the rain stops. We assume that, after a rain stoppage, the occurrence of each kick towards the goal strictly depends on the amount of rain at that sample time. If it is more than $b\in\mathbb R$ millimeters, then it is very hard to move with the ball on the field, and it is extremely difficult for a goal keeper to grip the ball, which results in the match not being resumed. Using It\^o's lemma we define a $\delta_{\mathfrak s}$-gauge which generates a sample time $\mathfrak s$ instead of the actual time $s$, which is assumed to follow a Wiener process. Furthermore, we assume the action space of a soccer player has a $\sqrt{8/3}$-LQG surface, and we construct a stochastic It\^o-Henstock-Kurzweil-McShane-Feynman-Liouville type path integral to solve for the optimal weight associated with them. As the environment does not add extra moisture before the rain, a technically sound player does not need to predict the behavior of an opponent, which is not true in the case of a match after a rain stoppage.
\section{Introduction}
Historically travel demand modeling has been carried out for an analysis period of one day using macroscopic
models that produce aggregate data for a day~\citep{mcnally2000fourstep}.
With the emergence of activity-based models~\citep{bhat1999activity} and
agent-based models~\citep{raney2003truly} much
more fine-grained results were possible.
While much progress has been made in this direction,
the typical analysis period used in travel demand models
like the Day-Activity-Schedule approach~\citep{bowman2001activity},
TRANSIMS~\citep{rilett2001overview},
MATSim~\citep{horni2016multi}, ALBATROSS~\citep{arentze2004learning},
Aurora/FEATHERS~\citep{arentze2006aurora2feathers},
or Sacsim~\citep{bradley2010sacsim}
is still one day.
In travel behavior research it has become common to use multi-day surveys
to study phenomena like
repetition, variability of
travel~\citep{huff1986repetition,hanson1988systematic,pas1995intrapersonal},
or the rhythms of daily life~\citep{axhausen2002observing}.
\citet{jones1988significance}
point out the importance of multi-day travel surveys for the assessment of transport policy measures,
since one-day surveys
do not allow assessing how severe individual persons are affected by a policy measure.
Taking these results of travel behavior research into account,
travel demand models should aim to model periods longer than a day.
Of the established travel demand models, however,
only MATSim has at least envisioned
an extension of the simulation period from a day to a week~\citep{ordonez2012simulating},
followed by a prototypical implementation, which has been applied to a
1\%~scenario~\citep{horni2012matsim}.
Some hints indicate that Aurora might be able to simulate
a multi-day period~\citep{arentze2010agent}, but the focus of this work
is on the rescheduling of daily activity plans.
A specific multi-day travel demand model has been presented
by~\citet{kuhnimhof2009multiday} and \citet{kuhnimhof2009measuring},
but the implementation is only a prototype
and the model is only applied to a sample of 10\,000 agents.
A model for generating multi-week activity agendas,
which considers household activities, personal activities, and joint activities,
is presented by \citet{arentze2009need},
based on the theory of need-based activity generation described by~\citet{arentze2006new}.
The model has been extended by~\citet{nijland2014multi} to take long-term planned activities
and future events into account.
A destination choice model that could be used in combination with the
activity generation model is outlined by~\citet{arentze2013location}.
When \mobitopp was initially designed, a simulation period of
one week was already envisaged~\citep{schnittger2004longitudinal}.
However, the first implementation supported only the simulation of one day.
After some early
applications of mobiTopp~\citep{gringmuth2005impacts,bender2005enhancing},
which stimulated further extensions,
the development almost ceased and work on \mobitopp was limited to maintenance.
Nevertheless, \mobitopp was still in use for transport planning,
for example in the Rhine-Neckar Metropolitan Region~\citep{kagerbauer2010mikroskopische}.
In 2011 the \mobitopp development was resumed~\citep{mallig2013mobitopp}
and a simulation period of one week was finally implemented.
Using this new feature
\mobitopp was successfully applied to the Stuttgart Region, a metropolitan area in Germany, as study area.
This work was the foundation for further extensions
in recent years,
like
ridesharing~\citep{mallig2015passenger},
electric vehicles~\citep{weiss2017assessing,mallig2016modelling},
and carsharing~\citep{heilig2017implementation}.
This paper describes the current state of \mobitopp.
The body of the paper is divided into three parts.
The first part (Section~\ref{sec:mobitopp}) describes the overall
structure of the \mobitopp framework.
The second part (Section~\ref{sec:extensions}) describes the extensions made to \mobitopp during the last years,
which are not part of \mobitopp's core.
The third part (Section~\ref{sec:application}) describes an application of \mobitopp.
It contains the specifications of the
actual mode choice and destination choice models used for this scenario and the simulation results.
The simulation results show that simply extending a single-day model
to a multi-day horizon already leads to useful results.
However, there are some issues that need special attention,
namely ensuring realistic start times of activities,
joint activities, and seldom-occurring long-distance trips.
\section{The \mobitopp model}
\label{sec:mobitopp}
\mobitopp is an activity-based travel demand
simulation
model
in the tradition of \emph{simulating activity chains} \citep{axhausen1989simulating}.
It is based on the principle of agent based simulation~\citep{bonabeau2002agent},
meaning that each person of the study area is modeled as an agent.
An agent in this context is an entity that makes decisions autonomously, individually,
and depending on the situation,
and interacts with other agents.
In \mobitopp,
each agent has an individual activity schedule (activity chain)
that is executed over the simulation period,
making
decisions for destination choice and mode choice.
These decisions are based on discrete choice models.
Since \mobitopp does not yet contain an internal traffic assignment procedure
and relies on external tools for this purpose,
interactions
between agents occur
in the basic version of \mobitopp
only indirectly by
availability or non-availability of cars in the household context.
When a household's last available car is used by an agent, the mode \emph{car as driver} is no longer available
for other household members until a car
is returned to the household's car pool.
The mode choice options of the other household members are therefore restricted by the action of the agent.
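The following minimal sketch illustrates this indirect interaction through the household car pool; the class and method names are invented for illustration and do not correspond to the actual implementation of \mobitopp.
\begin{verbatim}
class HouseholdCarPool:
    """Shared pool of cars; 'car as driver' is only available while
    at least one car is left in the pool."""

    def __init__(self, cars):
        self.available = list(cars)

    def car_as_driver_available(self):
        return len(self.available) > 0

    def take_car(self):
        # called when an agent starts a trip as car driver
        return self.available.pop()

    def return_car(self, car):
        # called when the agent returns the car to the household
        self.available.append(car)


pool = HouseholdCarPool(["car_1"])
car = pool.take_car()                     # first agent takes the only car
print(pool.car_as_driver_available())     # False: mode blocked for others
pool.return_car(car)
print(pool.car_as_driver_available())     # True again
\end{verbatim}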
Direct interaction between the agents occurs in the case of ridesharing;
this functionality is provided by
one of \mobitopp's extensions (see Section~\ref{sec:extensions:ridesharing}).
When this extension is activated,
agents
travelling by car offer ridesharing
opportunities; agents that choose the mode \emph{car as passenger} actively search for
ridesharing opportunities. Agents travel together in the same car if a ridesharing request matches a
corresponding offer.
The agents' activities and trips are simulated
chronologically over a simulation period
up to one week.
The temporal resolution is one minute; the spatial resolution is based on traffic analysis zones.
\mobitopp has been successfully applied to a study area with more than two million inhabitants
distributed over
more than a thousand traffic analysis zones~\citep{mallig2013mobitopp,kagerbauer2016modeling}.
\mobitopp consists of two major parts, the \emph{\stageinit} and the \emph{\stagesimu},
each of them making use of several modules (see Figure~\ref{fig:mobitopp}).
The \stageinit represents the long-term aspects of the system like population synthesis, assignment
of home zone and zone of workplace, car ownership, and ownership of transit pass.
These long-term aspects define the framework conditions for the subsequent travel demand simulation.
The \stagesimu models the travel behavior of the agents, consisting of destination choice
and mode choice, over the course of the simulation period of one week.
\mobitopp is implemented in JAVA using the object-oriented design paradigm.
Every module is described by an interface.
Typically, several implementations for each module exist, which are easily exchangeable.
Exchanging existing implementations of a module is basically a matter of configuration.
New implementations can be plugged-in easily.
There is typically a default implementation, which provides only the basic functionality.
More complex behavior is realized by specialized implementations, typically as a result of a
specific research project.
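This plug-in principle can be illustrated by a small, purely hypothetical analogue of a module interface with one simple implementation; the interface, the class names, and the nearest-zone rule are invented for illustration and are not the actual \mobitopp modules, which are Java interfaces.
\begin{verbatim}
from abc import ABC, abstractmethod

class DestinationChoiceModel(ABC):
    """Interface of a module; concrete implementations are exchangeable."""

    @abstractmethod
    def select_zone(self, agent, origin_zone, activity):
        ...

class NearestZoneDestinationChoice(DestinationChoiceModel):
    """A deliberately simple, default-style implementation."""

    def __init__(self, distances):
        self.distances = distances          # distances[origin][destination]

    def select_zone(self, agent, origin_zone, activity):
        row = self.distances[origin_zone]
        candidates = (z for z in range(len(row)) if z != origin_zone)
        return min(candidates, key=lambda zone: row[zone])

# Which implementation is used is basically a matter of configuration.
config = {"destination_choice": NearestZoneDestinationChoice(
    distances=[[0, 2, 5], [2, 0, 3], [5, 3, 0]])}
print(config["destination_choice"].select_zone(agent=None, origin_zone=1,
                                               activity="shopping"))
\end{verbatim}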
\begin{figure}
\centering
\includegraphics[width=\textwidth]{structure-crop}
\caption{Structure of \mobitopp: long-term and short-term model.}
\label{fig:mobitopp}
\end{figure}
\subsection{Long-term model}
The first part of the \stageinit is the population synthesis module.
Households and persons are generated for each zone based on total numbers of households and persons
given on the level of
traffic analysis zones and distributions of the households' and persons' attributes.
The corresponding zone is assigned as home zone.
An activity schedule is assigned to each person.
In addition, long-term decisions like workplace or school place, car ownership and transit pass ownership
are modeled.
These assignments remain fixed for the following \stagesimu.
\subsubsection{Population synthesis}
Central input of the population synthesis is the data of a household travel survey.
Preferably, a survey for the planning area is used; however, nationwide
surveys like the German Mobility Panel~\citep{wirtz2013panel} can
be used
as a replacement.
The population of each zone is generated by repeated random draws of households and the corresponding persons
from the survey data. The distributions of households' and persons' attributes are taken into account
by an appropriate weighting of a household's probability to be drawn.
The population synthesis uses a two-stage process similar to the method described
by~\citet{mueller2011hierarchical},
which is based on the idea of iterative proportional fitting
introduced by~\citet{beckman1996creating} to population synthesis.
In the first stage,
an initially equally distributed weight is assigned to each household.
These weights are subsequently
adjusted in an iterative process until the weighted distribution of the households matches the given distribution
of the household types and the distribution of the persons weighted by the corresponding household weight matches
the given marginal distributions of the persons' attributes.
In the second stage,
the corresponding number of households
for each household type
is drawn randomly
with replacement from the weighted distribution of households.
The survey household acts as a prototype for the household in the model, meaning that the household created in the
model has the same attributes
as the prototype household.
Likewise,
an agent is created
for each person of the survey household,
which has
the same attributes
and the same \activityprogram
as the survey person.
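The following sketch illustrates the idea of the first stage for a single household-type marginal and a single person-attribute marginal; the variable names, the fixed number of iterations, and the toy data are illustrative simplifications of the actual procedure.
\begin{verbatim}
import numpy as np

def fit_household_weights(hh_type, persons_per_class, target_hh,
                          target_persons, iterations=50):
    """Adjust household weights so that weighted household-type counts and
    weighted person counts per attribute class match given marginals."""
    w = np.ones(len(hh_type), dtype=float)
    for _ in range(iterations):
        # adjust to the household-type marginal
        for t, target in enumerate(target_hh):
            sel = hh_type == t
            total = w[sel].sum()
            if total > 0:
                w[sel] *= target / total
        # adjust to the person-attribute marginal (persons weighted by
        # the weight of their household)
        for c, target in enumerate(target_persons):
            total = (w * persons_per_class[:, c]).sum()
            sel = persons_per_class[:, c] > 0
            if total > 0:
                w[sel] *= target / total
    return w

# Toy example: two household types, one person-attribute class.
hh_type = np.array([0, 0, 1, 1])
persons = np.array([[1], [2], [3], [1]])
print(fit_household_weights(hh_type, persons, target_hh=[10, 5],
                            target_persons=[25]))
\end{verbatim}
The second stage then corresponds to a weighted random draw of households (with replacement) using the fitted weights.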
\subsubsection{Activity Schedule Generation}
Once the agent is created, he is assigned an \activityprogram for a complete week.
The \activityprogram consists of a sequence of activities with the attributes
purpose, planned start time, and duration.
The default implementation of the \emph{\activityprogram creator} is quite simplistic:
it basically copies the \activityprogram{}s for all persons from the persons of a matching survey household.
A more sophisticated implementation of an \activityprogram creator,
which synthetically calculates the whole weekly \activityprogram{}s,
is under development~\citep{hilgert2017modeling}.
This \activityprogram creator, called \emph{actiTopp},
consists of a hierarchy of submodels with the levels week, day, tour, and activity.
Multinomial Logit models (MLM)
are used for decisions with a finite number of alternatives, for example the number
of activities per tour.
Continuous variables like start time are determined using a hybrid approach.
An MLM is used to determine the broad period during the day;
a random draw from an empirical distribution is used to determine the exact time during the broad period.
The upper level of the hierarchy models for each activity type the decisions relevant for the whole week:
the number of days an activity of this type will take place, the available time for each activity type,
and the usual start time of the main tours.
The levels below (day, tour, activity) use an approach similar to
the Day Activity Schedule approach~\citep{bowman2001activity}, but take also into account
the decisions made at the week level.
The day level determines the main tour and the number of secondary tours.
The tour level determines the main activity of the secondary tours and the number of secondary activities
for each tour.
The activity level determines the activity types for the secondary activities and the activity durations.
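The hybrid treatment of continuous variables can be illustrated as follows: a simple multinomial logit choice selects a coarse period of the day, and the exact start time is drawn from an empirical distribution within that period. The utilities, the period boundaries, and the empirical values below are invented placeholders, not the estimated actiTopp parameters.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def mnl_choice(utilities):
    """Multinomial logit: choose alternative i with probability
    exp(V_i) / sum_j exp(V_j)."""
    v = np.asarray(utilities, dtype=float)
    p = np.exp(v - v.max())
    p /= p.sum()
    return rng.choice(len(v), p=p)

# Coarse periods of the day (hours) and invented systematic utilities.
periods = [(5, 9), (9, 12), (12, 16), (16, 20), (20, 24)]
utilities = [1.2, 0.3, 0.1, 0.8, -0.5]

# Invented empirical start times (hours) observed within each period.
empirical = {0: [6.5, 7.0, 7.25, 8.0], 1: [9.5, 10.0], 2: [13.0, 14.5],
             3: [17.0, 18.25, 19.0], 4: [20.5, 22.0]}

period = mnl_choice(utilities)                 # MNL: broad period of the day
start_time = rng.choice(empirical[period])     # draw exact time within period
print(periods[period], start_time)
\end{verbatim}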
\subsubsection{Assignment of workplace}
Workplace and school place typically remain stable over a longer period
and are
therefore modeled in the
\stageinit and kept constant during the \stagesimu.
In the \stagesimu, no destination choice is made for activities of
type work or education; instead the location assigned in the \stageinit is used.
The assignment of workplace and school place is based on external matrices,
called \emph{commuting matrices} in \mobitopp,
representing
the distribution of workplaces and school places for the inhabitants of each zone.
For Germany, matrices of this type
can be acquired from the Federal Employment Agency for the workplace
on the level of municipalities;
however, disaggregation to the level of traffic analysis
zones is necessary.
Disaggregation can be done proportionally to an appropriate variable known for each zone:
the number of inhabitants for the residential side
and, for example, the number of workplaces or the total size of office space for the work-related side
of each relation.
For school places, this type of data is typically not available in Germany, so one has to resort to modeled
data.
Ideally, matrices can be adopted from an existing macroscopic model.
The agents are assigned workplaces and school places
based on these matrices.
Agents whose prototypes in the survey have reported long commuting distances are assigned workplaces
with long distances from their home zones, while agents whose prototypes reported short commuting
distances are assigned workplaces close to their home zones.
That way it is ensured that the commuting distance is consistent with the \activityprogram, which is taken from
the survey data.
The assignment of workplace and school place is done for each home zone in the following way:
The number of workplaces for the current zone given by the commuting matrix is normalized to match
the total number of working persons living in the zone. The workplaces are ordered by increasing distance.
The working persons are ordered by increasing commuting distance reported by their prototype in the survey.
Then each $k$th working person is assigned the $k$th workplace.
The procedure for the assignment of school places is the same.
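This rank-matching step for a single home zone can be sketched as follows; the commuting-matrix row, the distances, and the reported commuting distances are invented toy data, and the sketch ignores edge cases handled in the actual implementation.
\begin{verbatim}
import numpy as np

def assign_workplaces(commuter_row, zone_distances, reported_distances):
    """Assign a workplace zone to each working person of one home zone.

    commuter_row       -- workplaces per destination zone (one matrix row)
    zone_distances     -- distance from the home zone to each destination zone
    reported_distances -- commuting distance reported by each person's prototype
    """
    n_workers = len(reported_distances)
    # normalize the matrix row to the number of workers living in the zone
    scaled = np.round(commuter_row / commuter_row.sum() * n_workers).astype(int)
    # expand to one entry per workplace and order by increasing distance
    workplace_zones = np.repeat(np.arange(len(commuter_row)), scaled)
    workplace_zones = workplace_zones[np.argsort(zone_distances[workplace_zones])]
    # order persons by the commuting distance of their survey prototype
    person_order = np.argsort(reported_distances)
    assignment = {}
    for k, person in enumerate(person_order):
        assignment[person] = workplace_zones[min(k, len(workplace_zones) - 1)]
    return assignment

row = np.array([4.0, 10.0, 6.0])            # invented commuting-matrix row
dist = np.array([2.0, 8.0, 15.0])           # km from home zone to each zone
reported = np.array([3.0, 20.0, 5.0, 12.0]) # km reported by the prototypes
print(assign_workplaces(row, dist, reported))
\end{verbatim}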
\subsubsection{Car ownership}
The number of cars each household owns is an attribute used by the population synthesis module
and is therefore already defined.
The car ownership model determines the type of car in terms of segment and engine type.
It consists of two submodels: a car segment model and a car engine model.
The default implementations of these models are mere placeholders for more complex models
that can be plugged in as needed.
The default car segment model always assigns a car of the midsize class.
The default car engine model randomly assigns either a combustion engine or an electric engine; the probability
of assigning an electric engine is configurable.
More elaborate implementations of the models were developed as part of \mobitopp's
electric vehicle extension~\citep{weiss2017assessing}.
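A minimal sketch of these two default placeholder models, with purely illustrative names and return values (not \mobitopp's actual API), could look as follows.
\begin{verbatim}
import random

def default_car_segment():
    """Default car segment model: always a midsize car."""
    return "midsize"

def default_car_engine(p_electric, rng=random):
    """Default car engine model: electric with configurable probability,
    otherwise combustion."""
    return "electric" if rng.random() < p_electric else "combustion"
\end{verbatim}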
\subsubsection{Transit pass}
The transit pass model decides for each agent whether he owns a transit pass or not.
The model is a binary logit model using the attributes sex, employment status, car availability,
and the households' number of cars divided by household size.
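Written out in generic notation (ours, not part of the original specification), the ownership probability of such a binary logit model is
\begin{equation*}
P(\text{transit pass}) = \frac{1}{1 + \exp(-V)}, \qquad
V = \beta_0 + \beta_{female}\,x_{female} + \beta_{empl}\,x_{empl}
  + \beta_{avail}\,x_{avail} + \beta_{cars}\,\frac{n_{cars}}{n_{persons}},
\end{equation*}
where the employment-status and car-availability terms are shorthand for sums over the corresponding dummy variables and $n_{cars}/n_{persons}$ is the household's number of cars divided by household size.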
\subsection{Short-term model}
The main components of the \stagesimu (see Fig.~\ref{fig:mobitopp})
are the \emph{destination choice model}, the \emph{mode choice model}, and the \emph{mode availability model}.
The destination choice model and the mode choice model are used directly to model the decisions of the agents.
The mode availability model
is an auxiliary model used to determine the agent's available modes.
It is used by the mode choice model and by the destination choice model.
During the \stagesimu, the travel behavior of all agents is simulated
simultaneously and chronologically.
The simulation starts
on Monday at 00:00 and typically ends
on Sunday at 23:59, for a simulation period of one week.
During the simulation period, the agents execute their \activityprogram{}s.
Each agent typically starts the simulation performing an activity at home.
When an agent has finished his current activity,
he inspects his activity schedule and identifies the next activity.
For this activity, he
makes a destination choice followed by a mode choice,
taking into account the
available modes. Then he makes the trip to the chosen destination using the selected mode. When he reaches
the destination, he starts performing the next activity.
\mobitopp uses a fixed order of destination choice first and mode choice second.
This fixed order can be seen as a certain limitation of the model, since in reality these decisions are not
necessarily made in this order.
In case of intermediate stops of a tour made by car, for example, the mode choice decision is made
at the beginning of the tour, while the locations of the intermediate stops may be still unknown.
Destination choice for these stops may be made at a later point in time during the tour.
We made the decision to use a fixed order because it simplifies the implementation.
Besides, in many cases the destination choice is made before the mode choice decision,
for example for work and school trips.
The issue of mode choice before destination choice for intermediate stops is addressed
by the mode availability model (see Section~\ref{sec:modeavailability}).
This model allows a mode choice for intermediate stops only when the mode used for the tour
is one of the modes walking, car as passenger, or public transport.
In case of the modes car as driver or cycling, the agent is constrained to the currently used mode.
So the order of destination choice and mode choice is virtually reversed in this case:
after the mode choice for the first trip of the tour only destination choices are
made for the trips afterwards.
For home-based trips that do not involve an activity with a fixed location,
destination choice and mode choice can be considered simultaneous.
The joint probability for each combination of destination and mode can be partitioned
into a marginal probability and a conditional probability~\citep{benakiva1974structure},
which leads to a Nested Logit structure~\citep[Ch.~10]{benakiva1987discrete_choice_analysis}.
The applicability of this approach in \mobitopp has been shown by \citet{heilig2017largescale}.
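In our notation, this partition reads
\begin{equation*}
P(d, m) = P(d) \cdot P(m \mid d),
\end{equation*}
where $d$ denotes the destination, $m$ the mode, $P(d)$ the marginal destination choice probability, and $P(m \mid d)$ the mode choice probability conditional on the chosen destination.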
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{mobitopp_fsm_base-crop}
\caption{State diagram describing the behavior of an agent during \mobitopp's \stagesimu.}
\label{fig:simulationstage}
\end{figure}
The behavior of an agent can be described as a state diagram (Fig.~\ref{fig:simulationstage}).
The circles denote states and the arrows denote transitions between states. The clocks at the arrows indicate
that the agent remains in these states until some specific time has elapsed,
i.\,e. until the trip is finished or the activity is finished.
The agent starts in the state \stateUninitialized; after initialization, the state changes to \stateActivity.
The agent remains in this state for the duration of the activity. When the activity is finished, the state
changes to \stateTrip. The agent remains in this state until the trip is finished and the state changes
again to \stateActivity.
As long as there are more activities left in the activity program,
the cycle \stateActivity--\stateTrip--\stateActivity repeats.
When the last activity is finished the state changes to \stateFinished.
The agents act
at the transitions between the states.
At the transition from state \stateActivity to state \stateTrip, marked as \emph{end activity},
the agent
first makes a destination choice and then a mode choice.
If the agent is currently
at home and the mode chosen is \emph{car as driver},
the agent takes one of the household's available cars. This car is then no longer available for other agents
until the agent that has taken the car returns home again.
At the transition from state \stateTrip to state \stateActivity,
marked as \emph{end trip},
information about the trip made is written to the trip file.
If the trip was a trip back home made by mode \emph{car as driver},
the car is returned to the household's car pool and
is again available to other members of the household.
During initialization, at the transition from state \stateUninitialized to state \stateActivity,
the first activity is initialized.
If the activity is not of type \emph{at home}, a destination choice and a mode choice are made for
the trip preceding the activity.
If the mode is \emph{car as driver} the agent is assigned a car and
the car is removed from the pool of available cars of his household.
\subsubsection{Destination choice}
\label{sec:destination_choice}
The destination choice model distinguishes between two types of activities: activities with fixed locations
(work, school, at home) and activities with flexible locations, for example shopping or leisure.
For activities with fixed locations, no destination choice is made, since these destinations have already
been determined by the \stageinit.
For activities with flexible locations, a destination choice is made
on the level of traffic analysis zones
using a discrete choice model.
Different implementations of destination choice models exist for \mobitopp.
The different implementations of the
destination
choice model have in common that they consider not only
travel time and cost for the trip to the potential destination,
but also travel time and cost for the trip to the next fixed location
the agent knows he will visit.
The actual specification of the destination choice model used for the results shown in this paper is given in
Section~\ref{sec:implementation_destination_choice}.
\subsubsection{Mode choice}
\label{sec:mode_choice}
In \mobitopp a trip is a journey from the location of one activity to the location
of the next activity.
\mobitopp models only the main transportation mode for each trip;
trips with several legs travelled by different modes are not supported.
So every trip has exactly one mode.
In its basic version,
\mobitopp distinguishes between five modes: walking, cycling, public transport, car as driver,
and car as passenger.
Two additional modes, station-based carsharing and free-floating carsharing,
are provided by an extension~\citep{heilig2017implementation}
(see Section~\ref{sec:extensions:carsharing}).
The actual available choice set is situation-dependent, consisting of a non-empty subset of the full choice set.
\mobitopp aims at modeling the available choice set realistically.
This means, for example, that an agent who is not at home and has arrived at
his current location by public transport
should not have the modes cycling and car as driver available for his next trip.
Or if an agent is at home and all cars of his household are currently in use by other agents, then
the agent should not be able to choose the mode car as driver.
The complexity of these dependencies is captured in the \emph{mode availability model},
which is described in the next section.
The choice between the available modes is made by a
discrete choice model, which typically uses
service variables of the transport system, like travel time and cost,
and sociodemographic variables of the agents.
The mode is selected by a random draw
from the resulting discrete probability distribution.
The actual discrete choice model used for mode choice in \mobitopp is configurable.
The actual specification of the mode choice model used for the results of this paper is given in
Section~\ref{sec:implementation_mode_choice}.
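The selection step can be sketched as follows (illustrative Python code; the concrete choice model configured in \mobitopp may differ): the logit probabilities are computed over the available modes only, and the mode is drawn from the resulting distribution.
\begin{verbatim}
import math
import random

def choose_mode(utilities, rng=random):
    """Draw a mode from a multinomial logit model over the available modes.

    utilities: dict mapping each available mode -> systematic utility,
               computed from service and sociodemographic variables.
    """
    exp_u = {mode: math.exp(u) for mode, u in utilities.items()}
    total = sum(exp_u.values())
    modes = list(exp_u.keys())
    probabilities = [exp_u[mode] / total for mode in modes]
    return rng.choices(modes, weights=probabilities, k=1)[0]
\end{verbatim}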
\subsubsection{Mode availability model}
\label{sec:modeavailability}
The actual choice set an agent has available depends on the current situation of the agent,
i.\,e. the current location, the previous mode choice
and the mode choices of the other agents of the same household.
The most important factor is the agent's current location.
If the agent is at home, in principle all modes are
available independently of the mode used before. However, the mode car as driver is not available if the agent
does not hold a driving license or the household's cars are currently all in use.
It is assumed that every agent owns a
bicycle, so if the agent is at home the mode cycling is always available.
If the agent is not at home, the available choice set depends essentially on the mode used before.
If the previous mode has been car as driver or cycling, only the mode used before is available for the next trip.
This approach is based on the idea that a car or a bicycle that has been used at the start of a tour
eventually has to return home. This approach is a small oversimplification, since it does not allow for tours
that start with the mode cycling or car as driver, are followed by a sub-tour using another mode
that ends at the location where the vehicle has been left, and are finished by a trip using the initial mode.
However, tours of this type are rarely found in reported behavior~\citep{kuhnimhof2009measuring}.
If the agent is not at home and the previous mode is one of the \emph{flexible modes} walking,
public transport, or car as passenger,
the choice set for the next trip consists of these three modes.
The modes car as driver and cycling are not available,
since these modes require a vehicle that is typically not available if the trip before is made by another mode.
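The core rules of the mode availability model can be summarized in the following simplified sketch (purely illustrative; the names are not \mobitopp's actual API, and extensions such as carsharing are ignored).
\begin{verbatim}
FLEXIBLE_MODES = {"walking", "public transport", "car as passenger"}
VEHICLE_MODES = {"car as driver", "cycling"}

def available_modes(at_home, previous_mode, has_licence, household_car_free):
    """Choice set of an agent according to the rules described above."""
    if at_home:
        modes = set(FLEXIBLE_MODES) | {"cycling"}  # every agent owns a bicycle
        if has_licence and household_car_free:
            modes.add("car as driver")
        return modes
    if previous_mode in VEHICLE_MODES:
        return {previous_mode}  # the vehicle has to be brought back home
    return set(FLEXIBLE_MODES)  # flexible modes remain interchangeable
\end{verbatim}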
\begin{figure}
\centering
\subfigure[No rescheduling.]{
\includegraphics[width=\textwidth]{personsenroute_ignore-crop}
\label{fig:rescheduling:no}
}
\subfigure[Remaining activities truncated when end of day reached.]{
\includegraphics[width=\textwidth]{personsenroute_truncate-crop}
\label{fig:rescheduling:truncate}
}
\subfigure[Activities between current activity and next activity at home skipped.]{
\includegraphics[width=\textwidth]{personsenroute_skip-crop}
\label{fig:rescheduling:skip}
}
\caption{Times series of persons en route for different rescheduling strategies.}
\label{fig:rescheduling}
\end{figure}
\subsubsection{Rescheduling}
For each agent, the simulation flow is highly influenced by his \activityprogram.
This \activityprogram\ consists of
a sequence of activities for each day, with a given
planned start time and a duration for each activity,
both together implying a planned end time.
However, the travel times during the simulation may
not match exactly
the gaps between
the planned end of an activity and the start of the next activity.
In this case, the start time of the following activity deviates from the planned start time,
since an agent starts his activity immediately after arriving at the location for the activity and performs the
activity for the planned duration.
If an agent arrives early at the destination, this means that the next activity starts earlier and
ends earlier than planned.
If an agent arrives late at the destination, this means that next activity starts later and
ends later than planned.
During the course of the simulation, it is possible that the start times of activities differ more and more
from the planned start times.
This may finally lead to the situation that
some of the remaining activities of the day cannot be executed
during the day for which they have been originally scheduled.
By this point at the latest, some rescheduling is needed.
Several rescheduling approaches can be employed.
The simplest approach is just performing the activities in the planned order, ignoring the deviating schedule.
This approach has the obvious drawback that
the deviations may add up in the course of the week,
resulting in unrealistic start times for activities on later days, for example activities
scheduled for the evening, but performed in the early morning.
The second simplest approach is performing the activities
until the day ends and skipping the rest. This approach has the advantage that the result of a simulation run
with a simulation period of one day is the same as the result for the first
day of a multi-day simulation run.
It has the drawback that the last activity is skipped, which is typically an \emph{at home} activity.
For a simulation period of one day,
this is typically not a problem,
since the following \emph{at home} activity
could take place just at the beginning of the next day. So the sequence of activities is still reasonable.
In the case of a multi-day simulation however, this would mean that the agent does not return home
at the end
of the day, making the simulation unrealistic.
The third approach is keeping the last activity of the day and skipping
activities in between the current activity and the last activity. If necessary the duration of the last activity
can be adjusted, so that its ending time matches its planned ending time.
A more sophisticated rescheduling approach would at first not skip any activity,
but try to rearrange the \activityprogram\
by moving activities to other days or reducing the
duration of some activities~\citep{arentze2006aurora2feathers}.
\mobitopp implements the first three approaches mentioned above.
The third approach is enabled by default:
activities are performed until the day ends, and all remaining
activities of the day except the last are removed from the schedule.
For the last activity of a day, the duration is adjusted
such that the first activity of the next day can start at the planned time.
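A simplified sketch of this default strategy is shown below (travel times between activities are ignored and all names are illustrative).
\begin{verbatim}
END_OF_DAY = 24 * 60  # minutes

def reschedule_day(now, remaining_activities, next_day_start):
    """Default rescheduling strategy (sketch).

    now:                  current time in minutes of the day
    remaining_activities: list of (activity, duration); the last entry is the
                          at-home activity closing the day
    next_day_start:       planned start (minutes of the next day) of the first
                          activity of the next day
    """
    kept = []
    t = now
    for activity, duration in remaining_activities[:-1]:
        if t + duration > END_OF_DAY:  # day is over: drop the remaining activities
            break
        kept.append((activity, duration))
        t += duration
    home_activity, _planned_duration = remaining_activities[-1]
    # adjust the closing at-home activity so that the first activity of the
    # next day can start at its planned time
    kept.append((home_activity, max(0, END_OF_DAY - t + next_day_start)))
    return kept
\end{verbatim}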
The resulting time series of the persons en route for the different rescheduling approaches
are shown in Figure~\ref{fig:rescheduling}.
For Monday, all time series show a typical workday profile: a distinct morning peak
is found around 7 a.\,m., when workers and students leave for work or school.
Around 1~p.\,m. there is a much smaller peak, when students return home.
The evening peak around 6~p.\,m., when people return home, is smaller but wider than the morning peak.
In Fig.~\ref{fig:rescheduling:no},
without explicit rescheduling, this pattern gets more and more distorted with each day.
In Fig.~\ref{fig:rescheduling}~\subref{fig:rescheduling:truncate} and~\subref{fig:rescheduling:skip},
with rescheduling strategies, the typical workday profile is preserved
from Monday to Friday. Saturday and Sunday show another profile, but as these days are not typical workdays
a different profile is reasonable.
The shape of the Friday profile differs slightly from the other workday profiles, but as the Friday is at the
transition to the weekend this small divergence seems reasonable as well.
The heights of the workday profiles are slightly lower from Tuesday to Friday than on Monday.
In Fig.~\ref{fig:rescheduling:skip},
representing the rescheduling strategy where only infeasible activities before the
home activity are skipped, the decrease in height is less pronounced.
The Friday profile is still lower than the Monday profile, though the total number of trips is larger on Friday.
However, the Friday trips are distributed more evenly in time. In consequence,
the evening peak on Friday is smaller, but broader than on Monday.
This rescheduling strategy is enabled by default, as it preserves the workday profile best.
\subsubsection{Route Choice / Traffic Assignment}
As a route choice model, there currently exists only a dummy implementation, which returns the result of a
simple Dijkstra search and is not coupled with a traffic flow model.
So there is currently no direct feedback loop within \mobitopp,
in which destination choice and mode choice influence travel times and the adjusted travel
times in turn influence destination choice and mode choice.
The feedback loop has to be realized using an external tool instead.
PTV~Visum was used for this purpose since a Visum model was already available.
This has the obvious disadvantage of leaving the agent-based world.
We have not yet found the best option to overcome this limitation.
One option is to use MATSim; the other option is implementing our own traffic flow simulation.
First experiments with MATSim have shown that an integration is not straightforward since \mobitopp
works on the level of zones while MATSim works on the level of links.
In \mobitopp agents performing activities are located at the zone center,
which is connected to the road network via artificial links, so-called connectors.
These connectors should only be used when leaving or entering a zone, but not for travel between other zones.
When the network is converted to MATSim, the distinction between regular links and connectors is lost.
This makes it difficult to ensure that the connectors are only used for entering or leaving a zone.
If the capacity of the connectors is set too high and travel time too low, agents start traveling via the
connectors and zone centers instead of the road network.
When the capacity of the connectors is set too low, there is a lot of congestion on the connectors.
We have recently experimented with randomly distributing \mobitopp's agents within the zones,
but have not yet tried out how this affects the interaction with MATSim.
However, according to \cite{nagel2017matsim_zone}, this approach should solve the issue.
\section{Extensions to the base model}
\label{sec:extensions}
Some extensions have been made to \mobitopp during the last years, namely the implementation of
carsharing,
ridesharing,
and electric vehicles.
\subsection{Carsharing}
\label{sec:extensions:carsharing}
The \emph{carsharing extension}~\citep{heilig2017implementation}
extends
\mobitopp by two additional modes:
station-based carsharing and free-floating carsharing.
The \stageinit has been extended by a carsharing customer model.
It is used to model the customers of each of the modeled carsharing companies.
For the mode choice model, this extension means only minor changes:
an enriched choice set as well as
different costs and increased travel times
compared to the mode car as driver,
as a result of increased
access and egress times.
The main work for the carsharing extension regarding travel behavior is performed
by the mode availability model.
Station-based carsharing cars have to be picked-up at a carsharing station, typically located in the
vicinity of the home place, and returned to the same station.
Thus the mode \emph{station-based carsharing} is handled like the mode car as driver:
It is only available when the agent is at home or when it has already been used for the previous trip.
Changing the mode is not allowed while not at home.
The mode \emph{free-floating carsharing} is handled differently.
Free-floating cars can be picked-up and dropped-off anywhere within a defined free-floating operating area.
Thus free-floating carsharing is handled like the flexible modes with some restrictions.
Free-floating carsharing is available if the zone of the current location belongs to a zone of
the free-floating operating area and if a car is available.
If an agent uses free-floating carsharing outside the free-floating operating area, switching the mode
is not permitted until he returns to the free-floating operating area.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{mobitopp_fsm_pass-crop}
\caption{State diagram describing the behavior of an agent during \mobitopp's \stagesimu
using the ridesharing extension.}
\label{fig:simulationstage_passenger}
\end{figure}
\subsection{Ridesharing}
\label{sec:extensions:ridesharing}
The implementation of the mode \emph{car as passenger} in \mobitopp's base model is quite simplistic.
It is assumed that this mode is always available, if not restricted to another mode.
In reality, however, travelling as car passenger is only possible if someone else offers a ride.
Thus the ridesharing extension of \mobitopp~\citep{mallig2015passenger} aims at modeling this mode
more realistically.
In this implementation, agents using a car as driver offer ridesharing opportunities.
Agents that have chosen the mode \emph{car as passenger} actively seek ridesharing opportunities.
However, when the departure time is fixed, it is practically impossible to find a ridesharing opportunity to the
desired destination, so more flexibility is needed.
Therefore, in the ridesharing extension, agents make their mode choices not at the end of the activity,
but up to 30~minutes earlier, and end their activities earlier or later to match the departure time of
the ridesharing offer.
The extended state diagram for the ridesharing extension is shown in Figure~\ref{fig:simulationstage_passenger}.
The state \stateActivity is split into two. Between these two states, the destination choice and the
mode choice are made.
When the agent chooses the mode car as passenger, he checks the availability of ride offers.
If no matching ride offer is found, the agent changes to the state \stateWait, where he
periodically checks the availability of ride offers until he finds one or his maximum waiting time has elapsed.
If the agent finds a matching ride offer or chooses one of the other modes, the next state
is the second part of the state \stateActivity.
The
remainder
of the diagram is in principle the same as the original state diagram.
\subsection{Electric vehicles}
Within the scope of the \emph{electric vehicle extension} \citep{weiss2017assessing,mallig2016modelling}
different car classes have been created, differentiated
by propulsion, namely combustion engine cars, battery electric vehicles, and plug-in hybrid vehicles.
The cars have been extended to provide current fuel levels, battery levels, and odometer values.
The \stageinit has been extended by a sophisticated car ownership model~\citep{weiss2017assessing}.
The car ownership model consists of a car segment model and a car engine model.
The car segment model is a multinomial logit model estimated on the data of the
German Mobility Panel~\citep{wirtz2013panel}.
The car engine model combines the ideas of propensity to own an electric car and suitability of an electric car.
The propensity model estimates the propensity to own an electric car based on the similarity of the
agent to early adopters of electric cars in terms of sociodemographic attributes.
The suitability model is a binary logit model
based on the sociodemographic attributes of the agents
and estimated on the data generated by the CUMILE approach~\citep{chlond2014hybrid}.
Both models calculate a probability; these probabilities are multiplied to obtain the probability of owning an electric car.
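In our notation, the resulting probability is simply
\begin{equation*}
P_{\mathit{EV}} = P_{\mathit{propensity}} \cdot P_{\mathit{suitability}}.
\end{equation*}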
In the \stagesimu, the destination choice model and the mode choice model have been changed
to take the limited range into account.
In the destination choice model, possible destinations out of range of the electric vehicle are excluded
from the choice set if the electric car is the only mode available. A safety margin, to account for detours
and range anxiety, is considered.
In the mode choice model, the limited range is handled by the mode availability model.
If the chosen destination is out of range of the electric vehicle, again taking the safety margin
into account, the mode car as driver is not available.
\section{Application of the model}
\label{sec:application}
\mobitopp has been applied to the Stuttgart Region, a metropolitan area in Germany,
with a population of approximately 2.7~million.
The Stuttgart Region consists of the city of Stuttgart
and the five surrounding administrative districts: B\"{o}blingen, Esslingen, Ludwigsburg, G\"{o}p\-pingen,
and Rems-Murr.
The study area is divided into 1\,012 traffic analysis zones. An additional 152 zones are used
to represent the surrounding land.
The division into zones, the travel time, cost and commuting matrices,
and the structural data have been borrowed
from an existing macroscopic model~\citep{ptv2011verkehrsmodellierung}.
For population synthesis, activity schedule generation,
and parameter estimation of the different modules,
a recent household travel survey~\citep{stuttgart2011haushaltsbefragung} has been used.
This survey
was conducted between September~2009 and April~2010 in the Stuttgart Region.
Each person aged 6 or above in the surveyed households was asked to fill out an activity-travel diary over
a period of seven days.
The starting days of the survey period were equally distributed over all seven days of the week.
In addition, socioeconomic data of the household and its members was gathered using a questionnaire.
The sample has been generated by random draws from the municipal population registers.
The dataset contains trip data for 275\,913 trips of 13\,731 persons in 5\,567 households.
\subsection{Long-term model}
For population synthesis and
the generation of the \activityprogram{}s, the survey is used as input.
The default implementation of the activity schedule generator has been used.
The commuting matrices of the VISUM model mentioned above have been used as input for the assignment
of workplaces
and school places. The default car ownership model has been used.
\subsubsection{Transit pass model}
Statistics on transit pass ownership were only available as totals for the whole planning area.
The analysis of the household travel survey has shown substantially different shares of transit pass ownership
for the different administrative districts.
These differences could not be explained by the socioeconomic attributes alone.
Therefore, the variable \emph{administrative district} has been added as an explanatory variable.
The model has been estimated using the function \emph{glm} contained in the \emph{stats} package
of the statistical software \gnuR~\citep{RCoreTeam2013R}
based on the data of the socioeconomic questionnaire.
The resulting parameter estimates are given
in Table~\ref{tab:transitpass}.
As only the total number of transit passes sold was known, the sole parameter adjusted during calibration
was the intercept, set to $0.48$.
\begin{table}[htbp]
\centering\small
\begin{tabular}{@{}lQQ@{}}
\escl{coefficient} & \escl{estimate} & \escl{std. error} \\
\hline
intercept & 0.93383 & 0.11802 \\
female & 0.21752 & 0.07000 \\
number of cars divided by household size & -1.17629 & 0.12277 \\
car availability&&\\
\tabindent personal car & -1.32630 & 0.11780 \\
\tabindent after consultation & -0.13437 & 0.11191 \\
employment&&\\
\tabindent part time & -0.63909 & 0.09894 \\
\tabindent unemployed & -0.56445 & 0.28009 \\
\tabindent vocational education & 1.44164 & 0.20866 \\
\tabindent homemaker & -1.89145 & 0.18279 \\
\tabindent retired & -0.91735 & 0.08390 \\
\tabindent unknown & -0.65950 & 0.27122 \\
\tabindent secondary education & 1.45025 & 0.13669 \\
\tabindent tertiary education & 1.87161 & 0.15488 \\
administrative district&&\\
\tabindent BB & -1.15480 & 0.11197 \\
\tabindent ES & -1.30675 & 0.10157 \\
\tabindent GP & -1.91134 & 0.16567 \\
\tabindent LB & -1.04585 & 0.09297 \\
\tabindent WN & -0.94395 & 0.09760 \\
\hline
\end{tabular}
\caption{Parameter estimation results for the transit pass model.}
\label{tab:transitpass}
\end{table}
\subsection{Destination choice model}
\label{sec:implementation_destination_choice}
The destination choice model used for the results presented here
is a multinomial logit model based on the following variables:
purpose (type of activity), attractivity of the potential destination zone,
travel time and travel cost.
Travel time and travel cost are not only based on the current zone and the potential destination zone,
but also on the next zone where an activity with fixed location takes
place.
The following utility function is used:
\begin{equation*}
\begin{split}
V_{ij} &= \beta_{time\times purpose}\cdot (t_{ij}+t_{jn}) \cdot x_{purpose} \\
&+ \beta_{time\times employment}\cdot (t_{ij}+t_{jn}) \cdot x_{employment} \\
&+ \beta_{cost\times purpose}\cdot (c_{ij}+c_{jn}) \cdot x_{purpose} \\
&+ \beta_{opportunities\times purpose}\cdot \log(1+A_{j,purpose}) \cdot x_{purpose}
\end{split}
\end{equation*}
where
$V_{ij}$ is the utility for a trip from the current zone $i$ to the zone $j$.
$A_{j,purpose}$ is the attractivity of zone $j$ for an activity of type $purpose$.
$t_{ij}$ and $c_{ij}$ are travel time and travel cost from the current zone $i$ to zone $j$.
$t_{jn}$ and $c_{jn}$ are travel time and travel cost from zone $j$ to the next zone $n$ of an activity with
fixed location.
$x_{purpose}$ and $x_{employment}$ are dummy variables, denoting the purpose of the trip and the employment
status of the person respectively.
The $\beta$s are the corresponding model parameters.
$\beta_{time\times purpose}$ and $\beta_{time\times employment}$ are the parameters for travel time, which
vary with purpose and employment, respectively.
$\beta_{cost\times purpose}$ is the model parameter for cost, which varies with the purpose of the trip.
$\beta_{opportunities\times purpose}$ is the model parameter for the available opportunities,
which also varies with purpose.
The model parameters
of the destination choice model
have been estimated
based on the trip data of
the whole Stuttgart household travel survey
using the package \emph{mlogit}~\citep{croissant2012estimation}
of the statistical software \gnuR~\citep{RCoreTeam2013R}.
For parameter estimation, a random sample of 100 alternatives has been used,
as described by~\citet[Chapter 9.3]{benakiva1987discrete_choice_analysis}.
The resulting parameter estimates are shown in Table~\ref{tab:destinationchoicemodel}.
\begin{table}[htbp]
\begin{minipage}[t]{0.5\textwidth}
\centering\small
\begin{tabular}{@{}lQQ@{}}
\escl{coefficient} & \escl{estimate} & \escl{std. error} \\
\hline
time $\times$ purpose&&\\
\tabindent business & -0.01054540 & 0.00111986\\
\tabindent service & -0.09110973 & 0.00209485\\
\tabindent private business & -0.07776060 & 0.00152797\\
\tabindent private visit & -0.05457433 & 0.00125026\\
\tabindent shopping daily & -0.11127299 & 0.00199560\\
\tabindent shopping other & -0.05947906 & 0.00171003\\
\tabindent leisure indoor & -0.03639682 & 0.00141101\\
\tabindent leisure outdoor & -0.06822290 & 0.00179649\\
\tabindent leisure other & -0.05259948 & 0.00145475\\
\tabindent strolling & -0.17635836 & 0.00357956\\
time $\times$ employment status&&\\
\tabindent part-time & -0.01155729 & 0.00074396\\
\tabindent unemployed & -0.00224253 & 0.00202341\\
\tabindent homemaker & -0.01375566 & 0.00103072\\
\tabindent retired & -0.00343436 & 0.00062823\\
\tabindent student & -0.01159740 & 0.00074088\\
\tabindent vocational education & 0.00010795 & 0.00187697\\
\tabindent other & -0.00699810 & 0.00196023\\
cost $\times$ purpose&&\\
\tabindent business & -0.41934536 & 0.01549222\\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering\small
\begin{tabular}{@{}lQQ@{}}
\escl{coefficient} & \escl{estimate} & \escl{std. error} \\
\hline
cost $\times$ purpose (cont.)&&\\
\tabindent service & -0.33389025 & 0.02598859\\
\tabindent private business & -0.26954419 & 0.01817515\\
\tabindent private visit & -0.12415494 & 0.01360810\\
\tabindent shopping daily & -0.47753511 & 0.02667347\\
\tabindent shopping other & -0.33848300 & 0.02121217\\
\tabindent leisure indoor & -0.49940005 & 0.01844977\\
\tabindent leisure outdoor & -0.22842223 & 0.02129771\\
\tabindent leisure other & -0.31403821 & 0.01735821\\
\tabindent strolling & -0.70369097 & 0.04702899\\
opportunities $\times$ purpose&&\\
\tabindent business & 0.26737545 & 0.01313610\\
\tabindent service & 0.33691243 & 0.00701132\\
\tabindent private business & 0.46352602 & 0.00684424\\
\tabindent private visit & 0.37621891 & 0.00622737\\
\tabindent shopping daily & 0.27836507 & 0.00464230\\
\tabindent shopping other & 0.35537183 & 0.00467768\\
\tabindent leisure indoor & 0.38143512 & 0.00663500\\
\tabindent leisure outdoor & 0.29088396 & 0.00551271\\
\tabindent leisure other & 0.47347275 & 0.00847906\\
\tabindent strolling & 0.09158896 & 0.01121878\\
\hline
\end{tabular}
\end{minipage}
\caption{Parameter estimates for the destination choice model.}
\label{tab:destinationchoicemodel}
\end{table}
For calibration an additional scaling parameter $\gamma = \gamma_{purpose} \cdot \gamma_{employment}$
has been introduced, so the resulting selection probability for each zone is given by
\[
P_{ij} = \exp(\gamma_{purpose} \cdot \gamma_{employment} \cdot V_{ij})
/ \sum_{k} \exp(\gamma_{purpose} \cdot \gamma_{employment} \cdot V_{ik}).
\]
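In code, the scaled probabilities can be computed as in the following sketch, which directly mirrors the formula above (illustrative only).
\begin{verbatim}
import math

def destination_probabilities(utilities, gamma_purpose, gamma_employment):
    """Scaled logit probabilities for the candidate destination zones.

    utilities: dict mapping destination zone j -> systematic utility V_ij
    """
    gamma = gamma_purpose * gamma_employment
    exp_v = {zone: math.exp(gamma * v) for zone, v in utilities.items()}
    total = sum(exp_v.values())
    return {zone: value / total for zone, value in exp_v.items()}
\end{verbatim}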
\begin{table}[htbp]
\tiny
\begin{minipage}[t]{0.5\textwidth}
\begin{tabular}[t]{@{}lQQq@{}}
\escl{coefficient} & \esc{estimate} & \esc{std. error} & \esc{calibration}\\
\hline
\multicolumn{2}{@{}l}{alternative specific constants} &&\\
\tabindent cycling & -1.1356e+00 & 7.5440e-02 & \\
\tabindent car driver & -1.5672e-01 & 6.5945e-02 & +0.2 \\
\tabindent car passenger & -4.2941e+00 & 8.0241e-02 & -0.4 \\
\tabindent public transport & -2.8372e+00 & 8.0784e-02 & +0.5 \\
travel cost per kilometer & -3.5730e-01 & 6.8557e-02 & \\
travel time per kilometer & -2.8840e-02 & 2.6580e-03 & \\
distance in kilometer &&&\\
\tabindent cycling & 5.6343e-01 & 1.5842e-02 & \\
\tabindent car driver & 7.8261e-01 & 1.4669e-02 & \\
\tabindent car passenger & 7.9470e-01 & 1.4680e-02 & \\
\tabindent public transport & 7.9439e-01 & 1.4771e-02 & \\
intrazonal trip &&&\\
\tabindent cycling & -9.5082e-02 & 4.7767e-02 & +0.3 \\
\tabindent car driver & -9.2063e-01 & 3.7774e-02 & +0.6 \\
\tabindent car passenger & -1.1832e+00 & 4.4493e-02 & -0.2 \\
\tabindent public transport & -2.0302e+00 & 1.1787e-01 & -0.9 \\
\multicolumn{2}{@{}l}{transit pass} &&\\
\tabindent cycling & -4.2752e-01 & 4.1864e-02 & \\
\tabindent car driver & -8.2268e-01 & 3.3557e-02 & \\
\tabindent car passenger & -1.9584e-01 & 3.6310e-02 & \\
\tabindent public transport & 2.3695e+00 & 3.8847e-02 & \\
no driving licence &&&\\
\tabindent cycling & -5.9253e-01 & 8.1967e-02 & \\
\tabindent car driver & -4.1309e+00 & 1.2221e-01 & \\
\tabindent car passenger & 3.1978e-03 & 6.1948e-02 & \\
\tabindent public transport & -5.5511e-02 & 6.8374e-02 & \\
female &&&\\
\tabindent cycling & -5.6703e-01 & 3.8076e-02 & \\
\tabindent car driver & -6.3911e-01 & 2.9206e-02 & \\
\tabindent car passenger & 6.7420e-01 & 3.2735e-02 & \\
\tabindent public transport & 8.4118e-02 & 3.6384e-02 & \\
day of week &&&\\
\tabindent saturday &&&\\
\tabindent\tabindent cycling & -1.9626e-01 & 5.6173e-02 & +0.1\\
\tabindent\tabindent car driver & -4.2888e-03 & 3.6395e-02 & +0.6\\
\tabindent\tabindent car passenger & 5.2985e-01 & 4.1188e-02 & +0.9\\
\tabindent\tabindent public transport & -1.3885e-01 & 5.3039e-02 & +0.6\\
\tabindent sunday &&&\\
\tabindent\tabindent walking & \esc{---} & \esc{---} & -0.3\\
\tabindent\tabindent cycling & -4.5429e-01 & 7.0118e-02 & \\
\tabindent\tabindent car driver & -5.5067e-01 & 4.6085e-02 & +1.3\\
\tabindent\tabindent car passenger & 1.6730e-01 & 4.9243e-02 & +1.6\\
\tabindent\tabindent public transport & -1.0634e+00 & 6.7111e-02 & +1.4\\[1.5ex]
employment status &&&\\
\tabindent vocational education &&&\\
\tabindent\tabindent cycling & 1.1499e-01 & 2.0892e-01 & \\
\tabindent\tabindent car driver & 5.0964e-01 & 1.5566e-01 & +0.5\\
\tabindent\tabindent car passenger & 3.3904e-01 & 1.6909e-01 & \\
\tabindent\tabindent public transport & 4.7232e-02 & 1.7418e-01 & +0.1\\
\tabindent infant &&&\\
\tabindent\tabindent cycling & -3.6321e-01 & 7.5163e-01 & \\
\tabindent\tabindent car driver & -3.7374e-02 & 1.5694e+04 & \\
\tabindent\tabindent car passenger & 9.1319e-01 & 2.9921e-01 & \\
\tabindent\tabindent public transport & 6.9077e-01 & 6.8562e-01 & \\
\tabindent unemployed &&&\\
\tabindent\tabindent cycling & -3.1530e-01 & 1.7854e-01 & +0.1\\
\tabindent\tabindent car driver & -5.8424e-01 & 1.1414e-01 & +0.3\\
\tabindent\tabindent car passenger & -2.1840e-01 & 1.3849e-01 & +0.1\\
\tabindent\tabindent public transport & 5.8309e-02 & 1.6185e-01 & +0.1\\
\tabindent other &&&\\
\tabindent\tabindent cycling & -6.8318e-01 & 1.8341e-01 & \\
\tabindent\tabindent car driver & -4.7136e-01 & 9.9071e-02 & +0.5\\
\tabindent\tabindent car passenger & -1.8951e-01 & 1.2550e-01 & \\
\tabindent\tabindent public transport & -3.4615e-02 & 1.4701e-01 & +0.1\\
\tabindent homemaker &&&\\
\tabindent\tabindent cycling & -1.9816e-01 & 8.4113e-02 & +0.1\\
\tabindent\tabindent car driver & -2.1550e-01 & 5.1162e-02 & +0.25\\
\tabindent\tabindent car passenger & -2.8003e-02 & 6.5064e-02 & +0.1\\
\tabindent\tabindent public transport & -1.5727e-01 & 9.4391e-02 & -0.2\\
\tabindent part-time &&&\\
\tabindent\tabindent cycling & 3.2448e-01 & 5.7914e-02 & \\
\tabindent\tabindent car driver & 1.3589e-01 & 4.1725e-02 & +0.2\\
\tabindent\tabindent car passenger & 1.8310e-01 & 5.3525e-02 & \\
\tabindent\tabindent public transport & 1.0515e-02 & 5.8937e-02 & \\
\tabindent retired &&&\\
\tabindent\tabindent cycling & -6.4015e-01 & 1.0530e-01 & +0.4\\
\tabindent\tabindent car driver & -6.2650e-01 & 6.6882e-02 & +0.45\\
\tabindent\tabindent car passenger & -4.1904e-01 & 8.1543e-02 & +0.2\\
\tabindent\tabindent public transport & -7.7916e-02 & 9.6886e-02 & \\
\tabindent student &&&\\
\tabindent\tabindent cycling & 3.9692e-01 & 1.5259e-01 & +0.3\\
\tabindent\tabindent car driver & -7.5693e-02 & 1.1227e-01 & +0.5\\
\tabindent\tabindent car passenger & 4.6439e-02 & 1.2607e-01 & -0.2\\
\tabindent\tabindent public transport & -2.0048e-01 & 1.3115e-01 & -0.4\\[1.5ex]
trip purpose &&&\\
\tabindent business &&&\\
\tabindent\tabindent cycling & -3.1571e-01 & 1.6633e-01 & -0.1\\
\tabindent\tabindent car driver & 5.6801e-01 & 1.2078e-01 & +0.3\\
\tabindent\tabindent car passenger & 7.4322e-01 & 1.6613e-01 & +0.8\\
\tabindent\tabindent public transport & -3.1422e-02 & 1.5198e-01 & +0.1\\
\tabindent education &&&\\
\tabindent\tabindent cycling & -7.2092e-01 & 7.6064e-02 & +0.05\\
\tabindent\tabindent car driver & -1.0215e+00 & 8.8507e-02 & \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{tabular}[t]{@{}lQQq@{}}
\escl{coefficient} & \esc{estimate} & \esc{std. error} & \esc{calibration}\\
\hline
trip purpose (cont.)\hspace{20ex}&&&\\
\tabindent education (cont.)&&&\\
\tabindent\tabindent car passenger & -1.3862e-01 & 8.0878e-02 & +0.5\\
\tabindent\tabindent public transport & -1.1122e-01 & 7.9957e-02 & +0.5\\
\tabindent home &&&\\
\tabindent\tabindent cycling & -8.2356e-01 & 1.6348e-01 & \\
\tabindent\tabindent car driver & -6.2113e-02 & 1.0463e-01 & \\
\tabindent\tabindent car passenger & 1.5310e+00 & 1.1728e-01 & \\
\tabindent\tabindent public transport & -1.5520e-01 & 1.2665e-01 & \\
\tabindent leisure indoor &&&\\
\tabindent\tabindent cycling & -1.6181e+00 & 1.1181e-01 & -0.3\\
\tabindent\tabindent car driver & -8.4596e-01 & 6.9106e-02 & -1.2\\
\tabindent\tabindent car passenger & 1.1639e+00 & 8.2267e-02 & \\
\tabindent\tabindent public transport & -4.5739e-01 & 8.3763e-02 & -0.4\\
\tabindent leisure outdoor &&&\\
\tabindent\tabindent cycling & -3.1209e-01 & 7.6533e-02 & +0.1\\
\tabindent\tabindent car driver & 6.0902e-01 & 6.0189e-02 & -0.2\\
\tabindent\tabindent car passenger & 1.8137e+00 & 7.5388e-02 & +0.5\\
\tabindent\tabindent public transport & -8.2445e-01 & 8.3885e-02 & -0.2\\
\tabindent leisure other &&&\\
\tabindent\tabindent cycling & -8.4608e-01 & 7.7222e-02 & -0.1\\
\tabindent\tabindent car driver & -1.7938e-01 & 5.8599e-02 & -0.6\\
\tabindent\tabindent car passenger & 1.2538e+00 & 7.4144e-02 & +0.1\\
\tabindent\tabindent public transport & -4.2970e-01 & 7.5348e-02 & -0.2\\
\tabindent strolling &&&\\
\tabindent\tabindent cycling & -2.3744e+00 & 1.6047e-01 & +0.5\\
\tabindent\tabindent car driver & -2.9623e+00 & 1.2915e-01 & -3.0\\
\tabindent\tabindent car passenger & -1.2092e+00 & 1.5164e-01 & \\
\tabindent\tabindent public transport & -3.6069e+00 & 2.1196e-01 & \\
\tabindent other &&&\\
\tabindent\tabindent cycling & -3.5610e-01 & 5.6294e-01 & -2.0\\
\tabindent\tabindent car driver & 6.6026e-01 & 3.5894e-01 & -9.0\\
\tabindent\tabindent car passenger & 2.3180e+00 & 3.1746e-01 & +2.0\\
\tabindent\tabindent public transport & 6.7591e-01 & 3.7530e-01 & -5.0\\
\tabindent private business &&&\\
\tabindent\tabindent cycling & -7.0594e-01 & 7.1261e-02 & +0.3\\
\tabindent\tabindent car driver & 2.5110e-01 & 5.2300e-02 & \\
\tabindent\tabindent car passenger & 1.3480e+00 & 7.1297e-02 & +0.5\\
\tabindent\tabindent public transport & -4.8652e-01 & 7.1584e-02 & +0.2\\
\tabindent private visit &&&\\
\tabindent\tabindent cycling & -8.7128e-01 & 8.2963e-02 & +0.2\\
\tabindent\tabindent car driver & 4.6199e-02 & 6.0521e-02 & +0.3\\
\tabindent\tabindent car passenger & 1.2351e+00 & 7.6256e-02 & +0.5\\
\tabindent\tabindent public transport & -1.1572e+00 & 8.4478e-02 & +0.4\\
\tabindent service &&&\\
\tabindent\tabindent cycling & -9.4805e-01 & 8.1146e-02 & \\
\tabindent\tabindent car driver & 1.0449e+00 & 5.3128e-02 & +0.7\\
\tabindent\tabindent car passenger & 9.8400e-01 & 8.2154e-02 & +0.3\\
\tabindent\tabindent public transport & -1.3799e+00 & 1.0213e-01 & \\
\tabindent shopping daily &&&\\
\tabindent\tabindent cycling & -5.6289e-01 & 6.6363e-02 & +0.3\\
\tabindent\tabindent car driver & 2.5313e-01 & 5.0078e-02 & +0.2\\
\tabindent\tabindent car passenger & 1.2224e+00 & 7.1302e-02 & +0.6\\
\tabindent\tabindent public transport & -1.1002e+00 & 7.9112e-02 & +0.4\\
\tabindent shopping other &&&\\
\tabindent\tabindent cycling & -5.9659e-01 & 9.7271e-02 & \\
\tabindent\tabindent car driver & 3.0277e-01 & 6.8813e-02 & +0.2\\
\tabindent\tabindent car passenger & 1.5966e+00 & 8.5150e-02 & +0.6\\
\tabindent\tabindent public transport & -2.9474e-01 & 8.7803e-02 & +0.1\\
age &&&\\
\tabindent from 0 to 9 &&&\\
\tabindent\tabindent cycling & -1.2200e+00 & 1.9258e-01 & +0.1\\
\tabindent\tabindent car driver & -1.9409e+01 & 2.5083e+03 & \\
\tabindent\tabindent car passenger & 1.4448e+00 & 1.5028e-01 & +0.4\\
\tabindent\tabindent public transport & -3.6995e-01 & 1.8504e-01 & -1.2\\
\tabindent from 10 to 17 &&&\\
\tabindent\tabindent cycling & 9.0540e-01 & 1.7174e-01 & -0.2\\
\tabindent\tabindent car driver & -2.8867e-02 & 1.7507e-01 & \\
\tabindent\tabindent car passenger & 1.6410e+00 & 1.4140e-01 & +0.1\\
\tabindent\tabindent public transport & 6.5017e-01 & 1.4839e-01 & \\
\tabindent from 18 to 25 &&&\\
\tabindent\tabindent cycling & 6.6218e-02 & 1.4544e-01 & \\
\tabindent\tabindent car driver & 6.1154e-01 & 1.0340e-01 & \\
\tabindent\tabindent car passenger & 1.1004e+00 & 1.1831e-01 & \\
\tabindent\tabindent public transport & 5.1406e-01 & 1.2382e-01 & \\
\tabindent from 26 to 35 &&&\\
\tabindent\tabindent cycling & -4.8777e-01 & 7.4708e-02 & \\
\tabindent\tabindent car driver & -4.6501e-01 & 4.8725e-02 & \\
\tabindent\tabindent car passenger & -1.9761e-01 & 6.5731e-02 & \\
\tabindent\tabindent public transport & -8.4504e-02 & 6.7550e-02 & \\
\tabindent from 51 to 60 &&&\\
\tabindent\tabindent cycling & -2.7928e-01 & 5.6226e-02 & \\
\tabindent\tabindent car driver & 1.3524e-01 & 3.8045e-02 & \\
\tabindent\tabindent car passenger & 3.6019e-01 & 5.0391e-02 & \\
\tabindent\tabindent public transport & 1.8826e-01 & 5.5333e-02 & \\
\tabindent from 61 to 70 &&&\\
\tabindent\tabindent cycling & -1.2830e-01 & 9.5410e-02 & \\
\tabindent\tabindent car driver & 6.3149e-02 & 6.3070e-02 & \\
\tabindent\tabindent car passenger & 6.8302e-01 & 7.6379e-02 & \\
\tabindent\tabindent public transport & 3.3423e-01 & 9.0021e-02 & \\
\tabindent 71 and above &&&\\
\tabindent\tabindent cycling & -2.2630e-01 & 1.1599e-01 & \\
\tabindent\tabindent car driver & 2.6608e-02 & 7.2146e-02 & \\
\tabindent\tabindent car passenger & 6.1939e-01 & 8.7350e-02 & \\
\tabindent\tabindent public transport & 4.8159e-01 & 1.0401e-01 & \\
\hline
\end{tabular}
\end{minipage}
\caption{Parameter estimates for the mode choice model.
The column \emph{calibration} contains the adjustment made as result of the calibration process.
}
\label{tab:modechoicemodel}
\end{table}
\subsection{Mode choice model}
\label{sec:implementation_mode_choice}
The mode choice model used is a multinomial logit model with the following utility function:
\begin{equation*}
\begin{split}
V_{m} = \beta_{m,0} &+ \beta_{m,dist} \cdot x_{dist}
+ \beta_{time} \cdot x_{m,time\_km}
+ \beta_{cost} \cdot x_{m,cost\_km} \\
&+ \beta_{m,intra} \cdot x_{intra} \\
&+ \beta_{m,female} \cdot x_{female}
+ \beta_{m,employment} \cdot x_{employment}
+ \beta_{m,age} \cdot x_{age} \\
&+ \beta_{m,ticket} \cdot x_{ticket}
+ \beta_{m,license} \cdot x_{license} \\
&+ \beta_{m,purpose} \cdot x_{purpose}
+ \beta_{m,day} \cdot x_{day},
\end{split}
\end{equation*}
where $x_{dist}$ is the road-based distance between zone centroids,
$x_{m,time\_km}$ and $ x_{m,cost\_km}$ are travel time per kilometer and travel cost per kilometer
for the mode $m$ based on the distance~$x_{dist}$.
The binary variable $x_{intra}$ denotes whether the trip is an intrazonal trip,
where, in addition to genuine intrazonal trips,
trips between zones with a distance $x_{dist} < 1\,\mathrm{km}$ are also counted as intrazonal.
The variables $x_{female}$, $x_{employment}$, $x_{age}$, $x_{ticket}$, and $x_{license}$
are person specific variables for sex, employment status, age group, transit pass ownership, and holding
of a driving license.
The variables $x_{purpose}$ and $x_{day}$ denote the purpose of the trip (activity type)
and the day of the week.
The variable day of the week has the possible values workday, Saturday, and Sunday.
The coefficients $\beta_{time}$ and $\beta_{cost}$ are generic, i.\,e. not mode-dependent.
The constants $\beta_{m,0}$ and the other coefficients are alternative-specific, which means they are different
for each mode $m$.
The model parameters
have been estimated using \emph{mlogit}/\gnuR\
based on the trip data of
the whole Stuttgart household travel survey.
The resulting parameter estimates are shown in Table~\ref{tab:modechoicemodel}.
\subsection{Traffic assignment}
As \mobitopp does not contain a traffic flow simulation, the feedback loop is realized with the help of external
tools.
The resulting trip file of the \stagesimu is aggregated
to OD-matrices, differentiated by mode and time period.
A traffic assignment is made for each period using an external
tool and the resulting travel time matrices are fed back into the next run of the \stagesimu.
For traffic assignment, PTV Visum has been used.
For this purpose, all trips starting within the same hour have been aggregated into hourly OD matrices.
Hourly matrices have been generated for each day.
Traffic assignment has been done for 24 hourly matrices each for workday, Saturday, and Sunday.
The hourly matrices of Tuesday have been considered representative of a workday.
PTV Visum's \emph{Equilibrium Lohse} procedure with 50~iterations has been used for traffic assignment.
With this procedure, traffic assignment took about 25~minutes for each of the 72~hourly matrices.
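The aggregation step from the trip file to hourly OD matrices can be sketched as follows (the trip record layout is an assumption for illustration, not the actual format of \mobitopp's trip file).
\begin{verbatim}
from collections import defaultdict

def hourly_od_matrices(trips):
    """Aggregate simulated trips into hourly OD matrices per mode.

    trips: iterable of (origin_zone, destination_zone, mode, departure_minute)
    returns: dict (mode, hour) -> dict (origin, destination) -> trip count
    """
    matrices = defaultdict(lambda: defaultdict(int))
    for origin, destination, mode, departure in trips:
        hour = departure // 60
        matrices[(mode, hour)][(origin, destination)] += 1
    return matrices
\end{verbatim}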
\begin{table}[htbp]
\small
\begin{minipage}[t]{0.40\textwidth}
\centering
\hspace{\fill}
\begin{tabular}{@{}lD{.}{.}{2}@{}}
$\gamma_{purpose}$ & \multicolumn{1}{r@{}}{value} \\
\hline
business & 1.6\\
service & 0.95\\
private business & 0.95\\
private visit & 1.05\\
shopping daily & 0.85\\
shopping other & 0.95\\
leisure indoor & 1.15\\
leisure outdoor & 0.95\\
leisure other & 1.05\\
strolling & 3.0\\
\hline
\end{tabular}
\end{minipage}
\hspace{0.05\textwidth}
\begin{minipage}[t]{0.40\textwidth}
\centering
\begin{tabular}{@{}lD{.}{.}{2}@{}}
$\gamma_{employment}$ & \multicolumn{1}{r@{}}{value} \\
\hline
fulltime & 1.3\\
part-time & 1.3\\
unemployed & 0.9\\
homemaker & 0.95\\
retired & 1.0\\
student primary & 0.75\\
student secondary & 0.8\\
student tertiary & 0.9\\
vocational education & 1.4\\
other & 1.0\\
\hline
\end{tabular}
\end{minipage}
\hspace{\fill}
\caption{Scaling parameters of the destination choice model obtained by calibration.}
\label{tab:destinationchoicemodel_scale}
\end{table}
\begin{figure}[htbp]
\includegraphics[width=0.9\textwidth]{trip_length_distribution_purpose}
\caption{Trip length distribution by purpose.}
\label{result:triplength_purpose}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{modal_split_purpose}
\caption{Modal split by purpose.}
\label{result:modalsplit_purpose}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{modal_split_day}
\caption{Modal split by day of the week.}
\label{result:modalsplit_day}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{modal_split_passenger_day-crop}
\caption{Modal split by day of the week using the \emph{ridesharing} extension.}
\label{result:modalsplit_day_ridesharing}
\end{figure}
\subsection{Calibration}
The destination choice model and the mode choice model have been calibrated using the data of the household
travel survey.
As a starting point for the calibration, the estimated logit parameters
have been used.
Since the parameters of the destination choice model and the mode choice model have been estimated independently,
but actually destination choice and mode choice mutually influence each other, the initial match between
simulated results and survey results was only moderately good,
so subsequent calibration was necessary.
The calibration has been based on plots of trips length distributions and modal split differentiated by several
criteria that contrast the simulation results with the survey results, see for example
Figures~\ref{result:triplength_purpose} and~\ref{result:modalsplit_purpose}.
An iterative process consisting of visually judging the plots, adjusting the model parameters, and rerunning the
simulation has been employed.
When a good fit of the model had been reached, a new traffic assignment run was made.
The resulting travel time matrices were used for the next calibration round.
Each calibration round consisted of several simulation runs with $1\%$ samples of the population until
a good fit of the distributions was reached. These were followed by a simulation run with the whole population.
These results were used for the next round of traffic assignment.
In total, three complete iterations involving traffic assignment were made.
The resulting scaling parameters for the destination choice model
are shown in Table~\ref{tab:destinationchoicemodel_scale}.
The adjustments to the mode choice parameters are shown in the last column of Table~\ref{tab:modechoicemodel}.
\subsection{Simulation results}
\label{sec:results}
The simulation results
in Figures~\ref{result:triplength_purpose} to~\ref{result:modalsplit_day}
show that the model is
mostly well calibrated regarding trip length distribution and modal split.
The plot of the trip length distributions in Figure~\ref{result:triplength_purpose}
shows a good match between simulated data and survey data;
however, the number of trips with a distance over 50~km is somewhat underestimated.
Plots of trip length distributions by employment type (not shown here) show similar results.
It seems that this discrepancy can be at least partially attributed to
infrequently occurring long-distance trips like business trips, holiday trips or trips
to visit friends or relatives. As a future improvement, a separate model for such seldom-occurring events might
therefore be useful.
The modal split by purpose in Figure~\ref{result:modalsplit_purpose}
shows a good match between simulated and empirical data.
The same holds for the modal split by employment status (not shown here).
Finally, the modal split by day of the week
shows a good match as well (see Fig.~\ref{result:modalsplit_day}).
With the \emph{ridesharing extension} enabled, similar results for the modal split by day could be produced
for all days except Sunday (Fig.~\ref{result:modalsplit_day_ridesharing}).
Even an excessive adjustment of the mode choice parameters was not able to produce a higher share
of the mode car as passenger on Sunday.
We suppose that joint activities are the reason for this.
Joint activities occur particularly at weekends and are combined with joint trips.
Many of these trips are made as a combination of the modes car as driver and car as passenger.
As \mobitopp does not yet support joint activities,
agents choose their destinations independently of each other
and therefore choose in many cases different destinations.
Hence a joint trip is not possible and
this opportunity for trips as car passenger is missing.
As we were not able to correctly reproduce the modal split for Sunday,
the ridesharing extension is not enabled
as the default option, and the simplistic implementation is used instead.
\subsection{Transferability}
All recent developments of \mobitopp are based on the model for the Stuttgart area.
However, single-day versions of \mobitopp have been previously applied to
the Rhine-Neckar Metropolitan Region~\citep{kagerbauer2010mikroskopische} and the Frankfurt Rhine-Main area.
An application of the current multi-day version of \mobitopp to the Karlsruhe Region is planned.
It is intended to use the division into zones and the commuting matrices from an existing VISUM model.
A synthetic population will be generated based on statistics at the zonal level.
Recalibration of the destination choice and mode choice models in order to match trip length distributions
and modal split of the latest household travel survey conducted
in the Karlsruhe area~\citep{heilig2017implementation} will be necessary.
\section{Conclusions and outlook}
\label{sec:discussion}
We have presented the
travel demand
simulation
model
\mobitopp, which
is able to simulate travel demand
over a period of one week for a large-scale study area common in practice.
The model is derived as an extension of a single-day model: the activity schedule is extended from one
day to a week, and
some measures had to be taken to prevent the actual start times of activities from deviating too much from
the planned start times.
The model has been successfully calibrated: trip length distributions and modal split by several aggregation
levels match the results of a household travel survey well; even the modal split by day of the week shows
a good match.
Using the model in different projects, we have gained the following insights.
First, with an increasing simulation period,
it becomes more and more important to ensure that the start times of activities are correct.
An analysis of persons en route showed that the simplistic model of simulating the sequence of
trips and activities without further checking the start times of activities
results in unrealistic time series.
While this is not a real problem for a simulation period of one day,
the time series is already severely blurred for the second day.
However, a very simple rescheduling strategy can already fix this issue.
Second, as weekend travel is different from weekday travel, models or extensions of models that work
quite well for weekdays may have problems at weekends.
In our case, the ridesharing extension was not able to reproduce the modal split on Sundays correctly
and revealed that the activity model is not sophisticated enough for a realistic modeling
of car passenger trips.
So an explicit modeling of joint activities and joint trips is necessary.
Third, there are infrequent extraordinary activities, which should be treated differently from the
everyday activities of the same type.
Comparing the simulation results with the survey data using trip length distributions
revealed a non-negligible number of long-distance trips in the survey data for which
the distance was not correctly reproduced by the model.
These trips are mainly of the types \emph{business}, \emph{visit}, or \emph{leisure other}.
As the relative number of these trips is small, we assume that these are infrequent
extraordinary activities that have little in common with the everyday activities of the same type.
Due to the small relative number of these activities, the destination choice model is
mainly calibrated for the everyday activities.
It seems that a long-distance trip model for these infrequent events is needed.
The issue of the diverging start times of the activities has already been resolved by the rescheduling strategy
described in this work. However, this strategy is still rough.
A superior solution would include the adjustment of the durations of the activities and take
time-space-constraints~\citep{hagerstraand1970people} into account during destination choice.
Taking time-space-constraints into account and restricting the choice set to those locations that can be reached
within the timespan available between two activities would prevent activities from starting later than planned.
An early start of activities could be prevented by extending the duration of the previous activity.
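A minimal sketch of such a choice-set restriction is given below (in Python, with a hypothetical travel-time function and plain zone identifiers standing in for the corresponding \mobitopp components; it illustrates the idea, not the actual implementation):
\begin{verbatim}
def feasible_destinations(zones, origin, next_fixed_zone,
                          available_time, travel_time):
    """Keep only zones reachable within the time between two activities."""
    feasible = []
    for zone in zones:
        # time to reach the candidate zone plus time to continue
        # to the location of the next fixed activity
        total = travel_time(origin, zone) + travel_time(zone, next_fixed_zone)
        if total <= available_time:
            feasible.append(zone)
    return feasible
\end{verbatim}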
Joint activities need a flag indicating whether an activity is a joint activity or not, as well as information
about the participating persons.
The destination choice model has to consider that only one choice is made for a joint activity and
that this choice applies to all involved persons.
The subsequent mode choice has to be aware of the joint activity, with a high probability of a joint trip
as a result.
As a first step, joint activities in the household context should be sufficient, already providing a considerable
improvement. Modeling joint activities completely would require a representation of the social network,
whereby joint activities could take place within every clique of the social network.
The necessary information for joint activities will eventually be provided by
the \emph{actiTopp} implementation of the activity schedule creator module.
The issue with the infrequent long-distance events can be resolved by subdividing the
affected activity types
each into \emph{distant} and \emph{local} and estimating new parameters of the destination choice model
for the new activity types.
\section{Descriptive Analysis}
\label{sec:descriptive}
In this section, we perform a descriptive analysis of the data, understanding some key aspects of European basketball in the new-format era, i.e. the seasons 2016 - 2017, 2017 - 2018 and 2018 - 2019. For simplicity, we refer to each season using its end year, e.g. 2016 - 2017 is referred to as season 2017 hereafter.
In Figure \ref{fig:box-plot-score-season-home-away}, we plot the distribution of the scores for the Home and Away teams for each of the three first seasons of the modern era. We observe that the Home team distribution is shifted to higher values than the Away team distribution. Also, the latter is wider for the seasons 2017 and 2018. The median and 75th percentile values of the Home team distributions increase from 2017 to 2019, i.e. Home teams tend to score more.
\begin{figure}[]
\centering
\includegraphics[scale=0.5]{box-plot-score-season-home-away.png}
\caption{Box plot of the score distributions for each season for the home and away teams.}
\label{fig:box-plot-score-season-home-away}
\end{figure}
In Figure \ref{fig:box-plot-home-score-season-1-2} the Home team score distributions are plotted for wins (blue) and losses (red). The win distributions are shifted to higher values in recent seasons, i.e. Home teams score more points on average each year in victories, whereas in losses the median score remains constant. Home team victories are typically achieved when the Home team scores around 85 points, whereas in losses the median score is 76 points.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{box-plot-home-score-season-1-2.png}
\caption{Box plot of the Home Team score distributions for wins (1) and losses (2).}
\label{fig:box-plot-home-score-season-1-2}
\end{figure}
A similar plot, but for the Away teams, is shown in Figure \ref{fig:box-plot-away-score-season-1-2}. The same conclusions as before can be drawn.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{box-plot-away-score-season-1-2.png}
\caption{Box plot of the Away Team score distributions for wins (1) and losses (2).}
\label{fig:box-plot-away-score-season-1-2}
\end{figure}
Finally, Figure \ref{fig:box-plot-diff-season-1-2} shows the distribution of the points difference for each season for Home and Away wins respectively. The distribution of the score difference for Home wins remains unchanged across the years: $50\%$ (the median) of the Home wins are decided by less than 9 points, whereas in $25\%$ (the first quartile) of the Home wins the difference is 5 points or less. For Away wins, the distribution of differences is shifted to lower values, with $50\%$ of the Away wins ending with a difference of 8 points or less, and $25\%$ ending with a difference of 4 points or less. In layman's terms, it is harder to win on the road.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{box-plot-diff-season-1-2.png}
\caption{Box plot of the (absolute value) score difference distributions for home (1) and away (2) wins.}
\label{fig:box-plot-diff-season-1-2}
\end{figure}
We summarise a few more statistics in Table \ref{tbl:summary-stats}. We notice that at least $63\%$ of the games end in a Home win every year. The average Home score has an increasing trend, as does the average Home score in Home wins. However, the Away scores fluctuate over the seasons 2017 - 2019.
\begin{table}[]
\centering
\begin{tabular}{|l||p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{2cm}|p{2cm}|}
\hline
Season & Home Wins & Away Wins & Mean Home Score & Mean Away Score & Mean Home Win Score & Mean Away Win Score \\ \hline \hline
2017 & 152 & 88 & 80.8 & 77.5 & 78.9 & 79.6 \\ \hline
2018 & 151 & 89 & 82.6 & 78.8 & 80.5 & 81.0 \\ \hline
2019 & 155 & 85 & 82.8 & 78.6 & 80.6 & 80.9 \\ \hline
\end{tabular}
\caption{Summary statistics of the seasons 2017 - 2019.}
\label{tbl:summary-stats}
\end{table}
It should be emphasised that these trends and patterns across the years are very preliminary, as only three regular seasons of the new format have been completed so far, and no statistically significant conclusions can be made about trends across the seasons. One conclusion that does seem to arise is that Home court is an actual advantage, as Home teams have the biggest share of wins and also tend to score more, and hence win, when playing at Home.
\subsection{Probability of Winning}
Now, we explore the probability of a team winning when it scores at least $N$ points. In particular, we would like to answer the question: \textit{When a team scores at least, say, 80 points, what is the probability of winning?}
The results are plotted in Figure \ref{fig:probability-all-home-away}. We plot the probability of winning for all games, the Home games and the Away games respectively. It becomes evident that, for the same number of points, a Home team is more likely to win than an Away team, which suggests that teams defend better when playing at home. We also observe that in the Euroleague a team must score more than 80 points to have a better-than-random chance of winning a game.
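A minimal sketch of how such conditional probabilities can be estimated from the game results is given below (the column names are hypothetical; the input is assumed to be a long-format table with one row per team per game):
\begin{verbatim}
import pandas as pd

def prob_win_scoring_at_least(results: pd.DataFrame, n_points: int) -> float:
    """Empirical P(win | team scores at least n_points).

    `results` is assumed to have one row per team per game with the
    (hypothetical) columns 'score' and 'win' (1 if that team won, else 0).
    """
    subset = results[results["score"] >= n_points]
    return subset["win"].mean()

# e.g. probabilities for a range of thresholds:
# probs = {n: prob_win_scoring_at_least(games, n) for n in range(60, 101)}
\end{verbatim}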
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{prob_winning_scoring_at_least_n_points-home-away.png}
\caption{Probability of winning when a team scores certain points.}
\label{fig:probability-all-home-away}
\end{figure}
The conclusion of this descriptive analysis is that patterns remain unchanged across the years and no major shift in basketball style has been observed during these three years of the new format of the Euroleague. This is an important conclusion for the modelling assumptions discussed in the next section.
\section{Summary and Conclusions}
\label{sec:conclusions}
In this article we focused on Euroleague basketball games in the modern-format era. We first explored the descriptive characteristics of the last three seasons and highlighted the importance of home advantage. Also, the game characteristics remain stable across the three seasons, allowing us to use data across the seasons for modelling purposes.
We then introduced a framework for machine-learning modelling of the games' outcomes. This is a binary classification task in machine learning terms. We compared the nine most commonly used algorithms for classification. The AdaBoost algorithm outperforms the others in this classification task.
Two classification methodologies were also compared, match-level and team-level classification, with the former being more accurate.
Feature selection and hyper-parameter tuning are essential steps in the machine learning pipeline, and previous studies have ignored either or both of these steps. Wrapper methods proved to outperform filter methods and PCA.
Also, proper validation of the models on a hold-out set and their benchmarking had been omitted in some of the previous studies (most of them validated their models using $k$-fold cross validation).
Our model achieves 75\% accuracy using 5-fold cross validation. However, this is not the complete picture. The model was properly validated on a hold-out set, the season 2018--2019, achieving accuracy 66.8\%. This indicates that either the models were over-fitted during $k$-fold cross validation or the season 2018--2019 is exceptionally difficult to predict. However, the success of simple benchmarks shows that over-fitting is more likely to be the case. Over-fitting during the training phase (including feature selection and hyper-parameter tuning) is always a risk in machine learning. One lesson taken is that $k$-fold cross validation is not always an informative and conclusive way to validate a model.
We also compared our results to the collective predictions of a group of basketball enthusiasts. The collective predictive power of that group outperforms any model (machine learning or benchmark) in this study. We are intrigued by this result. We believe that any machine learning model, to be considered successful, should outperform human predictive capabilities (either individual or collective). For this reason, we set 73\% as the baseline accuracy for Euroleague basketball game predictions. This is also in agreement with \cite{zimmermann2013predicting}, in which the authors found that all machine learning models had a predictive accuracy bounded by 74\% (although this was for NBA games). Future models for basketball predictions should aim to break this threshold.
\subsection{What is next?}
There are many potential directions and improvements of this study. First, we aim to repeat this analysis with more team features, such as field goal percentages, field goals made, three pointers, free throws, blocks, steals (normalised either per game or per possession), etc. Additional features from advanced statistics could be included, such as effective field goals, efficiencies, etc. The feature space then becomes larger and feature reduction techniques might prove meaningful.
Another direction for improvement is the inclusion of players' information and features. We discussed that the study in \cite{Oursky2019mla} achieves 80\% accuracy in NBA game predictions by including players' features in their classifier.
Finally, as the seasons progress, higher volumes of data will become available. Although more data does not necessarily lead to more accurate models, we will be able to conduct more analysis and potentially avoid over-fitting. In many of the studies reviewed in the Introduction, the accuracy of the models varies greatly from one season to the next. More data will provide us with more robust results. Additionally, we will be able to exploit methods such as neural networks, which proved successful in \cite{Oursky2019mla}.
This study is neither a final solution nor the end of the story. In contrast, it lays out the methodology and best practices for further exploring the question of predicting game outcomes. We will continue to re-train and re-evaluate our models with the 73\% accuracy threshold (the accuracy of the wisdom of the basketball crowds) in mind.
\section{Data Description}
\label{sec:data}
The data is collected from the Euroleague's official web-site \footnote{\url{https://www.euroleague.net/}} using scraping methods and tools, such as the \code{Beautiful Soup} package \footnote{\url{https://pypi.org/project/beautifulsoup4/}} in \code{Python}. The code for data extraction is available on Github \footnote{\url{https://github.com/giasemidis/basketball-data-analysis}}. The data collection focuses on the regular seasons only.
\subsection{Data Extraction}
Data is collected and organised into three types: (i) game statistics, (ii) season results and (iii) season standings.
Game statistics include the statistics of each team in a game, such as offense, defense, field goals attempted and made, percentages, rebounds, assists, turnovers, etc. Hence, each row in the data represents a single team in a match and its statistics.
Season results data files include game-level results, i.e. each row corresponds to a game. The data file stores the round number, date, home and away teams and their scores.
Season standings data files contain the standings with the total points, number of wins and losses, total offense, defense and score difference at the end of each round.
\section{Introduction}
\label{sec:intro}
Basketball is changing at a fast pace. It is broadly considered that the game has entered a new era, with a different style of play dominated by three-point shots. An enjoyable and illustrative overview of the main changes is given in \cite{sprawnball}, where the author demonstrates the drastic changes in the game through graph illustrations of basketball statistics.
Advanced statistics have played a key role in this development. Beyond the common statistics, such as shooting percentages, blocks, steals, etc., the introduction of advanced metrics (such as the ``Player Efficiency Rating'', ``True Shooting Percentage'', ``Usage Percentage'', ``Pace Factor'', and many more) attempts to quantify ``hidden'' layers of the game and to measure a player's or a team's performance and efficiency on a wide range of fields. In layman's terms, the following example is illustrative. A good offensive player but bad defender might score 20 points on average, but concede 15. An average offensive player but good defender might score on average 10 points, but deny 10 points (i.e. 5 good defensive plays), not through blocks or steals, but through effective individual and team defense. Using traditional basketball statistics, the former player scores more points but is overall less efficient for the team.
A major contribution to the field of advanced basketball statistics is made in \cite{Oliver20014}, where the author ``\textit{demonstrates how to interpret player and team performance, highlights general strategies for teams when they're winning or losing and what aspects should be the focus in either situation.}''. In this book, several advanced metrics are defined and explained. In this direction, further studies have contributed too, see \cite{Kubatko2007} and \cite{RustThesis2014}.
With the assistance of advanced statistics, modern NBA teams have optimised their franchise's performance, from acquiring players with great potential revealed through the advanced metrics, a strategy well known as \textit{money-ball}, to changing the style of the game. A modern example is the increase in three-point shooting, a playing strategy also known as ``\textit{three is greater than two}''.
Despite the huge success of advanced statistics, these measures are mostly descriptive. Predictive models for NBA and college basketball have started to arise over the past ten years.
The authors in \cite{Beckler2009nbao} first used machine learning algorithms to predict the outcome of NBA basketball games. Their dataset spans the seasons 1991-1992 to 1996-1997. They use standard accumulated team features (e.g. rebounds, steals, field goals, free throws, etc.), including offensive and defensive statistics together with wins and losses, reaching 30 features in total. Four classifiers were used: linear regression, logistic regression, SVMs and neural networks. Linear regression achieves the best accuracy, which does not exceed 70\% on average (max 73\%) across seasons. The authors observe that accuracy varies significantly across seasons, with ``\textit{some seasons appear to be inherently more difficult to predict}''. An interesting aspect of their study is the use of the ``NBA experts'' prediction benchmark, which achieves 71\% accuracy.
In \cite{Torres2013pdn}, the author trains three machine learning models, linear regression, a maximum likelihood classifier and a multilayer perceptron, for predicting the outcome of NBA games. The dataset of the study spans the seasons 2003--2004 to 2012--2013. The algorithms achieve accuracies of 67.89\%, 66.81\% and 68.44\% respectively. The author used a set of eight features and in some cases PCA boosted the performance of the algorithms.
The authors of \cite{zimmermann2013predicting} use machine learning approaches to predict the outcome of American college basketball games (NCAAB). These games are more challenging than NBA games for a number of reasons (fewer games per season, less competition between teams, etc.). They use a number of machine learning algorithms, such as decision trees, rule learners, artificial neural networks, Naive Bayes and Random Forest. Their features are the four factors and the adjusted efficiencies, see \cite{Oliver20014,zimmermann2013predicting}, all team-based features. Their data-set extends from season 2009 to 2013. The accuracy of their models varies with the seasons, with Naive Bayes and neural networks achieving the best performance, which does not exceed 74\%. An interesting conclusion of their study is that there is \textit{``a "glass ceiling" of about 74\% predictive accuracy that cannot be exceeded by ML or statistical techniques''}, a point that we also discuss in this study.
In \cite{Fang2015ngp}, the authors train and validate a number of machine learning algorithms using NBA data from 2011 to 2015. They assess the performance of the models using 10-fold cross-validation. For each year, they report the best-performing model, achieving a maximum accuracy of 68.3\%. They also enrich the game data with players' injury data, but with no significant accuracy improvements. However, a better approach for validating their model would be to assess it on a hold-out set of data, say the last year. In their study, a different algorithm achieves the best performance in each season, which does not allow us to draw conclusions or apply the approach to a new dataset; how do we know which algorithm is right for the next year?
The authors of \cite{Jain2017mla} focused on two machine learning models, SVM and hybrid fuzzy-SVM (HFSVM), to predict NBA games. Their data-set is restricted to the regular season 2015-2016, which is split into training and test sets. Initially, they used 33 team-level features, varying from basic to more advanced team statistics. After applying a feature selection algorithm, they ended up with the 21 most predictive attributes. The SVM and HFSVM achieve 86.21\% and 88.26\% average accuracy respectively on 5-fold cross-validation. Their high accuracy, though, could be an artefact of the restricted data-set, as it is applied to a single season, which could be an exceptionally good season for predictions. A more robust method to test their accuracy would be to use $k$-fold cross validation on the training set for parameter tuning and then validate the accuracy of the model on a hold-out set of games (which was not part of cross-validation). Otherwise, the model might be over-fitted.
In \cite{Oursky2019mla}, the authors use deep neural networks to predict the margin of victory (MoV) and the winner of NBA games. They treat the MoV prediction as a regression problem, whereas the game-winner prediction is treated as a classification problem. Both models are enhanced with an RNN using time-series of players' performance. Their dataset includes data from 1983 onwards. They report an accuracy of 80\% on the last season.
Although advanced statistics were introduced in the NBA more than a decade ago, European basketball has only recently been catching up. The Euroleague \footnote{The Euroleague is the top-tier European professional basketball club competition.} organised a student competition in 2019 \footnote{The SACkathon, see \url{http://sackathon.euroleague.net/}.}, whose aim was ``\textit{to tell a story through the data visualization tools}''.
Despite the outstanding progress on the prediction of NBA games, European basketball has been ignored in such studies. European basketball has a different playing style, the duration of the games is shorter and even some defensive rules are different. Applying models trained on NBA games to European games cannot be justified.
Predicting Euroleague games poses a number of challenges. First, the Euroleague's new format is only a few years old, therefore data is limited compared to the NBA, where one can use tens of seasons for modelling. Also, the number of games per season is much lower, 30 in the first three seasons compared to about 80 games in the NBA. Therefore, the volume of data is much smaller. Secondly, Euroleague teams change every season. The Euroleague has closed contracts with 11 franchises which participate every year; however, the remaining teams are determined through other paths \footnote{\url{https://www.euroleaguebasketball.net/euroleague-basketball/news/i/6gt4utknkf9h8ryq/euroleague-basketball-a-licence-clubs-and-img-agree-on-10-year-joint-venture}}. Hence, the Euroleague is not a closed championship like the NBA: teams enter and leave the competition every year, making it more challenging to model the game results.
When the current study started, no study existed that predicted European basketball games. During this project, a first attempt was made by the Greek sports site \url{sport24.gr}, which provided the probabilities of each team winning a Euroleague game, updated during the game. Their model was called the ``Pythagoras formula'' \footnote{\url{https://www.sport24.gr/Basket/live-prognwseis-nikhs-sthn-eyrwligka-apo-to-sport24-gr-kai-thn-pythagoras-formula.5610401.html}}. According to this source, ``\textit{the algorithm was developed by a team of scientists from the department of Statistics and Economics of the University of Athens in collaboration with basketball professionals. The algorithm is based on complex mathematical algorithms and a big volume of data}''. However, this feature is no longer available on the website. Also, no performance information is available about it.
More recent attempts at predicting European basketball games can be found on Twitter accounts which specialise in basketball analytics. In particular, the account ``cm simulations'' (\url{@cm_silulations}) gives predictions of Euroleague and Eurocup games each week for the current season 2019-2020. In a recent tweet \footnote{\url{https://twitter.com/CmSimulations/status/1225901155743674368?s=20}, \url{https://twitter.com/CmSimulations/status/1226233097055850498?s=20}}, they claim to have achieved an accuracy of 64.2\% for rounds 7 to 24 of this season. However, no details of the algorithm, the analysis and the methodology have been made public.
In this article, we aim to fill this gap, using modern machine learning tools in order to predict \textit{European} basketball game outcomes. We also aim to outline the machine-learning framework and pipeline (from data collection to feature selection, hyper-parameter tuning and testing) for this task, as previous approaches might have ignored one or more of these steps.
From the machine-learning point of view, this is a binary classification problem, i.e. an observation must be classified as 1 (home win) or 2 (away win). For our analysis we focus on the Euroleague championship in the modern-format era covering three seasons, 2016-2017, 2017-2018 and 2018-2019. The reason we focus on the Euroleague is entirely personal, due to the experience and familiarity of the authors with European basketball. The reason we focus on the latest three seasons is that, prior to 2016, the championship consisted of several phases in which teams were split into several groups and not all teams played against each other. In the modern format, there is a single table, where all teams play against each other. This allows for more robust results.
We should emphasize that the aim of the paper is to extend and advance the knowledge of machine learning and statistics in basketball; it should not be used for betting purposes.
This article is organised as follows. In Section \ref{sec:data}, the data-set (its size, features, etc.) is described, whereas in Section \ref{sec:descriptive} a descriptive analysis of the data-set is presented. Section \ref{sec:predictive-modelling} focuses on the machine-learning predictive modelling and explains the calibration and validation processes, while it also discusses further insights such as the ``wisdom of the crowd''. We summarise and conclude in Section \ref{sec:conclusions}.
\section{Predictive Modelling}
\label{sec:predictive-modelling}
In this section, we formulate the problem and discuss the methodology, the feature selection, calibration and testing phases.
\subsection{Methodology}
From the machine-learning point of view, the prediction of the outcome of basketball games is a binary classification problem: one needs to classify whether a game ends as 1 (home win) or 2 (away win).
Next, we need to define the design matrix of the problem, i.e. the observations and their features. Here, we experiment with two approaches, match-level and team-level predictions, elaborated in the next subsections.
\subsubsection{Match-level predictions}
\label{sec:match-level-predictions}
At the match-level, the observations are the matches and the features are:
\begin{itemize}
\item Position in the table, prior to the game, of the home and away team respectively.
\item The average offense over the season's matches, prior to the game, of the home and away team respectively.
\item The average defense over the season's matches, prior to the game, of the home and away team respectively.
\item The average difference between the offense and defense over the season's matches, prior to the game, of the home and away team respectively.
\item The form, defined as the fraction of the past five games won, of the home and away team respectively.
\item The final-four flag, a binary variable indicating whether the home (resp. away) team qualified to the final-four of the \textit{previous} season.
\end{itemize}
At the match level, we consider features of both the home and away teams, resulting in 12 features in total. All features quantify the teams' past performance; no information from the game to be predicted is used. These features are calculated for each season separately, as teams change significantly from one season to the next. As a result, there is no information (features) for the first game of each season, which we exclude from our analysis.
This list of features can be further expanded with more statistics, such as the average shooting percentage, assists, etc., of the home and away teams respectively. For the time being, we focus on the aforementioned simple performance statistics and explore their predictive power.
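As an illustration of how these pre-game features can be derived from the season results, the sketch below computes the average offense of a team prior to each round (column names are hypothetical placeholders for the season-results files described in Section \ref{sec:data}):
\begin{verbatim}
import pandas as pd

def pregame_avg_offense(results: pd.DataFrame, team: str) -> pd.DataFrame:
    """Average points scored by `team` prior to each round of one season.

    `results` is assumed to hold one row per game with (hypothetical)
    columns 'round', 'home_team', 'away_team', 'home_score', 'away_score'.
    """
    rows = []
    for _, game in results.sort_values("round").iterrows():
        if game["home_team"] == team:
            rows.append((game["round"], game["home_score"]))
        elif game["away_team"] == team:
            rows.append((game["round"], game["away_score"]))
    frame = pd.DataFrame(rows, columns=["round", "points"])
    # shift by one game so that the value for round r uses only rounds < r
    frame["avg_offense_prior"] = frame["points"].expanding().mean().shift(1)
    return frame[["round", "avg_offense_prior"]]
\end{verbatim}
The remaining features (defense, difference, form, position) can be built in the same incremental fashion.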
\subsubsection{Team-level predictions}
At the team level, the observations are teams and the machine learning model predicts the probability of a team winning. To estimate the outcome of a match, we compare the winning probabilities of the competing teams; the one with the higher probability is predicted to win the match. Hence, we build a team predictor with the following features:
\begin{itemize}
\item Home flag, an indicator whether the team play at home (1) or away (0).
\item Position of the team in the table prior to the game.
\item The average offense over the season's matches of the team prior to the game.
\item The average defense over the season's matches of the team prior to the game.
\item The average difference between the offense and defense over the season's matches of the team prior to the game.
\item The form, defined as above, of the team.
\item The final-four flag, a binary variable indicating whether the team qualified to the final-four of the \textit{previous} year.
\end{itemize}
Now, we have seven features for each observation (i.e. team), but we have also doubled the number of observations compared to the match-level approach. The machine-learning model predicts the probability of a team winning, and these probabilities are combined to predict the outcome of a game.
As mentioned before, these features can be expanded to include shooting percentages, steals, etc. We leave this for future work.
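A minimal sketch of how the two team-level probabilities are turned into a match prediction, assuming a fitted scikit-learn classifier \code{clf} and pre-built feature vectors for the two teams, is the following:
\begin{verbatim}
def predict_match(clf, home_features, away_features):
    """Predict 1 (home win) or 2 (away win) from team-level probabilities."""
    # probability of the positive class ("team wins"); this assumes the
    # classifier was trained with labels {0, 1}, so column 1 is "win"
    p_home = clf.predict_proba([home_features])[0, 1]
    p_away = clf.predict_proba([away_features])[0, 1]
    return 1 if p_home >= p_away else 2
\end{verbatim}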
For the remainder of the analysis, we split the data into two subsets: the 2017 and 2018 seasons for hyper-parameter calibration, feature selection and training, and the final season 2019 for testing. Also, all features are normalised to the interval $[0, 1]$, so that biases due to magnitude are removed.
\subsubsection{Classifiers}
The binary classification problem is a well-studied problem in the machine-learning literature and numerous algorithms have been proposed and developed \cite{Hastie2009elements}. Here, we experiment with the following list of off-the-shelf algorithms that are available in the \textit{scikit-learn} Python library \footnote{\url{https://scikit-learn.org/stable/}} \cite{scikit-learn}. The reason we use these algorithms is two-fold: (i) we do not want to re-invent the wheel, so we use well-established algorithms; (ii) the \textit{scikit-learn} library has been developed and maintained to high standards by the machine-learning community, constituting an established tool for the field.
The classifiers are:
\begin{itemize}
\item Logistic Regression (LR)
\item Support Vector Machines (SVM) with linear and \textit{RBF} kernels
\item Decision Tree (DT)
\item Random Forest (RF)
\item Naive Bayes (NB)
\item Gradient Boosting (GB)
\item $k$-Nearest Neighbours ($k$NN)
\item Discriminant Analysis (DA)
\item AdaBoost (Ada)
\end{itemize}
\subsection{Calibration}
\label{sec:calibration}
The above classifiers have hyper-parameters, which we tune using 5-fold cross-validation and grid search on the training set. Although some algorithms have more than one hyper-parameter, for the majority of the classifiers we limit ourselves to tuning only a few parameters in order to speed up the computations. These hyper-parameters are:
\begin{itemize}
\item The regularisation strength for LR and SVM-linear
\item Number of estimators for RF, GB and Ada
\item Number of neighbours for $k$NN
\item The regularisation strength and the width of the RBF kernel for the SVM-rbf
\item Number of estimators and learning rate for Ada, abbreviated as Ada2~\footnote{We experiment with two versions of the Ada classifiers.}
\end{itemize}
All other hyper-parameters are set to their default values.
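As an illustration, the tuning of the ``Ada2'' variant (number of estimators and learning rate) with scikit-learn can be sketched as follows; the grids shown are indicative, not necessarily the ones used in this study, and \code{X\_train}, \code{y\_train} denote the match-level features and labels of the 2017 and 2018 seasons:
\begin{verbatim}
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [50, 100, 150, 200],         # indicative grid
    "learning_rate": [0.1, 0.3, 0.5, 0.7, 1.0],  # indicative grid
}
search = GridSearchCV(
    AdaBoostClassifier(random_state=0),
    param_grid,
    scoring="accuracy",
    cv=5,  # 5-fold cross-validation on the training seasons
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
\end{verbatim}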
We record both the accuracy and the weighted accuracy across the folds. For each value of the hyper-parameter search, we find the mean score across the folds and report its maximum value. In Figure \ref{fig:match-level-cross-validation}, we plot the scores of the 5-fold cross validation process for the aforementioned classifiers at the match-level classification. We observe that the Ada and Ada2 classifiers outperform all others for all scores, with gradient boosting coming third.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{match-level-cross-validation.png}
\caption{Accuracy (blue) and weighted accuracy (red) scores of 5-fold cross-validation at the match-level classification.}
\label{fig:match-level-cross-validation}
\end{figure}
In Figure \ref{fig:team-level-cross-validation}, we plot the scores of the 5-fold cross validation for the classifiers of the team-level analysis. We observe that the team-level analysis under-performs the match-level models for almost all classifiers.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{team-level-cross-validation.png}
\caption{Accuracy (blue) and weighted accuracy (red) scores of 5-fold cross-validation at the team-level classification.}
\label{fig:team-level-cross-validation}
\end{figure}
We conclude that the Ada and Ada2 classifiers at the match level are the best-performing models, achieving accuracy scores of 0.705 and 0.708 respectively, and we select them for further analysis.
\subsection{Feature Selection}
\label{sec:feature-selection}
Feature selection is an important step in the machine-learning methodology. There are several methods for selecting the most informative features: filter, embedded, wrapper and feature transformation (e.g. Principal Component Analysis (PCA)) methods; see \cite{Guyon:2003,Verleysen:2005} for further details. Here, we explore the aforementioned methods and attempt to identify the most informative features.
\subsubsection{Filter Methods}
Filter methods are agnostic to the algorithm; using statistical tests, they aim to determine the features with the highest statistical dependency between the feature and target variables \cite{Guyon:2003,Verleysen:2005}. For classification, such methods are: (i) the ANOVA F-test, (ii) the mutual information (MI) and (iii) the chi-square (Chi2) test.
We use the match-level features and apply the three filter methods. We rank the features from the most informative to the least informative for each of the three methods. We plot the results in Figure \ref{fig:filter-ranking}. The darker blue a feature is, the more informative it is for the corresponding method. On the other extreme, the least informative features are yellowish. We observe some common patterns: the position and F4 features are very informative, whereas the Offence of the Away team and the Defence features are the least informative for all methods.
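A sketch of how such rankings can be obtained with scikit-learn is given below; \code{X} and \code{y} are assumed to be the normalised match-level feature matrix and labels (the chi-square test requires non-negative features, which holds after scaling to $[0, 1]$):
\begin{verbatim}
import numpy as np
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif

def rank_features(score_func, X, y):
    """Return feature indices ordered from most to least informative."""
    scores = score_func(X, y)
    # f_classif and chi2 return (scores, p-values); mutual_info_classif
    # returns only the scores
    scores = scores[0] if isinstance(scores, tuple) else scores
    return np.argsort(scores)[::-1]

rankings = {
    "ANOVA": rank_features(f_classif, X, y),
    "MI": rank_features(mutual_info_classif, X, y),
    "Chi2": rank_features(chi2, X, y),
}
\end{verbatim}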
\begin{figure}[]
\centering
\includegraphics[scale=0.5]{filter-feature-ranking-heatmap.png}
\caption{Ranking (1 most, 12 least informative) of features according to the ANOVA, mutual information (``MI'') and chi-square (``Chi2'') filter methods. The ``x'' and ``y'' suffixes refer to the features of the Home and Away team respectively.}
\label{fig:filter-ranking}
\end{figure}
To check whether the less informative features hurt the model, we assess the performance of the model with an increasing number of features, starting from the most informative ones according to each filter method and adding the less informative ones incrementally. If some features had a negative impact on the performance of the model, we would expect the accuracy to peak at some number of features and then decrease. However, we observe (see Figure \ref{fig:filter-components}) that the maximum performance scores (accuracy and weighted accuracy) are achieved when all 12 features are included in the model. Hence, although some features are less informative, they all contribute positively to the model's performance.
As no subset of features can exceed the performance of the ``all features'' model, we discard filter methods as a potential method for feature reduction/selection.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{filter-anova-ada}
\caption{ANOVA}
\label{fig:filter-anova-ada}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{filter-mi-ada}
\caption{Mutual Information}
\label{fig:filter-mi-ada}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{filter-chi2-ada}
\caption{Chi-square}
\label{fig:filter-chi2-ada}
\end{subfigure}
\caption{Performance scores for an increasing number of features, starting with the most informative features for each method.}
\label{fig:filter-components}
\end{figure}
\subsubsection{PCA}
Principal Component Analysis (PCA) is a dimensionality reduction method for the feature space; see \cite{Hastie2009elements} for further details. We run PCA for an increasing number of components, from 1 to the maximum number of features, and assess each number of components using the Ada model. In addition, at each value, grid search finds the optimal hyper-parameter (the number of estimators) and the optimal model is assessed\footnote{The reason we re-run the grid search is that the hyper-parameter(s) found in the previous section are optimal for the original feature set. PCA transforms the features and hence a different hyper-parameter might be optimal.}.
The best accuracy and weighted-accuracy scores for different numbers of components are plotted in Figure \ref{fig:pca-ada}. First, we observe that the model's performance peaks at three components, which capture most of the variance. As more components are added, noise is added and the performance decreases. Second, even at peak performance the accuracy and weighted accuracy do not exceed those of the Ada model with the original features. Hence, we discard PCA as a potential method for feature reduction/selection.
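A sketch of this procedure, combining PCA and AdaBoost in a scikit-learn pipeline and re-tuning the number of estimators for each number of components, could be the following (grids again indicative):
\begin{verbatim}
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("pca", PCA()),
    ("ada", AdaBoostClassifier(random_state=0)),
])
param_grid = {
    "pca__n_components": list(range(1, 13)),   # 1 to 12 components
    "ada__n_estimators": [50, 100, 150, 200],  # indicative grid
}
search = GridSearchCV(pipeline, param_grid, scoring="accuracy", cv=5)
search.fit(X_train, y_train)
\end{verbatim}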
\begin{figure}[]
\centering
\includegraphics[scale=0.7]{pca-ada.png}
\caption{Accuracy and weighted accuracy for increasing number of principal components.}
\label{fig:pca-ada}
\end{figure}
\subsubsection{Wrapper Methods}
\label{sec:wrapper-methods}
Wrapper methods scan combinations of features, assess their performance using a classification algorithm, and select the feature (sub-)set that maximises the performance score (in a cross-validation fashion). Hence, they depend on the classification method.
Since the number of features in our study is relatively small ($12$), we are able to scan all possible feature subsets ($4,095$ in total). As we cannot find the optimal hyper-parameters for each subset of features (the search space becomes huge), we proceed in a two-step iterative process. First, we use the best pair of hyper-parameters of the ``Ada2'' algorithm\footnote{We remind the reader that we focus on the ``Ada2'' algorithm, as it was shown in the previous section to be the best-performing algorithm for this task.} to scan all the combinations of features. Then, we select the combination of features which outperforms the others. In the second step, we re-train the algorithm to find a new pair of hyper-parameters that might be better for the selected feature subset.
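The exhaustive scan of the first step can be sketched as follows, using the cross-validated accuracy of the ``Ada2'' classifier for every non-empty subset of the 12 features (\code{X} is the feature matrix as a NumPy array and \code{y} the labels):
\begin{verbatim}
from itertools import combinations

from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def wrapper_search(X, y, n_estimators, learning_rate):
    """Rank feature subsets by 5-fold cross-validated accuracy."""
    n_features = X.shape[1]
    results = []
    for k in range(1, n_features + 1):
        for subset in combinations(range(n_features), k):
            cols = list(subset)
            clf = AdaBoostClassifier(n_estimators=n_estimators,
                                     learning_rate=learning_rate,
                                     random_state=0)
            score = cross_val_score(clf, X[:, cols], y,
                                    cv=5, scoring="accuracy").mean()
            results.append((subset, score))
    return sorted(results, key=lambda item: item[1], reverse=True)
\end{verbatim}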
In the first step, we sort the different combinations of features by their performance and plot the accuracy and weighted accuracy of the top ten combinations in Figure \ref{fig:wrapper-method}. We observe that two sub-sets of features occupy the top two positions in both accuracy and weighted accuracy. We select these two feature sub-sets for further evaluation, hyper-parameter tuning, final training and testing. These feature sets are:
\begin{itemize}
\item Model 1: Position Home, Position Away, Offence Home, Offence Away, Defence Away, Difference Away, F4 Away
\item Model 2: Position Home, Offence Home, Offence Away, Defence Away, Difference Away, F4 Home, F4 Away.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{wrapper-top10-bar-accuracy}
\caption{Accuracy}
\label{fig:wrapper-accuracy}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{wrapper-top10-bar-w-accuracy}
\caption{Weighted Accuracy}
\label{fig:wrapper-w-accuracy}
\end{subfigure}
\caption{Performance scores of the top-10 sub-sets of features via the wrapper method of feature selection. The x-axis indicates the index (zero-based) of the feature in the following list ['Position Home', 'Position Away', 'Offence Home', 'Offence Away', 'Defence Home', 'Defence Away', 'Form Home', 'Form Away', 'Difference Home', 'Difference Away', 'Home F4', 'Away F4'], see also Section \ref{sec:match-level-predictions}}
\label{fig:wrapper-method}
\end{figure}
Having selected these two sets of features, we proceed to step two of the iterative process and re-calibrate their hyper-parameters. At the end of the two-step iterative process, we identify the following subsets of features with their optimal hyper-parameters:
\begin{itemize}
\item Model 1: Ada2 with 141 estimators and 0.7 learning rate achieving 0.7498 accuracy and 0.7222 weighted accuracy.
\item Model 2: Ada2 with 115 estimators and 0.7 learning rate, achieving 0.7375 accuracy and 0.7066 weighted accuracy.
\end{itemize}
We conclude that the wrapper method results in the best-performing features and hence models, achieving an accuracy of 75\% via 5-fold cross-validation. We select these models to proceed to the validation phase.
\subsection{Validation}
\label{sec:validation}
In this section, we use the models (i.e. features and hyper-parameters) selected in the previous section for validation in the season 2018-2019. For comparison, we also include the Ada2 model with all features (without feature selection) as ``Model 3''.
At each game round, we train the models given all the available data before that round. For example, to predict the results of round 8, we use all data available from seasons 2017 and 2018 and rounds 2 to 7 from season 2019. We ignore round 1 in all seasons, as no information (i.e. features) is available yet for that round\footnote{We comment on the ``cold-start'' problem in the Discussion section.}. We then predict the results of the 8 games of that round, store the results and proceed to the next round.
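This round-by-round evaluation can be sketched as follows; \code{build\_xy} is a hypothetical helper that turns a table of games into the match-level feature matrix and labels, and the hyper-parameters are those of ``Model 2'':
\begin{verbatim}
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

def walk_forward(train_xy, rounds_2019, build_xy):
    """Re-train before every 2018-2019 round and predict its games.

    train_xy:    (X, y) built from the 2017 and 2018 seasons.
    rounds_2019: per-round game tables of 2018-2019, in round order
                 (round 1 excluded, as it has no features).
    """
    X_hist, y_hist = train_xy
    y_true, y_pred = [], []
    for games in rounds_2019:
        X_rnd, y_rnd = build_xy(games)
        clf = AdaBoostClassifier(n_estimators=115, learning_rate=0.7,
                                 random_state=0)
        clf.fit(X_hist, y_hist)
        y_pred.extend(clf.predict(X_rnd))
        y_true.extend(y_rnd)
        # append the round just played to the training data
        X_hist = np.vstack([X_hist, X_rnd])
        y_hist = np.concatenate([y_hist, y_rnd])
    return accuracy_score(y_true, y_pred)
\end{verbatim}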
We benchmark the models against the following three simple baseline rules (a code sketch of these rules is given after the list):
\begin{itemize}
\item Benchmark 1: Home team always wins. This is the majority model in machine learning terms.
\item Benchmark 2: The Final-4 teams of the previous year always win; if no Final-4 team is playing, or both teams are Final-4 teams, the home team wins.
\item Benchmark 3: The team higher in the standings, at that moment in the season, wins.
\end{itemize}
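A minimal implementation of the three benchmark rules, assuming per-game records with hypothetical fields for the current standings positions and the previous-season Final-4 flags, could be:
\begin{verbatim}
def benchmark_prediction(game, rule):
    """Return 1 (home win) or 2 (away win) for one of the three benchmarks.

    `game` is assumed to expose 'home_f4', 'away_f4' (previous-season
    Final-4 flags) and 'home_position', 'away_position' (standings).
    """
    if rule == 1:   # Benchmark 1: the home team always wins
        return 1
    if rule == 2:   # Benchmark 2: the previous-season Final-4 team wins
        if game["away_f4"] and not game["home_f4"]:
            return 2
        return 1    # no Final-4 team, or both are: home team wins
    if rule == 3:   # Benchmark 3: the team higher in the standings wins
        return 1 if game["home_position"] <= game["away_position"] else 2
    raise ValueError("unknown benchmark rule")
\end{verbatim}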
\begin{table}[]
\centering
\begin{tabular}{l||r|r}
& Accuracy & Weighted Accuracy \\
\hline
Model 1 & 0.6595 & 0.6096 \\
Model 2 & 0.6595 & 0.6211 \\
Model 3 & 0.6681 & 0.6306 \\
Benchmark 1 & 0.6509 & 0.5000 \\
Benchmark 2 & 0.7198 & 0.6446 \\
Benchmark 3 & 0.6810 & 0.7035
\end{tabular}
\caption{Accuracy and weighted accuracy of the three models and three benchmarks.}
\label{tbl:validation-scores-benchmarking}
\end{table}
From Table \ref{tbl:validation-scores-benchmarking} we observe the following. First, the machine-learning models (marginally) outperform the majority benchmark. However, they fail to outperform the more sophisticated benchmarks 2 and 3. We also observe that models 1 and 2, resulting from feature selection, perform worse than model 3 (where all features are present). This implies that models 1 and 2 have been over-fitted and that no feature selection was required at this stage, as the number of features is still relatively small. This is also in agreement with the filter-methods feature selection. The performance of benchmark 2 is a special case: in the season 2018-2019, the teams that qualified to the Final-4 of the previous season (2017-2018) continued their good basketball performance. However, historically this is not always the case, and this benchmark is not expected to be robust for other seasons.
We focus on ``Model 2'' and plot the number of correctly predicted games in every round of the season 2018-2019 in Figure \ref{fig:corrects-per-round}, and the model's accuracy per round in Figure \ref{fig:accuracy-per-round}. From these figures, we observe that the model achieves perfect accuracy in rounds 20 and 23, and 7/8 in four other rounds. With the exception of 4 rounds, the model always predicts at least half of the games correctly. We manually inspected round 11, where the model has its lowest accuracy; there were 4 games that a basketball expert would consider ``unexpected'', a factor that might contribute to the poor performance of the model in that round.
In Figure \ref{fig:accuracy-per-round} we also plot the trend of the accuracy, which is positive and indicates that, as the season progresses, the teams become more stable and the model more accurate.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{bar-plot-correct-games-2018-2019}
\caption{Number of correctly predicted games per round.}
\label{fig:corrects-per-round}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{accuracy-trend-season2018-2019}
\caption{Accuracy and the trend of accuracy.}
\label{fig:accuracy-per-round}
\end{figure}
\section{The Wisdom of (Basketball) Crowds}
The author is a member of a basketball forum on a social networking platform. On this forum, a basketball predictions championship is organised for fun. In this championship, the members give their predictions for the Euroleague games of the upcoming round. They get 1 point if the prediction is correct, 0 otherwise. At the end of the season, the players in the top eight of the table qualify for the play-off phase.
Here, we collect the players' predictions. For each game in each round, we assign a result, which is the majority prediction of the players~\footnote{Round 26 is omitted; due to a technical issue on the forum, results were not collected and the round was invalid.}. The championship started with about 60 players and finished with 41 players still active, as many dropped out during the course of the season.
We update Table \ref{tbl:validation-scores-woc} to include the majority vote of the members of the forum (excluding the 26th round from the accuracy due to the technical issue; the first round is excluded from all performance scores).
We observe that the collective predictions of the members of the forum have a much better accuracy (and weighted accuracy) than any model in this study (machine-learning or benchmark). This is a known phenomenon in collective behaviour, termed the ``wisdom of the crowds'' \cite{Surowiecki2005woc}, observed in social networks and rumour detection \cite{Giasemidis:2016dtv} among others. For this study, we adapt the term and refer to it as ``the wisdom of the \textit{basketball} crowds''.
\begin{table}[]
\centering
\begin{tabular}{l||r|r}
& Accuracy & Weighted Accuracy \\
\hline
Model 1 & 0.6607 & 0.6113 \\
Model 2 & 0.6563 & 0.6199 \\
Model 3 & 0.6696 & 0.6331 \\
Benchmark 1 & 0.6518 & 0.5000 \\
Benchmark 2 & 0.7188 & 0.6449 \\
Benchmark 3 & 0.6786 & 0.7027 \\
Majority Vote of the Members & 0.7321 & 0.6811
\end{tabular}
\caption{Accuracy and weighted accuracy of the three models, three benchmarks and the forum results.}
\label{tbl:validation-scores-woc}
\end{table}
\label{sec:introduction}
With the growing desire to travel on a day-to-day basis, the number of accommodation offers a user can find on the Web is increasing significantly. It has therefore become important to help travellers choosing the right accommodation to stay among the multitude of available choices. Customizing the offer can lead to a better conversion of the offers presented to the user. Recommender Systems play an important role in order to filter-out the undesired content and keep only the content that a user might like. In particular, in a user's navigation settings, session-based recommender systems can help the user to more easily find the elements she wants based on the actions she has performed. Most popular recommender system approaches are based on historical interactions of the user. This user's history allows to build a long-term user profile. However, in such setting, users are not always known and identified, and we do not necessarily have long-term user profile for all users. Traditional models propose to use item nearest neighbor schemes to overcome this user cold start problem \cite{Linden03} or association rules in order to capture the frequency of two co-occurring events in the same session \cite{Agrawal93}. In recent years, some research works have focused on recurrent neural networks (RNNs) \cite{hidasi15} considering the sequence of user's actions as input of the RNN. The RNN learns to predict the next action given a sequence of actions.
Inspired by this recent work, we propose a many-to-one recurrent neural network that predicts whether or not the last item in a sequence of actions is observed or not as illustrated in figure~\ref{fig:Many_to_one_rnn}. More formally, our RNN returns the probability that a user clicks on an item given the previous actions made in the same session: $P(r_{t}|ar_{t-1}, ar_{t-2}, ..., ar_{0})$, where $r_{t}$ is the item referenced by $40225$ in this example (see figure~\ref{fig:Many_to_one_rnn}). This value is then combined with a rule-based approach that explicitly places the elements seen in the previous steps at the top of the accommodation list displayed to the user. To the best of our knowledge, this is the first time where many-to-one RNN architecture is used for session-based recommendation, and this represents our main contribution.
\begin{figure}[h]
\includegraphics[width=110mm]{Many_to_one_rnn.PNG}
\caption{Many-to-one Recurrent Neural Network}
\label{fig:Many_to_one_rnn}
\end{figure}
The remainder of the paper is organized as follows. Section~\ref{sec:dataset} provides an exploratory data analysis of the Trivago dataset\footnote{\url{http://www.recsyschallenge.com/2019/}}. In Section~\ref{sec:approach}, we present the approach to build the many-to-one recurrent neural network and combine it with the rule-based approach. Section~\ref{sec:experiments} presents the experiments carried out to show the effectiveness of our model and an empirical comparison with some session-based recommender baseline methods. Finally, in Section~\ref{sec:conclusion}, we provide some conclusions and we discuss future works.
\section{Dataset}
\label{sec:dataset}
The first part of our work consists of conducting an exploratory data analysis to understand user behavior on the Trivago website\footnote{\url{https://www.trivago.com}} and then interpret the results we will obtain from our models. The dataset published for the challenge consists of interactions of users browsing the trivago website collected from 01-11-2018 to 09-11-2018 (9 days). More precisely, for a given session, we have the sequence of actions performed by the user, the filters applied, the accommodation list displayed to the user when performing a clickout action, plus the price of each accommodation in the list. In addition to this information, we have two contextual features: the device and the platform used by the user to perform the searches. The remainder of this section presents statistics and overviews of training data.
\textbf{General statistics on training data:} We report in Figure~\ref{fig:Stats_Trivago} five summary statistics of different variables that characterize user sessions. These statistics highlight two important observations:
\begin{itemize}
\item \textit{Dispersion}: The number of actions per session has a high standard deviation which means that the data is highly spread. For all the variables, we also have a very high maximum value which demonstrates the skewness of users' behaviors.
\item \textit{Actions required for 'Clickout Action'}: On average, a user performs $17.5$ actions in a session. However, the average number of actions needed to perform a 'Clickout Action' is only $8$, so what do the rest of the clicks correspond to? In more than $72 \%$ of cases, the last performed action in a session is a 'Clickout Action'. However, in $28 \%$ of all sessions, there are other actions following the clickout action.
\end{itemize}
\begin{comment}
\begin{table}
\caption{Dataset Characteristics}
\label{tab:Variables}
\begin{tabular}{cc}
\toprule
Variable&Value\\
\midrule
Number of users & 730 803\\
Number of sessions & 310 683\\
Number of actions & 15 932 992\\
Number of different accommodations & 927 142\\
Number of different destinations & 34 752\\
Number of different platforms & 55\\
\bottomrule
\end{tabular}
\end{table}
\end{comment}
\squeezeup
\begin{figure}[h]
\includegraphics[width=\linewidth]{Stats_Trivago.PNG}
\caption{Statistics on Trivago dataset}
\label{fig:Stats_Trivago}
\end{figure}
\squeezeup
\textbf{Filters and sort by actions:} One could also be interested in knowing more about some variables' distributions. We have plotted the histogram of the 15 most used filters. The observed distribution does not follow a long-tail distribution and all filters are used in more or less similar proportions. We can thus infer that there are different types of user behavior. More specifically, we compute the ratio of sessions where users use filter or sort buttons: this ratio is equal to $14 \%$, which represents a significant subset of the data. We also compute the average number of clickout actions performed per session for each platform and notice that there is a significant difference between people searching for accommodation using the Japan platform ($8.7$ clickout actions) and the Brazil one ($23.9$ clickout actions). Finally, we compute the average time a user spends in a session ($8$ minutes), and we notice that there is a high standard deviation for this variable ($22$ minutes), which again demonstrates the dispersion of users' sessions. This leads us to the conclusion that there are different user profiles and behaviors. For example, we have users who need a lot of actions to finally perform a clickout, users who perform volatile clicks, users who have to look at the images of the accommodation and then click on it, etc. Explicitly adding this information to our model can help to more effectively predict the user's clickout element. In addition, the idea of having a different model for each type of user is something that should be experimented with.
\textbf{Accommodation Content:} In addition to information on user sessions, a description of the accommodation is also provided. This enriches the input data of our recommendation system. We have $157$ different properties that describe an accommodation (wifi, good rating, etc.). We use these properties to enrich the input data of the RNN as illustrated in Figure~\ref{fig:acc_cnt}.
\section{Approach}
\label{sec:approach}
Our approach is a two-stage model that consists of computing a score for each element of the impression list displayed to the user when she performs a clickout, based on the RNN, and then applying a rule-based algorithm to the ranked list returned by the RNN (Figure~\ref{fig:acc_cnt}).
\begin{figure}[h]
\includegraphics[width=100mm,scale=0.5]{Gen_Framework.PNG}
\caption{Our approach}
\label{fig:acc_cnt}
\end{figure}
\squeezeup
\subsection{Recurrent Neural Network}
\label{subsec:RNN}
Recurrent neural networks are widely used for many NLP tasks such as named entity recognition, machine translation or semantic classification \cite{TaiSM15}. Indeed, this architecture works very well when it comes to recognizing sequence-based patterns and predicting the following element from a sequence of previous elements. It was therefore natural to use this neural network architecture to predict the next click based on the sequence of actions performed by the user. However, unlike \cite{Sutskever11}, we consider our problem as a binary classification problem instead of a multi-label classification problem. More precisely, the RNN takes as input a sequence of actions with their corresponding references, each represented by a one-hot encoding vector and fed into a fully connected neural network in order to compute the (action, reference) embeddings, plus the last action that corresponds to a clickout with its reference, and then returns $P(r_{t}|ar_{t-1}, ar_{t-2}, ..., ar_{0})$, where $ar_{i}$ denotes the (action, reference) pair made by the user at step $i$. This probability indicates whether the user has clicked on the accommodation $r_{t}$ given the sequence of previous (action, reference) pairs $(ar_{t-1}, ar_{t-2}, ..., ar_{0})$. Therefore, for each clickout action, our RNN returns a score for each item in the accommodation list, and the list of accommodations is reordered in decreasing order of these scores.
In addition to the sequence of actions performed by the user, we first enrich our input data with the content of the accommodation, and then we add the contextual information of the session as shown in Figure~\ref{fig:rnn_2}. The content information is represented using a one-hot encoding technique where each element of the vector corresponds to a property (e.g. wifi, restaurant, etc.) of the accommodation. We use the device and the platform as session-contextual information. These two categorical features are one-hot encoded as well and fed into a multi-layer perceptron (MLP) as represented in Figure~\ref{fig:rnn_2}. The MLP used is a 2-layer feed-forward neural network. The size of each layer is optimized using grid search as specified in Section~\ref{subsec:results}. We use GRU cells \cite{ChungGCB14} in order to compute the hidden states $h_{t}$ for each step $t$ and a sigmoid function in order to compute the probability score $\hat{y} = \sigma(W_{y} h_{t} + b_{y})$, where $\sigma(x) = \frac{1}{1+e^{-x}}$.
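A minimal sketch of the sequence branch of this architecture (without the content and context inputs) in Keras is shown below; the vocabulary size and layer sizes are illustrative, not the tuned values of the study, and the embedding layer plays the role of the one-hot encoding followed by a fully connected layer:
\begin{verbatim}
import tensorflow as tf

n_pairs = 50000   # size of the (action, reference) vocabulary (illustrative)
emb_dim, hidden_dim = 64, 128

# the input sequence contains the previous (action, reference) pairs followed
# by the candidate clickout pair whose click probability is to be scored
inputs = tf.keras.Input(shape=(None,), dtype="int32")
x = tf.keras.layers.Embedding(n_pairs, emb_dim, mask_zero=True)(inputs)
h = tf.keras.layers.GRU(hidden_dim)(x)                       # many-to-one
output = tf.keras.layers.Dense(1, activation="sigmoid")(h)   # P(click | sequence)

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
\end{verbatim}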
\squeezeup
\begin{figure}[h]
\includegraphics[width=\linewidth]{RNN_2.PNG}
\caption{RNN and MLP combination for content and context information}
\label{fig:rnn_2}
\end{figure}
\squeezeup
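To make the preceding description concrete, the following is a minimal sketch of such a scoring network written with the TensorFlow Keras API, in the spirit of Figure~\ref{fig:rnn_2}. The vocabulary size, embedding and layer dimensions, and all variable names are illustrative assumptions rather than the tuned configuration reported in Section~\ref{subsec:results}.
\begin{verbatim}
import tensorflow as tf

# Illustrative dimensions (assumptions, not the tuned hyper-parameters).
VOCAB_SIZE = 100000   # number of distinct (action, reference) pairs
EMB_SIZE = 128        # embedding size of an (action, reference) pair
H_SIZE = 64           # GRU hidden state size
CTX_SIZE = 60         # length of the one-hot device + platform vector

# Sequence of (action, reference) ids observed before the clickout.
seq_in = tf.keras.Input(shape=(None,), dtype="int32", name="ar_sequence")
# Candidate accommodation from the impression list, as an (action, ref) id.
cand_in = tf.keras.Input(shape=(1,), dtype="int32", name="candidate")
# One-hot encoded session context (device, platform).
ctx_in = tf.keras.Input(shape=(CTX_SIZE,), name="context")

emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMB_SIZE)
h = tf.keras.layers.GRU(H_SIZE)(emb(seq_in))       # session representation
cand = tf.keras.layers.Flatten()(emb(cand_in))     # candidate embedding
ctx = tf.keras.layers.Dense(64, activation="relu")(ctx_in)
ctx = tf.keras.layers.Dense(32, activation="relu")(ctx)   # 2-layer MLP

merged = tf.keras.layers.Concatenate()([h, cand, ctx])
score = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model([seq_in, cand_in, ctx_in], score)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy")
\end{verbatim}
At inference time, every item of the impression list is scored as the candidate input and the list is sorted by the predicted probability, which is exactly the reordering step described above.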
\subsection{Rule-based Algorithm}
\label{subsec:rule-based algo}
In contrast to the RNN, which predicts a score for each element in the accommodation list, the rule-based algorithm simply reorders the accommodation list based on the items the user explicitly interacted with in previous actions. The motivation for this rule-based algorithm was the preliminary analysis of the data, which revealed an interesting and recurrent pattern: in several sessions, users who interacted with an accommodation performed a clickout action on this same accommodation slightly later, where interacting with an accommodation means performing one of the following actions: \{Interaction item ratings, Interaction item deals, Interaction item image, Interaction item information, Search for item, Clickout item\}.
More precisely, among the elements of the accommodation list, we consider those that were interacted with before the clickout action and call this set of elements $I_{acc}$. The closer an element of $I_{acc}$ is to the clickout action, the higher it is placed in the list, as illustrated in Figure~\ref{fig:acc_cnt}.
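A minimal sketch of this reordering rule is given below; the function and variable names are ours and merely illustrate the logic described in the text.
\begin{verbatim}
def rule_based_reorder(impressions, session_actions, interaction_types):
    """Move previously interacted items to the front of the impression list.

    session_actions is the chronological list of (action_type, reference)
    pairs preceding the clickout; items interacted with more recently are
    ranked higher, as described in the text.
    """
    interacted = []
    for action_type, reference in reversed(session_actions):  # most recent first
        if (action_type in interaction_types
                and reference in impressions
                and reference not in interacted):
            interacted.append(reference)
    remaining = [item for item in impressions if item not in interacted]
    return interacted + remaining

# Example: the most recently touched item ends up on top of the list.
impressions = ["A", "B", "C", "D"]
actions = [("interaction item image", "C"), ("search for item", "B")]
types = {"interaction item image", "search for item", "clickout item"}
print(rule_based_reorder(impressions, actions, types))  # ['B', 'C', 'A', 'D']
\end{verbatim}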
\section{Experiments}
\label{sec:experiments}
Trivago published users' session data split into a training and a test set. The training set has been fully used to train our RNN. The test set, as proposed by the organizers\footnote{\url{http://www.recsyschallenge.com/2019/Dataset}}, has been split into a validation set, used to compute scores of our model locally, and a confirmation set, which is the subset of the data used to submit the results on the submission page\footnote{\url{http://www.recsyschallenge.com/2019/submission}}.
As proposed by the organizers, we have used the mean reciprocal rank (MRR) as the metric to evaluate the algorithms, which can be defined as follows:
\begin{equation}
MRR = \frac{1}{n} \sum_{t=1}^{n} \frac{1}{rank(c_{t})},
\end{equation}
where $c_{t}$ represents the accommodation that the user clicked on in the session $t$.
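For completeness, this metric can be computed with a few lines of Python; the variable names below are illustrative.
\begin{verbatim}
def mean_reciprocal_rank(ranked_lists, clicked_items):
    """MRR over sessions: ranked_lists[i] is the predicted ordering of the
    impression list for session i, clicked_items[i] the item clicked on."""
    total = 0.0
    for ranking, clicked in zip(ranked_lists, clicked_items):
        rank = ranking.index(clicked) + 1   # ranks start at 1
        total += 1.0 / rank
    return total / len(ranked_lists)

# Clicked items at ranks 1 and 2 give MRR = (1 + 0.5) / 2 = 0.75.
print(mean_reciprocal_rank([["A", "B"], ["C", "D"]], ["A", "D"]))
\end{verbatim}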
We also implement a set of baseline methods (Section~\ref{sec:baselines}) with which we compare our method.
\subsection{Baseline Methods}
\label{sec:baselines}
Widely used recommender systems are based on a long-term user history, which leads one to implement methods such as matrix factorization \cite{Koren09} as a baseline. However, in session-based recommendation, we do not have such a long history of past user interactions \cite{Ludewig18}. Different baselines were therefore implemented in the session-based recommendation setting, as proposed in \cite{Ludewig18}, and are described below:
\begin{itemize}
\item Association rules \cite{Agrawal93}: Association rules capture the frequency with which two events co-occur in the same session. The recommender outputs a ranking of next items given the current item.
\item Markov Chains \cite{norris97}: Similarly to association rules, Markov chains also capture co-occurring events in the same session, but only take into account two events that directly follow one another (in the same session).
\item Sequential rules \cite{Kamehkhosh17}: This method is similar to association rules and Markov chains in that it captures the frequency of co-occurring events, but it adds a weight term that captures the distance between the two events (a minimal sketch of this family of baselines is given after this list).
\item IKNN \cite{hidasi15}: Each item is represented by a sparse vector $V_{i}$ of length equal to the number of sessions, where $V_{i}[j] = 1$ if the item is seen in session $j$ and $0$ otherwise.
\end{itemize}
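To illustrate how lightweight these baselines are, the sketch below implements the sequential-rules variant, i.e., co-occurrence counts weighted by the distance between the two items within a session. The inverse-distance weighting and the variable names are our own illustrative choices.
\begin{verbatim}
from collections import defaultdict

def fit_sequential_rules(sessions):
    """Learn item-to-item weights from session item sequences; co-occurring
    pairs are weighted by 1 / (distance between the two items)."""
    rules = defaultdict(lambda: defaultdict(float))
    for items in sessions:
        for i, current in enumerate(items):
            for j in range(i + 1, len(items)):
                rules[current][items[j]] += 1.0 / (j - i)
    return rules

def recommend(rules, current_item, candidates):
    """Rank candidate items by their learned weight w.r.t. the current item."""
    scores = rules.get(current_item, {})
    return sorted(candidates, key=lambda it: scores.get(it, 0.0), reverse=True)

sessions = [["A", "B", "C"], ["A", "C"], ["B", "C"]]
rules = fit_sequential_rules(sessions)
print(recommend(rules, "A", ["B", "C"]))   # ['C', 'B'] for these toy sessions
\end{verbatim}
Dropping the distance weight (counting each pair as $1$) recovers the association-rules baseline, and restricting pairs to adjacent items recovers the Markov-chain baseline.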
\subsection{Experimental Results}
\label{subsec:results}
\textbf{Implementation Framework \& Parameter Settings:}
Our model and all the baselines are implemented using Python and the TensorFlow library\footnote{Python Tensorflow API: \url{https://www.tensorflow.org}}. The hyper-parameters of the RNN were tuned using a grid-search algorithm. First, we initialized all the weights randomly from a Gaussian distribution ($\mu = 0$, $\sigma = 0.01$), and we used the mini-batch Adam optimizer \cite{Kingma14}. It is worth mentioning that other optimizers were tested; however, the Adam optimizer proved to be the most efficient in both time and accuracy. We evaluated our model using different values for the hyper-parameters:
\begin{itemize}
\item Size of $ar$ embeddings: $E\_size \in \{64, 128, 256, 512\}$
\item Hidden state: $h\_size \in \{64, 128\}$
\item Batch size: $B\_size \in \{32, 64, 128, 256, 512\}$
\item Number of epochs: $epochs \in \{5, 10, 15, 20\}$
\item Learning rate: $l_{r} \in \{0.0001, 0.0005, 0.001, 0.005, 0.01\}$
\item MLP layers size: $l\_sizes \in \{[256, 128], [128, 64], [64, 32]\}$
\end{itemize}
\textbf{Results \& Discussion:}
The results are reported in Table~\ref{tab:results}. The scores correspond to an average over numerous runs of the MRR metric computed on the validation set proposed by the organizers. The rule-based approach is the most effective, with a score of $0.56$. The ranking of these methods remains the same on the confirmation set, where the best score was obtained using the rule-based algorithm ($MRR = 0.648$). The RNN does not work as well as expected, even when session context information and accommodation meta-data are added to the model. Association rules and sequential rules give promising results, $0.52$ and $0.51$ respectively, whereas Markov chains only reach a score of $0.34$. This shows that it is more important to consider all the elements seen in a session as close to each other than to consider only those seen sequentially close to each other. Finally, our approach of combining the RNN and the rule-based method is less successful than the simple rule-based method. This is because the ordering produced by the RNN is a worse starting point for the rule-based reordering than the original ordering given by Trivago, even though the RNN ordering on its own achieves a better MRR than Trivago's ordering.
\begin{table}
\caption{MRR scores on Validation set}
\label{tab:results}
\begin{tabular}{cl}
\toprule
Model& MRR\\
\midrule
Association Rules & 0.52 \\
Markov Chains & 0.34\\
Sequential Rules & 0.51\\
IKNN & 0.54\\
RNN & 0.49\\
RNN + Metadata & 0.50 \\
RNN + Context & 0.49 \\
RNN + Metadata + Context & 0.50 \\
\textbf{Rule-based} & 0.56 \\
RNN + Metadata + Context + Rule-based & 0.54 \\
\bottomrule
\end{tabular}
\end{table}
\squeezeup
\subsection{Lessons Learned}
\label{subsec:lessons}
The task of predicting which element the user will click on based on performed actions has been treated in a similar way to predicting the next word in a sentence. However, while the context in a sentence is very important and plays a big role in giving two consecutive words a meaning, hence the use of an RNN, we cannot be sure that the context is just as important for our task, especially when we look at the volatility of actions made by users in the same session. This leads us to question our modelling choice, especially when we look at the results obtained by the association rules and the K-nearest neighbors method, which are better than the Markov chain and sequential rule methods. This suggests that the succession of actions is not as important as assumed at the beginning of our study, and that simply considering the set of actions rather than the sequence of actions could have been better.
The second important point to emphasize is the dispersion of user behavior on the website: when analyzing the data, we noticed that there are several types of users, which makes it complicated to build a single model for all types of users and to find a pattern that generalizes all the different behaviors in order to make accurate predictions for our task. The idea proposed during the data analysis, namely to create different models per user type, therefore seems promising as well.
Lastly, the simple rule-based method is the most efficient and is not far from the method that obtained the best result in this challenge (0.648 against 0.689). Given that this method does not require any learning, nor much computation time, it is worth using it for obvious use cases such as the example shown in Figure~\ref{fig:acc_cnt}.
\squeezeup
\section{Conclusion and Future Work}
\label{sec:conclusion}
Recommender systems help users to find relevant elements of interest to facilitate navigation and thus increase the conversion rate of e-commerce sites or build user loyalty on streaming video sites. In this challenge, the aim is to help the user to easily find the accommodation she is looking for, and to place it at the top of the list of accommodations that are proposed to her, given the actions previously performed in a session. However, with only the actions performed by the user plus some information related to the context of the session, such as the user's device or Trivago's platform, such a task is hard, especially in the world of travel where context is very important, such as the season in which a user travels and whether this user travels alone, in a group or with a family. In this paper, we presented the D2KLAB solution that implements a two-stage model composed of a one-to-many recurrent neural network that predicts a score for each item in the accommodation list to reorder the list given as input, followed by a rule-based algorithm that reorders this list once more based on explicit prior actions performed in the session. The implementation of our method is publicly available at \url{https://gitlab.eurecom.fr/dadoun/hotel_recommendation}.
One major outstanding question remains regarding the applicability of the methods designed for this challenge: once a user has interacted with an element of the accommodation list and has performed a clickout action on an element of this list, which redirects her to the merchant website, it is already too late to change the order of this list. Furthermore, having removed three features (namely, stay duration, number of travelers and arrival date) from the dataset makes those models not representative of the real traffic. In future work, we would like to exploit session actions as a set of actions instead of a sequence of actions and use an algorithm that combines different types of inputs, namely collaboration, knowledge and contextual information, in a learning model as proposed in \cite{Dadoun19}. We also envision to better segment users and to build a model per user segment.
\newpage
\bibliographystyle{ACM-Reference-Format}
\section{Game State Features}
Besides the final set of features listed in Section~4.1, we considered a large list of additional game state features. These are listed below.\\
\noindent\textbf{Offensive strength}
\begin{itemize}
\item Offensive strength: A team's estimated offensive strength, according to the ODM model~\cite{govan2009} (relative to the competition).
\item Number of shots: The total number of shots a team took during the game.
\item Number of shots on target: The total number of shots on target a team took during the game.
\item Number of well positioned shots: The number of shots attempted from the middle third of the pitch, in the attacking third of the pitch.
\item Opportunities: The total number of goal scoring opportunities (as annotated in the event data).
\item Number of attacking passes: The total number of passes attempted by a team in the attacking third of the field.
\item Attacking pass success rate: The percentage of passes attempted in the attacking third that were successful.
\item Number of crosses: The total number of made crosses by a team.
\item Balls inside the penalty box: Number of actions that end in the opponent's penalty box.
\end{itemize}
\noindent\textbf{Defensive strength}
\begin{itemize}
\item Defensive strength: A team's estimated defensive strength, according to the ODM model (relative to the competition).
\item Tackle success rate: The percentage of all attempted tackles that were successful.
\end{itemize}
\noindent\textbf{Playing style}
\begin{itemize}
\item Tempo: The number of actions per interval.
\item Average team position: Average x coordinate of the actions performed.
\item Possessions: The percentage of actions in which a team had possession of the ball during the game.
\item Pass length: The average length of all the attempted passes by a team.
\item Attacking pass length: The average length of all attacking passes attempted by a team.
\item Percentage of backward passes: The percentage of passes with a backward direction.
\item Number of attacking tackles: The total number of tackles attempted in the attacking third.
\item Attacking tackle success rate: The percentage of tackles attempted in the attacking third that were successful.
\end{itemize}
\noindent\textbf{Game situation}
\begin{itemize}
\item Time since last goal: The number of time frames since the last scored goal.
\end{itemize}
\section{World Cup Predictions}
FiveThirtyEight's win probability estimates for the 2018 World Cup are the only publicly available\footnote{FiveThirtyEight's predictions can be found at \url{https://projects.fivethirtyeight.com/2018-world-cup-predictions/matches/}} in-game win probability estimates for football. To compare our model against these predictions, we apply our Bayesian statistical model to the 48 games of the 2018 World Cup group stage using the event data provided by StatsBomb.\footnote{Data repository at \url{https://github.com/statsbomb/open-data}} We only look at the group stage because the knockout stage does not allow for ties and has extra time, which would require a different modelling approach.
\begin{figure}[!h]
\centering
\begin{subfigure}[t]{.45\linewidth}
\includegraphics[width=\linewidth]{fig/worldcup18_acc.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.45\linewidth}
\includegraphics[width=\linewidth]{fig/worldcup18_RPS.pdf}
\end{subfigure}
\caption{A comparison between FiveThirtyEight's in-game predictions for the group stage of the 2018 World Cup and our Bayesian model.}
\label{fig:538}
\end{figure}
Figure~\ref{fig:538} compares both models in terms of accuracy and RPS. The differences can mainly be attributed to FiveThirtyEight's better pre-game prediction model. Their SPI ratings, which form the underlying basis for the pre-game projections, are based on a combination of national team results and an estimate of each team's individual player abilities using club-level performance. At least for the 2018 World Cup, these outperformed a simple Elo-based system~\cite{robberechts2018}. Near the end of each game, when the importance of these pre-game ratings decreases, our model performs similarly to FiveThirtyEight's. Another interesting observation is that both RPS and accuracy improve suddenly in the final 10\% of the game, indicating that most games in the World Cup could go either way until the final stages of the game. For domestic league games, the increase in accuracy over time is much more gradual.
\section{``A 2-0 lead is the worst lead''}
Win probability can be used to debunk some of the most persistent football myths, such as ``2-0 is the worst lead'', ``10 do it better'' and ``No better moment to score a goal than just before half time''. Perhaps unsurprisingly, none of these appear to be true. In this section we focus on the ``2-0 is the worst lead'' myth, which we introduced in the introduction of the main article. To debunk this myth, we adapt the game state of all the games in our dataset such that the scoreline is 2-0 at each moment in the game. This allows us to capture a large range of possible game states (i.e., strong team leading, weak team leading, multiple players red carded, \ldots). In Figure~\ref{fig:myth_2_0_worst_lead}, we show the win probabilities for all resulting match scenarios. A 2-0 lead appears to be a very safe lead, much safer than a 1-0 lead or 2-1 lead. A team leading 2-0 has a minimal win probability of 70\%, but in most scenarios (and especially in the second half) the probability is 90\% or higher.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{fig/2_0_worst_lead.pdf}
\caption{``2-0 is the worst lead'', at least according to the clich\'e. In reality, a 2-0 lead is much safer than a 1-0 or 2-1 lead, which are the true ``worst leads''. A 2-0 lead in the second half guarantees a win probability close to 90\%.}
\label{fig:myth_2_0_worst_lead}
\end{figure}
\section{Introduction}
``A 2-0 lead is the worst lead'' is a clich\'e commonly used in association football. This saying implies that the chances a team will lose (or at best draw the game) are maximized compared to other leads.\footnote{\url{https://en.wikipedia.org/wiki/2-0_lead_is_the_worst_lead}} The underlying idea is that a team leading 2-0 will have a false sense of security and therefore become complacent.
In contrast, a team leading 1-0 will tend to concentrate and play with intensity to protect or extend their narrow lead, whilst teams leading by three or more goals have a sufficiently large buffer that comebacks are unlikely.
This is an interesting theory and one might wonder whether there is any statistical proof to it. An adequate answer obviously depends on several parameters, such as the time remaining in the match and the relative strengths of both teams. For example, the win probability of a team that leads by two goals with less than a minute remaining in the game should be approaching 100\%. How would that probability differ if the team leads by two goals at half time, or with ten minutes remaining? Or with 15 minutes and 31 seconds remaining and having had one player red carded? Such questions can be answered by an in-game win-probability model, which provides the likelihood that a particular team will win a game based upon a specific game state (i.e., score, time remaining, \ldots).
In-game win probability models have become increasingly popular in a variety of sports over the last decade. Nowadays, in-game win probability is widely used in baseball, basketball and American football. It has a number of relevant use-cases within these sports' ecosystems. First, the win probability added (WPA) metric computes the change in win probability between two consecutive game states. It allows one to rate a player's contribution to his team's performance~\cite{pettigrew2015,lindholm2014}, measure the risk-reward balance of coaching decisions~\cite{lock2014,morris2017}, or evaluate in-game decision making~\cite{mcfarlane2019}. Second, win probability models can improve the fan experience by telling the story of a game.\footnote{ESPN includes win probability graphs in its match reports for basketball (e.g., \url{http://espn.com/nba/game?gameId=401071795}) and American football (e.g., \url{http://espn.com/nfl/game?gameId=401030972})} For example, they can help identify exciting or influential moments in the game~\cite{schneider2014}, which may be useful for broadcasters looking for game highlights. Third, they are relevant to in-game betting scenarios. Here, gamblers have the option to continue to bet once an event has started, and adapt their bets depending on how the event is progressing. This became a popular betting service in many countries, and is estimated to account for over one-third of online betting gross gambling yield in Britain~\cite{inplaybetting}.
While well established in these American sports, in-game win probability is a relatively new concept in association football. It first emerged during the 2018 World Cup when both FiveThirtyEight and Google published such predictions. The lack of attention to win probability in association football can probably be attributed to its low-scoring nature and high probability of ties, which makes the construction of a good in-game win probability model significantly harder in comparison to the aforementioned sports. Unfortunately, FiveThirtyEight and Google do not provide any details about how they tackled those challenges.
We present a machine learning approach for making minute-by-minute win probability estimates for association football. By comparing with state-of-the-art win probability estimation techniques in other sports, we introduce the unique challenges that come with modelling these probabilities for football. In particular, it involves challenges such as capturing the current game state, dealing with stoppage time, the frequent occurrence of ties and changes in momentum. To address these challenges we introduce a Bayesian model that models the future number of goals that each team will score as a temporal stochastic process. We evaluate our model on event stream data from the four most recent seasons of the major European football leagues. Finally, we introduce two relevant use cases for our win probability model: a ``story stat'' to enhance the fan experience and a tool to quantify player performance in the crucial moments of a game.
\section{Related Work}
Win probability models emerged in Major League Baseball as early as the 1960s~\cite{lindsey1961}. Baseball can be easily analyzed as a sequence of discrete, distinct events rather than one continuous event. Each point of the game can be characterized by a game-state, which typically captures at least the following information: the score differential, the current inning, the number of outs and which bases are occupied by runners. Given the relatively low number of distinct game-states and the large dataset of historical games,\footnote{Baseball records go back more than 100 years.} it is possible to accurately predict the probability of winning based on the historical final results for that game-state.
However, defining the game-state in other sports is not as straightforward as it is in baseball. Most sports, like football, basketball, and ice hockey, run on continuous time, and both score differentials and game states can be extremely variable. Typically, win probability models in these sports use some regression approach to generate predictions for previously unseen game states.
Stern~\cite{stern1991} proposed one of the first win probability models in American football. Using data from the 1981, 1983 and 1984 NFL seasons, he found that the observed point differential (i.e., the winning margin) is normally distributed with a mean equal to the pregame point spread and a standard deviation of around 14 points. The probability that a team favoured by $p$ points wins the game can then be estimated from this distribution. PFR~\cite{PFR2012} adjusted this model to incorporate the notion of expected points (EP). The EP captures the average number of points a team would expect to score on its current drive based on the game state (down, distance, quarter, time left, etc.). It adjusts the current score difference by adding the EP to it, which yields a de facto ``current expected margin'' based on the game conditions. The more recent publicly available models either use a linear logistic regression~\cite{burke2010,pelechrinis2017} or non-linear random forest model~\cite{lock2014} using various features such as the score difference, time remaining, field position, down, the Las Vegas spread and the number of time-outs remaining.
Basketball win probability models are quite similar to the ones employed for American football. The ``standard'' model of NBA win probability considers game time, point differential, possession, and the Vegas point spread and predicts the match outcome using a logistic regression model.\footnote{\url{http://www.inpredictable.com/2015/02/updated-nba-win-probability-calculator.html}} However, several authors have noted that while a single logistic regression model works reasonably well for most of the game, such an approach performs poorly near the end of a game. Seemingly, this occurs because the model misses the fact that non-zero score differentials at the end of games are deterministic. The crux of this issue lies in the fact that there is a non-linear relationship between time remaining and win-probability. Both Bart Torvik\footnote{\url{http://adamcwisports.blogspot.com/2017/07/how-i-built-crappy-basketball-win.html}} and Brian Burke\footnote{\url{http://wagesofwins.com/2009/03/05/modeling-win-probability-for-a-college-basketball-game-a-guest-post-from-brian-burke/}} have solved this issue by partitioning the game into fixed-time intervals and learning a separate logistic regression model for each interval.
Recently, Ganguly and Frank~\cite{ganguly2018} have identified two other shortcomings of the existing basic win probability models. They claim that they (1) do not incorporate sufficient context information (e.g., team strength, injuries), and (2) lack a measure of uncertainty. To address these issues, they introduce team lineup encodings and an explicit prediction of the score difference distribution.
In ice hockey, which has the most similar score progression to association football, win probability models are less well established. A basic one was proposed by Pettigrew~\cite{pettigrew2015}. The bulk of his model estimates the win probability from a historical average for a given score differential and time remaining, but the model also takes power plays into account by using conditional probabilities.
A third approach to win probability prediction estimates the final score line by simulating the remainder of the game. For example, Rosenheck~\cite{rosenheck2017} developed a model that considers the strength of offence and defence, the current score differential and the field position to forecast the result of any possession in the NFL. This model can be used to simulate the rest of the game several times to obtain the current win probability. {\v{S}}trumbelj and Vra{\v{c}}ar~\cite{vstrumbelj2012} did something similar for basketball. These approaches can be simplified by estimating the scoring rate during each phase of the game instead of looking at individual possessions. For example, Buttrey et al.~\cite{buttrey2011} estimate the rates at which NHL teams score using the offensive and defensive strength of the teams playing, the home-ice advantage, and the manpower situation. The probabilities of the different possible outcomes of any game are then given by a Poisson process. Similarly, Stern~\cite{stern1994} assumes that the progress of scores is guided by a Brownian motion process instead, and applied this idea to basketball and baseball. These last two approaches are most similar to the model that we propose for football. However, in contrast to the aforementioned methods, we consider the in-game state while estimating the scoring rates and assume that these rates change throughout the game and throughout the season.
\section{Task and Challenges}
This paper aims to construct an in-game win probability model for football games. The task of a win probability model is to predict the probability distribution over the possible match outcomes given the current game state. In football, this corresponds to predicting the probability of a win, a draw and a loss.
\iffalse
Formally, the problem can be defined as:
\begin{problem_def}
\probleminput{$x_{t}$: the game state at time $t$ }
\problemquestion{
Estimate probabilities
\begin{itemize}
\item $P(Y = \textrm{win}\ |\ x_{t})$ that the home team will win the game
\item $P(Y = \textrm{tie}\ |\ x_{t})$ that the game will end in a tie
\item $P(Y = \textrm{loss}\ |\ x_{t})$ that the away team will win the game
\end{itemize}
}
\end{problem_def}
\fi
A na\"{i}ve model may simply report the cumulative historical average for a given score differential and time remaining, i.e., the fraction of teams that went on to win under identical conditions. However, such a model assumes that all outcomes are equally likely across all games. In reality, the win probabilities will depend on several features of the game state, such as the time remaining and the score difference as well as the relative strengths of the opponents.
While the same task has been solved for the major American sports, football has some unique distinguishing properties that impact developing a win probability model. We identify four such issues.
\textbf{1. Describing the game state.}
For each sport, win probability models are influenced by the time remaining, the score differential and sometimes the estimated difference in strength between both teams. The remaining features that describe the in-game situation differ widely between sports. American football models typically use features such as the current down, distance to the goal line and number of remaining timeouts, while basketball models incorporate possession and lineup encodings. Since there are no publicly available models for association football, it is unclear which features should be used to describe the game state.
\textbf{2. Dealing with stoppage time.}
In most sports, one always knows exactly how much time is left in the game, but this is not the case for football. Football games rarely last precisely 90 minutes. Each half is 45 minutes long, but the referee can supplement those allotted periods to compensate for stoppages during the game. There are general recommendations and best practices that allow fans to project broadly the amount of time added at the end of a half, but no one can ever be quite certain.
\textbf{3. The frequent occurrence of ties.}
Another unique property of football is the frequent occurrence of ties. Due to its low-scoring nature, football games are often very close, with a margin less than or equal to a single goal. In this setting, the current win-draw-loss state provides essentially zero information about the final outcome: at each moment in time, a win or loss could still be converted into a tie, and a tie could be converted into a win or a loss for either team. Therefore, a model that directly predicts the win-draw-loss outcome breaks down in late-game situations.
\textbf{4. Changes in momentum.}
Additionally, the fact that goals are scarce in football (typically less than three goals per game) means that when they do arrive their impact is often game-changing in terms of the ebb and flow of the game thereafter, how space then opens up and who dominates the ball -- and where they do it. The existing win probability models are very unresponsive to such shifts in the tone of a game.
\section{A Win Probability Model for Football}
In this section, we outline our approach to construct a win probability model for association football. First, we discuss how to describe the game state. Second, we introduce our four win probability models.
The first three models are inspired by the existing models in other sports. With a fourth Bayesian model, we address the challenges described in the previous section.
\subsection{Describing the game state}
\label{sec:features}
To deal with the variable duration of games due to stoppage time, we split each game into $T=100$ time frames, each corresponding to one percent of the game. Each frame captures a reasonable approximation of the game state, since the events that have the largest impact on the game state (goals and red cards) almost never occur multiple times within the same time frame. In our dataset of 6,712 games, only 6 of the 22,601 goals were scored within the same time frame as another goal, and no two players of the same team were red carded within the same time frame. Next, we describe the game state in each of these frames using the following variables:
\begin{enumerate}
\item \textbf{Base features}
\begin{itemize}
\item Game Time: Percentage of the total game time completed.
\item Score Differential: The current score differential.
\end{itemize}
\item \textbf{Team strength features}
\begin{itemize}
\item Rating Differential: The difference in Elo ratings~\cite{hvattum2010} between both teams, which represents the prior estimated difference in strength with the opponent.
\end{itemize}
\item \textbf{Contextual features}
\begin{itemize}
\item Team Goals: The number of goals scored so far.
\item Yellows: Number of yellow cards received.
\item Reds: The difference with the opposing team in number of red cards received.
\item Attacking Passes: A rolling average of the number of successfully completed attacking passes (a forward pass ending in the final third of the field) during the previous 10 time frames.
\item Duel Strength: A rolling average of the percentage of duels won in the previous 10 time frames.
\end{itemize}
\end{enumerate}
The challenge here is to design a good set of contextual features. The addition of each variable increases the size of the state space exponentially and makes learning a well-calibrated model significantly harder. On the other hand, they should accurately capture the likelihood of each team to win the game. The five contextual features that we propose are capable of doing this: the number of goals scored so far gives an indication of whether a team was able to score in the past (and is therefore probably capable of doing it again); a difference in red cards represents a goal-scoring advantage~\cite{cerveny2018}; a weaker team that is forced to defend can be expected to commit more fouls and incur more yellow cards; the percentage of successful attacking passes captures a team's success in creating goal scoring opportunities; and the percentage of duels won captures how effective teams are at regaining possession. Besides these five contextual features, we experimented with a large set of additional features. These features are listed in the appendix.
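As an illustration of how such rolling features can be derived from the event stream, the sketch below assumes a hypothetical data frame with one row per (game, team, time frame) and uses pandas; the column and file names are our own.
\begin{verbatim}
import pandas as pd

# Hypothetical frame-level counts extracted from the event data:
# columns game_id, team, frame, attacking_passes, duels_won, duels_total.
frames = pd.read_csv("frame_counts.csv")
frames = frames.sort_values(["game_id", "team", "frame"])
grouped = frames.groupby(["game_id", "team"])

# Rolling average of successful attacking passes over the last 10 frames
# (shift by one frame if strictly previous frames are required).
frames["attacking_passes_roll"] = grouped["attacking_passes"].transform(
    lambda s: s.rolling(window=10, min_periods=1).mean())

# Rolling share of duels won over the last 10 frames.
won = grouped["duels_won"].transform(lambda s: s.rolling(10, min_periods=1).sum())
tot = grouped["duels_total"].transform(lambda s: s.rolling(10, min_periods=1).sum())
frames["duel_strength"] = won / tot.clip(lower=1)
\end{verbatim}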
\subsection{Applying existing win probability models to football}
At first, association football seems not very different from basketball and American football.
In all these sports, two teams try to score as much as possible while preventing the other team from scoring. After a fixed amount of time, the team that scored most wins. Therefore, it seems straightforward that a model similar to the ones used in basketball and American football could be applied to association football too. We consider three such models:
\begin{enumerate}
\item \textbf{Logistic regression model~\cite{pelechrinis2017,burke2010} (LR).} This is a basic multi-class logistic regression model that calculates the probability of the win, tie and loss outcomes given the current state of the game:
\begin{equation}
P(Y = o | x_{t}) = \frac{e^{\Vec{w}^T \vec{x}_t}}{1 + e^{\Vec{w}^T \Vec{x}_t}},
\end{equation}
where $Y$ is the dependent random variable of our model representing whether the game ends in a win, tie or loss for the home team, $\Vec{x}_t$ is the vector of game state features, and the coefficient vector $\Vec{w}$ contains the weights for each independent variable, estimated from historic match data. A minimal sketch of fitting such a baseline appears after this list.
\item \textbf{Multiple logistic regression classifiers~\cite{burke2009} (mLR).}
This model removes the remaining time from the game state vector and trains a separate logistic classifier per time frame. As such, this model can deal with non-linear effects of the time remaining on the win probability.
\item \textbf{Random forest model~\cite{lock2014} (RF).} Third, a random forest model can deal with non-linear interactions between all game state variables.
\end{enumerate}
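As referenced in the list above, a hedged sketch of the first model, fitted with scikit-learn, is shown below. Feature construction is assumed to follow Section~\ref{sec:features}; the array and file names are illustrative, and the mLR variant simply fits one such classifier per time frame.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training arrays: one row per (game, time frame) with the
# game-state features; y encodes 0 = loss, 1 = tie, 2 = win (home team).
X_train = np.load("X_train.npy")
y_train = np.load("y_train.npy")

lr = LogisticRegression(multi_class="multinomial", max_iter=1000)
lr.fit(X_train, y_train)

# Win/tie/loss probabilities for a single game state; predict_proba returns
# columns in the order of lr.classes_, i.e. [loss, tie, win] here.
x_t = X_train[0:1]
p_loss, p_tie, p_win = lr.predict_proba(x_t)[0]
print(f"P(win)={p_win:.2f}, P(tie)={p_tie:.2f}, P(loss)={p_loss:.2f}")
\end{verbatim}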
\subsection{Our model}
Most existing win probability models use a machine learning model that directly estimates the probability of the home team winning. Instead, we model the number of future goals that a team will score and then map that back to the win-draw-loss probability. Specifically, given the game state at time $t$, we model the probability distribution over the number of goals each team will score between time $t+1$ and the end of the match.
This task can be formalized as:
\begin{problem_def}
\probleminput{A game state $(x_{t, home},\ x_{t, away})$ at time $t$.}
\problemquestion{
Estimate probabilities
\begin{itemize}
\item $P(y_{>t, home} = g\ |\ x_{t, home})$ that the home team will score $g \in \mathbb{N}$ more goals before the end of the game
\item $P(y_{>t, away} = g\ |\ x_{t, away})$ that the away team will score $g \in \mathbb{N}$ more goals before the end of the game
\end{itemize}
such that we can predict the most likely final scoreline $(y_{home},\ y_{away})$ for each time frame in the game as
$
(y_{<t, home}+y_{>t, home},\ y_{<t, away} + y_{>t, away})
$.
}
\end{problem_def}
This formulation has two important advantages. First, the goal difference contains a lot of information and the distribution over possible goal differences provides a natural measure of prediction uncertainty~\cite{ganguly2018}. By estimating the likelihood of each possible path to a win-draw-loss outcome, our model can capture the uncertainty of the win-draw-loss outcome in close games. Second, by modelling the number of future goals instead of the total score at the end of the game, our model can better cope with these changes in momentum that often happen after scoring a goal.
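To make this mapping explicit, the sketch below converts two per-team distributions over future goals into win-draw-loss probabilities, using the Binomial parameterization introduced just below; the scoring intensities, the truncation at ten goals and all names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def outcome_probabilities(goals_home, goals_away, t, T,
                          theta_home, theta_away, max_goals=10):
    """Map Binomial future-goal distributions to home win/tie/loss
    probabilities, given the current score after time frame t."""
    g = np.arange(max_goals + 1)
    p_home = binom.pmf(g, T - t, theta_home)   # future home goals
    p_away = binom.pmf(g, T - t, theta_away)   # future away goals

    joint = np.outer(p_home, p_away)           # independence assumption
    final_home = goals_home + g[:, None]
    final_away = goals_away + g[None, :]

    p_win = joint[final_home > final_away].sum()
    p_tie = joint[final_home == final_away].sum()
    p_loss = joint[final_home < final_away].sum()
    return p_win, p_tie, p_loss

# Example: leading 1-0 at half time (t = 50 of T = 100), equal intensities.
print(outcome_probabilities(1, 0, t=50, T=100,
                            theta_home=0.012, theta_away=0.012))
\end{verbatim}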
We model the expected number of goals that the home ($y_{>t, home}$) and away ($y_{>t, away}$) team will score after time $t$, as independent Binomial distributions:
\begin{equation}
\begin{aligned}
&y_{>t,\mathrm{home}} &\sim B(T-t,\theta_{t,\mathrm{home}}), \\
&y_{>t,\mathrm{away}} &\sim B(T-t,\theta_{t,\mathrm{away}}),
\end{aligned}
\end{equation}
where the $\theta$ parameters represent each team's estimated scoring intensity in the $t$\textsuperscript{th} time frame. These scoring intensities are estimated from the current game state $x_{t, i}$. However, the importance of these game state features varies over time. At the start of the game, the prior estimated strengths of each team are most informative, while near the end the features that reflect the in-game performance become more important. Moreover, this variation is not linear, for example, because of a game's final sprint. Therefore, we model these scoring intensity parameters as a temporal stochastic process. In contrast to a multiple regression approach (i.e., a separate model for each time frame), the stochastic process view allows sharing information and performing coherent inference between time frames. As such, our model can make accurate predictions for events that occur rarely (e.g., a red card in the first minute of the game). More formally, we model the scoring intensities as:
\begin{equation}
\begin{aligned}[c]
\theta_{t, home} &= \invlogit (\Vec{\alpha}_t * x_{t, home} + \beta + \textrm{Ha}) \\
\theta_{t, away} &= \invlogit (\Vec{\alpha}_t * x_{t, away} + \beta)
\end{aligned}
\qquad\qquad
\begin{aligned}[c]
&\alpha_t \sim N(\alpha_{t-1},2) \\
&\beta \sim N(0,10) \\
&\textrm{Ha} \sim N(0,10)
\end{aligned}
\end{equation}
where $\Vec{\alpha}_t$ are the regression coefficients and $\textrm{Ha}$ models the home advantage.
Our model was trained using PYMC3's Auto-Differentiation Variational Inference (ADVI) algorithm~\cite{kucukelbir2017}. To deal with the large amounts of data, we also take advantage of PYMC3's mini-batch feature for ADVI.
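A compact sketch of this model in PyMC3 is given below. The flattened data layout (one row per game, team and time frame), the array and file names, and the assumption that the random walk runs along the first axis of the coefficient array are ours; the mini-batch wiring is omitted for brevity.
\begin{verbatim}
import numpy as np
import pymc3 as pm

T, K = 100, 8   # time frames per game, number of game-state features

# Hypothetical flattened training data: one row per (game, team, frame).
X = np.load("X.npy")              # (N, K) game-state features
t_idx = np.load("t_idx.npy")      # (N,) frame index in 0..T-1
n_rem = np.load("n_rem.npy")      # (N,) remaining frames T - t
y = np.load("y.npy")              # (N,) goals scored after frame t
is_home = np.load("is_home.npy")  # (N,) 1 for home rows, 0 for away rows

with pm.Model() as wp_model:
    # Time-varying coefficients: one Gaussian random walk per feature,
    # so information is shared between neighbouring time frames.
    alpha = pm.GaussianRandomWalk("alpha", sigma=2.0, shape=(T, K))
    beta = pm.Normal("beta", mu=0.0, sigma=10.0)   # intercept
    ha = pm.Normal("ha", mu=0.0, sigma=10.0)       # home advantage

    eta = (alpha[t_idx] * X).sum(axis=1) + beta + ha * is_home
    theta = pm.math.invlogit(eta)                  # scoring intensity

    # Future goals as a Binomial over the remaining time frames.
    pm.Binomial("future_goals", n=n_rem, p=theta, observed=y)

    approx = pm.fit(n=50000, method="advi")        # ADVI, as in the text
    trace = approx.sample(1000)
\end{verbatim}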
\section{Experiments}
The goal of our experimental evaluation is to: (1) explore the prediction accuracy and compare with the various models we introduced in the previous section and (2) evaluate the importance of each feature.
\subsection{Dataset}
The time remaining and score differential could be obtained from match reports, but the contextual features that describe the in-game situation require more detailed data. Therefore, our analysis relies on event stream data. This kind of data is collected manually from watching video feeds of the matches. For each event on the pitch, a human annotator records the event with a timestamp, the location (i.e., a $(x,y)$ position), the type of the event (e.g., pass, shot, foul, \ldots) and the players that are involved. From these data streams, we can extract both the game changing event (i.e., goals and cards), and assess how each team is performing at each moment in the game.
We use data provided by Wyscout from the English Premier League, Spanish LaLiga, German Bundesliga, Italian Serie A, French Ligue 1, Dutch Eredivisie, and Belgian First Division A. For each league, we used the 2014/2015, 2015/2016 and 2016/2017 seasons to train and validate our models. This training set consists of 5967 games (some games in the 2014/2015 and 2015/2016 season were ignored due to missing events). The 2017/2018 season was set aside as a test set containing 2227 games. Due to the home advantage, the distribution between wins, ties and losses is unbalanced. In the full dataset, 45.23\% of the games end in a win for the home team, 29.75\% end in a tie and 25.01\% end in a win for the away team.
To assess the pre-game strength of each team, we scraped Elo ratings from \url{http://clubelo.com}. In the case of association football, the single rating difference between two teams is a highly significant predictor of match outcomes~\cite{hvattum2010}.
\subsection{Model evaluation}
We trained all four models using 3-fold cross validation on the train set to optimize model parameters with respect to the Ranked Probability Score (RPS)~\cite{constantinou2012}:
\begin{equation}
RPS_t = \frac{1}{2} \sum_{i=1}^2(\sum_{j=1}^i p_{t,j} - \sum_{j=1}^i e_j)^2,
\label{eq:RPS}
\end{equation}
where $\Vec{p}_{t} = [P(Y = \text{win}\ |\ x_t),\ P(Y = \text{tie}\ |\ x_t),\ P(Y = \text{loss}\ |\ x_t)]$ are the estimated probabilities at a time frame $t$ and
$\Vec{e}$ encodes the final outcome of the game as a win ($\Vec{e} = [1,1,1]$), a tie ($\Vec{e} = [0, 1, 1]$) or a loss ($\Vec{e} = [0,0,1]$).
This metric reflects that an away win is, in a sense, closer to a draw than to a home win. That means that a higher probability predicted for a draw is considered better than a higher probability for a home win if the actual result is an away win.
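For reference, the per-frame RPS can be computed as follows; the outcome encoding mirrors the cumulative vectors $\Vec{e}$ defined above.
\begin{verbatim}
import numpy as np

def ranked_probability_score(p, outcome):
    """RPS for one prediction; p = [P(win), P(tie), P(loss)] for the home
    team, outcome one of "win", "tie", "loss"."""
    e = {"win": [1, 0, 0], "tie": [0, 1, 0], "loss": [0, 0, 1]}[outcome]
    cum_p, cum_e = np.cumsum(p), np.cumsum(e)
    return 0.5 * np.sum((cum_p[:-1] - cum_e[:-1]) ** 2)

# A confident, correct home-win forecast scores far better than a uniform one.
print(ranked_probability_score([0.8, 0.15, 0.05], "win"))   # 0.02125
print(ranked_probability_score([1/3, 1/3, 1/3], "win"))     # ~0.2778
\end{verbatim}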
\begin{figure}[!b]
\centering
\begin{subfigure}[t]{.45\linewidth}
\includegraphics[width=\linewidth]{fig/calibration_LR.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.45\linewidth}
\includegraphics[width=\linewidth]{fig/calibration_mLR.pdf}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\includegraphics[width=\linewidth]{fig/calibration_RF.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.45\linewidth}
\includegraphics[width=\linewidth]{fig/calibration_bayesian.pdf}
\end{subfigure}
\caption{Only the Bayesian classifier has well calibrated win, draw and loss probabilities.}
\label{fig:pc_existing_models}
\end{figure}
We then evaluate the quality of the estimated win probabilities on the external test set, which is a challenging task.
For example, when a team is given an 8\% probability of winning at a given state of the game, this essentially means that if the game were played from that state onwards a hundred times, the team would be expected to win approximately eight of them. This cannot be assessed for a single game, since each game is played only once. Therefore, for all game states in the test set where our model predicts a win, draw or loss probability of x\%, we calculate the fraction of games that actually ended in that outcome. Ideally, this fraction should be x\% as well when averaged over many games. This is reflected in the probability calibration curves in Figure~\ref{fig:pc_existing_models}. Only our proposed Bayesian classifier has a good probability calibration curve. Among the three other models, the RF classifier performs well, but its predictions for the probability of ties break down in late-game situations. Similarly, the LR and mLR models struggle to accurately predict the probability of ties, and their win and loss probabilities are not well calibrated either.
Besides the probability calibration, we also look at how the accuracy and RPS of our predictions on the test set evolve as the game progresses (Figure~\ref{fig:acc_rps_all_models}). To measure accuracy, we take the most likely outcome at each time frame
\begin{equation}
\argmax_{o \in \{\text{win}, \text{tie}, \text{loss}\}} P(Y = o\ |\ x_t)
\end{equation}
and compare this with the actual outcome at the end of the game. Both the RPS and accuracy of all in-game win probability models improve when the game progresses, as they gain more information about the final outcome. Yet, only the Bayesian model is able to make consistently correct predictions at the end of each game. For the first few time frames of each game, the models' performance is similar to a pre-game logistic regression model that uses the Elo rating difference as a single feature. Furthermore, the Bayesian model clearly outperforms the LR, mLR and RF models.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{.45\linewidth}
\includegraphics[width=\linewidth]{fig/all_acc.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.45\linewidth}
\includegraphics[width=\linewidth]{fig/all_RPS.pdf}
\end{subfigure}
\caption{All models' performance improves as the game progresses, but only our Bayesian model makes consistently correct predictions at the end of each game. Early in the game, the performance of all models is similar to an Elo-based pre-game win probability model.}
\label{fig:acc_rps_all_models}
\end{figure}
Finally, we apply our Bayesian statistical model on the 48 games of the 2018 World Cup group stage and compare our predictions against the ones by FiveThirtyEight. Due to FiveThirtyEight's better pre-game team ratings, their model performs better in the early stages of the game. Our model has a similar performance in the later stages. More details about this evaluation can be found in the appendix.
\subsection{Feature importance}
In addition to the predictive accuracy of our win probability estimates, it is interesting to observe how these estimates are affected by the different features. To this end, we inspect the simulated traces of the weight vector $\vec{\alpha}$ for each feature. In the probabilistic framework, these traces form a marginal distribution on the feature weights for each time frame. Figure~\ref{fig:feat_importance} shows the mean and variance of these distributions. Primarily of note is that winning more duels has a negative effect on the win probability, which is not what one would intuitively expect. Yet, this is not a novel insight.\footnote{\url{https://fivethirtyeight.com/features/what-analytics-can-teach-us-about-the-beautiful-game/}} Furthermore, we notice that a higher Elo rating than the opponent, previously scored goals, yellow cards for the opponent and more successful attacking passes all have a positive impact on the scoring rate. On the other hand, receiving red cards decreases a team's scoring rate. Finally, the effect of goals, yellows and attacking passes increases as the game progresses. Red cards and duel strength have a bigger impact on the scoring rate in the first half. For the difference in Elo rating, mainly the uncertainty about the effect on the scoring rate increases during the game.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig/feature_weights.pdf}
\caption{Estimated mean weight and variance for each feature per time frame.}
\label{fig:feat_importance}
\end{figure}
\section{Use Cases}
In-game win-probability models have a number of interesting use cases. In this section, we first show how win probability can be used as a story stat to enhance fan engagement. Second, we discuss how win probability models can be used as a tool to quantify the performance of players in the crucial moments of a game. We illustrate this with an Added Goal Value (AGV) metric, which improves upon standard goal scoring statistics by accounting for the value each goal adds to the team's probability of winning the game.
\subsection{Fan Engagement}
Win probability is a great ``story stat'' -- meaning that it provides historical context to specific in-game situations and illustrates how a game unfolded. Figure~\ref{fig:bel_jap} illustrates how the metric works for Belgium's illustrious comeback against Japan at the 2018 World Cup. As can be seen, the story of the game can be told right from this win probability graph. It shows how Japan managed to take the lead, shortly after a scoreless first half in which neither team could really threaten the opponent. The opening goal did not faze the Belgians. Their win probability increased as Eden Hazard hit a post. Nevertheless, Japan scored a second on a counter-attack. In the next 15 minutes, Belgium hit a lull, which further increased Japan's win probability. Right when a Belgium win seemed improbable, Belgium got the bit of luck they probably deserved when Vertonghen's header looped over the Japanese keeper. This shifted the momentum of the game in Belgium's favour and five minutes later Belgium were level, before snatching the win with a stunning counter attack in the last second of the game. With this comeback, Belgium created a little bit of history by becoming the first team to come from two goals down to win a World Cup knockout match since 1970.
\setlength{\belowcaptionskip}{-5pt}
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\textwidth]{fig/bel_jap_small.pdf}
\caption{Win probability graph for the 2018 World Cup game between Belgium and Japan.}
\label{fig:bel_jap}
\end{figure}
Undoubtedly, fans implicitly considered these win probabilities too as the game unfolded. Where football fans and commentators have to rely on their intuition and limited experience, win probability stats can deliver a more objective view on these probabilities. Therefore, win probability could be of interest to fans as they watch a game in progress or afterwards, to put the (un)likeliness of certain game situations into a historical context. For example, one could wonder whether Belgium's comeback was truly that exceptional. After all, only 145 World Cup knockout games have been played since that one game in 1970, of which very few (if any) had a similar scenario, making this statistic not very valuable. According to our model, Belgium had a win probability of only about 8\% right before their first goal. This indicates that it was indeed an exceptional performance, although perhaps not as exceptional as the ``once in 50 years'' statistic suggests.
Similarly, win probability can be used to debunk some of the most persistent football myths, such as the earlier introduced ``2-0 is the worst lead'' myth. Perhaps unsurprisingly, a 2-0 lead turns out to be a very safe lead, much safer than a 1-0 or 2-1 lead. The appendix includes the details of this analysis.
\subsection{Quantifying Performance under Mental Pressure}
``Clutch'' performance, or performance in crucial situations is a recurring concept in many sports -- including football. Discussions about which players are the most clutch are popular among fans\footnote{\url{https://www.thetimes.co.uk/article/weight-of-argentinas-collapse-is-too-much-for-even-messi-to-shoulder-wl7xbzd83}} and teams define the ability to perform under pressure as a crucial asset.\footnote{\url{https://api.sporza.be/permalink/web/articles/1538741367341}} However, such judgements are often the product of short-term memory among fans and analysts.
Perhaps the most interesting application of win probability is its ability
to identify these crucial situations. By calculating the difference in win probability between the current situation and the win probability that would result after a goal, one can identify these specific situations where the impact of scoring or conceding a goal would be much greater than in a typical situation~\cite{robberechts2019}. It is a reasonable assumption that these situations correspond to the crucial moments of the game.
To illustrate this idea, we show how win probability can be used to identify clutch goal scorers. The number of goals scored is the most important statistic for offensive players in football. Yet, not all goals have the same value. A winning goal in stoppage time is clearly more valuable than another goal when the lead is already unbridgeable. By using the change in win probability\footnote{We remove the pre-game strength from our win probability model for this analysis. Otherwise, games of teams such as PSG that dominate their league would all start with an already high win probability, reducing a goal's impact on the win probability.} when a goal is scored, we can evaluate how much a player's goal contributions impact their team's chance of winning the game. This leads to the Added Goal Value metric below, similar to Pettigrew's added goal value for ice hockey~\cite{pettigrew2015}.
\begin{equation}
\textrm{AGVp90}_i = \frac{\sum_{k=1}^{K_i} 3 * \Delta P(\text{win} | x_{t_k}) + \Delta P(\text{tie} | x_{t_k}) }{M_i} * 90
\end{equation}
where $K_i$ is the number of goals scored by player $i$, $M_i$ is the number of minutes played by that same player and $t_k$ is the time at which a goal $k$ is scored.
This formula calculates the total added value that occurred from each of player $i$'s goals, averaged over the number of games played. Since both a win and a draw can be an advantageous outcome in football, we compute the added value as the sum of the change in win probability multiplied by three and the change in draw probability. The result can be interpreted as the average boost in expected league points that a team receives each game from a player's goals.
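Computing the statistic is straightforward once the win probability model is available; the sketch below assumes the probability changes have already been extracted at the moment of each goal, and all names are illustrative.
\begin{verbatim}
def agv_per_90(goal_deltas, minutes_played):
    """AGVp90 for one player; goal_deltas is a list of
    (delta_p_win, delta_p_tie) pairs, one per goal scored."""
    added = sum(3.0 * d_win + d_tie for d_win, d_tie in goal_deltas)
    return added / minutes_played * 90.0

# Two goals: a decisive late goal and one scored in a comfortable lead.
print(agv_per_90([(0.45, -0.30), (0.02, -0.01)], minutes_played=1800))
\end{verbatim}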
Figure~\ref{fig:AGVp90} displays the relationship between AGVp90 and goals per game for the most productive Bundesliga, Ligue 1, Premier League, LaLiga and Serie A players who have played at least the equivalent of 20 games and scored at least 10 goals in the 2016/2017 and 2017/2018 seasons. The diagonal line denotes the average AGVp90 for a player with a similar offensive productivity. The players with the highest AGVp90 are Lionel Messi, Cavani, Balotelli, Kane and Giroud. Players such as Neymar, Lewandowski, Lukaku, Mbapp\'e and Mertens have a relatively low added value per goal, while players such as Austin, Balotelli, Dybala, Gameiro and Giroud add more value per goal than the average player.
\setlength{\belowcaptionskip}{-20pt}
\begin{figure}[h]
\centering
\includegraphics[width=0.83\textwidth]{fig/AGVp90_small_alt.pdf}
\caption{The relation between goals scored per 90 minutes and AGVp90 for the most productive Bundesliga, Ligue 1, Premier League, LaLiga and Serie A players in the 2016/2017 and 2017/2018 seasons.}
\label{fig:AGVp90}
\end{figure}
\section{Conclusions}
This paper introduced a Bayesian in-game win probability model for football. Our model uses eight features for each team and models the future number of goals that a team will score as a temporal stochastic process. Our evaluations indicate that the predictions made by this model are well calibrated and outperform the typical modelling approaches that are used in other sports.
The model has relevant applications in sports story telling and can form a central component in the analysis of player performance in the crucial moments of a game.
\section{Introduction}
Adjusting athletic sprinting races for atmospheric drag effects has been
the focus of many past studies [1--18].
Factors influencing the outcome
of the races are both physiological and physical. The latter has included
the effects of wind speed and variations in air density, primarily introduced
through altitude variations. This is due in
part to the International Association of Athletics Federation's (IAAF)
classification of ``altitude assistance'' for competitions at or above
1000~m elevation.
For the 100~m, with the exception of a few references
it is generally accepted that a world
class 100~m sprint time will be 0.10~s faster when run with a 2$~{\rm ms}^{-1}$
tail-wind, where as a rise of 1000~m in altitude will decrease a race
time by roughly 0.03~s. The figures reported in \cite{dapena} are
slight underestimates, although one of the authors has since re-assessed
the calculations and reported values \cite{dapena2000} consistent with
those mentioned above. A 0.135~s correction for this wind speed
is reported in \cite{behncke}, which overestimates the accepted value
by almost 40\%.
Wind effects in the 200~m are slightly more difficult to determine, owing to a lack of data (the race is around a curve, but the wind is measured in only one direction). As a result, the corrections are less certain, but several references \cite{jrm200,quinn1} do agree that a 2$~{\rm ms}^{-1}$ wind will provide an advantage between 0.11-0.12~s at sea level. Altitude adjustments are much more spread out in the literature, ranging between 0.03-0.10~s per 1000~m increase in elevation. The corrections discussed in \cite{frohlich} and \cite{behncke} are mostly overestimates, citing (respectively) 0.75~s and 0.22~s for a 2$~{\rm ms}^{-1}$ tail-wind, as well as 0.20~s and 0.08~s for a 1000~m elevation change (although this latter altitude correction is fairly consistent with those reported here and in \cite{jrm200}).
Conversely, an altitude change of 1500~m is found to assist a sprinter
by 0.11~s in the study presented in \cite{quinn1}.
Several other factors not generally considered in these analyses, which are
nonetheless crucial to drag effects, are air temperature, atmospheric
pressure, and humidity level (or alternatively dew point). There is a
brief discussion of temperature effects in \cite{dapena}, but the authors
conclude that they are largely inconsequential to the 100~m race. However,
the wind and altitude corrections considered therein have been shown to be
underestimates of the now commonly accepted values, so a more detailed
re-evaluation of temperature variations is in order.
This paper will investigate the individual and combined effects
of these three factors, and hence will discuss the impact of {\it density
altitude} variations on world class men's 100~m and 200~m race times.
The results suggest
that it is not enough to rely solely on physical altitude measurements for
the appropriate corrections, particularly in the 200~m event.
\section{The quasi-physical model with hydrodynamic modifications}
The quasi-physical model introduced in \cite{jrm100} involves a set of
differential equations with five degrees of freedom,
of the form
\begin{eqnarray}
\dot{d}(t) & = & v(t) \nonumber \\
\dot{v}(t) & = & f_s + f_m - f_v - f_d~.
\label{quasi}
\end{eqnarray}
where the terms $\{f_i\}$ are functions of both $t$ and $v(t)$. The first
two terms are propulsive (the drive $f_s$ and the maintenance $f_m$), while
the third and fourth terms are inhibitory (a speed term $f_v$ and drag
term $f_d$). Explicitly,
\begin{eqnarray}
f_s & = & f_0 \exp(-\sigma\; t^2)~, \nonumber \\
f_m & = & f_1 \exp(- c\;t)~, \nonumber \\
f_v & =& \alpha\; v(t)~, \nonumber \\
f_d & =& \frac{1}{2} \left(1-1/4\exp\{-\sigma t^2\}\right) \rho A_d\;(v(t) - w)^2~.
\label{terms}
\end{eqnarray}
The drag term $f_d$ is that which is affected by atmospheric conditions. The
component of the wind speed in the direction of motion $w$ is explicitly
represented, while additional variables implicitly modify the term by changing
the air density $\rho$.
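As an illustration, the race time predicted by the model can be obtained by integrating Equations~(\ref{quasi})--(\ref{terms}) numerically. The following sketch (Python/SciPy; not the original implementation) takes the air density $\rho$, the wind speed $w$, and the 100~m parameter values quoted later in Section~\ref{results} as inputs:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def race_time_100m(rho, w, f0=6.10, f1=5.15, sigma=2.2,
                   c=0.0385, alpha=0.323, Ad=0.00288):
    """Integrate d'(t)=v, v'(t)=f_s+f_m-f_v-f_d and return the raw 100 m time."""
    def rhs(t, y):
        d, v = y
        fs = f0 * np.exp(-sigma * t**2)
        fm = f1 * np.exp(-c * t)
        fv = alpha * v
        fd = 0.5 * (1 - 0.25 * np.exp(-sigma * t**2)) * rho * Ad * (v - w)**2
        return [v, fs + fm - fv - fd]

    finish = lambda t, y: y[0] - 100.0   # event: distance reaches 100 m
    finish.terminal, finish.direction = True, 1
    sol = solve_ivp(rhs, (0.0, 15.0), [0.0, 0.0], events=finish, max_step=0.01)
    return float(sol.t_events[0][0])
\end{verbatim}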
The motivation for using time as the control variable in the system of
equations (\ref{terms}) stems from the idea that efficient running of the
short sprints is a time-optimization problem. This has been discussed
in the context of world records in Reference~\cite{keller}.
\subsection{The definitions of altitude}
In general, altitude is not a commonly-measured quantity when considering
practical aerodynamics, simply because the altitude relevant to such
problems is {\it not} the physical elevation. It is much easier to measure
atmospheric pressure, which is implicitly determined by the altitude $z$ via
the differential relationship $dP(z) = -\rho\; g\; dz$, known as the
hydrostatic equation \cite{haughton}. Thus, the pressure
$P(z)$ at some altitude $z$ may be obtained by integration assuming the
functional form of the density profile $\rho$ is known. The air density
can depend non-trivially on factors such as vapor pressure ({\it i.e.}
relative humidity) and temperature.
It is useful to embark on a slight digression
from the analysis to consider several possible definitions of altitude.
Including density altitude, there are at least four possible types of
altitudes used in aerodynamic studies: geometric, geopotential, density,
and pressure. Focus herein will be primarily on the first three.
Both geometric and geopotential altitudes are determined independently
of any atmospheric considerations. Geometric altitude is literally the
physical elevation measured from mean sea level to a point above the surface.
Geopotential altitude could just as easily be defined as {\it equipotential}
altitude. It is defined as the radius of a specific (gravitational)
equipotential surface which surrounds the earth.
Density and pressure altitude, on the other hand, are defined as the altitudes
in the standard atmosphere at which a given measured density or pressure, respectively, would occur.
The variation of pressure as a function of height $z$ is
\begin{equation}
\frac{dp}{dz} = -\rho(z) g(z) = -\frac{p g(z)}{RT(z)}
\end{equation}
which can be rearranged to give the integrals
\begin{equation}
\int_{p_0}^{p(H)} \frac{dp}{p} = -\frac{1}{R} \int_{0}^{H}
\frac{dz\; g(z)}{T(z)}~.
\label{int0}
\end{equation}
For the small altitude changes considered herein, it is more than appropriate
to adopt the approximations $g(z) \approx g \equiv{\rm constant}$ and the
linear temperature gradient $T(z) = T_0 - \Lambda z$, where
$\Lambda$ is the temperature lapse rate.
In this case, Equation~\ref{int0} may be solved to give
\begin{eqnarray}
\ln\left(\frac{p(H)}{p_0}\right)&=& \frac{g}{R\Lambda}
\ln\left(\frac{T_0}{T(H)}\right) \nonumber \\
& = & \frac{g}{R\Lambda} \ln\left(\frac{T_0}{T_0 - \Lambda H}\right)~.
\label{int1}
\end{eqnarray}
After some mild algebra, the pressure altitude $H_p$ is determined
to be
\begin{equation}
H_p \equiv \frac{T_0}{\Lambda} \left\{1-\left(
\frac{p(H)}{p_0}\right)^\frac{R\Lambda}{g}\right\}
\label{p_alt}
\end{equation}
Since it is of greater interest to find an expression for altitude as a
function of air density, substituting $p(z) = \rho(z) R T(z)$ in
Equation~\ref{int1} yields
\begin{equation}
H_\rho = \frac{T_0}{\Lambda} \left\{ 1 - \left(\frac{RT_0 \rho(H)}{\mu P_0}\right)^{\left[\frac{\Lambda R}{g\mu-\Lambda R}\right]} \right\}
\label{dens_alt}
\end{equation}
which is the density altitude.
Here, $P_0$ and $T_0$ are the standard sea level pressure and
temperature, $R$ the gas constant, $\mu$ the molar mass of dry
air, $\Lambda$ the temperature lapse rate, and $g$ the sea level
gravitational acceleration.
The explicit values of these parameters are given in \cite{isa} as
\begin{eqnarray}
T_0 & = & 288.15~{\rm K} \nonumber \\
P_0 & = & 101.325~{\rm kPa} \nonumber \\
g & = & 9.80665~{\rm ms}^{-2} \nonumber \\
\mu & = & 2.89644 \times 10^{-2}~{\rm kg}/{\rm mol} \nonumber \\
\Lambda & = & 6.5 \times 10^{-3}~{\rm K\cdot m}^{-1} \nonumber \\
R & = & 8.31432~{\rm J}\;{\rm mol}^{-1}\;{\rm K}^{-1} \nonumber
\end{eqnarray}
In the presence of humidity, the density $\rho$ is a combination
of both dry air density $\rho_a$ and water vapor density $\rho_v$. This
can be written in terms of the associated pressures as \cite{murray}
\begin{equation}
\rho = \rho_{a} + \rho_{v} = \frac{P_{a}}{R_{a}\;T} + \frac{P_v}{R_v \; T}~,
\end{equation}
where each term is derived from the ideal gas law, with $P_a$ and $P_v$ the
partial pressures of dry air and water vapor, and $R_a = 287.05~{\rm J\,kg^{-1}K^{-1}}$ and $R_v = 461.50~{\rm J\,kg^{-1}K^{-1}}$
the corresponding specific gas constants. Since the total pressure is simply the sum
of the dry air and vapor pressures, $P = P_a + P_v$, the
previous equation may be simplified to give
\begin{equation}
\rho = \frac{P-P_v}{R_a\; T} + \frac{P_v}{R_v \; T}~.
\end{equation}
The presence of humidity lowers the density of dry air, and hence lowers
aerodynamic drag. However, the magnitude of this reduction is critically
dependent on temperature. A useful and generally accurate approximation
to calculate vapor pressure is known as the Magnus Teten equation,
given by \cite{murray}
\begin{equation}
P_v \approx 10^{7.5 T/(237.7+T)}\cdot \frac{H_r}{100}
\end{equation}
where $H_r$ is the relative humidity measure and $T$ is the temperature
in $^\circ{\rm C}$.
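Collecting the above, the density altitude for a given set of station conditions can be evaluated as in the following sketch (Python; illustrative only, with pressures in kPa and temperatures in $^\circ$C):
\begin{verbatim}
T0, P0, G, MU = 288.15, 101.325, 9.80665, 2.89644e-2   # ISA constants
LAM, R = 6.5e-3, 8.31432
RA, RV = 287.05, 461.50                                 # J kg^-1 K^-1

def vapor_pressure_kpa(T_c, rh):
    """Magnus-Tetens saturation pressure scaled by relative humidity (%)."""
    return 0.61078 * 10 ** (7.5 * T_c / (237.7 + T_c)) * rh / 100.0

def moist_air_density(P_kpa, T_c, rh):
    """Dry-air plus water-vapor partial densities, in kg m^-3."""
    T = T_c + 273.15
    Pv = vapor_pressure_kpa(T_c, rh)
    return 1000.0 * ((P_kpa - Pv) / (RA * T) + Pv / (RV * T))

def density_altitude_m(rho):
    """Altitude of the standard atmosphere having air density rho."""
    expo = LAM * R / (G * MU - LAM * R)
    return (T0 / LAM) * (1.0 - (R * T0 * rho / (MU * 1000.0 * P0)) ** expo)

# e.g. moist_air_density(101.3, 25.0, 50.0) -> ~1.18 kg m^-3,
#      for which density_altitude_m(...) gives roughly 417 m.
\end{verbatim}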
Figures~\ref{FigDA1} and
~\ref{FigDA2} show the calculated density altitudes for the temperature
range in question, including curves of constant relative humidity, for
normal pressure (101.3~kPa) as well as high-altitude pressure (75~kPa).
The latter is representative of venues such as those in Mexico City, where
world records were set in every sprint race and jumping event at the 1968
Olympic Games.
It is somewhat striking to note that a 20$^\circ{\rm C}$ temperature range
at a fixed pressure can yield an effective altitude change of over 600~m,
and even greater values with the addition of humidity and lowered station
pressure. This suggests that
a combination of the effects considered in this paper can have significant
influence on sprint performances (especially in the 200~m which has been
shown to be strongly influenced by altitude \cite{jrm200}).
\subsection{A note on meteorological pressure}
A review of reported meteorological pressure variations seems to suggest
that annual barometric averages for all cities range between about
100-102~kPa. This might
seem contradictory to common sense, since one naturally expects atmospheric
pressure to drop at higher altitudes. Much in the same spirit as this work,
it is common practice to report {\it sea-level corrected} (SL)
pressures instead
of the measured (station) atmospheric pressures, in order to compare the
relative pressure differences experienced between weather stations, as
well as to predict the potential for weather system evolution.
Given a reported SL pressure $P_{\rm SL}$ at a (geophysical)
altitude $z$, the station pressure $P_{\rm stn}$ can be
computed as \cite{llnl}
\begin{equation}
P_{\rm stn} = P_{\rm SL} \left(\frac{288-0.0065z}{288}\right)^{5.2561} ~.
\end{equation}
The present study uses only station atmospheric pressures, but a conversion
formula which can utilize reported barometric pressure
will be offered in a subsequent section.
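For reference, the conversion above amounts to the following one-line helper (an illustrative sketch, with pressures in kPa and elevation in meters):
\begin{verbatim}
def station_pressure_kpa(P_sl_kpa, z_m):
    """Recover station pressure from a sea-level corrected reading at elevation z."""
    return P_sl_kpa * ((288.0 - 0.0065 * z_m) / 288.0) ** 5.2561
\end{verbatim}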
\section{Results}
\label{results}
The model parameters used for the 100~meter analysis in this study are
identical to those in \cite{jrm100},
\begin{eqnarray}
f_0=6.10~{\rm ms}^{-2}&f_1 = 5.15~~{\rm ms}^{-2}&\sigma=2.2~{\rm s}^{-2} \nonumber \\
c=0.0385~{\rm s}^{-1}~&\alpha=0.323~{\rm s}^{-1}~~&A_d = 0.00288~{\rm m}^2{\rm kg}^{-1}
\label{params1}
\end{eqnarray}
as well as the ISA parameters described previously. By definition, these
give a density altitude of 0~meters at 15$^\circ{\rm C}$, $P_0 = 101.325~$kPa
and $0\%$ relative humidity.
Since few track meets are held at such ``low'' temperatures, the standard
performance to which all others will be compared will be for the conditions
$P = 101.3~$kPa, $T=25^\circ{\rm C}$, and a relative humidity level of $50\%$,
which produces a raw time ({\it i.e.} excluding reaction) of 9.698~s. Note
that the corresponding density altitude for this performance
under the given conditions would be $H_\rho = 418$~meters. Thus, if the
performance were physically at sea level, the conditions would replicate
an atmosphere of elevation $H_\rho$.
Also, the relative humidity level is never measured at 0\%. The
range of possible relative humidity readings considered herein is thus
constrained between 25\% and 100\%.
The associated performance adjustments in the 100~meter sprint for various
conditions are displayed
in Figures~\ref{Fig100a} through \ref{Fig100f}, while Figures~\ref{Fig200a}
through ~\ref{Fig200e} show possible adjustments to the 200~meter sprint
(see Section~\ref{section200} for further discussion). General features of
all graphs include a narrowing parameter space for increasing wind speeds.
This implies
that regardless of the conditions, stronger tail winds will assist performances
by a smaller degree with respect to the ``base'' performance than will head
winds of similar strength. This is completely consistent with previous
results in the literature, and furthermore is to be expected based on the
mathematical form of the drag component.
Ranked in terms of magnitude of effect, humidity variations show the least
impact on race times. Small changes in atmospheric pressure that one might
expect at a given venue show slightly more influence on times. Temperature
changes over the range considered show the greatest individual impact.
However, it is the combination of the three factors that creates the greatest
impact, as is predictable based on the calculated density altitudes.
\subsection{Temperature and relative humidity variations}
At fixed pressure and temperature, the range of realistic humidity variations
shows little influence on 100~meter race times (Figure~\ref{Fig100a}), yielding
corrections of under 0.01~s for the range considered. Since race times are
measured to this precision, the effects would no doubt be negligible.
The corrections for low pressure regions are also less than 0.01~s.
Due to the extremely slim nature of the parameter space, the effects of
wind will essentially be the same regardless of the relative humidity reading.
Similarly, if temperature is allowed to vary at a fixed humidity level,
the corrections grow in magnitude but are still relatively small
(Figure~\ref{Fig100b}).
Although one would realistically expect variations in barometer
reading depending on the venue, the chosen values reflect a sampling
of the possible range of pressures recorded at such events.
temperature does not have a profound impact on the simulation times over
the $\Delta T = 20^\circ{\rm C}$ range considered herein. With no wind or relative humidity
at fixed pressure (101.3~kPa),
a 100~m performance will vary only 0.023~s in the given temperature range.
Overall, this corresponds to approximately a 750~m change in density
altitude (the 15$^\circ{\rm C}$ condition corresponds roughly to the standard
atmosphere except for the non-zero humidity, and
thus a density altitude not significantly different from 0~m).
Compared to the standard performance,
temperatures below $25^\circ{\rm C}$ are equivalent
to running at a lower altitude ({\it e.g.} below sea level).
A $+2~{\rm ms}^{-1}$ tailwind and standard conditions will
assist a world class 100~m performance by 0.104~s using the input parameters
defined in Equation~\ref{params1}. This is essentially identical to the
prediction in \cite{jrm100}, since the standard 100~m performance has been
defined to be under these conditions. Increasing the temperature to 35$^\circ{\rm C}$
yields a time differential of $0.111$~seconds over the standard race, whereas
decreasing the temperature to 15$^\circ{\rm C}$ shows a difference of 0.097~seconds.
Hence, the difference in performance which should be observed over the
20$^\circ{\rm C}$ range is roughly $0.02~$s.
In combination, these factors expand the parameter
space from that of temperature variation alone. Figure~\ref{Fig100c}
shows the region bounded by the lowest density altitude conditions
(low temperature and low humidity) and the highest (higher temperature
and higher humidity). In this case, with no wind the performances can
be up to 0.026~s different (effectively the additive result of the individual
temperature and humidity contributions). Although such adjustments to the
performances seem small, it should be kept in mind that a difference of
0.03~s is a large margin in the 100~meter sprint, and can cause an athlete
to miss a qualifying time or even a record.
\subsection{Pressure variations}
\label{pvar100}
As one would expect, the greatest variation in performances, next to wind
effects, is introduced by changes in atmospheric pressure. Barometric pressure
changes will be addressed in two ways. First, variations in pressure for
a fixed venue will be considered. Diurnal and seasonal fluctuations in
barometer reading are generally small for a given location \cite{ncdc},
at most 1~kPa from average.
Model simulations for performances run at unusually high pressure (102.5~kPa)
and unusually low pressure (100.5~kPa) with constant temperature and relative
humidity actually show little variation (under 0.01~s). However, if the
temperature and humidity are such as to allow extreme under-dense and
over-dense atmospheres, then the effects are amplified. Figure~\ref{Fig100d}
demonstrates how such ambient weather could potentially affect performances
run at the same venue. With no wind, the performances can be
different by almost 0.03~seconds for the conditions considered, similar
to the predicted adjustments for the combined temperature and humidity
conditions given in the previous subsection.
The performance range for races run at high-altitude venues under similar
temperature and humidity extremes is plotted in Figure~\ref{Fig100e}. The
base performance line is given as reference for the influence of larger
pressure changes. At the lower pressure, the width of the
parameter space for zero wind conditions is 0.021~seconds, but the lower
density altitude point is already 0.071~s faster than the base performance.
Thus, depending on the atmospheric conditions at the altitude venue races
could be upward of 0.1~seconds faster than those run under typical conditions
at sea level.
\subsection{Combined effects of temperature, pressure, and humidity}
The largest differences in performances arise when one considers the combined
effects of pressure, temperature, and relative humidity. Figure~\ref{Fig100f}
shows the total allowed parameter space for performances
run in extremal conditions: 75~kPa, 100\% humidity, and 35$^\circ{\rm C}$ (yielding
the least dense atmosphere) versus 101.3~kPa, 25\% humidity, and 15$^\circ{\rm C}$
(the most dense).
Note that the difference in extremes is exceptionally amplified for head-winds.
Based on the total horizontal width in the Figure, a performance
run with a strong tail-wind (top of left curve) at high altitude can be
almost half a second faster than the same performance run with an equal-magnitude
headwind in extremely low density altitude conditions.
\section{Back-of-the-envelope conversion formula}
Reference~\cite{jrm100} gives a simple formula which can be used to correct
100~meter sprint times according to both wind and altitude conditions,
\begin{equation}
t_{0,0} \simeq t_{w,H} [1.03 - 0.03 \exp(-0.000125\cdot H)
(1 - w\cdot t_{w,H} / 100)^2 ] ~.
\label{boteq}
\end{equation}
The time $t_{w,H}$ (s) is the recorded race time run with wind $w$ (ms$^{-1}$)
and at altitude $H$ (m), while the time $t_{0,0}$ is the adjusted time
as if it were run at sea level in calm conditions.
It is a simple task to modify Equation~\ref{boteq} to account instead for
density altitude. The exponential term represents the change in altitude,
thus it can be replaced with a term of the form $\rho/\rho_0$, where
$\rho$ is the adjusted density, and $\rho_0$ is the reference density.
Alternatively, the original approximation in Equation~\ref{boteq} may be
used with the altitude $H$ being the density altitude.
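For completeness, Equation~(\ref{boteq}) transcribes directly into code (an illustrative sketch; $H$ may be taken as either the physical or the density altitude, as discussed above):
\begin{verbatim}
import math

def correct_100m(t_wh, w, H):
    """Adjust a raw 100 m time (s) run with wind w (m/s) at altitude H (m)
    to the equivalent still-air, sea-level time t_{0,0}."""
    return t_wh * (1.03 - 0.03 * math.exp(-0.000125 * H)
                   * (1.0 - w * t_wh / 100.0) ** 2)

# e.g. correct_100m(9.78, 2.0, 0.0) -> ~9.88 s
\end{verbatim}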
Table~\ref{botecomp} shows how the top 100~meter performances are re-ordered
according to the given approximation. These are compared with the original
back-of-the-envelope approximations given in Reference~\cite{jrm100}, based
exclusively on altitude. The weather conditions have been taken from the
NCDC database \cite{ncdc}. The relative humidity has been calculated from
the mean dew-point. The cited temperature is the maximum temperature
recorded on that particular date, which is assumed to be reflective of
the conditions near the surface of the track (since the surface reflection
and re-emission of heat from the rubberized material usually increases the
temperature from the recorded mean). Typical trackside temperatures reported
in \cite{martin} support this argument. For example, the
surface temperature during the 100~m final in Atlanta (9.84~s, +0.7~$~{\rm ms}^{-1}$
performance in Table~\ref{botecomp}) was reported as 27.8$^\circ{\rm C}$.
The effects of density altitude are for the most
part overshadowed by wind effects and are generally within 0.01~s of each
other after rounding to two decimal places. However, larger variations are
observable in certain cases. The most notable differences are in the
9.80~s performance (Maurice Greene, USA) at the 1999 World Championships in Seville, ESP, due
to the unusually high temperature, as well as the 9.85~s performance in
Ostrava, CZE (Asafa Powell, Jamaica). The temperature during the latter was reportedly
a chilly 10$^\circ{\rm C}$, with humidity levels near 100\%.
At the time of writing of this manuscript, the world record in the men's
100~meter sprint is 9.77~seconds by Asafa Powell, set on 14 June
2005 in Athens, Greece (note that although there is a faster performance
listed in the Table, only races with winds under +2$~{\rm ms}^{-1}$ are eligible for
record status). The wind reading for this performance was $+1.6~{\rm ms}^{-1}$.
According to Table~\ref{botecomp}, Powell's world record run actually
adjusts to about 9.85~s, very close to his earlier time from Ostrava.
In fact, prior to this the world record was
9.78~s by Tim Montgomery of the USA. The wind in this race was just at
the legal limit for performance ratification ($+2~{\rm ms}^{-1}$). This time adjusts
to an even slower performance of 9.87~s with density altitude considerations
(9.88~s using the older method). Both of these times are eclipsed by the
former world record of Maurice Greene, who posted a time of 9.79~s
(+0.1$~{\rm ms}^{-1}$) in Athens roughly six years to the day prior to Powell's race.
This time adjusts to between 9.80-9.81~s depending on the conversion method.
\section{Corrections to 200~meter race times}
\label{section200}
Aerodynamic drag effects in the 200~m sprint have been found to be compounded
due to the longer duration and distance of the race.
In Reference~\cite{jrm200}, it was suggested that ``extreme'' conditions such
as high wind and altitude can considerably affect performances for better or
worse. That is, a 1000~m altitude alone can improve a world class sea-level
performance by up to 0.1~s, the equivalent assistance provided by a $+2~{\rm ms}^{-1}$
tail-wind in the 100~m sprint. Higher altitudes can yield even greater boosts.
Thus, the variability of density altitude would seem to be all the more
relevant to the 200~m sprint.
This section does not seek to provide a comprehensive analysis for the 200~m
({\it e.g.} the influence of cross-winds or lane dependence), so only the
results for a race run in lane~4 of a standard IAAF outdoor track will
be reported. The model equations (\ref{quasi}) are adapted for the curve
according to Reference~\cite{jrm200},
\begin{equation}
\dot{v}(t) = \beta (f_s + f_m) - f_v - f_d~,
\end{equation}
where
\begin{equation}
\beta(v(t); R_{\cal l}) = (1\; -\;\xi\;v(t)^2/R_{\cal l} )~, \label{damping}
\end{equation}
and $R_{\cal l}$ is the radius of curvature (in meters) for the track in lane ${\cal l}$,
\begin{equation} R_{\cal l} = 36.80+1.22\;({\cal l} - 1)~, \end{equation}
compliant with the standard IAAF track.
For the 200~m, slightly different parameters are used to reflect
the more ``energy-conservative'' strategy for the race \cite{jrm200}:
$f_0 = 6.0\;{\rm ms}^{-2}; f_1 = 4.83\;{\rm ms}^{-2}; c = 0.024~{\rm s}^{-1}$.
The other parameters remain unchanged. The curve factor is $\xi = 0.015$.
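The curve damping factor and lane radius above translate directly into code (an illustrative sketch):
\begin{verbatim}
def curve_factor(v, lane, xi=0.015):
    """Damping of the propulsive force on the bend for a standard IAAF track lane."""
    R = 36.80 + 1.22 * (lane - 1)   # radius of curvature for the given lane (m)
    return 1.0 - xi * v**2 / R
\end{verbatim}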
As with the 100~meter sprint, the standard performance adopted for comparison
is 19.727~s, run in lane 4 at 25$^\circ{\rm C}$ and 50\% relative humidity. Under these
conditions, a 2$~{\rm ms}^{-1}$ wind assists the sprinter by approximately 0.113~s.
The effects of humidity alone are again effectively inconsequential, in this
case showing less than a 0.01~s differential with no wind (Figure~\ref{Fig200a}).
For wind speeds
of 2$~{\rm ms}^{-1}$ at the same pressure value combined with a 50\% increase in
humidity, the advantage over the base conditions grows to 0.118~s. Thus,
for all wind speeds considered, the effects of humidity are negligible.
Temperature plays a much stronger role in the longer sprint.
Figure~\ref{Fig200b} shows this effect for fixed pressure and relative humidity.
In both 15$^\circ{\rm C}$ and 35$^\circ{\rm C}$ weather, a world class 200~meter sprint can be
approximately 0.03~s slower or faster, giving a total differential of
0.065~s over the entire 20$^\circ{\rm C}$ range.
Figure~\ref{Fig200c} demonstrates the effects of pressure variation at
a specific venue (at constant temperature and relative humidity). At higher
pressures (102.5~kPa), the simulation shows the race time slowed by
0.011~s, while at unusually low pressures (100.5~kPa) the race is quickened by only 0.007~s.
In extremal conditions, including temperature and humidity variations as well,
these differentials change dramatically (Figure~\ref{Fig200d}).
For unusually high
pressure, low temperature and humidity conditions, the race time is
0.045~s slower as compared to the base conditions. On the other hand,
the lower pressure, higher temperature and humidity conditions yield
a 0.048~s decrease in the race time. Thus, the difference between two
200~m races run at the same venue but under these vastly differing
conditions could be up to 0.1~seconds, even if there is no wind present.
The most striking difference is observed when considering differences between
venues or large station pressure differences (Figure~\ref{Fig200e}).
With no wind, the difference
between the base conditions and those at high altitude (at the same
temperature and relative humidity) can be
0.23~s, which is consistent with the figures reported in Reference~\cite{jrm200}
for a 2500~m altitude difference. Recall that the difference in density
altitude between these two conditions is roughly 3000~m.
A 2$~{\rm ms}^{-1}$ wind improves the low pressure performance by only 0.083~s over still
conditions at the same pressure, but when compared to the base conditions
this figure becomes 0.313~s. In fact, the complete horizontal span of the
parameter space depicted in Figure~\ref{Fig200e} is almost 0.64~seconds,
demonstrating the considerable variability that one could expect in 200~m
race times. Again, this assumes that the wind is blowing exclusively in
one direction (down the 100~meter straight portion of the track), and is
unchanged throughout the duration of the race.
\subsection{Can density altitude explain the men's 200~meter world record?}
\label{mjwr}
Michael Johnson (USA) upset the standard in the men's 200~m event in 1996
when he set two world records over the course of the summer. At the USATF
Championships in Atlanta, his time of 19.66~s (wind $+1.6~{\rm ms}^{-1}$) erased the
25~year-old record of 19.72~s (set in Mexico City, and thus altitude assisted).
However, it was his performance of 19.32~s (wind $+0.4~{\rm ms}^{-1}$) at the Olympic
Games which truly redefined the race. Reference~\cite{jrm200} offers a
thorough analysis of the race which addresses wind and altitude effects.
It was suggested that overall, the race received less than a 0.1~s boost from
the combined conditions (Atlanta sits at roughly 350~m above sea level).
In light of the current analysis, however, the question can be posed: ``By
how much {\it could} Johnson's race have been assisted?''. Extensive
trackside meteorological data was recorded at the Games, and is reported
in Reference~\cite{martin}.
According to this data, the mean surface temperature during the race (21:00~EDT,
01 August 1996)
was 34.7$^\circ{\rm C}$, with a relative humidity of 67\%. From the NCDC database,
the adjusted sea-level pressure was recorded as 101.7~kPa. Combined,
these values give a density altitude of 1175~meters (Atlanta's physical
elevation is roughly 315~meters).
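As an illustrative cross-check (not part of the original analysis), the helper routines sketched earlier in Section~2 reproduce this figure from the reported conditions:
\begin{verbatim}
P_stn = station_pressure_kpa(101.7, 315.0)            # ~98.0 kPa
rho   = moist_air_density(P_stn, T_c=34.7, rh=67.0)   # ~1.09 kg m^-3
H_rho = density_altitude_m(rho)                        # ~1175 m
\end{verbatim}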
As compared to the base conditions (a density altitude of about 400~m), this
represents an 800~m change in effective altitude. According to the
results of Reference~\cite{jrm200}, such an altitude increase results in
an advantage of approximately 0.05~seconds. When the minimal wind is taken
into account (0.3$~{\rm ms}^{-1}$), the difference rises to 0.1~s. Previously,
the ``corrected'' value of the World Record was reported as 19.38~s
\cite{jrm200} using only wind-speed and physical altitude, so the
inclusion of density altitude enhances the correction by 0.04~s and
thus adjusts the time to a base 19.42~s.
While this does not explain the enormous improvement over the previous
record, it does highlight that density altitude considerations become
increasingly important in the 200~meter sprint, and no doubt even more so
for the 400~meter event.
\section{Conclusions}
This report has considered the effect of pressure, temperature, and relative
humidity variations on 100~ and 200~meter sprint performances. It has been
determined that the influence of each condition can be ranked (in order of
increasing assistance) by humidity, pressure, and temperature. The combined
effects of each are essentially additive. When wind conditions are
taken into account, the impact is amplified for head-winds but dampened for
tail-winds.
All in all, the use of density altitude over geophysical (or geopotential)
altitude seems somewhat irrelevant, since the associated corrections are
different by a few hundredths of a second at best. Nevertheless, it is
the difference of these few hundredths which can secure a performance in
the record books, or lead to lucrative endorsement deals for world class
athletes.
\vskip 1cm
\noindent{\bf {\Large Acknowledgments}} \\
I thank J.\ Dapena (Indiana University) for insightful discussions.
This work has been financially supported by a grant from Loyola Marymount
University.
\section{Introduction}
Browsing concise and personalized on-demand sports video highlights, after the game, is a key usage pattern among sports fans. Searching by favourite players, teams and the intent to watch key moments in these videos has been recognized widely\footnote{https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/sports-fans-video-insights-5/}. Enabling such searches or content-based recommendations requires deep indexing of videos at specific moments in time so that the user can be brought to watch the player or moment of intent, without necessarily starting from the beginning of a video. Typically that requires understanding actions happening in video segments and key players in them, apart from participating teams and the date of the game.
\newline
In broadcast videos, key players can be identified either from zoomed-in views just post action (more commonly found in soccer videos, but also in indoor court games, such as basketball), or from the moment a shot is made (by methods like tracking the last human in contact with the ball). Detecting players through both face detection and jersey numbers has been widely studied in recent computer vision conferences. Understanding the context of a sports broadcast video can be achieved through the task of event or action recognition. Doing so accurately at a fine-grained level for multiplayer sports (e.g. professional leagues like \textit{NBA/Basketball, NFL/American football, NHL/ice hockey, Premier League/soccer}) is a challenging task but often necessary in order to support deep video indexing use cases. In several prior research works on action recognition, such as \cite{playerTrackMurphy,SoccerNet}, the authors discuss methods of generating automatic training data for action recognition by aligning time and quarter information (signifying the point in time in a game) available in game clocks to match reports available on the web or in play-by-play records of the game. \textit{Our motivation is to extend such video-frame-to-play-by-play alignment robustly to any freely available sports broadcast highlight video, such that it can directly be used as a method of content understanding at scale, rather than just being a method of gathering training data}.
\newline
Figures \ref{fig:frame-text-alignment} and \ref{fig:e-to-e recognition} show an example of an NBA game clock on a broadcast video highlight, and the different text segments in the clock pertaining to team abbreviations ("PHX", "GS"), quarter ("2nd"), and time (":17.3"). Slight misinterpretation of any of these text segments could lead to a wrong play-by-play alignment, or miss a possible alignment altogether. Clock times and quarter transitions occur smoothly, and team names remain the same, during full-game sports broadcast videos, which are usually used for training in the works referred to above. Hence, correct recognition towards the beginning and end of the game helps align the rest of the play-by-plays to the entire game. However, in the more abundantly available \textit{short form sports highlights} (typically ranging from 5-15 minutes of video), broadcasters compile several notable plays across the span of an entire game or a season. So, \textit{expected semantic text} like time, quarters and teams in clocks can vary quite frequently. Some videos are compilations from different networks over an entire week or season, so the clocks within the same video, or their overlaid positions, can differ. This makes accurate text detection (such that it can be reliably used for alignment to play-by-plays) a hard problem. Figure \ref{fig:Highlight_chop} shows clocks picked from adjacent video times (frames from 30 or 25 fps videos, which are only a few frames apart) which experience sudden changes in formatting such as occlusion/blurring, sudden variation in quarter and time, as well as changes in clock type in subsequent frames in short game highlights. Such accurate alignments, if done from commonly available short game highlights, can enable deep linking, search and browse use cases across different sports media products at scale in an industrial setting. Text detectors \cite{zhou2017east} and recognizers \cite{shi2015endtoend} which are trained on general purpose datasets tend not to generalize well on domain-specific text and character sequences, with occlusions, occasional blurring and transitions in the varied clock types from different sports. Instead of manually collecting training data for each sport's specific clocks, their text region bounding boxes, and their corresponding texts, we utilize the property that such clock texts \textit{come from a finite, fixed set of possible strings} (team name abbreviations), and also follow certain surface form patterns (e.g. time consists of digit strings followed by ":" and other digits), as shown in Figure \ref{fig:knowledge-constraints}a. Using this property we develop distant-supervision-based in-domain text detection and recognition datasets, with which we transfer learn pre-trained detectors and recognizers to the in-domain data. While there has been much study of detecting jersey numbers in order to recognize players, accurate detection of text in clocks to align them to play-by-play can lead to more fine-grained sports video understanding.
Thus, our contributions are the following: (1) \textit{we propose a distant supervision approach for in-domain specialized text detection and recognition training}; (2) \textit{we introduce a data augmentation method to reduce biases in image sizes and aspect ratios which affect the precision of text detection and recognition across different sports clocks, without using any domain-specific neural architecture}; (3) \textit{we propose a diverse sports clock dataset and compare it with other text detection and recognition datasets to emphasize the nature of the problem}; and (4) \textit{we showcase the practical challenges in making play-by-play alignment work in an industrial setting on highlight videos}. Such multi-modal alignments form the basis for various upstream search and recommendation tasks like in-video search, efficient video scrubbing via deep linking of players and events at specific moments in videos, and content-based recommendation graphs.
\section{Related Work}
\textbf{Identifying key players in the videos:} In \cite{DBLP:conf/visapp/NadyH21} the authors discuss an end-to-end pipeline of separate player object detectors and text region detection and recognition models for detecting jersey numbers in basketball and ice hockey, and also propose a dataset for the task. In \cite{Gerke} the authors propose a dataset focused on recognizing jersey text in soccer. \cite{Liu} attempted to enrich a Faster RCNN's region proposals with human body part keypoint cues to localize text regions on player jerseys, and the authors also propose their own dataset. In \cite{s11042-011-0878-y} the authors approach jersey number localization in running sports using domain knowledge.
\newline
\textbf{Broadcast footage text recognition}: In \cite{ref1} the authors describe a method of localizing broadcast text regions using traditional vision techniques, while in \cite{ref2} the authors showcase an architecture specific to the task of understanding clock text.
\newline
\textbf{Event detection for contextual information:} Recent works on detecting actions in basketball (NBA) \cite{ramanathan2016detecting}, baseball (MLB) \cite{piergiovanni2018finegrained} and soccer \cite{SoccerNet} attempt to solve the problem at a coarse-grained level. In \cite{Yu_2018_CVPR} the authors propose an approach to fine-grained captioning though not covering player identification or fine-grained shooting actions for basketball.
\newline
\textbf{Text detection and recognition:} Text detection architectures in recent works include the SSD \cite{LiuAESR15} motivated, anchor box based TextBox++ \cite{textbox}, designed for horizontally aligned text. However, as in SSD, the anchor boxes have to be tuned outside regular training, based on the common image and text box sizes in the training dataset. Seglink \cite{seglink} links neighbouring text segments based on spatial relations, helping identify long lines as in Latin scripts. CTPN \cite{ctpn} used an anchor mechanism to predict the location and score of each fixed-width proposal simultaneously, and then connected the sequential proposals by a recurrent neural network. Finally, in EAST \cite{zhou2017east}, a fully convolutional network is applied to detect text regions directly without the steps of candidate aggregation and word partition, and NMS is then used to obtain word- or line-level text. For recognition, \cite{shi2015endtoend} combines CNN-based image representations with a Bi-LSTM layer.
\section{Approach}
\label{sec:Approach}
Accurate semantic text detection and recognition (time left on the clock or time elapsed, quarter or half of the game, abbreviations of team names) from sports clocks, and matching them to an external knowledge base of play-by-play records, yields fine-grained player and action detection. For such alignment or linking to external knowledge bases, it is critical that the limited pieces of semantic text in the clock are properly understood. Instead of relying on fine-grained image classification (to different teams or times, as often done in the case of jersey number identification of players), any domain-specific neural architecture, or any classical vision/geometric heuristic (for text localization as in \cite{ref1}), we resort to accurate text region detection and text recognition methods (using widely used model architectures for maintainability and ease of use in production environments), without collecting large sets of humanly labelled sports clock domain training data. Domain-specific knowledge of the correctness of detection and recognition can be utilized to mitigate the requirement of hand-labelled data. In parallel, there has been related work on using domain knowledge to regularize model posteriors \cite{hu2016harnessing, hu2018deep}. We formalize domain knowledge of the correctness of recognized team names, time, or half/quarter as first-order logical rules and refer to them as Knowledge Constraints (KC). The set of correctly recognized \textit{team names} and \textit{game quarters/halves} has low cardinality, and \textit{time} comes from a set of known surface forms. We innovate on distilling out correct training data from a pool of noisy instances using KC as a medium for distant supervision. Specifically, we run off-the-shelf pre-trained text detectors and recognizers on unlabelled cropped clock objects. Predictions from such general-purpose models tend to be inexact, and we correct or reject them based on the knowledge constraints to gather high quality in-domain training data without manual labelling. We discuss KCs in detail in section \ref{sec:knowledge-constraints}.
Our end-to-end process of aligning sports clocks to play-by-play is broken down into the following steps (illustrated in Figures \ref{fig:frame-text-alignment} and \ref{fig:e-to-e recognition}):
\newline
\textbf{Extracting clock regions from full frames:} For this purpose we train a clock object detector by collecting training samples from different sports highlight videos, both internal and freely available on YouTube. We use semi-automatic methods (detailed in section \ref{section:dataset}) to gather bounding boxes for sports clocks of various sizes, shapes and colors across different sports. We train a single shot object detector \cite{LiuAESR15} with a VGG16 backbone. The SSD bounding box loss consists of a localization loss, \begin{math}L_{loc}\end{math} (a smooth L1 loss), which takes the positive samples and regresses the predictions closer to the matched ground-truth boxes, and a confidence loss, \begin{math}L_{conf}\end{math} (a softmax loss), used for optimizing the classification confidence over multiple classes. In our case we resort to only two classes, \textit{clock} and \textit{background}, and we increase the importance of the localization loss to 3 times the confidence loss, so as to obtain a tight bounding box around the clock while caring less about the class prediction.
\newline
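A minimal sketch of how such a re-weighted SSD objective could be combined (assuming the matched positive default boxes have already been selected, and omitting hard negative mining; this is not the exact training code):
\begin{verbatim}
import torch.nn.functional as F

def clock_ssd_loss(loc_pred, loc_gt, conf_pred, conf_gt, loc_weight=3.0):
    """Weighted SSD objective for the two classes {clock, background}."""
    loc_loss = F.smooth_l1_loss(loc_pred, loc_gt, reduction="sum")
    conf_loss = F.cross_entropy(conf_pred, conf_gt, reduction="sum")
    n_matched = max(loc_gt.size(0), 1)
    return (loc_weight * loc_loss + conf_loss) / n_matched
\end{verbatim}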
\textbf{Semantic text area detection on clock objects:}
Once we extract the clocks from the video frames, we get to the next stage of detecting text segments. Here we adapt the framework of EAST \cite{zhou2017east}, primarily because of its lightweight layers. EAST is a fully convolutional network (FCN) based text detector, which eliminates intermediate steps of candidate proposal, text region formation and word partition. The backbone in EAST is a pre-trained ResNet-50.
\newline
\textbf{Adaptation of text detector to our domain:} We pass the cropped clocks to an in-domain trained EAST model (in section \ref{sec:knowledge-constraints} we discuss how we generate in-domain text bounding box ground truths) to detect the different text areas (of team, time, quarter). To capture image representations at different resolutions, four levels of feature maps are extracted from the backbone, whose sizes are 1/32, 1/16, 1/8 and 1/4 of the input image, respectively \cite{zhou2017east}. Since the backbone's most granular convolutional feature covers 1/32 of the size of an image, and cropped clock images may not come in dimensions that are multiples of 32, we resort to resizing before passing to the prediction stage. Usually an image is up-scaled or down-scaled to a multiple of 32; \textit{however, we amalgamate both techniques for each dimension such that the distortion from its original value is the least, in turn providing less information loss along each axis as well as less aspect ratio distortion overall}. In Figure \ref{fig:ResizeAlgoCompare}, we compare the relative aspect ratio distortions of just up-scaling or down-scaling with our amalgamation method in different size ranges and find it to achieve fewer distortion cases for our target image sizes of multi-domain sports clocks.
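The per-dimension choice between up- and down-scaling can be sketched as follows (illustrative; the actual pre-processing may differ in details):
\begin{verbatim}
def resize_dims_for_east(height, width, stride=32):
    """Snap each dimension to whichever multiple of the stride distorts it least."""
    def snap(x):
        down = max((x // stride) * stride, stride)
        up = down + stride if down < x else down
        return down if (x - down) <= (up - x) else up
    return snap(height), snap(width)
\end{verbatim}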
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{Images/ResizeAlgoCompareFixed2.png}
\vspace*{-7mm}
\caption{Aspect ratio distortion comparison}
\label{fig:ResizeAlgoCompare}
\vspace*{-3mm}
\end{figure}
\textbf{Text recognition on detected semantic text areas:} Finally, we crop out text regions inside the clocks and pass them to an in-domain trained text recognizer. We fine tune a CRNN \cite{shi2015endtoend} model with our sports domain text. \cite{shi2015endtoend} combine CNN-based image representations with a Bi-LSTM layer to detect the character sequence with a Connectionist Temporal Classification layer (CTC).
\newline
\textbf{Adaptation of text recognizer to our domain:} Since we have a good proportion of special characters in our dataset, we train a CRNN model \cite{shi2015endtoend} by combining Synth90k dataset \cite{synth90k1,synth90k2} with our in-domain data. While time and quarter texts have important punctuation (e.g., ":"), they also pose different character sequences for recognition, unavailable in synthesized text. Hence, we include in-domain clock text regions and appropriate strings as shown in Figure \ref{fig:e-to-e recognition} in the training data. All words and characters are converted to lowercase to reduce the complexity of predictions. As for our domain, if the full strings predicted precisely match the knowledge base or the known surface form, we can uniquely identify team names, time or quarter (e.g., "23:21", "1st", "PHX").
\newline
\textbf{Aligning to play-by-play}: Play-by-play commentary can be visualized as text captions or structured data, as shown in Figure \ref{fig:frame-text-alignment}, and is a fine-grained description of the game state. If the team names, time, and quarter shown on the game clock (Figure \ref{fig:e-to-e recognition}), which together signify the game state, can be accurately recognized, then they can be used to uniquely align a group of consecutive video frames (since a video typically runs at more than 1 fps) to the play-by-play caption. Such alignments yield a fine-grained understanding of sports broadcast video scenes, as shown in Figure \ref{fig:frame-text-alignment}.
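Schematically, once the semantic texts are recognized, the alignment reduces to a keyed lookup (a sketch; it assumes play-by-play records are indexed by team pair, quarter and clock time, and that recognized clock strings have been normalized to the same convention):
\begin{verbatim}
def align_frames(frame_states, play_by_play):
    """Map each play to the group of consecutive frames whose clock state matches it."""
    aligned = {}
    for frame_id, away, home, quarter, clock in frame_states:
        key = (away, home, quarter, clock)
        if key in play_by_play:
            aligned.setdefault(key, []).append(frame_id)
    return aligned   # key -> frame ids showing that play's moment
\end{verbatim}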
\subsection{Distant supervision using Knowledge Constraints (KC)}
\vspace*{-4mm}
\label{sec:knowledge-constraints}
\begin{figure}[htp]
\centering
\includegraphics[width=8.5cm]{Images/KC_Main.png}
\vspace*{-7mm}
\caption{Distant supervision through KC}
\label{fig:knowledge-constraints}
\vspace*{-4mm}
\end{figure}
The KC can be broadly classified into the following:
\newline
\textbf{Semantic text surface form check (KC1)}: Time text on the clocks can be easily recognized by its surface form: \textit{n\string{1+\string}\textbf{:}n\string{1+\string}} or \textit{n\string{1+\string}\textbf{:}n\string{1+\string}\textbf{.}n\string{1+\string}}, as shown in Figure \ref{fig:knowledge-constraints}a. In leagues like the NBA, NFL, and NHL, text representing quarters has limited string combinations like \textit{"1st", "2nd", "3rd" or "4th"}, while in soccer, times in both play-by-play and the game clocks are \textit{continuous}, i.e. they do not reset as the game rolls on to the next half. To ensure that the right contextual object is cropped from the frame, we check for the existence of all semantic text classes in it. For example, the cropped images shown in Figure \ref{fig:knowledge-constraints}b are very likely to be confused with the contextual object (clock) by any object detection model; each has some of the semantic texts, but none has all of them. Specifically, we select clocks with at least two team texts, one quarter text, and one time text surface form as potential training candidates. Incorrect clocks are used for hard negative mining to improve clock detection.
\newline
\textbf{Surface form consistencies with the other contextual objects (KC2)}: Some leagues or sports require us to disambiguate text recognition further, beyond surface form checks (as there could be multiple candidates). For example, to disambiguate text representing the quarter in the NFL, there could be two strings: \textit{1st} followed by \textit{10:32} and \textit{2nd} followed by \textit{\& 10}, as illustrated in Figure \ref{fig:knowledge-constraints}c; the former represents the quarter and time on the clock, while the latter represents the down, with yards remaining. We use the detection of a time surface form and the nearest qualifying quarter text as the right pick.
\newline
\textbf{Correct time text selection by prioritizing a surface form (KC3)}: It is often observed that there exist multiple candidates for the same semantic text but with different surface forms. For example, as shown in Figure \ref{fig:knowledge-constraints}d, \textit{4:38} and \textit{:17} are potential candidates for the time text. We set up priorities over time text surface forms based on the external knowledge we gathered about the clock.
\newline
\textbf{Temporal consistencies across video frames (KC4)}: Finally, with surface form consistency in place and the correct time text surface form known, we validate the text recognition output of a general-purpose model against the continuity of the corresponding text in neighbouring frames. Specifically, we convert time texts to their corresponding values in seconds or minutes and check whether they are monotonically increasing (in soccer) or decreasing (in NFL, NBA, and NHL) over groups of three consecutive frames. Recognized time texts in a clock sequence that do not follow this monotonicity can either be discarded, or corrected if the predictions in the previous and subsequent frames largely comply with monotonicity. An example of a soccer clock sequence is shown in Figure \ref{fig:knowledge-constraints}e.
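For illustration, KC1 and KC4 amount to checks of the following kind (a sketch; the team abbreviation set shown is a tiny illustrative subset of the league knowledge base):
\begin{verbatim}
import re

TIME_RE = re.compile(r"^\d*:\d+(\.\d+)?$")        # "23:21", ":17.3"
QUARTERS = {"1st", "2nd", "3rd", "4th"}
TEAMS = {"phx", "gs", "lal", "bos"}               # illustrative subset

def passes_kc1(texts):
    """At least two team strings, one quarter string and one time surface form."""
    teams = sum(t.lower() in TEAMS for t in texts)
    quarters = sum(t.lower() in QUARTERS for t in texts)
    times = sum(bool(TIME_RE.match(t)) for t in texts)
    return teams >= 2 and quarters >= 1 and times >= 1

def to_seconds(time_text):
    mins, _, secs = time_text.partition(":")
    return 60.0 * float(mins or 0) + float(secs)

def passes_kc4(time_texts, countdown=True):
    """Monotonic clock across consecutive frames (decreasing for NBA/NFL/NHL)."""
    s = [to_seconds(t) for t in time_texts]
    return all((a >= b) if countdown else (a <= b) for a, b in zip(s, s[1:]))
\end{verbatim}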
\vspace*{-4mm}
\subsection{Data augmentation for bias correction}
\label{sec:dataaug}
We observe that the typical size of images, the number of text regions (especially NFL versus other leagues/sports), and the aspect ratios (especially NBA versus soccer) vary across sports domains. To generalize models to all sports domains (e.g., soccer), we collect a wide range of different broadcaster/sport clocks across different seasons. We further analyze the size dimensions (height and width of the sports clocks) and styles of clocks by clustering them and finding representative samples and cluster population sizes.
\newline
\textbf{Aspect ratio and size sensitive custom data augmentation} (represented as custom data augmentation in algorithm \ref{algo_main}, step 11): After correction/filtering of the text string and bounding box labels using the knowledge constraints mentioned in section \ref{sec:knowledge-constraints}, we calculate the possible range of heights and widths across different sports clocks (NBA/soccer/NFL/NHL). We found that most variation is present between NBA and soccer clocks (detailed in section \ref{section:dataset}). We break down both the clock image dimension ranges and the sliced text box dimension ranges into buckets and measure the population density in each of these buckets. We then re-scale each original-sized soccer or NBA clock image to randomly fall in a less populated bucket, such that the population densities across buckets normalize. As a result, we observe that a typical NBA clock is resized within a min-max range of 0.4x-1.2x of its original size, while for a relatively small soccer clock the range is 0.7x-1.2x. This fixed range of NBA and soccer clocks shows the presence of label bias \cite{Shah_2020}. These data augmentations are carried out in addition to random resizing, which is a standard augmentation approach for text detector training.
\newline
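A sketch of this bucket-balancing resizing (illustrative; it assumes clocks are numpy image arrays, and the sampled scale is applied to both the image and its text box ground truths):
\begin{verbatim}
import random
import numpy as np

def rebalanced_scales(clock_images, n_buckets=10, seed=0):
    """Draw a target height for each clock, favouring under-populated size buckets."""
    rng = random.Random(seed)
    heights = np.array([img.shape[0] for img in clock_images], dtype=float)
    edges = np.linspace(heights.min(), heights.max() + 1e-6, n_buckets + 1)
    counts, _ = np.histogram(heights, bins=edges)
    weights = 1.0 / (counts + 1.0)                 # sparse buckets sampled more often
    scales = []
    for h in heights:
        b = rng.choices(range(n_buckets), weights=weights)[0]
        target_h = rng.uniform(edges[b], edges[b + 1])
        scales.append(target_h / h)                # aspect-ratio preserving scale
    return scales
\end{verbatim}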
\textbf{Labels correction for sport sub-domains:} Taking advantage of the bias present in training set for the text detector and recognizer model, we resize the unlabeled clock images from soccer to the typical sizes of NBA clocks and predict the labels from the NBA sub-domain trained text detector and recognizer. We then filter correct labels by applying knowledge constraints for the target sub-domain (soccer) and then scale back clock images and text box ground truths to original sizes. This is done before the data augmentation described above. Initially we train for the sub-domain of NBA by using AWS Rekognition as noisy pre-trained model. For other sports domains, we have a choice of AWS Rekognition or our NBA trained model as the pre-trained model. The choice of which pre-trained model to pick (in the wild AWS Rekognition or sub-domain trained NBA model) depends on how minimal the aspect ratio/size sensitive resizing is required such that the out-of-domain detector and recognizer perform well on the new sub-domain (e.g. new sport or league). In section \ref{section:dataset}, we discussed the individual properties of the sub-domain datasets (e.g. NBA, soccer).
\vspace*{-3mm}
\begin{algorithm}[]
\SetAlgoLined
\SetKwInOut{KwIn}{Input}
\SetKwInOut{KwOut}{Output}
\KwIn{Out-of-domain text detector and recognizer $otdr$; knowledge constraints $kc$ = \{KC1, KC2, KC3, KC4\}; sport domains $sd$ =\{NBA, soccer, NHL, NFL\}}
\KwOut{Generalized text detection and recognition models for all $sd$, $itdr$; Clean text detection and recognition training dataset $T$}
$itdr$ $ \leftarrow $ $otdr$\;
Clean text detection and recognition training dataset $T$ $ \leftarrow $ None\;
\For{$sport \leftarrow NBA$ \KwTo NFL}{
$videos$ $ \leftarrow $ set of videos from $sport$\;
\For{$i \leftarrow 1$ \KwTo number of videos}{
$clocks$ $ \leftarrow $ (cropped clocks from $videos[i]$, frame ids)\;
clocks with noisy text boxes and noisy recognized text $clocks_{n}$ $ \leftarrow $ $itdr ( clocks )$\;
clocks with clean text boxes and clean recognized text $clocks_{c}$ $ \leftarrow $ $kc (clocks_{n})$\;
$T$ $ \leftarrow $ $T\cup clocks_{c}$
}
Do custom data augmentation on $T$\;
Do stratified sampling across different clock types on $T$\;
$itdr$ $ \leftarrow $ subdomain adaptive learning update:\
Transfer learn text detection model on all text boxes from clocks in $T$;
Transfer learn text recognizer model on text regions pertaining to semantic classes (time, quarter, team names) mixed with Synth90k data\;
}
Return $itdr$ and $T$\;
\caption{Text detection and recognition models learning algorithm for all sports domains}\label{algo_main}
\end{algorithm}
\vspace{-5mm}
\section{Dataset}
\label{section:dataset}
We assemble an extensive collection of sports clocks across NBA, soccer, NFL, and NHL games and broadcasting networks, varying in look and feel, sizes, and fonts. For each such clock, we collect the text bounding boxes encompassing the different types of semantic text (like time left in the quarter, team name abbreviations and score) and the recognized strings. However, our focus is on team names, time, and quarter texts.
\newline
\textbf{Crawling and clock bounding box detection:} We crawled around 1,500 videos, which are short highlights (30 seconds - 2 min) of 720p resolution from nba.com\footnote{https://www.nba.com/sitemap\_video\_*.xml}\footnote{https://www.nba.com/sitemap\_index.xml} or longer highlight videos averaging 10 minutes in duration from official NBA and soccer league channels on YouTube\footnote{https://www.youtube.com/user/NBA}\footnote{https://www.youtube.com/channel/UCqZQlzSHbVJrwrn5XvzrzcA}\footnote{https://www.youtube.com/c/BRFootball/featured}\footnote{https://www.youtube.com/c/beinsportsusa/featured}. We use the CVAT\footnote{https://github.com/openvinotoolkit/cvat} tool to semi-automatically annotate clock object bounding boxes in around 40,000 frames of the above videos. CVAT's object tracking feature reduces the manual effort in annotating clocks, which are relatively static in their spatial position across the span of the videos. Finally, using the in-domain trained SSD \cite{LiuAESR15} model as stated in section \ref{sec:Approach}, we predict clock bounding boxes and crop 80,049 clocks from the pool of downloaded videos.
\newline
\textbf{Text detection and recognition ground truth generation using knowledge constraints:}
As mentioned in section \ref{sec:knowledge-constraints}, we resort to knowledge constraints to filter quality ground truth text bounding boxes and the corresponding semantic text strings from the noisy predictions of pre-trained models. We use AWS Rekognition \footnote{https://aws.amazon.com/rekognition/} as our pre-trained model for both detection and recognition. Figure \ref{fig:AWSModel_errors} shows detection and recognition errors made by the pre-trained model and the results of the in-domain transfer learned model on training data corrected by knowledge constraints.
\newline
\begin{figure}[htp]
\centering
\includegraphics[width=8.5cm]{Images/ConsolidatedAspectRatio.png}
\vspace*{-6mm}
\caption{Text box and image aspect ratio comparison}
\vspace*{-2mm}
\label{fig:AspecRationComparison}
\end{figure}
We generate cropped clocks for other sports like soccer, American football (NFL), and ice hockey (NHL) in the same manner. We use the in-domain trained model on the NBA clocks as the pre-trained model for other sports since it generalizes better to other sports clocks than in-the-wild trained text detectors. But we still filter predictions using knowledge constraints for the target sports.
\begin{figure}[htp]
\centering
\includegraphics[width=8.5cm]{Images/ConsolidatedSizeDistribution_new.png}
\vspace*{-6mm}
\caption{Distribution of image and text size across datasets}
\vspace*{-2mm}
\label{fig:ALLimageSize}
\end{figure}
\textbf{Research dataset and origins:}
We compare our dataset\footnote{will be made available at https://webscope.sandbox.yahoo.com/} with the properties of datasets from ICDAR challenges spanning different domains. The Born-Digital dataset, ICDAR 2015 \cite{10.1109/ICDAR.2013.221, inproceedings}, contains low-resolution images with digitally created text, taken from web content. The Focused Scene Text dataset in ICDAR 2015 \cite{10.1007/s10032-004-0134-3} has captured images of mostly horizontally aligned scene text without significant blurring. The Incidental Scene Text dataset in ICDAR 2015 \cite{7333942} contains casually captured, in-the-wild images, including blur and arbitrary text alignments. The DeText dataset in ICDAR 2017 \cite{8270166} has images specifically of graphs, tables, and charts with text, focused on biomedical literature. The SROIE dataset, ICDAR 2019 \cite{8395172, 10.1109/ICDAR.2013.221, 7333942}, is generated by combining data from different resources and is focused on reading text from images obtained by scanning receipts and invoices.
\begin{table}
\caption{Datasets text properties and our dataset size}
\vspace{-4mm}
\begin{subtable}{\columnwidth}
\captionsetup{size=scriptsize}
\caption{Comparison of properties between different datasets. \textsuperscript{*} denotes strings composed only of digits, punctuation and space characters}
\vspace{-2mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccccc}
\toprule
Dataset& Textboxes& Words\textsuperscript{*} & Characters & Characters\textsuperscript{*} & Words\textsuperscript{*} (\%)& Characters\textsuperscript{*} (\%)\\
\midrule
IncidentalSceneText& 11875 & 222 & 46070 & 1418 & 1.86(\%) & 3.07(\%)\\
FocusedSceneText& 849 & 40 & 4784 & 296 & 4.71 (\%) & 6.81 (\%)\\
BornDigital& 4204 & 403 & 21317 & 2164 & 9.58 (\%) & 10.15 (\%)\\
SROIE& 45279 & 14561 & 525183 & 222216 & 32.16 (\%) & 42.31 (\%)\\
DeText& 2314 & 1026 & 17199 & 5930 & 44.33 (\%) & 34.47 (\%)\\
NBA& 397807 & 165527 & 1414348 & 517046 & 41.61 (\%) & 36.56 (\%)\\
Soccer& 66792 & 32471 & 188006 & 87112 & 48.62(\%) & 46.33 (\%)\\
\bottomrule
\end{tabular}%
}
\label{tab:PropertyComp}
\end{subtable}
\begin{subtable}{\columnwidth}
\captionsetup{size=scriptsize}
\caption{Total clock semantic object samples in our datasets. Knowledge-constraint-based cleaning uses KC1 and KC4, as mentioned in section \ref{sec:knowledge-constraints}; KC4 comprises KC2 and KC3}
\vspace{-2mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
Dataset & Clock Styles & Aug. NBA (subset) & Aug. soccer & Noisy & KC1 & KC4 & Clean \\
\midrule
NBA & 20 & - & - & 80049 & 6677 & 7389 & 65983 \\
Soccer & 10 & - & - & 20437 & 378 & 1221 & 18838 \\
Generalised & 30 & 29117 & 52372 & - & - & - & 81489 \\
\bottomrule
\end{tabular}%
}
\label{tab:Dataset Size}
\end{subtable}
\label{tab:Dataset and Property}
\vspace{-4mm}
\end{table}
In Figure \ref{fig:AspecRationComparison} we compare the image aspect ratios and the ground-truth text box aspect ratios across different datasets. We observe that our sports clock sub-domains (NBA, soccer) have considerably higher image aspect ratios (q1--q3 quartiles in the box plots) than the other ICDAR datasets, while our text box aspect ratios are relatively small among the same datasets. We see considerable relative variation in image aspect ratios between NBA and soccer, while their text box aspect ratios are almost comparable. In Figures \ref{fig:ALLimageSize}a and \ref{fig:ALLimageSize}b we find considerable variation in \textit{image sizes} and \textit{text box sizes} between our in-domain datasets (NBA and soccer) and the various ICDAR datasets. In addition, Figures \ref{fig:ALLimageSize}c and \ref{fig:ALLimageSize}d show how the variation in NBA and soccer text box and image sizes is handled in the augmented dataset. This justifies our need for large in-domain datasets and for handling variations in image sizes across sub-domains, and it is why the aspect-ratio-preserving image resizing strategies discussed under data augmentation in section \ref{sec:dataaug} perform well. As seen in Table \ref{tab:PropertyComp}, our sports clock datasets from NBA and soccer contain text that is rich in digits and special characters (many strings consist only of digits and special characters). Text in our dataset is mostly neither blurred nor arbitrarily aligned (as it is in real scene images). Finally, in Table \ref{tab:Dataset Size} we report the dataset sizes for sports clocks. In the columns KC1 and KC4 we report the number of noisy examples rejected as we sequentially apply each knowledge constraint.
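The aspect-ratio comparison above is based on simple quartile statistics over the ground-truth boxes; the short sketch below illustrates the computation on a few hypothetical boxes.
\begin{verbatim}
import numpy as np

def aspect_ratio_quartiles(boxes):
    """Q1, median and Q3 of width/height ratios for (x1, y1, x2, y2) boxes."""
    boxes = np.asarray(boxes, dtype=float)
    ratios = (boxes[:, 2] - boxes[:, 0]) / (boxes[:, 3] - boxes[:, 1])
    return np.percentile(ratios, [25, 50, 75])

# Example with a few hypothetical clock-text boxes:
print(aspect_ratio_quartiles([(0, 0, 60, 20), (5, 5, 45, 25), (0, 0, 30, 12)]))
\end{verbatim}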
\begin{figure*}
\centering
\begin{subfigure}[b]{0.65\textwidth}
\centering
\captionsetup{size=scriptsize}
\includegraphics[width=\textwidth]{Images/ComputationalArchitecture_v6.png}
\vspace*{-8mm}
\caption{Computational Architecture}
\label{fig:computational-architecture}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.29\textwidth}
\centering
\captionsetup{size=scriptsize}
\includegraphics[width=\textwidth]{Images/Runtime.png}
\vspace*{-6mm}
\caption{Average runtime for processing a one-minute video clip from various sports domains on a V100 GPU; the legend shows clock types and average sizes}
\label{fig:Runtime bargraphs}
\end{subfigure}
\caption{Computational architecture and processing time taken by different components on a V100 GPU}
\label{fig:CompArch and Process time}
\vspace*{-4mm}
\end{figure*}
\section{System Architecture and runtime performance}
In this section, we describe the system architecture of our sports video understanding framework. Figure \ref{fig:computational-architecture} outlines the high-level architecture overview of the system.
\vspace*{-2mm}
\subsection{System Overview}
\label {subsec:sysoverview}
Both licensed and crawled sports videos (including YouTube) are processed through this computational architecture. \textit{Deep-linked, high-recall} \{player, action\} triples are extracted from the video content, in addition to the \textit{low-recall} triples obtained from text. The video metadata (video URL, title, description, video creation time) for each produced video is asynchronously added to a video information extraction queue (as shown in Figure \ref{fig:computational-architecture}, component 1). The queue is drained by a cluster of GPU-enabled computational units which act as consumers. Each video event usually consists of a video of maximum length 15 minutes. Longer full-game videos are broken into smaller segments, and playlist manifests are converted to messages which can be processed in parallel by different computational units in the cluster. Figure \ref{fig:computational-architecture} shows the different components, which are described below:
\newline\textbf{Components 1, 2}: Download the video file/segment (mp4 or otherwise) to the local disk using the video URL. In the case of crawled videos (like YouTube) we need to extract the downloadable video URL from the video player in the loaded HTML page. Multi-threaded ffmpeg (on CPU) decodes the video file to an image sequence at a reduced frame rate (one third of the original, which is usually 25 or 30 fps) and at a smaller resolution (512 x 512) than the original (usually 720p or 1080p). The lower resolution, as well as the video chunk size (in the case of long full-game videos), is chosen such that, for the given length of the video, the frames fit in the GPU's native memory in a single batch prediction call to the trained game clock object detection model (specifically, we train an SSD512 model).
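A minimal sketch of this decode step, using the ffmpeg command line through Python's \texttt{subprocess} module, is shown below; the exact flags, frame rates and output format used in production may differ.
\begin{verbatim}
import subprocess

def decode_frames(video_path, out_pattern="frames/%06d.jpg",
                  fps=10, width=512, height=512):
    """Decode a video to a reduced-rate, reduced-resolution image sequence.

    fps=10 is roughly one third of a typical 30 fps source, and 512x512
    matches the input size expected by the SSD512 clock detector.
    """
    cmd = [
        "ffmpeg", "-i", video_path,
        "-vf", "fps={},scale={}:{}".format(fps, width, height),
        "-qscale:v", "2",   # reasonably high JPEG quality
        out_pattern,
    ]
    subprocess.run(cmd, check=True)

# Example (assuming the video file and the output directory exist):
# decode_frames("highlight.mp4")
\end{verbatim}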
\newline\textbf{Component 3}: Predicts bounding box coordinates for each frame in the input image sequence. After prediction, we drop the reduced image frame sequence from memory to keep the memory footprint low.
\newline\textbf{Component 4}: After detecting clock bounding boxes, we scale the bounding box coordinates back to the original image resolution of the video so that we can crop out natural-sized clock images (again using an ffmpeg decode). After decoding, we hold the cropped clock image objects in memory for every frame (at the reduced processing frame rate) and send them to the text detection model to predict the text bounding boxes within the clock (as in Figure \ref{fig:computational-architecture}, component 4).
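The coordinate rescaling in this step is a simple proportional mapping from the detector's input resolution back to the native video resolution, as in the sketch below (a generic illustration rather than the production code).
\begin{verbatim}
def scale_box(box, det_size, native_size):
    """Scale a bounding box from detector resolution to native resolution.

    box         : (x1, y1, x2, y2) in detector coordinates, e.g. on 512x512
    det_size    : (width, height) of the detector input, e.g. (512, 512)
    native_size : (width, height) of the original frame, e.g. (1280, 720)
    """
    sx = native_size[0] / det_size[0]
    sy = native_size[1] / det_size[1]
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# Example: a clock detected on the 512x512 frame mapped to a 720p frame.
print(scale_box((20, 440, 180, 500), (512, 512), (1280, 720)))
\end{verbatim}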
\newline\textbf{Component 5}: The text regions identified within a clock are then passed to the text recognition model to extract textual strings.
\newline\textbf{Component 6}: In the post-processing stage (as in Figure \ref{fig:computational-architecture}, component 6), the textual information extracted from clock regions (teams, clock time and the current quarter) is checked for consistency across neighboring frames and filtered based on the knowledge constraints explained in section \ref{sec:knowledge-constraints}.
\newline\textbf{Components 7, 8, 9, 10}: The textual information from the post-processing step is then aligned with the game play-by-play feed (as shown in Figure \ref{fig:frame-text-alignment}). Start and end times of the events are determined per sport (based on the minimum time needed to complete an event in the given sport). The extracted information is then added to a vertical search index and also fused with a larger knowledge graph to power rich sports cards.
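As a rough illustration of the alignment step, the sketch below matches a recognized game-clock reading (quarter plus remaining time) to nearby entries of a play-by-play feed; the feed format and the tolerance are assumptions made for this example.
\begin{verbatim}
def clock_to_seconds(clock_str):
    """Convert a recognized clock string like '4:35' to seconds remaining."""
    minutes, seconds = clock_str.split(":")
    return int(minutes) * 60 + float(seconds)

def align_frame_to_events(quarter, clock_str, events, tolerance=2.0):
    """Return feed events whose (quarter, remaining time) lies within
    `tolerance` seconds of the recognized clock reading.

    events : list of dicts such as
             {"quarter": 2, "remaining": 275.0, "action": "3-pointer"}
    """
    t = clock_to_seconds(clock_str)
    return [e for e in events
            if e["quarter"] == quarter and abs(e["remaining"] - t) <= tolerance]

feed = [{"quarter": 2, "remaining": 275.0, "action": "3-pointer"},
        {"quarter": 2, "remaining": 120.0, "action": "layup"}]
print(align_frame_to_events(2, "4:36", feed))  # matches the 3-pointer at 4:35
\end{verbatim}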
\begin{table}[]
\caption{Sport leagues, YouTube channels and actions }
\vspace{-4mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}lll@{}}
\toprule
Sport leagues & YouTube channels & Detected actions \\ \midrule
NBA & NBA & 3-pointer, 2-pointer, layup, slam-dunk \\
NHL & NBC Sports, MSG, TSN, NHL (Official) & goal, turnover, shot, penalty \\
NFL & NBC Sports, NFL CBS, ESPN MNF, NFL & tackle, touchdown, penalty, field goal \\
Premier League & NBC Sports & goal, corner \\
Champions League & B/R Football & goal, yellow card, wide \\
La Liga & Bein Sports USA & goal \\
Europa League & B/R Football & goal \\ \bottomrule
\end{tabular}%
}
\vspace{-3mm}
\label{tab:coverage}
\end{table}
\begin{table}[]
\caption{E2E pipeline components' runtimes (minutes) on sports videos with an average duration of 8 minutes}
\vspace{-4mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}cccccc@{}}
\toprule
GPU & Parse video & Object detection & Text detection & Text recognition & Total runtime \\ \midrule
p100 & 2.76 & 1.34 & 4.88 & 1.64 & 10.62 \\
v100 & 1.16 & 1.05 & 5.21 & 1.73 & 9.15 \\ \bottomrule
\end{tabular}%
}
\vspace{-5mm}
\label{tab:avg runtime}
\end{table}
\begin{table*}[]
\caption{End-to-end text recognition performance on semantic classes for our models trained on the NBA domain and for models generalized to other sports sub-domains, evaluated on a selection of YouTube videos. Videos 1--6 are from soccer and the remaining videos are from NFL and NHL. Each video has a different style of clock. The models were never trained on clock samples from NFL or NHL.}
\vspace{-4mm}
\resizebox{\textwidth}{!}{%
\begin{tabular}{llllllllllllll}
\toprule
Video & Video 1 & Video 2 & Video 3 & Video 4 & Video 5 & Video 6 & Video 7 & Video 8 & Video 9 & Video 10 & Video 11 & Video 12 & Video 13 \\
\midrule
YouTube Id& ivHTSIzpzHo & FKbZYlcBNzQ & BN4lomASb1k & YycQcJKLBeM & aAGDem2XWi8 & LUvlSDwO2js & nMJVlo2h104 & T2DisWdsibw & dtoWs31-qj8 & RxKjW0Srk14 & xb4\_OYxoH0Q & xHBUkVqFHks & RJR3bAW4IIg \\
\midrule
Frames & 9330 & 9600 & 21750 & 20370 & 10230 & 13650 & 20220 & 16260 & 7590 & 12210 & 18180 & 15750 & 15330 \\
\midrule
Clocks & 5021 & 6088 & 13832 & 12917 & 7146 & 2168 & 18136 & 15554 & 6254 & 6575 & 17506 & 5831 & 10053 \\
\midrule
acc e-to-e (NBA models) & 0.985 & 0.571 & 0.97 & 0.881 & 0.972 & 0.971 & 0.894 & 0.897 & 0.969 & 0.992 & 0.48 & 0.981 & 0.482 \\
\midrule
acc e-to-e (Final models) & 0.991 & 0.923 & 0.983 & 0.992 & 0.983 & 0.973 & 0.996 & 0.994 & 0.993 & 0.993 & 0.61 & 0.992 & 0.997 \\
\bottomrule
\end{tabular}%
}
\label{tab:NFL-NHL}
\vspace*{-3mm}
\end{table*}
\subsection{Runtime performance}
Table \ref{tab:coverage} shows a snapshot of the different YouTube channels we experimented on with our end-to-end processing pipeline, along with our licensed videos. NBA videos mostly contain small-sized clocks (similar to the soccer leagues Premier League, Champions League and La Liga, and to NHL), whereas NFL clocks are larger. Figure \ref{fig:CompArch and Process time}b compares the processing times for the different leagues. Since clock object detection is performed on a standard reduced size of the video (mentioned in section \ref{subsec:sysoverview}), its prediction times are comparable across leagues, while text detection time is a function of the clock size (NFL being the largest) since it is performed on natural-sized clocks. Finally, bigger clocks imply more text regions, which increases prediction time (as seen for NFL). Table \ref{tab:avg runtime} shows the average runtimes of the different components across sports on Nvidia P100 and V100 GPUs, both with 16GB of memory. Parse video (which is largely ffmpeg decode, running on CPUs) times differ because the two machines have different CPUs. Most of the GPU-bound components perform similarly on both GPU types.
\section{Training Results}
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{Images/AWS_Errors.png}
\captionsetup{size=scriptsize}
\vspace*{-6mm}
\caption{AWS Rekognition service vs our model trained on NBA clocks}
\label{fig:AWSModel_errors}
\end{subfigure}
\begin{subfigure}[b]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{Images/NBAModel_incorrect.png}
\captionsetup{size=scriptsize}
\vspace*{-6mm}
\caption{Text detection generalization issues across sub-domains and corrections after training}
\label{fig:NBAModel_errors}
\end{subfigure}
\caption{Generalization errors and corrected predictions by transfer learned models}
\label{fig:Clock issues}
\vspace{-4mm}
\end{figure}
\begin{table}[]
\caption{Performance of our models trained on the NBA dataset against the baseline}
\vspace{-4mm}
\begin{subtable}{0.4\textwidth}
\captionsetup{size=scriptsize}
\caption{Comparison of precision and recall over semantic classes and all classes for the EAST model transfer-learned on NBA clocks, with the pre-trained model as the baseline}
\vspace{-2mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lllllllll}
\toprule
IoU & P\_scb & P\_sc & R\_scb & R\_sc &P\_acb & P\_ac & R\_acb & R\_ac \\
\midrule
0.5 & 0.992 & 0.996 & 0.988 & 0.998 & 0.922 & 0.942 & 0.844 & 0.971 \\
0.6 & 0.983 & 0.995 & 0.988 & 0.998 & 0.889 & 0.934 & 0.839 & 0.971 \\
\textbf{0.7} & \textbf{0.88} & \textbf{0.991} & \textbf{0.987} & \textbf{0.998} & \textbf{0.736} & \textbf{0.915} & \textbf{0.812} & \textbf{0.971} \\
0.8 & 0.504 & 0.958 & 0.977 & 0.998 & 0.381 & 0.851 & 0.692 & 0.968 \\
0.9 & 0.097 & 0.57 & 0.893 & 0.997 & 0.067 & 0.468 & 0.283 & 0.944 \\
\bottomrule
\end{tabular}
}
\label{tab:EASTvsBaseline}
\end{subtable}
\begin{subtable}{0.4\textwidth}
\captionsetup{size=scriptsize}
\caption{Comparison of accuracy over semantic classes and all classes for the CRNN model trained jointly on text from our NBA clock semantic classes and a subset of Synth90k, against the baseline}
\vspace{-2mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lllllllll}
\toprule
IoU & Acc\_scb & Acc\_sc & Acc\_acb & Acc\_ac & C\_sc & I\_sc & C\_ac & I\_ac \\
\midrule
0.5 & 0.849 & 0.932 & 0.581 & 0.798 & 71821 & 5260 & 152761 & 38551 \\
0.6 & 0.855 & 0.933 & 0.593 & 0.804 & 71812 & 5168 & 152513 & 37199 \\
\textbf{0.7} & \textbf{0.877} & \textbf{0.936} & \textbf{0.622} & \textbf{0.814} & \textbf{71679} & \textbf{4898} & \textbf{151412} & \textbf{34519} \\
0.8 & 0.932 & 0.949 & 0.698 & 0.832 & 69599 & 3752 & 143864 & 28976 \\
0.9 & 0.962 & 0.97 & 0.743 & 0.854 & 41864 & 1315 & 81132 & 13863 \\
\bottomrule
\end{tabular}
}
\label{tab:CRNNvsBaseline}
\end{subtable}
\begin{subtable}{0.4\textwidth}
\captionsetup{size=scriptsize}
\caption{Comparison of precision and recall over semantic classes and all classes for the end-to-end text detection and recognition system against the baseline}
\vspace{-2mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lllllllll}
\toprule
IoU & P\_scb & P\_sc & R\_scb & R\_sc &P\_acb & P\_ac & R\_acb & R\_ac \\
\midrule
0.5 & 0.829 & 0.925 & 0.981 & 0.998 & 0.536 & 0.752 & 0.759 & 0.964 \\
0.6 & 0.826 & 0.924 & 0.981 & 0.998 & 0.527 & 0.751 & 0.756 & 0.964 \\
\textbf{0.7} & \textbf{0.762} & \textbf{0.923} & \textbf{0.979} & \textbf{0.998} & \textbf{0.457} & \textbf{0.745} & \textbf{0.729} & \textbf{0.964} \\
0.8 & 0.477 & 0.896 & 0.967 & 0.998 & 0.266 & 0.708 & 0.611 & 0.962 \\
0.9 & 0.093 & 0.539 & 0.85 & 0.997 & 0.05 & 0.399 & 0.226 & 0.935 \\
\bottomrule
\end{tabular}
}
\label{tab:E2EvsBaseline}
\end{subtable}
\label{tab:OurModelsvsBaseline}
\end{table}
\begin{table}[]
\caption{Performance of our models trained on the NBA\_Soc\_Augmented dataset}
\vspace{-4mm}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|rrrr|rr|rrrr|}
\toprule
&
\multicolumn{4}{c|}{\textbf{EAST}} &
\multicolumn{2}{c|}{\textbf{CRNN}} &
\multicolumn{4}{c|}{\textbf{End-to-End}} \\ \hline
\textbf{IoU} &
\multicolumn{1}{l}{\textbf{P\_sc}} &
\multicolumn{1}{l}{\textbf{R\_sc}} &
\multicolumn{1}{l}{\textbf{P\_ac}} &
\multicolumn{1}{l|}{\textbf{R\_ac}} &
\multicolumn{1}{l}{\textbf{Acc\_sc}} &
\multicolumn{1}{l|}{\textbf{Acc\_ac}} &
\multicolumn{1}{l}{\textbf{P\_e2e\_sc}} &
\multicolumn{1}{l}{\textbf{R\_e2e\_sc}} &
\multicolumn{1}{l}{\textbf{P\_e2e\_ac}} &
\multicolumn{1}{l|}{\textbf{R\_e2e\_ac}} \\ \hline
0.5 &
0.995 &
0.989 &
0.949 &
0.935 &
0.937 &
0.839 &
0.93 &
0.989 &
0.796 &
0.923 \\
\textbf{0.6} &
\textbf{0.99} &
\textbf{0.989} &
\textbf{0.936} &
\textbf{0.934} &
\textbf{0.939} &
\textbf{0.846} &
\textbf{0.928} &
\textbf{0.989} &
\textbf{0.792} &
\textbf{0.923} \\
0.7 &
0.969 &
0.989 &
0.9 &
0.931 &
0.945 &
0.858 &
0.913 &
0.989 &
0.772 &
0.921 \\
0.8 &
0.869 &
0.988 &
0.767 &
0.92 &
0.955 &
0.877 &
0.825 &
0.988 &
0.673 &
0.91 \\
0.9 &
0.378 &
0.973 &
0.309 &
0.823 &
0.951 &
0.882 &
0.359 &
0.972 &
0.272 &
0.804 \\
\bottomrule
\end{tabular}
}
\label{tab:evaluation_metrics}
\end{table}
\textbf{Text detector training}: We fine-tune EAST \cite{zhou2017east}, pre-trained on the Incidental Scene Text datasets \cite{7333942} of ICDAR 13 and 15, using the Adam optimizer, on both semantic (time, quarter, teams) and other text regions (to avoid the latter being treated as hard negative samples), with a learning rate of 0.0001 and a batch size of 12. Maximum and minimum text box sizes in training were 800px and 10px, with a minimum height/width ratio of 0.1, for 65 epochs on NBA data followed by 35 epochs on the NBA\_Soc\_Augmented dataset.
\newline\textbf{Text recognizer training}: We train a CRNN \cite{shi2015endtoend} model from scratch on clock text regions belonging to the semantic classes of time, quarter and teams (using the SGD optimizer with a learning rate of 0.1 and a dropout rate of 0.25) at a batch size of 256. The maximum label length is 23, with the number of output time steps set to 50. We trained the CRNN \cite{shi2015endtoend} model on a mix of NBA and Synth90k \cite{synth90k1,synth90k2} data for 50 epochs, followed by fine-tuning on NBA\_Soc\_Augmented (rich in special characters) for 32 epochs.
\newline In Table \ref{tab:EASTvsBaseline} we report the performance of the in-domain trained EAST (transfer-learned from the baseline) against the baseline EAST pre-trained on the Incidental Scene Text datasets \cite{7333942} of ICDAR 13 and 15. Together with Tables \ref{tab:CRNNvsBaseline}, \ref{tab:E2EvsBaseline} and \ref{tab:evaluation_metrics}, we compare end-to-end performance as we add sports domains to our training datasets (namely NBA or NBA\_Soc\_Augmented). We do not train specifically on clocks obtained from American football (NFL) or ice hockey (NHL), but find that our model trained on the combined dataset (as described in section \ref{sec:dataaug}) generalizes well. \begin{math}P\_*, R\_*\end{math} denote precision and recall. The subscripts \begin{math}sc, scb, ac, acb\end{math} signify, respectively, semantic classes only, baseline performance on semantic classes, all classes (i.e. text boxes of semantic text areas plus other text), and baseline performance on all classes. True positives (TP), false positives (FP) and false negatives (FN) are calculated based on the intersection over union (IoU) between predicted and ground truth boxes, at the thresholds [0.5, 0.6, 0.7, 0.8, 0.9]. We find that even at a threshold of 0.8 the in-domain EAST model's precision and recall are very high. In Table \ref{tab:CRNNvsBaseline}, we compare the performance of an in-domain trained CRNN coupled with the baseline EAST model versus the in-domain trained EAST model. It would be unfair to compare an in-the-wild trained CRNN model with an in-domain trained CRNN model, since the former does not cover the full set of characters required to predict the semantic text successfully. Thus, in Table \ref{tab:CRNNvsBaseline}, we test the robustness of the in-domain trained CRNN model's capability to predict an entire semantic text string (i.e. full team names, time, quarter) correctly under varying degrees of noise in the text box predictions (based on IoU thresholds). We treat any partially correct recognition as a false positive. \begin{math}Acc\_*\end{math} signifies accuracy, with the subscripts following the same convention as above. Here also we see that at a low IoU threshold (0.5 in the first row), i.e. under supposedly noisy text box predictions, the accuracy of the in-domain trained CRNN model remains considerably high. In Table \ref{tab:E2EvsBaseline}, we measure the end-to-end performance of both EAST (pre-trained baseline versus in-domain trained) and CRNN (in-domain trained), accounting for both systems in the precision and recall calculation. Across Tables \ref{tab:EASTvsBaseline}, \ref{tab:CRNNvsBaseline} and \ref{tab:E2EvsBaseline} we find that, at an operating range of IoU=0.7 for EAST's predicted text boxes, the end-to-end system performs well on semantic classes. Table \ref{tab:evaluation_metrics} shows the component-wise performance of both EAST and CRNN, as well as the end-to-end performance, on the augmented NBA\_Soc\_Augmented dataset discussed in section \ref{sec:dataaug} and analyzed in section \ref{section:dataset}. We find that at an operating range of IoU=0.6 we get high precision and recall on the semantic texts. Table \ref{tab:NFL-NHL} shows end-to-end performance on fresh videos scraped from YouTube across different sports, comparing the accuracy of models trained only on NBA data versus NBA\_Soc\_Augmented data. The performance on video 11 is unusually low, primarily because we encounter a completely new style of clock.
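The IoU-based matching behind these tables can be summarized by the short sketch below; it is a minimal illustration of the metric (greedy one-to-one matching at a given threshold), not our exact evaluation script.
\begin{verbatim}
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def precision_recall(pred_boxes, gt_boxes, threshold=0.7):
    """Greedy one-to-one matching of predictions to ground truth boxes."""
    matched, tp = set(), 0
    for p in pred_boxes:
        best_j, best_iou = None, 0.0
        for j, g in enumerate(gt_boxes):
            if j not in matched and iou(p, g) > best_iou:
                best_j, best_iou = j, iou(p, g)
        if best_j is not None and best_iou >= threshold:
            matched.add(best_j)
            tp += 1
    precision = tp / len(pred_boxes) if pred_boxes else 0.0
    recall = tp / len(gt_boxes) if gt_boxes else 0.0
    return precision, recall

preds = [(10, 10, 50, 30), (60, 12, 90, 28)]
gts = [(12, 11, 52, 31), (200, 200, 240, 220)]
print(precision_recall(preds, gts, threshold=0.5))   # -> (0.5, 0.5)
\end{verbatim}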
\section{Conclusion and future work} Figures \ref{fig:AWSModel_errors} and \ref{fig:NBAModel_errors} show how in-the-wild, or incompletely in-domain trained, detection and recognition models can impact precision and recall in a highly demanding multimodal alignment scenario. We present an end-to-end system which scales both in terms of training and runtime. Prior works use play-by-play alignment over full-game videos as a mechanism for generating training data; our work showcases the intricacies of using a similar technique on highlight videos in a production environment. Finally, by running this pipeline in production we gather a rich dataset of fine-grained shooting actions, which we hope to contribute to the research community in the future.
\clearpage
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Introduction}
In the article \cite{efthimiou}, a theoretical study to construct a reasonable model for allocating dominance space to each player during a soccer match
was initiated. There was one fundamental requirement in the quest for the model: it had to be driven by the formulas which are relevant to the underlying dynamics of the activity. This is a common practice in natural phenomena but it is often conveniently ignored by other practitioners --- mathematicians and computer scientists --- who often build models by fitting arbitrary curves to data. In order to derive a meaningful curve fitting, the dynamical principles which generate the observed behavior must necessarily be used. Hence, in \cite{efthimiou}, starting with the kinematical equations which describe the motion of the players, we were able to conclude that the standard Voronoi diagram is not the accurate concept for computing dominance space. Instead, it must be replaced by a compoundly weighted Voronoi diagram which takes into account the maximal speeds that can be reached by the players, as well as their time delays in response to an event.
Despite the progress in the original article, there are many additional details which must be understood. In particular, two assumptions stand out as the most important to be removed: for each player, the model assumed an isotropic coverage of space and a constant acceleration up to a maximum speed. However, it is intuitively obvious that players already in motion find it easier to challenge the ball at points lying in the `forward cone' of their motion and more difficult to challenge it at points lying inside the `backward cone' of their motion. Also, we know that players do not maintain a constant acceleration while sprinting. A drag force from the air resists a player's acceleration proportionally to the square of their speed \cite{Hill}. In addition, there is some internal friction limiting acceleration within the player, since even in the absence of air drag players reach a maximum speed. At the suggestion of biomechanics \cite{Furu}, we take this friction to be proportional to speed. These two effects --- directional bias and friction --- are added to the model in this paper.
With the improved model, we are able to create diagrams which more accurately depict the dominance regions of players. These diagrams often carry conceptually interesting aspects which are not present in the standard Voronoi model. These include, but are not limited to, disconnected and non-convex regions for individual players.
The new additions to the model further demonstrate the appeal of approaching soccer analysis with a theoretical mindset: the assumptions of the model are physically founded.
They are neither hypothetical probabilistic guesses, nor designed to produce a certain result. Instead, they are designed to accurately depict the basic actions of soccer. Further research into the abilities of individual players or the interactions between players can be naturally included into the model, and so the model can match the continual growth in our understanding of soccer.
This further refinement of the assumptions will lead to increasingly complex and enlightening models of the sport.
It is important to stress that our results are novel and quite distinct from other similar attempts.
Taki and Hasegawa had been thinking about the player dominance region in soccer as far back as 1996. In 2000, they proposed \cite{TH} what is probably the first attempt to quantify the dominance area with the player's motion taken into account. Their proposal has the same motivation as our original article \cite{efthimiou} but, unfortunately, besides stating that the dominance region should be based on a shortest time principle and proposing two possibilities for the time-dependence of the player's acceleration, the authors do not work out an explicit analytical model. In particular, their proposals for the acceleration are as follows: (a) It has the same value in all directions and (b) the acceleration is isotropic at low speeds but as the speed of the player increases, the acceleration becomes anisotropic.
Taki and Hasegawa applied their ideas to scenes of games that had been recorded by conventional cameras. Hence, they were lacking precise tracking data and thus accurate Voronoi diagrams. However, for the data they had, they have done a decent job and motivated many other researchers (including us) to seriously think about the problem.\footnote{For a summary of what the Voronoi diagrams are and what they are not, as well as a review of related efforts, please see \cite{efthimiou}.}
One important deficiency in Taki and Hasegawa's idea is that it allows the players to reach unreasonably high speeds if time is allowed to run long enough. In the models of \cite{efthimiou}, this difficulty was bypassed by arguing that the players reach a target point with their highest speed (characteristic speed).
Our models include several parameters. Two parameters already present in \cite{efthimiou} are the maximal speeds (characteristic speeds) and the reaction time delays of the players. Unfortunately, we have not found any source which records them for each player. EA Sports' FIFA game
rates the players based on these (and other data) but provides no exact values. Due to this drawback, in this paper we set all time delays to zero. However, we do not do the same for the characteristic speeds. Although the same difficulty exists for them, there is extensive literature discussing maximal speeds of soccer players without identifying the players explicitly. In particular, the paper \cite{Djaoui} reports maximal speeds per position. Hence, in our paper, we take the characteristic speed to be $8.62\ \text{m/s}$ for wing defenders, $8.50\ \text{m/s}$ for central defenders, $8.76\ \text{m/s}$ for wing midfielders, $8.58\ \text{m/s}$ for central midfielders, and $8.96\ \text{m/s}$ for forwards. We were unable to find any studies of the maximal speeds of goalkeepers, so we took their characteristic speed to be $8.25\ \text{m/s}$. Although this is not optimal, it provides a good approximation and a step forward.
To produce pitch diagrams based on our models,
we have used the open data set (both tracking and event data) provided by Metrica Sports \cite{Metrica}. The data consists of three games in the standard CSV format set by Metrica. There are no references to the players' names, teams or competitions. In each axis the positioning data goes from 0 to 1 with the top left corner of the pitch having coordinates $(0, 0)$, the bottom right corner having coordinates $(1, 1)$ and the kick off point having coordinates $(0.5, 0.5)$. For our presentation, the pitch dimensions have been set to $105\,\text{m} \times 68\,\text{m}$. Finally, the tracking and the event data are synchronized. We have used Python 3+ for our analysis.
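For concreteness, the conversion from Metrica's normalized coordinates to pitch coordinates in meters used for our diagrams amounts to a simple rescaling; a minimal sketch is given below (the function name is ours, not part of the Metrica format).
\begin{verbatim}
PITCH_LENGTH = 105.0   # meters, along x
PITCH_WIDTH = 68.0     # meters, along y

def to_meters(x_norm, y_norm):
    """Map (x, y) in [0, 1] x [0, 1] to meters, keeping the top-left origin."""
    return x_norm * PITCH_LENGTH, y_norm * PITCH_WIDTH

print(to_meters(0.5, 0.5))   # kick-off point -> (52.5, 34.0)
print(to_meters(1.0, 1.0))   # bottom-right corner -> (105.0, 68.0)
\end{verbatim}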
\section{Construction of the Dominance Regions}
Consider two players P$_1$ and P$_2$ on an infinite plane and let $r_1, r_2$ be their respective distances from any point P on the plane.
Also, let $\vec v_1, \vec v_2$ be the initial velocities and $\vec V_1, \vec V_2$ be the final velocities when they reach the point P.
As in \cite{efthimiou}, we will assume that the players, when they compete to reach the point P, place their maximum effort to reach
it with their characteristic speeds. Finally, we assume that each player reacts at different time to make a run towards the point P. Player one reacts at time $t_1$ and player two at time $t_2$ after a `signal' was given. Depending on the specific details for each player's motion, the points that each can reach will be given by the functions $r_1=r_1(t,t_1)$ and $r_2=r_2(t,t_2)$.
Our goal is to determine the locus of all points which the two players can reach simultaneously,
$$
r_1(t,t_1) = r_2(t,t_2),
$$
for any later time $t$ after the `signal'. Knowing all such loci for the 22 players on the pitch allows us to construct the dominance areas (Voronoi regions) of the players.
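Although the following sections derive analytical forms for these loci, the dominance regions can always be obtained numerically by assigning each pitch location to the player who reaches it first. The Python sketch below does this on a grid for an arbitrary arrival-time function; it is a generic illustration of the construction (here instantiated with the uniform-speed assumption of the standard Voronoi diagram), not the exact code used for our figures.
\begin{verbatim}
import numpy as np

def dominance_regions(players, arrival_time, nx=210, ny=136,
                      length=105.0, width=68.0):
    """Assign every grid point of the pitch to the player who reaches it first.

    players      : list of per-player parameter dictionaries
    arrival_time : function (player, X, Y) -> time needed to reach (X, Y)
    Returns an (ny, nx) array of player indices.
    """
    xs = np.linspace(0.0, length, nx)
    ys = np.linspace(0.0, width, ny)
    X, Y = np.meshgrid(xs, ys)
    times = np.stack([arrival_time(p, X, Y) for p in players])
    return np.argmin(times, axis=0)

def uniform_time(player, X, Y, V=8.5):
    """Standard Voronoi case: every player runs at the same constant speed V."""
    return np.hypot(X - player["x"], Y - player["y"]) / V

players = [{"x": 40.0, "y": 30.0}, {"x": 60.0, "y": 40.0}]
labels = dominance_regions(players, uniform_time)
print(labels.shape, np.bincount(labels.ravel()))
\end{verbatim}
Replacing \texttt{uniform\_time} with an arrival time built from any of the factors $A$ derived below reproduces the corresponding weighted diagrams.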
{\begin{figure}[h!]
\centering
\setlength{\unitlength}{1mm}
\begin{picture}(100,45)
\put(0,-3){\includegraphics[width=10cm]{BipolarCoordinates}}
\put(42,24){\footnotesize $\vec r_2$}
\put(14,21){\footnotesize $\vec r_1$}
\put(88,8.5){\footnotesize $\vec v_2$}
\put(23.5,15){\footnotesize $\vec v_1$}
\put(23.5,36){\footnotesize P}
\put(9,1){\footnotesize P$_1$}
\put(74,1){\footnotesize P$_2$}
\put(85,5.4){\footnotesize $\alpha_2$}
\put(16,6){\footnotesize $\alpha_1$}
\put(76,8){\footnotesize $\theta_2$}
\put(15,11){\footnotesize $\theta_1$}
\end{picture}
\caption{\footnotesize Bipolar coordinates for the calculation of the borderline of the two Voronoi regions. Player 1 uses the polar coordinates $(r_1,\theta_1)$ while player 2 uses the polar coordinates $(r_2,\theta_2)$. The polar angles $\theta_1, \theta_2$ are computed from the corresponding directions of motion $\vec v_1$ and $\vec v_2$. These directions do not necessarily coincide with the line P$_1$P$_2$ joining the two players.}
\label{fig:BipolarCoordinates}
\end{figure}}
\subsection*{\normalsize Past Results}
Although the standard Voronoi diagram was not constructed with motion in mind, its adoption is equivalent to the assumption that all players
can react without any delay and run with uniform speed V along any direction. Hence $r(t)=V\, t$ for all players. Consequently, the boundary
between any two players is given by the equation
$$
{r_1(t)\over r_2(t)} = 1
$$
and it is therefore the perpendicular bisector of the segment that joins the two players. Examples of the standard Voronoi diagrams
are shown in Figure \ref{fig:standardVoronoi}.
{\begin{figure}[h!]
\centering
\subfigure[Frame: 98202 ]{\includegraphics[width=12cm]{Standard-Voronoi-1.pdf}}\\
\subfigure[Frame: 123000]{\includegraphics[width=12cm]{Standard-Voronoi-2.pdf}}
\caption{\footnotesize Sketches of standard Voronoi diagrams representing dominance areas for players adopted from Metrica Sports open
tracking data \cite{Metrica}. It is customary, for standard Voronoi diagrams, not to show the velocities, since they are not used
in any calculation. However, the above diagrams do show the instantaneous velocities of the players at the moment
the diagram was created.}
\label{fig:standardVoronoi}
\end{figure}}
When the initial speeds and the characteristic speeds of the players are taken into account, it was shown in \cite{efthimiou} that
\begin{equation}
{r_1(t)\over r_2(t)} = {A_1\over A_2} = \text{const.}
\label{eq:O}
\end{equation}
where $A_i=(v_i+V_i)/2$, $i=1,2$.
This equation identifies the boundary between any two players as an Apollonius circle\footnote{The Apollonius circle reduces to the perpendicular
bisector when $A_1=A_2$.} surrounding the slower player. A curious
and demanding reader should refer to the original article to fully understand the underlying details of the derivation. For the current work those details are
not necessary.
Figure \ref{fig:BifocalOriginal} shows examples of boundaries (and hence of the dominance regions) between two players placed on a 212 meter by 136 meter pitch\footnote{The size of this pitch is much larger than a standard soccer pitch. However, this choice helps the reader better evaluate the way the boundaries curve. With little effort, they can mentally truncate the diagrams down to the real size.} for eight different scenarios. As expected, the boundaries are either straight lines or circular arcs. The size of the circle is determined by the ratio $A_1/A_2$.
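For reference, the center and radius of this circle follow from the classical Apollonius construction. The sketch below computes them under the assumptions of the original model of equation \eqref{eq:O} (zero relative delay, $A_i=(v_i+V_i)/2$); the equal-factor case, where the boundary degenerates to the perpendicular bisector, is not handled.
\begin{verbatim}
import numpy as np

def apollonius_circle(p1, p2, A1, A2):
    """Center and radius of the boundary circle with r1 / r2 = A1 / A2.

    p1, p2 : player positions (x, y); A1, A2 : factors (v_i + V_i) / 2, A1 != A2.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    lam = A1 / A2
    center = (p1 - lam ** 2 * p2) / (1.0 - lam ** 2)
    radius = lam * np.linalg.norm(p1 - p2) / abs(1.0 - lam ** 2)
    return center, radius

# Faster player 1 at (-10, -10), slower player 2 at (20, -10):
# the circle surrounds the slower player.
print(apollonius_circle((-10, -10), (20, -10), A1=8.5, A2=6.0))
\end{verbatim}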
{\begin{figure}[h!]
\centering
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -135^{\circ}, \alpha_2 = 45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_apo.pdf}}
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 3\, \text{m/s},\\ \alpha_1 = 0^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_apo_2.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 = 0\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = -50^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_apo_3.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\,\text{m/s}, v_2 \sim 9\,\text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 90^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_apo_4.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_apo_5.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_apo_6.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 = 6\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_apo_7.pdf}}
\subfigure[\parbox{5cm}{$v_1=3\,\text{m/s}, v_2=8\,\text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = -45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_apo_8.pdf}}
\caption{\footnotesize Sketch of the boundary between two players on a $212$ meters by $136$ meters pitch according to \cite{efthimiou}.
The bottom left corner and the upper right corner have coordinates $(-106,-68)$ and $(106,68)$ respectively.
The two players P$_1$ and P$_2$ are located at the points $(-10, -10)$ and $(20, -10)$. We assume the players have a negligible
difference in reaction time. In this model, only the magnitudes of the velocities
are assumed to have an effect; their directions do not. Finally, as there is a division by $v-9$ in our code, whenever the speeds are $v=\text{9 m/s}$ we use $v=\text{8.9999 m/s}$ to avoid a \texttt{ZeroDivisionError}. Therefore, we use $v \sim \text{9 m/s}$ rather than $v=\text{9 m/s}$ in our figure captions.}
\label{fig:BifocalOriginal}
\end{figure}}
When equation \eqref{eq:O} is used to determine the dominance regions for the 22 players of a game,
the resulting diagram for the dominance regions was called Apollonius diagram in \cite{efthimiou}. It turns out that this diagram is a variation of the standard Voronoi diagram which mathematicians refer to as the multiplicatively weighted Voronoi diagram.
The dominance areas might not always be convex, might contain holes, and might be disconnected. Examples of Apollonius diagrams are shown in
Figure \ref{fig:apollonius}.
{\begin{figure}[h!]
\centering
\subfigure[Frame: 98202]{\includegraphics[width=12cm]{Apollonius-Voronoi-1.pdf}}\\
\subfigure[Frame: 123000]{\includegraphics[width=12cm]{Apollonius-Voronoi-2.pdf}}
\caption{\footnotesize Sketches of Apollonius diagrams representing dominance areas for players adopted from Metrica Sports open data.}
\label{fig:apollonius}
\end{figure}}
When the delayed reactions and the speeds are taken into account, it was shown \cite{efthimiou} that the boundary between any two players is
a Cartesian oval. Although our theoretical models can accommodate such a parameter easily, Metrica Sports does not provide any reaction time data for the players in their open data \cite{Metrica}. Hence, when we draw diagrams in this article, we assume the difference between reaction times of the players is negligible.
\subsection*{\normalsize Inclusion of Air Drag}
In this section, we present the first extension of the model presented in \cite{efthimiou}.
When there is no air drag, the player is propelled by the frictional force $a$ acting on their feet from the ground:\footnote{Please note that we use Newton's law of motion in the form $F'=a$ instead of $F=ma$. In other words,
our force $F'$ is really force per unit mass $F/m$.}
$$
{dv\over dt} = a .
$$
Here $a$ is the value of the acceleration that we used in \cite{efthimiou}. However, when an object is moving
inside a fluid, there is a frictional force --- drag force --- from the fluid which is proportional to the square of the velocity also acting
on the object:
\begin{align}
{dv\over dt} = a- k\, v^2 , \quad a,k>0 .
\label{eq:9}
\end{align}
In general, the coefficient $k$ does not have to be constant but we will assume that this is the case.
Notice that, as the speed increases, the air drag increases too. The velocity can increase only up to the point that the air drag balances the propelling force from the ground. At that point, $dv/dt=0$ and hence
$$
v_\text{limit}^2 = {a\over k}.
$$
Equation \eqref{eq:9} can be integrated easily by separation of variables:
\begin{align}
\int_{v_0}^v {dv\over k\left( v_\text{limit}^2- v^2 \right)} = \int_{t_0}^t dt .
\label{eq:10}
\end{align}
It is known that, if $v^2<b^2$, then
$$
\int {dv\over b^2-v^2} = {1\over 2b} \, \ln{b+v\over b-v} = {1\over b} \, \tanh^{-1}{v\over b}.
$$
Since $v \le v_\text{limit}$, equation \eqref{eq:10} gives
\begin{align*}
\tanh^{-1}{v\over v_\text{limit}} - \tanh^{-1} { v_0 \over v_\text{limit} } = k v_\text{limit}\,(t-t_0) ,
\end{align*}
or
\begin{align}
\tanh^{-1}{v\over v_\text{limit}} = kv_\text{limit}\,(t-t_0) + \delta ,
\label{eq:11}
\end{align}
where we have set
$$
\delta = \tanh^{-1} {v_0\over v_\text{limit} } .
$$
Equation \eqref{eq:11} can easily be solved for $v$.
\begin{equation}
v = v_\text{limit} \, \tanh[kv_\text{limit} (t-t_0)+\delta] .
\label{eq:12}
\end{equation}
The previous equation can be integrated further, also by separation of variables, to find the distance travelled by the player:
\begin{align*}
\int_{r_0}^ rdr=& {1\over k} \, \int_{t_0}^t {d\cosh[kv_\text{limit} (t-t_0)+\delta] \over \cosh[kv_\text{limit} (t-t_0)+\delta]} ,
\end{align*}
or
\begin{align*}
r-r_0=& {1\over k} \, \, \ln {\cosh[kv_\text{limit}(t-t_0)+\delta] \over \cosh\delta} .
\end{align*}
In bipolar coordinates, the constant $r_0$ vanishes.
And since
\begin{align*}
\cosh u = {1\over\sqrt{1-\tanh^2u}} ,
\end{align*}
we have
\begin{equation}
r = -{1\over 2k}\, \ln\left[ \cosh^2\delta \, \Big(1-{v^2\over v_\text{limit}^2}\Big)\right] .
\label{eq:14}
\end{equation}
The last equation expresses the distance run by a player assuming that their speed has the value $v$.
Our assumption is that, when a player competes to reach a point, they do so by placing maximum effort and reaching it with their characteristic speed $V$.
Hence at point P:
\begin{align}
V &= v_\text{limit} \, \tanh[kv_\text{limit} (t_\text{P}-t_0)+\delta] , \label{eq:17a} \\
r_\text{P} &= -{1\over 2k}\, \ln\left[ \cosh^2\delta \, \Big(1- {V^2\over v_\text{limit}^2}\Big)\right] . \label{eq:17b}
\end{align}
Here $t_\text{P}$ is the time the player needs to reach the point P starting from their position with time delay $t_0$.
We can now use equation \eqref{eq:17a} to eliminate $k$ from equation \eqref{eq:17b}.
In particular:
$$
k = {1\over v_\text{limit} (t_\text{P}-t_0)} \, (\Delta - \delta),
$$
where we have set
$$
\Delta = \tanh^{-1} {V\over v_\text{limit}} .
$$
So,
\begin{equation}
r_\text{P} = {v_\text{limit} \,
\ln\left[ \cosh^2\delta \, \Big(1-{V^2\over v_\text{limit}^2}\Big) \right] \over 2(\delta - \Delta) }\, (t_\text{P}-t_0) .
\label{eq:19}
\end{equation}
This result is striking: It has the form
$$
r_\text{P} = A\, (t_\text{P}-t_0)
$$
with the factor $A$ dependent on the characteristic speed of the player.
Hence, when air drag is included but the remaining premises remain the same, one could rush to state that all the results of \cite{efthimiou} are transferable in this case too. However, this is not as automatic as it appears to be!
In addition to $V$, the factor $A$ depends on $v_\text{limit}$ which, in turn, depends on $a$ and $k$. Since the speed $V$ has a fixed value at any point P on the boundary of the dominance regions, the acceleration $a$ has different values along different rays (although on each ray it is constant).
This implies that $v_\text{limit}$ has a different value along each ray and complicates our analysis. Although, numerically, this issue is not a big obstacle as a computer program can be set to compute $v_\text{limit}$ ray by ray, for an analytical analysis it creates complications. To remove this difficulty,
we will assume that $k$ has also a different value along each ray such that $v_\text{limit}$ maintains a constant value for each player.
This constant value may be assumed to be different from player to player or might be universal. Having a universal value is more appealing --- an ultimate speed that no runner can surpass. From \cite{Ryu}, we see that the maximal sprinting speeds of elite sprinters approach 12 $\text{m/s}$. We assume that these athletes have not yet reached the theoretical maximum, and so take $v_\text{limit}$ to be 13 $\text{m/s}$.
Hence, we will assume that the coefficient $k$ is proportional to $a$ with the proportionality constant being the fixed number $1/v^2_\text{limit}$. With this assumption,
equation \eqref{eq:9} has the form:
$$
{dv\over dt} = a \left( 1 - {v^2\over v^2_\text{limit}} \right) .
$$
Given the previous discussion, when air drag is included, all results of \cite{efthimiou} (subject to the underlying assumptions of the original model) are indeed transferable to this case too.
This is clear in Figure \ref{fig:BifocalAirDrag}, which demonstrates the effects of a drag force by plotting the boundaries between two players for the
same scenarios as Figure \ref{fig:BifocalOriginal}. By comparing the two figures, we notice immediately that, when the boundary is straight in the original model, it remains straight when air drag is added if the players have the same initial speeds and same characteristic speeds.\footnote{Since the functional forms of $A=A(v,V)$ in the original model and the model with air drag are different, the ratio $A_1/A_2$ can be different in the two models
if this condition is not true even if $v_1+V_1=v_2+V_2$. In particular, when the condition is not true, a straight boundary in the original model becomes an Apollonius arc in the new model.}
The times required for the players to reach each point do change, but since the factors $A$ remain equal, there is no observable change in the dominance regions. When the conditions require an Apollonius circle in the original case, in general this will be the case in the new model but the radius of the circle changes due to the air drag.
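For completeness, the factor $A$ of equation \eqref{eq:19} is straightforward to evaluate numerically. The sketch below computes it from the initial speed, the characteristic speed and the universal limiting speed; the numerical values are only the illustrative ones quoted in the text, and the degenerate case $v_0=V$ is not handled.
\begin{verbatim}
import numpy as np

V_LIMIT = 13.0   # m/s, assumed universal limiting speed (see text)

def A_drag(v0, V, v_limit=V_LIMIT):
    """Factor A in r = A (t - t0) for the air-drag model of the text.

    v0 : initial speed along the ray, V : characteristic speed;
    both must be below v_limit and v0 != V.
    """
    delta = np.arctanh(v0 / v_limit)
    Delta = np.arctanh(V / v_limit)
    log_term = np.log(np.cosh(delta) ** 2 * (1.0 - (V / v_limit) ** 2))
    return v_limit * log_term / (2.0 * (delta - Delta))

# A central midfielder (V = 8.58 m/s) starting from rest
# versus one already running at 6 m/s:
print(A_drag(0.0, 8.58), A_drag(6.0, 8.58))
\end{verbatim}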
{\begin{figure}[h!]
\centering
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -135^{\circ}, \alpha_2 = 45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_drag.pdf}}
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 3\, \text{m/s},\\ \alpha_1 = 0^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_drag_2.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 = 0\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = -50^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_drag_3.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\,\text{m/s}, v_2 \sim 9\,\text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 90^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_drag_4.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_drag_5.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_drag_6.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 = 6\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_drag_7.pdf}}
\subfigure[\parbox{5cm}{$v_1=3\,\text{m/s}, v_2=8\,\text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = -45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_drag_8.pdf}}
\caption{\footnotesize Sketch of the boundary between two players in the presence of air resistance on a 212 meters
by 136 meters pitch.
The bottom left corner and the upper right corner have coordinates $(-106,-68)$ and (106,68) respectively.
The two players P$_1$ and P$_2$ are located at the points $(-10, -10)$ and $(20, -10)$. The cases are identical with those
of Figure \ref{fig:BifocalOriginal}. However, the radii of the Apollonius circles are altered as explained in the text
following equation \eqref{eq:19}.
The reader can compare the two results to see the changes.}
\label{fig:BifocalAirDrag}
\end{figure}}
Figure \ref{fig:frictionVoronoi} shows the Apollonius diagram with air drag included for the same frames of the Metrica Sports data.
Comparing the diagrams with those of Figure \ref{fig:apollonius}, one can see the changes in the dominance areas induced by air drag.
{\begin{figure}[h!]
\centering
\subfigure[Frame: 98202]{\includegraphics[width=12cm]{Dissipative-Voronoi-1.pdf}}\\
\subfigure[Frame: 123000]{\includegraphics[width=12cm]{Dissipative-Voronoi-2.pdf}}
\caption{\footnotesize Sketches of Apollonius diagrams in the presence of air drag for the same frames of the Metrica Sports open tracking data.}
\label{fig:frictionVoronoi}
\end{figure}}
\subsection*{\normalsize Inclusion of Air Drag and Internal Dissipation}
In physics courses, it is common to introduce students to a frictional force that is proportional to velocity $-\gamma \, v$, $\gamma>0$. Interestingly,
it has been known for some time that this frictional force can be used to model internal dissipation during sprinting \cite{Furu}. In fact, it is known that more energy is dissipated by internal friction than by air drag. Hence, as an immediate extension to
the previous model, we can imagine that the player is subjected, besides the air drag, to this force too:\footnote{Physically, a player cannot apply a force on themselves. Therefore, from a fundamental point of view this force is not an internal one. The way to state it precisely is as follows: As the player sprints at higher speeds, energy consumption affects the ability of the player to push the ground with constant force. Instead, the player pushes it with a variable effective force of $a-\gamma v$; in turn, the ground reacts and applies the opposite force on the player.}
\begin{align}
{dv\over dt} = a - \gamma\, v - k\, v^2 .
\label{eq:20}
\end{align}
The additional term quantifies the physiological reason (i.e. the construction of the human body) that prevents players from maintaining a constant acceleration even in the absence of air drag.\footnote{Incidentally, notice that the inclusion of the internal dissipation implements suggestion (b) of \cite{TH}. As explained in the introduction, Taki and Hasegawa thought that as the player's speed increases, the rate of the increase has to decrease. Lacking a model, they did not link this effect to any force; it was just an expectation.}
And, if we wish to activate the term above a critical speed $v_\text{cr}$, then the coefficient $\gamma$ must be discontinuous:
$$
\gamma = \begin{cases}
0, &\text{if } v<v_\text{cr}, \\
\gamma_0, &\text{if } v\ge v_\text{cr}.
\end{cases}
$$
It appears that this force quantifying internal dissipation is of great interest for many different sports in which sprinting occurs. Therefore, it would be a great feat if one can derive it from more fundamental principles of biokinesis and construction of the human body.
Returning to equation \eqref{eq:20}, by completing the square in the right-hand side, the equation can be written in the form
\begin{align}
{dv\over dt} = \left( a+{\gamma^2\over4k}\right) - k\, \left(v+{\gamma\over 2k}\right)^2 .
\label{eq:21}
\end{align}
Hence, it has the exact same form as the equation studied in the previous section,
\begin{align*}
{du\over dt} = a' - k\, u^2 ,
\label{eq:22}
\end{align*}
upon defining
$$
u = v+ {\gamma\over 2k}, \quad a'=a+{\gamma^2\over4k}.
$$
Integration of this equation gives
\begin{equation}
u = u_\text{limit} \, \tanh[ku_\text{limit} (t-t_0)+\delta] ,
\label{eq:23}
\end{equation}
where
$$
\delta = \tanh^{-1} {u_0\over u_\text{limit} } , \quad
u_\text{limit}^2={a'\over k}.
$$
Then, integration of \eqref{eq:23} gives:
\begin{align*}
r=& {1\over k} \, \ln {\cosh[ku_\text{limit}(t-t_0)+\delta] \over \cosh\delta} - {\gamma\over 2k} \, (t-t_0) ,
\end{align*}
or
\begin{equation}
r = -{1\over 2k}\, \ln\left[\cosh^2\delta\, \Big(1-{u^2\over u_\text{limit}^2}\Big)\right] - {\gamma\over 2k} \, (t-t_0).
\label{eq:24}
\end{equation}
At the point P where the player arrives with maximal speed,
\begin{equation*}
r_\text{P} = {u_\text{limit} \, \ln\left[ \cosh^2\delta \, \Big(1-{U^2\over u_\text{limit}^2}\Big) \right] \over 2(\delta - \Delta) }\, (t-t_0)
- {\gamma \over 2k}\, (t-t_0) ,
\end{equation*}
where
$$
\Delta = \tanh^{-1} {U\over u_\text{limit} } , \quad
U = V+ {\gamma\over 2k} .
$$
Factoring out the time,
\begin{equation}
r_\text{P} = \left[ {u_\text{limit} \, \ln\left[\cosh^2\delta\, \Big(1-{U^2\over u_\text{limit}^2}\Big)\right] \over 2(\delta - \Delta) }
- \varGamma\right] \, (t-t_0) .
\label{eq:25}
\end{equation}
Similarly to $a$ and $k$ being functions of the direction in which the player runs, $\gamma$ should be so too. After all, it relates to the fatigue of the muscles, which work differently along different directions (since, for example, $a$ is different). We will also assume that the ratio $\varGamma=\gamma/2k$ is the same in every direction.
Once more, with the assumptions presented, the points reached by the player obey an equation of the familiar form
$$
r = A\, (t-t_0),
$$
where $A$ depends on the player's initial speed $v_0$ and characteristic speed $V$, as well as on the parameters $\varGamma$ and
$u_\text{limit}$. The last two parameters could be player dependent or could be universal constants. Since we have assumed that $V$ is the maximal
speed that a player can achieve based on their physiology, it is reasonable to assume that $v_\text{limit}$ is a universal constant imposed by nature. As such, since it depends on $\varGamma$, $u_\text{limit}$ should also be a universal constant. Using our previous estimate of $v_\text{limit}$, we can calculate a reasonable value of $\varGamma$ to use in our calculations. If we assume that an average elite-level player can run from one end of the pitch to the other in 15 seconds, we find that $\varGamma = 2.237$ $\text{m/s}$.
Since $U$ is a particular value of the variable $u$, whose maximum value is $u_\text{limit}$, the ratio $U / u_\text{limit}$ must
always be less than one.
The choice of parameters entering our calculations is consistent with this restriction.
In Figure \ref{fig:BifocalTwinFriction}, we again plot the eight scenarios for two players with these new considerations.
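A corresponding numerical sketch for the factor in equation \eqref{eq:25} is given below. It assumes, as in the text, a universal $u_\text{limit}$, taken here as $u_\text{limit}=v_\text{limit}+\varGamma$ since $u=v+\varGamma$, together with the illustrative values $v_\text{limit}=13$ m/s and $\varGamma=2.237$ m/s.
\begin{verbatim}
import numpy as np

V_LIMIT = 13.0    # m/s, assumed universal limiting speed
GAMMA = 2.237     # m/s, the ratio gamma / (2 k) estimated in the text

def A_drag_dissipation(v0, V, v_limit=V_LIMIT, Gamma=GAMMA):
    """Factor A in r = A (t - t0) for the model with air drag and internal
    dissipation. Here u = v + Gamma and u_limit = v_limit + Gamma."""
    u_limit = v_limit + Gamma
    u0, U = v0 + Gamma, V + Gamma
    delta = np.arctanh(u0 / u_limit)
    Delta = np.arctanh(U / u_limit)
    log_term = np.log(np.cosh(delta) ** 2 * (1.0 - (U / u_limit) ** 2))
    return u_limit * log_term / (2.0 * (delta - Delta)) - Gamma

# Same central-midfielder example as before, now with both friction terms:
print(A_drag_dissipation(0.0, 8.58), A_drag_dissipation(6.0, 8.58))
\end{verbatim}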
{\begin{figure}[h!]
\centering
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -135^{\circ}, \alpha_2 = 45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_twin_friction.pdf}}
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 3\, \text{m/s},\\ \alpha_1 = 0^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_twin_friction_2.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 = 0\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = -50^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_twin_friction_3.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\,\text{m/s}, v_2 \sim 9\,\text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 90^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_twin_friction_4.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_twin_friction_5.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_twin_friction_6.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 = 6\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_twin_friction_7.pdf}}
\subfigure[\parbox{5cm}{$v_1=3\,\text{m/s}, v_2=8\,\text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = -45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_twin_friction_8.pdf}}
\caption{\footnotesize Sketch of the boundary between two players in the presence of air resistance and internal resistance on a
212 meters by 136 meters pitch. The bottom left corner and the upper right corner have coordinates $(-106,-68)$ and
(106,68) respectively.
The two players P$_1$ and P$_2$ are located at the points $(-10, -10)$ and $(20, -10)$. The cases are identical with those
of Figure \ref{fig:BifocalOriginal}.
The reader can compare the results with those of Figures \ref{fig:BifocalOriginal} and \ref{fig:BifocalAirDrag} to see the
relative changes.}
\label{fig:BifocalTwinFriction}
\end{figure}}
Next, we plot the boundaries between all twenty-two players for the same two Metrica Sports frames --- see Figure~\ref{fig:twinFrictionVoronoi}.
{\begin{figure}[h!]
\centering
\subfigure[Frame: 98202]{\includegraphics[width=12cm]{Twin-Dissipative-Voronoi-1.pdf}}\\
\subfigure[Frame: 123000]{\includegraphics[width=12cm]{Twin-Dissipative-Voronoi-2.pdf}}
\caption{\footnotesize Sketches of Apollonius diagrams in the presence of air drag and internal friction, representing dominance areas for players
adopted from Metrica Sports open tracking data.}
\label{fig:twinFrictionVoronoi}
\end{figure}}
\subsection*{\normalsize Introduction of Directional Bias}
For uniformly accelerated motion with acceleration $\vec a$,
at time $t$, the location of each player will be given by
$$
\vec r_i = \vec v_i \, (t-t_i) + {1\over 2} \, \vec a_i \, (t-t_i)^2, \quad i=1,2.
$$
Similarly, the velocities of the two players will be
$$
\vec V_i = \vec v_i + \vec a_i \, (t-t_i) , \quad i=1,2.
$$
We can decompose the above vectorial equations into two algebraic equations: one radial equation along P$_i$P and one equation along the perpendicular direction. It is tempting to argue that the latter direction determines the reaction time $t_i$; it is connected, at least in part, to the time necessary to make the component $v_i\sin\theta_i$ vanish. Experience indicates that this can happen fast: the player can plant their foot on the ground and receive a push in the proper direction.\footnote{In fact, these abrupt changes often lead to injuries. It would be interesting here to examine the forces on the players' legs and feet. However, this would take us off target.} Hence, the time delays due to the vanishing of the extraneous component of the velocity are expected to be very small. When the angle of the change of direction is quite big (greater than 90$^\circ$), there will be some delay, since the player will need to reorient their body and hence spin about their axis. Finally, recall from \cite{efthimiou} that the time delay parameter is a way to quantify our ignorance about all the different factors that prevent the player from responding instantaneously. For example, besides the time delay originating from changing the direction of motion (i.e. making the component $v_i\sin\theta_i$ vanish), it includes the player's physiological delay and perhaps other environmental factors that influence the motion. Eventually, we will set the time delay parameter $t_i$ to zero, but experience indicates that it does not have to be so.
The equations of motion along the radial direction are:
\begin{align*}
r_i =& v_i \, \cos\theta_i \, (t-t_i) + {1\over 2} \, a_i \, (t-t_i)^2 , \\
V_i =& v_i \, \cos\theta_i + a_i \, (t-t_i), \quad i=1,2.
\end{align*}
With the help of the second relation, we can rewrite the first in the form
\begin{equation}
r_i = A_i \, (t-t_i), \quad i=1,2,
\label{eq:15}
\end{equation}
where
$$
A_i = { v_i \, \cos\theta_i+ V_i \over 2 } , \quad i=1,2 .
$$
Result \eqref{eq:15} is similar to those in \cite{efthimiou}. However, the factor $A_i$ in the original work was \textit{isotropic}; now it has a directional bias. In particular, the factor is skewed along the direction of the player's motion. The points $r_i=\text{const.}$ that the player can reach at time $t$ from their original position lie on a limacon of Pascal.
When $v_i=V_i$, the limacon becomes a cardioid. Figure \ref{fig:limacon} plots the function $A=A(\theta)$ for five different values of the ratio $v/V$.
{\begin{figure}[h!]
\centering
\setlength{\unitlength}{1mm}
\begin{picture}(60,55)
\put(0,-3){\includegraphics[width=6cm]{limacon}}
\put(38,27.5){\footnotesize $\vec v$}
\end{picture}
\caption{\footnotesize Plot of the function $2A= V+v\, \cos\theta$ in polar coordinates for different magnitudes of $v$. The player is located at the point $(0,0)$ and moving with initial velocity $\vec v$ as indicated by the corresponding vector. When the player is at rest, $v=0$, the curve is a circle. That is, the player can move along any direction with the same level of ease. However, as their speed $v$ increases, the player finds it increasingly difficult to reach points behind them. At $v=V$, the points on the half-line behind the player's back are inaccessible.}
\label{fig:limacon}
\end{figure}}
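For readers who wish to reproduce Figure \ref{fig:limacon}, the following minimal Python sketch evaluates the directional factor $A(\theta)=(v\cos\theta+V)/2$ of the limacon; the speed values are illustrative assumptions, not data from a match.
\begin{verbatim}
import numpy as np

def radial_factor(theta, v, V):
    """Directional factor A(theta) = (v*cos(theta) + V)/2 of the limacon;
    theta is measured from the player's velocity direction."""
    return 0.5 * (v * np.cos(theta) + V)

# Illustrative values (assumptions): characteristic speed V and several
# initial speeds v, as in the polar plot of the limacon.
V = 8.0
thetas = np.linspace(0.0, 2.0 * np.pi, 361)
for v in (0.0, 2.0, 4.0, 6.0, 8.0):
    A = radial_factor(thetas, v, V)
    # r = A*(t - t_i) is the reachable distance at a fixed time; A itself
    # already shows the forward/backward asymmetry.
    print(f"v = {v:3.1f} m/s: A ranges from {A.min():.2f} to {A.max():.2f}")
\end{verbatim}
Plotting $A(\theta)$ in polar coordinates (for instance with \texttt{matplotlib}) reproduces the transition from a circle ($v=0$) to a cardioid ($v=V$).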
The points that the two players can reach simultaneously at time $t$ from their original positions satisfy
\begin{equation}
k_1 \, r_1 -k_2 \, r_2 = t_0 ,
\label{eq:6}
\end{equation}
where $k_1, k_2$ are the players' slownesses:
\begin{equation}
k_i = {1\over A_i} = {2 \over \ v_i \, \cos\theta_i + V_i } \, \raisebox{2pt}{,} \quad i=1,2 ,
\label{eq:6a}
\end{equation}
and $t_0$ the relative time delay, $t_0=t_2-t_1$.
Assuming that the players' time delays vanish or are equal, we find
\begin{equation}
{r_1 \over r_2} = {v_1 \, \cos\theta_1 + V_1 \over v_2 \, \cos\theta_2 + V_2 } .
\label{eq:7}
\end{equation}
Because the angle variables $\theta_1, \theta_2$ are measured from different axes, it is better to replace them by
$$
\phi_i = \theta_i + \alpha_i, \quad i=1,2,
$$
where $\alpha_1,\alpha_2$ are the angles the velocity vectors $\vec v_1, \vec v_2$ make with the line P$_1$P$_2$. Hence,
\begin{equation}
{r_1 \over r_2} = {v_1 \, \cos(\phi_1-\alpha_1) + V_1 \over v_2 \, \cos(\phi_2-\alpha_2) + V_2 } .
\label{eq:16}
\end{equation}
The pairs $(r_1,\phi_1)$ and $(r_2,\phi_2)$ form a set of twin polar coordinates with the angles measured from the same line.
In \cite{efthimiou}, we have reviewed the curves resulting from equation \eqref{eq:6} for constant slowness. These are the well-known Cartesian
ovals. When $t_0=0$, the Cartesian ovals reduce to Apollonius circles. In the new scenario, in which the player's slowness depends on the direction of motion, the Cartesian ovals are in general deformed into new shapes. Unfortunately, to the best of our knowledge, there is no study of such curves. Hence, at the moment, to get a feeling for how these curves look, we rely on numerical plots.
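Such numerical plots are straightforward to produce on a grid. The following Python sketch is an equivalent Cartesian reformulation of equation \eqref{eq:16} with zero relative delay: it computes the arrival-time proxy $k_i\,r_i$ at each pitch point and locates the dominance boundary as the zero level set of the difference. The characteristic speeds $V_1$, $V_2$ are assumptions chosen only for illustration.
\begin{verbatim}
import numpy as np

def time_to_point(points, pos, speed, v_dir, V):
    """Arrival-time proxy r/A for each pitch point, with the directionally
    biased factor A = (v*cos(theta) + V)/2, theta being the angle between
    the player's velocity and the direction towards the point."""
    d = points - pos                      # displacement vectors
    r = np.linalg.norm(d, axis=-1)
    cos_theta = (d @ v_dir) / np.where(r > 0, r, 1.0)
    A = 0.5 * (speed * cos_theta + V)
    return r / A                          # equals k*r in the text

# Grid matching the plotted domain (corners at (-106,-68) and (106,68)).
x, y = np.meshgrid(np.linspace(-106, 106, 425), np.linspace(-68, 68, 273))
pts = np.stack([x, y], axis=-1)

# Scenario (a) of the bifocal figures (speeds and angles from the captions);
# the characteristic speeds V1, V2 are illustrative assumptions.
P1, P2 = np.array([-10.0, -10.0]), np.array([20.0, -10.0])
v1, v2, V1, V2 = 8.0, 6.0, 9.0, 9.0
a1, a2 = np.deg2rad(-135.0), np.deg2rad(45.0)
e1 = np.array([np.cos(a1), np.sin(a1)])
e2 = np.array([np.cos(a2), np.sin(a2)])

t1 = time_to_point(pts, P1, v1, e1, V1)
t2 = time_to_point(pts, P2, v2, e2, V2)
# Zero relative delay: player 1 dominates where t1 < t2; the boundary is
# the zero level set of t1 - t2 (plot it, e.g., with plt.contour).
dominance_1 = t1 < t2
print(f"Player 1 controls {dominance_1.mean():.1%} of the plotted domain")
\end{verbatim}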
In Figure \ref{fig:BifocalDirectionalBias}, we plot the boundary between two players (equation \eqref{eq:16}) in the eight scenarios of Figure
\ref{fig:BifocalOriginal}. In these diagrams, we see that each player has stronger control in front of themselves than behind themselves, that the boundary between only two players can have disconnected regions,\footnote{A set $S=A\cup B$ is made of two disconnected sets $A$ and $B$ if $A\cap B=\varnothing$. In the examples
of Figures \ref{fig:BifocalDirectionalBias} and \ref{fig:BifocalAll}, when the dominance region is made of two sets $A$ and $B$, their intersection is either empty or contains a single point. Hence, for our purposes, we shall call a set $S$ the disconnected union of $A$ and $B$ if the intersection of $A$ and $B$ is a countable set.} and that the player with a higher initial speed does not necessarily control a greater area. The reader will certainly appreciate the boundary changes induced by the directional bias in Figure \ref{fig:BifocalDirectionalBias} compared
to Figure \ref{fig:BifocalOriginal}.
{\begin{figure}[h!]
\centering
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -135^{\circ}, \alpha_2 = 45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_directional_bias.pdf}}
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 3\, \text{m/s},\\ \alpha_1 = 0^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_directional_bias_2.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 = 0\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = -50^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_directional_bias_3.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\,\text{m/s}, v_2 \sim 9\,\text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 90^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_directional_bias_4.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_directional_bias_5.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_directional_bias_6.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 = 6\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_directional_bias_7.pdf}}
\subfigure[\parbox{5cm}{$v_1=3\,\text{m/s}, v_2=8\,\text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = -45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_directional_bias_8.pdf}}
\caption{\footnotesize Sketch of the boundary between two players in the presence of directional bias on a 212 meters by 136 meters pitch.
The bottom left corner and the upper right corner have coordinates $(-106,-68)$ and $(106,68)$ respectively.
The two players P$_1$ and P$_2$ are located at the points $(-10, -10)$ and $(20, -10)$. The cases are identical to those
of Figure \ref{fig:BifocalOriginal}. However, both the magnitude and the direction of the velocity affect the dominance regions.
The reader can easily compare the two results to see how drastic the changes are.}
\label{fig:BifocalDirectionalBias}
\end{figure}}
{\begin{figure}[h!]
\centering
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -135^{\circ}, \alpha_2 = 45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_ddf.pdf}}
\subfigure[\parbox{5cm}{$v_1 = 8\, \text{m/s}, v_2 = 3\, \text{m/s},\\ \alpha_1 = 0^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_ddf_2.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 = 0\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = -50^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_ddf_3.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\,\text{m/s}, v_2 \sim 9\,\text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 90^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_ddf_4.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_ddf_5.pdf}}
\subfigure[\parbox{5cm}{$v_1 \sim 9\, \text{m/s}, v_2 \sim 9\, \text{m/s},\\ \alpha_1 = 90^{\circ}, \alpha_2 = 180^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_ddf_6.pdf}}\\
\subfigure[\parbox{5cm}{$v_1 = 6\, \text{m/s}, v_2 = 6\, \text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = 135^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_ddf_7.pdf}}
\subfigure[\parbox{5cm}{$v_1=3\,\text{m/s}, v_2=8\,\text{m/s},\\ \alpha_1 = -45^{\circ}, \alpha_2 = -45^{\circ}$.}]%
{\includegraphics[width=6cm]{Bifocal_ddf_8.pdf}}
\caption{\footnotesize Sketch of the boundary between two players in the presence of directional bias, air drag, and internal dissipation on a
212 meters by 136 meters pitch. The bottom left corner and the upper right corner have coordinates $(-106,-68)$ and $(106,68)$,
respectively. The two players P$_1$ and P$_2$ are located at the points $(-10, -10)$ and $(20, -10)$. We notice many of the same features as in Figure \ref{fig:BifocalDirectionalBias}, but the players retain some control over areas behind themselves, with minor alterations to their control in other directions.
\label{fig:BifocalAll}
\end{figure}}
Next, we plot the boundaries between all twenty-two players for two random frames and record our observations --- see Figure~\ref{fig:directionalBiasVoronoi}. It is immediately clear that players control more space in the direction of their velocity. Further, we see that this asymmetry is stronger for players with high initial speeds. We also see that the boundaries between any two players are far more complex and diverse than in previous models.
{\begin{figure}[h!]
\centering
\subfigure[Frame: 98202]{\includegraphics[width=12cm]{DDA-Voronoi-1.pdf}}\\
\subfigure[Frame: 123000]{\includegraphics[width=12cm]{DDA-Voronoi-2.pdf}}
\caption{\footnotesize Sketches of Apollonius diagrams in the presence of directional bias for two frames of the
Metrica Sports open tracking data.}
\label{fig:directionalBiasVoronoi}
\end{figure}}
\subsection*{\normalsize The Boundary in the Presence of Air Drag, Internal Dissipation and Directional Bias}
We can now combine the two models into a unified one. In this case, equation \eqref{eq:15} remains valid
\begin{equation*}
r =\Bigg[ {u_\text{limit} \, \ln\left[ \cosh^2\delta \, \Big(1-{U^2\over u_\text{limit}^2}\Big) \right] \over 2(\delta - \Delta) }
- \Gamma \Bigg] (t-t_0) ,
\end{equation*}
but
$$
\delta = \tanh^{-1} {v_0\, \cos\theta + \Gamma \over u_\text{limit} } .
$$
Therefore the borderline is determined by
\begin{equation*}
k_1 \, r_1 -k_2 \, r_2 = t_2-t_1 ,
\end{equation*}
with the players' slownesses being
\begin{equation}
k_i = \Bigg[ {u_\text{limit} \, \ln\left[ \cosh^2\delta \, \Big(1-{U^2\over u_\text{limit}^2}\Big) \right] \over 2(\delta - \Delta) }
- \Gamma \Bigg] ^{-1}, \quad i=1,2 .
\label{eq:40}
\end{equation}
To get a feeling for the result, we again plot the boundaries between two players in the same eight two-player scenarios --- see Figure \ref{fig:BifocalAll}.
Finally, we plot the boundaries between all twenty-two players for two Metrica Sports frames in Figure \ref{fig:twin_direction}.
%
{\begin{figure}[h!]
\centering
\subfigure[Frame: 98202]{\includegraphics[width=12cm]{DDF-Voronoi-1.pdf}}\\
\subfigure[Frame: 123000]{\includegraphics[width=12cm]{DDF-Voronoi-2.pdf}}
\caption{\footnotesize Sketches of Apollonius diagrams in the presence of directional bias, air drag, and internal friction for two frames of the Metrica Sports open tracking data.}
\label{fig:twin_direction}
\end{figure}}
\section{Conclusions and Discussion}
Our work in this article has focused on building models of dominance regions for soccer players in a match incorporating the effects of directional bias and frictional forces, building on the deterministic models introduced in \cite{efthimiou}.
The concept of asymmetric influence along different directions in the dominance region of the players was based on a reasonable kinematical assumption. Similarly, the introduction of the frictional forces was not arbitrary; they are well-known forces that have been verified by biokinetics researchers who have studied sprinting in athletes (\cite{Furu}). Our most polished model combines all effects.
The dominance areas show interesting features not encountered (and, most importantly, not expected) in the standard Voronoi diagrams.
In fact, starting with the result of \cite{efthimiou}, the dominance areas between any two players are separated by an
Apollonius circle\footnote{A perpendicular bisector is a special case of an Apollonius circle.} whose
size depends on the ratio of the average speeds $A_1/A_2$ of the two players, assuming that the players have a negligible difference in reaction times. When we introduce air drag and internal resistance, the boundaries between players remain Apollonius circles, but the radius of each circle changes, since the factors $A$ have a different functional form compared to the original case.
In general, these models predict that, given two players with the same characteristic speed, the player with the greater initial speed is favored. This coincides with the widely accepted idea that it is never a good idea for soccer players to stand still or walk. Following the rhythm of the game is a good plan. When players get exhausted, substitutions are warranted.
For the models including frictional forces, the results were obtained under some simplifying assumptions for the acceleration $a$, the drag coefficient $k$ and the internal dissipation constant $\gamma$. Although there are articles discussing the sprinting of soccer players (see, for example, \cite{Gissis}, \cite{Salvo}, \cite{da Silva}), we have not been able to find precise data in the literature which would allow us to accurately compute the constants $a, k,$ and $\gamma$. So, at the moment, the precise behavior of the constants $a, k,$ and $\gamma$ remains an open problem to be settled in the future. Settling it is a relatively easy problem, and it can provide hints for modeling several other aspects of soccer players' motion.
When directional bias is added to the model, the results are modified substantially. Imagine a sprinter who, as they approach the finish line at full speed, is asked to turn back and reach one of the previous points. Obviously, it is impossible for this to happen fast; the sprinter must decelerate first and then turn back. Keeping this in mind, it is easily understood that, when directional bias is added to the model (which already includes the frictional forces), the slower player can reclaim many points in their dominance area. Hence, although it is always a good plan to follow the rhythm of the game, the direction of motion is also of paramount importance, and it can be used by either player to their advantage. In other words, a new parameter should `enter' the model (and every soccer model): the `Soccer IQ' of the players --- how fast they think and process the visual and auditory data to optimize their motion and be ready to reach vital points on the pitch before opposing players are able to. But that is a different problem and not the goal of this article.
Currently, our model is primarily an observational tool: it measures and evaluates instantaneous moments of a match as given by the tracking data. However, further models that predict outcomes or suggest strategies during a match can be constructed with it as a foundation. We hope to present such work in the near future.
Finally, we close this article by returning to the simplifying assumptions that we have made.
Although the resulting diagrams are more precise than the widely used standard Voronoi diagrams or other suggestions based on arbitrary assumptions, from a perfectionist's point of view a simplifying assumption is never fully satisfying. Our interest is to continue streamlining these models by using modern human-biokinetics research to remove more simplifying assumptions. We are currently working on this and will soon be able to report on the progress.
\section{Introduction}
In this article, we formulate a mathematical model to examine the power expended by a cyclist on a velodrome.
Our assumptions limit the model to individual time trials; we do not include drafting effects, which arise in team pursuits.
Furthermore, our model does not account for the initial acceleration; we assume the cyclist to be already in a launched effort, which renders the model adequate for longer events, such as a 4000-metre pursuit or, in particular, the Hour Record.
Geometrically, we consider a velodrome with its straights, circular arcs and connecting transition curves, whose inclusion\,---\,as indicated by \citet{FitzgeraldEtAl}\,---\,has been commonly neglected in previous studies.
While this inclusion presents a mathematical challenge, it increases the empirical adequacy of the model.
We begin this article by expressing mathematically the geometry of both the black line, which corresponds to the official length of the track, and the inclination of the track.
Our expressions are accurate representations of the common geometry of modern $250$\,-metre velodromes~(Mehdi Kordi, {\it pers.~comm.}, 2020).
We proceed to formulate an expression for power used against dissipative forces and power used to increase mechanical energy.
Subsequently, we focus our attention on a model for which we assume a constant centre-of-mass speed.
We use this assumption in a numerical example and complete the article with conclusions.
In appendices, we present derivations of expressions for the air resistance and lean angle.
\section{Track}
\label{sec:Formulation}
\subsection{Black-line parameterization}
\label{sub:Track}
To model the required power of a cyclist who follows the black line, in a constant aerodynamic position, as illustrated in Figure~\ref{fig:FigBlackLine}, we define this line by three parameters.
\begin{figure}[h]
\centering
\includegraphics[scale=0.35]{FigBlackLine}
\caption{\small A constant aerodynamic position along the black line}
\label{fig:FigBlackLine}
\end{figure}
\begin{itemize}
\item[--] $L_s$\,: the half-length of the straight
\item[--] $L_t$\,: the length of the transition curve between the straight and the circular arc
\item[--] $L_a$\,: the length of the remainder of the quarter circular arc
\end{itemize}
The length of the track is $S=4(L_s+L_t+L_a)$\,.
In Figure~\ref{fig:FigTrack}, we show a quarter of a black line for $L_s=19$\,m\,, $L_t=13.5$\,m and $L_a=30$\,m\,, which results in $S=250$\,m\,.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigTrack.pdf}
\caption{\small A quarter of the black line for a $250$\,-metre track}
\label{fig:FigTrack}
\end{figure}
This curve has continuous derivatives up to order two; it is a $C^2$ curve, whose curvature is continuous.
To formulate, in Cartesian coordinates, the curve shown in Figure~\ref{fig:FigTrack}, we consider the following.
\begin{itemize}
\item[--] The straight,
\begin{equation*}
y_1=0\,,\qquad0\leqslant x\leqslant a\,,
\end{equation*}
shown in gray, where $a:=L_s$\,.
\item[--] The transition, shown in black\,---\,following a standard design practice\,---\,we take to be an Euler spiral, which can be parameterized by Fresnel integrals,
\begin{equation*}
x_2(\varsigma)=a+\sqrt{\frac{2}{A}}\int\limits_0^{\varsigma\sqrt{\!\frac{A}{2}}}\!\!\!\!\cos\!\left(x^2\right)\,{\rm d}x
\qquad{\rm and}\qquad
y_2(\varsigma)=\sqrt{\frac{2}{A}}\int\limits_0^{\varsigma\sqrt{\!\frac{A}{2}}}\!\!\!\!\sin\!\left(x^2\right)\,{\rm d}x\,,
\end{equation*}
with $A>0$ to be determined; herein, $\varsigma$ is the curve parameter.
Since the arclength differential,~${\rm d}s$\,, is such that
\begin{equation*}
{\rm d}s=\sqrt{x_2'(\varsigma)^2+y_2'(\varsigma)^2}\,{\rm d}\varsigma
=\sqrt{\cos^2\left(\dfrac{A\varsigma^2}{2}\right)+\sin^2\left(\dfrac{A\varsigma^2}{2}\right)}\,{\rm d}\varsigma
={\rm d}\varsigma\,,
\end{equation*}
we write the transition curve as
\begin{equation*}
(x_2(s),y_2(s)), \quad 0\leqslant s\leqslant b:=L_t\,.
\end{equation*}
\item[--] The circular arc, shown in gray, whose centre is $(c_1,c_2)$ and whose radius is $R$\,, with $c_1$\,, $c_2$ and $R$ to be determined.
Since its arclength is specified to be $c:=L_a,$ we may parameterize this circular arc by
\begin{equation}
\label{eq:x3}
x_3(\theta)=c_1+R\cos(\theta)
\end{equation}
and
\begin{equation}
\label{eq:y3}
y_3(\theta)=c_2+R\sin(\theta)\,,
\end{equation}
where $-\theta_0\leqslant\theta\leqslant 0$\,, for $\theta_0:=c/R$\,.
The centre of the circle is shown as a black dot in Figure~\ref{fig:FigTrack}.
\end{itemize}
We wish to connect these three curve segments so that the resulting global curve is continuous along with its first and second derivatives.
This ensures that the curvature of the track is also continuous.
To do so, let us consider the connection between the straight and the Euler spiral.
Herein, $x_2(0)=a$ and $y_2(0)=0$\,, so the spiral connects continuously to the end of the straight at $(a,0)$\,.
Also, at $(a,0)$\,,
\begin{equation*}
\frac{{\rm d}y}{{\rm d}x}=\frac{y_2'(0)}{x_2'(0)}=\frac{0}{1}=0\,,
\end{equation*}
which matches the derivative of the straight line.
Furthermore, the second derivatives match, since
\begin{equation*}
\frac{{\rm d}^2y}{{\rm d}x^2}=\frac{y''_2(0)x_2'(0)-y'_2(0)x_2''(0)}{(x_2'(0))^3}=0\,,
\end{equation*}
which follows, for any $A>0$\,, from
\begin{equation}
\label{eq:FirstDer}
x_2'(\varsigma)=\cos\left(\dfrac{A\,\varsigma^2}{2}\right)\,, \quad y_2'(\varsigma)=\sin\left(\dfrac{A\,\varsigma^2}{2}\right)
\end{equation}
and
\begin{equation*}
x_2''(\varsigma)=-A\,\varsigma\sin\left(\dfrac{A\,\varsigma^2}{2}\right)\,, \quad y_2''(\varsigma)=A\,\varsigma\cos\left(\dfrac{A\,\varsigma^2}{2}\right)\,.
\end{equation*}
Let us consider the connection between the Euler spiral and the arc of the circle.
In order that these connect continuously,
\begin{equation*}
\big(x_2(b),y_2(b)\big)=\big(x_3(-\theta_0),y_3(-\theta_0)\big)\,,
\end{equation*}
we require
\begin{equation}
\label{eq:Cont1}
x_2(b)=c_1+R\cos(\theta_0)\,\,\iff\,\,c_1=x_2(b)-R\cos\!\left(\dfrac{c}{R}\right)
\end{equation}
and
\begin{equation}
\label{eq:Cont2}
y_2(b)=c_2-R\sin(\theta_0)\,\,\iff\,\, c_2=y_2(b)+R\sin\!\left(\dfrac{c}{R}\right)\,.
\end{equation}
For the tangents to connect continuously, we invoke expression~(\ref{eq:FirstDer}) to write
\begin{equation*}
(x_2'(b),y_2'(b))=\left(\cos\left(\dfrac{A\,b^2}{2}\right),\,\sin\left(\dfrac{A\,b^2}{2}\right)\right)\,.
\end{equation*}
Following expressions~(\ref{eq:x3}) and (\ref{eq:y3}), we obtain
\begin{equation*}
\big(x_3'(-\theta_0),y_3'(-\theta_0)\big)=\big(R\sin(\theta_0),R\cos(\theta_0)\big)\,,
\end{equation*}
respectively.
Matching the unit tangent vectors results in
\begin{equation}
\label{eq:tangents}
\cos\left(\dfrac{A\,b^2}{2}\right)=\sin\!\left(\dfrac{c}{R}\right)\,,\quad \sin\left(\dfrac{A\,b^2}{2}\right)=\cos\!\left(\dfrac{c}{R}\right)\,.
\end{equation}
For the second derivative, it is equivalent\,---\,and easier\,---\,to match the curvature.
For the Euler spiral,
\begin{equation*}
\kappa_2(s)=\frac{x_2'(s)y_2''(s)-y_2'(s)x_2''(s)}
{\Big(\big(x_2'(s)\big)^2+\big(y_2'(s)\big)^2\Big)^{\frac{3}{2}}}
=A\,s\cos^2\left(\dfrac{A\,s^2}{2}\right)+A\,s\sin^2\left(\dfrac{A\,s^2}{2}\right)
=A\,s\,,
\end{equation*}
which is indeed the defining characteristic of an Euler spiral: the curvature grows linearly in the arclength.
Hence, to match the curvature of the circle at the connection, we require
\begin{equation*}
A\,b=\frac{1}{R} \,\,\iff\,\,A=\frac{1}{b\,R}\,.
\end{equation*}
Substituting this value of $A$ in equations~(\ref{eq:tangents}), we obtain
\begin{align*}
\cos\!\left(\dfrac{b}{2R}\right)&=\sin\!\left(\dfrac{c}{R}\right)\,,\quad \sin\!\left(\dfrac{b}{2R}\right)=\cos\!\left(\dfrac{c}{R}\right)\\
&\iff\dfrac{b}{2R}=\dfrac{\pi}{2}-\dfrac{c}{R}\\
&\iff R=\frac{b+2c}{\pi}.
\end{align*}
It follows that
\begin{equation*}
A=\frac{1}{b\,R}=\frac{\pi}{b\,(b+2c)}\,;
\end{equation*}
hence, the continuity condition stated in expressions~(\ref{eq:Cont1}) and (\ref{eq:Cont2}) determines the centre of the circle,~$(c_1,c_2)$\,.
For the case shown in Figure~\ref{fig:FigTrack}, the numerical values are~$A=3.1661\times10^{-3}$\,m${}^{-2}$, $R=23.3958$\,m\,, $c_1=25.7313$\,m and $c_2=23.7194$\,m\,.
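These values follow directly from the matching conditions; a minimal Python sketch\,---\,using numerical quadrature for the Fresnel-type integrals rather than a special-function library\,---\,is given below and should reproduce the quoted numbers up to rounding.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Black-line design parameters of the 250 m track (Section 2.1).
Ls, Lt, La = 19.0, 13.5, 30.0          # a, b, c in the text
a, b, c = Ls, Lt, La

# Closed-form results of the matching conditions.
R = (b + 2.0 * c) / np.pi              # arc radius
A = 1.0 / (b * R)                      # Euler-spiral rate, kappa(s) = A*s

# End of the Euler spiral, integrating x' = cos(A s^2/2), y' = sin(A s^2/2).
x2b = a + quad(lambda s: np.cos(A * s**2 / 2.0), 0.0, b)[0]
y2b = quad(lambda s: np.sin(A * s**2 / 2.0), 0.0, b)[0]

# Centre of the circular arc from the continuity conditions (Cont1)-(Cont2).
c1 = x2b - R * np.cos(c / R)
c2 = y2b + R * np.sin(c / R)

print(f"A  = {A:.4e} 1/m^2")
print(f"R  = {R:.4f} m")
print(f"c1 = {c1:.4f} m, c2 = {c2:.4f} m")
# Expected to land near the quoted values A ~ 3.1661e-3, R ~ 23.3958,
# c1 ~ 25.7313, c2 ~ 23.7194 (up to rounding).
\end{verbatim}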
The complete track\,---\,with its centre at the origin\,,~$(0,0)$\,---\,is shown in Figure~\ref{fig:FigComplete}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigCompleteTrack.pdf}
\caption{\small Black line of $250$\,-metre track}
\label{fig:FigComplete}
\end{figure}
The corresponding track curvature is shown in Figure~\ref{fig:FigTrackCurvature}.
Note that the curvature transitions linearly from the constant value of the straight,~$\kappa=0$\,, to the constant value of the circular arc,~$\kappa=1/R$\,.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigTrackCurvature.pdf}
\caption{\small Curvature of the black line,~$\kappa$\,, as a function of distance,~$s$\,, with a linear transition between the zero curvature of the straight and the $1/R$ curvature of the circular arc}
\label{fig:FigTrackCurvature}
\end{figure}
\subsection{$\boldsymbol{C^3}$ transition curve}
\label{sub:BiTraj}
As stated by~\cite{FitzgeraldEtAl},
\begin{quote}
\label{FitzQuote}
[t]here can be a trade-off between good spatial fit (low continuity for matching a built velodrome) and a good physical fit (high continuity for matching a cyclist's path).
If this model is used to calculate the motion of cyclists on a velodrome surface then a spatial fit can be prioritised.
However, if it is used to directly model the path of a cyclist then higher continuity is required.
\end{quote}
Should it be desired that the trajectory have continuous derivatives up to order $r\geqslant 0$\,---\,which makes it of class $C^r$\,, so that its curvature is of class $C^{r-2}$\,---\,it is possible to achieve this by a transition curve $y=p(x)$\,, where $p(x)$ is a polynomial of degree $2r-1$ determined by the solution of a $4\times 4$ system of nonlinear equations, assuming a solution exists.
In case $r=2$\,, this would correspond to a transition by a polynomial of degree $2r-1=3$\,, often referred to as a cubic spiral.
In case $r=3$\,, this would correspond to a transition by a polynomial of degree $2r-1=5$\,, often referred to as a Bloss transition.
To see how to do this, for simplicity's sake, let $a=L_s$\,.
We use a Cartesian formulation, namely,
\begin{itemize}
\item[--] $y=0,$ $0\leqslant x\leqslant a$\,: the half-straight
\item[--] $y=p(x),$ $a\leqslant x\leqslant X$\,: the transition curve between the straight and circular arc
\item[--] $y=c_2-\sqrt{R^2-(x-c_1)^2}$\,: the end circle, with centre at $(c_1,c_2)$ and radius $R.$
\end{itemize}
Here $x=X$ is the point at which the polynomial transition connects to the circle $(x-c_1)^2+(y-c_2)^2=R^2$\,.
$X$\,, $c_1$\,, $c_2$ and $R$ are {\it a priori} unknowns that need to be determined.
These four values determine completely the transition zone, $y=p(x),$ $a\leqslant x\leqslant X$\,.
Given $X$\,, $c_1$\,, $c_2$ and $R$\,, we can compute $p(x)$ as the Lagrange-Hermite interpolant of the data
\[p^{(j)}(a)=0,\quad 0\leqslant j\leqslant r,\,\,\hbox{and}\,\, p^{(j)}(X)=y^{(j)}(X),\quad 0\leqslant j\leqslant r\,,\]
where $y(x)=c_2-\sqrt{R^2-(x-c_1)^2}$ is the circle.
Indeed, this is given in Newton form as
\[p(x)=\sum_{j=0}^{2r+1} p[x_0,\cdots,x_j]\prod_{k=0}^{j-1}(x-x_k)\,,\]
with
\[x_0=x_1=\cdots=x_r=a\,\,\hbox{and},\,\, x_{r+1}=\cdots=x_{2r+1}=X\]
and, for any values $t_1,t_2,\cdots,t_m$\,, the divided difference\,,
\[p[t_1,t_2,\cdots,t_m]\,,\]
may be computed from the recurrence
\[p[t_1,\cdots,t_m]=\frac{p[t_2,\cdots,t_m]-p[t_1,\cdots,t_{m-1}]}{t_m-t_1}\,,\]
with
\[p\underbrace{[a,\cdots,a]}_{{\rm multiplicity}\,\,j+1}:=0\,,\,\, 0\leqslant j\leqslant r\]
and
\[p\underbrace{[X,\cdots,X]}_{{\rm multiplicity}\,\,j+1}:=\frac{y^{(j)}(X)}{j!}\,,\,\, 0\leqslant j\leqslant r\,,\]
where again $y(x)=c_2-\sqrt{R^2-(x-c_1)^2}$ is the circle.
Note, however, that this defines $p(x)$ as a polynomial of degree $2r+1$ exceeding our claimed degree $2r-1$ by 2.
Hence, the first two of our equations are that $p(x)$ is actually of degree $2r-1$\,, i.e., its two leading coefficients are zero.
More specifically, from the Newton form,
\begin{equation}\label{eq1}
p[\,\underbrace{a,a,\cdots,a}_{r+1\,\,{\rm copies}},\underbrace{X,X,\cdots,X}_{r+1\,\,{\rm copies}}\,]=0
\end{equation}
and
\begin{equation}\label{eq2}
p[\,\underbrace{a,a,\cdots,a}_{r+1\,\,{\rm copies}},\underbrace{X,X,\cdots,X}_{r\,\,{\rm copies}}\,]=0\,.
\end{equation}
The third equation is that the length of the transition is specified to be $L_t$\,, i.e.,
\begin{equation}\label{eq3}
\int\limits_a^X \sqrt{1+\left(p'(x)\right)^2}\,{\rm d}x=L_t\,.
\end{equation}
Note that this integral has to be estimated numerically.
Finally, the fourth equation is that the remainder of the circular arc\,---\,from $x=X$ to the apex $x=c_1+R$\,---\,have length $L_a$\,.
This is easily accomplished.
The two endpoints of the circular arc are
$(X,y(X))$ and $(c_1+R,c_2)$\,, giving a circular segment of angle $\theta$\,, say, between the two vectors connecting these points to the centre, i.e., $\langle X-c_1,y(X)-c_2\rangle$ and $\langle R,0\rangle$\,.
Then,
\[\cos(\theta)=\frac{\langle X-c_1,y(X)-c_2\rangle\,\cdot\,\langle R,0\rangle} {\|\langle X-c_1,y(X)-c_2\rangle\|_2 \|\langle R,0\rangle\|_2}=\frac{X-c_1}{R}.\]
Hence the fourth equation is
\begin{equation}\label{eq4}
R\,\cos^{-1}\left(\dfrac{X-c_1}{R}\right)=L_a\,.
\end{equation}
Together, equations \eqref{eq1}, \eqref{eq2}, \eqref{eq3} and \eqref{eq4} form a system of four nonlinear equations in four unknowns, $c_1$\,, $c_2$\,, $R$ and $X$\,, determining a $C^r$ transition given by $p(x)$\,, a polynomial of degree $2r-1$\,, satisfying the design parameters.
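As a sketch of how such a system can be solved numerically, the following Python code treats the Bloss case ($r=3$) with the equivalent parameterization $p(x)=\alpha(x-a)^4+\beta(x-a)^5$\,, which automatically satisfies the interpolation conditions at $x=a$\,; the conditions at $x=X$\,, together with equations~\eqref{eq3} and \eqref{eq4}, then form a six-equation system in $(\alpha,\beta,c_1,c_2,R,X)$\,. Convergence depends on the initial guess, for which we use the Euler-spiral values of Section~\ref{sub:Track}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

Ls, Lt, La = 19.0, 13.5, 30.0
a = Ls

def p_derivs(x, alpha, beta):
    """p(x) = alpha*(x-a)^4 + beta*(x-a)^5 and its first three derivatives;
    this form already satisfies p^(j)(a) = 0 for j = 0, ..., 3."""
    w = x - a
    p0 = alpha * w**4 + beta * w**5
    p1 = 4 * alpha * w**3 + 5 * beta * w**4
    p2 = 12 * alpha * w**2 + 20 * beta * w**3
    p3 = 24 * alpha * w + 60 * beta * w**2
    return p0, p1, p2, p3

def circle_derivs(x, c1, c2, R):
    """y(x) = c2 - sqrt(R^2 - (x-c1)^2) and its first three derivatives."""
    u = x - c1
    s = np.sqrt(R**2 - u**2)
    return c2 - s, u / s, R**2 / s**3, 3 * R**2 * u / s**5

def residuals(z):
    alpha, beta, c1, c2, R, X = z
    p0, p1, p2, p3 = p_derivs(X, alpha, beta)
    y0, y1, y2, y3 = circle_derivs(X, c1, c2, R)
    arclen = quad(lambda x: np.sqrt(1 + p_derivs(x, alpha, beta)[1]**2),
                  a, X)[0]
    return [p0 - y0, p1 - y1, p2 - y2, p3 - y3,
            arclen - Lt,                        # equation (eq3)
            R * np.arccos((X - c1) / R) - La]   # equation (eq4)

# Initial guess based on the Euler-spiral solution of Section 2.1.
z0 = [5.0e-5, -1.5e-6, 25.76, 23.58, 23.39, 32.40]
alpha, beta, c1, c2, R, X = fsolve(residuals, z0)
print(f"c1 = {c1:.4f}, c2 = {c2:.4f}, R = {R:.4f}, X = {X:.4f}")
# Should land near the quoted values c1 ~ 25.7565, c2 ~ 23.5753,
# R ~ 23.3863, X ~ 32.3988, up to the numerical tolerance.
\end{verbatim}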
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigBloss.pdf}
\caption{\small Transition zone track curvature,~$\kappa$\,, as a function of distance,~$s$\,, which\,---\,in contrast to Figure~\ref{fig:FigTrackCurvature}\,---\,exhibits a smooth ($C^1$) transition between the zero curvature of the straight and the $1/R$ curvature of the circular arc}
\label{fig:FigBloss}
\end{figure}
For the velodrome under consideration, and a Bloss $C^3$ transition, with curvature $C^1$\,, we obtain $c_1=25.7565\,\rm m$\,, $c_2=23.5753\,\rm m$\,, $R = 23.3863\,\rm m$ and $X =32.3988\,\rm m$\,, which\,---\,as expected\,---\,are similar to values obtained in Section~\ref{sub:Track}.
The corresponding trajectory curvature is shown in Figure~\ref{fig:FigBloss}.
Returning to the quote on page~\pageref{FitzQuote}, the requirement of differentiability of the trajectory traced by bicycle wheels, which is expressed in spatial coordinates, is a consequence of a dynamic behaviour, which is expressed in temporal coordinates, namely, the requirement of smoothness of the derivative of acceleration, commonly referred to as a jolt.
In other words, a natural bicycle-cyclist motion, which is laterally unconstrained, ensures that the third temporal derivative of position is smooth; the black line is smoothened by this motion.
\subsection{Track-inclination angle}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{FigInclinationAngle.pdf}
\caption{\small Track inclination,~$\theta$\,, as a function of the black-line distance,~$s$}
\label{fig:FigAngle}
\end{figure}
There are many possibilities to model the track inclination angle.
We choose a trigonometric formula in terms of arclength, which is a good approximation of an actual $250$\,-metre velodrome.
The minimum inclination of $13^\circ$ corresponds to the midpoint of the straight, and the maximum of $44^\circ$ to the apex of the circular arc.
For a track of length $S$\,,
\begin{equation}
\label{eq:theta}
\theta(s)=28.5-15.5\cos\!\left(\frac{4\pi}{S}s\right)\,;
\end{equation}
$s=0$ refers to the midpoint of the lower straight, in Figure~\ref{fig:FigComplete}, and the track is oriented in the counterclockwise direction.
Figure \ref{fig:FigAngle} shows this inclination for $S=250\,\rm m$\,.
It is not uncommon for tracks to be slightly asymmetric with respect to the inclination angle.
In such a case, they exhibit a rotational symmetry by $\pi$\,, but not a reflection symmetry about the vertical or horizontal axis.
This asymmetry can be modeled by including $s_0$ in the argument of the cosine in expression~(\ref{eq:theta}).
\begin{equation}
\label{eq:s0}
\theta(s)=28.5-15.5\cos\!\left(\frac{4\pi}{S}(s-s_0)\right)\,;
\end{equation}
$s_0\approx 5$ provides a good model for several existing velodromes.
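For completeness, expressions~(\ref{eq:theta}) and (\ref{eq:s0}) translate into a one-line function; the short Python sketch below checks the stated minimum and maximum inclinations.
\begin{verbatim}
import numpy as np

def track_inclination_deg(s, S=250.0, s0=0.0):
    """Track inclination (degrees) of expressions (theta)/(s0) at black-line
    arclength s; s0 = 0 recovers the symmetric profile."""
    return 28.5 - 15.5 * np.cos(4.0 * np.pi * (s - s0) / S)

# Minimum at mid-straight (s = 0) and maximum at the arc apex (s = 62.5 m).
print(track_inclination_deg(0.0), track_inclination_deg(62.5))
\end{verbatim}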
Referring to discussions about the London Velodrome of the 2012 Olympics, \citet{Solarczyk} writes that
\begin{quote}
the slope of the track going into and out of the turns is not the same.
This is simply because you always cycle the same way around the track, and you go shallower into the turn and steeper out of it.
\end{quote}
The last statement is not obvious.
For a constant centre-of-mass speed, under rotational equilibrium, the same transition curves along the black line imply the same lean angle.
\section{Dissipative forces}
\label{sec:InstPower}
A mathematical model to account for the power required to propel a bicycle against dissipative forces is based on
\begin{equation}
\label{eq:BikePower}
P_F=F\,V\,,
\end{equation}
where $F$ stands for the magnitude of forces opposing the motion and $V$ for speed.
Herein, we model the rider as undergoing instantaneous circular motion, in rotational equilibrium about the line of contact of the tires with the ground.
In view of Figure~\ref{fig:FigFreeBody}, along the black line, in windless conditions,
\begin{subequations}
\label{eq:power}
\begin{align}
\nonumber P_F&=\\
&\dfrac{1}{1-\lambda}\,\,\Bigg\{\label{eq:modelO}\\
&\left.\left.\Bigg({\rm C_{rr}}\underbrace{\overbrace{\,m\,g\,}^{F_g}(\sin\theta\tan\vartheta+\cos\theta)}_N\cos\theta
+{\rm C_{sr}}\Bigg|\underbrace{\overbrace{\,m\,g\,}^{F_g}\frac{\sin(\theta-\vartheta)}{\cos\vartheta}}_{F_f}\Bigg|\sin\theta\Bigg)\,v
\right.\right.\label{eq:modelB}\\
&+\,\,\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,V^3\Bigg\}\label{eq:modelC}\,,
\end{align}
\end{subequations}
where $m$ is the mass of the cyclist and the bicycle, $g$ is the acceleration due to gravity, $\theta$ is the track-inclination angle, $\vartheta$ is the bicycle-cyclist lean angle, $\rm C_{rr}$ is the rolling-resistance coefficient, $\rm C_{sr}$ is the coefficient of the lateral friction, $\rm C_{d}A$ is the air-resistance coefficient, $\rho$ is the air density, and $\lambda$ is the drivetrain-resistance coefficient.
Herein, $v$ is the speed at which the contact point of the rotating wheels moves along the track, which we assume to coincide with the black-line speed.
$V$ is the centre-of-mass speed.
Since lateral friction is a dissipative force, it does negative work; the work done against it\,---\,as well as the corresponding power\,---\,is positive.
For this reason, in expression~(\ref{eq:modelB}), we consider the magnitude,~$\big|{\,\,}\big|$\,.
\begin{figure}
\centering
\includegraphics[scale=0.7]{FigFreeBody.pdf}
\caption{\small Force diagram: Inertial frame}
\label{fig:FigFreeBody}
\end{figure}
To formulate summand~(\ref{eq:modelB}), we use the relations among the magnitudes of vectors~$\bf N$\,, ${\bf F}_g$\,, ${\bf F}_{cp}$ and ${\bf F}_f$\,, illustrated in Figure~\ref{fig:FigFreeBody}.
In accordance with Newton's second law, for a cyclist to maintain a horizontal trajectory, the resultant of all vertical forces must be zero,
\begin{equation}
\label{eq:Fy0}
\sum F_y=0=N\cos\theta+F_f\sin\theta-F_g
\,.
\end{equation}
In other words, ${\bf F}_g$ must be balanced by the sum of the vertical components of normal force, $\bf N$\,, and the friction force, ${\bf F}_f$\,, which is parallel to the velodrome surface and perpendicular to the instantaneous velocity.
Depending on the centre-of-mass speed and the radius of curvature for the centre-of-mass trajectory, if $\vartheta<\theta$\,, ${\bf F}_f$ points upwards, in Figure~\ref{fig:FigFreeBody}, which corresponds to its pointing outwards, on the velodrome;
if $\vartheta>\theta$\,, it points downwards and inwards.
If $\vartheta=\theta$\,, ${\bf F}_f=\bf 0$\,.
Since we assume no lateral motion, ${\bf F}_f$ accounts for the force that prevents it.
Heuristically, it can be conceptualized as the force exerted in a lateral deformation of the tires.
For a cyclist to follow the curved bank, the resultant of the horizontal forces,
\begin{equation}
\label{eq:Fx0}
\sum F_x=-N\sin\theta+F_f\cos\theta=-F_{cp}
\,,
\end{equation}
is the centripetal force,~${\bf F}_{cp}$\,, whose direction is perpendicular to the direction of motion and points towards the centre of the radius of curvature.
According to the rotational equilibrium about the centre of mass,
\begin{equation}
\label{eq:torque=0}
\sum\tau_z=0=F_f\,h\,\cos\left(\theta-\vartheta\right)-N\,h\,\sin(\theta-\vartheta)\,,
\end{equation}
where $\tau_z$ is the torque about the axis parallel to the instantaneous velocity, which implies
\begin{equation}
\label{eq:torque}
F_f=N\tan(\theta-\vartheta)\,.
\end{equation}
Substituting expression~(\ref{eq:torque}) in expression~(\ref{eq:Fy0}), we obtain
\begin{equation}
\label{eq:N}
N=\dfrac{m\,g}{\cos\theta+\tan(\theta-\vartheta)\sin\theta}
=m\,g\,(\sin\theta\tan\vartheta+\cos\theta)
\,.
\end{equation}
Using this result in expression~(\ref{eq:torque}), we obtain
\begin{equation}
\label{eq:Ff}
F_f=m\,g\,(\sin\theta\tan\vartheta+\cos\theta)\tan(\theta-\vartheta)
=m\,g\,\dfrac{\sin(\theta-\vartheta)}{\cos\vartheta}\,.
\end{equation}
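The closed forms~(\ref{eq:N}) and (\ref{eq:Ff}) can be verified numerically; the following short Python sketch checks that they satisfy equations~(\ref{eq:Fy0}) and (\ref{eq:torque}) for illustrative angles.
\begin{verbatim}
import numpy as np

def normal_and_friction(m, g, theta, vartheta):
    """Closed forms (N) and (Ff): normal force and lateral friction for
    track inclination theta and lean angle vartheta (radians)."""
    N  = m * g * (np.sin(theta) * np.tan(vartheta) + np.cos(theta))
    Ff = m * g * np.sin(theta - vartheta) / np.cos(vartheta)
    return N, Ff

# Spot check of the force balance for illustrative angles.
m, g = 90.0, 9.8063
theta, vartheta = np.deg2rad(44.0), np.deg2rad(40.0)
N, Ff = normal_and_friction(m, g, theta, vartheta)
print(np.isclose(N * np.cos(theta) + Ff * np.sin(theta), m * g))  # eq (Fy0)
print(np.isclose(Ff, N * np.tan(theta - vartheta)))               # eq (torque)
\end{verbatim}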
As noted above, since the lateral friction,~${\bf F}_f$\,, is a dissipative force, only its magnitude enters expression~(\ref{eq:modelB}).
For summand~(\ref{eq:modelC}), the formulation of ${\rm C_{d}A}\,\rho\,V^2/2$\,, which we denote by $F_a$\,, is discussed in Appendix~\ref{sec:AirRes}.
We restrict our study to steady efforts, which\,---\,following the initial acceleration\,---\,are consistent with a pace of an individual pursuit or the Hour Record.
Steadiness of effort is an important consideration in the context of measurements within a fixed-wheel drivetrain, as discussed by \citet[Section~5.2]{DanekEtAl2021}.%
\footnote{For a fixed-wheel drivetrain, the momentum of a bicycle-cyclist system results in rotation of pedals even without any force applied by a cyclist.
For variable efforts, this leads to inaccuracy of measurements of the instantaneous power generated by a cyclist.}
For road-cycling models \citep[e.g.,][expression~(1)]{DanekEtAl2021}\,---\,where curves are neglected and there is no need to distinguish between the wheel speed and the centre-of-mass speed\,---\,such a restriction would correspond to setting the acceleration,~$a$\,, to zero.
On a velodrome, this is tantamount to a constant centre-of-mass speed, ${\rm d}V/{\rm d}t=0$\,.
Let us return to expression~(\ref{eq:power}).
Therein, $\theta$ is given by expression~(\ref{eq:theta}).
As shown in Appendix~\ref{app:LeanAng}, the lean angle is
\begin{equation}
\label{eq:LeanAngle}
\vartheta=\arctan\dfrac{V^2}{g\,r_{\rm\scriptscriptstyle CoM}}\,,
\end{equation}
where $r_{\rm\scriptscriptstyle CoM}$ is the centre-of-mass radius, and\,---\,along the curves, at any instant\,---\,the centre-of-mass speed is
\begin{equation}
\label{eq:vV}
V=v\,\dfrac{\overbrace{(R-h\sin\vartheta)}^{\displaystyle r_{\rm\scriptscriptstyle CoM}}}{R}
=v\,\left(1-\dfrac{h\,\sin\vartheta}{R}\right)\,,
\end{equation}
where $R$ is the radius discussed in Section~\ref{sub:Track} and $h$ is the centre-of-mass height of the bicycle-cyclist system at $\vartheta=0$\,.
Along the straights, the black-line speed is equal to the centre-of-mass speed, $v=V$\,.
As expected, $V=v$ if $h=0$\,, $\vartheta=0$ or $R=\infty$\,.
The lean angle\,, $\vartheta$\,, is determined by assuming that, at all times, the system is in rotational equilibrium about the line of contact of the tires with the ground; this assumption yields the implicit condition on $\vartheta$\,, stated in expression~(\ref{eq:LeanAngle}).
As illustrated in Figure~\ref{fig:FigFreeBody}, expressions~(\ref{eq:LeanAngle}) and (\ref{eq:vV}) assume instantaneous circular motion of the centre of mass to occur in a horizontal plane.
Therefore, using these expressions implies neglecting the vertical motion of the centre of mass.
Accounting for the vertical motion of the centre of mass would mean allowing for a nonhorizontal centripetal force, and including the work done in raising the centre of mass.
In our model, we also neglect the effect of air resistance of rotating wheels \citep[e.g.,][Section~5.5]{DanekEtAl2021}.
We consider only the air resistance due to the translational motion of the bicycle-cyclist system.
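For a given black-line speed, expressions~(\ref{eq:LeanAngle}) and (\ref{eq:vV}) form a one-dimensional fixed-point problem for the lean angle; a minimal Python sketch\,---\,with illustrative input values\,---\,is shown below.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def lean_from_blackline_speed(v, R, h, g=9.8063):
    """Solve expressions (LeanAngle) and (vV) jointly for the lean angle,
    given the black-line speed v on a curve of radius R."""
    f = lambda t: t - np.arctan(v**2 * (R - h * np.sin(t)) / (g * R**2))
    vartheta = brentq(f, 0.0, np.pi / 2 - 1e-9)
    V = v * (1.0 - h * np.sin(vartheta) / R)
    return vartheta, V

# Illustrative: black-line speed of ~16 m/s on the R ~ 23.4 m arc.
vt, V = lean_from_blackline_speed(v=16.0, R=23.3958, h=1.2)
print(np.rad2deg(vt), V)
\end{verbatim}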
\section{Conservative forces}
\label{sec:MechEn}
\subsection{Mechanical energy}
\label{sub:Intro}
Commonly, in road-cycling models \citep[e.g.,][expression~(1)]{DanekEtAl2021}, we include $m\,g\sin\Theta$\,, where $\Theta$ corresponds to a slope; this term represents force due to changes in potential energy associated with hills.
This term is not included in expression~(\ref{eq:power}).
However\,---\,even though the black line of a velodrome is horizontal\,---\,we need to account for the power required to increase potential energy to raise the centre of mass upon exiting the curves.
In Section~\ref{sec:InstPower}, we assume the cyclist's centre of mass to travel in a horizontal plane.
This is a simplification, since the centre of mass follows a three-dimensional trajectory, lowering while entering a turn and rising while exiting it.
We cannot use the foregoing assumption to calculate\,---\,from first principles\,---\,the change in the cyclist's potential energy, even though this approach allows us to calculate the power expended against dissipative forces, which is dominated by the power expended against air resistance; the latter is proportional to the cube of the centre-of-mass speed,~$V$\,.
Along the transition curves, $\bf V$ has both horizontal and vertical components, and we are effectively neglecting the latter.
The justification for doing so\,---\,in Section~\ref{sec:InstPower}\,---\,is that the magnitude of the horizontal component, corresponding to the cyclist's forward motion, is much larger than that of the vertical component, corresponding to the up and down motion of the centre of mass.
In Section~\ref{sub:PotEn}, below, we instead calculate the change in the cyclist's potential energy using the change in the lean angle, whose values are obtained from the assumption of instantaneous rotational equilibrium.
Since dissipative forces, discussed in Section~\ref{sec:InstPower}, are nonconservative, the average power expended against them depends on the details of the path traveled by the cyclist, not just on its endpoints.
For this reason, to calculate the average power expended against dissipative forces, we require a detailed model of the cyclist's path to calculate the instantaneous power.
Instantaneous power is averaged to obtain the average power expended against dissipative forces over a lap.
This is not the case for calculating the average power needed to increase the cyclist's mechanical energy.
Provided this increase is monotonic, the change in mechanical energy\,---\,and therefore the average power\,---\,does not depend on the details of the path traveled by the cyclist, but only on its endpoints.
In brief, the path dependence of work against dissipative forces requires us to examine instantaneous motion to calculate the associated average power, but the path independence of work to change mechanical energy makes the associated average power independent of such an examination.
\subsection{Kinetic energy}
To discuss the effects of mechanical energy, let us consider a bicycle-cyclist system whose centre of mass, at a given instant, moves with speed~$V_1$ along the black line whose radius of curvature is $R_1$\,.
Neglecting the kinetic energy of the rotating wheels, we consider the kinetic energy of the system due to the translational motion of the centre of mass,
\begin{equation*}
K = \tfrac{1}{2}\,m\,V^2\,.
\end{equation*}
A bicycle-cyclist system is not purely mechanical; in addition to kinetic and potential energy, the system possesses internal energy in the form of the cyclist's chemical energy.
When the cyclist speeds up, internal energy is converted into kinetic.
However, the converse is not true: when the cyclist slows down, kinetic energy is not converted into internal.
Thus, to determine the work the cyclist does to change kinetic energy, we should consider only the increases.
If the centre-of-mass speed increases monotonically from $V_1$ to $V_2$\,, the work done is the increase in kinetic energy,
\begin{equation*}
\Delta K = \tfrac{1}{2}\,m\,\left(V_2^2-V_1^2\right)\,.
\end{equation*}
The stipulation of a monotonic increase is needed due to the nonmechanical nature of the system, with the cyclist doing positive work\,---\,with an associated decrease in internal energy\,---\,when speeding up, but not doing negative work\,---\,with an associated increase in internal energy\,---\,when slowing down.
In the case of a constant centre-of-mass speed, $\Delta K=0$\,.
Otherwise, for instance, in the case of a constant black-line speed \citep[e.g.,][Appendix~A, Figure~A.1]{BSSSbici4}, the increase of $V$ occurs twice per lap, upon exiting a curve; hence, the corresponding increase in kinetic energy is $\Delta K = m\,\left(V_2^2-V_1^2\right)$\,.
The average power, per lap, required for this increase is
\begin{equation*}
P_K=\dfrac{1}{1-\lambda}\dfrac{\Delta K}{t_\circlearrowleft}\,,
\end{equation*}
where $t_\circlearrowleft$ stands for the laptime.
Since, for a constant centre-of-mass speed, $\Delta K=0$\,, no power is expended to increase kinetic energy,~$P_K=0$\,.
\subsection{Potential energy}
\label{sub:PotEn}
Let us determine the work\,---\,and, hence, average power, per lap\,---\,performed by a cyclist to change the potential energy,
\begin{equation*}
U = m\,g\,h\,\cos\vartheta\,.
\end{equation*}
Since we assume a model in which the cyclist is instantaneously in rotational equilibrium about the black line, only the increase of the centre-of-mass height is relevant for calculating the increase in potential energy, and so for obtaining the average power required for the increase.
Since the lean angle,~$\vartheta$\,, depends on the centre-of-mass speed,~$V$\,, as well as on the radius of curvature,~$R$\,, of the black line, $\vartheta$\,---\,and therefore the height of the centre of mass\,---\,changes if either $V$ or $R$ changes.
The work done is the increase in potential energy resulting from a monotonic decrease in the lean angle from $\vartheta_1$ to $\vartheta_2$\,,
\begin{equation*}
\Delta U = m\,g\,h\,\left(\cos\vartheta_2-\cos\vartheta_1\right)\,.
\end{equation*}
In the case of a constant centre-of-mass speed, as well as of constant black-line speed,
\begin{equation*}
\Delta U = 2\,m\,g\,h\,\left(\cos\vartheta_2-\cos\vartheta_1\right)\,,
\end{equation*}
since\,---\,in either case\,---\,the bicycle-cyclist system straightens twice per lap, upon exiting a curve.
The average power required for this increase of potential energy is
\begin{equation}
\label{eq:PU}
P_U=\dfrac{1}{1-\lambda}\dfrac{\Delta U}{t_\circlearrowleft}\,.
\end{equation}
The same considerations\,---\,due to the nonmechanical nature of the system\,---\,apply to the work done to change potential energy as to that done to change kinetic energy.
The cyclist does positive work\,---\,with an associated decrease in internal energy\,---\,when straightening up, but not negative work\,---\,with an associated increase in internal energy\,---\,when leaning into the turn.
As stated in Section~\ref{sub:Intro}, while we neglect the vertical motion of the centre of mass to calculate, with reasonable accuracy, the power expended against dissipative forces, this approach cannot capture the changes in the cyclist's potential energy along the transition curves.
Instead, herein, we calculate these changes starting from the assumption of instantaneous rotational equilibrium and the resulting decrease in lean angle while exiting a turn.
Viewed from the noninertial reference frame of a cyclist following a curved trajectory, the torque causing the lean angle to decrease and potential energy to increase is the fictitious centrifugal torque.
This torque arises from the speed of the cyclist, which is in turn the result of the power expended on pedaling.
Therefore, it follows that the increase in potential energy is subject to the same drivetrain losses as the power expended against dissipative forces and the power expended to increase kinetic energy; hence, the presence of $\lambda$ in expression~(\ref{eq:PU}).
Returning to the first paragraph of Section~\ref{sub:Intro}, we conclude that the changes in potential energy discussed herein are different from the changes associated with hills.
When climbing a hill, a cyclist does work to increase potential energy.
When descending, at least a portion of that energy is converted into kinetic energy of forward motion.
This is not the case with the work the cyclist does to straighten up.
When the cyclist leans, potential energy is not converted into kinetic energy of forward motion.
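A minimal Python sketch of expression~(\ref{eq:PU}) is given below; for concreteness, it uses the lean-angle swing and laptime that appear later, in the numerical example of Section~\ref{sec:NumEx}, and should return a value close to the $P_U$ quoted there.
\begin{verbatim}
import numpy as np

def lap_average_potential_power(m, g, h, lean_in_curve_deg,
                                lean_on_straight_deg, lap_time, lam):
    """Expression (PU) with Delta U = 2 m g h (cos vartheta_2 - cos vartheta_1):
    the average power needed to raise the centre of mass twice per lap."""
    t1 = np.deg2rad(lean_in_curve_deg)      # vartheta_1: lean in the curve
    t2 = np.deg2rad(lean_on_straight_deg)   # vartheta_2: lean on the straight
    dU = 2.0 * m * g * h * (np.cos(t2) - np.cos(t1))
    return dU / ((1.0 - lam) * lap_time)

# Lean-angle swing and laptime quoted later, in the numerical example.
print(lap_average_potential_power(m=90.0, g=9.8063, h=1.2,
                                  lean_in_curve_deg=47.9067,
                                  lean_on_straight_deg=0.0,
                                  lap_time=15.6522, lam=0.015))
\end{verbatim}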
\section{Constant centre-of-mass speed}
\label{sec:ConstCoM}
In accordance with expression~(\ref{eq:power}), let us find the values of $P$ at discrete points of the black line\,, under the assumption of a constant centre-of-mass speed,~$V$\,.
Stating expression~(\ref{eq:vV}) as
\begin{equation*}
v=V\dfrac{R}{R-h\sin\vartheta}\,,
\end{equation*}
we write expression~(\ref{eq:power}) as
\begin{align}
\label{eq:PowerCoM}
P&=\\
\nonumber&\dfrac{V}{1-\lambda}\,\,\Bigg\{\\
\nonumber&\left.\left.\Bigg({\rm C_{rr}}\,m\,g\,(\sin\theta\tan\vartheta+\cos\theta)\cos\theta
+{\rm C_{sr}}\Bigg|\,m\,g\,\frac{\sin(\theta-\vartheta)}{\cos\vartheta}\Bigg|\sin\theta\Bigg)\,\dfrac{R}{R-h\sin\vartheta}
\right.\right.\\
\nonumber&+\,\,\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,V^2\Bigg\}\,.
\end{align}
Equation~(\ref{eq:LeanAngle}), namely,
\begin{equation*}
\vartheta=\arctan\dfrac{V^2}{g\,(R-h\sin\vartheta)}\,,
\end{equation*}
can be solved for~$\vartheta$\,---\,to be used in expression~(\ref{eq:PowerCoM})\,---\,given $V$\,, $g$\,, $R$\,, $h$\,; along the straights, $R=\infty$ and, hence, $\vartheta=0$\,; along the circular arcs, $R$ is constant, and so is $\vartheta$\,; along transition curves, the radius of curvature changes monotonically, and so does $\vartheta$\,.
Along these curves, the radii are given by solutions of equations discussed in Section~\ref{sub:Track} or in Section~\ref{sub:BiTraj}.
The values of $\theta$ are given\,---\,at discrete points\,---\,from expression~(\ref{eq:theta}), namely,
\begin{equation*}
\theta(s)=28.5-15.5\cos\!\left(\frac{4\pi}{S}s\right)\,.
\end{equation*}
The values of the bicycle-cyclist system, $h$\,, $m$\,, ${\rm C_{d}A}$\,, ${\rm C_{rr}}$\,, ${\rm C_{sr}}$ and $\lambda$\,, are provided, and so are the external conditions, $g$ and $\rho$\,.
Under the assumption of a constant centre-of-mass speed,~$V$\,, there is no power used to increase the kinetic energy, only the power,~$P_U$\,, used to increase the potential energy.
The latter is obtained by expression~\eqref{eq:PU}, where the pertinent values of $\vartheta$ for that expression are taken from model~\eqref{eq:PowerCoM}.
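The procedure of this section is easy to implement; the following Python sketch solves the implicit equation for $\vartheta$ and evaluates expression~(\ref{eq:PowerCoM}) at two representative points of the black line, using parameter values of the numerical example that follows. It is a sketch of the instantaneous evaluation only; the lap averages reported below additionally require integrating over the track geometry.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Bicycle-cyclist and environmental parameters (values of Section 6).
m, g, rho, h = 90.0, 9.8063, 1.1618, 1.2
CdA, Crr, Csr, lam = 0.19, 0.0015, 0.0025, 0.015

def lean_angle(V, R, h=h, g=g):
    """Solve vartheta = arctan(V^2 / (g (R - h sin vartheta))) for a curve
    of black-line radius R; returns 0 on the straights (R = inf)."""
    if np.isinf(R):
        return 0.0
    f = lambda t: t - np.arctan(V**2 / (g * (R - h * np.sin(t))))
    return brentq(f, 0.0, np.pi / 2 - 1e-9)

def instantaneous_power(V, R, theta):
    """Expression (PowerCoM): power against dissipative forces at a point
    of the black line with radius R and inclination theta (radians)."""
    vt = lean_angle(V, R)
    scale = 1.0 if np.isinf(R) else R / (R - h * np.sin(vt))
    rolling = (Crr * m * g * (np.sin(theta) * np.tan(vt) + np.cos(theta))
               * np.cos(theta))
    scrubbing = Csr * abs(m * g * np.sin(theta - vt) / np.cos(vt)) * np.sin(theta)
    return V / (1.0 - lam) * ((rolling + scrubbing) * scale
                              + 0.5 * CdA * rho * V**2)

# Sample points: mid-straight (R = inf, theta = 13 deg) and arc apex
# (R ~ 23.3958 m, theta = 44 deg), with the centre-of-mass speed of Section 6.
# Lap-averaging P_F over the track and adding P_U gives the totals of Section 6.
V = 15.6269
for R, th in ((np.inf, np.deg2rad(13.0)), (23.3958, np.deg2rad(44.0))):
    print(f"R = {R:8.4f} m: P_F ~ {instantaneous_power(V, R, th):.1f} W")
\end{verbatim}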
\section{Numerical example}
\label{sec:NumEx}
To examine the model, including effects of modifications in its input parameters, let us consider the following values.
An Hour Record of~$57500\,\rm m$\,, which corresponds to an average laptime of $15.6522\,\rm s$ and the average black-line speed of $v=15.9776\,\rm m/s$\,.
A bicycle-cyclist mass of~$m=90\,\rm kg$\,,
a surrounding temperature of~$T=27^\circ{\rm C}$\,,
altitude of~$105\,\rm m$\,, which correspond to Montichiari, Brescia, Italy; hence,
acceleration due to gravity of~$g=9.8063\,\rm m/s^2$\,,
air density of~$\rho=1.1618\,\rm kg/m^3$\,.
Let us assume
$h=1.2\,\rm m$\,,
${\rm C_dA}=0.19\,\rm m^2$\,,
${\rm C_{rr}}=0.0015$\,,
${\rm C_{sr}}=0.0025$\,,
$\lambda=0.015$\,.
The required average power\,---\,under the assumption of a constant centre-of-mass speed\,---\,is $P=499.6979\,\rm W$\,, which is the sum of $P_F=454.4068\,\rm W$ and $P_U=45.2911\,\rm W$\,; $90.9\%$ of power is used to overcome dissipative forces and $9.1\%$ to increase potential energy.
The corresponding centre-of-mass speed is $V=15.6269\,\rm m/s$\,.
During each lap, the power,~$P$\,, oscillates by~$3.1\%$ and the black-line speed,~$v$\,, by $3.9\%$\,, which agrees with empirical results for steady efforts.
Also, during each lap, the lean angle varies, $\vartheta\in[\,0^\circ,47.9067^\circ\,]$\,, and hence, for each curve, the centre of mass is raised by $1.2\,(\cos(0^\circ)-\cos(47.9067^\circ))=0.3956\,\rm m$\,.
For this Hour Record, which corresponds to $230$ laps, the centre of mass is raised by $2(230)(0.3956\,\rm m)=181.9760\,\rm m$ in total.
To gain an insight into the sensitivity of the model to slight variations in the input values, let us consider $m=90\pm1\,\rm kg$\,, $T=27\pm 1^\circ{\rm C}$\,, $h=1.20\pm 0.05\,\rm m$\,, ${\rm C_dA}=0.19\pm 0.01\,{\rm m^2}$\,, ${\rm C_{rr}}=0.0015\pm 0.0005$\,, ${\rm C_{sr}}=0.0025\pm 0.0005$ and $\lambda=0.015\pm 0.005$\,.
For the limits that diminish the required power\,---\,which are all lower limits, except for $T$\,---\,we obtain $P=463.6691\,\rm W$\,.
For the limits that increase the required power\,---\,which are all upper limits, except for $T$\,---\,we obtain $P=536.3512\,\rm W$\,.
The greatest effect is due to ${\rm C_dA}=0.19\pm 0.01\,{\rm m^2}$\,.
If we consider this variation only, we obtain $P=477.1924\,\rm W$ and $P=522.2033\,\rm W$\,, respectively.
To gain an insight into the effect of different input values on the required power, let us consider several modifications.
\paragraph{Hour Record:}
Let the only modification be the reduction of Hour Record to $55100\,\rm m$\,.
The required average power reduces to~$P=440.9534\,\rm W$\,.
Thus, reducing the Hour Record by~$4.2\%$ reduces the required power by~$11.8\%$\,.
However, since the relation between power and speed is nonlinear, this ratio is not the same for other values, even though\,---\,in general\,---\,for a given increase of speed, the proportional increase of the corresponding power is greater~\citep[e.g.,][Section~4, Figure~11]{DanekEtAl2021}.
\paragraph{Altitude:}
Let the only modification be the increase of altitude to $2600\,\rm m$\,, which corresponds to Cochabamba, Bolivia.
The corresponding adjustments are $g=9.7769\,\rm m/s^2$ and $\rho=0.8754\,\rm kg/m^3$\,; hence, the required average power reduces to~$P=394.2511\,\rm W$\,, which is a reduction by~$21.1\%$.
\paragraph{Temperature:}
Let the only modification be the reduction of temperature to $T=20^\circ{\rm C}$\,.
The corresponding adjustment is $\rho=1.1892\,\rm kg/m^3$\,; hence, the required average power increases to~$P=509.7835\,\rm W$\,.
\paragraph{Mass:}
Let the only modification be the reduction of mass to $m=85\,\rm kg$\,.
The required average power reduces to~$P=495.6926\,\rm W$\,, which is the sum of $P_F+P_U=452.9177\,{\rm W} + 42.7749\,{\rm W}$\,.
The reduction of $m$ by~$5.6\%$ reduces the required power by~$0.8\%$, which is manifested in reductions of $P_F$ and $P_U$ by~$0.3\%$ and $5.6\%$\,, respectively.
In accordance with a formulation in Section~\ref{sub:PotEn}, the reductions of $m$ and $P_U$ are equal to one another.
\paragraph{Centre-of-mass height\,:}
Let the only modification be the reduction of the centre-of-mass height to $h=1.1\,\rm m$\,.
The corresponding adjustments are $P_F=456.7678\,\rm W$\,, $P_U=41.5346\,\rm W$ and $V=15.6556\,\rm m/s$\,; hence, the required average power decreases to~$P=498.3023\,\rm W$\,, which reduces the required power by 0.3\%\,; this reduction is a consequence of relations among $P_F$\,, $P_U$\,, $V$ and $\vartheta$\,.
\paragraph{$\boldsymbol{\rm C_dA}$\,:}
Let the only modification be the reduction of the air-resistance coefficient to ${\rm C_dA}=0.17\,\rm m^2$\,.
The required average power reduces to~$P=454.6870\,\rm W$\,, which is the sum of $P_F+P_U=409.3959\,{\rm W} + 45.2911\,{\rm W}$\,.
The reduction of $\rm C_dA$ by~$10.5\%$ reduces the required power by~$9.0\%$ and $P_F$~by~$9.9\%$\,, with no reduction in~$P_U$\,.
\paragraph{Mass and $\boldsymbol{\rm C_dA}$\,:}
Let the two modifications be $m=85\,\rm kg$ and ${\rm C_dA} = 0.17$\,.
The required average power is~$P=450.6817\,\rm W$\,, which is the sum of $P_F+P_U=407.9068\,{\rm W} + 42.7749\,{\rm W}$\,.
The reduction in required average power is the sum of reductions due to decreases of $m$ and ${\rm C_dA}$\,, $4.0\,{\rm W}+45.0\,{\rm W}=49.0\,{\rm W}$\,.
In other words, the two modifications reduce the required power by~$9.8\%$\,, wherein $P_F$ and $P_U$ are reduced by $10.2\%$ and $5.6\%$\,, respectively.
\paragraph{Track-inclination asymmetry:}
Let the only modification be $s_0=5$\,, in expression~(\ref{eq:s0}).
The required average power is~$P=499.7391\,\rm W$\,, which is an increase of less than $0.1\%$\,.
\paragraph{Euler spiral:}
Let the only modification be the $C^2$\,, as opposed to $C^3$\,, transition curve.
The required average power is~$P=499.7962\,\rm W$\,, which is a negligible reduction (less than~$0.1\%$)\,.
\section{Conclusions}
\label{sec:DisCon}
The mathematical model presented in this article offers the basis for a quantitative study of individual time trials on a velodrome.
For a given power, the model can be used to predict lap times; also, it can be used to estimate the expended power from the recorded times.
In the latter case, which is an inverse problem, the model can be used to infer values of parameters, in particular, $\rm C_dA$\,, $\rm C_{rr}$\,, $\rm C_{sr}$ and $\lambda$\,.
Let us emphasize that our model is phenomenological.
It is consistent with\,---\,but not derived from\,---\,fundamental concepts.
Its purpose is to provide quantitative relations between the model parameters and observables.
Its key justification is the agreement between measurements and predictions.
\section*{Acknowledgements}
We wish to acknowledge Mehdi Kordi, for information on the track geometry and for the measurements used in Section~\ref{sec:Formulation};
Elena Patarini, for her graphic support;
Roberto Lauciello, for his artistic contribution;
and Favero Electronics, for inspiring this study by their technological advances in power meters.
\bibliographystyle{apa}
\subsection{Player Tracking Data in Professional Sports}
Major League Baseball (MLB) has been tracking pitch trajectory, location, and speed since 2006 with PITCHf/x \citep{what-is-pitchfx}. In 2015, MLB launched Statcast, which additionally tracks the exit velocity and launch angle of a batted ball along with location and movements of every player during a game \citep{statcast}.
The National Basketball Association (NBA) mandated the installation of an optical tracking system in all stadiums in the 2013-14 season \citep{nba-sportvu}.
This system captures the location of all the players on the court and the ball at a rate of 25 times per second (25 Hz).
This data is further annotated with other information, such as event tracking (``play-by-play''), current score, shot clock, time remaining, etc.
The National Hockey League (NHL) plans to begin the league-wide use of player- and puck-tracking technology in the 2019-20 season \citep{nhl-tracking}.
The NFL installed a player tracking system in all of its venues during the 2015 season \citep{nfl-tracking}.
NFL's tracking system is RFID-based and records the location of the players and the ball at a frequency of 12.5 Hz.
This type of data has spurred innovation by driving a variety of applications.
For example, \cite{cervone2016nba} computed the basketball court's Voronoi diagram based on the players' locations and formalized an optimization problem that provides court realty values for different areas of the court.
This further allowed the authors to develop new metrics for quantifying the spacing and the positioning of a lineup/team.
As another example, {\em ghosting} models have been developed in basketball and soccer when tracking data is available \citep{Le2017CoordinatedMI, grandland}.
The objective of these models is to analyze the players' movements and identify the optimal locations for the defenders, and consequently evaluate their defensive performance.
Other models driven by player spatio-temporal data track the possible outcomes of a possession as it is executed, making it possible to evaluate a variety of (offensive and defensive) actions that can contribute to scoring but are not captured in traditional box score statistics \citep{cervone2016multiresolution,fernandezdecomposing}, while \cite{seidlbhostgusters} further used optical tracking data and reinforcement learning to learn how a defense is likely to react to a specific offensive set in basketball.
Recently, \cite{burkedeepqb} developed a model using player tracking data from the NFL to predict the targeted receiver, pass outcome and gained yards.
In soccer, \cite{Power:2017:PCE:3097983.3098051} define and use a supervised learning approach for the risk and reward of a specific pass, which can further quantify the offensive and defensive skills of players and teams.
For a complete review of sports research with player tracking data, see \citet{gudmundsson2017spatio}.
However, one common trend for player tracking datasets across all sports leagues is their limited availability to the public.
For example, there are very limited samples of player tracking data for the NBA, and those that exist are mostly from the early days of their player-tracking systems\footnote{\url{https://github.com/linouk23/NBA-Player-Movements}}. Additionally, while pitch-level data is publicly available from the MLB system\footnote{\url{http://gd2.mlb.com/components/game/mlb}}, the Statcast player and ball location data is not.
\subsection{Player Tracking Data in the NFL}
In December 2018, the NFL became the first professional sports league to publicly release a substantial subset of its player-tracking data of entire games for its most recent completed season, for two league-run data analysis competitions: (1) the NFL Punt Analytics competition\footnote{\url{https://www.kaggle.com/c/NFL-Punt-Analytics-Competition}}, and (2) the Big Data Bowl\footnote{\url{https://operations.nfl.com/the-game/big-data-bowl}}.
In total, tracking data from the punt plays from the 2016 and 2017 seasons, as well as all tracking data from the first six weeks of the 2017 regular season were temporarily released to the public for the purposes of these competitions. The Big Data Bowl player tracking data was removed shortly after the completion of this competition.
While this is a tremendous step forward for quantitative football research, the limited scope of the released data limits the conclusions that analysts can draw about player and team performances from the most recent NFL season.
The only metrics available to fans and analysts are those provided by the league itself through their Next Gen Stats (NGS) online platform.
The NFL's NGS group uses the league's tracking data to develop new metrics and present aggregate statistics to the fans.
For example, the NGS website presents metrics such as time-to-throw for a QB, completion probability for a pass, passing location charts, and other metrics.
However, NGS only provides summaries publicly, while the raw data is not available to analysts or fans. Additionally, any metrics derived from the player tracking data temporarily made available via the NFL's Big Data Bowl are limited in scope and potentially outdated, as this data only covers the first six weeks of the 2017 season.\footnote{\url{https://twitter.com/StatsbyLopez/status/1133729878933725184}}
\subsection{Our Contributions}
The objective of our work is twofold:
\begin{enumerate}
\item We create open-source image processing software, {{\texttt{next-gen-scraPy}}}, designed specifically for extracting the underlying data (on-field location of pass attempt relative to the line of scrimmage, pass outcome, and other metadata) from the NGS pass chart images for regular season and postseason pass attempts from the 2017 and 2018 NFL seasons.
\item We analyze the resulting dataset, obtaining a detailed view of league-wide passing tendencies and the spatial performance of individual quarterbacks and defensive units. We use a generalized additive model for modeling quarterback completion percentages by location, and an empirical Bayesian approach for estimating the 2-D completion percentage surfaces of individual teams and quarterbacks. We use the results of these models to create a one-number summary of passing performance, called Completion Percentage Above Expectation (CPAE). We provide a ranking of QBs and team pass defenses according to this metric, and we compare our version of CPAE with the NFL's own CPAE metric.
\end{enumerate}
Our work follows in the footsteps of \texttt{openWAR} \citep{Baumer15}, \texttt{pitchRx}
\citep{pitchRx}, \texttt{nflscrapR} \citep{nflscrapR}, \texttt{nhlscrapR} \citep{nhlscrapr}, \texttt{Lahman} \citep{Lahman}, \texttt{ballr} \citep{ballr}, and \texttt{ncaahoopR} \citep{ncaahoopR}, which each promote reproducible sports research by providing open-source software for the collection and processing of sports data.
With {{\texttt{next-gen-scraPy}}}, we rely on a variety of image processing and unsupervised learning techniques.
The input to {{\texttt{next-gen-scraPy}}} is a pass chart obtained from NFL's NGS, similar to the example in Figure \ref{fig:foles} (which we detail in the following sections).
The output includes the $(x,y)$ coordinates (relative to the line of scrimmage) for the endpoint of each pass (e.g. the point at which the ball is caught or hits the ground) present in the input image, as well as additional metadata such as the game, the opponent, the result of each pass, etc.
We then process these data and build a variety of models for evaluating passing performance.
In particular, we develop spatial models for the target location of the passes at the league, team defense, and individual quarterback (QB) levels.
We use generalized additive models (GAMs) and a 2-D empirical Bayesian approach to estimate completion percentage surfaces (i.e., smoothed surfaces that capture the completion percentage expected in a given location) for individual QBs, team offenses, and team defenses.
The rest of this paper is organized as follows:
Section \ref{sec:ngs-scrapy} describes the data collected and the processing performed by {\texttt{next-gen-scraPy}}.
Section \ref{sec:analysis} presents the methods used to analyze the raw pass data obtained from {{\texttt{next-gen-scraPy}}}.
Section \ref{sec:results} demonstrates the accuracy of the image processing procedure and presents the results of our analyses of individual QBs and team defenses.
Finally, Section \ref{sec:conclusions} concludes our study, presenting future directions and current limitations of {{\texttt{next-gen-scraPy}}}.
\begin{comment}
\section{Related Work in NFL Tracking Data}
\label{sec:related}
Currently, the NFL's primary method of collecting pass and player location data is Next Gen Stats, player tracking technology that captures real time field location data, speed, and acceleration for players for every play during a game. Sensors throughout the stadium track tags placed on players' shoulder pads, charting individual movements within inches. From these charted movements, the Next Gen Stats then creates player tracking charts which visualize passing, rushing, and receiving performances. While these charts are available to the public to provide a visual for each individual game, the Next Gen Stats website does not provide pass locations in the form of a dataset, nor does it provide pass charts that aggregate pass locations over multiple games for a team or player.
In 2018, the NFL has made significant headway in making data publicly available, with the release of punt data on Kaggle for the 2016-2017 seasons, and player tracking data of the 2017 regular season for the inaugural Big Data Bowl. While the Big Data Bowl player tracking data provides passing information, it is only for the 93 games played throughout the first 6 weeks of the regular season. Furthermore, for the 2018 Pro Bowl and Super Bowl, Next Gen Stats introduced an interactive player tracking experience on their website, in which users could watch real time movement positional statistics of players, play diagrams, and box score statistics. However, this player tracking data has yet to be released as a dataset.
The dataset we present goes further in providing pass location information for 501 games: 239 in the 2017 regular season, 9 in the 2017 postseason, 242 in the 2018 regular season, and 11 in the 2018 postseason.
\subsection{Our Contributions: Pass Location Detection and Passing Performance Analysis}
We present two contributions to analysis of player tracking data in the NFL:
\begin{enumerate}
\item The release of the next-gen-scraPy dataset, which includes pass location data for all pass charts presented on the Next Gen Stats website from 2017 onwards:
\item A generalized additive model and kernel density estimate for estimate quarterback completion percentage across the field, relative to the line of scrimmage.
\end{enumerate}
These contributions are fully reproducible, and as Next Gen Stats always releases weekly pass chart images for almost every game played, this dataset will be added to weekly during the upcoming NFL seasons.
\end{comment}
\subsection{Collecting the Raw Image Data}
The NFL provides passing charts via their online NGS platform, in the form of JPEG images. Each chart displays the on-field locations (relative to the line of scrimmage) of every pass thrown by a single quarterback in a single game. The points, which represent the ending location of each pass (e.g. the point at which a ball is caught or hits the ground) are colored by the outcome of the pass (completion, incompletion, interception, or touchdown). Each chart is accompanied by a JavaScript Object Notation (JSON) data structure that details metadata about the quarterback or game represented in that chart. We link each passing chart to the data provided by \texttt{nflscrapR} to obtain additional metadata for each game \citep{nflscrapR}. These charts are available for most games from the 2017 and 2018 NFL regular and post-seasons, though some are missing from the NGS website without explanation. An example passing chart is provided in Figure \ref{fig:foles}.
In total, there are 402 pass charts for 248 games throughout the 2017 regular and postseason, and 438 pass charts for 253 games throughout the 2018 regular and postseason. Each pass chart image is of 1200 $\times$ 1200 pixels, and is annotated with metadata that includes information such as player name, team, number of total pass attempts, etc. Appendix \ref{app:appendix-A} provides a detailed list of the metadata necessary to perform data collection and pass detection.
In our analysis, we do not include data from the 2016 season (the first season in which NGS provided these charts), since approximately 65\% of the pass charts are missing.
Furthermore, the missing charts are not uniformly distributed across teams, biasing potential analyses.
Finally, the existing 2016 charts frequently do not have the necessary metadata, introducing additional challenges to the data extraction process that we outline below.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{figures/Nick_Foles_original}
\caption{Nick Foles' Pass Chart from Super Bowl LII - Extra-large (1200$\times$1200) image extracted from the HTML of the Next Gen Stats website, which visualizes the location relative to the line of scrimmage of all complete, incomplete, touchdown, and intercepted passes.}
\label{fig:foles}
\end{figure}
\subsection{Image Pre-Processing}
Before being able to extract the raw data from the pass charts, the images must be pre-processed.
First, the field in the raw image is presented as a trapezoid, rather than a rectangle, which would distort the pass locations extracted from the chart.
Second, the images include unnecessary information that needs to be eliminated in order to allow for the streamlined and accurate processing of pass locations.
\textbf{Removing Distortion.} The football field and touchdown pass trajectories as shown in Figure \ref{fig:foles} are distorted, because the trapezoidal projection of the field results in a warped representation of the on-field space, such that (for example) a uniform window of pixels contains more square yards towards the top of the image than towards the bottom. To fix this, we start by cropping each image to contain only the field, extending the sidelines by 10 yards behind the line of scrimmage, consistent with every pass chart given by NGS.
We then project the trapezoidal plane into a rectangular plane that accurately represents the geometric space of a football field.
In particular, we calculate the homography matrix \textbf{H}, such that each point $(x_t, y_t)$ on the trapezoidal plane is mapped to a point $(x_r, y_r)$ on the rectangular plane through the following equation:
$$
\begin{bmatrix}
x_r \\
y_r \\
1
\end{bmatrix} =
\begin{bmatrix}
h_{00} & h_{01} & h_{02} \\
h_{10} & h_{11} & h_{12} \\
h_{20} & h_{21} & 1
\end{bmatrix}
\begin{bmatrix}
x_t \\
y_t \\
1
\end{bmatrix} = \textbf{H}
\begin{bmatrix}
x_t \\
y_t \\
1
\end{bmatrix},
$$
\noindent where $h_{00}, ... , h_{21}$ are the elements of the homography matrix, \textbf{H}, which relates each initial $(x_t, y_t)$ coordinate to the corresponding resulting $(x_r, y_r)$ coordinate \citep{szeliski2010computer}. This results in a fully rectangular birds-eye view of the football field, depicting uniform yardage across the height and width of the field.
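For illustration, a minimal sketch of this unwarping step is given below, assuming the OpenCV library; the corner pixel coordinates and file names are placeholders rather than the values used in {\texttt{next-gen-scraPy}}.

\begin{verbatim}
import cv2
import numpy as np

# Four corners of the trapezoidal field in the cropped chart
# (placeholder values), as (column, row) pixel coordinates:
# top-left, top-right, bottom-right, bottom-left.
src = np.float32([[260, 150], [940, 150], [1160, 1050], [40, 1050]])

# Corresponding corners of the rectangular, birds-eye view.
width, height = 1000, 1200
dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

# Homography H mapping the trapezoid onto the rectangle.
H = cv2.getPerspectiveTransform(src, dst)

# Warp the chart into the rectangular field view.
img = cv2.imread("pass_chart_cropped.jpg")   # placeholder file name
rectified = cv2.warpPerspective(img, H, (width, height))
\end{verbatim}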
\textbf{Eliminating Unnecessary Details from Image.}
Next, we remove the white sideline numbers of the newly projected field.
Each pass chart depicts 10 yards behind the line of scrimmage, and either 55 or 75 yards beyond it. Based on the pixel height of the newly transformed rectangular field image, we deduce whether the field depicts 55 yards (6 sideline markings) or 75 yards (8 sideline markings); the pixel height and width then give, by pixel ratios, the approximate locations of the white sideline markings. We replace these white pixels with the same shade of grey as the sidelines. Removal of the white sideline numbers allows us to use a simple white color threshold to extract the incomplete pass locations, as described in Section \ref{sec:clustering}.
The final image of the unwarped, clean, field from which the raw locations of the passes are extracted is shown in Figure \ref{fig:clean}(a).
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\textwidth]{figures/axes.jpg}
\caption{Nick Foles' Pass Chart from Super Bowl LII after fixing the warped perspective of the field and removing the sidelines. The axes are displayed to show the x-axis falling directly on the line of scrimmage and the y-axis dividing the field vertically in half down the middle.}
\label{fig:clean}
\end{figure}
\subsection{Clustering Methods to Extract Pass Locations}
Every pass chart shows the locations of four different types of passes, relative to the line of scrimmage: completions (green), incompletions (white), touchdowns (blue), and interceptions (red).
The JSON metadata includes the number of each pass type regardless of whether the pass is being depicted on the image or not\footnote{For example, passes that were thrown out-of-bounds are not depicted in pass charts.}.
There are a number of technical challenges to overcome when extracting pass locations:
\begin{itemize}
\item two or more pass locations can overlap,
\item the number of passes shown on the image does not always match the number of passes given in the JSON metadata of the image,
\item for touchdown passes, the color of the pass location is the same color as the line of scrimmage and pass trajectory path, rendering color thresholding used for the other types of passes ineffective.
\end{itemize}
To address these issues, {\texttt{next-gen-scraPy}}~combines density-based and distance-based clustering methods (DBSCAN and $K$-means, respectively) with basic image processing techniques to extract all the pass locations presented on an NGS pass chart.
\subsubsection{Image Segmentation by Pass Type}
\label{sec:clustering}
All four different pass types are marked on the pass chart with different colors.
Therefore, we examine the Hue-Saturation-Value (HSV) pixel coordinates from the image to identify parts of the image that fall within a specified HSV color range.
In brief, hue characterizes the dominant color contained in the pixel.
It is captured by the angular position on the color wheel, with red being the reference color (i.e., H = 0 and H = 360).
Complementary colors are located across from each other on the color wheel and hence, they are 180 degrees apart.
Saturation measures the color purity, and is captured by the distance of the color from the center of the color wheel. Value characterizes the brightness of the color \citep{hsv}. This color system is visualized in Figure \ref{fig:hsv-figure}.
\begin{figure}
\includegraphics[width=.47\textwidth]{figures/hsv_hexacone.png}
\caption{HSV color space representation}
\label{fig:hsv-figure}
\end{figure}
Based on the color of the pass type we want to detect, we use a basic thresholding technique to obtain images where all pixels are black except the ones with the pass locations.
For example, for completed passes, we keep the value of pixels within the respective range presented in Table \ref{tab:hsv-table} unchanged, and set the value of all other pixels to 0, corresponding to the black color.
\begin{table}
\begin{center}
\begin{tabular}{P{20mm}|P{20mm}|P{20mm}} \toprule
Pass Type & Lower Threshold & Upper Threshold \\ \midrule
Complete & (80,100,100) & (160,255,255) \\
Incomplete & (0,0,90) & (0,0,100) \\
Touchdown & (220,40,40) & (260,100,100) \\
Interception & (0, 60, 60) & (20,100,100) \\
\end{tabular}
\caption{HSV color thresholds for every pass type}%
\label{tab:hsv-table}
\end{center}
\end{table}
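For illustration, a minimal sketch of this segmentation step is given below, assuming OpenCV; note that OpenCV stores hue on a $0$--$179$ scale and saturation/value on $0$--$255$, so the ranges of Table~\ref{tab:hsv-table} must be mapped onto that scale (the numeric bounds and file name below are placeholders).

\begin{verbatim}
import cv2
import numpy as np

img = cv2.imread("rectified_chart.jpg")        # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)     # convert BGR to HSV

# Illustrative bounds for the green "complete" markers, expressed on
# OpenCV's scale of H in [0, 179] and S, V in [0, 255].
lower = np.array([40, 100, 100])
upper = np.array([80, 255, 255])

mask = cv2.inRange(hsv, lower, upper)          # 255 where pixel is in range
segmented = cv2.bitwise_and(img, img, mask=mask)

# Pixel coordinates (row, column) of the retained marker pixels.
points = np.column_stack(np.nonzero(mask))
\end{verbatim}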
The resulting segmented image is shown in Figure \ref{fig:segmented}(a), along with the images obtained for the other pass types. As mentioned among the technical challenges above, for touchdown passes the color thresholding includes additional noise in the final image, as shown in Figure \ref{fig:segmented}(d).
\begin{figure}[!ht]
\centering
\subfloat[][Green thresholding for completions]{\includegraphics[width=.47\textwidth]{figures/thresh_complete.jpeg}}\quad
\subfloat[][White thresholding for incompletions]{\includegraphics[width=.47\textwidth]{figures/thresh_incomplete.jpeg}}\\
\subfloat[][Red thresholding for interceptions]{\includegraphics[width=.47\textwidth]{figures/thresh_interception.jpeg}}\quad
\subfloat[][Blue thresholding for touchdowns (with noise)]{\includegraphics[width=.47\textwidth]{figures/touchdown_pt1.jpeg}}
\caption{Color thresholding for all pass types. $K$-means and DBSCAN are subsequently performed on each of these images for pass detection. Noise in the segmented touchdown image results from the pass location having similar color to the line of scrimmage and pass trajectory lines.}
\label{fig:segmented}
\end{figure}
From the JSON metadata associated with the image, we know the number of pass attempts, $n_a$, touchdowns, $n_{td}$, total completions, $n_{tot-c}$, and interceptions, $n_{int}$. The number of touchdowns and interceptions on each image is then simply $n_{td}$ and $n_{int}$, respectively.
The number of green non-touchdown completions $n_c$ presented on the image excludes the touchdown passes and hence, $n_c = n_{tot-c} - n_{td}$, while the number of white incompletions is $n_{inc} = n_{a} - n_c - n_{td} - n_{int}$.
Using the segmented images and the number of passes of each type, we detect the pixel locations of each pass type within the respective image (see Sections \ref{sec:kmeans} and \ref{sec:dbscan}).
We then map these locations to the dimensions of a real football field to identify the on-field locations of each pass in $(x, y)$ coordinates, on the coordinate system presented in Figure \ref{fig:clean}.
The $y=0$ vertical line runs through the center of the field, while the $x=0$ horizontal line always represents the line of scrimmage.
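As an illustration of this mapping, the sketch below assumes that the rectified image spans the full field width of $53.33$ yards, that the chart shows $10$ yards behind the line of scrimmage, and that the downfield direction points toward the top of the image; these conventions are assumptions for the example rather than a description of the packaged code.

\begin{verbatim}
FIELD_WIDTH_YDS = 53.33   # sideline to sideline
YDS_BEHIND_LOS = 10.0     # shown behind the line of scrimmage

def pixel_to_field(row, col, img_height, img_width, yds_beyond=55.0):
    """Map a pixel (row, col) of the rectified image to field
    coordinates (x, y), in yards, relative to the line of scrimmage."""
    depth = YDS_BEHIND_LOS + yds_beyond
    # Row 0 is assumed to be the far, downfield edge of the chart.
    x = yds_beyond - (row / img_height) * depth
    # Column 0 is the left sideline; y = 0 is the centre of the field.
    y = (col / img_width - 0.5) * FIELD_WIDTH_YDS
    return x, y
\end{verbatim}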
\subsubsection{Identifying Pass Types with $K$-Means++}
\label{sec:kmeans}
$K$-means is a distance-based clustering method used to define a clustering partition $\mathcal{C}$ of $n$ observations $X_i \in \mathbb{R}^p$, for $i = 1, \hdots, n$, into a pre-determined number of clusters $K$ such that the within-cluster variation (in Euclidean space),
\begin{gather*}
\sum_{k=1}^K \sum_{\mathcal{C}(i) = k} || X_i - c_k ||^2,\\
\text{with } c_k = \overline{X}_k = \frac{1}{n_k} \sum_{\mathcal{C}(i) = k} X_i,
\end{gather*}
is minimized \citep{macqueen1967}. The resulting clustering $\mathcal{C}$ from $K$-means assumes the co-variance structure of the clusters is spherical, which works well for our purposes, as all pass locations are represented by circles in the images.
Rather than searching over all possible partitions, Lloyd's algorithm is the standard approach used for determining $K$-means partitions $\mathcal{C}$:
\begin{enumerate}
\item choose $K$ random points as starting centers $c_1, \hdots, c_K$,
\item minimize over $\mathcal{C}$: assign each point $X_1, \hdots, X_n$ to its closest center $c_k$, $\mathcal{C}(i) = k$,
\item minimize over $c_1, \hdots, c_K$: update the centers to be the average points $c_k = \overline{X}_k$ for each of the $k = 1, \hdots, K$ clusters,
\item repeat steps (2) and (3) until within-cluster variation doesn't change.
\end{enumerate}
Rather than using random initialization for the cluster centers, we use the $K$-means++ algorithm to choose better starting values \citep{plusplus}. An initial point is randomly selected to be $c_1$, initializing the set of centers $C = \{c_1\}$. Then for each remaining center $k = 2, \hdots, K$:
\begin{enumerate}
\item for each $X_i$, compute $D(X_i) = \underset{c \in C}{\text{min}} ||X_i - c||$,
\item choose a point $X_i$ with probability,
$$
p_i = \frac{D^2(X_i)}{\sum_{j=1}^n D^2(X_j) }
$$
\item use this point as $c_k$, update $C = C \cup \{c_k\}$.
\end{enumerate}
Then we proceed to use Lloyd's algorithm from above with the set of starting centers $C$ chosen from a weighted probability distribution.
According to this distribution, each point has a probability of being chosen proportional to its squared Euclidean distance from the nearest preceding center \citep{plusplus}.
The $K$-means++ algorithm is more appropriate to use for pass detection for two reasons. First, by definition, the initialized cluster centers are more likely to be spread further apart from each other in comparison to $K$-means. This means that the possibility of two cluster centroids falling within the same cluster/single pass location is greatly reduced.
Second, even though $K$-means++ is typically sensitive to outliers, our data does not present this issue, since all colors on the charts aside from the pass locations (e.g. the line of scrimmage and yardline markers) are already removed, as described above.
To identify complete and intercepted passes, we simply perform $K$-means++ clustering on the appropriate segmented images with $K = n_c$ and $K = n_{int}$, respectively.
While in theory we can do the same for the touchdown passes (i.e. perform $K$-means++ with $K = n_{td}$), the segmented image sometimes includes outliers, since the line of scrimmage and the touchdown pass trajectories have similar colors to the points representing catch locations for touchdown passes.
Thus, we need to further process the corresponding segmented image to remove these unnecessary colored pixels.
For this step, we use DBSCAN as we detail in the following section, and apply $K$-means++ to the resulting image.
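As a concrete illustration, the sketch below uses scikit-learn's \texttt{KMeans}, whose default initialization is $K$-means++; the variable \texttt{points} is assumed to hold the pixel coordinates of a segmented image, as in the thresholding sketch above.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def detect_pass_locations(points, n_passes, seed=0):
    """Cluster the retained pixels of a segmented image into
    n_passes pass markers and return their (row, col) centers."""
    km = KMeans(n_clusters=n_passes, init="k-means++",
                n_init=10, random_state=seed)
    km.fit(points)
    return km.cluster_centers_, km.labels_

# For example, for completed passes: K = n_c = n_tot_c - n_td
# centers, labels = detect_pass_locations(points, n_passes=n_c)
\end{verbatim}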
One major difficulty in detecting the number of incompletions is that the number of incomplete passes shown on a pass chart image may not match the number given in the JSON data corresponding to an image, $n_{inc}$.
This is most likely because out-of-bounds passes and spikes are not presented in the charts despite being counted as pass attempts.
Our first step to solving this issue is performing $K$-means++ clustering on the segmented image for incompletions, with $K = n_{inc}$. Once we have all $n_{inc}$ cluster centers, we iterate through each cluster center and examine how far away the other cluster centers are.
If two cluster centers are distinctly close to each other, one of following two cases is true:
\begin{enumerate}
\item Two cluster centers have been mistakenly detected for a single pass location. This might happen if the metadata specifies that there are 21 incompletions, but the image only shows 20 (e.g. because one pass was out of bounds). In this case, $K$-means++ will split a single pass location into two, in order to achieve the specified number of clusters $K$.
\item Two cluster centers have been correctly detected for two pass locations that are close to each other.
\end{enumerate}
We can infer which of the two cases is true by comparing the within-cluster variation of each of these two clusters with the within-cluster variation of a {\em normal}, single pass location cluster.
If the former is significantly smaller than the latter, then case (1) is detected and we reduce the number of incompletions shown in the image by one, otherwise, case (2) is detected and no additional action is required.
The result of this iterative process is a newly-adjusted number of incompletions, $n_{inc-adj}$. If $n_{inc-adj}=n_{inc}$ the process is terminated; otherwise we perform $K$-means++ clustering again with $K = n_{inc-adj}$.
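A sketch of this duplicate-center check is given below; the thresholds for ``distinctly close'' centers and ``significantly smaller'' within-cluster variation are illustrative choices rather than the values used in {\texttt{next-gen-scraPy}}.

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist, squareform

def count_split_markers(points, centers, labels,
                        close_px=12.0, var_ratio=0.25):
    """Count cluster centers that appear to be two halves of a single
    pass marker (thresholds are illustrative)."""
    # Within-cluster variation of each cluster.
    wcv = np.array([((points[labels == k] - c) ** 2).sum()
                    for k, c in enumerate(centers)])
    typical = np.median(wcv)
    dists = squareform(pdist(centers))
    np.fill_diagonal(dists, np.inf)
    flagged = 0
    for k in range(len(centers)):
        j = dists[k].argmin()
        if dists[k, j] < close_px and wcv[k] < var_ratio * typical:
            flagged += 1
    return flagged // 2   # each split marker is flagged once per half
\end{verbatim}

In this sketch, the adjusted count $n_{inc-adj}$ would be $n_{inc}$ minus the number of detected splits, after which $K$-means++ is re-run as described above.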
After obtaining the $(x,y)$ pixel locations of all cluster centers of a given pass type, we map these coordinates to real field locations relative to the line of scrimmage. For the number of incomplete passes whose locations could not be identified, $n_{inc} - n_{inc-adj}$, we populate these rows in the data with N/A values. Only 2.8\% of pass locations in 2017 and 2.7\% of pass locations in 2018 had missing coordinates in the final output dataset. These missing coordinates appear to happen more often for certain home teams (e.g. the LA Chargers and Buffalo Bills) than others (e.g. the Minnesota Vikings and Cleveland Browns). With the exception of these outliers (Bills and Chargers in 2017), the number of missing pass locations by team is typically small, as shown in Figure \ref{fig:missing}.
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\textwidth]{figures/next-gen-scraPy-missing-coordinates.pdf}
\caption{Number of missing pass attempt coordinates by home team and season.}
\label{fig:missing}
\end{figure}
Figure \ref{fig:kmeans} shows the result of extracting pass locations using $K$-means++, and we mark the cluster centers in red.
We note that $K$-means++ is able to detect overlapping pass locations as multiple passes.
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\textwidth]{figures/kmeans/complete_kmeans.jpeg}
\caption{An example result of performing $K$-means on Nick Foles' complete passes from Super Bowl LII, with centers of each pass depicted in red.}
\label{fig:kmeans}
\end{figure}
\subsubsection{Removing Noise in Touchdown Images with DBSCAN}
\label{sec:dbscan}
Our segmented touchdown images require additional processing after color thresholding in order for $K$-means++ to correctly identify the pass locations. As mentioned above, this is because line of scrimmage is shown in blue, and because pass trajectories for touchdown passes are included in the image and shown in a similar color. As a result, extraneous blue pixels (``noise'') are often present after image segmentation.
To address this issue, we use DBSCAN, a density-based clustering algorithm that identifies clusters of arbitrary shape for a given set of data points \citep{dbscan}.
DBSCAN is a non-parametric clustering algorithm that identifies clusters as {\em maximal} sets of density-connected points. Specifically, the DBSCAN algorithm works as follows:
\begin{enumerate}
\item Let $X = \{x_1, x_2, ..., x_n\}$ be a set of observations (``points'') to cluster
\item For each point $x_i$, compute the $\epsilon$-neighborhood $N(x_i)$; all observations within a distance $\epsilon$ are included in $N(x_i)$
\item Two points, $x_i$ and $x_j$, are merged into a single cluster if $N(x_i)$ overlaps with $N(x_j)$
\item Recompute the $\epsilon$-neighborhood $N(C_k)$ for each cluster $C_k$
\item Two clusters, $C_k$ and $C_l$, are merged into a single cluster if $N(C_k)$ overlaps with $N(C_l)$
\item Repeat steps 4-5 until no more clusters overlap
\item If the number of points in a cluster $C_k$ is greater than or equal to a pre-defined threshold $\tau$ (i.e. if $|C_k|\ge \tau$), this cluster is retained
\item If the number of points in a cluster $C_k$ is less than the pre-defined threshold $\tau$ (i.e. if $|C_k| < \tau$), this cluster is considered ``noise'' and discarded
\end{enumerate}
Even though DBSCAN does not require a direct specification of the number of clusters, it should be clear that the choice of $\epsilon$ and $\tau$ (often referred to as ``minPoints'') impacts the number of clusters identified. Figure \ref{fig:dbscan} depicts a high-level representation of DBSCAN's operations, with $\tau = 3$.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{figures/DBSCAN.png}
\caption{Pictorial representation of DBSCAN's operations ($\tau$=3).}
\label{fig:dbscan}
\end{figure}
DBSCAN's ability to identify {\em noise} makes it a particularly good choice for identifying the locations of touchdown passes. For our purposes, the observations are the individual pixels in the segmented image.
To distinguish between actual pass locations and this noise, we use DBSCAN on these observations/pixels to find the $n_{td}$ highest density clusters, with $\epsilon = 10 $ and $\tau = n_{td}$.
Then we remove from the image any pixels detected as noise or as not belonging to the $n_{td}$-top density clusters.
We finally pass the resulting image to the $K$-means++ algorithm described above to obtain the raw locations.
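A minimal sketch of this filtering step with scikit-learn's \texttt{DBSCAN} is shown below; mapping $\tau$ onto the \texttt{min\_samples} argument is our assumption for the example.

\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

def keep_densest_clusters(points, n_td, eps=10.0):
    """Keep only the pixels in the n_td largest DBSCAN clusters;
    noise (label -1) and smaller clusters are discarded."""
    # min_samples plays the role of tau in the text (assumption).
    labels = DBSCAN(eps=eps, min_samples=n_td).fit_predict(points)
    ids, sizes = np.unique(labels[labels != -1], return_counts=True)
    keep = ids[np.argsort(sizes)[::-1][:n_td]]
    return points[np.isin(labels, keep)]
\end{verbatim}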
Figure \ref{fig:dbscan-output} shows the output after applying DBSCAN on Figure \ref{fig:segmented}(d) to extract touchdown pass locations.
In the original pass chart there are three touchdown passes, while, as we see in Figure \ref{fig:dbscan-output}(a), DBSCAN has detected 4 distinct clusters (after removing the points identified by the algorithm as noise).
Figure \ref{fig:dbscan-output}(b) shows the selection of the $n_{td} = 3$ most dense clusters that represent the touchdown pass locations. This new version of the segmented image, with noise removed, is then used as input to the $K$-means++ approach described above.
\begin{figure}[!ht]
\centering
\subfloat[][4 clusters identified by DBSCAN, circled in red. ]{\includegraphics[width=.47\textwidth]{figures/dbscan/touchdown_dbscan_circled.jpeg}}\quad
\subfloat[][$n_{td} = 3$ largest clusters]{\includegraphics[width=.47\textwidth]{figures/dbscan/touchdown_km.jpeg}}\\
\caption{The result of performing DBSCAN for touchdown pass detection.
}
\label{fig:dbscan-output}
\end{figure}
\subsubsection{Output Data}
The resulting data from the pass charts cover the 2017 and 2018 regular seasons and postseasons.
There are 27,946 rows containing data for 840 pass charts spanning 491 games, with 27,171 rows containing no missing values.
Of the cases with missingness, 33 are due to Next Gen Stats not providing pass charts for either team during a game.
Appendix \ref{app:appendix-A} provides an overview of all variables provided by {\texttt{next-gen-scraPy}}$\mbox{ }$and their descriptions, while Table 2 provides a breakdown of some basic statistics for the number of pass locations detected by season.
Finally, Appendix \ref{app:appendix-B} contains a subset of the dataset for 10 of Nick Foles' passes in Super Bowl LII.
\subsection{Validation}
In order to showcase the quality of the data collected by {{\texttt{next-gen-scraPy}}}, we turned to the tracking data provided by the NFL for the Big Data Bowl competition mentioned earlier. We provide two methods of validation of our algorithm: First, an ``anecdote validation,'' where we visually inspect the pass locations obtained through {{\texttt{next-gen-scraPy}}} and compare them to the pass locations in the NFL's tracking data (via the Big Data Bowl). Second, we provide a method for linking {{\texttt{next-gen-scraPy}}} passes to the corresponding play in the NFL's tracking data.
\subsubsection{Anecdote Validation}
\label{sec:anecdote}
We cannot compare all the data points collected from {{\texttt{next-gen-scraPy}}} to the league-provided tracking data.
For example, there are ambiguities in the tracking data when it comes to incomplete passes: the NFL's tracking data does not specify the point at which the ball hits the ground and is rendered incomplete. Therefore, we focus on completed passes (excluding touchdown passes), where the event of completion is clearly annotated in the tracking data.
Figure \ref{fig:number} shows the number of completed passes for each QB in each game across the {{\texttt{next-gen-scraPy}}} and Big Data Bowl datasets, focusing on the first six weeks of the 2017 season, when the two datasets overlap. There are small differences in the number of passes in each dataset for each QB in each game. In games where both datasets contain completed pass locations for a given QB, {{\texttt{next-gen-scraPy}}} always has at least as many completed pass locations as the Big Data Bowl Data, but the difference is never more than three passes; the two datasets agree (in terms of the number of completed passes for a QB) on most games for which they both have data. Figure \ref{fig:number} also shows that while it's rare for a QB-game to be included in the {{\texttt{next-gen-scraPy}}} dataset but not in the Big Data Bowl dataset, there are many QB-games that are included in the Big Data Bowl but whose passing charts are not available for inclusion in the {{\texttt{next-gen-scraPy}}} dataset.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/number-of-passes.pdf}
\caption{Number of completed passes (excluding touchdowns) at the QB-Game level in {{\texttt{next-gen-scraPy}}} compared to NFL's tracking data from the Big Data Bowl.}
\label{fig:number}
\end{figure}
We further transform the tracking data coordinates in reference to the line of scrimmage in order to be directly comparable to the data obtained from {{\texttt{next-gen-scraPy}}}. We do this because the charts from which the {{\texttt{next-gen-scraPy}}} data is sourced show coordinates \emph{relative to where the ball was snapped}, i.e. the line of scrimmage. To reconcile the two coordinate systems, we first calculate the yards gained between the snap location (i.e. line of scrimmage) and the location at which the ball is caught in the tracking data. Next, since the {{\texttt{next-gen-scraPy}}} coordinates are shown with respect to the center of the football field, we calculate the horizontal distance (i.e. along the line of scrimmage) between where the ball was caught and the center of the field.
For a visual anecdote validation of our approach, we demonstrate the accuracy of the data obtained from {{\texttt{next-gen-scraPy}}} in Section \ref{sec:accuracy} by plotting the coordinates from both datasets on a single chart for a single QB in a single game, repeating this for multiple QB-games.
\subsubsection{Linkage Validation}
\label{sec:linkage}
Using the same procedure to adjust the coordinates as above, we design a greedy one-to-one record linkage algorithm to link completed passes from the NFL's tracking data to completed passes in {{\texttt{next-gen-scraPy}}} (excluding touchdown passes). This can be thought of as an algorithm for defining an injection across the two data sources, with the caveat that there is no linear transformation that maps the points in one dataset to the points in the other. That is, the locations across the two datasets deviate randomly, as shown in Figures \ref{fig:distance-difference} and \ref{fig:angle-difference}.
Our record linkage algorithm works as follows. For each set of completed passes for a single QB in a single game:
\begin{enumerate}
\item Compute the Euclidean distance between all pairs of coordinates across the Big Data Bowl and {{\texttt{next-gen-scraPy}}} datasets.
\item Identify the two closest pass locations and link them.
\item Remove the distances corresponding to all other pairs involving the linked records from (2).
\item Repeat (2) and (3) until there are no more pairs of records across datasets to link.
\end{enumerate}
Because we link the closest pair, and then the next closest remaining pair, and so on until there are no more pairs to link, we likely make some linkage errors for clusters of passes that are close together, and the resulting distances between linked coordinates are right-skewed (i.e. we likely make some linkage errors at the right-tail). Results of this record linkage procedure are discussed in Section \ref{sec:accuracy}.
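A sketch of this greedy linkage for a single QB-game is given below, assuming two arrays of already-aligned $(x, y)$ coordinates; the variable names are illustrative.

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def greedy_link(ngs_xy, bdb_xy):
    """Greedily pair next-gen-scraPy and Big Data Bowl coordinates,
    closest remaining pair first; returns (i, j, distance) triples."""
    d = cdist(ngs_xy, bdb_xy)      # pairwise Euclidean distances
    links = []
    for _ in range(min(len(ngs_xy), len(bdb_xy))):
        i, j = np.unravel_index(np.argmin(d), d.shape)
        links.append((i, j, d[i, j]))
        d[i, :] = np.inf           # remove all pairs involving i ...
        d[:, j] = np.inf           # ... and j
    return links
\end{verbatim}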
\subsection{NFL-Wide Models}
\label{sec:nfl-models}
Below, we describe two models for NFL passing: (1) a league-wide pass target location distribution, estimated using kernel density estimation; and (2) a model for league-wide completion percentage by pass target location, estimated with a generalized additive model.
It is important to recognize that both models use only observational data, and thus should only be used to describe what has happened in the past. The observed data is biased in obvious ways: QBs tend to target open receivers, defenses tend to cover high-leverage areas of the field more closely, and target locations depend greatly on the game situation (e.g. yards to first down). None of this information is available in our dataset, but all of this information will influence on-field decisions that are made by QBs and teams.
\subsubsection{Estimating the Distribution of NFL Pass Locations}
We estimate the league-wide pass location distribution via kernel density estimation (KDE). KDE is a non-parametric approach for estimating a probability distribution given only observational data. In the univariate case, a small probability density function (``kernel'') is placed over each observation $x_i$, and these kernels are aggregated across the entire dataset $x_1, ..., x_n$. Let $K$ be the kernel function, $n$ the number of data points, and $h$ a smoothing parameter. Then, the univariate, empirical density estimate via KDE is: $\hat{f}_h(x) = \frac{1}{nh} \sum^n_{i=1}{K(\frac{x - x_i}{h})}$.
In our case, we use KDE to estimate the probability of a pass targeting a two-dimensional location $(x,y)$ on the field, relative to the line of scrimmage, $\hat{f}(x,y)$. Density estimation via two-dimensional KDE follows a similar form:
$$
\hat{f}_h(x,y) = \dfrac{1}{nh_xh_y} \sum^n_{i=1} \{K_x(\dfrac{x - x_i}{h_x}) K_y(\dfrac{y - y_i}{h_y})\}
$$
\noindent where $h_x$ and $h_y$ are bandwidths for $x$ and $y$, respectively, and $K_x$ and $K_y$ are the respective kernels. Alternative definitions use joint kernels $K_{x,y}$ or identical kernels $K$, but for our purposes, the above definition suffices.
We use a bivariate normal kernel $K$ for our estimation (which can be decomposed into independent kernels $K_x$ and $K_y$), and Scott's rule of thumb heuristic for the bandwidth \citep{mass}.\footnote{$\hat{h} = 1.06 \times \text{min}(\hat{\sigma}, \frac{IQR}{1.34}) \times n^{-1/5}$, where $\sigma$ is the standard deviation and $IQR$ is the interquartile range of the values in the corresponding dimension} We bound the KDE within the rectangular box described above: 10 yards behind the line of scrimmage, 55 yards in front of the line of scrimmage, and between both sidelines. The resulting KDE gives us an empirical estimate of the pass target locations for the entire NFL.
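For illustration, a sketch of such a two-dimensional estimate with SciPy is given below; SciPy's built-in \texttt{'silverman'} bandwidth factor is a close, but not identical, analogue of the rule-of-thumb quoted above, so this is an approximation rather than a reproduction of our fit.

\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def fit_target_density(x, y):
    """x: yards downfield from the line of scrimmage;
    y: yards from the centre of the field (1-D arrays)."""
    kde = gaussian_kde(np.vstack([x, y]), bw_method="silverman")
    # Evaluate on a grid bounded by the field: 10 yards behind to
    # 55 yards beyond the line of scrimmage, sideline to sideline.
    gx, gy = np.mgrid[-10:55:131j, -26.65:26.65:108j]
    dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    return gx, gy, dens
\end{verbatim}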
An alternative approach would be to obtain a two-dimensional histogram using a grid over the field and estimating the number of passes that were thrown in each of the grid cells.
However, this approach has some limitations, including the histograms' sensitivity to the anchor point \citep{silverman1986density}.
Furthermore, the estimates obtained from overlaying a grid over the field are essentially a discrete approximation of a continuous surface.
Two points $(x_1, y_1)$ and $(x_2, y_2)$ that fall in the same grid cell will have the same estimated probability of being targeted with a pass.
Of course, this problem can be minimized by making the grid cells as small as possible (e.g., cell sides of 0.5 yards), but then sample size concerns limit the interpretability of the resulting density estimate, since we will end up with several empty cells and a noisy estimation.
KDE provides a continuous approximation over the surface, smoothing the differences between neighboring points.
\subsubsection{Estimating the NFL Completion Percentages Surface}
We use generalized additive models (GAMs) to model the probability $P(\mbox{Completion} | x,y)$ of a pass being completed given the targeted location $(x,y)$ of the field \citep{hastie1990monographs}.
GAMs are similar to generalized linear models (GLMs), except where the response variable is linearly associated with smooth functions of the independent variables (via some link function, e.g. the logit for a binomial response). This is in contrast to typical GLMs (e.g. logistic regression), where the response variable is linearly associated with the independent variables themselves (via some link function).
Formally, if $y$ is the dependent variable, and $x_i,~i\in \{1,\dots,n\}$ are the independent variables:
$$
y = g(\alpha_0 + f_1(x_1)+f_2(x_2)+\dots+f_n(x_n))
$$
where $g^{-1}$ is the link function for the model.
For binary data, as in our case, the link function is the logit function, $g^{-1}(z) = \log (\dfrac{z}{1-z})$.
The functions $f_i$ can take many forms, but can be thought of as smoothers between the dependent variable $y$ and the independent variable $x_i$.
For our model, we use two independent variables -- the vertical coordinate $x$ and the horizontal coordinate $y$ -- and an interaction between them $x \cdot y$. We posit that all three terms are necessary:
First, $x$ represents the pass location down the length of the field, and intuitively should have some marginal effect on completion percentage. For example, longer passes are more difficult to complete.
Second, $y$ represents the pass location across the width of the field, and intuitively should have some (non-linear) marginal effect on completion probability. For example, passes towards the sidelines are farther away, while passes to the middle of the field are closer, potentially affecting completion probability; target locations towards the middle of the field typically have more defenders in that area, potentially affecting completion probability.
Third, we include the interaction term, since we hypothesize that there is some joint non-linear effect on completion probability that is not described by the marginal terms alone. For example, the completion probabilities of passes behind the line of scrimmage likely are not aptly described by the marginal terms alone; there is likely some joint relationship between $(x,y)$ and the completion probability.
Thus, we have:
$$\log\left(\dfrac{P(\mbox{Complete})}{P(\mbox{Incomplete})}\right) = c_0 + f_x(x) + f_y(y) + f_{xy}(x,y)$$
We choose $f_x$, $f_y$ and $f_{xy}$ to be tensor product interaction smoothers, which work well in situations where both marginal and interaction effects are present \citep{mgcv}.
Specifically, this type of smoothing function accounts for the marginal bases from the main effects when estimating the smoothing function for the interaction effect, and it allows for the functional ANOVA decomposition in the model equation above, which is more interpretable than alternative choices. Cross-validation supports our choice of smoother, and visual inspection indicates that this smoother yields interpretable results that make sense in the context of football (see Section \ref{sec:results} for examples).
Alternative models to the GAM are not ideal for this situation. Generalized linear models, for example, will not allow for non-parametric or smooth relationships between the independent variables (distance from line of scrimmage, distance from center of field) and the response variable (completion percentage), and thus will yield a poor fit to the data. Tree-based models (e.g. decision trees, random forests, gradient boosted trees, etc) partition the surface of the field into discrete areas, and will not yield the same smooth surfaces that we obtain from the GAM. In short, the GAM is an ideal choice for the continuous nature of our response variable and the structure of our explanatory data; and it appropriately models and captures the relationship between our explanatory and response variables.
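The tensor product interaction smoothers above correspond to the \texttt{ti()} construction in the R package \texttt{mgcv} \citep{mgcv}. Purely for illustration, a rough analogue can be sketched in Python with the \texttt{pygam} package (assumed available); its tensor term \texttt{te()} does not reproduce \texttt{ti()}'s functional ANOVA decomposition, so the sketch approximates, rather than replicates, the model above.

\begin{verbatim}
import numpy as np
from pygam import LogisticGAM, s, te

def fit_completion_surface(X, z):
    """X: two columns, x (yards downfield) and y (yards from centre);
    z: 1 for a completion, 0 otherwise."""
    # Marginal smooths plus a tensor-product interaction (approximation
    # of the ti() decomposition used in the text).
    gam = LogisticGAM(s(0) + s(1) + te(0, 1)).fit(X, z)
    return gam

# Predicted completion probabilities on a grid over the field:
# grid = np.column_stack([gx.ravel(), gy.ravel()])
# p_hat = fit_completion_surface(X, z).predict_proba(grid)
\end{verbatim}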
\subsection{2-D Naive Bayes for Individual Completion Percentage Surfaces}
For individual QBs or team defenses, we can directly apply the methodology used in Section \ref{sec:nfl-models} to subsets of the data corresponding to these groups. As a result, we can obtain quick estimates of the pass target location for an individual quarterback and the completion percentage surface for an individual quarterback. Similarly, for team defenses, we can take the subset of passes against this team (since opposing defense is included in the metadata described in Section \ref{sec:ngs-scrapy}), and examine the distribution of pass locations \emph{allowed} by a specific team, and the pass completion percentage surface \emph{allowed} by a specific team.
However, in doing so, we quickly run into sample size issues when estimating the completion percentage model. While we had over 27,000 passes with which to estimate the league-wide models in Section \ref{sec:nfl-models}, a typical NFL team only attempts between 400 and 700 passes in an entire season, and the number of attempts for individual QBs can be even lower than that. Moreover, in specific areas of the field that are less frequently targeted, some QBs may only attempt a handful of passes to that area of the field over the course of an entire season. As such, any model built using these small samples of data for individual QBs and team defenses will likely overfit the data.
In response to this issue, we introduce a two-dimensional naive Bayes approach for estimating the completion percentage surfaces for individual QBs and team defenses. Our approach regresses the estimates for individual QBs and team defenses towards the league-average completion percentage \emph{in each area of the field}, and adaptively accounts for the sample size of passes in each particular area of the field. Our model is described as follows:
\begin{itemize}
\item Let $g$ be the group or player of interest (e.g. an individual QB or a team defense).
\item Let $\hat{f}_{NFL}(x,y)$ be the pass target location distribution for the NFL. Let $\hat{f}_{g}(x,y)$ be the pass target location distribution for group $g$.
\item Let $\hat{P}_{NFL}(\mbox{Complete} | x,y)$ be the estimated completion percentage surface for the NFL.
\item Let $\hat{P}_{g}(\mbox{Complete} | x,y)$ be the estimated completion percentage surface for $g$.
\item Let $N_{Median}$ be the median number of pass attempts in the class of group $g$ (for QBs, the median number of passes attempted for all QBs; for team defenses, the median number of pass attempts allowed for all teams). (This is a parameter that can be adjusted to give more or less weight to the league-wide distribution.)
\item Let $N_g$ be the total number of passes attempted by $g$ (or against $g$, in the case of team defenses).
\item Finally, let $\hat{P}_G^*(\mbox{Complete} | x,y)$ be our 2-D naive Bayes estimate of the completion percentage surface for $g$.
\end{itemize}
Then,
$$
\hspace{-1in}
\hat{P}_G^*(\mbox{Complete} | x,y) = \frac{N_g \times \hat{f}_{g}(x,y) \times \hat{P}_{g}(\mbox{Complete} | x,y) + N_{Median} \times \hat{f}_{{NFL}}(x,y) \times \hat{P}_{{NFL}}(\mbox{Complete} | x,y)} {N_g \times \hat{f}_{{g}}(x,y) + N_{Median} \times \hat{f}_{{NFL}}(x,y)}
$$
In words, our approach goes through each location on the completion percentage surface for $g$ and scales it towards the league-average completion percentage surface from Section \ref{sec:nfl-models}, using a scaling factor of $N_{Median}$.
The above approach is Bayesian in nature: The second terms, on the right hand side of the numerator and denominator, can be thought of as the {\em prior} estimate for the completion percentage. Then, as we accumulate more evidence (i.e. as $N_{g}$ increases), the first term is given more weight.
Similar alternatives may also be suitable, depending on the context. For example, we could gradually shift the weight from the prior to the data in group $g$, rather than using a static weight of $N_{Median}$. This is easily implemented, and we encourage interested readers to use the open-source code that we provide for these models and try their own approaches.\footnote{\url{https://github.com/ryurko/next-gen-scrapy}}
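Concretely, given grids for the densities and completion-percentage surfaces estimated above, the blending reduces to a few lines; the sketch below is a direct transcription of the equation, not the packaged implementation.

\begin{verbatim}
def shrink_surface(p_g, f_g, n_g, p_nfl, f_nfl, n_median):
    """2-D naive Bayes estimate of a completion-percentage surface.
    p_g, f_g     : group completion-probability and density grids
    p_nfl, f_nfl : league-wide counterparts on the same grid
    n_g          : pass attempts by (or against) the group
    n_median     : median attempts in the group's class"""
    w_g = n_g * f_g
    w_nfl = n_median * f_nfl
    return (w_g * p_g + w_nfl * p_nfl) / (w_g + w_nfl)
\end{verbatim}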
\subsection{Accuracy of Data Provided by {\texttt{next-gen-scraPy}}{}}
\label{sec:accuracy}
As described in Section \ref{sec:anecdote}, we cannot compare \emph{all} of the data provided by {\texttt{next-gen-scraPy}}{} to league-provided Big Data Bowl tracking data for several reasons: (1) the pass locations for incompletions are not marked in the tracking data; (2) the tracking data provides only six weeks of data from 2017 (while we provide data from all weeks from the 2017 and 2018 seasons); and (3) passes from {{\texttt{next-gen-scraPy}}} do not contain play identifiers or timestamps, so we cannot join them with their counterpart pass from the Big Data Bowl.
We can, however, provide examples of how our completed pass locations closely match those from the Big Data Bowl for individual QB-games. Figure \ref{fig:ngs_tracking} visualizes the locations of the completed passes for four QB-games in the 2017 season: (A) Alex Smith against New England on September 7th, (B) Ben Roethlisberger against Cleveland on September 10th, (C) Russell Wilson against Indianapolis on October 1st, and (D) Tom Brady against New Orleans on September 17th. The points are very well-aligned, providing a validating example of the accuracy with which {{\texttt{next-gen-scraPy}}} obtains raw pass locations by processing the images from Next Gen Stats.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/game_plots.pdf}
\caption{The pass locations obtained from {{\texttt{next-gen-scraPy}}} closely match those from the NFL's player tracking data.}
\label{fig:ngs_tracking}
\end{figure}
The small deviations for individual passes that we observe in Figure \ref{fig:ngs_tracking} could occur for several reasons. First, one source may mark the location of the ball, while another source may mark the location of the player catching the ball. Since the Radio Frequency Identification (RFID) chips used to obtain the player tracking data are placed in players' shoulder pads, any catch that involves the receiver reaching out for the ball will have some small deviation between the ball and player locations. Second, there may be differences in the timing of when these pass locations are tracked. For example, the NFL tracking data annotates when a pass arrives in some places, and when it is caught in others. It is possible that the data underlying the NGS passing charts uses one set of coordinates, while the NFL tracking data uses another. If there is any time between these two events, and if the receiver is moving, this may lead to a deviation in the coordinates across the two data sources. Third, we use the center of the field as the reference when calculating the horizontal distance that a pass travels in the Big Data Bowl Dataset, but it is possible that alternative approaches yield better results. Fourth, there may be a small amount of error associated with our methods for identifying the locations of passes in the images.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/distance-plot.pdf}
\caption{Difference in distances between completed pass locations in {{\texttt{next-gen-scraPy}}} compared to NFL's tracking data from the Big Data Bowl.}
\label{fig:distance-difference}
\end{figure}
Circumstantial evidence for the first two reasons above can be found in Figures \ref{fig:distance-difference} and \ref{fig:angle-difference}. Figure \ref{fig:distance-difference} shows that for most linked observations across the two datasets, the distance between the linked coordinates is small. As discussed in Section \ref{sec:linkage}, the distribution of the distances is heavily skewed right (likely because we make linkage errors at the tails of our greedy record linkage algorithm). The median deviation in the pass coordinates across the two datasets is only 1.7 yards.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/angle-plot.pdf}
\caption{Angular difference between completed pass locations in {{\texttt{next-gen-scraPy}}} compared to NFL's tracking data from the Big Data Bowl.}
\label{fig:angle-difference}
\end{figure}
Figure \ref{fig:angle-difference} provides some evidence that one data source marks the location of the ball, while another marks the location of the receiver. As shown here, passes to the middle 15 yards of the field are not strongly associated with any angular difference across the two datasets. However, for passes to the left side of the field, the Big Data Bowl pass locations tend to be closer to the line of scrimmage and more towards the right (i.e. towards the middle of the field) than those from {{\texttt{next-gen-scraPy}}}. Similarly, for passes to the right side of the field, the Big Data Bowl pass locations tend to be closer to the line of scrimmage and more towards the left (i.e. towards the middle of the field) than those from {{\texttt{next-gen-scraPy}}}. This might indicate that the Big Data Bowl coordinates correspond to the location of the player (in particular, the RFID chips in the players' shoulder pads), while the {{\texttt{next-gen-scraPy}}} coordinates correspond to the location of the ball.
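For readers who wish to reproduce these deviation summaries, a minimal Python sketch is given below. It is illustrative only; the file name, column names, and the field-third bins are assumptions rather than part of any code released with this paper.
\begin{verbatim}
# Illustrative sketch: deviation between linked pass locations from
# next-gen-scraPy and the Big Data Bowl tracking data.
import numpy as np
import pandas as pd

linked = pd.read_csv("linked_passes.csv")   # hypothetical table of linked pairs

dx = linked["x_scrapy"] - linked["x_bdb"]   # lateral deviation (yards)
dy = linked["y_scrapy"] - linked["y_bdb"]   # downfield deviation (yards)

linked["dist_yds"] = np.sqrt(dx**2 + dy**2)
linked["angle_deg"] = np.degrees(np.arctan2(dy, dx))

print(linked["dist_yds"].median())          # median deviation (about 1.7 yards here)

# Angular difference by lateral field third (middle defined as in the text).
thirds = pd.cut(linked["x_scrapy"], bins=[-26.7, -13.33, 13.33, 26.7],
                labels=["left", "middle", "right"])
print(linked.groupby(thirds)["angle_deg"].median())
\end{verbatim}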
Finally, the pass locations from the NFL tracking data are only available for the first six weeks of the 2017 season. It is possible that the NGS images have improved since then, and thus that the error associated with {{\texttt{next-gen-scraPy}}} has decreased over time. We look forward to re-examining this when additional, more recent tracking data is released by the NFL.
\subsection{League-Wide Passing Trends}
Figure \ref{fig:league_wide} shows the distribution of pass locations (relative to the line of scrimmage) and the results of the generalized additive model for predicting completion probabilities for the 2017 and 2018 NFL seasons. Comparing the two seasons, we can see that the distributions of pass locations are almost identical, with most passes being relatively short and near the line of scrimmage.
Specifically, in 2017, 68.5\% of targets were within 10 yards down the field past the line of scrimmage, and 88.8\% were within 20 yards. In 2018, 66.5\% of targets were within 10 yards, and 89.2\% were within 20 yards.
For targets further than 20 yards past the line of scrimmage, almost all were towards the sidelines.
Only 2.96\% of targets past 20 yards in 2017 and 3.11\% in 2018 were in the middle of the field (i.e., $-13.33 \leq x \leq 13.33$).
The predicted completion probability follows a similar trend.
Passes closer to the line of scrimmage have a higher completion percentage.
The median predicted completion percentages for the 2017 and 2018 seasons are 73.3\% and 76.8\% respectively for passes within 10 yards of the line of scrimmage, while these numbers drop to 65.4\% and 69.7\% for passes within 20 yards.
For passes more than 20 yards past the line of scrimmage, the median completion percentages fall drastically to 28.4\% and 28.0\% respectively.
For these deeper passes, the median completion percentage was 30.4\% towards the middle and 26.5\% towards the sidelines in 2017, and 30.5\% towards the middle and 22.7\% towards the sidelines in 2018.
This analysis is strictly retrospective, given the observational nature of our dataset, and does not include every game played throughout 2017 and 2018. Potential biases in our dataset may result from differences in play-calling for specific teams and QBs, differences in decision-making for specific teams and QBs, and the openness of the receivers and areas of the field to which passes are targeted.
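As a rough illustration of how the two components in Figure \ref{fig:league_wide} can be estimated, the Python sketch below fits a kernel density estimate to the pass locations and a logistic GAM with a tensor-product smooth for completion probability. This is not the code used to produce the figure; the original surfaces may have been fit with a different GAM implementation, and the file name and grid limits are assumptions.
\begin{verbatim}
# Illustrative sketch of the two league-wide components for one season.
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde
from pygam import LogisticGAM, te

passes = pd.read_csv("passes_2018.csv")                  # hypothetical input file
X = passes[["x_coord", "y_coord"]].to_numpy()            # lateral, downfield (yards)
y = passes["pass_type"].isin(["COMPLETE", "TOUCHDOWN"]).astype(int).to_numpy()

kde = gaussian_kde(X.T)                                  # pass location density
gam = LogisticGAM(te(0, 1)).fit(X, y)                    # completion probability surface

# Evaluate both components on a grid covering the field.
gx, gy = np.meshgrid(np.linspace(-26.7, 26.7, 80), np.linspace(-10, 40, 80))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = kde(grid.T).reshape(gx.shape)
p_complete = gam.predict_proba(grid).reshape(gx.shape)
\end{verbatim}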
\begin{figure}[!ht]
\captionsetup[subfigure]{labelformat=empty}
\centering
\includegraphics[width=.85\textwidth]{figures/league_wide_charts/league_wide_4.png}
\caption{The individual components of our model: kernel density estimates for league-wide pass location probability (first column), and generalized additive models for league-wide completion probability (second column).}
\label{fig:league_wide}
\end{figure}
\subsection{Team Defense Completion Percentage Allowed Surfaces}
Our framework also allows us to explore the completion percentage allowed by each team defense by location on the field over an entire season.
For this analysis, we compare our model output with Football Outsiders' Defensive Efficiency Ratings, specifically Defense DVOA (Defense-adjusted Value over Average). DVOA uses the idea of ``success points" to measure a team's success based on total yards and yards towards a first down for every play in an NFL season. With the Defense DVOA metric, $0\%$ represents league-average performance, a positive percentage signifies performance in a situation benefitting the opposing offense, and a negative percentage signifies performance benefitting the team's own defense \citep{FO_DVOA}.
In the 2018 regular season, the Chicago Bears and Baltimore Ravens had the first and third lowest Defensive DVOAs of $-26.9\%$ and $-14.2\%$, respectively.
In Figure \ref{fig:good_team_d}, we present the predicted completion percentage from our model for the Chicago Bears and Baltimore Ravens.
We present both the {\em raw} predictions (left column) and the predictions relative to the league average (right column).
As we can see, in agreement with the Defensive DVOA metric, both teams are projected to allow a lower completion percentage over the entire field.
Furthermore, Figure \ref{fig:bad_team_d} depicts the same surfaces for the two defensive teams with the third and fifth highest Defensive DVOAs, namely, the Oakland Raiders at $12.3\%$ and Cincinnati Bengals at $9.0\%$.
Compared to the results for the Bears and the Ravens, we can see that the Bengals and Raiders are worse than average in completion percentage allowed over several parts of the field, particularly down the middle of the field.
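The ``relative to league average'' surfaces in the right columns of Figures \ref{fig:good_team_d} and \ref{fig:bad_team_d} can be sketched as the pointwise difference between a defense-specific surface and the league-wide surface, as in the illustrative snippet below; the \texttt{def\_team} column is an assumption, and the snippet is not our production code.
\begin{verbatim}
# Illustrative sketch: completion percentage allowed relative to league average.
import numpy as np
import pandas as pd
from pygam import LogisticGAM, te

passes = pd.read_csv("passes_2018.csv")                  # hypothetical input file
complete = passes["pass_type"].isin(["COMPLETE", "TOUCHDOWN"]).astype(int)

def completion_surface(df, outcome):
    X = df[["x_coord", "y_coord"]].to_numpy()
    return LogisticGAM(te(0, 1)).fit(X, outcome.to_numpy())

league = completion_surface(passes, complete)
is_chi = passes["def_team"] == "CHI"                     # assumed defense column
chicago = completion_surface(passes[is_chi], complete[is_chi])

gx, gy = np.meshgrid(np.linspace(-26.7, 26.7, 80), np.linspace(-10, 40, 80))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Negative values: fewer completions allowed than league average (better defense).
chi_vs_league = (chicago.predict_proba(grid)
                 - league.predict_proba(grid)).reshape(gx.shape)
\end{verbatim}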
\begin{figure}[!ht]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/team_d_charts_bold/CHI_cpa.png}}\quad
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/team_d_charts_bold/CHI_vs_league.png}}\\
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/team_d_charts_bold/BAL_cpa.png}}\quad
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/team_d_charts_bold/BAL_vs_league.png}}\\
\caption{Completion percentage allowed surfaces of the Chicago Bears and Baltimore Ravens defense, who have the highest Expected Points Contributed By All Defense according to Pro Football Reference.}
\label{fig:good_team_d}
\end{figure}
\begin{figure}[!ht]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/team_d_charts_bold/CIN_cpa.png}}\quad
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/team_d_charts_bold/CIN_vs_league.png}}\\
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/team_d_charts_bold/OAK_cpa.png}}\quad
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/team_d_charts_bold/OAK_vs_league.png}}\\
\caption{Completion percentage allowed surfaces of the Cincinnati Bengals and Oakland Raiders defense, who have the lowest Expected Points Contributed By All Defense according to Pro Football Reference.}
\label{fig:bad_team_d}
\end{figure}
\subsection{Quarterback Completion Percentage Surfaces}
Our framework also allows us to examine patterns and make league comparisons at the quarterback level. We also compare our quarterback evaluations to ESPN's Total QBR metric, which describes a quarterback's contributions to winning in terms of passing, rushing, turnovers, and penalties, as well as accounting for his offense's performance for every play \citep{ESPN_total_QBR}.
In the 2018 regular season, the quarterbacks with the highest Total QBRs were Patrick Mahomes and Drew Brees, with total ratings of 81.8 and 80.8, respectively.
Figures \ref{fig:good_qb} and \ref{fig:bad_qb} present the completion percentage predicted by our model by field location for the two QBs, overlaid with the raw pass charts.
When comparing Brees and Mahomes to the league average, we find that both quarterbacks perform either equal to or above league average in almost every possible target location on the field, indicated by the white (average) and green (above average) areas. On the other end of the spectrum, two of the quarterbacks with the lowest total ratings were Joshua Allen, ranked $24^{th}$ in the league, and Joshua Rosen, ranked last in the league at $33^{rd}$, with ratings of 52.2 and 25.9, respectively.
Similarly, when comparing Allen and Rosen to league average, we observe that they performed below average (purple) across the field.
\begin{figure}[!ht]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/qb_charts_bold/DrewBrees_cp.png}
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/qb_charts_bold/DrewBrees_vs_league.png}}\\
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/qb_charts_bold/PatrickMahomes_cp.png}
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/qb_charts_bold/PatrickMahomes_vs_league.png}}\\
\caption{Completion percentage surfaces for Drew Brees and Patrick Mahomes, the quarterbacks with the highest passer ratings in the 2018 regular season, according to Next Gen Stats.}
\label{fig:good_qb}
\end{figure}
\begin{figure}[!ht]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/qb_charts_bold/JoshuaAllen_cp.png}
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/qb_charts_bold/JoshuaAllen_vs_league.png}}\\
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/qb_charts_bold/JoshuaRosen_cp.png}
\subfloat[][]{\includegraphics[width=.4\textwidth]{figures/qb_charts_bold/JoshuaRosen_vs_league.png}}\\
\caption{Completion percentage surfaces for Joshua Allen and Joshua Rosen, the quarterbacks with the lowest passer ratings in the 2018 regular season, according to Next Gen Stats.}
\label{fig:bad_qb}
\end{figure}
\subsection{Completion Percentage Above Expectation}
\label{sec:cpae}
Using the estimated completion percentage surfaces for team defenses or individual QBs, we can obtain a single-valued summary for the performance of a QB or team defense in terms of completion percentage.
In particular, we define the Completion Percentage Above Expectation of $g$ ($CPAE_g$) as the integral of the above-league-average completion surface for $g$, $\hat{P}_{g,league}^*(\mbox{Complete} | x,y)$, weighted by the spatial density of pass attempts for $g$, $\hat{f}_{g}(x,y)$:
\begin{equation}
CPAE_g = \int\limits_{\mathcal{X}} \int\limits_{\mathcal{Y}} \hat{P}_{g,league}^*(\mbox{Complete} | x,y)\cdot \hat{f}_{g}(x,y) \,dx\,dy
\label{eq:cpae}
\end{equation}
where the double integral is over the two spatial dimensions of the field.
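In practice, we approximate this double integral numerically on a regular grid; a minimal sketch of such a computation for Equation (\ref{eq:cpae}) is shown below, where the two estimated surfaces are assumed to be available as callables and the grid limits and resolution are illustrative.
\begin{verbatim}
# Illustrative sketch of the CPAE integral: a density-weighted integral of
# the above-league-average completion surface, approximated on a grid.
import numpy as np

def cpae(p_above_league, f_g, x_range=(-26.7, 26.7), y_range=(-10, 50), n=200):
    xs = np.linspace(*x_range, n)
    ys = np.linspace(*y_range, n)
    gx, gy = np.meshgrid(xs, ys)
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    # Riemann-sum approximation of the double integral over the field.
    return np.sum(p_above_league(gx, gy) * f_g(gx, gy)) * dx * dy
\end{verbatim}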
We estimated $CPAE_g$ for each QB with at least 100 passes in a season, for both seasons in our dataset, and Table \ref{tab:cpae} in Appendix \ref{app:appendix-C} presents the results. From the table, we see that in the 2018 season, Drew Brees (+6.14\%), Ryan Fitzpatrick (+3.42\%), Nick Foles (+3.42\%), Russell Wilson (+3.39\%), Matt Ryan (+3.22\%), and Carson Wentz (+3.08\%) performed best by this measure; while Blake Bortles (-5.04\%), Jeff Driskel (-4.83\%), Josh Rosen (-4.54\%), and Casey Beathard (-4.37\%) performed worst by CPAE.
We similarly estimate $CPAE_g$ for each team defense in both seasons of the dataset, and Table \ref{tab:cpae_def} in Appendix \ref{app:appendix-D} presents the results. Noting that negative CPAEs are better when evaluating team defenses (i.e. they reduce opponents' completion percentage), we see that in the 2018 season, the best pass defenses were Baltimore (-5.24\%), Chicago (-2.43\%), Los Angeles Rams (-2.35\%), Oakland (-2.12\%), and Kansas City (-1.98\%); while the worst pass defenses were Tampa Bay (+6.89\%), Atlanta (+4.31\%), and New Orleans (+4.07\%). Anecdotally, four of the top five teams by CPAE pass defense made the playoffs in 2018 (all except for the Oakland Raiders).
We further explore the {\em stability} of QB $CPAE_t$ across the two seasons available in our data ($t \in \{2017, 2018\}$).
Figure \ref{fig:cpae} presents the $CPAE$ for all qualifying QBs (i.e., at least 100 passes in each of the two seasons).
The linear correlation between $CPAE_{2017}$ and $CPAE_{2018}$ is 0.41 ($p < 0.05$), hence exhibiting a moderate level of stability across the seasons examined. Of course, if we were to account for selection bias (e.g. below-average QBs tend to stay below-average, but also tend to lose their starting jobs), this relationship may actually prove stronger than stated here.
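The stability figure reported above corresponds to a simple Pearson correlation between the two seasons' CPAE values for qualifying QBs; a short sketch, with an assumed input file, is given below.
\begin{verbatim}
# Illustrative sketch: year-to-year stability of QB CPAE.
import pandas as pd
from scipy.stats import pearsonr

qb = pd.read_csv("qb_cpae.csv")                          # hypothetical: CPAE17, CPAE18
qualified = qb.dropna(subset=["CPAE17", "CPAE18"])       # QBs with both seasons
r, p = pearsonr(qualified["CPAE17"], qualified["CPAE18"])
print(round(r, 2), p)                                    # about 0.41 in our data
\end{verbatim}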
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/qb_charts_bold/CPAE.png}
\caption{Stability of CPAE for QBs with at least 100 passes in 2017 and 2018.}
\label{fig:cpae}
\end{figure}
Since the NFL player tracking data became available to teams and league officials in 2016, the NFL has developed various metrics based on these data as part of their NextGen Stats (NGS) initiative.
One of these metrics is the completion percentage above expectation\footnote{\url{ https://nextgenstats.nfl.com/stats/passing/2019/all}}, which quantifies how a QB has fared compared to the expected completion percentage of his throws.
Contrary to our approach, the NGS CPAE takes into consideration various variables\footnote{\url{http://www.nfl.com/news/story/0ap3000000964655/article/next-gen-stats-introduction-to-completion-probability}} such as the location of the receiver, the location of the defenders, the distance from the sideline, etc. NFL NGS first creates a completion probability model for each throw.
Then, NGS CPAE is simply the difference between the actual completion result and the model's expected completion probability.
We collected the NGS CPAE for all QBs from the 2017 and 2018 regular seasons and compared them with the CPAE results obtained from our Naive Bayes approach.
Figure \ref{fig:cpae-ngs} presents the results.
Overall, the correlation between the two metrics is high ($\rho_{2017}$= 0.81, $\rho_{2018} = 0.91$).
Some of the differences in the CPAE values between the two approaches can be attributed to passing charts that are missing from the {{\texttt{next-gen-scraPy}}} dataset.
Nevertheless, our version of CPAE closely follows the one provided by NGS, despite using only publicly available data.
Interestingly, Figure \ref{fig:cpae-ngs} also shows that NGS CPAE appears to be higher on average in 2018 than 2017, while no temporal trend exists when using {{\texttt{next-gen-scraPy}}} CPAE.
\begin{figure}
\centering
\includegraphics[width = \textwidth]{figures/qb_charts_bold/NB_NGS.png}
\caption{NFL NGS CPAE vs. {{\texttt{next-gen-scraPy}}} CPAE in 2017 and 2018.}
\label{fig:cpae-ngs}
\end{figure}
\section{Data Scraped from Next Gen Stats}
\label{app:appendix-A}
\begin{table}[!htb]
\begin{center}
\begin{tabular}{p{3cm}p{10cm}}
\toprule
\textbf{Variable} & \textbf{Description} \\
\midrule
completions & number of completions thrown \\
\hline
touchdowns & number of touchdowns thrown \\
\hline
attempts & number of passes thrown \\
\hline
interceptions & number of interceptions thrown \\
\hline
extraLargeImg & URL of extra-large-sized image (1200 x 1200) \\
\hline
week & week of game \\
\hline
gameId & 10-digit game identification number \\
\hline
season & NFL season \\
\hline
firstName & first name of player \\
\hline
lastName & last name of player \\
\hline
team & team name of player \\
\hline
position & position of player \\
\hline
seasonType & regular ("reg") or postseason ("post") \\
\bottomrule
\end{tabular}
\vspace{0.2cm}
\end{center}
\end{table}
\section{Example Subset of Data}
\label{app:appendix-B}
\begin{adjustbox}{angle=0}
\begin{tiny}
\begin{tabular}{llllllllllll}
\toprule
game\_id & team & week & name & pass\_type & x\_coord & y\_coord & type & home\_team & away\_team & season \\
\midrule
2018020400 & PHI & super-bowl & Nick Foles & COMPLETE & -3.6 & 16.9 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & COMPLETE & 16.2 & -3.0 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & COMPLETE & 11.5 & -6.4 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & TOUCHDOWN & -8.5 & 5.7 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & TOUCHDOWN & -18.8 & 30.1 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & TOUCHDOWN & -19.3 & 41.2 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & INTERCEPTION & 21.8 & 37.9 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & INCOMPLETE & 5.1 & 7.9 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & INCOMPLETE & -12.9 & 39.6 & post & NE & PHI & 2017 \\
2018020400 & PHI & super-bowl & Nick Foles & INCOMPLETE & 26.1 & 8.0 & post & NE & PHI & 2017 \\
\bottomrule
\end{tabular}
\end{tiny}
\end{adjustbox}
\section{QB CPAE}
\label{app:appendix-C}
\begin{table}[!htbp] \centering
\begin{tabular}{@{\extracolsep{5pt}} ccccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
QB & CPAE17 & npasses\_2017 & CPAE18 & npasses\_2018 \\
\hline \\[-1.8ex]
Drew Brees & 4.21 & 439 & 6.14 & 473 \\
Ryan Fitzpatrick & 0.47 & 112 & 3.42 & 157 \\
Nick Foles & -3.64 & 152 & 3.42 & 229 \\
Russell Wilson & 5.77 & 309 & 3.39 & 295 \\
Matthew Ryan & 2.77 & 524 & 3.22 & 552 \\
Carson Wentz & 0.07 & 333 & 3.08 & 313 \\
Derek Carr & 0.27 & 300 & 2.96 & 429 \\
Kirk Cousins & -0.15 & 394 & 2.53 & 467 \\
Derrick Watson & 2.53 & 110 & 2.43 & 492 \\
Cameron Newton & -1.18 & 352 & 2.14 & 392 \\
Marcus Mariota & 0.95 & 495 & 1.75 & 275 \\
Jared Goff & -0.41 & 428 & 1.7 & 553 \\
Ben Roethlisberger & 2.46 & 394 & 1.29 & 518 \\
Patrick Mahomes & & & 1.27 & 445 \\
Philip Rivers & 0.34 & 416 & 1.15 & 560 \\
Rayne Prescott & -0.14 & 408 & 1.11 & 434 \\
Jameis Winston & 2.55 & 268 & 0.44 & 295 \\
Andrew Luck & & & 0.33 & 559 \\
Mitchell Trubisky & -1.36 & 262 & 0.27 & 323 \\
Ryan Tannehill & & & 0.08 & 191 \\
Brock Osweiler & & & 0.06 & 163 \\
John Stafford & 3.14 & 384 & -0.04 & 480 \\
Aaron Rodgers & & & -0.15 & 573 \\
Baker Mayfield & & & -0.38 & 269 \\
Alexander Smith & 4.31 & 418 & -0.88 & 254 \\
Tom Brady & 3.23 & 524 & -0.89 & 519 \\
Elisha Manning & -2.22 & 369 & -1 & 536 \\
Sam Darnold & & & -1.05 & 289 \\
Casey Keenum & 0.33 & 382 & -1.38 & 509 \\
Joseph Flacco & -0.23 & 438 & -1.67 & 367 \\
Nicholas Mullens & & & -1.87 & 118 \\
Andrew Dalton & -1.25 & 307 & -1.89 & 195 \\
Lamar Jackson & & & -2.07 & 112 \\
Joshua Allen & & & -3.44 & 237 \\
Casey Beathard & -4.94 & 185 & -4.37 & 168 \\
Joshua Rosen & & & -4.54 & 260 \\
Jeffrey Driskel & & & -4.83 & 110 \\
Robby Bortles & -1.9 & 399 & -5.04 & 336 \\
\hline \\[-1.8ex]
\caption{CPAE for 2017 and 2018 seasons for QBs with at least 100 passes in a season.}
\label{tab:cpae}
\end{tabular}
\end{table}
\section{Defense CPAE}
\label{app:appendix-D}
\begin{table}[!htbp] \centering
\begin{tabular}{@{\extracolsep{5pt}} ccccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
Team & CPAE17 & npasses\_2017 & CPAE18 & npasses\_2018 \\
\hline \\[-1.8ex]
TB & 3.54 & 380 & 6.89 & 452 \\
ATL & 2.36 & 524 & 4.31 & 552 \\
NO & -0.68 & 439 & 4.07 & 495 \\
DAL & 2.81 & 408 & 3.82 & 434 \\
IND & 1.91 & 388 & 2.94 & 559 \\
CIN & -2.37 & 307 & 2.52 & 305 \\
DET & 5.45 & 384 & 2.46 & 480 \\
MIN & -4.93 & 382 & 2.39 & 467 \\
MIA & 1.86 & 355 & 2.38 & 354 \\
WAS & -4.5 & 394 & 2.17 & 403 \\
SEA & -3.89 & 309 & 1.33 & 295 \\
CAR & 1.59 & 352 & 1.19 & 443 \\
PHI & -0.59 & 485 & 0.99 & 542 \\
HOU & 7 & 410 & 0.27 & 492 \\
ARI & -1.93 & 462 & 0.22 & 354 \\
SF & 0.95 & 500 & -0.33 & 344 \\
JAX & -3.99 & 399 & -0.7 & 401 \\
NE & 1.56 & 524 & -1 & 519 \\
GB & 7.1 & 363 & -1.06 & 573 \\
NYG & 1.19 & 402 & -1.08 & 536 \\
LAC & 0.81 & 416 & -1.11 & 560 \\
TEN & -0.85 & 526 & -1.13 & 323 \\
DEN & -1.27 & 386 & -1.13 & 509 \\
CLE & 3.97 & 427 & -1.27 & 339 \\
BUF & 2.41 & 212 & -1.48 & 327 \\
PIT & -1.75 & 421 & -1.65 & 526 \\
NYJ & -3.45 & 370 & -1.93 & 352 \\
KC & -1.64 & 452 & -1.98 & 445 \\
OAK & 1.71 & 324 & -2.12 & 429 \\
LA & -2.81 & 462 & -2.35 & 553 \\
CHI & 3.32 & 368 & -2.43 & 360 \\
BAL & -3.36 & 438 & -5.24 & 479 \\
\hline \\[-1.8ex]
\caption{Defensive CPAE for 2017 and 2018 seasons. Lower number represents better defense.}
\label{tab:cpae_def}
\end{tabular}
\end{table}
\section{Introduction}
\label{sec:intro}
\input{1_Introduction.tex}
\tikzstyle{block} = [rectangle, draw, fill=blue!20,
text width=5em, text centered, rounded corners, minimum height=4em]
\tikzstyle{blockred} = [rectangle, draw, fill=red!20,
text width=5em, text centered, rounded corners, minimum height=4em]
\tikzstyle{line} = [draw, -latex']
\begin{figure}
\begin{tikzpicture}[node distance = 3cm, auto]
\node [block] (b1) {Scraping NGS HTML};
\node [block, right of=b1] (b2) {Collecting Raw Images};
\node [block, right of=b2] (b3) {Image Preprocessing};
\node [block, below of=b3] (b4) {Cropping Field};
\node [block, left of=b4] (b5) {Undistortion of Field};
\node [block, left of=b5] (b6) {Removal of Extraneous Detail};
\node [block, below of=b6] (b7) {Color Thresholding};
\node [block, right of=b7] (b8) {Pass Extraction with Clustering};
\node [block, right of=b8] (b9) {Map Pixels to Field Locations};
\node [blockred, below of=b9] (b10) {Output Data};
\path [line] (b1) -- (b2);
\path [line] (b2) -- (b3);
\path [line] (b3) -- (b4);
\path [line] (b4) -- (b5);
\path [line] (b5) -- (b6);
\path [line] (b6) -- (b7);
\path [line] (b7) -- (b8);
\path [line] (b8) -- (b9);
\path [line] (b9) -- (b10);
\end{tikzpicture}
\caption{Flow chart of the {{\texttt{next-gen-scraPy}}} system}
\label{fig:flow}
\end{figure}
\section{Image Processing Methods for Pass Location Data Extraction}
\label{sec:ngs-scrapy}
\input{2_Methods.tex}
\section{Evaluating and Characterizing Passers and Pass Defenses}
\label{sec:analysis}
\input{3_Analysis.tex}
\section{Results}
\label{sec:results}
\input{4_Results.tex}
\section{Discussion and Conclusions}
\label{sec:conclusions}
\input{5_Discussion.tex}
\bibliographystyle{DeGruyter}
\section{Introduction}
\label{sec:1}
In the past decades, cruise travel has been a rapidly expanding industry around the world. A worldwide annual growth rate of 6.55\% in sea passenger traffic was recorded from 1990 to 2019, and this growth is expected to continue. Waterborne passenger transportation has also developed rapidly, and ships are expected to be used more and more widely for transporting passengers. Unfortunately, several ship disasters in recent years have caused serious losses of life and property, which has raised concerns about the effectiveness of large-crowd passenger evacuation on ships.
To mitigate the ill effects of an emergency, evacuating passengers from the ship will be necessary. The International Maritime Organization (IMO) has already developed guidelines for passenger ship evacuation. However, these guidelines and regulations, like most building codes, only enforce capacity limits on individual components such as exits and passageway widths, and thus can hardly provide efficient layout and management strategies. Computer simulation, in contrast, can be helpful, since simulations can be performed even at the ship design stage. The International Convention for the Safety of Life at Sea (SOLAS) now requires evacuation analysis at the early stage of ship design (IMO MSC.1/Circ.1238, 2007). Thus, several simulation models have been built to perform computer-simulation-based evacuation analysis.
Compared with building evacuation, passenger ship evacuation is still a new research topic and only limited publications can be found. Existing works were more or less inspired by the IMO guidelines. The two full-scale drill exercise projects, i.e., ``FIRE-EXIT'' and ``SAFEGUARD'', have provided valuable information for model validation, especially for ``maritime-Exodus'' \cite{Galea2013}. Some other models have also been established, including AENEAS \cite{Valanto2006}, EVI \cite{Azzi} and VELOS \cite{Ginnis2010}. The former two are grid-based models and thus have a great advantage in computing efficiency, yet the discretization may affect their ability to model the environmental space and detailed pedestrian motion when compared with the latter two continuous-space models. In these models, the influence of ship motion on pedestrian movement is mostly mimicked by speed reductions for different inclination angles, rather than by considering the coupled, forced pedestrian movement features.
Thus, in the present paper we establish an evacuation model that takes forced pedestrian movement patterns into account. The rest of the paper is organized as follows. In Section 2, we introduce the model, which accounts for the ship swaying effect. In Section 3, we first compare single-pedestrian movement features under different conditions and then analyze a ship evacuation process. In the last section, conclusions are drawn.
\section{Ship evacuation model}
\label{sec:2}
Basically, the design layout of a cruise ship is the same as that of a hotel on the ground, including separate accommodation cabins, restaurants, sports facilities, etc., so that cruise ships can be regarded as mobile hotels. However, one important difference between them is that the deck of a cruise ship has a complex motion pattern as a result of periodic water movement. Pedestrians on board are therefore affected by inertial forces, which makes their movement characteristics different from those observed when walking on a static horizontal floor. Taking this special feature into consideration, an agent-based pedestrian model is formulated, and the effect of ship swaying on pedestrian evacuation efficiency is investigated.
\subsection{Ship swaying features}
\label{subsec:1}
As mentioned before, ship movement in water is very complex. A ship displays six-degree-of-freedom motion because of periodic water movement and wind influence, as shown in Fig.~\ref{fig:1}a. These motions include linear translational motions (surge, sway and heave) and nonlinear rotational motions (pitch, roll and yaw). Swaying refers to the linear side-to-side motion, while rolling and pitching represent the tilting rotation of a ship about its front-back axis and side-to-side axis, respectively. Due to the combined motions of swaying, rolling and pitching, the deck of a cruise ship may incline and then recover periodically as a result of the counter-rotating torque. Hereinafter, we do not distinguish between these motion types and simply use swaying instead. This combined movement feature can be quantified by two parameters, the swaying amplitude $B(t)$ and the phase $\varphi(t)$. It should be noticed that $\varphi(t)$ changes periodically with time $t$. The exact form of ship motions in a seaway should be based on the water movement and wind features; however, for simplicity and without loss of generality, we assume,
\begin{eqnarray}
\varphi(t)= M \times sin(t),
\label{eq:01}
\end{eqnarray}
where $M$ means the maximal ship ground inclining degree.
\begin{figure}[b]
\sidecaption
\includegraphics[bb=0 270 580 540, scale=0.3]{fig1a.eps}
\includegraphics[bb=0 290 280 540, scale=0.5]{fig2.eps}
\caption{(a) Scheme of ship motion types and (b) Force analysis for an on-board passenger.}
\label{fig:1}
\end{figure}
\subsection{Pedestrian movement features}
\label{subsec:2}
When a pedestrian moves smoothly on a horizontal floor and there is no swaying, the pedestrian movement is driven by the internal self-driven force, $F_e$, as shown in Fig.~\ref{fig:1}b. This internal driving force makes the pedestrian move with a speed not far from his expected speed $v_e$, thus the required evacuation time can be estimated based on walking speed, queuing length, and emergency exit capacity \cite{Daamen2012}. However, it should be noticed that the movement of a pedestrian is accomplished step by step using the two legs. As a consequence, there is a gait cycle, as reported in Refs.~\cite{Olivier2012,Ma2013}. Thus, when the ground is swaying, the pedestrian gait is affected by a new force component, the inertial force $F_s$, as shown in Fig.~\ref{fig:1}b. This new force component can be projected onto two directions, i.e., the pedestrian movement direction and the direction perpendicular to it. The force component along the pedestrian movement direction, $F_{e||}$, affects the maximal speed that the pedestrian can achieve, while the force component along the lateral direction affects the gait cycle, so we introduce a periodically changing force component along the lateral direction, i.e., $F_p$, to quantify this influence.
For the internal self-driven force, as in other force-based models \cite{Helbing2000,Chraibi2010}, we take the following form,
\begin{eqnarray}
F_e = m \frac{\vec{v_e} - \vec{v}}{\tau}
\label{eq:02}
\end{eqnarray}
where $\tau=0.5$s is selected according to Ref\cite{Ma2010}. The ship swaying induced parallel force component $F_{e||}$ can be denoted as,
\begin{eqnarray}
F_{e||} = F_s \times cos\psi
\label{eq:03}
\end{eqnarray}
where $\psi$ denotes the angle between $F_s$ and $F_e$. For the step-by-step lateral movement of the pedestrian, we assume that,
\begin{eqnarray}
F_p = A \times sin\theta
\label{eq:04}
\end{eqnarray}
It should be noticed that a pedestrian can sense the ship swaying and, as a result, adjust his gait. Thus, in Equation (\ref{eq:04}), $A$ quantifies the influence of the ship swaying amplitude $B(t)$, while $\theta$ represents his gait cycle, which can be determined by,
\begin{eqnarray}
A = f(B) = F_s \times cos\psi = \beta B(t) cos\psi
\label{eq:05}
\end{eqnarray}
\begin{eqnarray}
\frac{d\theta}{dt} = \Omega + c B(t) sin(\varphi -\theta +\alpha)
\label{eq:06}
\end{eqnarray}
Here, $c$ quantifies the pedestrian's sensitivity to the ship swaying amplitude $B(t)$ and phase $\varphi(t)$. $\Omega$ is a random step frequency, which can be estimated following the pedestrian movement experiments~\cite{Fang2012}. $\alpha$ is a phase lag parameter. $\beta$ represents the effect of friction when a pedestrian walks on an inclining ground.
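To make the coupled dynamics concrete, the following Python sketch integrates Equations (\ref{eq:01})--(\ref{eq:06}) for a single pedestrian walking towards a target on a swaying deck. All parameter values, the constant swaying amplitude, and the choice $\psi = 0$ are illustrative assumptions rather than calibrated values.
\begin{verbatim}
# Illustrative integration of the forced pedestrian movement model.
import numpy as np

m, tau, v_e = 80.0, 0.5, 1.2        # mass [kg], relaxation time [s], desired speed [m/s]
M = np.radians(10.0)                # maximal inclining angle in Eq. (1) (assumed)
B, beta, c = M, 50.0, 1.0           # swaying amplitude (assumed constant) and gains
Omega, alpha = 2.0 * np.pi, 0.0     # free gait frequency, phase lag (assumed)
dt, T = 0.01, 10.0

pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
target, theta = np.array([5.0, 0.0]), 0.0

for step in range(int(T / dt)):
    t = step * dt
    phi = M * np.sin(t)                                # Eq. (1): swaying phase
    to_target = target - pos
    if np.linalg.norm(to_target) < 0.05:               # stop once the target is reached
        break
    e = to_target / np.linalg.norm(to_target)          # desired movement direction
    lateral = np.array([-e[1], e[0]])                  # direction perpendicular to e
    psi = 0.0                                          # angle between F_s and F_e (assumed)
    F_e = m * (v_e * e - vel) / tau                    # Eq. (2): self-driven force
    A = beta * B * np.cos(psi)                         # Eq. (5)
    F_p = A * np.sin(theta) * lateral                  # Eq. (4): gait-induced lateral force
    theta += (Omega + c * B * np.sin(phi - theta + alpha)) * dt   # Eq. (6)
    vel = vel + (F_e + F_p) / m * dt
    pos = pos + vel * dt
\end{verbatim}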
\section{Results and discussion}
\label{sec:3}
Equations (\ref{eq:01}) to (\ref{eq:06}) in Section \ref{sec:2} together describe the forced movement pattern of a pedestrian on a swaying ship. It should be noticed that, in reality, the sway-induced force component $F_s$ acting on a pedestrian might be very complex. In the present section, we therefore explore its effects by performing two simple case studies.
\begin{figure}[b]
\sidecaption
\includegraphics[scale=.5]{fig3.eps}
\caption{Single pedestrian movement trajectories under different conditions.}
\label{fig:3}
\end{figure}
Firstly, we place a pedestrian $5\,m$ away from his target, as shown in Fig.~\ref{fig:3}a. The pedestrian has a free movement speed of $1.2\,m/s$. Three scenarios were considered: (I) the pedestrian moves on a horizontal floor and there is no ship swaying; (II) the ship sways along the direction perpendicular to the pedestrian's movement direction; (III) the ship sways along the direction parallel to the pedestrian's movement direction. Simulation snapshots for these three scenarios can be found in Fig.~\ref{fig:3}b,~\ref{fig:3}c and~\ref{fig:3}d, respectively. As can be seen in these figures, when there is no swaying, the pedestrian moves freely towards his target. When the ship sways perpendicular to the pedestrian's movement direction, the pedestrian makes lateral movements while approaching his target, because the sway-induced inertial force exerted on the pedestrian changes his gait. We also find that although the pedestrian makes lateral movements, his progress towards the target is barely influenced, as can be seen in Fig.~\ref{fig:4}a. When the ship sways along the direction parallel to the pedestrian's movement direction, as shown in Fig.~\ref{fig:3}d, his trajectory looks like the one obtained without ship swaying, as in Fig.~\ref{fig:3}a. We further compared the spatiotemporal features of the trajectories shown in Fig.~\ref{fig:3}a and~\ref{fig:3}d and found, as shown in Fig.~\ref{fig:4}a, that the pedestrian accelerates at first and then decelerates, i.e., he keeps changing his speed while moving towards his target when the ship sways parallel to his movement direction. Thus, ship swaying affects microscopic pedestrian movement features even under these simple conditions.
\begin{figure}[b]
\sidecaption
\includegraphics[scale=.25]{fig4.eps}
\includegraphics[scale=.25]{fig6.eps}
\caption{Comparison of (a) pedestrian movement features and (b) ship passenger evacuation process.}
\label{fig:4}
\end{figure}
Secondly, ship evacuation scenarios with and without swaying were simulated. In case of emergency, prior to any decision to actually abandon the ship, passengers have to be evacuated to the assembling site. This is because, if abandoning the ship is unavoidable, these passengers can then be evacuated immediately, and if there is no need to abandon the ship, dealing with the emergency is much easier for the crew members with no passengers around. Passengers were therefore ordered to evacuate to the assembling area on the right side, i.e., the front of the ship in the present case, as shown in Fig.~\ref{fig:5}. There were in total 60 passengers on the ship. In Fig.~\ref{fig:5}a and~\ref{fig:5}b we show the trajectories of the passengers when there was no sway and when there was sway, respectively. Comparing these figures, we can easily see that, due to ship swaying, passengers made lateral movements during evacuation. The lateral movement slowed down the evacuation process, as shown in Fig.~\ref{fig:4}a. We can also see that for those passengers initially located in the relatively open area, shown in the left part of Fig.~\ref{fig:5}, the trajectories show clearly curved features when the ship was swaying. When there was no sway, these trajectories were almost linear, indicating that these passengers could move freely towards their targets. Those located in the long channel, shown in the right part of Fig.~\ref{fig:5}, were barely influenced by the swaying ship. The reason is that the channel is so narrow that passengers' lateral movements were hindered by the chairs and walls.
\begin{figure}[b]
\sidecaption[t]
\includegraphics[scale=.5]{fig5.eps}
\caption{Passenger evacuation simulation snapshots (a) with and (b) without ship swaying.}
\label{fig:5}
\end{figure}
Comparing the evacuation processes in Fig.~\ref{fig:4}b, when there was no ship swaying the total assembling time for the 60 passengers was only slightly shorter than when the ship was swaying. That is because the evacuation mainly took place in the long channel section, as shown in Fig.~\ref{fig:5}. However, the assembling efficiency without sway is always higher, as indicated by the number of assembled people shown in Fig.~\ref{fig:4}b. That is because passengers located in the back and front sections had to move along the ship swaying direction, which slowed them down, as shown in Fig.~\ref{fig:4}a.
\section{Conclusion}
\label{sec:4}
An agent-based passenger evacuation model was built. Each agent in the model can sense the ship swaying and adjust its own gait; in this way, the forced pedestrian movement resulting from ship swaying was mathematically quantified. Simulations of single-pedestrian movement show that the angle between the pedestrian movement direction and the ship swaying direction may influence the spatiotemporal features of a pedestrian. Thus, in a total evacuation, the passenger assembling efficiency would be affected. The simulated assembling time provides an important criterion for evaluating the total evacuation time needed for a complete, orderly evacuation. It should also be noticed that the computed evacuation time gives an estimate of the time passengers need to get out of the ship interior and reach a boarding site; the influence of the ship interior layout can thus be evaluated to identify bottlenecks and make improvements accordingly.
\begin{acknowledgement}
The authors sincerely appreciate the supports from the Research Grant Council of the Hong Kong Administrative Region, China (Project No. CityU11209614), and from China National Natural Science Foundation (No. 71473207, 51178445, 71103148) as well as the Fundamental Research Funds for the Central Universities (2682014CX103).
\end{acknowledgement}
\bibliographystyle{spmpsci}
\section{Introduction}
Census and household travel surveys are conducted and published every 5 or 10 years in most countries. They provide rich content about the demographic information of households and geospatial information on the daily trip routines of these households. The Census dataset includes socioeconomic characteristics of the population such as age, sex, and industry, while household surveys include variables related to the personal trip behaviors of the participants in addition to socioeconomic variables. Although Census data provides various demographic features of households, information about travel behavior is mostly not given or very limited. On the other hand, household travel surveys are more travel-oriented and provide more details on the travel features of households.
Census and household survey datasets are commonly used to understand and model the travel habits and preferences of households~\cite{bindschaedler2017plausible,stopher1996methods,horl2021synthetic}. These datasets are useful for developing traditional travel demand models as well as agent-based simulation models (e.g., MATSim - Multi-Agent Transport Simulation Toolkit). In particular, agent-based simulation models require a large amount of micro-level population data to create the agents in the simulation environment. However, the publicly available samples are too limited to obtain comprehensive outcomes,
because they only represent a small proportion of the population. Moreover, published datasets are aggregated and anonymized for privacy reasons. For example, the conditional distributions of variables (e.g., the age distribution given sex) for the entire population of the census are deemed confidential and usually not published. One of the solutions to overcome these challenges is supporting the original samples with synthetic data~\cite{badu2022composite,horl2021synthetic,farooq2013simulation,lederrey2022datgan}.
The generation of a synthetic population for transportation systems can be formulated as a two-step process. First, the household population and its socioeconomic features should be created, and afterward, the corresponding trip activities need to be combined with them~\cite{horl2021synthetic}. Population data synthesis differs from trajectory data synthesis due to the nature of the datasets. Population features are nominal and kept in a tabular format, whereas trajectories are sequential and not necessarily tabular. Although Census data is generally used to generate population data, it is not useful for trajectory prediction and generation tasks. In addition to the socioeconomic features that the Census dataset comprises, household travel surveys include the activity locations of individuals. Due to their nature, the activity locations are given as a sequence of spatial zones and will be referred to as trip sequences or trajectories.
Our methodology offers a novel way to generate a synthetic population along with corresponding trip sequences. Our framework is capable of generating a relational database including both socioeconomic features and sequential activity locations. Current state-of-the-art models~\cite{badu2022composite,lederrey2022datgan,horl2021synthetic,sun2015bayesian,farooq2013simulation,xu2019modeling} mostly focus only on socioeconomic features, while we offer a broader scope by incorporating sequential trip data into the synthetic population generation problem.
In summary our contributions can be listed as:
\begin{itemize}
\item Using a Conditional GAN to learn the joint distribution of the socioeconomic variables in the tabular population data
\item Using recurrent neural networks to create synthetic sequences of locations
\item Proposing a novel merging method to integrate population and trip data
\item Offering new evaluation methods for aggregated and disaggregated results
\end{itemize}
\section{Literature Review}
The algorithms proposed for the synthetic data generation problem depend on the type of data and the task. For example, generative adversarial network models are generally preferred for image generation and recurrent network models for text generation~\cite{shorten2021text}. Our problem includes two types of data: tabular population data and sequential trajectory data. The generation of these datasets requires different methodologies. A wide range of methods has been proposed to produce synthetic population data: Iterative Proportional Fitting~\cite{horl2021synthetic}, Markov Chain Monte Carlo~\cite{farooq2013simulation}, Bayesian Networks~\cite{sun2015bayesian}, and deep learning models~\cite{badu2022composite,lederrey2022datgan}.
Iterative proportional fitting is based on the replication of the sample data. It calculates marginal weights for specified columns of the dataset and generates a synthetic population using these weights. IPF-based methods are prone to overfitting, because the interactions between columns are not taken into account. Sometimes these weights are available~\cite{horl2021synthetic}, but it is rarely the case that the weights for all column combinations are available for a given dataset, and if the dataset does not contain such information, IPF becomes computationally expensive.
The Markov Chain Monte Carlo (MCMC) method is used when direct sampling is difficult and joint distributions $P(x,y)$ are not available. It learns from marginal distributions to generate data and captures relationships between variables. The principle of the MCMC approach is to use the Gibbs sampler and combinations of marginal distributions for up-sampling the empirical data~\cite{farooq2013simulation}. The Gibbs sampler's input is created using marginals and partial conditionals from the dataset.
The Bayesian network (BN) method uses a Directed Acyclic Graph (DAG) and structure learning methods to find dependencies between variables. BNs do not perform well with complex and large-scale datasets since they are time-consuming and computationally expensive. Additionally, prior knowledge is needed to describe the relations between variables and to build the DAGs.
Generative adversarial networks (GANs) are the most recent approach to generating population data. GAN architectures are commonly designed to handle homogeneous data such as images~\cite{karras2020analyzing} rather than tabular datasets. Vanilla GANs and most other GAN models suffer from mode collapse problems while generating tabular data~\cite{badu2022composite, xu2019modeling}. Nevertheless, some recent GAN models differ from the others in their capability of handling mixed variables (i.e., numerical and categorical) and learning the joint distribution between variables. Of particular interest to our study, conditional GAN models~\cite{xu2019modeling,lederrey2022datgan} seem to address the limitations that existing algorithms face when generating tabular population data.
While GANs are a promising direction for producing synthetic population data, they have difficulties in directly generating sequences of discrete tokens, such as trajectories or texts. This is partly because taking the gradient of discrete values directly is not possible, and doing so would eventually make little sense~\cite{yu2017seqgan}. Secondly, the loss functions used by GANs score only complete output sequences and do not take partially generated sequences into account during generation. Sequence Generative Adversarial Nets with Policy Gradient (SEQGAN)~\cite{yu2017seqgan} adapts the reinforcement learning approach to overcome these problems, where the generator network is treated as a reinforcement learning agent. This allows the algorithm to generate sequential datasets using generative networks.
By nature, the trajectory generation problem is very similar to the text generation problem, where predicting the next element in a sequence is the main task; therefore, text generation algorithms can be applied for trajectory generation purposes. For example, recurrent neural network (RNN) based text generation algorithms have been used to create synthetic trajectory data~\cite{berke2022generating}. Badu-Marfo and Farooq~\cite{badu2022composite} developed a model named Composite Travel GAN, where the tabular population data is generated by a Vanilla GAN and the trip data is generated by SEQGAN. To the best of our knowledge, this is the only study that combines the synthetic population and trajectory generation problems. Nonetheless, Badu-Marfo and Farooq~\cite{badu2022composite} do not provide a distinct matching algorithm that enables the merging of the two synthetic datasets, which is one of the contributions that our study makes.
\section{Methodology}
As previously mentioned, most studies focus on either population generation or trajectory generation. However, these two pieces of information are often needed together (e.g., an agent with demographic features and activity locations in an agent-based simulation model). This paper proposes a comprehensive generation framework that handles both tabular population and non-tabular trajectory data. Our framework consists of three main components: $(i)$ a Conditional GAN model to generate synthetic population data, $(ii)$ an RNN-based generation algorithm for trip sequence synthesis, and $(iii)$ a combinatorial optimization algorithm to merge the outputs of these two algorithms effectively. Please see Fig. 1 for an overview of the proposed framework. We have two sets of features to be trained. The first set represents population features, and the second represents trip features. The first set consists of elements of the population training data $ P_i = \{P_{i_1}, P_{i_2}, P_{i_3}, \dots, P_{i_m}\} $, where each $P_i$ represents a unique individual and $m$ is the total number of features. The second set is defined as $ L_{i} = \{L_{i_1}, L_{i_2}, L_{i_3}, \dots, L_{i_n} \} $, where $i$ indexes the individual and $n$ represents the maximum number of activity locations. Note that the number of activity locations might differ from one individual to another; we consider the individuals that have visited at most $n$ locations. Feature set $L$ is used to train the trajectory generation component of our algorithm.
\begin{table}[!ht]
\centering
\begin{minipage}[t]{0.48\linewidth}\centering
\caption{Population Features}
\begin{tabular}{ c c }
\toprule
Features & Samples \\
\midrule
Age & 25 \\
Sex & Male \\
Industry & Construction \\
Origin & (-27.468, 152.021) \\
Destination &(-27.270, 153.493) \\
\bottomrule
\end{tabular}
\end{minipage}\hfill%
\begin{minipage}[t]{0.48\linewidth}\centering
\caption{Sequential Features}
\label{tab:The parameters 2 }
\begin{tabular}{ c c }
\toprule
Features & Samples \\
\midrule
Location 1 & Kenmore \\
Location 2 & St. Lucia \\
Location 3 & South Brisbane \\
Location 4 & Kenmore \\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
Although the feature set $P$ is used to generate population samples, it includes origin and destination values in addition to the socioeconomic features. These two features can in fact be matched with the activity locations in the feature set $L$ (see Table 2). These common features will be used later within the bipartite assignment to match the observations in the population and trajectory data (see Figure 1). Nevertheless, the origin and destination values in the population feature set $P$ are given as coordinates, while the activity locations in the feature set $L$ are represented as zone names or zone ids. This is because it is computationally easier to train continuous variables in the CTGAN model; the excessive number of discrete zone variables in the network increases the computational complexity and causes memory issues within the CTGAN framework. Therefore, we use centroid coordinates instead of zone ids to train the CTGAN model. We keep the locations categorical for the feature set $L$, because sequential models such as RNN and SEQGAN need discrete variables for the training process.
\begin{figure}[htb]
\centering
\includegraphics[width=8.5cm]{flowchrt.pdf}
\caption{Schematic representation of the framework}
\label{fig:galaxy}
\end{figure}
\subsection{Tabular data generation}
\subsubsection{Generative Adversarial Networks}
Generative Adversarial Networks (GANs) consist of two neural network components, a generator and a discriminator, which play a minimax game against each other. The generator tries to minimize the loss. The discriminator takes real and generated data as input for classification and returns values between 0 and 1 representing the probability that the given input is real. Therefore, the discriminator wants to maximize the loss.
\begin{equation}
\min_{G}\max_{D}\mathbb{E}_{x\sim p_{\text{data}}(x)}[\log{D(x)}] + \mathbb{E}_{z\sim p_{\text{z}}(z)}[\log(1 - D(G(z)))]\label{eq}
\end{equation}
The generator produces the desired output from the latent space, and the discriminator behaves like a detective, trying to determine whether the output of the generator comes from the real sample or not.
The latent space is nothing but a feature vector drawn from a Gaussian distribution with a mean of zero and a standard deviation of one. Through training, the generator updates its weights and learns the real distributions of the sample dataset. The vanilla GAN architecture commonly fails to handle imbalanced and mixed datasets; however, recent studies propose significant changes in the GAN architecture to handle tabular data and learn joint distributions~\cite{xu2019modeling,zhao2021ctab,lederrey2022datgan}.
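A compact sketch of this minimax game for rows that have already been encoded as real-valued vectors is shown below; the network sizes and hyper-parameters are arbitrary assumptions, and the snippet is meant only to illustrate Equation (\ref{eq}), not the CTGAN implementation used in this paper.
\begin{verbatim}
# Illustrative minimax training step (PyTorch); not the CTGAN implementation.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 16
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    fake = G(torch.randn(n, latent_dim))

    # Discriminator: maximise log D(x) + log(1 - D(G(z))).
    opt_d.zero_grad()
    loss_d = (bce(D(real_batch), torch.ones(n, 1))
              + bce(D(fake.detach()), torch.zeros(n, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator (non-saturating form of the objective).
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(n, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
\end{verbatim}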
\subsubsection{Conditional GAN}
The Conditional Tabular GAN (CTGAN) architecture differs from other GAN models in its ability to generate tabular data and learn joint distributions. We make use of the CTGAN model to synthesize the population because it allows us to generate discrete values conditionally. Besides, it ensures that correlations are kept. CTGAN explores all possible combinations between categorical columns during training. The model consists of fully connected networks to capture all possible relations between columns~\cite{xu2019modeling} and learns probability distributions over all categories. It uses mode-specific normalization with a Gaussian mixture model to shift continuous values to a bounded range. One-hot encoding and the Gumbel softmax method are used for discrete values. By using the Gumbel softmax trick, the model is able to sample from categorical variables. The generator creates samples using batch normalization and ReLU (rectified linear unit) activation functions after two fully connected layers. Finally, the Gumbel softmax for discrete values and the $\tanh$ function for continuous values are applied to generate synthetic population rows.
Besides, it includes the PacGAN framework~\cite{lin2018pacgan} to prevent mode collapse, which is a common failure of vanilla GANs~\cite{badu2022composite}. CTGAN performs well in synthetic population generation; however, it has difficulties generating realistic trip distributions because the model supports neither hierarchical table formats nor sequential features. Moreover, the one-hot encoding method can cause memory issues due to the high dimensionality of locations. Composite models have been proposed to address this issue~\cite{liu2016coupled,badu2022composite}.
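For reference, the population component can be trained with an off-the-shelf CTGAN implementation along the lines of the sketch below; the file name, column lists, and the exact class and argument names (which may differ across package versions) are assumptions.
\begin{verbatim}
# Illustrative use of an open-source CTGAN implementation for the population table.
import pandas as pd
from ctgan import CTGAN

population = pd.read_csv("hts_population.csv")      # hypothetical training table
discrete_cols = ["Sex", "Industry"]                 # age and origin/destination
                                                    # coordinates kept continuous

model = CTGAN(epochs=300, batch_size=10,
              generator_dim=(512, 512, 512),
              discriminator_dim=(512, 512, 512))
model.fit(population, discrete_columns=discrete_cols)

synthetic_population = model.sample(5356)           # one synthetic row per individual
\end{verbatim}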
\subsection{Trajectory generation}
Synthetic data generation for music, text, and trajectories can be formulated similarly, considering the sequential nature of the underlying data. To create input sequences, the raw input data is converted into vectors of real numbers. Each discrete value is represented with a unique number in these vectors. This process is called tokenization in natural language processing. The number of tokens is determined by the number of unique sequence values, which is called the vocabulary size in text generation algorithms. RNNs show good performance in predicting and generating conditional elements in a sequence ~\cite{berke2022generating,caccia2018language}. In this study, we use the \emph{textgenrnn} library for producing trajectory sequences. The model is inspired by char-rnn~\cite{karpathy2015unreasonable} and takes trajectory tokens as the input layer. Afterwards, it creates equal-sized sequences by padding. The sequences are given to the embedding layer as input, which creates a weighted vector for each token. The embedding layer is followed by two hidden LSTM layers. An attention layer is skip-connected to the LSTM and embedding layers. These layers are finally followed by a dense layer and an output layer.
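A minimal sketch of the trajectory component using the \emph{textgenrnn} interface is shown below, where each individual's activity chain is written as one line of space-separated zone tokens; the file name and the flag values follow the package documentation and should be treated as assumptions here.
\begin{verbatim}
# Illustrative word-level training on zone-token sequences with textgenrnn.
from textgenrnn import textgenrnn

with open("trajectories.txt") as f:         # e.g. "Kenmore StLucia SouthBrisbane Kenmore"
    trips = [line.strip() for line in f if line.strip()]

textgen = textgenrnn(name="trip_rnn")
# new_model/word_level flags as documented in the textgenrnn README (assumed).
textgen.train_on_texts(trips, new_model=True, word_level=True, num_epochs=300)

synthetic_trips = textgen.generate(n=5356, temperature=0.7, return_as_list=True)
\end{verbatim}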
\subsection{Bipartite assignment}
We sample synthetic population and trajectory datasets separately after training CTGAN and RNN models. Because our purpose is to generate individuals with trajectory sequences, we produce the same number of samples for population and trajectory components. We then use the Hungarian algorithm~\cite{munkres1957algorithms} to merge the generated samples.
The Hungarian method takes a non-negative $n \times n$ matrix as input, where $n$ represents the number of sampled synthetic individuals. Each row-column pair in the matrix is assigned a cost value. This cost is defined as the distance between the origin coordinates produced by the CTGAN and RNN components. Since the generated trajectory sequences are discrete, we use the centroid coordinates of the given locations. The cost can be formulated as $d = \sqrt {\left( {x_{ctgan} - x_{rnn} } \right)^2 + \left( {y_{ctgan} - y_{rnn} } \right)^2 }$, where $x_{ctgan}$ and $y_{ctgan}$ are the coordinates of the origin in the synthetic population data, and $x_{rnn}$ and $y_{rnn}$ are the coordinates of the origin zone centroid in the synthetic trajectory data. With the distance values, we build a complete bipartite graph $G=(P, T)$ with $n$ population vertices ($P$) and $n$ trajectory vertices ($T$), where the edge weights are the distances between the vertices. The Hungarian algorithm finds the optimal matching by minimizing the total distance. We use linear assignment without replacement to keep the generated trajectories as they are and to ensure that the trip distributions do not change in the merging process. The algorithm minimizes the total cost (i.e., the total distance between the pairs) via the following steps. Firstly, it subtracts the smallest element in each row from all the other elements in that row, so that the smallest entry in each row becomes 0. Secondly, it subtracts the smallest element in each column from all the elements in that column, which makes the smallest entry in each column 0. The steps continue iteratively until the optimal number of zeroes is reached, which is determined by the minimum number of lines required to cover all the rows and columns that contain 0 entries. One of the drawbacks of the Hungarian assignment method is its high computational cost, with $O(n^3)$ time complexity.
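In practice, the assignment can be computed with a standard linear sum assignment routine; the sketch below illustrates the merging step, with assumed column names and an assumed mapping from zone ids to centroid coordinates.
\begin{verbatim}
# Illustrative merging of synthetic population rows and synthetic trajectories.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def merge_population_and_trips(population, trip_zones, zone_centroids):
    """Match each synthetic person to one synthetic trajectory by minimising the
    total distance between CTGAN origin coordinates and the centroid of the first
    zone in each generated sequence (Hungarian method, no replacement)."""
    pop_xy = population[["origin_x", "origin_y"]].to_numpy()     # assumed columns
    trip_xy = np.array([zone_centroids[seq[0]] for seq in trip_zones])
    cost = cdist(pop_xy, trip_xy)                # n x n Euclidean cost matrix
    rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment
    merged = population.iloc[rows].reset_index(drop=True)
    merged["trip_sequence"] = [trip_zones[c] for c in cols]
    return merged
\end{verbatim}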
\section{Experimental Results}
We test our algorithm using the Queensland Household Travel Survey that includes travel behavior data from randomly selected households. The population and trip activity routines of the households are published as separate tables. Each observation is assigned to a unique person identifier code. We therefore connect population and trip activity tables by using these identifiers.
The population features contain socioeconomic characteristics such as age, sex, and industry (see Table 1), while the trajectory data include the sequences of activity locations. Trip locations are categorical and defined at Statistical Area Level 1 (SA1), built from whole Mesh Blocks by the Australian Bureau of Statistics. We eliminated observations that contain missing column values and limited the data to a 40 km radius around the Brisbane CBD, leaving 5356 unique individuals with their activity chains in our dataset. The maximum number of locations in the trajectory feature set is 4, as 84\% of the observations in the training data consist of short trips. To build a consistent setup, we train the population and trajectory components with the same dataset.
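A possible sketch of this preprocessing step is given below; the Brisbane CBD coordinates are approximate, and the column names and toy rows are assumptions rather than the actual survey schema.
\begin{verbatim}
import numpy as np
import pandas as pd

CBD_LAT, CBD_LON = -27.47, 153.03   # approximate Brisbane CBD
RADIUS_KM = 40.0

def haversine_km(lat, lon, lat0, lon0):
    """Great-circle distance in kilometres."""
    lat, lon, lat0, lon0 = map(np.radians, (lat, lon, lat0, lon0))
    a = (np.sin((lat - lat0) / 2) ** 2
         + np.cos(lat) * np.cos(lat0) * np.sin((lon - lon0) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

df = pd.DataFrame({"lat": [-27.50, -26.10], "lon": [153.02, 152.00],
                   "industry": ["Education", None]})          # toy rows
df = df.dropna()                                              # drop missing values
df = df[haversine_km(df["lat"], df["lon"], CBD_LAT, CBD_LON) <= RADIUS_KM]
\end{verbatim}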
\subsection{Parameter Configuration}
We trained CTGAN for 300 epochs with a batch size of 10; the layer sizes of the generator and discriminator are (512, 512, 512). GAN models are fragile and hard to configure, so we tested the hyperparameters with several different values. We applied the same number of epochs and batch size when training our RNN model. For sampling, we set the temperature to 0.7. The temperature takes values between 0 and 1 and controls how divergent the generated results are: setting it to 0 makes the model deterministic, so the algorithm always generates the same outputs.
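For reference, a sketch of this configuration using the open-source \texttt{ctgan} package is shown below, together with a temperature-scaled sampling helper for the RNN component; the file and column names are assumptions and the API may differ slightly between package versions.
\begin{verbatim}
import numpy as np
import pandas as pd
from ctgan import CTGAN

population_df = pd.read_csv("population.csv")          # hypothetical input table
discrete_columns = ["sex", "age_group", "industry"]    # assumed column names

model = CTGAN(epochs=300, batch_size=10,
              generator_dim=(512, 512, 512),
              discriminator_dim=(512, 512, 512))
model.fit(population_df, discrete_columns)
synthetic_population = model.sample(5356)              # one sample per individual

def sample_with_temperature(probs, temperature=0.7):
    """Temperature-scaled choice of the next location token (RNN component).
    Temperatures close to 0 approach a deterministic argmax choice."""
    logits = np.log(np.asarray(probs) + 1e-9) / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return np.random.choice(len(p), p=p)
\end{verbatim}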
\subsection{Results}
We will examine the results in three sections. First, we focus on our results for population generation from the CTGAN algorithm. Second, we discuss the trip sequence distributions resulting from the RNN algorithm.
Last, we focus on the final dataset that results from the merging of the synthetic population and trajectory datasets and analyze the conditional distributions between population and trip specific features. We use a wide range of test metrics considering the most prevalent statistical tests in the transportation literature~\cite{lederrey2022datgan,badu2022composite}.
\subsubsection{Synthetic Population Data}
\begin{figure}[htbp]
\centerline{\includegraphics[width=7cm]{vanillaganindustries.png}}
\caption{Comparison of marginals for industry attribute}
\label{fig}
\end{figure}
One of the prevalent ways to evaluate a synthetic population is to compare population attributes. As discussed in the literature review, GAN models are, to our knowledge, the most successful algorithms for generating synthetic populations. We picked the Vanilla GAN model for comparison and industry as the feature of interest, since among the available features it has the largest number of classes and is therefore the hardest for which to reproduce the correct distribution, especially for Vanilla models. As observed in Figure 2, CTGAN outperforms Vanilla GAN, in line with results reported in the literature. The Vanilla GAN model cannot learn the imbalanced distributions in the tabular data and struggles to reproduce the distribution of the categorical industry feature.
\subsubsection{Synthetic Trip Sequences}
Comparing trip sequences one at a time provides little insight into the trip sequence distributions and cannot be used to determine whether realistic route patterns are formed. Therefore, we use the trip length to measure the similarity between real and synthetic trip sequences~\cite{badu2022composite,berke2022generating}. We calculate the total distance between consecutive activity locations for each trajectory in the real and synthetic datasets and compare the resulting trip distance distributions. We also compare these distributions with the synthetic trajectory dataset produced by SEQGAN, which was applied earlier for the same purpose by~\cite{badu2022composite}.
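A minimal sketch of this trip-length metric is given below, using toy zone centroids and sequences; the real computation uses the SA1 centroids of the observed and generated activity chains.
\begin{verbatim}
import numpy as np

centroids = {1: (0.0, 0.0), 2: (3.0, 4.0), 3: (6.0, 8.0)}   # toy centroids (km)
real_sequences = [[1, 2, 3], [2, 3]]                        # toy activity chains
synth_sequences = [[1, 3], [3, 2, 1]]

def trip_length(seq):
    """Total distance travelled along consecutive activity locations."""
    pts = np.array([centroids[z] for z in seq], dtype=float)
    if len(pts) < 2:
        return 0.0
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

real_lengths = [trip_length(s) for s in real_sequences]
synth_lengths = [trip_length(s) for s in synth_sequences]
# the two distance distributions are then compared, as in Figure 3
\end{verbatim}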
Figure 3 clearly shows that the RNN closely replicates the trip length distribution of the real dataset, while SEQGAN largely fails to capture the trend in the trip distance distribution. SEQGAN underpredicts distances below 5 km and overpredicts trip lengths greater than 10 km, which has also been observed in other studies~\cite{badu2022composite}. This comparison shows that the RNN outperforms SEQGAN and produces realistic trip distance distributions.
\begin{figure}[htbp]
\centerline{\includegraphics[width=8.5cm]{comprr.png}}
\caption{Comparison of trip length distributions}
\label{fig}
\end{figure}
\subsubsection{Synthetic Composite Data}
The marginal distribution of trip distances is not by itself enough to show whether the correlations between population and trajectories are preserved. One needs to consider joint or conditional distributions between population and trip-specific features to examine this aspect. Figure 4 shows trip distances conditional on a given population feature, in this case industry. In particular, Figure 4a presents the distribution of trip distance for individuals employed in the education sector. Note that this conditional distribution can be observed only after the two datasets are merged with the Hungarian algorithm using the distance between the pairs; these results are therefore labeled 'CTGAN+RNN' in Figure 4a. The comparison between the real dataset and the synthetic dataset produced by our algorithm again reveals a very strong similarity. The proposed framework captures the joint distribution between variables after matching the two datasets.
Figure 4b depicts the distribution of trip distances for all sectors on a single coordinate plane, to check whether the results in Figure 4a are biased towards certain industries. We have 19 industry categories and use 30 bins for the trip length intervals, so Figure 4b contains 570 points in total. Each point refers to the conditional trip distance percentages of the synthetic and real samples: the values on the X and Y axes represent the frequency of observations in a specific bin interval, and for each industry the bin frequencies sum to 1. Figure 4b also includes a proportionality line and a regression line to illustrate the overall agreement between the synthetic and ground-truth trip distributions.
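The construction of these points can be sketched as follows; the data frames and column names are assumptions for illustration.
\begin{verbatim}
import numpy as np
import pandas as pd

# real_df / synth_df: columns 'industry' and 'trip_km' (assumed)
real_df = pd.DataFrame({"industry": ["Education", "Education", "Retail"],
                        "trip_km": [2.0, 7.5, 12.0]})        # toy rows
synth_df = real_df.copy()

bins = np.linspace(0, real_df["trip_km"].max(), 31)          # 30 intervals

def binned_freq(df):
    freq = {}
    for industry, grp in df.groupby("industry"):
        hist, _ = np.histogram(grp["trip_km"], bins=bins)
        freq[industry] = hist / max(hist.sum(), 1)           # sums to 1 per industry
    return freq

real_f, synth_f = binned_freq(real_df), binned_freq(synth_df)
points = [(real_f[i][b], synth_f[i][b])
          for i in real_f if i in synth_f for b in range(len(bins) - 1)]
\end{verbatim}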
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{condisteducinkm.png}\llap{
\parbox[b]{3in}{(A)\\\rule{0ex}{1.4in}
}}
\label{fig:1a}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{bonibonlines.png}\llap{
\parbox[b]{3in}{(B)\\\rule{0ex}{1.7in}
}}
\label{b}
\end{subfigure}
~
\caption{Comparison of trip length distributions for (a) Education (b) All industries}\label{fig:animals}
\end{figure}
The conditional and unconditional distributions obtained in Figures 3 and 4 are also assessed with statistical metrics: the Pearson correlation, the mean-standardized root mean square error (SRMSE), and the $R^2$ score (coefficient of determination). We created two frequency lists for the trip length distributions of the Conditional GAN, Sequential GAN and RNN models. In Table III, CTGAN indicates the results based on the origin and destination values (in coordinates) that are generated as part of the population features. SEQGAN and RNN can be used on their own to generate marginal trip distance distributions, while CTGAN+SEQGAN and CTGAN+RNN represent the results after merging the corresponding population and trajectory datasets. We set the constant term to 0 when fitting the linear regression used to calculate the $R^2$ score. Overall, SEQGAN performs poorly at imitating the real distribution because its generated conditional distributions follow a Gaussian shape rather than tracking the real distribution. Table III summarizes the synthetic trip sequence generation performance of the three algorithms.
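One possible reading of these metrics, applied to two frequency lists, is sketched below; here SRMSE is taken as the RMSE divided by the mean of the real frequencies and the regression for the $R^2$ score is fitted through the origin, which may differ in detail from the exact computation used for Table III.
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

def evaluate(real, synth):
    real, synth = np.asarray(real, float), np.asarray(synth, float)
    pearson = pearsonr(real, synth)[0]
    slope = (real @ synth) / (real @ real)     # least squares fit, no intercept
    pred = slope * real
    r2 = 1.0 - np.sum((synth - pred) ** 2) / np.sum((synth - synth.mean()) ** 2)
    srmse = np.sqrt(np.mean((synth - real) ** 2)) / real.mean()
    return {"Pearson": pearson, "R2": r2, "SRMSE": srmse}

print(evaluate([0.30, 0.25, 0.20, 0.15, 0.10],
               [0.28, 0.27, 0.18, 0.17, 0.10]))
\end{verbatim}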
\begin{table}[htbp]
\def\arraystretch{1}%
\caption{Benchmarking algorithms on statistical tests}
\centering
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Trip Distance} & \textbf{Algorithm} & \textbf{Pearson} & \textbf{$R^2$ Score} & \textbf{SRMSE} \\
\hline
\hspace{0.1cm} & CTGAN & $0.495$ & $0.231$ & $0.0572$ \\\cline{2-5}
Unconditional & SEQGAN & $ -0.256$ & $ -0.558 $ & $0.116 $ \\\cline{2-5}
\hspace{0.1cm} & RNN & $0.967$ & $0.927$& $0.0053 $ \\
\hline
\hline
\hspace{0.1cm} & CTGAN & $0.302$ & $0.018$ & $0.0479$ \\\cline{2-5}
Conditional & CTGAN + SEQGAN & $ -0.161$ & $ -0.375 $ & $0.0789 $ \\\cline{2-5}
\hspace{0.1cm} & CTGAN + RNN & $0.867$ & $0.724$& $0.0134$ \\
\hline
\end{tabular}
\end{adjustbox}
\label{tab:template}
\end{table}
Figure 5 shows the spatial distribution of activity locations in the real and synthetic datasets. The most popular activity locations, such as the airport and the city center, are coloured yellow as the most intense areas according to the real samples. As seen in Figure 5, the synthetic location distribution is similar to the ground truth. Even though some areas exhibit a mismatch in terms of trip frequency, the percentage error does not deviate significantly from the real values. We can therefore conclude that our method is able to learn the geospatial distributions of the data.
\begin{figure}[htbp]
\centering
\includegraphics[width=.45\textwidth]{brisbanerealsa2map.png}\hfill
\includegraphics[width=.45\textwidth]{brisbanesa2mapping.png}\hfill
\caption{Spatial distribution of real and synthetic datasets}
\label{fig:figure3}
\end{figure}
\section{Conclusion and Future Work}
Synthesizing a mobility database is challenging due to the complex nature of the data. We propose a hybrid framework for generating such complex datasets, dividing the synthesis problem into two sub-parts by generating the population and the sequential trajectories separately. We use up-to-date models, CTGAN for tabular data generation and an RNN for trajectory generation, and apply a combinatorial optimization algorithm to merge the two synthetic datasets. In future work, we will extend our experiments and design a new framework that incorporates other sources of data to help capture correlations between population and trajectories.
\bibliographystyle{unsrt}
\section{Introduction}
Statistical research in cricket has been somewhat overlooked in the stampede to model football and baseball. Moreover, the research that has been done on cricket has largely focused on batsmen, whether modelling individual, partnership or team scores \citep{Kimber, Scarf2011, Pollard77}, ranking Test batsmen \citep{Brown2009, Rohde2011, Boys2019, Stevenson2021}, predicting match outcomes \citep{Davis2015} or optimising the batting strategy in one day and Twenty20 international cricket \citep{Preston2000, Swartz2006, Perera2016}. Indeed, to the author's knowledge, this is the first work that explicitly focuses on Test match bowlers.
Test cricket is the oldest form of cricket. With a rich and storied history, it is typically held up as being the ultimate challenge of ability, nerve and concentration, hence the origin of the term `Test' to describe the matches. For Test batsmen, their value is almost exclusively measured by how many runs they score and their career batting average, with passing mention made of the rate at which they score, if this is remarkable. Test bowlers are also primarily rated on their (bowling) average, which in order to be on the same scale as the batting average is measured as runs conceded per wicket, rather than the more natural rate of wickets per run.
In this work we consider the problem of comparing Test bowlers across the entire span of Test cricket (1877- ).
Rather uniquely, the best bowling averages of all time belong to bowlers who played more than a hundred years ago; contrast this with almost any other modern sport, where records are routinely broken by current participants, with their coteries of support staff dedicated to fitness, nutrition and wellbeing, along with access to detailed databases highlighting their strengths and weaknesses. The proposed model allows us to question whether the best Test bowlers are truly those who played in the late 19th and early 20th century or whether this is simply a reflection of the sport at the time. Along the way, we also consider whether the classic bowling average is the most suitable measure of career performance. Taking these two aspects together suggests that there are more suitable alternatives than simply ranking all players over time based on their bowling average, as seen at
\url{https://stats.espncricinfo.com/ci/content/records/283256.html}.
The structure of the paper is as follows. The data are described in
Section~2 and contain truncated, small counts which, at the player level, are both under and overdispersed, leading to the statistical model in Section~3.
Section~4 details the prior distribution alongside the computational details. Section 5 presents some of the results and the paper
concludes with some discussion and avenues for future work in Section~6.
\section{The data}
The data used in this paper consists of $N = 45906$ individual innings bowling figures by $n=2148$ Test match bowlers from the first Test played in 1877 up to Test 2385, in February 2020. There are currently twelve Test playing countries and far more Test matches are played today than at the genesis of Test cricket \citep{Boys2019}. World Series Cricket matches are not included in the dataset since these matches are not considered official Test matches by the International Cricket Council (ICC).
Bowling data for a player on a cricket scorecard come in the form `overs-maidens-wickets-runs', as seen at the bottom of Figure~\ref{bowlingfigs}; note that in this work the first two values provide meta-information that is not used for the analysis. A concrete example is the bowling figures for James Anderson of 25.5-5-61-2. The main aspect to note is that the data are aggregated counts for bowlers: 2 wickets were taken for 61 runs in this instance, but it is not known how many runs were conceded for each individual wicket. This aggregation is compounded across all matches to give a career bowling average for a particular player, corresponding to the average number of runs conceded per wicket taken. This measure is of a form that is understandable to fans, but counter-intuitive from the standpoint of a statistical model; this point is revisited in subsection~\ref{subsec:runs}.
\begin{figure}
\centering
\includegraphics[height = 3cm]{scorecard4.pdf}
\caption{Bowling figures as seen on a typical cricket scorecard}
\label{bowlingfigs}
\end{figure}
In each Test match innings there are a maximum of ten wickets that can be taken by the bowling team, and these wickets are typically shared amongst the team's bowlers, of which there are nominally four or five. Table~\ref{Table:TestMatchWickets} shows the distribution of wickets taken in an innings across all bowlers. Taking six wickets or more in an innings is rare and the most common outcome is that of no wickets taken.
\begin{table}
\caption{\label{Table:TestMatchWickets}Frequency and percentage of Test match wickets per innings}
\centering
\fbox{%
\begin{tabular}{l | c c c c c c c c c}
\hline
Wickets& 0& 1& 2& 3& 4& 5& 6& 7& $\geq$ 8 \\
\hline
Frequency& 9698& 7511& 5726& 3993& 2511& 1557& 637& 245& 80 \\
Percentage& 30.3& 23.5& 17.9& 12.5& 7.9& 4.9& 2.0& 0.8& 0.2 \\
\hline
\end{tabular}}
\end{table}
Alongside the wickets and runs, there are data available for the identity of the player, the opposition, the venue (home or away), the match innings, the winners of the toss and the date the match took place. These are all considered as covariates in the model.
To motivate the model introduced in section~\ref{sec:model}, the raw indices of dispersion at the player level show that around $25\%$ of players have underdispersed counts, approximately $10\%$ of players have equidispersed counts, with the remainder having overdispersed counts. This overlooks the effects of covariates but suggests that any candidate distribution ought to be capable of handling both under- \textit{and} overdispersion, or \textit{bidispersion}.
\section{The model}\label{sec:model}
Wickets taken in an innings are counts and a typical, natural starting point may be
to consider modelling them via the Poisson or negative binomial distributions. However, in light of the bidispersion seen at the player level in the raw data, we instead turn to a distribution capable of handling such data.
The Conway-Maxwell-Poisson (CMP) distribution \citep{ConwayMaxwell} is a generalisation of the Poisson distribution, which includes an extra parameter to account for possible over- and underdispersion. Despite being introduced almost sixty years ago to tackle a queueing problem, it has little footprint in the statistical literature, although it has gained some traction in the last fifteen years or so with applications to household consumer purchasing traits \citep{Boatwright2003}, retail sales, lengths of Hungarian words \citep{Shmueli2005}, and road traffic accident data \citep{Lord2008, Lord2010}. Its wider applicability was demonstrated by \cite{Guikema2008} and \cite{Sellers2010}, who recast the CMP distribution in the generalised linear modelling framework for both Bayesian and frequentist settings respectively, and through the development of an R package \citep{COMPoissonReg} in the case of the latter, to help facilitate routine use.
In the context considered here, the CMP distribution is particularly appealing as it allows for both over- and underdispersion for individual players.
For the cricket bowling data, we define $X_{ijk}$ to represent the number of wickets taken by player $i$ in his $j$th year during his $k$th bowling performance of that year.
Also $n_i$ and $n_{ij}$ denote, respectively, the number of years in the
career of player $i$ and the number of innings bowled in during year $j$
in the career of player $i$. Using this notation, the CMP distribution has probability mass function given by
\[
\Pr(X_{ijk} = x_{ijk} |\lambda_{ijk}, \nu) = \frac{\lambda_{ijk}^{x_{ijk}}}{(x_{ijk}!)^{\nu}} \frac{1}{G_\infty(\lambda_{ijk}, \nu)}
\]
with $i=1,\ldots, 2148, \; j=1,\ldots,n_i$ and $k=1,\ldots, n_{ij}$. In this (standard) formulation, $\lambda_{ijk}$ is the rate parameter and $\nu$ models the dispersion. The normalising constant term $G_{\infty}(\lambda_{ijk}, \nu) = \sum_{r=0}^{\infty} \lambda_{ijk}^r/(r!)^{\nu}$ ensures that the CMP distribution is proper, but complicates analysis.
\citet{COMPoissonReg} implemented the CMP model using a closed-form approximation \citep{Shmueli2005, Gillispie2015} when $\lambda_{ijk}$ is large and $\nu$ is small, otherwise truncating the infinite sum to ensure a pre-specified level of accuracy is met.
Alternative methods to circumvent intractability for the standard CMP model in the Bayesian setting have been proposed by \cite{Chanialidis2018}, who used rejection sampling based on a piecewise enveloping distribution and more recently \cite{Benson2020} developed a faster method using a single, simple envelope distribution, but adapting these methods to the mean-parameterised CMP (MPCMP) distribution introduced below is a non-trivial task.
\subsection{Truncated mean-parameterised CMP distribution}
The standard CMP model is not parameterised through its mean, however, which restricts its wider applicability in regression settings since it renders effects hard to quantify, other than as a general increase or decrease. To counter this, two alternative parameterisations via the mean have been developed \citep{Huang2017, Ribeiro2018, Huang2019}, each with associated R packages \citep{mpcmp, cmpreg}. The mean of the standard CMP distribution can be found as
\[
\mu_{ijk} = \sum_{r=0}^{\infty} \frac{r \lambda_{ijk}^r}{(r!)^{\nu}G_\infty(\lambda_{ijk}, \nu)},
\]
which, upon rearranging, leads to
\begin{equation}\label{eq:lam_nonlin}
\sum_{r=0}^{\infty} (r - \mu_{ijk}) \frac{\lambda_{ijk}^r}{(r!)^{\nu}}= 0.
\end{equation}
Hence, the CMP distribution can be mean-parameterised to allow a more conventional count regression interpretation, where
$\lambda_{ijk}$ is a nonlinear function of $\mu_{ijk}$ and $\nu$ under this reparameterisation. \cite{Huang2017} suggested a hybrid bisection and Newton-Raphson approach to find $\lambda_{ijk}$ and applied this in small sample Bayesian settings \citep{Huang2019}, whereas \cite{Ribeiro2018} used an asymptotic approximation of $G_\infty(\lambda_{ijk}, \nu)$ to obtain a closed form estimate for $\lambda_{ijk}$. The appeal of the former is its more exact nature, but this comes at considerable computational cost in the scenario considered here as the iterative approach would be required at each MCMC iteration, and, in this case, for a large number of (conditional) mean values. The approximation used by the latter is conceptually appealing due to its simplicity and computational efficiency, but is likely to be inaccurate for some of the combinations of $\mu_{ijk}, \nu$ encountered here, and the level of accuracy will also vary across these combinations.
Irrespective of mean parameterisation and method, the above model formulation has two obvious flaws: the counts lie on a restricted range, i.e. $\{0, \ldots, 10\}$, and there is no account taken of how many runs the bowler conceded in order to take their wickets.
For the first issue, the model can be easily modified using truncation, which, in this case, leads to a simplified form for the CMP distribution, which is exploited below:
\begin{eqnarray*}
\Pr(X_{ijk} = x_{ijk} |\lambda_{ijk}, \nu) &=& \frac{\lambda_{ijk}^{x_{ijk}}}{(x_{ijk}!)^{\nu}} \frac{1}{G_{\infty}(\lambda_{ijk}, \nu)} \frac{1}{\Pr(X_{ijk} \leq 10 |\lambda_{ijk}, \nu)} \\
&=& \frac{\lambda_{ijk}^{x_{ijk}}}{(x_{ijk}!)^{\nu}} \frac{1}{G_{10}(\lambda_{ijk}, \nu)},
\end{eqnarray*}
where $G_{10}(\lambda_{ijk}, \nu) = \sum_{r=0}^{10} \lambda_{ijk}^r/(r!)^{\nu}$
denotes the finite sum. This yields the truncated mean-parameterised CMP distribution (MPCMP$_{10}$), where the subscript denotes the value at which the truncation occurs. Furthermore, the infinite sum in (\ref{eq:lam_nonlin}) is replaced by the finite sum
\begin{equation}\label{eq:lam_nonlin_10}
\sum_{r=0}^{10} (r - \mu_{ijk}) \frac{\lambda_{ijk}^r}{(r!)^{\nu}}= 0.
\end{equation}
Since $\lambda_{ijk}$ is positive and the coefficients $(r - \mu_{ijk})/(r!)^{\nu}$ in (\ref{eq:lam_nonlin_10}) change sign exactly once as $r$ increases past $\mu_{ijk}$, Descartes' rule of signs informs us that there is a solitary positive real root. Hence, a solution for $\lambda_{ijk}$ can be found without recourse to approximations or less scalable iterative methods in this case: we can directly solve the tenth-order polynomial using the \verb|R| function \verb|polyroot|, which makes use of the Jenkins-Traub algorithm. The substantive question of the relationship between wickets and runs is considered in the next subsection.
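The paper solves (\ref{eq:lam_nonlin_10}) with R's \verb|polyroot|; an equivalent illustration in Python is sketched below, which also evaluates the resulting truncated pmf. It assumes $0 < \mu_{ijk} < 10$, as is the case for wickets per innings.
\begin{verbatim}
import numpy as np
from math import factorial, lgamma

def lambda_from_mu(mu, nu, upper=10):
    """Solve sum_{r=0}^{upper} (r - mu) lambda^r / (r!)^nu = 0 for lambda > 0."""
    coefs = [(r - mu) / factorial(r) ** nu for r in range(upper + 1)]
    roots = np.roots(coefs[::-1])              # numpy expects highest degree first
    real_pos = [z.real for z in roots if abs(z.imag) < 1e-8 and z.real > 0]
    return min(real_pos)                       # the solitary positive real root

def mpcmp10_pmf(x, mu, nu, upper=10):
    lam = lambda_from_mu(mu, nu, upper)
    log_terms = np.array([r * np.log(lam) - nu * lgamma(r + 1)
                          for r in range(upper + 1)])
    g10 = np.exp(log_terms).sum()              # finite normalising constant G_10
    return np.exp(log_terms[x]) / g10

print(sum(mpcmp10_pmf(x, mu=1.6, nu=0.8) for x in range(11)))   # should be 1.0
\end{verbatim}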
\subsubsection{Functional form of runs}\label{subsec:runs}
As noted earlier, data are only available in an aggregated form.
That is, the total number of wickets is recorded alongside the total number of runs conceded. Historically, this has been converted to a cricket bowling average by taking the rate of runs conceded to wickets taken, chiefly to map this on to a similar scale as to the classic batting average.
However, in the usual (and statistical) view of a rate this is more naturally expressed as wickets per run, rather than runs per wicket, and this rate formulation is adopted henceforth. In either event, the number of runs conceded conveys important information since taking three wickets at the cost of thirty runs is very different to taking the same number of wickets for, say, ninety runs.
\begin{figure}
\centering
\includegraphics[width=0.44\textwidth]{wkts_vs_runs.pdf}
\includegraphics[width=0.44\textwidth]{wkts_vs_runs_log.pdf}
\caption{Average wickets taken for values of runs on the linear (left) and log (right) scales. The size of the data points reflects the amount of data for each value of runs; the dashed line represents the relationship under the traditional cricket bowling average.}
\label{Fig:wkts_vs_runs}
\end{figure}
By looking at the mean number of wickets for each value of runs, the relationship between wickets and runs can be assessed (see Figure~\ref{Fig:wkts_vs_runs}); note that there are very few data for values of runs exceeding 120, so the plot is truncated at this point. The nonlinear relationship rules out instinctive choices such as an offset or additive relationship: this makes sense as the number of wickets taken by a bowler cannot exceed ten in a (within-match) innings, which suggests that the effect of runs on wickets is unlikely to be wholly multiplicative (or additive on the log scale).
Smoothing splines in the form of cubic B-splines are adopted to capture the nonlinear relationship, with knots chosen at the quintiles of runs (on the log scale). This corresponds to internal knots at 18, 35, 53 and 77 along with the boundary knots at 1 and 298 on the runs scale. Various nonlinear models were also considered but did not capture the relationship as well as the proposed spline.
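The basis is built in R with the \verb|splines| package; the following is an equivalent illustration with \texttt{patsy} in Python, using the knot values quoted above on the log-runs scale and yielding the eight basis columns used in the model below. The example data values are placeholders chosen to span the quoted boundary range.
\begin{verbatim}
import numpy as np
from patsy import dmatrix

runs = np.array([1, 12, 18, 35, 53, 61, 77, 90, 120, 298])  # spans the data range
internal_knots = np.log([18, 35, 53, 77])                   # quintiles of log(runs)
B = dmatrix("bs(x, knots=k, degree=3, include_intercept=True) - 1",
            {"x": np.log(runs), "k": internal_knots},
            return_type="dataframe")
assert B.shape[1] == 8      # 4 internal knots + degree 3 + 1
\end{verbatim}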
\subsubsection{Opposition effects}\label{subsec:opp_data}
As the data span $\numyears$ years, it is perhaps unreasonable to assume that some effects are constant over time. In particular, teams are likely to have had periods of strength and/or weakness, whereas conditions, game focus, and advances in equipment and technology may have drastically altered playing conditions for all teams at various points in the Test cricket timeline. A plot of the mean bowling average across decades for a selection of opposition countries is given in Figure~\ref{RawOppPlot}. Here we use the conventional bowling average on the y-axis for ease of interpretation, but the story is similar when using the rate. Clearly, the averages vary substantially over time as countries go through periods of strength and weakness and game conditions and rules evolve. As such, treating them as fixed effects does not seem appropriate. Note also that some countries started playing Test cricket much later than 1877 (see the lines for West Indies and India in Figure~\ref{RawOppPlot}) and teams appear to take several years to adapt and improve.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{raw_opposition}
\caption{Mean bowling averages across decades for Australia (solid line), India (dashed), South Africa (dotted) and West Indies (dot-dash).}
\label{RawOppPlot}
\end{figure}
This motivates the inclusion of dynamic opposition effects, where we again adopt smoothing splines, this time with knots chosen at the midpoints of decades (of which there are fourteen); this is a natural timespan, both in a general and sporting sense. For reasons of identifiability, the opposition effect of Australia in 2020 is chosen as the reference value.
\subsection{Log-linear model for the mean rate}
As well as runs conceded by a bowler and the strength of opposition, there are several other factors that can affect performance, some of which are considered formally in the model. Introducing some notation for the available information: $r_{ijk}$ is the number of runs conceded by bowler $i$ during the $k$th innings in the $j$th year of his career, $y_{ijk}$ represents the year of this same event, $o_{ijk}$ indicates the opposition (there are twelve Test playing countries in these data), $h_{ijk}$ indicates whether the innings took place in the bowler's home country ($1=\text{home}$, $2=\text{away}$), $m_{ijk}$ is the within-match innings index and $t_{ijk}$ represents winning or losing the toss ($1=\text{won}$, $2=\text{lost}$).
These remaining terms in the model for the wicket-taking rate are assumed to be additive and, hence, the mean log-rate is modelled as
\begin{multline}\label{eq:loglinmodel}
\log\mu_{ijk} = \sum_{p=1}^{8}\beta_{p} B_{p}\left\{\log r_{ijk} \right\}
+ \sum_{q=1}^{q_o}\omega_{q, o_{ijk}} B_{q, o_{ijk}}(y_{ijk})I(\textrm{Opp} = o_{ijk}) \\
+ \theta_i
+ \zeta_{h_{ijk}} + \xi_{m_{ijk}} + \gamma I(t_{ijk} = 1) I(m_{ijk} = 1)
\end{multline}
where $\theta_i$ represents the ability of player $i$ and the next three terms in the model are game-specific, allowing home advantage, pitch degradation (via the match innings effects), and whether the toss was won or lost, respectively, to be taken into account. Home advantage is ubiquitous in sport \citep{Pollard2005}, it is widely believed that it is easier to bowl as a match progresses to the third and fourth innings due to pitch degradation, and winning the toss allows a team to have `best' use of the playing/weather conditions. The effect of losing the toss is ameliorated as the match progresses, so we anticipate that this only affects the first innings of the match, and this effect is captured by $\gamma$.
The design matrices $B_p$ and $B_{q, o_{ijk}}$ are the spline bases for (log) runs and each opposition detailed in the previous two subsections, with associated parameters $\beta_p$ and $\omega_{q, o_{ijk}}$. Note that the summation index for the opposition spline varies across countries owing to their different spans of data - as seen in subsection \ref{subsec:opp_data} - with the total number of parameters for each opposition denoted by $q_o, o = 1, \ldots, 12$.
For identifiability purposes we set $\zeta_1 = \xi_1 = 0$, measuring the impact of playing away via $\zeta_2$ and the innings effects relative to the first innings through $\xi_2, \xi_3$ and $\xi_4$. We impose a sum-to-zero constraint on the player abilities, thus in this model $\exp(\theta_i)$ is the rate of wickets per innings taken by player $i$ relative to the average player, with the remaining effects fixed.
CMP regression also allows a model for the dispersion. Here, recognising that we may have both under and overdispersion at play, we opt for a player-specific dispersion term $\nu_i, i = 1, \ldots 2148$. Naturally, this could be extended to include covariates, particularly runs, but this is not pursued here since the model is already heavily parameterised (for the mean).
\section{Computational details and choice of priors}
R \citep{rcore} was used for all model fitting, analysis and plotting, with the \verb|splines| package used to generate the basis splines for runs and the temporal opposition effects. The Conway-Maxwell-Poisson distribution is not included in standard Bayesian software such as \verb|rstan| \citep{rstan} and \verb|rjags| \citep{Plummer2004} so bespoke code was written in R to implement the model. The code is available on GitHub at \url{https://github.com/petephilipson/MPCMP_Test_bowlers}.
For analysis, four MCMC chains were run in parallel with 1000 warm-up iterations followed by 5000 further iterations. Model fits took approximately eight hours on a standard MacBook Pro. A Metropolis-within-Gibbs algorithm is used in the MCMC scheme, with component-wise updates for all parameters except for those involved in the spline for runs, $\vec{\beta}$. For these parameters a block update was used to circumvent the poor mixing seen when deploying one-at-a-time updates, with the proposal covariance matrix based on the estimated parameter covariance matrix from a frequentist fit using the \verb|mpcmp| package. In order to simulate from the CMP distribution to enable posterior predictive checking, the \verb|COMPoissonReg| \citep{COMPoissonReg} package was used. Plots were generated using \verb|ggplot2| \citep{ggplot2} and highest posterior density intervals were calculated using \verb|coda| \citep{Rcoda}.
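For illustration, a schematic of the component-wise random-walk Metropolis update is sketched below, applied to a generic log-posterior and a toy target, since the actual posterior involves the truncated MPCMP likelihood; the block update for $\vec{\beta}$ works analogously with a multivariate normal proposal.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def mwg_sweep(theta, log_post, scales):
    """One sweep of Metropolis-within-Gibbs: update one component at a time."""
    for i in range(len(theta)):
        prop = theta.copy()
        prop[i] += rng.normal(0.0, scales[i])          # symmetric random walk
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                               # accept the proposal
    return theta

toy_log_post = lambda th: -0.5 * np.sum(th ** 2)       # toy N(0, 1) target
theta, draws = np.zeros(3), []
for _ in range(2000):
    theta = mwg_sweep(theta, toy_log_post, scales=np.full(3, 0.8))
    draws.append(theta.copy())
\end{verbatim}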
\subsection{Prior distribution}
For the innings, playing away and winning the toss effects we use zero mean normal distributions with standard deviation $0.5 \log 2$, reflecting a belief that these effects are likely to be quite small on the multiplicative wicket-taking scale, with effects larger than two-fold increases or decreases deemed unlikely (with a 5\% chance a priori). The same prior distribution is used for the player ability terms, $\vec{\theta}$, reflecting that while heterogeneity is expected we do not expect players to be, say, five or ten times better/worse in terms of rate. The coefficients for the splines in both the runs and opposition components are given standard normal priors. We recognise that smoothing priors could be adopted here, but as careful thought has already been given to the choice of knots we do not pursue such an approach.
For the dispersion parameters we work on the log-scale, introducing $\eta_i = \log(\nu_i)$ for $i = 1, \ldots, 2148$. We adopt a prior distribution that assumes equidispersion, under which the MPCMP model is equivalent to a Poisson distribution. Due to the counts being small we do not expect the dispersion in either direction to be that extreme, allowing a 5\% chance for $\eta_i$ to be a three-fold change from the a priori mean of equidispersion. Hence, the prior distribution for the log-dispersion is $\eta_i \sim N(0, 0.5 \log 3)$ for $i = 1, \ldots, 2148$.
\section{Results}
\subsection{Functional form for runs}
A plot of wickets against runs using the posterior means for $\beta$ is given in Figure~\ref{fitted_runs_spline}. This clearly shows the non-linear relationship between wickets and runs and suggests that using the standard bowling average, which operates in a linear fashion, overlooks the true nature of how the number of wickets taken varies with the number of runs conceded. An important ramification of this is that the standard bowling average overestimates the number of wickets taken as the number of runs grows large, whereas the true relationship suggests that the rate starts to flatten out for values of runs larger than 50. Returning to the figure, we clearly see much more uncertainty for larger values of runs, where, as seen earlier, the data are considerably more sparse.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{runs_fitted_spline.pdf}
\includegraphics[width=0.48\textwidth]{logruns_fitted_spline.pdf}
\caption{Mean wickets taken against runs on the linear (left) and log (right) scales with 95\% HDI interval uncertainty bands; solid circles represent the knots for the spline}
\label{fitted_runs_spline}
\end{figure}
\subsection{Opposition effects}
A plot of the fitted posterior mean profiles for each opposition is given in Figure~\ref{fitted_opp_splines}. The largest values of all occur for South Africa when they first played (against the more experienced England and Australia exclusively); most teams struggle when they first play Test cricket, as shown by the largest posterior means at the left-hand side of each individual plot. Overall, this led to the low Test bowling averages of the 1880s-1900s that still stand today as the lowest of all-time and adjusting for this seems fundamental to a fairer comparison and/or ranking of players.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{fitted_opp_splines_2.pdf}
\caption{Posterior mean and 95\% HDI bands for each opposition}
\label{fitted_opp_splines}
\end{figure}
\subsection{Game-specific effects}
The posterior means and 95\% highest density intervals (HDIs) for the innings effects - on the multiplicative scale - are 1.07 (1.04 - 1.09), 1.03 (1.00 - 1.05) and 1.00 (0.97 - 1.03) for the second, third and fourth innings respectively, when compared to the first innings. Similarly, the effect of playing away is to reduce the mean number of wickets taken by around $10\%$; posterior mean and 95\% HDI are 0.92 (0.90 - 0.94). The impact of bowling after winning the toss is the strongest of the effects we consider, with a posterior mean and 95\% HDI interval of 1.12 (1.08 - 1.15). Boxplots summarising the game-specific effects are shown in Figure~\ref{FigInnsHA}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{game_effects_2.pdf}
\caption{Boxplots of the posterior distributions for playing away ($\zeta_2$), match innings 2-4 ($\xi_2, \xi_3, \xi_4$) and winning the toss and bowling ($\gamma$).}
\label{FigInnsHA}
\end{figure}
\subsection{Player rankings}
The top thirty bowlers, as ranked by their posterior mean ability, $\theta_i$, are given in Table~\ref{TableOfRanks}. We also include the posterior dispersion parameter for each player along with information on the era in which they played (via the date of their debut) and the amount of available data as measured through the total number of innings they played in ($n_i$).
\begin{table}
\caption{\label{TableOfRanks}Top 30 Test match bowlers ranked by posterior mean multiplicative rate}
\fbox{%
\begin{tabular}{r l r r r r c}
\hline
& & & & \multicolumn{2}{c}{Player ability} & Dispersion\\
Rank& Name& Debut& Innings& $E(e^\theta)$& $SD(e^\theta)$ & $E(\nu)$ \\
\hline
1& M Muralitharan& 1992& 230& 2.04& 0.08& 0.83\\
2& SF Barnes& 1901& 50& 1.86& 0.18& 0.64\\
3& WJ O'Reilly& 1932& 48& 1.81& 0.18& 0.71\\
4& Sir RJ Hadlee& 1973& 150& 1.79& 0.11& 0.59\\
5& CV Grimmett& 1925& 67& 1.75& 0.14& 0.80\\
6& AA Donald& 1992& 129& 1.75& 0.10& 1.04\\
7& MJ Procter& 1967& 14& 1.74& 0.26& 1.12\\
8& MD Marshall& 1978& 151& 1.72& 0.10& 0.76\\
9& DW Steyn& 2004& 171& 1.71& 0.09& 0.71\\
10& GD McGrath& 1993& 241& 1.70& 0.09& 0.65\\
11& JJ Ferris& 1887& 16& 1.70& 0.23& 1.03\\
12& CEL Ambrose& 1988& 179& 1.70& 0.10& 0.69\\
13& SK Warne& 1992& 271& 1.70& 0.08& 0.64\\
14& T Richardson& 1893& 23& 1.69& 0.21& 0.93\\
15& J Cowie& 1937& 13& 1.69& 0.26& 0.89\\
16& JC Laker& 1948& 86& 1.68& 0.14& 0.64\\
17& Mohammad Asif& 2005& 44& 1.66& 0.17& 0.76\\
18& DK Lillee& 1971& 132& 1.66& 0.10& 0.84\\
19& K Rabada& 2015& 75& 1.66& 0.12& 1.15\\
20& R Ashwin& 2011& 129& 1.64& 0.10& 0.73\\
21& Imran Khan& 1971& 160& 1.63& 0.10& 0.71\\
22& CTB Turner& 1887& 30& 1.62& 0.20& 0.68\\
23& Waqar Younis& 1989& 154& 1.62& 0.09& 0.77\\
24& SE Bond& 2001& 32& 1.62& 0.16& 1.22\\
25& H Ironmonger& 1928& 27& 1.62& 0.21& 0.57\\
26& J Garner& 1977& 111& 1.62& 0.10& 1.23\\
27& K Higgs& 1965& 27& 1.61& 0.19& 1.01\\
28& FS Trueman& 1952& 126& 1.61& 0.10& 0.85\\
29& AK Davidson& 1953& 82& 1.60& 0.14& 0.63\\
30& FH Tyson& 1954& 29& 1.60& 0.20& 0.66\\
\hline
\end{tabular}}
\end{table}
The rankings under the proposed model differ substantially from a ranking based on bowling average alone. As a case in point, the lowest, i.e. best, bowling average of all time belongs to GA Lohmann, who is ranked 52nd in our model. Similarly, Muttiah Muralitharan, who tops our list, is 45th on the list of best averages. The main ramifications of using the proposed model are that players from the early years are ranked lower, and spinners are generally ranked higher. The latter is an artefact of the model, in that bowling long spells and taking wickets is now appropriately recognised.
We also see from Table~\ref{TableOfRanks} that seven players have underdispersed data judging from the posterior means of $\nu_i$, verifying that a model capable of handling underdispersed (and overdispersed) counts is required for these data.
\subsection{Model fitting}
The performance of the model is evaluated using the posterior predictive distribution. Namely, due to the small number of observed counts, it is viable to compute the model-based posterior predictive probability for each value 0, 1, \ldots, 10 and compare it to the observed proportion in each case. A summary of this information is given in Figure~\ref{OvE_summary}, where we see excellent agreement between the observed and expected proportions at each value of wickets.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{obs_vs_exp_wkts.pdf}
\caption{Observed (grey circles) and model-based (crosses) probabilities for each value of wickets}
\label{OvE_summary}
\end{figure}
\section{Discussion}
The outcome of interest considered here was the number of wickets taken by a bowler in an innings, which was modelled using a truncated mean-parameterised Conway-Maxwell-Poisson distribution. Other classic count models were considered, but it was found that Poisson and negative binomial regression models failed to adequately describe the data. Alternative count models capable of handling both under- and overdispersion may perform equally well, such as those based on the Sichel and Generalised Waring distributions, but alternative models are not pursued further here, principally owing to the good performance of the chosen model.
Additional metrics not formally modelled here are bowling strike rate and bowling economy rate, which concern the number of balls bowled (rather than the number of runs conceded) per wicket and runs conceded per over respectively. Whilst both informative measures, they are typically viewed as secondary and tertiary respectively to bowling average in Test cricket, where time is less constrained than in shorter form cricket. Hence, future work looking at one day international or Twenty20 cricket could consider the triple of average, strike rate and economy rate for bowlers, alongside average and strike rate for batsmen.
One limitation of this work is that we have ignored possible dependence between players: the taking of wickets can be thought of as a competing resource problem, and the success (or lack thereof) of one player may have an impact on other players in the same team. Indeed, the problem could potentially be recast as one of competing risks. However, this argument is somewhat circular, in that a bowler would not concede many runs if a teammate were taking lots of wickets, and this is accounted for through the modelling of the relationship between wickets and runs.
The MPCMP model may have broader use in other sports where there may be bidispersed data, for example modelling goals scored in football matches - stronger and weaker teams are likely to exhibit underdispersion; hockey, ice-hockey and baseball all have small (albeit not truncated) counts as outcomes of interest with bidispersion likely at the team or player level, or both. Moving away from sports to other fields, the MPCMP model could be used to model parity, which is known to vary widely across countries with heavy underdispersion \citep{Barakat2016} and for longitudinal counts with volatile (overdispersed) and stable (underdispersed) profiles at the patient level, where the level of variability may be related to an outcome of interest in a joint modelling setting. Indeed, it will have broader use in any field where counts are subject to bidispersion.
This work has shown that truncated counts subject to bidispersion can be handled in a mean-parameterised CMP model, based on a large dataset, without too much computational overhead. Further methodological work is needed to implement the model in the case of non-truncated counts and to seek faster computational methods.
\newpage
\bibliographystyle{rss}
\section{Introduction}
In recent years, outdoor sports activities have become one of the most popular pastimes. Mountains are usually the favourite place for practising outdoor sports, despite the fact that these activities often involve going to dangerous places where people may have to face critical situations. Indeed, mountains are one of the places where most Search And Rescue (SAR) operations are undertaken.
Most situations where a SAR team is needed are caused either by bad weather or by negligence, carelessness or lack of concentration.
The typical procedure when someone gets lost is as follows. A SAR team is deployed to the area where the person might be. The time elapsed before the SAR operation is completed can make rescue tasks more difficult due to fainting, anxiety, sunstroke, etc. When a SAR mission starts, nobody knows how long it will take, because it depends on the number of police officers, fire-fighters or civilians involved in the operation. Time is one of the main problems of SAR missions: when someone gets lost in the middle of a mountain, for example, the period until rescue can determine the health and even the life of the missing person. Sometimes helicopters are needed in SAR operations to facilitate the search work, but rarely more than one can be used due to the high costs involved in each SAR operation.
In this work, we have developed a novel solution to face the difficulties involved in SAR operations. In particular, this proposal consists of a platform to coordinate SAR operations in which drones perform the search tasks that would nowadays be carried out by helicopters in combination with people, but without the high costs that this would involve. In this proposal, a person is located by making use of signals emitted by beacon devices using Bluetooth Low Energy. These signals can come from mobile devices such as smartphones, or from Bluetooth gadgets such as the Physical Web beacon devices developed by Google.
This work is structured as follows.
Section \ref{sec:preliminaries} describes the used beacon technology and introduces some basic concepts on Unmanned Aerial Vehicles. Section \ref{sec:rescuesystem} presents the state of the art on technological solutions used for SAR operations. Section \ref{sec:beaconimpl} includes the main details of beacon signalling implementation in smartphones. Section \ref{sec:emergencyprotocol} describes in detail the steps followed in the proposal to successfully complete a SAR mission. Section \ref{sec:security} contains a study of some possible attacks that can affect the proposal. Section \ref{sec:benchmark} shows a comparative study between the current searching methods and this proposal. Finally, Section \ref{sec:conclusions} closes the paper with some conclusions and open problems.
\section{Preliminaries}
\label{sec:preliminaries}
Two different technologies are combined in the proposal presented in this paper. The first one is beacon technology: a beacon is a low-energy device that emits a broadcast signal using Bluetooth Low Energy (BLE). Traditionally, beacons have been used in indoor positioning systems \citep{inoue2009indoor} \citep{yim2008introducing}, where they provide accurate locations thanks to their short range. Besides, beacons have also been widely used in the retail world where, for instance, a client approaching a certain shop or product automatically receives an advertisement beacon signal with discounts for that specific shop or product.
Beacon performance depends on the specific hardware, which is usually composed of a BLE chip and a battery. The use of BLE technology allows beacons to operate with minimal power consumption, so the battery of a beacon usually lasts for a long time, even years \citep{kamath2010measuring}.
A beacon device usually broadcasts its Universally Unique IDentifier, which uniquely identifies the beacon and the application that uses it. In addition, the broadcast signal also includes some bytes that can be used to define the device position, send a URL, or automatically execute an action.
In particular, the performance of beacon devices is based on two main properties: the Transmission Power (TX Power) and the Advertising Interval. The TX Power is the power with which the beacon signal is broadcast and can be set manually. On the one hand, a higher TX Power implies that the beacon can be discovered at a larger distance, since the signal intensity decreases with distance, but it also implies higher battery consumption. On the other hand, a lower TX Power implies a shorter range but also lower battery consumption. The Advertising Interval is the time between consecutive beacon transmissions and can be set depending on the needs of the developed system and its latency requirements. The Advertising Interval is specified in milliseconds: smaller values (more frequent transmissions) imply higher battery consumption, while larger values involve lower battery consumption.
Different protocols can be used to send data in packets through the advertising mode. These protocols establish a format for the advertising data so that applications can read these data properly. The most widespread protocols are: iBeacon \citep{ibeacon}, created by Apple; Eddystone \citep{eddystone}, the open source project created by Google; and AltBeacon \citep{helms2015altbeacon}\citep{github:altbeacon}, the protocol developed by Radius Networks.
The second technology used in the proposed system corresponds to Unmanned Aerial Vehicles (UAVs), which are unmanned aircraft controlled either remotely by pilots or autonomously by on-board computers with GPS technology. There are two different kinds of UAVs: gliders and multi-copters. On the one hand, gliders are UAVs that, as their name indicates, can glide, which is an important advantage over other types of drones because it significantly improves energy consumption. On the other hand, multi-copters are UAVs that have more than one rotor. Their main advantage is the ability to hover, at the expense of higher power consumption.
Autonomous UAVs are the type of UAV most relevant to this proposal. In this regard, there are mainly two different systems that have been used to achieve autonomous operation. The first one is known as the Inertial Navigation System (INS) \citep{berg1970inertial} and uses the output of the inertial sensors of the UAV to estimate its position, speed and orientation. In these positioning systems, the UAV uses a 3-axis accelerometer that measures the specific force acting on the platform and a 3-axis gyroscope that measures its rotation. An advantage of this kind of positioning system is that it does not require any external signal to provide a navigation solution. On the other hand, this type of system produces positioning errors of the order of several hundreds of meters. The other main kind of positioning system used by UAVs is GPS. Since the GPS system needs an external signal to work, and the navigation accuracy depends on the signal quality and the geometry of the satellites in view, the error can vary between one and hundreds of meters depending on the GPS receiver used. To overcome the problems of these navigation systems in UAVs, it is usual to combine INS and GPS in order to obtain the good precision of the GPS system together with the high-frequency positioning data of the INS. This solution is also applied in the proposal described here.
A mandatory prerequisite for the proper operation of the system is that the devices (smartphone and drone) have enough charge to work throughout the rescue mission. If the drone (or smartphone) runs out of battery, this could produce losses of money and information (mainly because of the hardware). Although this paper does not consider other power supply systems, it would be possible to feed the aerial vehicles as in other aircraft such as ``Solar Impulse 1'', or as proposed in the patent of the Boeing company \citep{boeingPatent}, where a recharging system is defined by a static flight over a series of charging antennas. However, this kind of improvement would not allow current drones to fly due to the weight, surface area, load balancing or hardware cost involved.
\section{State of the Art}
\label{sec:rescuesystem}
In recent years, different proposals have been presented in the field of SAR systems. Some of them are only communication systems, but others include different rescue mechanisms. Many of them focus on the use of the so-called emergency call, included nowadays in all mobile devices. This mode allows users to make emergency calls even when they have no coverage to make a phone call to any of their contacts.
To make this system work, the European Union decided in 1991 that its member states had to use the 112 number for emergency calls. Moreover, mobile operators and mobile phone manufacturers were forced to adapt all their products to let users make emergency calls through the network infrastructure of any other operator, independently of their own mobile operator. This ability is the basis of Alpify \citep{alpify}, a mobile application that uses the emergency call system when there is no Internet connection and notifies emergency services and contacts through SMS messages that include the georeferenced coordinates of the current position. A problem of this proposal, derived from its use of the emergency call system, is that some network infrastructure must be available, since it requires coverage from at least one operator.
In the situation analysed in this work, the initial hypothesis is that there is no coverage from any mobile operator because all network infrastructures are unusable, collapsed, or simply do not exist because the place is remote or difficult to access. In these cases, the use of a system based on communications through network infrastructures is not a feasible solution. Moreover, systems that need prior agreements between the company that develops the system and the different emergency services of every state of each country are more difficult to use in different places around the world.
Centralized systems like Alpify only notify the emergency services and a previously selected contact person and, depending on the location, they may be delayed in arriving at the position. Not allowing people who are nearby to help in a faster way is an important disadvantage of this kind of system.
Other systems, like the one described in \citep{santos2014alternative}, propose the use of Wi-Fi Direct technology, so that when there is no Internet connection, people within range of the Wi-Fi Direct technology are notified. The basis of this system is sending an emergency message, including the geolocated position of the person, to other people within Wi-Fi Direct range. When this emergency message is received, if the receiving user has an Internet connection, the warning is automatically forwarded to the emergency services; if not, the message is re-sent through broadcast in a multi-hop fashion. This system is an improvement over systems that only use the emergency call, but it has problems when there are not enough users or they do not have Wi-Fi Direct technology in their smartphones.
The authors of \citep{jang2009rescue} try to solve this problem through the use of other devices, such as laptops carried by the emergency services. In that proposal, the goal is that the first units of the emergency services to arrive at the site deploy a Wi-Fi-based Mobile Ad-hoc Network through laptops. After the network deployment, a group communication system based on a peer-to-peer architecture is proposed, which supports communications such as VoIP, Push-to-Talk, instant messaging, social networks, etc. The main advantage of this kind of solution is that the network is managed by specialized personnel and its use is restricted to emergency services, preventing access by other users. Its main disadvantage is that people who have suffered a catastrophe and are isolated remain isolated during the time that the emergency services take to arrive at the affected area.
Different schemes, like the one described in \citep{lamarca2005place}, use Physical Web beacons to find injured people once the emergency or rescue protocol is activated. This kind of system has the disadvantage that the search has to be done manually, and the rescue or emergency service must have access to the emergency zone, which might not be possible in some situations.
Other proposals use UAVs for SAR operations in emergency situations where rescue teams cannot gain access. This is the case described in \citep{dickensheets2015avadrone}, where the use of UAVs in combination with Physical Web beacons is proposed to search for people affected by avalanches.
Moreover, there are systems, like the one described in \citep{ezequiel2014uav}, which use aerial image processing to detect people in trouble for post-disaster assessment. Their main disadvantage is that image processing can be imprecise in situations where visibility is poor, such as fires where smoke hinders visibility.
Other solutions for management and monitoring in emergency situations, like the one proposed in \citep{chou2010disaster}, are based on the use of real-time aerial photos. The main advantage of that proposal is the use of real-time aerial photos to produce digital elevation model data, which can be very useful to see the latest terrain conditions and provide a reference for disaster recovery in the future.
The work \citep{Silvagni201718} offers a solution related to the proposal of this paper, composed of a body detection system using sensors on drones in snow avalanches (and ski areas), forests and mountains. The authors make use of two cameras: a normal daylight camera and a thermal one for detection in programmed night flights. They also discuss the possibility of carrying a payload package of up to 5 kg. Another alternative is the proposal of \citep{bejiga2016convolutional}, which describes the use of UAVs equipped with cameras to find people missing in snow avalanches using neural networks trained to detect changes in patterns. The proposal described here could be complementary to both, improving them with remote administration and real-time monitoring.
\section{Beacon Implementation}
\label{sec:beaconimpl}
The use of beacons brings an important advantage to SAR operations thanks to the higher speed and accuracy in locating missing people who carry them. However, a disadvantage of such systems is the need to carry a dedicated device in order to be rescued when an emergency situation happens. That is why this proposal relies on smartphones, since they are devices that people always carry with them.
In order to use the beacon system on the mobile platform, the best solution is to use BLE wireless technology to simulate the beacon. In the Android operating system, this is possible on all devices running Android 5.0 or higher with Bluetooth 4.1 compatible hardware. To implement the simulated beacon, the Android BLE peripheral mode is used, which allows running applications that detect the presence of other smartphones nearby with minimum battery consumption.
For the implementation, the AltBeacon Android Beacon Library \citep{github:altbeacon} has been used. This is an open source project that detects beacons following the open AltBeacon standard by default, and it can be easily configured to work with the most popular beacon formats such as Eddystone or iBeacon. In the proposed system, beacons have been implemented using the Eddystone-URL format, a new variant of Google's UriBeacon, also known as the Physical Web beacon. The main difference of the Eddystone-URL format is that it does not transmit a UID but a URL, which can be automatically detected by the smartphone browser or an application and opened without user intervention. This format uses 17 bytes to store the URL, which makes the use of short URLs necessary. However, this is not a problem because there are many services, including one developed by Google, that provide shortened URLs.
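To make the 17-byte constraint concrete, the following Python sketch packs a URL into an Eddystone-URL advertisement frame. It is only an illustration (with a reduced expansion table and a hypothetical shortened URL), not the code used in the Android application, which relies on the AltBeacon library for this encoding.
\begin{verbatim}
# Minimal sketch (not the project's code) of packing a URL into an
# Eddystone-URL frame, illustrating the 17-byte limit that motivates
# the use of shortened URLs. The expansion table is abridged.
SCHEMES = {"https://www.": 0x01, "http://www.": 0x00,
           "https://": 0x03, "http://": 0x02}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".net/": 0x03,
              ".com": 0x07, ".org": 0x08, ".net": 0x0A}

def encode_eddystone_url(url, tx_power=-21):
    frame = bytearray([0x10, tx_power & 0xFF])      # frame type + TX power
    for prefix, code in sorted(SCHEMES.items(), key=lambda kv: -len(kv[0])):
        if url.startswith(prefix):
            frame.append(code)
            url = url[len(prefix):]
            break
    else:
        raise ValueError("URL scheme not supported by Eddystone-URL")
    body = bytearray()
    i = 0
    while i < len(url):
        for text, code in sorted(EXPANSIONS.items(), key=lambda kv: -len(kv[0])):
            if url.startswith(text, i):
                body.append(code)
                i += len(text)
                break
        else:
            body.append(ord(url[i]))
            i += 1
    if len(body) > 17:                              # Eddystone-URL limit
        raise ValueError("encoded URL longer than 17 bytes; shorten it first")
    return bytes(frame + body)

# A hypothetical shortened profile URL fits easily within the limit.
print(encode_eddystone_url("https://goo.gl/Ab12Cd").hex())
\end{verbatim}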
\section{Proposed System}
\label{sec:emergencyprotocol}
In order to design a system that provides a complete solution in this field, the proposal has been developed in two different parts. On the one hand, an active search solution is offered by combining emerging technologies like UAVs and BLE beacons. On the other hand, Bluetooth and the help of citizens are combined to locate missing people. These two parts of the system are hereafter referred to as the active and passive methods, respectively.
\subsection{Active method}
The so-called active method is based on Bluetooth devices that emit a spherical radio signal containing a unique identifier for each device, so that when a user gets lost, he or she can be located by scanning for the Bluetooth signal in order to determine his/her approximate position. This scan is performed over the area using one or more drones.
First of all, every user must have an Android device and register his/her personal data, such as name, surname, address, blood type, etc., using the system web platform, which is handled by the operator of the emergency service. Then, once the user is registered, the operator provides the code that the user must enter in his/her terminal to pair it with the database user profile. Afterwards, if a user gets lost, the SAR steps are as shown in Fig.
\ref{fig:nuevoEsquema}:
\begin{figure}[ht]
\centering
\includegraphics[width=1.35\textwidth]{cliente_servidor.jpg}
\caption{Flow diagram of the proposed emergency protocol}
\label{fig:nuevoEsquema}
\end{figure}
\begin{itemize}
\item \textbf{Step 1}: The operator is notified by phone by a contact person about the disappearance of a registered person. Then, the operator launches the control panel web platform (shown in Fig. \ref{fig:nuevoPanel}). The main screen of this panel shows an interactive Google Map that allows the operator to interact with the back-end system through a few clicks. For instance, the operator can click on the map to set a series of points in order to create a search area. The panel also allows selecting several parameters, such as the user(s) being searched for, the number of drones in the mission, the flight altitude relative to the ground and the distance between points inside the polygon.
\item \textbf{Steps 2 and 3}: When the operator completes the mission configuration, he/she clicks on the 'Start' button, which triggers a signal to start the SAR process. Then, the system automatically checks the weather conditions in that area in order to decide whether to launch the mission or not. If the weather is good enough, the mission continues. Otherwise, if the weather is, for example, rainy or windy, the drone-based operation is cancelled because drones cannot fly in such conditions.
The panel also includes a check-box to continue the mission despite the bad weather.
\item \textbf{Steps 4 and 5}: The data are then encrypted with the algorithm discussed in Section \ref{sec:security}, packed in JSON format and sent to the main server through a secure HTTPS layer.
\item \textbf{Steps 6 and 7}: The server decrypts the received data in order to obtain the identifiers of the user(s) being searched for and the map grid points used to compute the grid of each search area. Using the generated grid points, and starting from the established base point (red dot in Fig. \ref{fig:nuevoPanel}), an optimal route is calculated for each area. To make sure that all points are visited at low cost, the problem is formulated as a Travelling Salesman Problem (TSP), and the A* path-finding algorithm \citep{astar} is applied over the point graph of each area. All areas share the same starting point, so the server processes each area separately to obtain separate routes, and a Keyhole Markup Language (KML) file is used to display the geographic data in three dimensions (see Fig. \ref{fig:kml3d}). A simplified sketch of this route ordering is given below, after the description of the remaining steps.
\item \textbf{Steps 8 and 9}: These steps run in parallel and are the most important ones in the emergency protocol because, without them, the use of drones in the mission makes no sense. When the drones take off (each one towards its search area), the mission starts. Every few seconds, the web operator panel asynchronously asks the server whether there are new updated data. These updates are sent constantly by the drones. If the smartphone mounted on top of the drone detects a Bluetooth signal from a beacon, it checks with the main server whether that person is being searched for. If so, the drone's smartphone automatically begins to take several zenithal photos that the operator can see in real time from the web panel, as if it were a streaming service (see Fig. \ref{fig:dronedetection}).
\end{itemize}
Once the drone has finished its flight, the operator can analyse all the information gathered by the smartphone and view the results of past missions on the operator panel (see Fig. \ref{fig:resultados}). In this panel, the operator can see all the photos that the drones have taken and the GPS position of each one. The rescue tasks begin when the lost person has been found.
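To illustrate the route construction of Steps 6 and 7, the following Python sketch orders the grid points of a single search area with a greedy nearest-neighbour heuristic starting from the base point. The deployed system formulates the problem as a TSP and applies the A* path-finding algorithm over the point graph; the heuristic and the example grid below are only illustrative simplifications.
\begin{verbatim}
# Illustrative sketch only: a greedy nearest-neighbour visiting order for
# the grid points of one search area, starting from the drone base point.
import math

def nn_route(base, points):
    """Order 'points' (list of (x, y) coordinates) greedily by proximity."""
    route, current = [base], base
    remaining = list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Example with a hypothetical 3 x 3 grid of points and a base at the origin.
grid = [(x, y) for x in (50, 100, 150) for y in (50, 100, 150)]
print(nn_route((0, 0), grid))
\end{verbatim}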
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{panelweb.jpg}
\caption{Operator web panel}
\label{fig:nuevoPanel}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{kml3d.jpg}
\caption{Generated KML and view in 3D}
\label{fig:kml3d}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{drone_photo.jpg}
\caption{Drone detecting a missing user}
\label{fig:dronedetection}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{panel_resultados.jpg}
\caption{Result panel}
\label{fig:resultados}
\end{figure}
\subsection{Passive method}
One of the main advantages of using BLE signals, especially those following the UriBeacon protocol or the new Eddystone format, is the ability to store a web address in the physical device or in the simulated beacon. The URL used in this proposal is a web address that points directly to a user profile. The key point is that, without installing any new application on the mobile phone, the smartphone is able to display beacon notifications built from the received BLE signal information. This notification shows the header title of the website whose address the sender is emitting. The detailed steps are explained in the list shown below. This is possible thanks to the fact that, in Android 5.1 or higher, Google has implemented a feature within its 'Google Now' service that passively scans for nearby beacon devices, following five steps:
\begin{itemize}
\item \textbf{Step 1:} The simulated beacon (see Fig. \ref{fig:passiveMethod}-A) emits a wireless spherical BLE signal with a range of about 100 m that contains a URL paired with the user profile.
\item \textbf{Step 2:} If there is a smartphone (see Fig. \ref{fig:passiveMethod}-B) inside the emitted signal range, the smartphone takes the received shortened URL and visits it in the background.
\item \textbf{Step 3:} The server sends the HTML header (title, body and related information) to the smartphone.
\item \textbf{Step 4:} The device shows it as a notification as seen in Fig. \ref{fig:passiveMethod}-B.
\item \textbf{Step 5:} When the user clicks on the alert, a web browser is opened to display the content of the page. In this case the displayed data correspond to three steps:
\begin{enumerate}
\item Call 112 - The user must be able to rely on the system when he or she makes the emergency call through the 112 phone number.
\item Communicate position - This step allows the emergency squad to determine where the user is.
\item Code - The user must introduce the code that appears on the screen, corresponding to his/her identifier.
\end{enumerate}
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[width=1.35\textwidth]{TransmitterHeader.jpg}
\caption{Passive detection protocol}
\label{fig:passiveMethod}
\end{figure}
This feature has been used to allow citizens to participate in the search for missing people: when a person disappears, anyone with an Android terminal running the aforementioned version can help the SAR operation by using his/her smartphone in collaboration with the emergency staff. If a BLE signal corresponding to a beacon is detected but the associated user is not being searched for, the emitted URL points to a 404 (not found) page and no notification is displayed on the smartphone. This is performed dynamically and without any user interaction.
\subsection{Route planner}
It is estimated that the average maximum flight time of a drone is currently half an hour or a little more. That is the main reason why saving search time is a fundamental problem in this proposal. In a first approach, the search areas were established as equidistant geographical points computed with the haversine formula shown in Eq. (\ref{eq:haversine}), where $\varphi$ is the latitude, $\lambda$ is the longitude and $R$ is the Earth radius (approximately 6{,}371 km). This formula is later used to define a square cloud of points that the drones have to visit in an orderly manner, each one starting from a point on the edge of the initial dotted square.
\begin{equation}
\label{eq:haversine}
\begin{aligned}
a &= \sin^2\frac{\Delta \varphi}{2}+\cos\varphi_{1}\cdot\cos\varphi_{2}\cdot\sin^2\frac{\Delta \lambda}{2}, \\
c &= 2 \cdot \arctan\!\left(\frac{\sqrt{a}}{\sqrt{1-a}}\right), \\
d &= R \cdot c
\end{aligned}
\end{equation}
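The following Python sketch implements the haversine distance above and builds an equally spaced square cloud of waypoints around a centre point; the centre coordinates, side length and spacing are purely illustrative and not the values used in the deployed system.
\begin{verbatim}
# Sketch of the haversine distance and of building an equally spaced grid
# of waypoints around a centre; values and spacing are only illustrative.
import math

R = 6371000.0  # mean Earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.atan2(math.sqrt(a), math.sqrt(1 - a))

def square_grid(lat0, lon0, half_side_m=500, step_m=50):
    """Equally spaced waypoints in a square around (lat0, lon0); small-area approximation."""
    dlat = step_m / 111320.0                                   # metres per degree of latitude
    dlon = step_m / (111320.0 * math.cos(math.radians(lat0)))  # shrinks with latitude
    n = int(half_side_m // step_m)
    return [(lat0 + i * dlat, lon0 + j * dlon)
            for i in range(-n, n + 1) for j in range(-n, n + 1)]

pts = square_grid(28.4636, -16.2518)   # hypothetical centre point
print(len(pts), haversine(*pts[0], *pts[-1]))
\end{verbatim}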
The main problem of this solution is that, on many occasions, the square areas are too large while the main search area is small compared to the others. That is why we chose to improve the search functionality over different zones by supporting irregular polygons. With the implementation of irregular polygons, another problem arose: how can the areas be separated in an approximately equal way? After investigating different techniques, we opted for a method of area separation based on the k-means clustering technique.
\subsection{K-means}
The k-means clustering algorithm partitions $n$ points into $k$ clusters by assigning each point to the cluster whose centre is closest according to some distance measure, typically the Euclidean distance. The algorithm iterates between reassigning points and recomputing the cluster centres until the assignments no longer change. The centres, called centroids, must first be initialized; there are several initialization methods that give good results depending on the data set, the simplest being to pick a set of random points in the space (the number of centroids equals the desired number of clusters). Then each point is assigned to its nearest centroid, and each centroid is updated to the mean of the points assigned to it. The assignment and update steps are repeated successively, and when the centroids no longer change, the k-means algorithm stops.
The pseudocode of the algorithm can be seen below:
\begin{itemize}
\item Input: $K$, set of points $x_{1} \dots x_{n}$
\item Place centroids $c_{1} \dots c_{K}$ at random positions
\item Repeat until convergence:
\begin{itemize}
\item for each point $x_{i}$, $i = 1 \dots n$:
\begin{itemize}
\item find the nearest centroid $c_{j}$, i.e., the one that minimizes the distance between $x_{i}$ and $c_{j}$
\item assign the point $x_{i}$ to the cluster of the nearest centroid $j$
\end{itemize}
\item for each cluster $j = 1 \dots K$:
\begin{itemize}
\item update the centroid to the mean of all points assigned to cluster $j$ in the previous step, i.e., $c_{j}(a)=\frac{1}{n_{j}}\sum_{x_{i} \rightarrow c_{j}} x_{i}(a)$ for every dimension $a \in 1 \dots d$
\end{itemize}
\end{itemize}
\item Stop when none of the cluster assignments change
\end{itemize}
In this proposal, the k-means algorithm is used to divide an irregular polygon into areas. This means that, given an irregular polygon, we can plot its interior points geographically on a map while maintaining the computed distances $D$ between each pair of points. Once we have the point cloud, we apply the k-means algorithm on it, thus obtaining $N$ distinct areas (see Fig. \ref{fig:kmeans}). Then, for each of these areas and starting from a common initial point, the above-mentioned TSP problem is solved by applying the A* path-finding algorithm. A short sketch of this area division is shown below.
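The following sketch illustrates this area division with scikit-learn's k-means implementation; the waypoints and the number of sub-areas are hypothetical, and Euclidean distance on latitude/longitude is an acceptable simplification only for small areas.
\begin{verbatim}
# Sketch (not the system's code) of dividing a cloud of waypoints into N
# search sub-areas with k-means; coordinates are purely illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 400 hypothetical (lat, lon) waypoints inside an irregular polygon
waypoints = rng.uniform([28.46, -16.26], [28.47, -16.24], size=(400, 2))

n_drones = 3                                   # one sub-area per drone
km = KMeans(n_clusters=n_drones, n_init=10, random_state=0).fit(waypoints)

for j in range(n_drones):
    area = waypoints[km.labels_ == j]
    print(f"area {j}: {len(area)} waypoints, centroid {km.cluster_centers_[j]}")
\end{verbatim}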
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{kmeans.jpg}
\caption{Area division using the k-means algorithm}
\label{fig:kmeans}
\end{figure}
\section{Security of the proposal}
\label{sec:security}
First, drones can be easy targets for physical or electromagnetic attacks.
\begin{itemize}
\item Physical attacks:
\begin{itemize}
\item Net: If the drone is hit by a net or any type of cloth, the threads can get rolled up in the propellers and the drone could fall to the ground.
\item Metallic dust: If an attacker throws any kind of metallic dust at a drone, the dust particles can get into the main motor, bringing the drone down.
\end{itemize}
\item Electromagnetic methods:
\begin{itemize}
\item Signal jammer: A simple but powerful signal jammer can block or interfere with the phone signal or with the phone's radio interfaces such as Bluetooth, Wi-Fi, etc.
\item GPS jamming: An attacker can send a fake GPS signal to change the drone route. This is possible when the attacker broadcasts interference between the drone and the pilot so that the estimated drone location is wrong. Many security strategies have been proposed against jamming in wireless communications.
\item Data interception: When a drone is in flight and emits a signal through Bluetooth or Wi-Fi, the information travels through the air in all directions, as a sphere. Thus, anybody can intercept that signal using a satellite dish and read the raw data that the drone is emitting. For this reason, in this proposal the transmitted data are encrypted with a strong algorithm.
\end{itemize}
\end{itemize}
Secondly, different types of attack on the server can be launched:
\begin{itemize}
\item Brute force: A brute-force attack is a type of attack usually applied to gain access to an account in order to retrieve sensitive data. It is performed by sending many packets, trying a large number of possible combinations of usernames and passwords. Filtering the traffic is a good way to mitigate this problem; disabling all traffic from the outside to the server can also be useful.
\item Software-based attacks: Malware, Trojans, key-loggers, viruses, ransomware, etc. All these techniques are software-based and are normally used to take control of the machine where they are executed or, in the worst case, of the drones. This could cause a loss of sensitive data on our server. To face this problem, it is necessary to use integrated server protection such as a firewall, antivirus software, etc.
\item DoS/DDoS: A Denial of Service (DoS) attack tries to flood a server machine with traffic. When the server collapses, it cannot answer all the requests from other users, which makes the server unavailable to the rest of the users. If this attack is launched by many attackers at the same time, it is considered a DDoS (Distributed Denial of Service) attack. In order to avoid this kind of attack, we use a traffic filter.
\end{itemize}
Some of the measures implemented to protect the data are as follows.
The database core uses an SQL-based database. Instead of querying it directly, we use an abstraction layer over the database engine, known as Object-Relational Mapping (ORM). This layer allows a programmer to interact with the database models as if they were objects of a class. Moreover, this ORM is extended with a middleware that encrypts the raw stored data using the AES-256 CBC algorithm. With this method we avoid possible database intrusions through SQL injection. AES-256 CBC is also the encryption scheme used for each packet that the drone sends to the server and for the BLE communication between the drone and the beacon.
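As an illustration of this last point, the following Python sketch encrypts a JSON payload with AES-256 in CBC mode, assuming a recent version of the \texttt{cryptography} package; the key handling, message framing and field names are hypothetical simplifications of the deployed scheme.
\begin{verbatim}
# Minimal sketch, assuming the Python 'cryptography' package and a
# pre-shared 256-bit key: AES-256-CBC encryption of a JSON payload with
# PKCS7 padding and a random IV. Not the exact scheme of the system.
import json, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

KEY = os.urandom(32)   # in practice a pre-shared / derived 256-bit key

def encrypt_payload(obj, key=KEY):
    data = json.dumps(obj).encode("utf-8")
    padder = padding.PKCS7(128).padder()             # AES block size: 128 bits
    padded = padder.update(data) + padder.finalize()
    iv = os.urandom(16)                              # fresh IV per message
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()  # prepend IV for the receiver

def decrypt_payload(blob, key=KEY):
    iv, body = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(body) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return json.loads(unpadder.update(padded) + unpadder.finalize())

msg = {"mission": 42, "user": "b3acon-17", "area": [[28.46, -16.25], [28.47, -16.24]]}
assert decrypt_payload(encrypt_payload(msg)) == msg
\end{verbatim}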
\section{Benchmark}
\label{sec:benchmark}
In this section, some questions regarding the reliability of the system, its monetary cost and the required time are addressed.
\subsection{Detection Range }
\label{rangedetection}
The Physical Web beacons used in this proposal must be detectable from long distances in order to perform the missions correctly. To make this possible, the flight altitude must be within the emission range of the Bluetooth sensor. Besides, the flight altitude must allow avoiding certain obstacles like trees, antennas, etc.
According to the Bluetooth SIG \cite{btrange}, a BLE range could reach 500 meters in open field.
In order to verify the system's effectiveness, we have carried out an empirical range-detection test. The devices used were two smartphones (both with Android OS), one with Bluetooth 4.1 as beacon transmitter and the other with Bluetooth 4.0 as receiver. The broadcast application used was BeaconToy \citep{beacontoy}, and for the reception we used Beacon Tools \citep{beacontools}, an application made by Google. The horizontal battery of tests was carried out in an open field on a day with some scattered clouds, no wind and no obstacles between emitter and receiver (see Fig. \ref{fig:btrange} and \ref{fig:btrange2}). To perform the vertical test, we placed a small Physical Web beacon on the bottom of the drone, while on the ground we measured and checked the detection range, carrying out the same tests as in the horizontal case (10 measurements for each altitude). In a real environment, the system would be set up in the reverse order, but we assumed that this factor would not interfere with the experiment. For each distance, each test was repeated 10 times, and the results are shown in Table \ref{tab:detectiontable}.
\begin{figure}[ht]
\centering
\subfigure[Detection range tests in horizontal]{\label{fig:btrange}\includegraphics[width=0.8\linewidth]{bt_range.jpg}}
\subfigure[Detection range tests in vertical]{\label{fig:btrange2}\includegraphics[width=0.8\linewidth]{aerial_beacon.jpg}}
\caption{Bluetooth range detection in horizontal and vertical}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Bluetooth signal detection}
\label{tab:detectiontable}
\begin{tabular}{|c|c|}
\hline
Distance & Success Rate \\ \hline
10\ m & 100\% \\ \hline
20\ m & 100\% \\ \hline
50\ m & 100\% \\ \hline
100\ m & 100\% \\ \hline
150\ m & 90\% \\ \hline
200\ m & 60\% \\ \hline
\end{tabular}
\end{table}
Some studies \citep{bluetoothrange} indicate that the emission range is improving with each new version of BLE technology.
\subsection{Detection time}
The emission power of a Bluetooth device depends on the broadcast period, so the faster the device emits, the shorter its emission range. To perform a search benchmark, a system called Wartes \citep{generalitat} is used to calculate the probability of detecting clues during the search. With this method, the probability of detecting the victim can be calculated through the equation $Pa \cdot Pd = Pe$, where:
\begin{itemize}
\item $Pa$ is the value that gives priority to the segment of the search area.
\item $Pd$ is the value given to the ability of the resources to detect clues of the victim.
\item $Pe$ is the result of the two previous factors, used as a measure of success.
\end{itemize}
Table \ref{timeGeneralitat} shows the resources needed for a search over an area of 2.5 square kilometres. Covering the same distance at a constant speed of 5 m/s with drones, the resulting search time is the one shown in Table \ref{timeDrone}. In this case, it is not possible to guarantee a 100\% detection probability using drones, because several factors, such as whether the user's smartphone has battery left, the flight altitude of the drone, the existence of obstacles between victim and drone, etc., strongly influence the results. However, as we saw in the previous section, if the flight altitude is not too high, the detection probability increases, reaching at least 90\%. Moreover, if we consider a grid whose points are 50 m apart and a detection distance of at least 50 m for each drone, then while a drone is following its route, the search area of each of its points can intersect with the search areas of adjacent points. This means that each point could be covered from one to five times.
\begin{table}[htbp]
\centering
\caption{Needs to cover a 2.5 km squared area using a SAR team}
\label{timeGeneralitat}
\begin{tabular}{|l|l|l|l|r|}
\hline
Distance between people (m) & Nº people & Time each person (min.) & Total time (min.) & \multicolumn{1}{l|}{Pd} \\ \hline
30 & 35 & 210 & 11,130 & 50\% \\ \hline
18 & 88 & 210 & 18,480 & 70\% \\ \hline
6 & 264 & 210 & 55,440 & 90\% \\ \hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Needs to cover a 2.5 km squared area using drones}
\label{timeDrone}
\begin{tabular}{|l|l|l|c|}
\hline
Speed m/s & Nº Drones & Time each drone (min.) & Total time (min.) \\ \hline
5 & 1 & 8.3 & 8.3 \\ \hline
5 & 2 & 4.1 & 8.3 \\ \hline
5 & 5 & 1.7 & 8.3 \\ \hline
\end{tabular}
\end{table}
\subsection{Alternatives to BLE Signal}
According to \cite{goursaud2015dedicated}, current WAN-type wireless communication networks do not meet the requirements of IoT elements in terms of communication range, cost and number of paired devices. Due to the possible deficiencies of the system seen in the previous subsections, the use of another type of technology is proposed to deal with situations where the BLE signal is not good enough for an optimal detection of individuals. New technologies have been devised for IoT devices, such as LPWAN (Low-Power Wide Area Network), a type of connection similar to Wi-Fi that transmits a wireless signal below 1 GHz (compared to the 2.4 GHz or 5 GHz of Wi-Fi), which allows a longer wavelength and therefore a greater communication range. There are currently several manufacturers, although the only one with an open standard license is LoRa, through the LoRa Alliance \citep{lora}. Another emerging communication technology based on LPWAN is Narrowband Fidelity (NB-Fi), defined in \citep{waviot}. This technology is designed for the communication of IoT devices, focusing on range and power consumption. Its benefits include the free license type, the connection range (16 km in cities and 50 km in open country), security and scalability.
However, due to the specifications of the project presented in this paper, it is not possible to implement any of these broadcast technologies, because current smartphones do not integrate the hardware needed to receive and communicate over LPWAN. In the near future, when devices support these technologies, they will be a feasible solution that will greatly improve the current one.
\subsection{Cost per unit}
If a single rescue mission (or more than one) in a month requires several professional firefighters, the cost of the service increases proportionally with the number of firefighters required. However, although the cost of a professional drone can be greater than the monthly salary of a firefighter, as long as the drones are not lost or damaged, the cost of the service remains the same, independently of the number of missions. Thus, if an entire rescue team is replaced by a drone squadron, the initial cost may exceed the salary of the fire brigade, but over time the costs will be much lower than maintaining an entire firefighter squad.
\section{Conclusions}
\label{sec:conclusions}
This paper presents a novel and practical system to find missing people using technologies like BLE beacons and UAVs.
Drones are used to fly over an area in order to scan the BLE signal that is emitted by the missing person's smartphone. In this way, the proposed system can be used to improve SAR operations in emergency situations because it allows the emergency team to scan an area in less time using drones instead of the typical human emergency squad. Apart from the drones, there is no need to use any device other than smartphones. The coordination of all the parts that compose this proposal is performed using a web platform by an operator to create and establish a route before starting the mission. The generated data are processed in a back-end server that traces the route to make it optimal.
As a clear advantage of this proposal over other search systems, it actively helps emergency services and drastically reduces the time and cost of any operation.
As future work, an updated version will be developed and tested in a real environment in order to check its performance. In addition, a new device could be used to complement the task of the drones in night flights, which would allow the detection of people through image patterns using a thermal camera.
\section*{Acknowledgements}
Research supported by the Spanish Ministry of Economy and Competitiveness, the FEDER Fund, and the CajaCanarias Foundation, under Projects TEC2014-54110-R, RTC-2014-1648-8, MTM2015-69138-REDT and DIG02-INSITU.
\section{EXPERIMENTS}
\label{sec:experiment}
\subsection{Experimental Setup}
\label{sec:setup}
\noindent \textbf{Dataset.}
We collected a real-world billiards layouts dataset, which consists of more than 3000 break shots in around 94 international professional 9-ball games {\Comment played by 227 players. In the dataset, there are around 1200 (resp. 1400) billiards layouts with the label ``clear'' (resp. ``win'').}
Each layout contains the positions of the balls that remain on the pool table after the break shot, and each position corresponds to $(x,y)$ coordinates captured by the professional sports analysis software Kinovea. We give more details on the dataset as follows.
When we collect data, a perspective grid provided by Kinovea is laid over the playing surface of the table, which is calibrated into a $200 \times 100$ coordinate system. The coordinates generally lie in the $[0, 200]$ range along the x-axis and the $[0, 100]$ range along the y-axis, with the bottom-left corner of the table being $(0,0)$. The information we collect includes the positions of the balls that remain on the pool table after the break shot, the number of potted balls after the break shot, a binary tag indicating clear or not (clear if the player who performs the break shot pocketed all balls and thus wins the game; unclear otherwise), win or not (in a 9-ball game, win if the player potted ball 9 into a pocket, not win otherwise),
and remarks. Remarks are used to capture special situations, such as a golden break, which means that ball 9 is potted in the break shot, or a foul shot, which means that the shot does not make first contact with the lowest-numbered ball in the layout and is regarded as breaking the rules of the competition. In addition, we provide the YouTube video link for each layout.
\noindent \textbf{Baseline.}
{\Comment To evaluate the effectiveness and efficiency of our BL2Vec, we compare it with the following baselines:
}
\\
$\bullet$EMD~\cite{rubner2000earth}. The Earth Mover's Distance (EMD) is based on the idea of a transportation problem that measures the least amount of work needed to transform one set to another.
\\
$\bullet$Hausdorff~\cite{huttenlocher1993comparing}. Hausdorff distance is based on the idea of point-matching and computes the largest distance among all the distances from a point in one set to the nearest point in the other set.
\\
$\bullet$DTW~\cite{yi1998efficient} and Frechet~\cite{alt1995computing}. DTW and Frechet are two widely used similarity measurements for
sequential data.
To apply the measurements to billiards layouts, we model each layout as a sequence of locations, which corresponds to one cue ball followed by several object balls in ascending order of ball numbers.
\\
$\bullet$Peer Matching (PM). {\color{blue} We adopt a straightforward solution for measuring the similarity between two billiards layouts as the average of the distances between their balls matched based on the numbers}, e.g., ball 1 in one layout is matched to the ball 1 in another layout, and so on.
For those mismatched balls with different numbers, we align them in the ascending order of their numbers and match them accordingly.
\\
$\bullet$DeepSets~\cite{zaheer2017deep}. DeepSets is used to generate pointsets representation in an unsupervised fashion. The main idea of DeepSets is to transform vectors
of points in a set into a new representation of the set. It aggregates the representations of vectors and then passes them to fully-connected layers to get the representation.
\\
$\bullet$RepSet~\cite{skianis2020rep}. RepSet is also used to generate the representations of pointsets. It adapts a bipartite matching idea by computing the correspondences between an input set and some generated hidden sets to get the set representation. Besides, RepSet captures the unordered nature of a set and can generate the same output for all possible permutations of points in the set.
\\
$\bullet$WSSET~\cite{Arsomngern2021self}. {\color{blue} WSSET achieves state-of-the-art performance on pointset embedding.} It is an approximate version of EMD based on a deep metric learning framework and uses EMD as the metric for inferring positive and negative pairs. It generates representations of pointsets, and these representations can be used for similarity search, classification tasks, etc.
\noindent \textbf{Parameter Setting.}
We set the cell size to be 15 (with the results of its effect shown later on) and thus we have 98 unique tokens for the BS features. For BP and BB features, we use the setting of $15^\circ$ and 10 for partitioning the angle range and distance range, respectively. {\color{blue} We tried different settings, the results are similar and thus omitted.}
For each feature, we embed it into a 10-dimensional vector, and thus the representation dimension of a ball is $10 \times (1+4 \times 6+2)=270$ (i.e., $K'=270$). Recall that for each ball, there is one token in the BS feature, four tokens per pocket in the BP feature (with six pockets in total), and two tokens in the BB feature. {\color{blue} We use a convolutional layer as $Net(\cdot)$ with 3 output channels. For each, it has 52 filters with varying sizes from $(1,K')$ to $(7,K')$.} Note that larger filters help capture the correlations between the balls. A ReLU function is applied after the convolutional layer and then a global max-pooling layer is employed to generate a 156-dimensional representation of the billiards layout {\color{blue} (i.e., $K=3 \times 52 = 156$).} {\Comment Additionally, in the training process, we randomly sample 70\% of the billiards layouts to generate training tuples and adopt Adam stochastic gradient descent with an initial learning rate of $0.00001$; the remaining layouts are used for testing.} To avoid overfitting, we employ an L2 regularization term with $\lambda=0.001$ for $Net(\cdot)$. The margin in the triplet loss is set to {\color{blue} $\delta = 1.0$ based on empirical findings.} {\color{blue} For generating each $\mathcal{B}_+$ (resp. $\mathcal{B}_-$), we set both the noise rate and the drop rate to 0.2 (resp. 0.3) by default.} For the parameters of the baselines, we follow the settings in their original papers.
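The following PyTorch sketch shows one plausible reading of this configuration: ball representations of dimension $K'=270$ are processed by convolutions with three filter heights (assumed here to be 1, 4 and 7, with 52 channels each), followed by ReLU and global max pooling, yielding a 156-dimensional layout embedding trained with a triplet loss with margin 1.0. It is an illustrative reconstruction, not the authors' released code; the exact filter heights and padding are assumptions.
\begin{verbatim}
# Illustrative reconstruction of a TextCNN-style layout encoder + triplet
# loss; filter heights (1, 4, 7) and padding are assumptions.
import torch
import torch.nn as nn

class LayoutEncoder(nn.Module):
    def __init__(self, ball_dim=270, channels=52, heights=(1, 4, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(1, channels, kernel_size=(h, ball_dim), padding=(h - 1, 0))
            for h in heights)

    def forward(self, balls):                  # balls: (batch, n_balls, ball_dim)
        x = balls.unsqueeze(1)                 # (batch, 1, n_balls, ball_dim)
        feats = [torch.relu(c(x)).squeeze(3).max(dim=2).values for c in self.convs]
        return torch.cat(feats, dim=1)         # (batch, 3 * 52) = (batch, 156)

encoder = LayoutEncoder()
loss_fn = nn.TripletMarginLoss(margin=1.0)     # delta = 1.0 as in the setup

anchor, positive, negative = (torch.randn(8, 10, 270) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
\end{verbatim}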
\noindent \textbf{Evaluation Metrics.}
We consider the following three tasks to evaluate the effectiveness.
(1) Similarity search: we report the {\color{blue} Hitting Ratio for Top-10 ($HR@10$) and Mean Reciprocal Rank ($MRR$).} In particular, for a query billiards layout $\mathcal{B}_q$, if its positive version $\mathcal{B}_+$ is in the top-10 returned billiards layouts, then $HR@10$ is defined as $HR@10 = 1$; otherwise $HR@10 = 0$.
$MRR$ is defined as $MRR = \frac{1}{rank}$, where $rank$ denotes the rank of $\mathcal{B}_+$ in the database for its query billiards layout $\mathcal{B}_q$. The average results of $HR@10$ and $MRR$ over all queries are reported in the experiments. A higher $HR@10$ or $MRR$ indicates a better result.
(2) Classification: we report the average classification accuracy for the labels ``clear'' and ``not clear''.
Classification accuracy is a widely used metric in the classification task. A higher accuracy indicates a better result.
(3) Clustering: we report two widely used metrics, Adjusted Rand Index (ARI) and Adjusted Mutual Information (AMI), for clustering.
They measure the correlation between the predicted result and the ground truth. The ARI and AMI values lie in the range $[-1, 1]$. For ease of understanding, we normalize the values in the range of $[0,1]$.
A higher ARI or AMI indicates a higher correspondence to the ground truth.
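The following Python sketch shows how the $HR@10$ and $MRR$ metrics defined above can be computed from a distance matrix between queries and candidates; the distances below are random placeholders.
\begin{verbatim}
# Sketch of computing HR@10 and MRR for the self-similarity search protocol:
# for each query, its perturbed positive copy should rank high among the
# candidates. Distances may come from any embedding or distance measure.
import numpy as np

def hr_and_mrr(dists, pos_idx, k=10):
    """dists: (n_queries, n_candidates) distances; pos_idx[i]: column of B+ for query i."""
    hits, rr = [], []
    for i, row in enumerate(dists):
        order = np.argsort(row)                      # ascending: most similar first
        rank = int(np.where(order == pos_idx[i])[0][0]) + 1
        hits.append(1.0 if rank <= k else 0.0)
        rr.append(1.0 / rank)
    return float(np.mean(hits)), float(np.mean(rr))

# toy example: 100 queries, 501 candidates each, positive placed at column 0
d = np.random.rand(100, 501)
hr10, mrr = hr_and_mrr(d, pos_idx=[0] * 100)
print(hr10, mrr)
\end{verbatim}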
\noindent \textbf{Evaluation Platform.}
All the methods are implemented in Python 3.8. The implementation of $BL2Vec$ is based on PyTorch 1.8.0. The experiments are conducted on a server with 10-cores of Intel(R) Core(TM) i9-9820X CPU @ 3.30GHz 64.0GB RAM and one Nvidia GeForce RTX 2080 GPU. {\Comment The datasets and codes can be downloaded via the link~\footnote{https://www.dropbox.com/sh/644aceaolfv4rky/AAAKhpY1yzrbq9-uo4Df3N0Oa?dl=0}.}
\begin{figure*}[h]
\centering
\begin{tabular}{c c c c}
\begin{minipage}{3.2cm}
\includegraphics[width=17.2cm]{figures/legend}
\end{minipage}
\\
\begin{minipage}{4.0cm}
\includegraphics[width=4.3cm]{figures/hr_noise}
\end{minipage}
&
\begin{minipage}{4cm}
\includegraphics[width=4.3cm]{figures/mr_noise}
\end{minipage}
&
\begin{minipage}{4cm}
\includegraphics[width=4.3cm]{figures/hr_drop}
\end{minipage}
&
\begin{minipage}{4cm}
\includegraphics[width=4.3cm]{figures/mr_drop}
\end{minipage}
\\
(a) Varying noise rate (HR@10)\hspace{-4mm}
&
(b) Varying noise rate (MRR)\hspace{-4mm}
&
(c) Varying drop rate (HR@10)\hspace{-4mm}
&
(d) Varying drop rate (MRR)\hspace{-4mm}
\end{tabular}
\vspace{-2mm}
\caption{Effectiveness evaluation with similarity search.}
\label{fig:effectiveness}
\vspace*{-5mm}
\end{figure*}
\subsection{Experimental Results}
\label{sec:effectiveness}
\begin{table}[]
\centering
\caption {Effectiveness evaluation with classification and clustering.}\label{tab:tasks}
\vspace*{-2mm}
\footnotesize{
\begin{tabular}{c|c|l|l|l|c|lcl}
\hline
\multirow{2}{*}{Tasks} & \multicolumn{4}{c|}{Classification} & \multicolumn{4}{c}{Clustering} \\ \cline{2-9}
& \multicolumn{4}{c|}{Accuracy (\%)} & \multicolumn{2}{c}{ARI (\%)} & \multicolumn{2}{c}{AMI (\%)} \\ \hline
EMD & \multicolumn{4}{c|}{66.33} & \multicolumn{2}{c}{52.15} & \multicolumn{2}{c}{55.81} \\
Hausdorff & \multicolumn{4}{c|}{59.17} & \multicolumn{2}{c}{51.68} & \multicolumn{2}{c}{54.98} \\
DTW & \multicolumn{4}{c|}{59.33} & \multicolumn{2}{c}{50.12} & \multicolumn{2}{c}{50.13} \\
Frechet & \multicolumn{4}{c|}{59.16} & \multicolumn{2}{c}{50.16} & \multicolumn{2}{c}{50.22} \\
PM & \multicolumn{4}{c|}{54.24} & \multicolumn{2}{c}{50.02} & \multicolumn{2}{c}{50.32} \\
DeepSets & \multicolumn{4}{c|}{54.23} & \multicolumn{2}{c}{49.50} & \multicolumn{2}{c}{50.41} \\
RepSet & \multicolumn{4}{c|}{58.99} & \multicolumn{2}{c}{50.54} & \multicolumn{2}{c}{50.72} \\
WSSET & \multicolumn{4}{c|}{56.62} & \multicolumn{2}{c}{49.12} & \multicolumn{2}{c}{49.78} \\
BL2Vec & \multicolumn{4}{c|}{\textbf{72.62}} & \multicolumn{2}{c}{\textbf{61.94}} & \multicolumn{2}{c}{\textbf{63.26}} \\ \hline
\end{tabular}}
\vspace*{-6mm}
\end{table}
\noindent \textbf{(1) Effectiveness evaluation (similarity search).}
The lack of ground truth makes similarity evaluation a challenging problem. To overcome this problem, we follow recent studies~\cite{Arsomngern2021self}, which propose to use $HR@10$ and $MRR$ to quantify the similarity evaluation via self-similarity comparison. Specifically, we randomly select 100 billiards layouts to form the query set (denoted as $Q$) and 500 billiards layouts as the target database (denoted as $D$). For each billiards layout $\mathcal{B}_q \in Q$, we create its positive version, denoted as $\mathcal{B}_+$, by introducing two types of noise. As discussed in Section~\ref{subsec:sample}, (1) we shift each ball along a random direction by a random distance controlled by a noise rate; and (2) we randomly delete some balls, controlled by a drop rate, and then add them back at random locations. Then for each $\mathcal{B}_q$, we compute the rank of $\mathcal{B}_+$ in the database $D \cup \{\mathcal{B}_+\}$ using BL2Vec and the other baselines. Ideally, $\mathcal{B}_+$ should be ranked at the top since it is generated from $\mathcal{B}_q$.
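The following sketch shows one way to generate such a positive version $\mathcal{B}_+$ on the $200 \times 100$ table coordinate system; the exact noise model of Section~\ref{subsec:sample} may differ in its details, so the perturbation below is only illustrative.
\begin{verbatim}
# Sketch of generating a positive (perturbed) copy of a layout: shift every
# ball by a small random offset (noise rate) and re-place a fraction of the
# balls at random positions (drop rate). Only an illustrative noise model.
import random

TABLE_W, TABLE_H = 200, 100

def make_positive(layout, noise_rate=0.2, drop_rate=0.2, seed=None):
    """layout: dict {ball_number: (x, y)} -> perturbed copy."""
    rng = random.Random(seed)
    out = {}
    for ball, (x, y) in layout.items():
        dx = rng.uniform(-noise_rate, noise_rate) * TABLE_W
        dy = rng.uniform(-noise_rate, noise_rate) * TABLE_H
        out[ball] = (min(max(x + dx, 0), TABLE_W), min(max(y + dy, 0), TABLE_H))
    balls = list(out)
    for ball in rng.sample(balls, k=max(1, int(drop_rate * len(balls)))):
        out[ball] = (rng.uniform(0, TABLE_W), rng.uniform(0, TABLE_H))  # drop & re-add
    return out

layout = {0: (50.0, 50.0), 1: (120.0, 30.0), 9: (160.0, 70.0)}  # 0 = cue ball
print(make_positive(layout, seed=7))
\end{verbatim}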
Figure \ref{fig:effectiveness} (a) and (b) show the results of varying the noise rate from 0.2 to 0.4 when the drop rate is fixed to 0.2 in terms of
$HR@10$ and $MRR$. We can see that $BL2Vec$ is significantly better than the baselines in terms of $HR@10$ and $MRR$. For example, when applying 40\% noise, BL2Vec outperforms RepSet and DTW (the two best baselines) by 1.5 times and 4.7 times in terms of $HR@10$, respectively, and by more than 6.2 times and 13.3 times in terms of $MRR$, respectively. EMD and DeepSets perform the worst in most of the cases because EMD is based on the idea of point-matching and is affected significantly by the noise, while
DeepSets is developed for sets with an unordered nature and is therefore not suitable for measuring the similarity among billiards layouts, which are order sensitive.
Figure \ref{fig:effectiveness} (c) and (d) show the results of varying the drop rate from 0.2 to 0.4 when the noise rate is fixed to 0.2 in terms of
$HR@10$ and $MRR$. BL2Vec still performs the best in terms of both metrics. For example, it outperforms RepSet, which performs best in baselines when the drop rate is 0.4, by more than 1.8 times (resp. 9.6 times) in terms of $HR@10$ (resp. $MRR$).
\noindent \textbf{(2) Effectiveness evaluation (classification).}
Table \ref{tab:tasks} shows the results of different methods in the classification task for predicting the labels of ``clear" and ``not clear". Recall that ``clear" means the player who performs the break shot pocketed all balls and thus win the game, and ``not clear" means the otherwise case.
We train a k-nearest neighbour (kNN) classifier with $k = 10$ for this task and report the average accuracy on 500 billiards layouts. For BL2Vec, we fix the noise rate and drop rate to 0.2 by default and use the representations of the billiards layouts for computing the distances in the kNN classifier; we do the same for DeepSets, RepSet and WSSET. For EMD, Hausdorff, DTW, Frechet and PM, we use their distances in the kNN classifier directly.
We observe BL2Vec outperforms all the baselines. For example, it outperforms EMD (the best baseline) by 9.5\%. The result indicates that the representation learned from BL2Vec is more useful for learning the target task.
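A minimal sketch of this protocol with scikit-learn is given below; the embeddings and labels are random placeholders, and for the distance-based baselines the same classifier can be run with a precomputed distance matrix.
\begin{verbatim}
# Sketch of the classification protocol: a 10-nearest-neighbour classifier
# on layout representations (random placeholders here) predicting "clear"
# vs. "not clear".
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X = np.random.randn(500, 156)          # placeholder 156-d layout embeddings
y = np.random.randint(0, 2, size=500)  # 1 = clear, 0 = not clear

knn = KNeighborsClassifier(n_neighbors=10)
acc = cross_val_score(knn, X, y, cv=5, scoring="accuracy").mean()
print(f"average accuracy: {acc:.3f}")
\end{verbatim}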
\noindent \textbf{(3) Effectiveness evaluation (clustering).}
We further show the clustering task, which is to group billiards layouts belonging to the same family (i.e., ``clear" or ``not clear") into the same cluster. We use the K-means algorithm with $k = 2$ and report ARI and AMI on the
500 billiards layouts. The results are shown in Table~\ref{tab:tasks}. According to the results, BL2Vec also performs best.
For example, it outperforms EMD (the best baseline) by 18.8\% (resp. 13.3\%) in terms of ARI (resp. AMI). In addition, we observe that the other baselines have similar ARI and AMI values, mostly around 50\%. Similar to the classification task, the results indicate that adopting the embeddings of BL2Vec for billiards layout clustering yields superior performance.
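A minimal sketch of this clustering protocol with scikit-learn is shown below; again, the embeddings and labels are random placeholders.
\begin{verbatim}
# Sketch of the clustering protocol: k-means with k = 2 on the layout
# embeddings, evaluated against the "clear"/"not clear" labels with ARI/AMI.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

X = np.random.randn(500, 156)          # placeholder layout embeddings
y = np.random.randint(0, 2, size=500)  # ground-truth labels

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("ARI:", adjusted_rand_score(y, pred),
      "AMI:", adjusted_mutual_info_score(y, pred))
\end{verbatim}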
\begin{table}[]
\centering
\caption {The effect of the cell size.}\label{tab:parameter}
\vspace*{-2mm}
\centering
\footnotesize{
\begin{tabular}{c|c|cc|c|l|l|l|c|lcl}
\hline
\multirow{2}{*}{Cell size} & \multirow{2}{*}{\#tokens} & \multicolumn{2}{c|}{Similarity} & \multicolumn{4}{c|}{Classification} & \multicolumn{4}{c}{Clustering} \\ \cline{3-12}
& & HR@10 & MRR & \multicolumn{4}{c|}{Accuracy (\%)} & \multicolumn{2}{c}{ARI (\%)} & \multicolumn{2}{c}{AMI (\%)} \\ \hline
10 & 200 & 0.92 & 0.240 & \multicolumn{4}{c|}{66.67} & \multicolumn{2}{c}{52.25} & \multicolumn{2}{c}{55.80} \\
15 & 98 & 0.94 & 0.252 & \multicolumn{4}{c|}{\textbf{72.62}} & \multicolumn{2}{c}{\textbf{61.94}} & \multicolumn{2}{c}{\textbf{63.26}} \\
20 & 50 & 0.95 &0.259 & \multicolumn{4}{c|}{67.33} & \multicolumn{2}{c}{56.96} & \multicolumn{2}{c}{58.25} \\
25 & 32 & 0.96 &0.303 & \multicolumn{4}{c|}{70.14} & \multicolumn{2}{c}{55.21} & \multicolumn{2}{c}{57.37} \\
30 & 28 & \textbf{0.96} &\textbf{0.306} & \multicolumn{4}{c|}{68.72} & \multicolumn{2}{c}{53.96} & \multicolumn{2}{c}{56.28} \\ \hline
\end{tabular}}
\vspace*{-6mm}
\end{table}
\noindent \textbf{(4) Effectiveness evaluation (parameter study).}
\label{sec:parameter}
We next evaluate the effect of the cell size on the effectiveness of similarity search, classification and clustering for BL2Vec.
Intuitively, a smaller cell size generates more tokens and provides a higher resolution of the pool table. However, a smaller cell size also reduces the robustness against noise in the similarity search task. In Table \ref{tab:parameter}, we report the performance for similarity search, classification and clustering. We notice that the performance of the similarity search becomes better as the cell size grows, which is in line with our intuition. In addition, we observe that the smallest cell size of 10 leads to the worst performance, especially for classification and clustering, because it produces the most tokens, which makes the model more difficult to train. In our experiments, we set the cell size to 15 because it provides better performance in general.
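The relation between the cell size and the number of tokens reported in Table~\ref{tab:parameter} can be reproduced with the following sketch, assuming square cells of side $c$ over the $200 \times 100$ table coordinate system.
\begin{verbatim}
# Sketch: the table is covered by ceil(200/c) x ceil(100/c) cells, each cell
# being one BS token; this reproduces the #tokens column of the table.
import math

def n_tokens(cell_size, width=200, height=100):
    return math.ceil(width / cell_size) * math.ceil(height / cell_size)

for c in (10, 15, 20, 25, 30):
    print(c, n_tokens(c))   # 200, 98, 50, 32, 28
\end{verbatim}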
\begin{table}[]
\centering
\caption {Effectiveness evaluation with ablation study.}\label{tab:ablation}
\vspace*{-2mm}
\footnotesize{
\begin{tabular}{c|cc|c|l|l|l|c|lcl}
\hline
\multirow{2}{*}{Model} & \multicolumn{2}{c|}{Similarity} & \multicolumn{4}{c|}{Classification} & \multicolumn{4}{c}{Clustering} \\ \cline{2-11}
& HR@10 & MRR & \multicolumn{4}{c|}{Accuracy (\%)} & \multicolumn{2}{c}{ARI (\%)} & \multicolumn{2}{c}{AMI (\%)} \\ \hline
BL2Vec & \textbf{0.94} & \textbf{0.252} & \multicolumn{4}{c|}{\textbf{72.62}} & \multicolumn{2}{c}{\textbf{61.94}} & \multicolumn{2}{c}{\textbf{63.26}} \\
w/o BS & 0.92 & 0.207 & \multicolumn{4}{c|}{62.42} & \multicolumn{2}{c}{53.50} & \multicolumn{2}{c}{59.25} \\
w/o BP & 0.11 & 0.009 & \multicolumn{4}{c|}{51.08} & \multicolumn{2}{c}{49.27} & \multicolumn{2}{c}{50.18} \\
w/o BB & 0.91 & 0.197 & \multicolumn{4}{c|}{69.52} & \multicolumn{2}{c}{52.45} & \multicolumn{2}{c}{61.73} \\
w/o $Net(\cdot)$ & 0.40 & 0.078 & \multicolumn{4}{c|}{41.76} & \multicolumn{2}{c}{50.38} & \multicolumn{2}{c}{50.67} \\ \hline
\end{tabular}}
\end{table}
\begin{figure}
\begin{tabular}{c c}
\begin{minipage}{0.4\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/running_time}
\vspace*{-5mm}
\caption{Scalability.}
\label{fig:efficiency}
\end{minipage}
&
\begin{minipage}{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/user_study}
\vspace*{-5mm}
\caption{User study.}
\label{fig:userstudy}
\end{minipage}
\end{tabular}
\vspace*{-6mm}
\end{figure}
\noindent \textbf{(5) Effectiveness evaluation (ablation study).}
\label{sec:ablation}
To show the importance of each component of the extracted features in our BL2Vec model, including Ball-self (BS), Ball-Pocket (BP) and Ball-Ball (BB), and {\color{blue} the ability of the CNN in $Net(\cdot)$ to capture spatial features}, we conduct an ablation experiment. We denote our BL2Vec model without BS, BP, BB and {\color{blue} $Net(\cdot)$} as w/o BS, w/o BP, w/o BB {\color{blue} and w/o $Net(\cdot)$}, respectively. Table~\ref{tab:ablation} compares the different versions for the similarity search, classification and clustering tasks. Overall, all these components help improve the effectiveness of the BL2Vec model and enable it to achieve superior performance over the baselines. A detailed discussion on the effectiveness of each component is given as follows.
(1) BS features can effectively capture the spatial information of each ball; if the features are omitted, the classification performance drops significantly by around 14\%.
(2) BP features capture the correlations between the ball and each pocket, which are unique characteristics of the billiards layout data. In addition, BP features are associated with the most tokens (i.e., $4 \times 6 =24$) out of the total of 27 tokens used to represent each ball, as discussed in Section~\ref{subsec:feature_extraction}, and therefore the BP features contribute the most to all of the tasks. As expected, we observe that if the BP features are omitted, the performance of similarity search drops by 88\% (resp. 96\%) in terms of HR@10 (resp. MRR); that of classification drops by 30\% in terms of accuracy; and that of clustering drops by 20\% (resp. 21\%) in terms of ARI (resp. AMI). (3) BB features capture the correlation between balls in the layout. These features also affect the overall performance; for example, if the BB features are omitted, the clustering performance drops considerably, by around 15\% (resp. 2\%) in terms of ARI (resp. AMI). {\color{blue} (4) The CNN in $Net(\cdot)$ captures the correlation of those spatial features (i.e., BS, BP and BB) well, and simply using the average of the feature embeddings cannot lead to superior performance, e.g., similarity search drops by 57.4\% (resp. 69.05\%) in terms of HR@10 (resp. MRR).}
\noindent \textbf{(6) Scalability.}
\label{sec:efficiency}
{\color{blue}
To test scalability, we generate more layouts by adding noise on the original dataset.
}
Figure \ref{fig:efficiency} reports the average running time over 100 queries for computing the similarity between a query billiards layout and the billiards layouts in the target database when we vary its size {\color{blue} from 10,000 layouts to 50,000 layouts.}
Clearly, EMD and Hausdorff perform extremely slowly, which matches their quadratic time complexity.
We observe that the running time of DTW and Frechet is smaller than that of EMD and Hausdorff, although all of them have the same time complexity. This is because DTW and Frechet compute the similarity via dynamic programming to find an optimal pairwise point-matching.
PM is faster than DTW and Frechet because it matches the balls peer-to-peer based on their numbers and its complexity is linear.
In addition, DeepSets, WSSET, RepSet and BL2Vec have similar running times.
Specifically, they run up to 170 times faster than EMD and Hausdorff, 100 times faster than DTW and Frechet, and 50 times faster than PM. Moreover, we notice that DeepSets, WSSET, RepSet and BL2Vec scale linearly with the dataset size, and the disparity between them and the other methods increases as the size of the target database grows. This is because all of the learning-based methods have linear time complexity for computing the similarity by embedding layouts into vectors, and the embedding process can also be done offline, which is consistent with their time complexities.
\begin{figure*}
\centering
\begin{tabular}{c c c c c}
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/query_63_63}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/query_40_40}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/query_42_42}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/query_43_43}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/query_46_46}%
\end{minipage}
\\
Q1
&
Q2
&
Q3
&
Q4
&
Q5
\end{tabular}
\vspace*{-3mm}
\caption{First five queries that are used for user study.}
\label{userstudy}
\vspace*{-2mm}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{c c c c c}
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dtw_63_0}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dtw_40_0}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dtw_42_0}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dtw_43_0}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dtw_46_0}%
\end{minipage}
\\
DTW 1 (1/10)
&
DTW 2 (2/10)
&
DTW 3 (1/10)
&
DTW 4 (6/10)
&
DTW 5 (6/10)
\\
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dml_63_0}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dml_40_0}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dml_42_0}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dml_43_0}%
\end{minipage}
&
\begin{minipage}{0.18\linewidth}
\includegraphics[width=\linewidth]{figures/user/dml_46_0}%
\end{minipage}
\\
BL2Vec 1 (9/10)
&
BL2Vec 2 (8/10)
&
BL2Vec 3 (9/10)
&
BL2Vec 4 (4/10)
&
BL2Vec 5 (4/10)
\\
\end{tabular}
\setlength{\abovecaptionskip}{1pt}
\caption{Top-1 billiards layouts for Q1-Q5 returned by DTW and BL2Vec, where (1/10) means that 1 out of the 10 users voted for this result.}
\label{top5}
\vspace*{-2mm}
\end{figure*}
\noindent \textbf{(7) User study.} {\color{blue} Since DTW generally performs the best among all baselines in the similar billiards layout retrieval task}, we conduct a user study to compare BL2Vec with DTW. We randomly select 10 billiards layouts as queries (the first five, Q1 to Q5, are shown in Figure~\ref{userstudy}), and for each query in the query set, we use BL2Vec and DTW to retrieve the Top-1 billiards layout from the target database.
We invite ten volunteers with strong billiards knowledge to annotate the relevance of the retrieved results. More specifically, we first spend 15 minutes introducing the background to the volunteers so that they understand these queries. Then, for each of the 10 queries, the volunteers specify the more similar result between the Top-1 result retrieved by BL2Vec and the Top-1 result retrieved by DTW. Note that the volunteers do not know in advance which result is returned by which method. We report the scores of the ten queries for both methods in Figure~\ref{fig:userstudy}. We observe that our BL2Vec outperforms DTW with expert knowledge. In particular, BL2Vec wins on 8 out of the 10 queries. In addition, BL2Vec gets 81\% of the votes while DTW gets only 19\%. {\Comment We illustrate the Top-1 results of BL2Vec and DTW for Q1 to Q5 in Figure~\ref{top5}.
From these figures, we observe that the layouts retrieved by BL2Vec match the queries very well. For example, the overall layout of the Top-1 result by BL2Vec is very similar to Q1. We also note that DTW is slightly better than BL2Vec for Q4 and Q5 (i.e., by 2 votes out of 10).
We believe this might be because the layouts returned by DTW have ball locations more similar to those in the query layouts, and this is perceived by the users. Nevertheless, with a closer check, the balls at similar locations in the layouts returned by DTW and in the query layouts have different numbers (i.e., colors), and thus they should not be regarded as that similar to each other. This illustrates DTW's drawback that it does not capture information other than the ball locations well.
We omit the results for Q6 to Q10 due to the page limit.}
\section{CONCLUSIONS}
\label{sec:conclusion}
In this paper, {{we study the problem of online anomalous subtrajectory detection on road networks and propose the first deep reinforcement learning based solution called \texttt{RL4OASD}.
\texttt{RL4OASD} is a data-driven approach that can be trained without labeled data.
We conduct extensive experiments on two real-world taxi trajectory datasets with manually labeled anomalies. The results demonstrate that \texttt{RL4OASD} consistently outperforms existing algorithms, runs comparably fast, and
supports the detection in varying traffic conditions.}}
{{In the future, we will explore the cold-start problem when the historical trajectories are not sufficient.
}}
\smallskip
\noindent\textbf{Acknowledgments:}
The project is partially supported by the funding from HKU-SCF FinTech Academy and the ITF project (ITP/173/18FP). This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-PhD/2021-08-024[T] and AISG Award No: AISG2-TC-2021-001).
This research is also supported by the Ministry of Education, Singapore, under its Academic Research Fund (Tier 2 Awards MOE-T2EP20220-0011 and MOE-T2EP20221-0013). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore and Ministry of Education, Singapore.
\section{EXPERIMENTS}
\label{sec:experiment}
\subsection{Experimental Setup}
\label{sec:setup}
\noindent \textbf{Dataset.}
The experiments are conducted on two real-world taxi trajectory datasets from DiDi Chuxing~\footnote{https://outreach.didichuxing.com/research/opendata/en/}, namely \emph{Chengdu} and \emph{Xi'an}. All raw trajectories are preprocessed into map-matched trajectories via a popular map-matching algorithm~\cite{yang2018fast}, and the road networks of the two cities are obtained from OpenStreetMap~\footnote{https://www.openstreetmap.org/}. {{Following previous studies~\cite{liu2020online,chen2013iboat}, we preprocess the datasets and filter out those SD-pairs that contain fewer than 25 trajectories, so that each retained pair has sufficient trajectories to indicate the normal routes. We randomly sample 10,000 trajectories from the datasets for training, and use the remaining ones for testing.
}}
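For illustration, the following is a minimal Python sketch of this preprocessing step (SD-pair filtering and the train/test split); the trajectory representation (a dictionary with \texttt{source} and \texttt{dest} fields) and the helper names are hypothetical and are not taken from our released code.
\begin{verbatim}
# Sketch of SD-pair filtering and the train/test split (illustrative only).
import random
from collections import defaultdict

def filter_sd_pairs(trajs, min_count=25):
    # Keep only trajectories whose (source, destination) pair
    # is traveled by at least min_count trajectories.
    groups = defaultdict(list)
    for t in trajs:
        groups[(t["source"], t["dest"])].append(t)
    return [t for ts in groups.values() if len(ts) >= min_count for t in ts]

def split_train_test(trajs, n_train=10000, seed=0):
    # Randomly sample n_train trajectories for training; the rest are for testing.
    trajs = list(trajs)
    random.Random(seed).shuffle(trajs)
    return trajs[:n_train], trajs[n_train:]
\end{verbatim}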
\smallskip
\noindent \textbf{Ground Truth.}
By following the previous works~\cite{zhang2011ibat,chen2013iboat, lv2017outlier}, we manually label the anomalous subtrajectories, and take the labeled subtrajectories as the ground truth for the evaluation.
{{Specifically, we sample 200 SD pairs with sufficient trajectories between them (e.g., at least 30 trajectories for each pair, and over 900 trajectories on average).}}
{{For labeling subtrajectories, we illustrate all routes (i.e., map-matched trajectories) within each SD pair and highlight the road segments that are traveled by the majority of trajectories as shown in Figure~\ref{fig:casestudy}. We invite 5 participants, first spend 10 minutes getting everyone to understand the visualization, and then let them identify whether the routes are anomalous or not based on visual inspection. If a participant considers a route anomalous, we then ask the participant to label which parts are responsible for its anomalousness.
}}
For quality control, we randomly pick 10\% of the trajectories, ask 5 other checkers to label these trajectories independently, adopt majority voting to aggregate the labels, and compute the accuracy of the labels given by the labelers against the aggregated ones given by the checkers. The accuracy is 98.7\% for Chengdu and 94.3\% for Xi'an, which shows that our labeled datasets are of high accuracy.
Multiple raw trajectories may correspond to the same route and thus we have fewer routes than raw trajectories. We label 1,688 (resp. 1,057) routes, which correspond to 558,098 (resp. 163,027) raw trajectories before map-matching for Chengdu (resp. Xi'an). Among them, 1,436 (resp. 813) routes that correspond to 3,930 (resp. 2,368) raw trajectories are identified as anomalous for Chengdu (resp. Xi'an), where the anomalous ratios for Chengdu and Xi'an are estimated as $3,930/558,098=0.7\%$ and $2,368/163,027=1.5\%$, respectively. The statistics of the datasets are reported in Table~\ref{tab:dataset}.
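As an illustration of this quality-control step, the following Python sketch aggregates the checkers' labels by majority voting and computes the labeler accuracy; the toy labels below are hypothetical.
\begin{verbatim}
# Sketch of the majority-voting quality check (toy data for illustration).
from collections import Counter

def majority_vote(votes):
    # votes: the 0/1 labels given by the checkers for one route
    return Counter(votes).most_common(1)[0][0]

def labeler_accuracy(labeler_labels, checker_votes):
    aggregated = [majority_vote(v) for v in checker_votes]
    matches = sum(a == b for a, b in zip(labeler_labels, aggregated))
    return matches / len(labeler_labels)

labeler_labels = [1, 0, 1, 1]   # labels on the re-checked routes (toy values)
checker_votes = [[1, 1, 0, 1, 1], [0, 0, 0, 1, 0], [1, 1, 1, 1, 1], [1, 0, 1, 1, 0]]
print(labeler_accuracy(labeler_labels, checker_votes))   # 1.0 on this toy example
\end{verbatim}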
\begin{table}[]
\setlength{\tabcolsep}{8pt}
\centering
\vspace{-4mm}
\caption{Dataset statistics.}
\vspace{-3mm}
\begin{tabular}{lll}
\hline
Dataset & Chengdu & Xi'an \\ \hline
\# of trajectories & 677,492 & 373,054 \\
\# of segments & 4,885 & 5,052 \\
\# of intersections & 12,446 & 13,660 \\
\# of labeled routes (trajs) & 1,688 (558,098) & 1,057 (163,027) \\
\# of anomalous routes (trajs) & 1,436 (3,930) & 813 (2,368) \\
Anomalous ratio & 0.7\% & 1.5\% \\
Sampling rate & 2s $\sim$ 4s & 2s $\sim$ 4s\\
\hline
\end{tabular}
\label{tab:dataset}
\vspace{-4mm}
\end{table}
\if 0
\begin{table*}
\centering
\begin{tabular}{c c}
\begin{minipage}{0.7\linewidth}
\centering
\makeatletter\def\@captype{table}\makeatother\caption{Effectiveness comparison with existing baselines.}
\vspace{-3mm}
\setlength{\tabcolsep}{4.8pt}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Mathods & \multicolumn{5}{c|}{Chengdu} & \multicolumn{5}{c|}{Xi'an} \\ \hline
Trajectory Length & G1 & G2 & G3 & G4 &Overall & G1 & G2 & G3 & G4 &Overall \\ \hline
IBOAT~\cite{chen2013iboat} &0.426 &0.431 &0.406 &0.590 &0.431 &0.424 &0.467 &0.487 &0.503 &0.472 \\ \hline
DBTOD~\cite{wu2017fast} &0.523 &0.531 &0.515 &0.669 &0.530 &0.519 &0.448 &0.441 &0.483 &0.433 \\ \hline
GM-VSAE~\cite{liu2020online} &0.375 &0.493 &0.504 &0.658 &0.451 &0.371 &0.464 &0.498 &0.521 &0.467 \\ \hline
SD-VSAE~\cite{liu2020online} &0.373 &0.491 &0.498 &0.637 &0.448 &0.373 &0.459 &0.478 &0.507 &0.458 \\ \hline
SAE~\cite{malhotra2016lstm} &0.375 &0.492 &0.499 &0.658 &0.450 &0.372 &0.460 &0.479 &0.517 &0.461 \\ \hline
VSAE~\cite{kingma2013auto} &0.374 &0.492 &0.504 &0.656 &0.450 &0.369 &0.461 &0.487 &0.520 &0.463 \\ \hline
CTSS~\cite{zhang2020continuous} &0.730 &0.708 &0.625 &0.741 &0.706 &0.657 &0.637 &0.636 &0.672 & 0.658 \\ \hline
\textbf{\texttt{RL4OASD}} &\textbf{0.888} &\textbf{0.892} &\textbf{0.725} &\textbf{0.774} & \textbf{0.854} &\textbf{0.964} &\textbf{0.864} &\textbf{0.809} &\textbf{0.844} &\textbf{0.857} \\ \hline
\end{tabular}
\label{tab:baselines}
\end{minipage}
&
\begin{minipage}{0.20\linewidth}
\centering
\makeatletter\def\@captype{table}\makeatother\caption{Ablation study.}
\vspace{-3mm}
\setlength{\tabcolsep}{1.2pt}
\begin{tabular}{|c|c|}
\hline
Effectiveness & $F_1$-score \\ \hline
\texttt{RL4OASD} &\textbf{0.854} \\ \hline
w/o noisy labels & 0.626 \\ \hline
w/o road seg embeddings & 0.828 \\ \hline
w/o RNEL & 0.816 \\ \hline
w/o DL & 0.737 \\ \hline
w/o local reward &0.850 \\ \hline
w/o global reward &0.849 \\ \hline
{{w/o ASDNet}} & {{0.508}} \\ \hline
{{only transition frequency}} & {{0.643}} \\ \hline
\end{tabular}
\label{tab:ablation}
\end{minipage}
\end{tabular}
\vspace*{-2mm}
\end{table*}
\fi
\smallskip
\noindent \textbf{Baseline.}
We review the literature thoroughly, and identify the following baselines for the online detection problem, including IBOAT~\cite{chen2013iboat}, DBTOD~\cite{wu2017fast}, GM-VSAE~\cite{liu2020online}, SD-VSAE~\cite{liu2020online}, SAE~\cite{liu2020online}, VSAE~\cite{liu2020online} and CTSS~\cite{zhang2020continuous}. The detailed descriptions of these algorithms are as follows:
{{
\begin{itemize}[leftmargin=*]
\item \textbf{IBOAT}~\cite{chen2013iboat}: it is an online method that detects anomalous trajectories by checking which parts of a trajectory are isolated from the reference (i.e., normal) trajectories between the same source and destination.
\item \textbf{DBTOD}~\cite{wu2017fast}: it utilizes a probabilistic model to perform anomalous trajectory detection by modeling human driving behaviors from historical trajectories.
\item \textbf{GM-VSAE}~\cite{liu2020online}: it is a method aiming to detect anomalous trajectories via a generation scheme. This method utilizes the Gaussian mixture distribution to represent categories of different normal routes. Then based on these representations, the model detects anomalous trajectories which are not well-generated.
\item \textbf{SD-VSAE}~\cite{liu2020online}: it is a fast version of GM-VSAE, which outputs one representation of the normal route with the maximized probability. Based on this normal route representation, the model detects anomalous trajectories that cannot be generated well, following GM-VSAE.
\item \textbf{SAE}~\cite{liu2020online}: we adapt GM-VSAE, where SAE replaces the encoder and decoder structure in GM-VSAE with a traditional Seq2Seq model, which aims to minimize a reconstruction error, and the reconstruction error is further used to define the anomaly score.
\item \textbf{VSAE}~\cite{liu2020online}: we also adapt GM-VSAE by replacing the Gaussian mixture distribution of the latent route representations with a single Gaussian distribution.
\item \textbf{CTSS}~\cite{zhang2020continuous}: it is a method to detect anomalous trajectories via calculating the trajectory similarity with discrete Frechet distance between a given reference route and the current partial route at each timestamp.
\end{itemize}
}}
{{We notice that the baselines~\cite{wu2017fast,liu2020online,zhang2020continuous} are proposed for anomalous trajectory detection and output an anomaly score for each point in a trajectory. Those scores are computed from the beginning of the trajectory, i.e., they only consider subtrajectories starting from the source, which makes them difficult to adapt to the subtrajectory detection task, where a detected subtrajectory can start at any position of the trajectory. Thus, we adapt them as follows.
We tune their thresholds on the anomaly scores using a development set (i.e., a set of 100 trajectories with manual labels), and for each tuned threshold, the anomalous subtrajectories are identified as those road segments whose anomaly scores are larger than the threshold. The threshold associated with the best performance (evaluated by the $F_1$-score; details will be presented later) is selected for the experiments. In addition, we also tune the other parameters of the baselines to their best settings based on the development set.
For \texttt{RL4OASD}, it naturally outputs the anomalous subtrajectories for the experiments.
}}
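A minimal sketch of this adaptation is given below: we sweep candidate thresholds on the development set and keep the one with the best score. For brevity, the sketch uses a point-level binary $F_1$-score and an illustrative candidate grid, whereas the experiments use the Jaccard-based $F_1$-score defined later; all names are hypothetical.
\begin{verbatim}
# Sketch of tuning a baseline's anomaly-score threshold on the development set.
import numpy as np

def point_f1(pred, gold):
    # Binary F1 over per-point 0/1 labels of one trajectory.
    tp = sum(1 for p, g in zip(pred, gold) if p and g)
    prec = tp / max(sum(pred), 1)
    rec = tp / max(sum(gold), 1)
    return 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0

def tune_threshold(dev_scores, dev_labels, candidates=np.linspace(0.0, 1.0, 101)):
    # dev_scores / dev_labels: per-trajectory lists of per-point scores and 0/1 labels.
    best_t, best_f1 = None, -1.0
    for t in candidates:
        f1s = [point_f1([s > t for s in scores], labels)
               for scores, labels in zip(dev_scores, dev_labels)]
        if np.mean(f1s) > best_f1:
            best_t, best_f1 = float(t), float(np.mean(f1s))
    return best_t, best_f1
\end{verbatim}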
\begin{table*}[]
\vspace*{-4mm}
{{\caption{Effectiveness comparison with existing baselines (Left: $F_1$-score, right: $TF_1$-score).}
\label{tab:baselines}}}
\setlength{\tabcolsep}{2.4pt}
\vspace*{-3mm}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Methods & \multicolumn{10}{c|}{Chengdu} & \multicolumn{10}{c|}{Xi'an} \\ \hline
\begin{tabular}[c]{@{}c@{}}Trajectory \\ Length\end{tabular} & \multicolumn{2}{c|}{G1} & \multicolumn{2}{c|}{G2} & \multicolumn{2}{c|}{G3} & \multicolumn{2}{c|}{G4} &\multicolumn{2}{c|}{Overall} & \multicolumn{2}{c|}{G1} & \multicolumn{2}{c|}{G2} & \multicolumn{2}{c|}{G3} & \multicolumn{2}{c|}{G4} &\multicolumn{2}{c|}{Overall} \\ \hline
IBOAT~\cite{chen2013iboat} &0.534 &{{0.541}} &0.538 &{{0.544}} &0.519 &{{0.525}} &0.674 &{{0.781}} &0.539 &{{0.550}} &0.556 &{{0.615}} &0.493 &{{0.495}} &0.497 &{{0.548}} &0.493 &{{0.463}} &0.506 &{{0.519}} \\ \hline
DBTOD~\cite{wu2017fast} &0.523 &{{0.533}}
&0.531 &{{0.537}} &0.515 &{{0.522}} &0.669 &{{0.750}} &0.530 &{{0.542}} &0.519 &{{0.523}} &0.448 &{{0.381}} &0.441 &{{0.380}} &0.483 &{{0.452}} &0.433 &{{0.424}} \\ \hline
GM-VSAE~\cite{liu2020online} &0.375 &{{0.383}} &0.493 &{{0.498}} &0.507 &{{0.513}} &0.669 &{{0.750}} &0.452&{{0.461}} &0.270&{{0.272}} &0.361 &{{0.308}} &0.389 &{{0.332}} &0.421 &{{0.388}} &0.363 &{{0.328}} \\ \hline
SD-VSAE~\cite{liu2020online} &0.375 &{{0.383}} &0.463 &{{0.466}} &0.416 &{{0.406}} &0.555 &{{0.612}} &0.452 &{{0.461}} &0.270 &{{0.272}} &0.353 &{{0.300}} &0.385 &{{0.329}} &0.386 &{{0.360}} &0.350 &{{0.319}} \\ \hline
SAE~\cite{liu2020online} &0.375 &{{0.383}} &0.461 &{{0.463}} &0.413 &{{0.406}} &0.536 &{{0.603}} &0.451 &{{0.461}} &0.270 &{{0.272}} &0.359 &{{0.300}} & 0.386 &{{0.329}} &0.410 &{{0.379}} &0.363 &{{0.328}} \\ \hline
VSAE~\cite{liu2020online} &0.375 &{{0.383}} &0.491 &{{0.496}} &0.493 &{{0.497}} &0.655 &{{0.734}} &0.448 &{{0.457}} &0.262 &{{0.265}} &0.340 &{{0.296}} &0.371 &{{0.316}} &0.369 &{{0.345}} &0.339 &{{0.309}} \\ \hline
CTSS~\cite{zhang2020continuous} &0.730 &{{0.786}} &0.708 &{{0.764}} &0.625 &{{0.657}} &0.741 &{{0.845}} &0.706 &{{0.758}} &0.657 &{{0.669}} &0.637 &{{0.688}} &0.636 &{{0.689}} &0.672 &{{0.700}} &0.658 &{{0.689}} \\ \hline
\texttt{RL4OASD} &\textbf{0.888} &{{\textbf{0.905}}} &\textbf{0.892} &{{\textbf{0.910}}} &\textbf{0.725} &{{\textbf{0.720}}} &\textbf{0.774} &{{\textbf{0.853}}} &\textbf{0.854} &{{\textbf{0.870}}} &\textbf{0.964} &{{\textbf{0.973}}} &\textbf{0.864} &{{\textbf{0.888}}} &\textbf{0.809} &{{\textbf{0.843}}} &\textbf{0.844} &{{\textbf{0.870}}} &\textbf{0.857} &{{\textbf{0.883}}} \\ \hline
\end{tabular}
\vspace*{-4mm}
\end{table*}
\smallskip
\noindent \textbf{Parameter Setting.} In RSRNet, we embed the TCF and NRF features into 128-dimensional vectors, and use an LSTM with 128 hidden units to implement RSRNet. {{The parameters $\alpha$, $\delta$ and $D$ are set to 0.5, 0.4 and 8, respectively.}}
To train RSRNet, we compute noisy labels in 24 time slots with one-hour granularity, based on empirical studies. In ASDNet, the dimension of the label vectors $\mathbf{v}(e_{i}.l)$ is 128. The policy network is implemented with a single-layer feedforward neural network, and the softmax function is adopted as the activation function. Based on empirical findings, the learning rates for RSRNet and ASDNet are set to 0.01 and 0.001, respectively.
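To make the shapes above concrete, the following tf.keras sketch mirrors this configuration (128-dimensional embeddings, an LSTM with 128 hidden units, and a single-layer softmax policy). It is not our TensorFlow 1.8 implementation; the raw feature dimensions, the layout of the policy input, and the optimizer choice are assumptions for illustration.
\begin{verbatim}
# Illustrative tf.keras sketch of the network shapes (not the actual implementation).
import tensorflow as tf

EMB_DIM, HIDDEN, TCF_DIM, NRF_DIM = 128, 128, 16, 4   # feature dims are placeholders

tcf_in = tf.keras.Input(shape=(None, TCF_DIM))        # per-segment traffic context features
nrf_in = tf.keras.Input(shape=(None, NRF_DIM))        # per-segment normal route features
tcf_emb = tf.keras.layers.Dense(EMB_DIM)(tcf_in)      # embed TCF into 128-d vectors
nrf_emb = tf.keras.layers.Dense(EMB_DIM)(nrf_in)      # embed NRF into 128-d vectors
states = tf.keras.layers.LSTM(HIDDEN, return_sequences=True)(
    tf.keras.layers.Concatenate()([tcf_emb, nrf_emb]))
rsrnet = tf.keras.Model([tcf_in, nrf_in], states)     # RSRNet: LSTM with 128 hidden units

# Policy of ASDNet: a single-layer feedforward network with a softmax over
# {normal, anomalous}; here its input concatenates the state with the 128-d
# label vector of the previous road segment (an assumed layout).
state_in = tf.keras.Input(shape=(HIDDEN + EMB_DIM,))
action_probs = tf.keras.layers.Dense(2, activation="softmax")(state_in)
policy = tf.keras.Model(state_in, action_probs)

opt_rsrnet = tf.keras.optimizers.Adam(learning_rate=0.01)    # optimizer choice assumed
opt_asdnet = tf.keras.optimizers.Adam(learning_rate=0.001)
\end{verbatim}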
\begin{table}[t!]
\setlength{\tabcolsep}{10pt}
\centering
\caption{Ablation study for \texttt{RL4OASD}.}
\vspace{-3mm}
\begin{tabular}{|c|c|}
\hline
Effectiveness & $F_1$-score \\ \hline
\texttt{RL4OASD} &\textbf{0.854} \\ \hline
w/o noisy labels & 0.626 \\ \hline
w/o road segment embeddings & 0.828 \\ \hline
w/o RNEL & 0.816 \\ \hline
w/o DL & 0.737 \\ \hline
w/o local reward &0.850 \\ \hline
w/o global reward &0.849 \\ \hline
{{w/o ASDNet}} & {{0.508}} \\ \hline
{{only transition frequency}} & {{0.643}} \\ \hline
\end{tabular}
\label{tab:ablation}
\vspace{-5mm}
\end{table}
\smallskip
\noindent \textbf{Evaluation Metrics.}
To verify the results of subtrajectory detection, we consider two evaluation metrics. \underline{First}, we adapt the evaluation metric $F_1$-score proposed for Named Entity Recognition (NER)~\cite{li2021modularized,li2021effective}.
This is inspired by the fact that our task corresponds to tagging subsequences of a sequence, which is similar to NER, where phrases (i.e., subsequences) of a sentence (i.e., a sequence) are tagged.
The intuition is that we take the anomalous subtrajectories as entities in the NER task. Specifically, (1) let $C_{g,i}$ denote a manually labeled anomalous subtrajectory $i$ in the ground truth set $C_g$, and $C_{o,i}$ denote the corresponding subtrajectory $i$ in the set $C_o$ returned by a detection method. We employ the Jaccard similarity to measure the ratio of the intersection over the union of the road segments between $C_{g,i}$ and $C_{o,i}$.
(2) We then measure the similarity between $C_{g}$ and $C_{o}$ by aggregating the Jaccard $\mathcal{J}_i(C_{g,i},C_{o,i})$.
\begin{align}
\mathcal{J}_i(C_{g,i},C_{o,i}) = \frac{|C_{g,i} \cap C_{o,i}|}{|C_{g,i} \cup C_{o,i}|}, \ \mathcal{J}(C_g, C_o) = \sum_{i=1}^{|C_g|} \mathcal{J}_i(C_{g,i},C_{o,i}).
\label{eq:jac}
\end{align}
Note that the intersection and union operations between $C_{g,i}$ and $C_{o,i}$ are computed over the road segments that are labeled as anomalous (i.e., with label 1) for trajectory $i$. (3) Finally, we define the precision (P) and recall (R) by following~\cite{li2021modularized,li2021effective}, and compute the $F_1$-score accordingly.
\begin{align}
\text{P} = \frac{\mathcal{J}(C_g, C_o)}{|C_o|}, \quad \text{R} = \frac{\mathcal{J}(C_g, C_o)}{|C_g|}, \quad F_{1} = 2\times\frac{\text{P}\times\text{R}}{\text{P}+\text{R}}.
\label{eq:f1}
\end{align}
{{\underline{Second}, we further design a variant of $F_1$-score. { Specifically, we re-define the Jaccard similarity $\mathcal{J}_i(C_{g,i},C_{o,i})$ to be 1 if it is above a threshold $\phi$ and 0 otherwise.
Then, we compute the $F_1$-score by Equation~\ref{eq:f1} based on the re-defined Jaccard similarity.
We call this variant of $F_1$-score $TF_1$-score, where the threshold $\phi$ is naturally set to 0.5 in this paper.
The intuition of the $TF_1$-score is to count only those detected anomalous subtrajectories that are sufficiently aligned with the real anomalous subtrajectories.}
}}
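For clarity, a short Python sketch of the two metrics is given below, where each anomalous subtrajectory is represented as a set of road-segment ids and the ground-truth and detected subtrajectories are aligned by a shared key; this representation is assumed for illustration.
\begin{verbatim}
# Sketch of the Jaccard-based F1-score and its thresholded variant TF1 (phi = 0.5).
def jaccard(g, o):
    return len(g & o) / len(g | o) if (g | o) else 0.0

def f1_score(C_g, C_o, phi=None):
    # C_g / C_o: dicts mapping a subtrajectory key to a set of road-segment ids.
    jacs = [jaccard(C_g[k], C_o[k]) for k in C_g.keys() & C_o.keys()]
    if phi is not None:            # TF1: count only sufficiently aligned detections
        jacs = [1.0 if j >= phi else 0.0 for j in jacs]
    J = sum(jacs)
    precision = J / len(C_o) if C_o else 0.0
    recall = J / len(C_g) if C_g else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

C_g = {"t1": {1, 2, 3}, "t2": {7, 8, 9, 10}}   # toy ground truth
C_o = {"t1": {1, 2, 3}, "t2": {8, 9}}          # toy detections
print(f1_score(C_g, C_o), f1_score(C_g, C_o, phi=0.5))   # 0.75 (F1) and 1.0 (TF1)
\end{verbatim}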
\smallskip
\noindent \textbf{Evaluation Platform.}
We implement \texttt{RL4OASD} and other baselines in Python 3.6 and Tensorflow 1.8.0.
The experiments are conducted on a server with 10-cores of Intel(R) Core(TM) i9-9820X CPU @ 3.30GHz 64.0GB RAM and one Nvidia GeForce RTX 2080 GPU. The labeled datasets and codes can be downloaded via the link~\footnote{https://github.com/lizzyhku/OASD} to reproduce our work.
\subsection{Effectiveness Evaluation}
\label{sec:effectiveness}
\noindent \textbf{Comparison with existing baselines.} We study anomalous subtrajectory detection with our labeled datasets. In Table~\ref{tab:baselines}, we report the effectiveness in terms of different trajectory lengths; e.g., we partition the Chengdu dataset into four groups, i.e., G1, G2, G3 and G4, such that the lengths in the groups satisfy $G1<15$, $15 \leq G2 < 30$, $30 \leq G3 <45$ and $G4 \geq 45$. We also report the overall effectiveness on the whole datasets. The results clearly show that \texttt{RL4OASD} consistently outperforms the baselines under different settings. {{Specifically, it outperforms the best baseline (i.e., CTSS) by around 20\% and 15\% (resp. 30\% and 28\%) in terms of $F_1$-score and $TF_1$-score on Chengdu (resp. Xi'an) regarding the overall effectiveness, and the improvement is around 5\%-26\% and 1\%-19\% (resp. 26\%-47\% and 22\%-45\%) in terms of $F_1$-score and $TF_1$-score on Chengdu (resp. Xi'an) for different groups. A possible reason is that
CTSS needs a threshold to extract those anomalous parts whose anomaly scores are larger than the threshold; however, the threshold is hard to set appropriately for all complex traffic cases. \texttt{RL4OASD} demonstrates its superiority, which is mainly due to its data-driven nature for the anomalous subtrajectory detection task. Besides, we observe that \texttt{RL4OASD} shows similar trends in terms of $F_1$-score and $TF_1$-score, which demonstrates the generality of our method.
}}
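The length-group assignment used in Table~\ref{tab:baselines} can be summarized by the following small helper; the function name is illustrative.
\begin{verbatim}
# Assign a trajectory to its length group (boundaries 15 / 30 / 45, as stated above).
def length_group(length):
    if length < 15:
        return "G1"
    if length < 30:
        return "G2"
    if length < 45:
        return "G3"
    return "G4"

print([length_group(n) for n in (10, 20, 40, 60)])   # ['G1', 'G2', 'G3', 'G4']
\end{verbatim}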
\smallskip
\noindent \textbf{Ablation study.} We conduct an ablation study to show the effects of some components in \texttt{RL4OASD}, including (1) the noisy labels, (2) pre-trained road segment embeddings, (3) Road Network Enhanced Labeling (RNEL), (4) Delayed Labeling (DL), (5) local reward, (6) global reward, {{(7) ASDNet and (8) transition frequency.}}
In particular, for (1), we replace the noisy labels with random labels;
for (2), we randomly initialize the embeddings of road segments to replace the pre-trained embeddings provided by Toast~\cite{chen2021robust}; for (3), we drop the RNEL, and the model takes an action at each road segment without being guided by the road network; for (4), we drop the DL, and no delay mechanism is involved for labeling road segments; for (5), we drop the local reward, which encourages the continuity of refined labels; for (6), we drop the global reward, which provides feedback indicating the quality of refined labels; {{for (7), we replace ASDNet with an ordinary classifier on top of the outputs of RSRNet, and train the classifier with the noisy labels; for (8), we only use the transition frequency to detect anomalous subtrajectories, which can be regarded as the simplest method.}}
{{In Table~\ref{tab:ablation}, we observe that each component benefits the overall effectiveness, and ASDNet contributes the most, because it refines the labels that are used to train RSRNet and identifies the anomalous subtrajectories. Besides, we note that (1) the noisy labels, (4) Delayed Labeling (DL) and (8) the transition frequency also contribute much to the performance of \texttt{RL4OASD}. As mentioned above, the noisy labels provide a necessary warm-start before the training, with an improvement of around 36\%; the delayed labeling provides the continuity of anomalous subtrajectories, with an improvement of around 16\%; and if we only use the transition frequency for the detection task, the performance degrades by around 25\%. In addition, we also verify the effects of (2) the pre-trained road segment embeddings, (3) RNEL, (5) the local reward and (6) the global reward. From Table~\ref{tab:ablation}, we observe that the road segment embeddings benefit the overall effectiveness by providing the traffic context from road networks, and the local and global rewards provide feedback signals that guide the training of RSRNet to further provide better states for ASDNet. In addition, with the RNEL, we notice an effectiveness improvement of around 5\%, since it simplifies the cases of making decisions and the model becomes easier to train.
}}
\if 0
\begin{table}[h]
\setlength{\tabcolsep}{4.8pt}
\centering
\vspace{-3mm}
\caption{Impacts of parameter $\alpha$ for \texttt{RL4OASD}.}
\vspace{-3mm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Parameter & $\alpha=0.3$ & $\alpha=0.4$ & $\alpha=0.5$ & $\alpha=0.6$ & $\alpha=0.7$ \\ \hline
$F_1$-score &0.844 &0.845 &\textbf{0.854} &0.846 &0.845 \\ \hline
\end{tabular}
\label{tab:alpha}
\vspace{-4mm}
\end{table}
\begin{table}[h]
\setlength{\tabcolsep}{5.2pt}
\centering
\vspace{-3mm}
\caption{Impacts of parameter $\delta$ for \texttt{RL4OASD}.}
\vspace{-3mm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Parameter & $\delta=0.1$ & $\delta=0.2$ & $\delta=0.3$ & $\delta=0.4$ & $\delta=0.5$ \\ \hline
$F_1$-score &0.647 &0.834 &0.846 &\textbf{0.854} &0.847 \\ \hline
\end{tabular}
\label{tab:delta}
\vspace{-4mm}
\end{table}
\begin{table}[h]
\setlength{\tabcolsep}{4pt}
\centering
\vspace{-3mm}
\caption{Impacts of parameter $D$ for \texttt{RL4OASD}.}
\vspace{-3mm}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Parameter & $D=0$ & $D=2$ & $D=4$ & $D=6$ & $D=8$ & $D=10$ \\ \hline
$F_1$-score &0.737 &0.763 &0.812 &0.833 &\textbf{0.854} &0.848\\ \hline
\end{tabular}
\label{tab:D}
\vspace{-4mm}
\end{table}
\fi
\subsection{Parameter Study}
\noindent \textbf{Varying parameters $\alpha$, $\delta$ and $D$.} We study the effects of the parameter $\alpha$ for constructing noisy labels, the parameter $\delta$ for constructing normal route features, and the parameter $D$ for controlling the number of delayed road segments in the delaying mechanism. The detailed results and descriptions are provided in the technical report~\cite{TR} due to the page limit. Overall, we observe that a moderate setting { (with $\alpha = 0.5$, $\delta = 0.4$, and $D = 8$)} contributes to the best effectiveness.
\if 0
\noindent \textbf{Varying parameter $\alpha$.} Table~\ref{tab:alpha} shows the effects of varying parameter $\alpha$ for constructing noisy labels for training \texttt{RL4OASD}. Intuitively, both smaller or larger $\alpha$ will affect the quality of noisy labels, and lead to the model performance degradation. {{Specifically, the noisy labels are used in the procedure of pre-training, which provides a warm-start for the two networks, and leads to the two networks having incorporated some information of normal routes before the joint training. With a smaller $\alpha$, the noisy labels are prone to 0, which affects the capability of detecting anomalous roads and vice versa. We study the parameter with the labeled Chengdu dataset. As expected, we observe its performance becomes better as $\alpha$ increases, and drops as the $\alpha$ further increases. When $\alpha =0.5$}, \texttt{RL4OASD} achieves the best $F_1$-score.}
\smallskip
\noindent \textbf{Varying parameter $\delta$.} In Table~\ref{tab:delta}, we further study the effects of varying parameter $\delta$, where the $\delta$ controls the number of normal routes that will be selected for constructing normal route features. With a smaller $\delta$, many existing anomalous routes in the datasets will be falsely selected as normal routes. With a larger $\delta$, only a few normal routes will be considered, and thus some potential normal routes are missed, e.g., when $\delta$ is set to 0.5, only one route (i.e., the most frequent
route) will be selected as the normal route. As expected, a moderate setting of $\delta=0.4$ provides a balance of recognizing the number of normal routes, and contributes to the best effectiveness.
\smallskip
\noindent \textbf{Varying parameter $D$.} The parameter $D$ controls the number of road segments in the delaying mechanism in order to form longer subtrajectories instead of short fragments. When $D = 0$, it reduces to the case of no delaying mechanism. The results of varying $D$ are reported in Table~\ref{tab:D}. We observe the model performance improves as $D$ grows, since a larger $D$ provides better continuity of detected subtrajectories. However, with a larger $D$, the capability of the model to detect multiple anomalous subtrajectories will degrade, which is as expected. {{From Table \ref{tab:D}, we observe that a moderate setting of $D=8$ leads to the best effectiveness.}}
\fi
\begin{figure}[t!]
\centering
\begin{tabular}{c}
\begin{minipage}{0.93\linewidth
\includegraphics[width=1.02\linewidth]{figures/overall_efficiency_legend.pdf}
\end{minipage}
\\
\begin{minipage}{0.93\linewidth
\includegraphics[width=1.02\linewidth]{figures/overall_efficiency_new.pdf}
\end{minipage}
\end{tabular}
\vspace*{-2mm}
\caption{Overall detection efficiency.}
\label{fig:overall_efficiency}
\vspace*{-5mm}
\end{figure}
\subsection{Efficiency Study}
\noindent\textbf{Overall detection.} Figure~\ref{fig:overall_efficiency} reports the online detection efficiency in terms of average running time per point on both Chengdu and Xi'an datasets. We provide the detailed analysis as follows.
We observe that the running time on Chengdu is larger than that on Xi'an, because the trajectories are generally shorter in Xi'an. {{We observe that DBTOD runs the fastest on both datasets, because it is a lightweight model that uses low-dimensional embeddings of cheap features, such as road level and turning angle, which can be computed very quickly, while \texttt{RL4OASD} involves more operations, including an LSTM-based RSRNet to capture features and an RL-based ASDNet to label each road segment.}} CTSS runs the slowest since it involves the discrete Frechet distance to compute the deviation between a target trajectory and a given reference trajectory, which has quadratic time complexity. {{In addition, for the four learning-based methods GM-VSAE, SD-VSAE, SAE and VSAE that detect anomalous trajectories via a generation scheme, we observe that SD-VSAE and VSAE are generally faster than the other two (i.e., GM-VSAE and SAE). This is because SAE adopts a traditional Seq2Seq structure that involves both encoding and decoding, and thus needs to scan a trajectory twice. Compared with SAE, VSAE is free of the encoding step, and only involves the decoding step to detect possible anomalies. SD-VSAE is a fast version of GM-VSAE, which predicts only one Gaussian component in the encoding (i.e., inference) step with its SD module, whereas GM-VSAE infers several components in the encoding step and uses all of them to detect anomalies in the decoding. The results are consistent with the findings reported in~\cite{liu2020online}.}} Overall, \texttt{RL4OASD} runs reasonably fast and meets practical needs, e.g., it takes less than 0.1ms to process each point, which is 20,000 times faster than the practical sampling rate of the trajectory data (2s).
\begin{figure}[h]
\vspace{-3mm}
\centering
\begin{tabular}{c c}
\begin{minipage}{3.7cm}
\includegraphics[width=4cm]{figures/traj_length_chengdu_2.pdf}
\end{minipage}
&
\begin{minipage}{3.7cm}
\includegraphics[width=4cm]{figures/traj_length_xian_2.pdf}
\end{minipage}
\\
(a) Chengdu
&
(b) Xi'an
\end{tabular}
\vspace*{-2mm}
\caption{Detection scalability.}\label{fig:length}
\vspace{-4mm}
\end{figure}
\smallskip
\noindent \textbf{Detection scalability.} Further, we study the scalability of detecting ongoing trajectories in terms of four trajectory groups with different lengths. In Figure~\ref{fig:length}, we report the average running time per trajectory on both datasets. {{We observe that CTSS runs the slowest, and the gap between it and the other methods becomes larger as the trajectory length increases. This is because CTSS computes the trajectory similarity with the discrete Frechet distance to detect possible anomalies, and the trend is consistent with its time complexity. DBTOD is a lightweight model, which consistently runs the fastest. However, in terms of the effectiveness comparison reported in Table~\ref{tab:baselines}, we note that DBTOD is not a good model for the subtrajectory detection task, though it runs faster than \texttt{RL4OASD}. In general, \texttt{RL4OASD} shows similar trends to the others, and scales well as the trajectory length grows. This is because \texttt{RL4OASD} leverages the graph structure of road networks for labeling road segments (i.e., RNEL), and the time of taking actions can be saved accordingly.}}
\begin{figure}
\centering
\scriptsize
\hspace{-4mm}
\begin{tabular}{c c c}
\begin{minipage}{0.34\linewidth}
\includegraphics[width=\linewidth]{figures/ground-1000-180-20-choose.pdf}
\end{minipage}\hspace{-4mm}
&
\begin{minipage}{0.34\linewidth}
\includegraphics[width=\linewidth]{figures/ctss-1000-180-11-choose.pdf}
\end{minipage}\hspace{-4mm}
&
\begin{minipage}{0.34\linewidth}
\includegraphics[width=\linewidth]{figures/rl-1000-180-20-choose.pdf}%
\end{minipage}\hspace{-4mm}
\\
(a) Ground truth ($F_1$=1.0)
&
(b) CTSS ($F_1$=0.792)
&
(c) \texttt{RL4OASD} ($F_1$=1.0)
\end{tabular}
\caption{Case study, blue lines: the normal routes traveled by most of the trajectories; red dashed lines: the routes containing anomalous subtrajectories; red points: the intersections on road networks; green lines: the detected anomalous subtrajectories.
}
\label{fig:casestudy}
\vspace*{-2mm}
\end{figure}
\if 0
\begin{figure*}
\centering
\begin{tabular}{c c c c}
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/ground-1000-180-20-choose.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/ground-1000-180-28-choose.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/ground-1516-691-2.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/ground-1000-3447-2-choose.pdf}%
\end{minipage}
\\
(a) Ground truth ($F_1$=1.0)
&
{{(d) Ground truth ($F_1$=1.0)}}
&
(g) Ground truth ($F_1$=1.0)
&
(j) Ground truth ($F_1$=1.0)
\\
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/ctss-1000-180-11-choose.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/ctss-1000-180-17-choose.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/ctss-1516-691-2.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/ctss-1000-3447-0-choose.pdf}%
\end{minipage}
\\
(b) CTSS ($F_1$=0.792)
&
{{(e) CTSS ($F_1$=0.621)}}
&
(h) CTSS ($F_1$=0.0)
&
(k) CTSS ($F_1$=1.0)
\\
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/rl-1000-180-20-choose.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/rl-1000-180-28-choose.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/rl-1516-691-2.pdf}%
\end{minipage}
&
\begin{minipage}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/rl-1000-3447-2-choose.pdf}%
\end{minipage}
\\
(c) \texttt{RL4OASD} ($F_1$=1.0)
&
(f) \texttt{RL4OASD} ($F_1$=1.0)
&
(i) \texttt{RL4OASD} ($F_1$=0.8)
&
(l) \texttt{RL4OASD} ($F_1$=1.0)
\end{tabular}
\caption{Case study, blue lines: the normal routes traveled by most of trajectories; red dashed lines: the routes contained anomalous subtrajectories; red points: the intersections on road networks; green lines: the detected anomalous subtrajectories.
}
\label{fig:casestudy}
\vspace*{-2mm}
\end{figure*}
\fi
\subsection{Case Study}
We investigate representative detour cases in Chengdu. We visualize with green lines the detours labeled by the ground truth, CTSS and \texttt{RL4OASD}. Here, we choose CTSS for comparison, since it shows the best effectiveness among the baselines. Figure~\ref{fig:casestudy} illustrates a case with two detours in a route. We observe that \texttt{RL4OASD} detects the detours accurately with an $F_1$-score of 1.0; however, CTSS fails to detect the starting position of the detour. This is because CTSS measures the trajectory similarity via the Frechet distance between the normal trajectory (blue) and the current ongoing trajectory (red) at each timestamp, and a detour may have already begun while the ongoing trajectory is still similar to the reference near its starting positions. More results are illustrated in~\cite{TR}.
\if 0
In Figure~\ref{fig:casestudy}, we investigate four representative detour cases in Chengdu, including the cases of two detours and a single detour in a target trajectory. We visualize with the green lines the detours labeled as/by Ground truth, CTSS and \texttt{RL4OASD}. Here, we choose CTSS for comparison, since it shows the best effectiveness among baselines. {{The detailed analysis is provided as follows. For the first case in Figure~\ref{fig:casestudy} (a)-(c), it illustrates the case of two detours in a route, we observe \texttt{RL4OASD} detects the detours accurately with $F_1$-score=1.0; however, CTSS fails to detect the starting position of the detection. This is because CTSS measures the trajectory similarity via Frechet distance between the normal trajectory (blue) and the current ongoing trajectory (red) at each timestamp, where a detour may already happen though the ongoing trajectory is still similar to the reference at several starting positions. A similar case can be observed in Figure~\ref{fig:casestudy} (d)-(f), where CTSS detects an obvious detour with $F_1$-score=0.621 only. For the case in Figure~\ref{fig:casestudy} (g)-(i), we observe that no detour is returned by CTSS, because CTSS employs a threshold based method to detect detours, where it reports a detour if the deviation of the current ongoing trajectory from the normal trajectory exceeds the threshold, but the threshold is hard to set appropriately and it fails in this case. For the final case in Figure~\ref{fig:casestudy} (j)-(l), the long detour is obvious between the source and destination, and all methods detect it accurately with $F_1$-score=1.0.}}
The results clearly show that \texttt{RL4OASD} can accurately identify detours, which are close to the ground truth and the returned detours are in line with human intuition. It demonstrates the capability of \texttt{RL4OASD} for real applications.
\fi
\if 0
\begin{table}[h]
\setlength{\tabcolsep}{7.8pt}
\centering
\vspace{-3mm}
\caption{Training time (hours).}
\vspace{-3mm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Training size &4,000 &6,000 &8,000 &10,000 &12,000 \\ \hline
Time cost &0.22 &0.31 &0.39 &0.48 &0.58\\ \hline
$F_1$-score &0.723 &0.786 &0.821 &\textbf{0.854} &\textbf{0.854} \\ \hline
\end{tabular}
\label{tab:train_time}
\vspace*{-3mm}
\end{table}
\fi
\begin{table}[]
\setlength{\tabcolsep}{5pt}
\vspace{-3mm}
{{\caption{Preprocessing and training time; the map matching~\cite{yang2018fast} is implemented in C++, and the noisy labeling and training are implemented in Python.
}
\label{tab:train_time}}}
\vspace{-3mm}
\begin{tabular}{|cc|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Data size} & 4,000 & 6,000 & 8,000 & 10,000 & 12,000 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}{{Preprocessing}} \\ {{time}}\end{tabular}}} & \begin{tabular}[c]{@{}c@{}}{{Map}} \\ {{matching (s)}}\end{tabular} & {{15.82}} & {{23.17}} &{{30.41}} &{{39.95}} &{{48.74}} \\ \cline{2-7}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}{{Noisy}} \\ {{labeling (s)}}\end{tabular} & {{20.31}} & {{28.40}} & {{36.25}} & {{41.21}} & {{46.82}} \\ \hline
\multicolumn{2}{|c|}{Training time (hours)} & 0.10 & 0.14 & 0.17 & 0.21 & 0.25 \\ \hline
\multicolumn{2}{|c|}{$F_1$-score} & 0.723 & 0.786 & 0.821 & \textbf{0.854} & \textbf{0.854} \\ \hline
\end{tabular}
\vspace*{-3mm}
\end{table}
{{\subsection{Preprocessing and Training Time}
With the default setup in Section~\ref{sec:setup}, we study the preprocessing time (i.e., map-matching and noisy labeling) and training time of \texttt{RL4OASD} in Table~\ref{tab:train_time}, by varying the data size from 4,000 trajectories to 12,000 trajectories, and report the time costs and $F_1$-scores.
Overall, preprocessing takes less than two minutes and training takes around 0.2 hours to obtain a satisfactory model, and both costs scale roughly linearly with the data size, which indicates the capability of \texttt{RL4OASD} to handle large-scale trajectory data. Here, we choose 10,000 trajectories to train \texttt{RL4OASD}, which provides a reasonable trade-off between effectiveness and training time cost.
}}
\begin{figure}[h]
\hspace{-.8cm}
\small
\centering
\begin{tabular}{c c}
\begin{minipage}{3.8cm}
\includegraphics[width=3.9cm]{figures/xian_num_parts.pdf}
\end{minipage}
&
\begin{minipage}{3.8cm}
\includegraphics[width=3.9cm]{figures/training_time_xian_num_parts.pdf}
\end{minipage}
\\
(a) $F_1$-score varying $\xi$
&
(b) Training time varying $\xi$
\\
\begin{minipage}{3.8cm}
\includegraphics[width=3.9cm]{figures/xian_for_a_part.pdf}
\end{minipage}
&
\begin{minipage}{3.8cm}
\includegraphics[width=3.9cm]{figures/training_time_xian_for_a_part.pdf}
\end{minipage}
\\
(c) $F_1$-score ($\xi=8$)
&
(d) Training time ($\xi=8$)
\end{tabular}
\vspace*{-1mm}
{{\caption{{{\texttt{RL4OASD} with or without fine-tuning and the training times.}}}\label{fig:traffic}}}
\vspace{-3mm}
\end{figure}
{{\subsection{Detection in Varying Traffic Conditions}
\label{sec:concept_exp}
By following the setting in~\cite{liu2020online}, we first sort the trajectories by their starting timestamps within one day, and split all the sorted trajectories into $\xi$ partitions.
{ For example, when $\xi$ is set to 4, we would have 4 parts within one day, from Part 1 (i.e., 00:00 - 06:00, the earliest) to Part 4 (i.e., 18:00 - 24:00, the latest).}
Here, we consider two models, namely \texttt{RL4OASD-P1} and \texttt{RL4OASD-FT}, to show the effectiveness of our online learning strategy for the varying traffic conditions. For \texttt{RL4OASD-P1}, we train it on Part 1 and directly apply it to all parts. For \texttt{RL4OASD-FT}, we train it on Part 1, and keep updating it (i.e., fine-tuning using stochastic gradient descent) for other parts.
We first study how to set a proper $\xi$.
We vary $\xi$ from 1 to 24, and report the average $F_1$-scores over all parts and the corresponding training times in Figure~\ref{fig:traffic}(a) and (b), respectively. We observe that the performance fluctuates as $\xi$ increases, and the corresponding training time decreases because, with more parts, the amount of training data for each part becomes smaller.
We choose $\xi=8$, since it leads to the best effectiveness (i.e., $F_1$-score=0.867) with an average training time below 0.04 hours, which is much smaller than the duration of a time period (i.e., $24/8=3$ hours).
With the setting of $\xi=8$, we further compare \texttt{RL4OASD-FT} with \texttt{RL4OASD-P1}, and report the $F_1$-score for each part in Figure~\ref{fig:traffic}(c). We observe that the performance of \texttt{RL4OASD-P1} degrades on Parts 2--7, which can be attributed to concept drift. In contrast, \texttt{RL4OASD-FT} improves the performance as new trajectories are recorded for training.
In addition, our model is also able to handle situations where the traffic conditions change very frequently (e.g., hourly), by training on the newly recorded trajectory data. To see this, we report the training time for each part in Figure~\ref{fig:traffic}(d). It normally takes below 0.05 hours to update the model for each part, which largely meets practical needs (e.g., it is far below the duration of each time period).}}
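For clarity, the sketch below outlines this evaluation protocol under simplifying assumptions: \texttt{start\_seconds} denotes a trajectory's starting time in seconds since midnight, and \texttt{f1\_score}/\texttt{update} stand for the model's evaluation and fine-tuning routines (hypothetical interfaces, not the exact code used in our experiments).
\begin{verbatim}
# Sketch of the protocol: split one day of trajectories into xi parts by
# starting timestamp, then either reuse the Part-1 model (RL4OASD-P1) or
# keep fine-tuning it on each new part (RL4OASD-FT).
def split_into_parts(trajectories, xi):
    """Group trajectories into xi parts by start-of-day time (seconds)."""
    seconds_per_part = 24 * 3600 // xi
    parts = [[] for _ in range(xi)]
    for traj in trajectories:
        idx = min(traj.start_seconds // seconds_per_part, xi - 1)
        parts[idx].append(traj)
    return parts

def evaluate(model, parts, fine_tune=False):
    """RL4OASD-P1 if fine_tune is False, RL4OASD-FT otherwise."""
    scores = []
    for part in parts:
        scores.append(model.f1_score(part))   # detect on the current part
        if fine_tune:
            model.update(part)                # SGD fine-tuning on new data
    return sum(scores) / len(scores)
\end{verbatim}
Under this protocol, \texttt{RL4OASD-P1} corresponds to calling \texttt{evaluate} with \texttt{fine\_tune=False}, and \texttt{RL4OASD-FT} with \texttt{fine\_tune=True}.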
\begin{figure}
\centering
\begin{tabular}{c c}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{figures/453-44712-P1-P1.pdf}%
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{figures/453-44712-FT-P1.pdf}%
\end{minipage}
\\
(a) \texttt{RL4OASD-P1}, $F_1$=1.0
&
(b) \texttt{RL4OASD-FT}, $F_1$=1.0
\\
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{figures/453-44712-P1-P2.pdf}
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{figures/453-44712-FT-P2.pdf}%
\end{minipage}
\\
(c) \texttt{RL4OASD-P1}, $F_1$=0.78
&
(d) \texttt{RL4OASD-FT}, $F_1$=1.0
\end{tabular}
\vspace*{-3mm}
\caption{Concept drift, where (a)-(b) are for Part 1 and (c)-(d) are for Part 2.}
\label{fig:casestudy_drift}
\vspace*{-5mm}
\end{figure}
\begin{table}[t]
\setlength{\tabcolsep}{10pt}
\centering
\vspace{-2mm}
{{\caption{Cold-start problem in RL4OASD.}
\label{tab:droprate}}}
\vspace{-3mm}
{{\begin{tabular}{|c|c|c|c|c|c|}
\hline
Drop rate & 0.0 & 0.2 & 0.4 & 0.6 & 0.8 \\ \hline
$F_1$-score &\textbf{0.854} &\textbf{0.854} &0.852 &0.831 &0.803 \\ \hline
\end{tabular}}}
\vspace*{-4mm}
\end{table}
We also conduct a case study to illustrate the effectiveness of \texttt{RL4OASD-FT} in Figure~\ref{fig:casestudy_drift}. We observe that the normal route and the anomalous route are swapped from Part 1 to Part 2. With the online learning strategy, \texttt{RL4OASD-FT} can still detect the detour on Part 2 with $F_1$-score=1.0; however, \texttt{RL4OASD-P1} produces a false positive in Figure~\ref{fig:casestudy_drift}(c), because its policy is trained on Part 1 only.
{{\subsection{Cold-start Problem with Insufficient Historical Trajectories}
\label{sec:cold_start}
We study the effect of the cold-start problem, where we vary the number of historical trajectories within the SD pairs via a drop rate parameter. For example, if the drop rate is set to 0.2, we randomly remove 20\% of the trajectories for each SD pair, and the corresponding results are reported in Table~\ref{tab:droprate}. We observe that the effectiveness of \texttt{RL4OASD} is not affected much by the cold-start problem. For example, the model degrades by only 6\% when 80\% of the historical trajectories are removed. This is possibly because the normal route feature is computed in a relative way (e.g., as a fraction between 0 and 1), { and the normal routes can still be identified when only a few trajectories within an SD pair are available}.
}}
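The sketch below illustrates how the drop rate is applied in this experiment; the mapping \texttt{sd\_pairs} from an SD pair to its historical trajectories is an assumed data structure for the example, and at least one trajectory is always retained per SD pair.
\begin{verbatim}
# Sketch of the cold-start setup: randomly drop a fraction of the
# historical trajectories of every SD pair before building the normal
# route feature (the structure of `sd_pairs` is illustrative only).
import random

def drop_history(sd_pairs, drop_rate, seed=0):
    rng = random.Random(seed)
    reduced = {}
    for sd, trajs in sd_pairs.items():
        keep = max(1, int(round(len(trajs) * (1.0 - drop_rate))))
        reduced[sd] = rng.sample(trajs, keep)  # keep at least one trajectory
    return reduced
\end{verbatim}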
\section{INTRODUCTION}
\label{sec:introduction}
With the advancement of mobile computing and geographical positioning technologies, such as GPS devices and smartphones, massive spatial trajectory data is being generated by various moving objects (\eg, people, vehicles). Such trajectory data records the mobility traces of moving objects at different timestamps, and thus reflects their diverse mobility patterns. Accurate spatial trajectory analysis serves as a key technical component for a wide spectrum of spatial-temporal applications, including intelligent transportation~\cite{yuan2021effective}, location-based recommendation services~\cite{feng2020hme}, pandemic tracking~\cite{seemann2020tracking}, and crime prevention for public safety~\cite{2018deepcrime}.
Among various trajectory mining applications, detecting anomalous trajectories plays a vital role in many practical scenarios. For example, real-time vehicle trajectory anomaly detection is beneficial for better traffic management~\cite{yuan2018hetero}. Additionally, studying the mobility behaviors of humans to discover their anomalous trajectories is helpful for predicting event outliers (\eg, civil unrest and pandemic outbreaks)~\cite{ning2019spatio}. In this context, an outlier/anomaly refers to a trajectory that does not show the normal mobility patterns and deviates from the majority of the trajectories with the same source and destination~\cite{chen2011real, chen2013iboat} (termed an SD pair).
Consider the example in Figure~\ref{fig:example}. There are three trajectories from the source $S (e_1)$ to the destination $D (e_{10})$. The trajectory $T_3$ (the red one) is considered as anomalous if the majority of the trajectories with the same SD pair $<S, D>$ follow either $T_1$ (the blue one) or $T_2$ (the green one).
\begin{figure}
\centering
\includegraphics[width=0.68\linewidth]{figures/example2}
\caption{An example of anomalous trajectory. $T_1$, $T_2$ and $T_3$ are three trajectories from the same source $e_1$ to the same destination $e_{10}$.
}
\label{fig:example}
\vspace{-5mm}
\end{figure}
Recently, detecting anomalous trajectories has attracted much attention, and many efforts have been devoted to proposing various methods~(\cite{chen2013iboat, wu2017fast, zhang2020continuous, liu2020online,song2018anomalous,li2007roam}) for solving this problem. However, several key challenges have not been well addressed in current methods, which are summarized in Table~\ref{tab:summary} and elaborated as follows.
\begin{table}[]
\setlength{\tabcolsep}{4.5pt}
\centering
\caption{The existing methods and their corresponding issues ($\surd$ indicates the issue exists and $\times$ otherwise).}
\vspace*{-2mm}
\begin{tabular}{cccc}
\hline
& \begin{tabular}[c]{@{}l@{}}Incapability of detecting \\ anomalous subtrajectories\end{tabular} & \begin{tabular}[c]{@{}l@{}}Non-data \\driven \end{tabular} & \begin{tabular}[c]{@{}l@{}}Requirement of sufficient \\ supervision labels\end{tabular} \\ \hline
\cite{chen2013iboat} &$\times$ &$\surd$ &$\times$ \\
\cite{wu2017fast} &$\surd$ &$\times$ &$\times$ \\
\cite{zhang2020continuous} &$\surd$ &$\surd$ &$\times$ \\
\cite{liu2020online} &$\surd$ &$\times$ &$\times$ \\
\cite{song2018anomalous} &$\surd$ &$\times$ &$\surd$ \\
\cite{li2007roam} &$\surd$ &$\times$ &$\surd$ \\\hline
\end{tabular}
\vspace*{-4mm}
\label{tab:summary}
\end{table}
(1) \textbf{Incapability of detecting anomalous subtrajectories}. Most existing approaches {{~\cite{wu2017fast, zhang2020continuous, liu2020online, song2018anomalous,li2007roam}}} have only focused on identifying anomalous trajectories at a coarse-grained level, i.e., detecting whether a trajectory \emph{as a whole} is anomalous or not.
However, fine-grained anomalous trajectory detection is more beneficial for decision making in smart city applications, yet it is much less explored by current methods. For instance, a ride-hailing company can immediately spot an abnormal driver when his/her trajectory starts to deviate from the normal route, which indicates an anomalous \emph{subtrajectory}.
Hence, in this work, we propose to detect fine-grained anomalous subtrajectories in a timely manner.
{{(2) \textbf{Non-data driven.}
Some existing approaches are based on pre-defined parameters and/or rules, and are not data driven~\cite{chen2013iboat, zhang2020continuous}.
For example, the approach in~\cite{chen2013iboat} uses some manually defined parameters to isolate an anomalous subtrajectory. It maintains an adaptive window of the latest incoming GPS points to compare against normal trajectories, where the normal trajectories are those supported by the majority of the trajectories within an SD pair. As a new incoming point is added to the window, it checks the support of the subtrajectory in the window. If the support is larger than a pre-defined threshold, the point is labeled as normal; otherwise, the point is labeled as anomalous, and the adaptive window is reduced to contain only the latest point. The process continues until the trajectory is completed, and the anomalous subtrajectories are formed by the points labeled as anomalous. However, the threshold is hard to set appropriately, and it is not general enough to cover all possible situations or to adapt to different road conditions.
{{Another approach~\cite{zhang2020continuous} uses trajectory similarity (e.g., the discrete Frechet distance) to detect anomalies. For each new incoming point, it computes the distance between a given normal trajectory and the current partial trajectory, and reports an anomaly event if the distance exceeds a given threshold; otherwise, the detection continues.}}
}}
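The following sketch conveys the adaptive-window idea described above (our simplified reading of~\cite{chen2013iboat}, not its original implementation); \texttt{support} is assumed to return the fraction of historical trajectories of the SD pair that contain the windowed subsequence, and \texttt{theta} is the pre-defined threshold.
\begin{verbatim}
# Simplified sketch of the adaptive-window test described above.
def adaptive_window_labels(route, support, theta):
    labels, window = [], []
    for seg in route:                 # road segments arrive one by one
        window.append(seg)
        if support(window) >= theta:  # still consistent with normal routes
            labels.append(0)          # normal point
        else:
            labels.append(1)          # anomalous point
            window = [seg]            # shrink window to the latest point
    return labels                     # anomalous subtrajectories = runs of 1s
\end{verbatim}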
(3) \textbf{Requirement of sufficient supervision labels.}
While several machine learning models have been developed to identify anomalous trajectories~\cite{song2018anomalous,li2007roam}, the effectiveness of these methods largely relies on sufficient labeled data under a supervised learning framework. Nevertheless, practical scenarios may offer only a very limited amount of labeled anomalous trajectories compared with the entire trajectory dataset, due to the heavy human effort required for labeling; \eg, the labeled trajectory data used in~\cite{chen2013iboat} only covers 9 SD pairs. \\\vspace{-0.15in}
In this paper, we propose a novel reinforcement learning based solution \texttt{RL4OASD}, which avoids the aforementioned issues of existing approaches. First, \texttt{RL4OASD} is designed to detect anomalous \emph{subtrajectories} in an online fashion, and thus it avoids the first issue. It achieves this by predicting a normal/anomalous label for each road segment in a trajectory and detecting anomalous subtrajectories based on the labels of road segments. Second, \texttt{RL4OASD} is a data-driven approach relying on data \emph{without labels}, and thus it avoids the second and third issues.
Specifically, \texttt{RL4OASD} consists of three components, namely data preprocessing, network RSRNet and network ASDNet (see Figure~\ref{fig:rl4oasd}).
In data preprocessing,
it conducts map-matching on raw trajectories, mapping them onto road networks and obtains the map-matched trajectories.
Based on the map-matched trajectories, it computes labels for the road segments involved in the trajectories using heuristics based on historical transition data among road segments.
The labels, which are noisy, will be used to train
RSRNet in a weakly supervised manner.
In RSRNet,
it learns the representations of road segments based on both traffic context features and normal route features. These representations in the form of vectors will be fed into ASDNet to define the states of a Markov decision process (MDP) for labeling anomalous subtrajectories.
In ASDNet,
it models the task of labeling anomalous subtrajectories as an MDP and learns the policy via a policy gradient method~\cite{silver2014deterministic}.
ASDNet outputs refined labels of road segments, which are further used to train RSRNet again, and then RSRNet provides better observations for ASDNet to train better policies.
We emphasize that the noisy labels computed by the preprocessing component only provide some prior knowledge to address the cold-start problem of training RSRNet, but are not used to train the model as in a supervised paradigm.
Furthermore, the concept of anomalous trajectory might drift over time. For example, due to some varying traffic conditions (e.g., some accidents), the trajectory might gradually deviate from a normal route, and the concept of ``normal'' and ``anomalous'' is changed accordingly. \texttt{RL4OASD} can handle ``concept drift'' with an online learning strategy, i.e., it continues to be refined when new data comes in.
Our contributions can be summarized as follows.
\begin{itemize}
\item We propose the first deep reinforcement learning based solution to detect anomalous subtrajectories. The proposed model 1) can detect anomalous subtrajectories naturally, 2) is data-driven, and 3) does not require labeled data.
{{In addition, it can handle the concept drift of anomalous trajectories via online learning.}}
\item We conduct extensive experiments on two real-world trajectory datasets, namely Chengdu and Xi'an. We compare our proposed solution with various baselines, and the results show that our solution is effective (e.g., it yields 20-30\% improvement compared to the best existing approach) and efficient (e.g., it takes less than 0.1ms to process each newly generated data point).
\item {
We manually label the anomalous subtrajectories for two real datasets, Chengdu and Xi'an, for testing. Each labeled dataset covers 200 SD pairs and 1,688 (resp. 1,057) map-matched trajectories with these SD pairs for the Chengdu dataset (resp. the Xi'an dataset). This labeled dataset is more than 50 times larger than the existing known dataset~\cite{chen2013iboat}.
{The manually labeled test data is publicly accessible via the link~\footnote{https://github.com/lizzyhku/OASD}.
We believe this relatively large labeled dataset would help with comprehensive and reliable evaluations on approaches for anomalous subtrajectory detection.
}}
\end{itemize}
\section{METHODOLOGY}
\label{sec:method}
\subsection{Overview of \texttt{RL4OASD}}
To discover the anomalous subtrajectories from an ongoing trajectory, we explore a reinforcement learning based method within a weakly supervised framework called RL4OASD. We illustrate the overall architecture of RL4OASD in Figure~\ref{fig:rl4oasd}. It includes three components and we discuss them as follows.
In data preprocessing, we obtain map-matched trajectories via map-matching the raw trajectories onto road networks. A map-matched trajectory is in the form of a sequence of road segments, which represents its travel route. We then design a noisy labeling method to label each road segment as 0 (normal) or 1 (abnormal) by counting transition frequencies from historical trajectories. The noisy labels are then used to train the Road Segment Representation Network (RSRNet) in a weakly supervised manner (Section~\ref{sec:weakly}).
In RSRNet, we infer the normal routes behind a detected trajectory and train the embeddings of road segments of the trajectory. Those embeddings are fed into the Anomalous Subtrajectory Detection Network (ASDNet) as the state, to provide observations for labeling anomalous subtrajectories. We implement the RSRNet with RNN, which is able to capture the sequential information of trajectories (Section~\ref{sec:RSRNet}).
In ASDNet, we build a road network enhanced environment for labeling each road segment as normal or not. The task is modeled as a \emph{Markov decision process}~\cite{puterman2014markov}, whose policies are learned via a policy gradient method~\cite{silver2014deterministic} (Section~\ref{sec:ASDNet}).
Note that the ASDNet outputs refined labels of road segments, which are further used to train RSRNet again, and then RSRNet provides better observations for ASDNet to train better policies. The process iterates and we call the resulting algorithm combined RSRNet and ASDNet as RL4OASD (Section~\ref{sec:RL4OASD}).
\subsection{Data Preprocessing for Noisy Labels}
\label{sec:weakly}
Our RL4OASD model consists of two networks (i.e., RSRNet and ASDNet), where RSRNet essentially captures the features of each road segment. However, labeled data (i.e., labeled anomalous subtrajectories in a detected trajectory) is usually not available in the datasets, which makes it challenging to learn good representations. To overcome this, we explore network training in a weakly supervised setting (i.e., using noisy labels), and take the spatial and temporal factors into consideration. More specifically, the process includes four steps. \underline{Step-1}: we group historical trajectories in a dataset wrt different SD-Pairs and time slots. Here, we have 24 time slots if we partition one day with one-hour granularity, and we say a trajectory falls into a time slot if its starting travel time is within the slot. E.g., in Figure~\ref{fig:example}, we have three map-matched trajectories $T_1$, $T_2$ and $T_3$ under the source $e_1$ and destination $e_{10}$. Assume the starting travel times of $T_1$, $T_2$ and $T_3$ are 9:00, 17:00, and 9:10, respectively. Then, $T_1$ and $T_3$ are in the same group (denoted by Group-1) because they are within the same time slot with one-hour granularity, and $T_2$ is in the other group (denoted by Group-2).
\underline{Step-2}: we then compute the fraction of each transition wrt all trajectories in each group. E.g., in Group-1 (i.e., $T_1$ and $T_3$), suppose that there are 9 trajectories traveling the same route as $T_1$ and only one trajectory traveling the route of $T_3$; the fraction of the transition $<e_1,e_2>$ is calculated as $1/10=0.1$ since it appears only once in all 10 trajectories.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{figures/example}
\caption{$T_1$, $T_2$ and $T_3$ are three map-matched trajectories under $e_1$ (source) and $e_{10}$ (destination).
}
\label{fig:example}
\vspace{-5mm}
\end{figure}
\underline{Step-3}: for each trajectory, we refer to the group it belongs to
and map it to a sequence of transition fractions wrt each road segment. E.g., for a trajectory traveling the route of $T_3$, its mapped sequence is $<1.0,0.1,0.1,0.1,0.1,0.1,0.1,0.1,1.0>$. Note that the fractions on the source (i.e., $e_1$) and the destination (i.e., $e_{10}$) are set to 1.0 since the two road segments are definitely traveled within its group.
\underline{Step-4}: the noisy labels can be obtained when a threshold, denoted by $\alpha$, is given. E.g., the noisy labels of $T_3$ are $<0,1,1,1,1,1,1,1,0>$ given $\alpha=0.1$, where 0 denotes a normal road segment, whose fraction is larger than $\alpha$, meaning that the road segment is frequently traveled, and 1 otherwise.
The noisy labels are then used in the training of RSRNet. Compared with manual labeling~\cite{zhang2011ibat,chen2013iboat, lv2017outlier}, such noisy labels can be obtained much more easily.
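For concreteness, the four steps can be sketched in a few lines of Python as follows, assuming trajectories have already been grouped by SD-Pair and time slot (Step-1) and each map-matched trajectory is given as a list of road segment ids; the function name and data layout are illustrative only and not taken from our implementation.
\begin{verbatim}
from collections import Counter

def noisy_labels(group, alpha=0.1):
    # group: map-matched trajectories (lists of road segment ids) that
    # share the same SD-Pair and time slot (Step-1).
    counts = Counter()
    for traj in group:                       # Step-2: count transitions
        for a, b in zip(traj, traj[1:]):
            counts[(a, b)] += 1
    labels = {}
    for traj in group:                       # Step-3: transition fractions
        fracs = [1.0]                        # source is always traveled
        for a, b in zip(traj, traj[1:]):
            fracs.append(counts[(a, b)] / len(group))
        fracs[-1] = 1.0                      # destination is always traveled
        # Step-4: 0 (normal) if the fraction exceeds alpha, 1 otherwise
        labels[tuple(traj)] = [0 if f > alpha else 1 for f in fracs]
    return labels
\end{verbatim}
On Group-1 of Figure~\ref{fig:example} with $\alpha=0.1$, this sketch yields the noisy labels $<0,1,1,1,1,1,1,1,0>$ for $T_3$, as in Step-4.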
\subsection{Road Segment Representation Network (RSRNet)}
\label{sec:RSRNet}
RSRNet aims to provide representations of road segments as the state for the MDP in ASDNet. To design RSRNet, we adopt the LSTM~\cite{hochreiter1997long} structure, which accepts trajectories with different lengths and captures the sequential information behind trajectories. In addition, we embed two types of features into the representations: one captures the traffic context on road networks, and the other captures the normal routes for a given SD-Pair.
\noindent \textbf{Traffic Context Feature (TCF).} A map-matched trajectory corresponds to a sequence of road segments, and each road segment is naturally in the form of a token (i.e., the road segment id). We pre-train a vector for each road segment in the embedding layer of RSRNet, which embeds traffic context features (e.g., driving speed, trip duration, road type). To do this, we employ Toast~\cite{chen2021robust}, the state-of-the-art road network representation learning model for supporting road segment based applications. The learned road segment representations are used to initialize the embedding
layer in RSRNet, and those embeddings can be further optimized during model training. Other road network representation models~\cite{jepsen2019graph,wang2019learning} are also applicable for this task.
\noindent \textbf{Normal Route Feature (NRF).} Given an SD-Pair in a time slot, we first infer the normal routes within it. Intuitively, normal trajectories often follow the same route; if a trajectory contains road segments that are rarely traveled by others, it probably contains anomalies. Therefore, we infer whether a route is normal by calculating the fraction of the trajectories passing through the route wrt all trajectories within its SD-Pair. The inference is controlled by a threshold (denoted by $\delta$), i.e., a route is inferred as normal if its fraction is larger than the threshold, and as not normal otherwise.
For example, given $\delta=0.4$, recall Group-1 with the route of $T_1$ (traveled 9 times) and the route of $T_3$ (traveled once) in Figure~\ref{fig:example}; we infer the route of $T_1$ as the normal route, because its fraction ($9/10=0.9$) is larger than the threshold $\delta$.
Based on the inferred normal routes, we then extract the features of a detected trajectory. For example, given a detected trajectory following the route of $T_3$, we partition the trajectory into a sequence of transitions, and we mark a road segment in the detected trajectory as normal (i.e., 0) if the transition onto that road segment occurs in the inferred normal routes, and as 1 otherwise. For example, the extracted normal route features of $T_3$ are $<0,1,1,1,1,1,1,1,0>$, where the feature on the road segment $e_2$ is 1 since the transition $<e_1,e_2>$ does not occur in the normal route $T_1$. Note that the source and destination road segments are always denoted by 0 (i.e., normal) in the extracted features. After obtaining the normal route features, which are in the form of tokens, we embed them via an embedding layer (i.e., the tokens are embedded as one-hot vectors), and the embedded vectors are then fed into the model.
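In contrast to the transition-level fractions used for the noisy labels, the normal route inference above operates at the route level. A minimal sketch of this step, with illustrative names only, is given below.
\begin{verbatim}
from collections import Counter

def normal_route_features(group, traj, delta=0.4):
    # group: trajectories (lists of road segment ids) of one SD-Pair
    # and time slot; traj: the detected trajectory to be featurized.
    route_counts = Counter(tuple(t) for t in group)
    normal_routes = [r for r, c in route_counts.items()
                     if c / len(group) > delta]
    normal_transitions = {(a, b) for r in normal_routes
                          for a, b in zip(r, r[1:])}
    nrf = [0]                                # source is always normal
    for a, b in zip(traj, traj[1:]):
        # 0 if the transition onto the segment occurs in a normal route
        nrf.append(0 if (a, b) in normal_transitions else 1)
    nrf[-1] = 0                              # destination is always normal
    return nrf
\end{verbatim}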
\noindent \textbf{Training RSRNet.} Figure~\ref{fig:rl4oasd} illustrates the architecture of RSRNet. In particular, given a sequence of embedded traffic context features $\mathbf{x}_i^{tcf}$ ($1 \leq i \leq n$), where $n$ denotes the trajectory length, the LSTM obtains the hidden state $\mathbf{h}_i$ at each road segment accordingly. We then concatenate $\mathbf{h}_i$ with the embedded normal route feature $\mathbf{x}_i^{nrf}$, denoted by $\mathbf{z}_i=[\mathbf{h}_i;\mathbf{x}_i^{nrf}]$. Note that we do not let $\mathbf{x}_i^{nrf}$ go through the LSTM since it preserves the normal route feature at each road segment. RSRNet is trained using the noisy labels, and we adopt the cross-entropy loss between the predicted label $\hat{y}_i$ based on $\mathbf{z}_i$ and the noisy label $y_i$, i.e., $\mathcal{L}=\frac{1}{n}\sum_{i=1}^{n}\mathcal{H}(y_i,\hat{y}_i)$, where $\mathcal{H}$ denotes the cross-entropy operator. The trained $\mathbf{z}_i$ is then fed into ASDNet as part of the state representation.
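A minimal PyTorch-style sketch of RSRNet under this description is given below; the embedding sizes and hidden dimension are placeholders rather than the settings used in our experiments, and the embedding layer would be initialized with the pre-trained Toast vectors instead of randomly as here.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class RSRNet(nn.Module):
    def __init__(self, num_segments, tcf_dim=64, nrf_dim=8, hidden_dim=64):
        super().__init__()
        self.tcf_emb = nn.Embedding(num_segments, tcf_dim)  # init from Toast
        self.nrf_emb = nn.Embedding(2, nrf_dim)              # NRF token: 0/1
        self.lstm = nn.LSTM(tcf_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim + nrf_dim, 2)

    def forward(self, seg_ids, nrf):
        # seg_ids, nrf: (batch, seq_len) integer tensors
        h, _ = self.lstm(self.tcf_emb(seg_ids))
        z = torch.cat([h, self.nrf_emb(nrf)], dim=-1)  # z_i = [h_i; x_i^nrf]
        return z, self.classifier(z)

# One training step with noisy labels y
model = RSRNet(num_segments=1000)
seg_ids = torch.randint(0, 1000, (4, 9))
nrf = torch.randint(0, 2, (4, 9))
y = torch.randint(0, 2, (4, 9))
z, logits = model(seg_ids, nrf)
loss = F.cross_entropy(logits.reshape(-1, 2), y.reshape(-1))
loss.backward()
\end{verbatim}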
\subsection{Anomalous Subtrajectory Detection Network (ASDNet)}
\label{sec:ASDNet}
We implement ASDNet with reinforcement learning (RL). The task of online anomalous subtrajectory detection is to sequentially scan an ongoing trajectory and, for each road segment, decide whether an anomaly happens at that position. This motivates us to model the process as an MDP, involving actions, states, transitions
and rewards. In addition, we utilize the graph structure of a road network to build an environment for conducting the MDP, which helps the anomaly detection as discussed below.
\noindent \textbf{Road Network Enhanced Environment (RNEE).} Consider a transition $<e_{i-1},e_i>$ ($1 < i \leq n$) between two road segments $e_{i-1}$ and $e_i$. Based on the graph structure of road networks, we denote the out-degree and in-degree of a road segment $e_{i}$ as $e_{i}.out$ and $e_{i}.in$, respectively. We then label each road segment as normal (i.e., 0) or abnormal (i.e., 1), denoted by $e_{i}.l$, according to the following four lemmas.
\begin{lemma}
\label{lemma:1}
Given $e_{i-1}.out=1$ and $e_{i}.in=1$, we have $e_{i}.l=e_{i-1}.l$.
\end{lemma}
\begin{proof}
The transition is smooth and no road turn can be involved, and thus the labels on $e_{i-1}$ and $e_{i}$ remain the same.
\end{proof}
\begin{lemma}
\label{lemma:2}
Given $e_{i-1}.out=1$ and $e_{i}.in>1$, we have $e_{i}.l=0$, if $e_{i-1}.l=0$.
\end{lemma}
\begin{proof}
This can be proved by contradiction. Suppose $e_{i}.l=1$ when $e_{i-1}.l=0$; it implies that an anomaly happens on the road segment $e_{i}$, whose movement does not follow the normal road segment, denoted by $e'_{i}$. This contradicts the fact that $e_{i-1}.out=1$. Hence, the lemma holds.
\end{proof}
\begin{lemma}
\label{lemma:3}
Given $e_{i-1}.out>1$ and $e_{i}.in=1$, we have $e_{i}.l=1$, if $e_{i-1}.l=1$.
\end{lemma}
\begin{proof}
We also prove it by contradiction. Suppose $e_{i}.l=0$ when $e_{i-1}.l=1$; it indicates that an anomaly finishes on the road segment $e_{i}$, which is necessarily associated with a normal road segment $e'_{i-1}$ that is traveled. This contradicts the fact that $e_{i}.in=1$. Hence, the lemma holds.
\end{proof}
\begin{lemma}
\label{lemma:4}
Given $e_{i-1}.out>1$ and $e_{i}.in>1$, $e_{i-1}.l$ and $e_{i}.l$ are arbitrary, i.e., $e_{i}.l=e_{i-1}.l$ or $e_{i}.l \neq e_{i-1}.l$.
\end{lemma}
\begin{proof}
The lemma directly follows from the observation.
\end{proof}
We build an environment for conducting the MDP, where a decision for labeling a road segment is deterministic in one of three cases: (1) $e_{i-1}.out=1$, $e_{i}.in=1$ (Lemma~\ref{lemma:1});
(2) $e_{i-1}.out=1$, $e_{i}.in>1$ and $e_{i-1}.l=0$ (Lemma~\ref{lemma:2}); (3) $e_{i-1}.out>1$, $e_{i}.in=1$ and $e_{i-1}.l=1$ (Lemma~\ref{lemma:3}). In other cases, it makes a decision via the model.
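The three deterministic cases amount to a small rule check applied before invoking the RL model; a sketch with illustrative names is shown below.
\begin{verbatim}
def deterministic_label(prev_label, prev_out_deg, cur_in_deg):
    # Returns the label forced by Lemma 1, 2 or 3, or None if the
    # RL model must decide (Lemma 4).
    if prev_out_deg == 1 and cur_in_deg == 1:                     # Lemma 1
        return prev_label
    if prev_out_deg == 1 and cur_in_deg > 1 and prev_label == 0:  # Lemma 2
        return 0
    if prev_out_deg > 1 and cur_in_deg == 1 and prev_label == 1:  # Lemma 3
        return 1
    return None
\end{verbatim}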
As we can see, many decisions are handled by Lemmas~\ref{lemma:1}, \ref{lemma:2} and \ref{lemma:3}, which brings at least two advantages. First, the efficiency is improved since the time of constructing states and taking actions at some road segments is saved by checking the lightweight lemmas. Second, as many cases are pruned with deterministic decisions, the RL model only needs to be trained for the remaining cases, and some potential wrong decisions can therefore be avoided. Based on the road network enhanced environment, we define the MDP as follows.
\noindent \textbf{Actions.} We denote an action of the MDP by $a$, which is to label each road segment as normal or not. Note that an anomalous subtrajectory boundary can be identified when the labels of two adjacent road segments are different, and the action is only conducted by the RL model when a decision is required based on the road network enhanced environment.
\noindent \textbf{States.} The state $\mathbf{s}_i$ ($1 < i \leq n$) is represented by the concatenation of $\mathbf{z}_i$ and $\mathbf{v}(e_{i-1}.l)$, i.e., $\mathbf{s}_i=[\mathbf{z}_i;\mathbf{v}(e_{i-1}.l)]$, where $\mathbf{z}_i$ is obtained from RSRNet and $\mathbf{v}(e_{i-1}.l)$ denotes the vector of the label on the previous road segment $e_{i-1}$, obtained via an embedding layer whose parameters are learnable in ASDNet. The rationale for the state design is to capture the features from three aspects, i.e., traffic context, normal routes and the previous label.
\noindent \textbf{Transitions.} After we take an action $a$ to produce a label on a road segment at a state $\mathbf{s}$, the next road segment is to be labeled. There are two cases: (1) the labeling is deterministic at the next road segment based on the lemmas derived above, and in this case, the process simply continues to the next road segment; otherwise, (2) we construct a new state $\mathbf{s}'$, which corresponds to the next state of $\mathbf{s}$, and output an action via the RL model.
\noindent \textbf{Rewards.} We design the reward in terms of two aspects. One is called the \emph{local reward}, which aims to capture the local continuity of the labels of road segments. The rationale is that normal (or anomalous) road segments usually continue for several consecutive road segments, and intuitively the labels would not change frequently. The other is called the \emph{global reward}. During the training, we check the model performance on the task of online anomalous subtrajectory detection (OASD), and we take the model performance (i.e., the $F_1$-score) as a signal to guide the training.
The local reward is an intermediate reward, which encourages the continuity of the labels on road segments. It is defined as $r_i^{local}=\text{sign}(e_{i-1}.l=e_{i}.l)\cdot\text{cos}(\mathbf{z}_{i-1},\mathbf{z}_{i})$, where $\text{sign}(e_{i-1}.l=e_{i}.l)$ returns 1 if the condition $e_{i-1}.l=e_{i}.l$ is true and -1 otherwise, and $\text{cos}(\mathbf{z}_{i-1},\mathbf{z}_{i})$ denotes the cosine similarity between $\mathbf{z}_{i-1}$ and $\mathbf{z}_{i}$, which are obtained from RSRNet.
The global reward is obtained based on the OASD task. To do this, we manually label a development dataset of around 100 trajectories (more details on the manual labeling are presented in Section~\ref{sec:setup}), and we evaluate the model performance on the development dataset in terms of the $F_1$-score metric (more details regarding the evaluation metric are presented in Section~\ref{sec:setup}). We evaluate the model performance during the training and take the returned $F_1$-score as the global reward, denoted by $r^{global}$. We use this reward as a long-term reward, which is calculated when all road segments of a trajectory are labeled.
Then, the expected cumulative reward for a trajectory is defined as $R_n=\frac{1}{n-1}\sum_{i=2}^{n}r_i^{local}+r^{global}$.
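The reward computation can be sketched as follows, where \texttt{labels} and \texttt{z} hold the labels $e_i.l$ and the representations $\mathbf{z}_i$ of one trajectory, and \texttt{f1\_on\_dev} is the $F_1$-score measured on the development data; this is an illustrative sketch only.
\begin{verbatim}
import numpy as np

def trajectory_reward(labels, z, f1_on_dev):
    # R_n = (1/(n-1)) * sum_{i=2..n} r_i^local + r^global
    local = []
    for i in range(1, len(labels)):
        sign = 1.0 if labels[i] == labels[i - 1] else -1.0
        cos = np.dot(z[i - 1], z[i]) / (
            np.linalg.norm(z[i - 1]) * np.linalg.norm(z[i]) + 1e-8)
        local.append(sign * cos)
    return float(np.mean(local)) + f1_on_dev
\end{verbatim}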
\noindent \textbf{Policy Learning on the MDP.}
The core problem of an MDP is to learn a policy, which guides an agent to choose actions based on the constructed states such that the cumulative reward is maximal. We learn the policy via a policy gradient method~\cite{silver2014deterministic}, called the REINFORCE algorithm~\cite{williams1992simple}. To be specific, let $\pi_{\theta}(a|\mathbf{s})$ denote a stochastic policy, which is used to sample an action $a$ for a given state $\mathbf{s}$ via a neural network, whose parameters are denoted by $\theta$. Then, the gradients wrt the network parameters $\theta$ are estimated as $\nabla_{\theta}J(\theta) = \sum_{i=2}^{n}R_n\nabla_{\theta}\ln\pi_{\theta}(a_i|s_i)$, which can be optimized using an optimizer (e.g., Adam stochastic gradient descent).
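Below is a minimal sketch of the REINFORCE update for the policy network; the network architecture, state dimension and learning rate are placeholders, and the reward $R$ is assumed to be computed as above.
\begin{verbatim}
import torch
import torch.nn as nn

state_dim = 72   # placeholder: dimension of [z_i; v(e_{i-1}.l)]
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                       nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, R):
    # states: the states s_i at which the RL model must decide; R: R_n
    log_probs = []
    for s in states:
        dist = torch.distributions.Categorical(logits=policy(s))
        a = dist.sample()                     # action: label 0 or 1
        log_probs.append(dist.log_prob(a))
    loss = -R * torch.stack(log_probs).sum()  # gradient ascent on J(theta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}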
\noindent \textbf{Joint Training of RSRNet and ASDNet.} We jointly train the two networks (i.e., RSRNet and ASDNet). First, we map raw trajectories onto road networks and generate noisy labels as explained in Section~\ref{sec:weakly}. Then, we randomly sample 200 trajectories to pre-train RSRNet and ASDNet separately. The pre-training provides a warm start for the two networks.
During the training, we randomly sample 10,000 trajectories, and for each trajectory, we run 5 epochs to iteratively train RSRNet and ASDNet. In particular, we apply the learned policy from ASDNet to
refine the noisy labels, and the refined labels are used to train RSRNet to obtain a better representation $\mathbf{z}_i$ for each road segment. Then, $\mathbf{z}_i$ is used to construct the state to learn a better policy in ASDNet. The learned policy can further refine the labels; the two networks are thus jointly trained, and the best model is chosen during the process.
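The iterative procedure can be summarized by the following high-level sketch, where \texttt{rsrnet\_step} and \texttt{asdnet\_step} are illustrative callables wrapping the training of the two networks described above (not actual names from our code).
\begin{verbatim}
def joint_training(trajs, noisy_labels, rsrnet_step, asdnet_step, epochs=5):
    # trajs: sampled trajectories; noisy_labels: initial labels per trajectory
    for traj in trajs:
        labels = noisy_labels[traj]
        for _ in range(epochs):
            z = rsrnet_step(traj, labels)    # train RSRNet on current labels
            labels = asdnet_step(traj, z)    # train policy; refine the labels
\end{verbatim}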
\subsection{The RL4OASD Algorithm}
\label{sec:RL4OASD}
Our \texttt{RL4OASD} algorithm is based on the learned policy for detecting anomalous subtrajectories. The process is presented in Algorithm~\ref{alg:rl4oasd}. Specifically, \texttt{RL4OASD} accepts an ongoing trajectory in an online fashion and labels each road segment as normal (i.e., 0) or not (i.e., 1) (lines 1-12). It first labels the source or destination road segment as normal by definition (lines 2-3). It then checks whether one of the three deterministic cases according to Lemma~\ref{lemma:1}, \ref{lemma:2} or \ref{lemma:3} holds. If so, it labels the current road segment with the label of the previous road segment (lines 4-5). If not, it
constructs a state via calling RSRNet (lines 7-8), and samples an action to label the road segment based on the learned policy in ASDNet (lines 9-10). Finally, it returns the anomalous subtrajectories, which consist of the road segments with abnormal labels (i.e., 1).
We further develop a technique (called labeling with delay) for boosting the effectiveness. \texttt{RL4OASD} forms an anomalous subtrajectory whenever its boundary is identified, i.e., the boundary is identified at $e_{i-1}$ if $e_{i-1}.l = 1$ and $e_{i}.l=0$. Intuitively, this looks a bit rushed and may produce many short fragments from a detected trajectory. Therefore, we consider a delay mechanism, which scans $D$ more road segments following $e_{i-1}$, and forms the subtrajectory by extending its boundary to the last road segment with label 1 among those $D$ road segments, if such a road segment exists. It can be verified that labeling with delay does not incur much extra time cost, and it offers better continuity and avoids forming too many fragments.
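One plausible reading of the delay mechanism, as a post-processing of the label sequence, is sketched below; relabeling the intermediate segments as 1 reflects our interpretation of extending the boundary.
\begin{verbatim}
def delay_labels(labels, D):
    # When a 1 -> 0 boundary is met, look ahead up to D segments; if one of
    # them is labeled 1, extend the anomalous subtrajectory to the last such
    # segment (the segments in between become part of it).
    labels = list(labels)
    i = 1
    while i < len(labels):
        if labels[i - 1] == 1 and labels[i] == 0:
            window = labels[i:i + D]
            if 1 in window:
                last = i + max(j for j, l in enumerate(window) if l == 1)
                for k in range(i, last):
                    labels[k] = 1
        i += 1
    return labels
\end{verbatim}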
\begin{algorithm}[t!]
\caption{The \texttt{RL4OASD} algorithm}
\label{alg:rl4oasd}
\KwIn{
A map-matched trajectory $T = <e_1, e_2, ..., e_n>$ which is inputted in an online manner}
\KwOut{
The anomalous subtrajectories}
\For{$i=1,2,...,n$}{
\uIf{$i=1$ \textbf{or} $i=n$}{
$e_{i}.l \leftarrow 0$;
}
\uElseIf{Lemma \ref{lemma:1}, \ref{lemma:2} or \ref{lemma:3} is valid}{
$e_{i}.l \leftarrow e_{i-1}.l$;
}
\Else{
Call RSRNet for obtaining a representation $\mathbf{z}_{i}$;\\
Construct a state $\mathbf{s}_i=[\mathbf{z}_i;\mathbf{v}(e_{i-1}.l)]$;\\
Sample an action (i.e., 0 or 1), $a_i \sim \pi_{\theta}(a|s)$;\\
$e_{i}.l \leftarrow a_i$;\\
}
}
\textbf{Return} anomalous subtrajectories consisting of the road segments with the label 1;\\
\end{algorithm}
\noindent \textbf{Time complexity.} The time complexity of the \texttt{RL4OASD} algorithm is $O(n)$, where $n$ denotes the length of a detected trajectory. The time is dominated by the two networks, i.e., RSRNet and ASDNet. We analyze them as follows. In RSRNet, the time cost of one road segment consists of (1) that of obtaining the embeddings of TCF and NRF, which are both $O(1)$, and (2) that of obtaining $\mathbf{z}$ via a classic LSTM cell, which is $O(1)$. In ASDNet, the time cost of one road segment consists of (1) that of constructing a state, where the part $\mathbf{z}$ has been computed in RSRNet and the part $\mathbf{v}(e.l)$ is obtained via an embedding layer, which is $O(1)$, and (2) that of sampling an action via the learned policy, which is $O(1)$. As shown in Algorithm~\ref{alg:rl4oasd}, the two networks are called at most $n$ times. Therefore, the time complexity of \texttt{RL4OASD} is $O(n)\times O(1)=O(n)$. We note that this $O(n)$ time complexity matches the best time complexity of existing algorithms~\cite{chen2013iboat,wu2017fast,liu2020online} for the trajectory or subtrajectory detection task, and can largely meet practical needs in online scenarios, as shown in our experiments.
\section{METHODOLOGY}
\label{sec:method}
\subsection{Overview of \texttt{RL4OASD}}
As we mentioned in section~\ref{sec:problem}, the task of online anomalous trajectory detection is modelled via a supervised learning method, which fails to utilize in real applications, as it needs a large amount of labelled data. In addition, determining an ongoing trajectory is anomalous or not is a decision-making process, which could be modelled as a as a Markov decision process (MDP)~\cite{puterman2014markov}. We propose a weakly supervised framework, namely RL4OASD via the reinforcement learning. The framework consists of three components, namely data preprocessing, RSRNet and ASDNet(see Figure~\ref{fig:rl4oasd}).
In data preprocessing, data preprocessing aims to provide noisy labelled data for training RSRNet. In particular, we conduct map-matching on raw trajectories, mapping them onto road networks and obtain the map-matched trajectories. The map-matching trajectory is denoted as a sequence of road segments. Then to save more efforts and provide more information for following phases, we design a noisy labelling method to label each road segment as 0 representing normal and 1 representing abnormal via counting each transition frequency from historical trajectories. The noisy labels are then used to train the Road Segment Representation Network (RSRNet) in weakly supervised manner(section~\ref{sec:weakly}). Also, we \emph{manually labelled} 1,688 (resp.1,057) SD pairs, which correspond to 558,098 (resp. 163,027) raw trajectories before map-matching for Chengdu (resp. Xi'an), which is utilized as test data to evaluate the performance of our method.
In RSRNet, RSRNet aims to provide representations of road segments as the state for the MDP in ASDNet, which provide a method to model subtrajectories and take advantage of road networks. We capture two types of representations. One is to capture the traffic context feature representation by Toast~\cite{chen2021robust}. Another is to capture normal route feature by calculating the fraction of the each road segment involved in the map-matched trajectories for each SD pair. Those embeddings are fed into the Anomalous Subtrajectory Detection Network (ASDNet) as the state, to provide observations for labeling anomalous subtrajectories. We also implement the RSRNet with RNN, which is able to capture the sequential information of trajectories (Section~\ref{sec:RSRNet}).
In ASDNet, we build a road network enhanced environment, which provides a more effective way to label each road segment as normal or not. The task is modeled as a \emph{Markov decision process}~\cite{puterman2014markov}, whose policies are learned via a policy gradient method~\cite{silver2014deterministic} (Section~\ref{sec:ASDNet}).
Note that the ASDNet outputs refined labels of road segments, which are further used to train RSRNet again, and then RSRNet provides better observations for ASDNet to train better policies. The process iterates and we call the resulting algorithm combined RSRNet and ASDNet as RL4OASD (Section~\ref{sec:RL4OASD}).
\subsection{Data Preprocessing for Noisy Labels}
\label{sec:weakly}
We explore the network training in a weakly supervised setting (i.e., using noisy labels), and take the spatial and temporal factors into consideration. More specifically, the process of obtaining noisy labels includes four steps.
\underline{Step-1}: we group historical trajectories in a dataset wrt different SD-Pairs and time slots. Here, we have 24 time slots if we partition one day with one hour granularity, and we say a trajectory falls into a time slot if its starting travel time is within the slot. e.g., in Figure~\ref{fig:example}, we have three map-matched trajectories $T_1$, $T_2$ and $T_3$ under the source $e_1$ and destination $e_{10}$. Assuming the starting travel times of $T_1$, $T_2$ and $T_3$ are 9:00, 17:00, and 9:10, respectively. Therefore, $T_1$ and $T_3$ are in the same group (denoted by Group-1) because they are within the same time slot with one hour granularity, and $T_2$ is in the other group (denoted by Group-2).
\underline{Step-2}: we then capture the fraction of \emph{transitions} (we defined in section~\ref{sec:problem}) wrt all trajectories in each group, e.g., in Group-1 (i.e., $T_1$ and $T_3$), suppose that there are 9 trajectories traveling the same route as $T_1$ and only one trajectory traveling traveling route as $T_3$, the fraction of \emph{transition} $<e_1,e_2>$ is calculated by $1/10=0.1$ since it appears only once in all 10 trajectories.
\underline{Step-3}: for each trajectory, we refer to the group it belongs to
and map it to a sequence of \emph{transition} fractions wrt each road segment. {\color{blue}{E.g., for a trajectory travelling the route as $T_3$, its mapped \emph{transition} sequence is $<<*,e_1>, <e_1,e_2>, <e_2,e_4>, <e_4, e_{11}>, <e_{11}, e_{12}>, <e_{12}, e_{13}>, <e_{13}, e_{14}>, <e_{14}, e_{15}>, <e_{15}, e_{10}>>$, where we pad the initial transition as $<*,e_1>$, and the corresponding fraction sequence is $<1.0,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,1.0>$}. Note that the fractions on the source $e_1$ (corresponding to $<*,e_1>$) and the destination $e_{10}$ (corresponding to $<e_{15}, e_{10}>$) are always set to 1.0, since the source and destination road segments are definitely travelled within its group.}
\underline{Step-4}: the noisy labels can be obtained with a threshold is given, denoted by $\alpha$, e.g., the noisy labels of $T_3$ is $<0,1,1,1,1,1,1,1,1,0>$ given $\alpha=0.1$, where 0 denotes a normal road segment, whose fraction is larger than $\alpha$ meaning that the road segment is frequently traveled, and 1 otherwise.
The noisy labels are then utilized in the training of RSRNet. Compared with manually labeling~\cite{zhang2011ibat,chen2013iboat, lv2017outlier}, such noisy labels can be obtained more easily.
\subsection{Road Segment Representation Network (RSRNet)}
\label{sec:RSRNet}
RSRNet aims to provide representations of road segments as the state for the MDP in ASDNet, {\color{blue}{which provides a method to model subtrajectories and take advantage of road networks.}} To design RSRNet, we adopt LSTM~\cite{hochreiter1997long} structure, which accepts trajectories with different lengths and captures the sequential information behind trajectories. In addition, we embed two types of features into representations, one is to capture the traffic context on road networks, and the other one is to capture the normal routes for a given SD-Pair.
\noindent \textbf{Traffic Context Feature (TCF).} A map-matched trajectory corresponds to a sequence of road segments and each road segment is naturally in the form of the token (i.e., the road segment id). We pre-train each road segment in the embedding layer of RSRNet as a vector, which embeds traffic context features (e.g., driving speed, trip duration, road type) into it. To do this, we employ Toast~\cite{chen2021robust} to learn road segments, which is the state-of-the-art road network representation learning model to support road segment based applications. The learned road segment representations will be used to initialize the embedding
layer in RSRNet, and those embeddings can still be further optimized with the model training. Other road network representation models~\cite{jepsen2019graph,wang2019learning} are also applicable for the task.
\noindent \textbf{Normal Route Feature (NRF).} Given an SD-Pair in a time slot, we first infer the normal routes within it. Intuitively, the normal trajectories are often followed the same route. If a trajectory contains some road segments that are rarely traveled by others, and it probably contains anomalies. Therefore, we infer a route as the normal route by calculating the fraction of the trajectories passing through the route wrt all trajectories within its SD-Pair. The inferred results are obtained via comparing a threshold (denoted by $\delta$) with the fraction of each road segment, i.e., a route is inferred as the normal if the fraction is larger than the threshold, vice versa.
For example, given $\delta=0.4$, recall that the Group-1 with the trajectory $T_1$ (appearing 9 times) and $T_3$ (appearing 1 time) in Figure~\ref{fig:example}, we infer $T_1$ as the normal route, because its fraction ($9/10=0.9$) is larger than the threshold $\delta$.
Based on the inferred normal routes, we then extract the features of a detected trajectory. For example, given a detected trajectory followed the route as $T_3$, we partition the trajectory into a sequence of transitions, and we detect whether a road segment in a detected trajectory is normal (i.e., 0) if the transition on that road segment occurs in the inferred normal routes; 1 otherwise. For example, the extracted normal features of $T_3$ are $<0,1,1,1,1,1,1,1,0>$, where the feature on the road segment $e_2$ is 1 since the transition $<e_1,e_2>$ does not occur in the normal route $T_1$. {\Comment{Note that the extracted normal features of the source and destination road segments are always 0 (i.e., normal).
After obtaining the normal route features that are in the form of tokens, we then obtain a vector for the feature by embedding the tokens as one-hot vectors. We call the obtained vectors embedded normal route features.
}}
\noindent \textbf{Training RSRNet.} Figure~\ref{fig:rl4oasd} illustrates the architecture of RSRNet. In particular, given a sequence of embedded traffic context features $\mathbf{x_i}^{t}$ ($1 \leq i \leq n$), where $n$ denotes the trajectory length, and the LSTM obtains the hidden states $\mathbf{h_i}$ at each road segment, accordingly. We then concatenate $\mathbf{h_i}$ with the embedded normal route feature $\mathbf{x_i}^{n}$, denoted by $\mathbf{z}_i=[\mathbf{h}_i;\mathbf{x}_i^{n}]$. {\Comment{Note that the two parts $\mathbf{h}$ and $\mathbf{x}^{n}$ capture the sequential trajectory information and normal route information, respectively. They are concatenated and further used to denote the state of RL in ASDNet. Note that we do not let $\mathbf{x}^{n}$ go through the LSTM since it preserves the normal route feature at each road segment.
}}
RSRNet is trained using the noisy labels. Specifically, we adopt the cross-entropy loss between the predicted label $\hat{y}_i$ (computed from $\mathbf{z}_i$) and the noisy label $y_i$, i.e.,
\begin{equation}
\label{eq:entropy}
\mathcal{L}=\frac{1}{n}\sum_{i=1}^{n}\mathcal{H}(y_i,\hat{y}_i),
\end{equation}
where $\mathcal{H}$ denotes the cross-entropy operator.
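For concreteness, the following is a minimal sketch (in Python with PyTorch) of the described architecture and one cross-entropy training step; layer sizes are placeholders and batching/padding details are omitted, so this is an illustration rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class RSRNet(nn.Module):
    def __init__(self, num_segments, tcf_dim=64, nrf_dim=8, hidden_dim=64):
        super().__init__()
        self.tcf_emb = nn.Embedding(num_segments, tcf_dim)  # init from Toast
        self.nrf_emb = nn.Embedding(2, nrf_dim)              # NRF tokens 0 / 1
        self.lstm = nn.LSTM(tcf_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim + nrf_dim, 2)

    def forward(self, segment_ids, nrf_tokens):
        # segment_ids, nrf_tokens: (batch, n) integer tensors
        h, _ = self.lstm(self.tcf_emb(segment_ids))  # (batch, n, hidden_dim)
        z = torch.cat([h, self.nrf_emb(nrf_tokens)], dim=-1)
        return z, self.classifier(z)                 # z_i and label logits

# One training step against (noisy) labels y of shape (batch, n).
model = RSRNet(num_segments=1000)
criterion = nn.CrossEntropyLoss()
segs = torch.randint(0, 1000, (1, 9))
nrf = torch.randint(0, 2, (1, 9))
y = torch.randint(0, 2, (1, 9))
z, logits = model(segs, nrf)
loss = criterion(logits.view(-1, 2), y.view(-1))
loss.backward()
\end{verbatim}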
\subsection{Anomalous Subtrajectory Detection Network (ASDNet)}
\label{sec:ASDNet}
We implement ASDNet with reinforcement learning (RL). The task of online anomalous subtrajectory detection is to sequentially scan an ongoing trajectory and, for each road segment, decide whether an anomaly happens at that position. This motivates us to model the process as an MDP, involving actions, states, transitions
and rewards. {\Comment{In addition, we utilize the graph structure of a road network to build an environment for conducting the MDP, which reduces the number of decisions made by the RL model and improves the effectiveness.
\noindent \textbf{Road Network Enhanced Environment (RNEE).} Consider a transition $<e_{i-1},e_i>$ ($1 < i \leq n$) between two road segments $e_{i-1}$ and $e_i$. Based on the graph structure of road networks, we denote the out-degree and in-degree of a road segment $e_{i}$ as $e_{i}.out$ and $e_{i}.in$, respectively. $e_{i}.l$ denotes the label of the road segment $e_{i}$, where 0 denotes normal and 1 denotes abnormal.
\if 0
\begin{lemma}
\label{lemma:1}
Given $e_{i-1}.out=1$ and $e_{i}.in=1$, we have $e_{i}.l=e_{i-1}.l$.
\end{lemma}
\begin{proof}
The transition is smooth without any road turns can be involved, and thus the labels on $e_{i-1}$ and $e_{i}$ maintain the same.
\end{proof}
\begin{lemma}
\label{lemma:2}
Given $e_{i-1}.out=1$ and $e_{i}.in>1$, we have $e_{i}.l=0$, if $e_{i-1}.l=0$.
\end{lemma}
\begin{proof}
This can be proved by contradiction. Suppose $e_{i}.l=1$ when $e_{i-1}.l=0$, it implies that an anomaly happened on the road segment $e_{i}$, whose movement do not follow the normal road, denoted by $e'_{i}$. This contradicts the fact that $e_{i-1}.out=1$. Hence, the lemma holds.
\end{proof}
\begin{lemma}
\label{lemma:3}
Given $e_{i-1}.out>1$ and $e_{i}.in=1$, we have $e_{i}.l=1$, if $e_{i-1}.l=1$.
\end{lemma}
\begin{proof}
We also prove it by contradiction. Suppose $e_{i}.l=0$ when $e_{i-1}.l=1$, it indicates that an anomaly finished on the road segment $e_{i}$, which is definitely associated with a normal road $e'_{i-1}$ that is traveled. This contradicts the fact that $e_{i}.in=1$. Hence, the lemma holds.
\end{proof}
\begin{lemma}
\label{lemma:4}
Given $e_{i-1}.out>1$ and $e_{i}.in>1$, $e_{i-1}.l$ and $e_{i}.l$ are arbitrary, i.e., $e_{i}.l=e_{i-1}.l$ or $e_{i}.l \neq e_{i-1}.l$.
\end{lemma}
\begin{proof}
The lemma directly follows the observation.
\end{proof}
\fi
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/rule.pdf}
\vspace{-3mm}
\caption{Illustrating three cases for deterministic labeling.}
\label{fig:rule}
\vspace{-3mm}
\end{figure}
We build an environment (called RNEE) for conducting the MDP, where the decision for labeling a road segment is deterministic in one of three cases. We illustrate the three cases in Figure~\ref{fig:rule}.
\begin{enumerate}
\item If $e_{i-1}.out=1$, $e_{i}.in=1$, then $e_{i}.l=e_{i-1}.l$.
\item If $e_{i-1}.out=1$, $e_{i}.in>1$ and $e_{i-1}.l=0$, then $e_{i}.l=0$.
\item If $e_{i-1}.out>1$, $e_{i}.in=1$ and $e_{i-1}.l=1$, then $e_{i}.l=1$.
\end{enumerate}
We explain the three rules as follows. The common intuition behind them is that the label on $e_{i}$ can be determined in some cases when we know the road network structure and the previous label on $e_{i-1}$.
For (1), the case can be seen as a ``smooth transition'' without any road turns, and thus the label on $e_{i}$ is the same as that on $e_{i-1}$. For (2) and (3), both cases can be shown by contradiction. Suppose $e_{i}.l=1$ (resp. $e_{i}.l=0$) given $e_{i-1}.l=0$ (resp. $e_{i-1}.l=1$) in Case 2 (resp. Case 3); this implies that an anomaly happened on the road segment $e_{i}$ (resp. $e_{i-1}$), whose movement does not follow a normal road $e'_{i}$ (resp. $e'_{i-1}$), where $e'_{i} \neq e_{i}$ (resp. $e'_{i-1} \neq e_{i-1}$). This contradicts the fact that $e_{i-1}.out=1$ (resp. $e_{i}.in=1$).
The RNEE brings at least two advantages. First, efficiency is improved since in these cases a label is obtained by checking the rules instead of calling the RL model, which saves the time of taking actions.
Second, as many cases are pruned with deterministic decisions, the RL model is lighter to train since it only tackles the remaining cases, and some potential wrong decisions can therefore be avoided.
Based on the RNEE, we define the MDP as follows.
}}
\noindent \textbf{Actions.} We denote an action of the MDP by $a$, which labels a road segment as normal or not. Note that an anomalous subtrajectory boundary can be identified when the labels of two adjacent road segments are different, and an action is taken by the RL model only for the cases that are not covered by the aforementioned three cases.
\noindent \textbf{States.} The state $\mathbf{s}_i$ ($1 < i \leq n$) is represented by the concatenation of $\mathbf{z}_i$ and $\mathbf{v}(e_{i-1}.l)$, i.e., $\mathbf{s}_i=[\mathbf{z}_i;\mathbf{v}(e_{i-1}.l)]$, where $\mathbf{z}_i$ is obtained from RSRNet and $\mathbf{v}(e_{i-1}.l)$ denotes a vector of the label on the previous road segment $e_{i-1}.l$ by embedding its token (i.e., 0 or 1). The rationale for the state design is to capture the features from three aspects, i.e., traffic context, normal routes and the previous label.
\noindent \textbf{Transitions.} We take an action $a$ to produce a label on a road segment at a state $\mathbf{s}$, and then move on to label the next road segment. There are two cases: (1) the label of the next road segment can be determined based on the RNEE, and in this case the process simply continues; {\Comment{(2) otherwise, a new state, denoted by $\mathbf{s}'$, is constructed at the next road segment, which corresponds to the next state of $\mathbf{s}$ after the action is taken.}}
\noindent \textbf{Rewards.} We design the reward in terms of two aspects. One is called the \emph{local reward}, which aims to capture the local continuity of the labels of road segments. The rationale is that normal (or anomalous) road segments usually continue for several consecutive positions, so the labels should not change frequently. {\Comment{The other is called the \emph{global reward}, which aims to indicate the quality of the refined labels. Intuitively, with high-quality refined labels, we obtain a lower cross-entropy loss as computed by Equation~\ref{eq:entropy} from RSRNet. We take the loss as feedback to guide the training of ASDNet.
}}
The local reward is an intermediate reward, which encourages the continuity of the labels on road segments. It is defined as
\begin{equation}
r_i^{local}=\text{sign}(e_{i-1}.l=e_{i}.l)\cdot\text{cos}(\mathbf{z}_{i-1},\mathbf{z}_{i}),
\end{equation}
where $\text{sign}(e_{i-1}.l=e_{i}.l)$ returns 1 if the condition $e_{i-1}.l=e_{i}.l$ is true and -1 otherwise; $\text{cos}(\mathbf{z}_{i-1},\mathbf{z}_{i})$ denotes the cosine similarity between $\mathbf{z}_{i-1}$ and $\mathbf{z}_{i}$, which are obtained from RSRNet. {\Comment{We choose the cosine similarity because it outputs a normalized value, which matches the output range (i.e., between 0 and 1) of the global reward.
}}
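For illustration, a minimal sketch (in Python with PyTorch) of the local reward at one position, assuming $\mathbf{z}_{i-1}$ and $\mathbf{z}_{i}$ are given as 1-D tensors from RSRNet:
\begin{verbatim}
import torch.nn.functional as F

def local_reward(label_prev, label_curr, z_prev, z_curr):
    """sign(l_{i-1} == l_i) * cos(z_{i-1}, z_i)."""
    sign = 1.0 if label_prev == label_curr else -1.0
    cos = F.cosine_similarity(z_prev, z_curr, dim=0)  # z_* are 1-D tensors
    return sign * cos.item()
\end{verbatim}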
{\Comment{The global reward is designed to measure the quality of refined labels by ASDNet. To obtain the reward, we feed the refined labels into RSRNet, and compute the global reward as
\begin{equation}
r^{global}= \frac{1}{1+\mathcal{L}},
\end{equation}
where $\mathcal{L}$ denotes the cross-entropy loss in Equation~\ref{eq:entropy}. We note that this reward lies between 0 and 1, and it aligns the objective of RSRNet well with that of ASDNet (a smaller loss $\mathcal{L}$ means a larger global reward).
}}
\noindent \textbf{Policy Learning on the MDP.}
The core problem of an MDP is to learn a policy, which guides an agent to choose actions based on the constructed states such that the cumulative reward, denoted by $R_n$, is maximized. We learn the policy via a policy gradient method~\cite{silver2014deterministic}, namely the REINFORCE algorithm~\cite{williams1992simple}. To be specific, let $\pi_{\theta}(a|\mathbf{s})$ denote a stochastic policy, which is used to sample an action $a$ for a given state $\mathbf{s}$ via a neural network, whose parameters are denoted by $\theta$. Then, the gradients of the objective $J(\theta)$ with respect to the network parameters $\theta$ are estimated as
\begin{equation}
\label{eq:policy}
\nabla_{\theta}J(\theta) = \sum_{i=2}^{n}R_n\nabla_{\theta}\ln\pi_{\theta}(a_i|\mathbf{s}_i),
\end{equation}
which can be optimized using an optimizer (e.g., Adam stochastic gradient ascent). Here, $R_n$ is the expected cumulative reward for a trajectory, which is defined as follows.
\begin{equation}
R_n=\frac{1}{n-1}\sum_{i=2}^{n}r_i^{local}+r^{global}.
\end{equation}
\noindent \textbf{Joint Training of RSRNet and ASDNet.} We jointly train the two networks (i.e., RSRNet and ASDNet). First, we map raw trajectories onto road networks and generate the noisy labels as explained in Section~\ref{sec:weakly}. Then, we randomly sample 200 trajectories to pre-train RSRNet and ASDNet, separately. {\Comment{In particular, for RSRNet, we train the network in a supervised manner with the noisy labels. For ASDNet, we specify its actions as the noisy labels, and train the policy network via a gradient ascent step as computed by Equation~\ref{eq:policy}. The pre-training provides a warm-start for the two networks, whose parameters have incorporated some information captured from the normal routes before the joint training.
}}
During the joint training, we randomly sample 10,000 trajectories, and for each trajectory, we run 5 epochs to iteratively train RSRNet and ASDNet. In particular, we apply the learned policy from ASDNet to
refine the noisy labels, and the refined labels are used to train RSRNet to obtain a better representation $\mathbf{z}_i$ on each road segment. Then, $\mathbf{z}_i$ is used to construct the state to learn a better policy in ASDNet. The learned policy can further refine the labels, and thus the two networks are trained jointly, with the best model chosen during the process.
\subsection{The RL4OASD Algorithm}
\label{sec:RL4OASD}
Our \texttt{RL4OASD} algorithm is based on the learned policy for detecting anomalous subtrajectories. The process is presented in Algorithm~\ref{alg:rl4oasd}. Specifically, \texttt{RL4OASD} accepts an ongoing trajectory in an online fashion and labels each road segment as normal (i.e., 0) or not (i.e., 1) (lines 1-12). It first labels the source and destination road segments as normal by definition (lines 2-3). It then checks whether one of the three deterministic cases holds; if so, it labels the current road segment with the label of the previous road segment (lines 4-5). If not, it
constructs a state via calling RSRNet (lines 7-8), and samples an action to label the road segment based on the learned policy in ASDNet (lines 9-10). Finally, it returns the anomalous subtrajectories, which involve the road segments with the abnormal label (i.e., 1).
We further develop a technique (called labeling with delay) for boosting the effectiveness. \texttt{RL4OASD} forms an anomalous subtrajectory whenever its boundary is identified, i.e., the boundary is identified at $e_{i-1}$ if $e_{i-1}.l = 1$ and $e_{i}.l=0$. Intuitively, forming an anomalous subtrajectory immediately may be premature and may produce many short fragments from a detected trajectory. {\Comment{
Therefore, we consider a delay technique as a post-processing step. Specifically, it scans $D$ more road segments that follow $e_{i-1}$ when forming an anomalous subtrajectory. Among the $D$ road segments, we choose the final position $j$ ($i-1<j \leq i-1+D$) whose road segment has label 1, and then convert the 0's to 1's between the positions $i-1$ and $j$.
It can be verified that labeling with delay does not incur much time cost, and it offers better continuity by avoiding too many fragments.
}}
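As an illustration, a minimal sketch (in plain Python) of the labeling-with-delay post-processing, where \texttt{labels} is the 0/1 label sequence and \texttt{boundary} is the index of $e_{i-1}$:
\begin{verbatim}
def delayed_labeling(labels, boundary, D):
    """Scan up to D further segments after the identified boundary and
    fill intermediate 0's with 1's up to the last segment labeled 1
    within the window (no change if the window contains no 1)."""
    end = min(boundary + D, len(labels) - 1)
    j = max((k for k in range(boundary, end + 1) if labels[k] == 1),
            default=boundary)
    for k in range(boundary, j + 1):
        labels[k] = 1
    return labels
\end{verbatim}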
\begin{algorithm}[t!]
\caption{The \texttt{RL4OASD} algorithm}
\label{alg:rl4oasd}
\KwIn{
A map-matched trajectory $T = <e_1, e_2, ..., e_n>$ which is inputted in an online manner}
\KwOut{
The anomalous subtrajectories}
\For{$i=1,2,...,n$}{
\uIf{$i=1$ \textbf{or} $i=n$}{
$e_{i}.l \leftarrow 0$;
}
\uElseIf{Case 1, 2 or 3 is valid}{
$e_{i}.l \leftarrow e_{i-1}.l$;
}
\Else{
Call RSRNet for obtaining a representation $\mathbf{z}_{i}$;\\
Construct a state $\mathbf{s}_i=[\mathbf{z}_i;\mathbf{v}(e_{i-1}.l)]$;\\
Sample an action (i.e., 0 or 1), $a_i \sim \pi_{\theta}(a|s)$;\\
$e_{i}.l \leftarrow a_i$;\\
}
}
\textbf{Return} anomalous subtrajectories consisting of the road segments with the label 1;\\
\end{algorithm}
\noindent \textbf{Time complexity.} The time complexity of the \texttt{RL4OASD} algorithm is $O(n)$, where $n$ denotes the length of a detected trajectory. The time is dominated by the two networks, i.e., RSRNet and ASDNet. We analyze them as follows. In RSRNet, the time cost for one road segment consists of (1) that of obtaining the embeddings of TCF and NRF, which are both $O(1)$, and (2) that of obtaining $\mathbf{z}$ via a classic LSTM cell, which is $O(1)$. In ASDNet, the time cost for one road segment consists of (1) that of constructing a state, where the part $\mathbf{z}$ has been computed in RSRNet and the part $\mathbf{v}(e.l)$ is obtained via an embedding layer, which is $O(1)$, and (2) that of sampling an action via the learned policy, which is $O(1)$. As shown in Algorithm~\ref{alg:rl4oasd}, the two networks are called at most $n$ times. Therefore, the time complexity of \texttt{RL4OASD} is $O(n)\times O(1)=O(n)$. We note that this $O(n)$ time complexity matches the best time complexity of existing algorithms~\cite{chen2013iboat,wu2017fast,liu2020online} for the trajectory or subtrajectory detection task, and it can largely meet practical needs for online scenarios as shown in our experiments.
\section{METHODOLOGY}
\label{sec:method}
\subsection{Overview of \texttt{RL4OASD}}
\label{sec:overview_rl4oasd}
Determining whether a subtrajectory of an ongoing trajectory is anomalous or not is a decision-making process, which could be modeled as a Markov decision process (MDP)~\cite{puterman2014markov}. We propose a weakly supervised framework called \texttt{RL4OASD} (see Figure~\ref{fig:rl4oasd} for an overview).
The framework consists of three components, namely data preprocessing (Section~\ref{sec:weakly}), RSRNet (Section~\ref{sec:RSRNet}) and ASDNet (Section~\ref{sec:ASDNet}).
In data preprocessing,
we conduct map-matching on raw trajectories, mapping them onto road networks and obtain the map-matched trajectories.
Based on the map-matched trajectories, we compute some labels of the road segments involved in trajectories using some heuristics based on the historical transition data among road segments.
The labels, which are noisy, will be used to train
RSRNet in a weakly supervised manner.
In RSRNet,
we learn the representations of road segments based on both traffic context features and normal route features. These representations in the form of vectors will be fed into ASDNet to define the states of the MDP for labeling anomalous subtrajectories.
In ASDNet,
we model the task of labeling anomalous subtrajectories as an MDP and learn the policy via a policy gradient method~\cite{silver2014deterministic}.
ASDNet outputs refined labels of road segments, which are further used to train RSRNet again, and then RSRNet provides better observations for ASDNet to train better policies. The process iterates, and we call the resulting algorithm, which combines RSRNet and ASDNet, \texttt{RL4OASD} (Section~\ref{sec:RL4OASD}).
{{We explain some insights behind the effectiveness of \texttt{RL4OASD} as follows. First, in RSRNet,
both the traffic context features (e.g., driving speed, trip duration) and the normal route features that are associated with the anomalies are captured by the model. Second, in ASDNet, the task of labeling anomalous subtrajectories is formulated as an MDP, whose policy is learned in a data-driven manner, instead of using heuristics (e.g., some pre-defined parameters) as the existing studies do.
Third,
{ RSRNet is first trained with noisy labels for dealing with the cold-start problem.
Then, RSRNet and ASDNet are trained iteratively and collaboratively, where RSRNet uses ASDNet's outputs as labels and ASDNet uses the outputs of RSRNet as features.}
}}
\subsection{Data Preprocessing}
\label{sec:weakly}
The data processing component involves a map matching process~\cite{yang2018fast} and a process of obtaining noisy labels.
The latter produces some noisy labels, which will be utilized to pre-train the representations in RSRNet and also to provide a warm-start for the policy learning in ASDNet.
Recall that we do not assume the availability of real labels for training since manually labeling the data is time-consuming.
Specifically, the process of obtaining noisy labels involves four steps.
\underline{Step-1}:
We group historical trajectories in a dataset with respect to different SD pairs and time slots. Here, we have 24 time slots if we partition one day with one-hour granularity, and we say a trajectory falls into a time slot if its starting travel time is within the slot. For example, in Figure~\ref{fig:example}, we have three map-matched trajectories $T_1$, $T_2$ and $T_3$ with the source $e_1$ and destination $e_{10}$. Assume that the starting travel times of $T_1$, $T_2$ and $T_3$ are 9:00, 9:10, and 9:30, respectively. Then, all three trajectories are in the same group because they fall into the same one-hour time slot.
\underline{Step-2}: We then compute the fraction of each \emph{transition} from one road segment to another with respect to all trajectories in each group. Suppose that there are 5 trajectories traveling along $T_1$, 4 along $T_2$, and only 1 along $T_3$. Then the fraction of the \emph{transition} $<e_1,e_2>$ is calculated as $5/10=0.5$ since it appears 5 times (in the trajectories along $T_2$ and $T_3$) among all 10 trajectories.
\underline{Step-3}: For each trajectory, we refer to the group it belongs to and map it to a sequence of \emph{transition} fractions with respect to each road segment. For example, for a trajectory traveling the route as $T_3$, its mapped \emph{transition} sequence is $<<*,e_1>, <e_1,e_2>, <e_2,e_4>, <e_4, e_{11}>, <e_{11}, e_{12}>, <e_{12}, e_{13}>, <e_{13}, e_{14}>, <e_{14}, e_{15}>, <e_{15}, e_{10}>>$, where we pad the initial transition as $<*,e_1>$, and the corresponding fraction sequence is $<1.0,0.5,0.5,0.1,0.1,0.1,0.1,0.1,1.0>$. Note that the fractions on the source $e_1$ (corresponding to $<*,e_1>$) and the destination $e_{10}$ (corresponding to $<e_{15}, e_{10}>$) are always set to 1.0, since the source and destination road segments are definitely travelled within its group.
\underline{Step-4}: We obtain the noisy labels by using a threshold parameter $\alpha$: a road segment is labeled 0 (normal) if its fraction is larger than $\alpha$, meaning that the road segment is frequently traveled, and 1 otherwise. For example, by using the threshold $\alpha=0.5$, we obtain the noisy labels of $T_3$ as $<0,1,1,1,1,1,1,1,0>$.
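For concreteness, a minimal sketch (in Python) of Steps 2--4 for one group of map-matched trajectories; trajectories are assumed to be given as lists of road segment ids, and repeated transitions within one trajectory are not treated specially.
\begin{verbatim}
from collections import Counter

def noisy_labels(group, alpha):
    """Noisy labels (0: normal, 1: anomalous) for every trajectory in a
    group of trajectories sharing an SD pair and time slot."""
    # Step 2: fraction of each transition over all trajectories in the group.
    counts = Counter(t for traj in group for t in zip(traj, traj[1:]))
    total = len(group)
    labels = []
    for traj in group:
        # Step 3: transition fractions per road segment; the padded initial
        # transition and the destination are always set to 1.0.
        fracs = [1.0] + [counts[t] / total for t in zip(traj, traj[1:])]
        fracs[-1] = 1.0
        # Step 4: threshold the fractions to obtain the noisy labels.
        labels.append([0 if f > alpha else 1 for f in fracs])
    return labels
\end{verbatim}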
\subsection{Road Segment Representation Network (RSRNet)}
\label{sec:RSRNet}
In RSRNet, we adopt the LSTM~\cite{hochreiter1997long} structure, which accepts trajectories with different lengths and captures the sequential information behind trajectories. We embed two types of features into representations, namely the traffic context features on road networks and the normal route features for a given SD pair.
\smallskip
\noindent \textbf{Traffic Context Feature (TCF).} A map-matched trajectory corresponds to a sequence of road segments and each road segment is naturally in the form of a token (i.e., the road segment id). We pre-train each road segment in the embedding layer of RSRNet as a vector, which captures traffic context features (e.g., driving speed, trip duration, road type). To do this, we employ Toast~\cite{chen2021robust}, which is a recent road network representation learning model to support road segment based applications. The learned road segment representations will be used to initialize the embedding layer in RSRNet, and those embeddings can be further optimized with the model training. Other road network representation models~\cite{jepsen2019graph,wang2019learning} are also applicable for the task.
\smallskip
\noindent \textbf{Normal Route Feature (NRF).} Given an SD pair in a time slot, we first infer the normal routes within it. Intuitively, normal trajectories often follow the same route; if a trajectory contains some road segments that are rarely traveled by others, it probably contains anomalies. Therefore, we infer a route as a normal route by calculating the fraction of the trajectories passing through the route with respect to all trajectories within its SD pair. The inferred results are obtained by comparing a threshold (denoted by $\delta$) with the fraction of each route, i.e., a route is inferred as normal if its fraction is larger than the threshold, and as not normal otherwise.
{{For example, in Figure~\ref{fig:example}, recall that there are 5 trajectories traveling along $T_1$, 4 along $T_2$, and only 1 along $T_3$.
Given $\delta=0.3$, we infer $T_1$ (resp. $T_2$) as a normal route, because its fraction $5/10=0.5$ (resp. $4/10=0.4$) over all 10 trajectories is larger than the threshold $\delta$.}}
Based on the inferred normal routes, we then extract the features of a trajectory. For example, given a trajectory following $T_3$,
we extract the features as follows: a road segment in a target trajectory is normal (i.e., 0) if the transition on that road segment occurs in the inferred normal routes; and 1 otherwise. For example, the extracted normal features of $T_3$ are $<0,0,0,1,1,1,1,1,0>$, where the feature on the road segment $e_2$ is 0 since the transition $<e_1,e_2>$ occurs in the normal route $T_2$. Note that the source and destination road segments always have the feature 0 (i.e., normal).
{{We note that normal route features and noisy labels are both in the form of 0-1. The difference between them is that the former captures the information of normal routes and is accordingly derived from the inferred normal routes at the route level. In contrast, the latter is used as the labels for training RSRNet and is obtained by computing transition frequencies at the edge level. {{In addition, the former is utilized during the whole training process, while the latter is utilized only to pre-train the representations in RSRNet (which provides a warm-start for ASDNet).}}
}}
After obtaining the normal route features that are in the form of tokens, we then obtain a vector for the feature by embedding the tokens as one-hot vectors.
We call the obtained vectors embedded normal route features.
\smallskip
\noindent \textbf{Training RSRNet.} Figure~\ref{fig:rl4oasd} illustrates the architecture of RSRNet. In particular, given a sequence of embedded traffic context features $\mathbf{x_i}^{t}$ ($1 \leq i \leq n$), where $n$ denotes the trajectory length, the LSTM obtains the hidden state $\mathbf{h_i}$ at each road segment.
We then concatenate $\mathbf{h_i}$ with the embedded normal route feature $\mathbf{x_i}^{n}$, denoted by $\mathbf{z}_i=[\mathbf{h}_i;\mathbf{x}_i^{n}]$.
Note that the two parts $\mathbf{h}$ and $\mathbf{x}^{n}$ capture the sequential trajectory information and normal route information, respectively.
Note that we do not let $\mathbf{x}^{n}$ go through the LSTM since it preserves the normal route feature at each road segment.
We adopt the cross-entropy loss between the predicted label $\hat{y}_i$ (computed from $\mathbf{z}_i$) and the {{noisy/refined label}} $y_i$ to train RSRNet, i.e.,
\begin{equation}
\label{eq:entropy}
\mathcal{L}=\frac{1}{n}\sum_{i=1}^{n}\mathcal{H}(y_i,\hat{y}_i),
\end{equation}
where $\mathcal{H}$ denotes the cross-entropy operator.
\subsection{Anomalous Subtrajectory Detection Network (ASDNet)}
\label{sec:ASDNet}
The task of online anomalous subtrajectory detection is to sequentially scan an ongoing trajectory and, for each road segment, decide whether an anomaly happens at that position. This motivates us to model the process as a Markov decision process (MDP), involving states, actions,
and rewards.
\if 0
\noindent \textbf{Road Network Enhanced Environment (RNEE).} Consider a transition $<e_{i-1},e_i>$ ($1 < i \leq n$) between two road segments $e_{i-1}$ and $e_i$. Based on the graph structure of road networks, we denote the out degree and in degree of a road segment $e_{i}$ as $e_{i}.out$ and $e_{i}.in$, respectively. $e_{i}.l$ denotes the label of the road segment $e_{i}$, where 0 denotes the normal and 1 denotes the abnormal.
\if 0
\begin{lemma}
\label{lemma:1}
Given $e_{i-1}.out=1$ and $e_{i}.in=1$, we have $e_{i}.l=e_{i-1}.l$.
\end{lemma}
\begin{proof}
The transition is smooth without any road turns can be involved, and thus the labels on $e_{i-1}$ and $e_{i}$ maintain the same.
\end{proof}
\begin{lemma}
\label{lemma:2}
Given $e_{i-1}.out=1$ and $e_{i}.in>1$, we have $e_{i}.l=0$, if $e_{i-1}.l=0$.
\end{lemma}
\begin{proof}
This can be proved by contradiction. Suppose $e_{i}.l=1$ when $e_{i-1}.l=0$, it implies that an anomaly happened on the road segment $e_{i}$, whose movement do not follow the normal road, denoted by $e'_{i}$. This contradicts the fact that $e_{i-1}.out=1$. Hence, the lemma holds.
\end{proof}
\begin{lemma}
\label{lemma:3}
Given $e_{i-1}.out>1$ and $e_{i}.in=1$, we have $e_{i}.l=1$, if $e_{i-1}.l=1$.
\end{lemma}
\begin{proof}
We also prove it by contradiction. Suppose $e_{i}.l=0$ when $e_{i-1}.l=1$, it indicates that an anomaly finished on the road segment $e_{i}$, which is definitely associated with a normal road $e'_{i-1}$ that is traveled. This contradicts the fact that $e_{i}.in=1$. Hence, the lemma holds.
\end{proof}
\begin{lemma}
\label{lemma:4}
Given $e_{i-1}.out>1$ and $e_{i}.in>1$, $e_{i-1}.l$ and $e_{i}.l$ are arbitrary, i.e., $e_{i}.l=e_{i-1}.l$ or $e_{i}.l \neq e_{i-1}.l$.
\end{lemma}
\begin{proof}
The lemma directly follows the observation.
\end{proof}
\fi
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/rule.pdf}
\vspace{-3mm}
\caption{Illustrating three cases for deterministic labeling.}
\label{fig:rule}
\vspace{-3mm}
\end{figure}
We build an environment (called RNEE) for conducting the MDP, where a decision for labeling a road segment is deterministic in one of three cases. We illustrate the three cases in Figure~\ref{fig:rule}
\begin{enumerate}
\item If $e_{i-1}.out=1$, $e_{i}.in=1$, then $e_{i}.l=e_{i-1}.l$.
\item If $e_{i-1}.out=1$, $e_{i}.in>1$ and $e_{i-1}.l=0$, then $e_{i}.l=0$.
\item If $e_{i-1}.out>1$, $e_{i}.in=1$ and $e_{i-1}.l=1$, then $e_{i}.l=1$.
\end{enumerate}
We explain the three rules as follows. The common intuition behind them is the label on $e_{i}$ can be determined in some cases when we know the road network structure and the previous label on $e_{i-1}$.
For (1), the case can be seen as a ``smooth transition'' without any road turns, and thus the label on $e_{i}$ is the same as $e_{i-1}$. For (2) and (3), both two cases can be shown by contradiction. Suppose $e_{i}.l=1$ (resp. $e_{i}.l=0$) given $e_{i-1}.l=0$ (resp. $e_{i-1}.l=1$) in Case 2 (resp. Case 3), it implies that an anomaly happened on the road segment $e_{i}$ (resp. $e_{i-1}$), whose movement does not follow a normal road on $e'_{i}$ (resp. $e'_{i-1}$), where $e'_{i} \neq e_{i}$ (resp. $e'_{i-1} \neq e_{i-1}$). This contradicts the fact that $e_{i-1}.out=1$ (resp. $e_{i}.in=1$).
The RNEE brings at least two advantages. First, the efficiency is improved since the time of taking actions in some cases is saved via checking the rules instead of calling the RL model.
Second, as many cases have been pruned with deterministic decisions, the RL model is light to train for only tackling the otherwise cases, and some potential wrong decisions can therefore be avoided.
Based on the RNEE, we define the MDP as follows.
\fi
\smallskip
\noindent \textbf{States.} We denote the state when scanning the road segment $e_i$ as $\mathbf{s}_i$. The state $\mathbf{s}_i$ ($1 < i \leq n$) is represented as the concatenation of $\mathbf{z}_i$ and $\mathbf{v}(e_{i-1}.l)$, i.e., $\mathbf{s}_i=[\mathbf{z}_i;\mathbf{v}(e_{i-1}.l)]$, where $\mathbf{z}_i$ is obtained from RSRNet and $\mathbf{v}(e_{i-1}.l)$ denotes a vector of the label on the previous road segment $e_{i-1}$ by embedding its token (i.e., 0 or 1). The rationale for the state design is to capture the features from three aspects, i.e., traffic context, normal routes and the previous label.
\smallskip
\noindent \textbf{Actions.} We denote an action of the MDP by $a$, which labels a road segment as normal or not. Note that an anomalous subtrajectory boundary can be identified when the labels of two adjacent road segments are different.
\if 0
\noindent \textbf{Transitions.} We take an action $a$ to produce a label on a road segment at a state $\mathbf{s}$, it then would then proceed to the next road segment.
Let $\mathbf{s}'$ denote a new state at the next road segment, which corresponds to the next state of $\mathbf{s}$, to output an action via the RL model.
\fi
\smallskip
\noindent \textbf{Rewards.} The reward involves two parts. One is called the \emph{local reward}, which aims to capture the local continuity of the labels of road segments. The rationale is that the labels of normal road segments or anomalous road segments would not change frequently. The other is called the \emph{global reward}, which aims to measure the quality of the refined labels
(reflected by the cross-entropy loss of RSRNet). We take the loss as feedback to guide the training of ASDNet.
The local reward is an intermediate reward, which encourages the continuity of the labels on road segments. Specifically, it is defined as
\begin{equation}
r_i^{local}=\text{sign}(e_{i-1}.l=e_{i}.l)\cdot \text{cos}(\mathbf{z}_{i-1},\mathbf{z}_{i}),
\end{equation}
where $\text{sign}(e_{i-1}.l=e_{i}.l)$ returns 1 if the condition $e_{i-1}.l=e_{i}.l$ is true {{(i.e., the label is continued)}} and -1 otherwise; $\text{cos}(\mathbf{z}_{i-1},\mathbf{z}_{i})$ denotes the cosine similarity between $\mathbf{z}_{i-1}$ and $\mathbf{z}_{i}$, which are obtained from RSRNet. {\Comment{We choose the cosine similarity because it outputs a normalized value, which matches the output range (i.e., between 0 and 1) of the global reward.
}}
The global reward is designed to measure the quality of refined labels by ASDNet. We feed the refined labels into RSRNet, and compute the global reward as
\begin{equation}
r^{global}= \frac{1}{1+\mathcal{L}},
\end{equation}
where $\mathcal{L}$ denotes the cross-entropy loss in Equation~\ref{eq:entropy}. We note that this reward lies between 0 and 1, and it aligns the objective of RSRNet well with that of ASDNet (a smaller loss $\mathcal{L}$ means a larger global reward).
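For concreteness, a minimal sketch (in plain Python) of the global reward, where the argument is assumed to be the scalar cross-entropy loss produced by RSRNet on the refined labels:
\begin{verbatim}
def global_reward(loss):
    """r_global = 1 / (1 + L): a smaller RSRNet loss yields a larger
    reward, and the value lies in (0, 1] for a non-negative loss L."""
    return 1.0 / (1.0 + loss)

# e.g. global_reward(0.1) ~= 0.909 while global_reward(2.0) ~= 0.333
\end{verbatim}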
\smallskip
\noindent \textbf{Policy Learning on the MDP.}
The core problem of an MDP is to learn a policy, which guides an agent to choose actions based on the constructed states such that the cumulative reward, denoted by $R_n$, is maximized. We learn the policy via a policy gradient method~\cite{silver2014deterministic}, namely the REINFORCE algorithm~\cite{williams1992simple}. To be specific, let $\pi_{\theta}(a|\mathbf{s})$ denote a stochastic policy, which is used to sample an action $a$ for a given state $\mathbf{s}$ via a neural network, whose parameters are denoted by $\theta$. Then, the gradients of the performance measure $J(\theta)$ with respect to the network parameters $\theta$ are estimated as
\begin{equation}
\label{eq:policy}
\nabla_{\theta}J(\theta) = \sum_{i=2}^{n}R_n\nabla_{\theta}\ln\pi_{\theta}(a_i|\mathbf{s}_i),
\end{equation}
which can be optimized using an optimizer (e.g., Adam stochastic gradient ascent). The expected cumulative reward $R_n$ for a trajectory is defined as
\begin{equation}
R_n=\frac{1}{n-1}\sum_{i=2}^{n}r_i^{local}+r^{global}.
\end{equation}
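To illustrate, a minimal sketch (in Python with PyTorch) of one REINFORCE-style gradient step for the labeling policy; \texttt{policy\_net}, the optimizer, and the tensor shapes are hypothetical placeholders rather than the exact implementation, and only the positions decided by the RL model are included.
\begin{verbatim}
import torch

def reinforce_update(policy_net, optimizer, states, actions, R_n):
    """One gradient ascent step on J(theta) = R_n * sum_i log pi(a_i | s_i).

    states: tensor (m, state_dim); actions: tensor (m,) of sampled labels;
    R_n: scalar cumulative reward for the trajectory."""
    logits = policy_net(states)                        # (m, 2)
    log_probs = torch.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(R_n * chosen.sum())                       # minimize -J(theta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}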
\smallskip
\noindent \textbf{Joint Training of RSRNet and ASDNet.} We jointly train the two networks (i.e., RSRNet and ASDNet). First, we map raw trajectories onto road networks and generate the noisy labels as explained in Section~\ref{sec:weakly}. Then, we randomly sample 200 trajectories to pre-train RSRNet and ASDNet, separately. In particular, for RSRNet, we train the network in a supervised manner with the noisy labels. For ASDNet, we specify its actions as the noisy labels, and train the policy network via a gradient ascent step as computed by Equation~\ref{eq:policy}. The pre-training provides a warm-start for the two networks, whose parameters have incorporated some information captured from the normal routes before the joint training.
During the joint training, we randomly sample 10,000 trajectories, and for each trajectory, we run 5 epochs to iteratively train RSRNet and ASDNet. In particular, we apply the learned policy from ASDNet to
obtain refined labels, and the refined labels are used to train RSRNet to obtain a better representation $\mathbf{z}_i$ on each road segment. Then, $\mathbf{z}_i$ is used to construct the state to learn a better policy in ASDNet. The learned policy can further refine the labels, and thus the two networks are trained jointly, with the best model chosen during the process.
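To make the alternation concrete, a high-level sketch (in Python) of the joint training loop; \texttt{refine\_labels}, \texttt{train\_rsrnet} and \texttt{train\_asdnet} are hypothetical caller-supplied callables standing for the steps described above.
\begin{verbatim}
def joint_training(rsrnet, asdnet, trajectories, refine_labels,
                   train_rsrnet, train_asdnet, epochs=5):
    """Alternate RSRNet and ASDNet training on sampled trajectories."""
    for traj in trajectories:
        for _ in range(epochs):
            refined = refine_labels(asdnet, rsrnet, traj)  # policy refines labels
            train_rsrnet(rsrnet, traj, refined)            # better z_i
            train_asdnet(asdnet, rsrnet, traj, refined)    # better policy
    return rsrnet, asdnet
\end{verbatim}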
\subsection{The \texttt{RL4OASD} Algorithm}
\label{sec:RL4OASD}
\begin{algorithm}[t!]
\caption{The \texttt{RL4OASD} algorithm}
\label{alg:rl4oasd}
\KwIn{
A map-matched trajectory $T = <e_1, e_2, ..., e_n>$ which is inputted in an online manner}
\For{$i=1,2,...,n$}{
\uIf{$i=1$ \textbf{or} $i=n$}{
$e_{i}.l \leftarrow 0$;
}
\uElse{
Call RSRNet for obtaining a representation $\mathbf{z}_{i}$;\\
Construct a state $\mathbf{s}_i=[\mathbf{z}_i;\mathbf{v}(e_{i-1}.l)]$;\\
Sample an action (i.e., 0 or 1), $a_i \sim \pi_{\theta}(a|\mathbf{s})$;\\
$e_{i}.l \leftarrow a_i$;\\
{{Monitor the anomalous subtrajectory consisting of the road segments with the label 1 and return the subtrajectory when it is formed;}}
}
}
{{\textbf{Return} a NORMAL trajectory signal;}}
\end{algorithm}
Our \texttt{RL4OASD} algorithm is based on the learned policy for detecting anomalous subtrajectories. The process is presented in Algorithm~\ref{alg:rl4oasd}. Specifically, \texttt{RL4OASD} accepts an ongoing trajectory in an online fashion and labels each road segment as normal (i.e., 0) or not (i.e., 1) (lines 1-9). It first labels the source and destination road segments as normal by definition (lines 2-3). It then
constructs a state via calling RSRNet (lines 5-6), and samples an action to label the road segment based on the learned policy in ASDNet (lines 7-8). {{It monitors the anomalous subtrajectory that involves the abnormal labels (i.e., 1) on the road segments
and returns it when it is formed (line 9). Finally, if no anomaly is detected, the algorithm returns a NORMAL trajectory signal (line 11).}} We further develop two enhancements for boosting the effectiveness and efficiency of \texttt{RL4OASD}. One is called Road Network Enhanced Labeling (RNEL), and the other is called Delayed Labeling (DL).
\smallskip
\noindent\textbf{Road Network Enhanced Labeling.}
{\Comment{For the RNEL, we utilize the graph structure of a road network to help label a road segment, where the label on a road segment is deterministic in one of three cases: (1) If $e_{i-1}.out=1$, $e_{i}.in=1$, then $e_{i}.l=e_{i-1}.l$. (2) If $e_{i-1}.out=1$, $e_{i}.in>1$ and $e_{i-1}.l=0$, then $e_{i}.l=0$. (3) If $e_{i-1}.out>1$, $e_{i}.in=1$ and $e_{i-1}.l=1$, then $e_{i}.l=1$. Here, $e_{i}.out$ and $e_{i}.in$ denote the out-degree and in-degree of a road segment $e_{i}$, respectively, and $e_{i}.l$ denotes the label of the road segment $e_{i}$.
The common intuition behind them is that: (a) any change of the anomalousness status from normal (0) at $e_{i-1}$ to abnormal (1) at $e_i$ means that there exist alternative transitions from $e_{i-1}$ to other road segments (i.e., $e_{i-1}.out > 1$); and (b) any change of the anomalousness status from abnormal (1) at $e_{i-1}$ to normal (0) at $e_i$ means that there exist alternative transitions from other road segments to $e_{i}$ (i.e., $e_{i}.in > 1$).
Based on these rules, we invoke the RL model to take actions only in the remaining cases, and some potential wrong decisions can therefore be avoided, as sketched below. In addition, the efficiency is improved since in the deterministic cases the time of taking actions is saved by checking the rules instead of calling the RL model.
}}
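A minimal sketch (in plain Python) of the rule check; it returns the label forced by the road network structure, or \texttt{None} when the RL model has to take an action:
\begin{verbatim}
def deterministic_label(out_prev, in_curr, label_prev):
    """out_prev / in_curr: out-degree of e_{i-1} and in-degree of e_i;
    label_prev: label of e_{i-1}."""
    if out_prev == 1 and in_curr == 1:
        return label_prev                      # case (1)
    if out_prev == 1 and in_curr > 1 and label_prev == 0:
        return 0                               # case (2)
    if out_prev > 1 and in_curr == 1 and label_prev == 1:
        return 1                               # case (3)
    return None                                # otherwise: call the RL model
\end{verbatim}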
\smallskip
\noindent\textbf{Delayed Labeling.}
For the DL, \texttt{RL4OASD} forms an anomalous subtrajectory whenever its boundary is identified, i.e., the boundary is identified at $e_{i-1}$ if $e_{i-1}.l = 1$ and $e_{i}.l=0$. Intuitively, forming an anomalous subtrajectory immediately may be premature and may produce many short fragments from a target trajectory.
Therefore, we consider a delay technique as a post-processing step. Specifically, it scans $D$ more road segments that follow $e_{i-1}$ when forming an anomalous subtrajectory. Among the $D$ road segments, we choose the final position $j$ ($i-1<j \leq i-1+D$) whose road segment has label 1, and then convert the 0's to 1's between the positions $i-1$ and $j$.
It can be verified that labeling with delay does not incur much time cost, and it offers better continuity by avoiding too many fragments.
\smallskip
\noindent \textbf{Time complexity.} The time complexity of the \texttt{RL4OASD} algorithm is $O(n)$, where $n$ denotes the length of a target trajectory. The time is dominated by the two networks, i.e., RSRNet and ASDNet. We analyze them as follows. In RSRNet, the time cost for one road segment consists of (1) that of obtaining the embeddings of TCF and NRF, which are both $O(1)$, and (2) that of obtaining $\mathbf{z}$ via a classic LSTM cell, which is $O(1)$. In ASDNet, the time cost for one road segment consists of (1) that of constructing a state, where the part $\mathbf{z}$ has been computed in RSRNet and the part $\mathbf{v}(e.l)$ is obtained via an embedding layer, which is $O(1)$, and (2) that of sampling an action via the learned policy, which is $O(1)$. As shown in Algorithm~\ref{alg:rl4oasd}, the two networks are called at most $n$ times. Therefore, the time complexity of \texttt{RL4OASD} is $O(n)\times O(1)=O(n)$. We note that this $O(n)$ time complexity matches the best time complexity of existing algorithms~\cite{chen2013iboat,wu2017fast,liu2020online} for the trajectory or subtrajectory anomaly detection task, and it can largely meet practical needs for online scenarios as shown in our experiments.
\smallskip
{{\noindent \textbf{Handling Concept Drift of Anomalous Trajectories.} As discussed in Section~\ref{sec:problem}, we detect anomalous subtrajectories that do not follow the normal routes. However, the notions of ``normal'' and ``anomalous'' may change over time with varying traffic conditions. For example, when some popular route is congested, drivers may gradually prefer to travel an alternative, previously unpopular route (i.e., to avoid traffic jams). In this case, the unpopular route should be considered a normal route, while the trajectories traveling the previous route may become anomalous. To handle the issue caused by this concept drift, we adopt an online learning strategy~\cite{liu2020online}, where the model continues to train with newly recorded trajectory data, and keeps its policy updated for the current traffic condition (we validate the effectiveness of the strategy in Section~\ref{sec:concept_exp}).
}}
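As an illustrative sketch only (in Python), one way to realize such an online learning strategy is to periodically fine-tune the model on a sliding window of recent trajectories; \texttt{fine\_tune}, the window size, and the update interval are hypothetical placeholders and not the exact procedure used in our system.
\begin{verbatim}
import time

def online_update(model, trajectory_stream, window=10000, interval=3600):
    """Periodically fine-tune the model on recent trajectories so that
    the learned policy follows the current traffic condition."""
    buffer, last = [], time.time()
    for traj in trajectory_stream:
        buffer.append(traj)
        buffer = buffer[-window:]            # keep a sliding window
        if time.time() - last >= interval:
            model.fine_tune(buffer)          # hypothetical training hook
            last = time.time()
    return model
\end{verbatim}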
\smallskip
{{\noindent \textbf{Discussion on the cold-start problem.} Since anomalous subtrajectories are defined as unpopular parts, there may exist a cold-start problem for the detection, where the historical trajectories are not sufficient for some SD pairs.
Our method relies on the historical trajectories for defining the normal route feature. The feature is calculated as a relative fraction between 0 and 1. Specifically, the feature of a route is defined to be the number of trajectories along the route over the total number of trajectories within the SD pair.
We have conducted experiments by varying the number of historical trajectories within the SD pairs in Table~\ref{tab:droprate}. The results show that our model is robust against sparse data, e.g., its effectiveness only degrades by 6\% even if 80\% of the historical trajectories are dropped. The cold-start problem in anomalous trajectory/subtrajectory detection looks very interesting. We believe that some generative methods, e.g., generating some routes within the sparse SD pairs, can possibly be leveraged to overcome the issue, which we plan to explore as future work.
}}
\section{PROBLEM DEFINITION}
\label{sec:problem}
\if 0
\smallskip\noindent
\emph{\textbf{Problem Definition.}}
Trajectories are collected from the trace of the moving objects by GPS. A trajectory of taxi consists of a sequence of GPS points including latitude, longitude and timestamp, denoted as $T = <p_1, p_2, .....p_n>$, each point is represented as $t_i = (lat_i, lon_i, t_i)$.
In our work, a taxi moves from a source point $s$ to a destination point $d$ in a road network referring to a SD pair. There are a lot of trajectories in each SD pair. The trajectory occur with high probability is denoted as normal routes.
In this paper, we use the taxi trajectory datasets of \emph{Xi'an} and \emph{Chengdu}. Figure ~\ref{} shows the trajectories from a source to a destination.
\emph{Definition 1: Raw Trajectory.} A trajectory $T$ consists of a sequence of GPS points, which is represented as $T = <p_1, p_2, .....p_n>$, where $p_i$ represents a GPS point, denoted as $P_i = (lat_i, lon_i, t_i)$. Here, $lat_i,lon_i, t_i$ denote the latitude, the longitude and the timestamp of the point $p_i$ respectively.
\emph{Definition 2: Road network.} A road network is represented as a directed graph $G(V,E)$, where $V$ represents a vertex set referring to crossroads and $E$ represents a edge set referring to road segments.
\emph{Definition 3: Edge Trajectory.} A edge trajectory refers a route $R = <s_1, s_2, .....s_n>$ consists of sets of road segments recording the result of a raw trajectory $T$ after map matching.
\emph{\textbf{Problem statement:}} Given an SD pair and an ongoing trajectory $T$, the ongoing trajectory $T$ is anomalous if it rarely occurs and is different from the reference trajectories in the SD pair.
Besides, we want to identify the anomalous sub-trajectory of the ongoing trajectory.
\fi
\smallskip
\noindent \textbf{(1) Raw Trajectory.} A raw trajectory $T$ consists of a sequence of GPS points, i.e., $T = <p_1, p_2,...,p_n>$, where a GPS point $p_i$ is in the form of a triplet, i.e., $p_i = (x_i, y_i, t_i)$, meaning that a moving object is located at $(x_i, y_i)$ at timestamp $t_i$, and $n$ represents the length of the trajectory $T$.
\smallskip
\noindent \textbf{(2) Road Network.} A road network is represented as a directed graph $G(V,E)$, where $V$ represents a vertex set referring to crossroads or intersections, and $E$ represents an edge set referring to road segments, and for each edge $e$, it connects two vertexes $u$ and $v$ on the road network, denoted as $e=(u,v)$.
\smallskip
\noindent \textbf{(3) Map-matched Trajectory.} A map-matched trajectory refers to a trajectory generated by a moving object on road networks, which corresponds to a sequence of road segments after map-matching~\cite{yang2018fast}, i.e., $T = <e_1, e_2,...,e_n>$.
For simplicity, {\Comment{we use $T$ to denote a map-matched trajectory, and we refer to map-matched trajectories as trajectories or routes interchangeably in the rest of the paper.}}
\smallskip
\noindent \textbf{(4) Subtrajectory and Transition.} A subtrajectory $T[i,j]$ corresponds to a portion of the trajectory $T = <e_1, e_2,...,e_n>$ from $e_i$ to $e_j$, $1 \leq i \leq j \leq n$. A transition is defined as a special case of a subtrajectory, which only consists of two adjacent road segments to capture the road turn of a moving object, i.e., $<e_{i-1}, e_{i}>$, where $1< i \leq n$.
\smallskip
\noindent \textbf{(5) Online Anomalous Subtrajectory Detection (OASD).} We study the problem of online anomalous subtrajectory detection. Consider a source $S$ and destination $D$ pair (SD-pair) with a set of trajectories $\mathcal{T}$ between them. Intuitively, a trajectory $T$ can be considered \emph{normal} if it follows the route traveled by most of the trajectories in $\mathcal{T}$. Based on this, we define an \emph{anomalous} subtrajectory as a part of a trajectory whose route \emph{does not} follow the normal routes within the SD-pair. We formulate the problem as follows.
\begin{figure*}
\centering
\includegraphics[width=0.85\linewidth]{figures/model2.pdf}
\vspace{-3mm}
\caption{Overview of \texttt{RL4OASD}, where $\bigoplus$ denotes the concatenation operation.}
\label{fig:rl4oasd}
\vspace{-3mm}
\end{figure*}
\begin{problem}[OASD] Given an ongoing trajectory $T=<e_1,e_2,...e_n>$ that is generated from its source $S_T$ to destination $D_T$ in an online fashion, {\Comment{where the points come one by one, and the future points are not
accessible in advance.
The OASD problem aims to identify which parts of $T$ (i.e., subtrajectories) are anomalous, if any,
while $T$ is sequentially generated.}}
\label{prob:oasd}
\end{problem}
\section{PRELIMINARIES AND PROBLEM STATEMENT}
\label{sec:problem}
\if 0
\smallskip\noindent
\emph{\textbf{Problem Definition.}}
Trajectories are collected from the trace of the moving objects by GPS. A trajectory of a sequence of GPS points including latitude, longitude and timestamp, denoted as $T = <p_1, p_2, .....p_n>$, each point is represented as $t_i = (lat_i, lon_i, t_i)$.
In our work, a taxi moves from a source point $S$ to a destination point $D$ in a road network referring to a SD pair. There are a lot of trajectories in each SD pair. The trajectory occurs with high probability is denoted as normal routes.
In this paper, we use the taxi trajectory datasets of \emph{Xi'an} and \emph{Chengdu}. Figure ~\ref{} shows the trajectories from a source to a destination.
\emph{Definition 1: Raw Trajectory.} A trajectory $T$ consists of a sequence of GPS points, which is represented as $T = <p_1, p_2, .....p_n>$, where $p_i$ represents a GPS point, denoted as $p_i = (lat_i, lon_i, t_i)$. Here, $lat_i,lon_i, t_i$ denote the latitude, the longitude and the timestamp of the point $p_i$ respectively.
\emph{Definition 2: Road network.} A road network is represented as a directed graph $G(V,E)$, where $V$ represents a vertex set referring to crossroads and $E$ represents an edge set referring to road segments.
\emph{Definition 3: Edge Trajectory.} A edge trajectory refers a route $R = <s_1, s_2, .....s_n>$ consists of sets of road segments recording the result of a raw trajectory $T$ after map matching.
\emph{\textbf{Problem statement:}} Given an SD pair and an ongoing trajectory $T$, the ongoing trajectory $T$ is anomalous if it rarely occurs and is different from the reference trajectories in the SD pair.
Besides, we want to identify the anomalous subtrajectory of the ongoing trajectory.
\fi
\subsection{Preliminaries}
\noindent \textbf{Raw Trajectory.} A raw trajectory $T$ consists of a sequence of GPS points, i.e., $T = <p_1, p_2,...,p_n>$, where a GPS point $p_i$ is in the form of a triplet, i.e., $p_i = (x_i, y_i, t_i)$, meaning that a moving object is located on $(x_i, y_i)$ at timestamp $t_i$, and $n$ represents the length of the trajectory $T$.
\smallskip
\noindent \textbf{Road Network.} A road network is represented as a directed graph $G(V,E)$, where $V$ represents a vertex set referring to crossroads or intersections, and $E$ represents an edge set referring to road segments, and for each edge $e$, it connects two vertexes $u$ and $v$ on the road network, denoted as $e=(u,v)$.
\smallskip
\noindent \textbf{Map-matched Trajectory.} A map-matched trajectory refers to a trajectory generated by a moving object on road networks, which corresponds to a sequence of road segments~\cite{yang2018fast}, i.e., $T = <e_1, e_2,...,e_n>$.
For simplicity, we use $T$ to denote a map-matched trajectory, and we refer to map-matched trajectories as trajectories or routes interchangeably in the rest of the paper.
\smallskip
\noindent \textbf{Subtrajectory and Transition.} A subtrajectory $T[i,j]$ corresponds to a portion of the trajectory $T = <e_1, e_2,...,e_n>$ from $e_i$ to $e_j$, $1 \leq i \leq j \leq n$. A transition is defined as a special case of a subtrajectory, which only consists of two adjacent road segments,
i.e., $<e_{i-1}, e_{i}>$, where $1< i \leq n$.
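For illustration, a tiny sketch (in Python, with segment ids from the running example) of how a map-matched trajectory, a subtrajectory, and its transitions can be represented:
\begin{verbatim}
# A map-matched trajectory as a sequence of road segment ids.
T = ["e1", "e2", "e4", "e11", "e12"]

def subtrajectory(T, i, j):
    """T[i, j]: the portion of T from e_i to e_j (1-based, inclusive)."""
    return T[i - 1:j]

# Transitions are subtrajectories of two adjacent road segments.
transitions = list(zip(T, T[1:]))   # [("e1", "e2"), ("e2", "e4"), ...]
print(subtrajectory(T, 2, 4))       # ["e2", "e4", "e11"]
\end{verbatim}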
\subsection{Problem Definition}
We study the problem of \emph{online anomalous subtrajectory detection}. Consider a source $S$ and destination $D$ pair (SD pair) and a set of trajectories $\mathcal{T}$ between them. Intuitively, a trajectory $T$ can be considered \emph{normal} if it follows the route traveled by the majority of trajectories in $\mathcal{T}$. Based on this, we define an \emph{anomalous} subtrajectory as a part of a trajectory which \emph{does not} follow the normal routes within the SD pair. We formulate the problem as follows.
\begin{problem}[OASD] Given an ongoing trajectory $T=<e_1,e_2,...e_n>$ that is generated from its source $S_T$ to destination $D_T$ in an online fashion, where the points $e_i$ are generated one by one, and the future points are not accessible in advance.
{{The OASD problem is to detect and update which parts of $T$ (i.e., subtrajectories) are anomalous, while $T$ is sequentially generated.
}}
\label{prob:oasd}
\end{problem}
\section{RELATED WORK}
\label{sec:related}
\subsection{Online Anomalous Trajectory Detection}
Existing studies of anomalous trajectory detection are divided into two categories with respect to whether the target trajectory is complete: online anomalous trajectory detection methods and offline anomalous trajectory detection methods.
Online anomalous trajectory detection methods detect anomalies on partial (ongoing) trajectories.
For example, a recent learning-based study~\cite{liu2020online} proposes to detect anomalous trajectories with a detection-via-generation scheme, using a Gaussian mixture distribution to represent different kinds of normal routes,
which detects anomalous trajectories effectively and efficiently; however, this method cannot detect which part of a trajectory is anomalous. Another learning-based method is introduced in~\cite{wu2017fast}, which models the distribution of driving behaviours with a probabilistic model without accessing historical trajectories in a database. However, the linearity of the model makes it hard to capture the complex information behind trajectories, and the method ignores the sequential context information of trajectories. A further learning-based study~\cite{zhang2020continuous} proposes to detect anomalous trajectories efficiently by calculating an estimation deviation. Nevertheless, this method requires a reference trajectory for each source-destination (SD) pair, which has a large effect on its performance, and it also cannot detect which part of a trajectory is anomalous.
A metric-based online detection study~\cite{chen2013iboat}, which detects both the anomalous trajectory and its detour part and is thus close to our work, maintains an adaptive working window of the latest points of an ongoing trajectory and compares it against a working set of historical trajectories to better detect the anomalousness of the ongoing trajectory. Specifically, as a new point of the ongoing trajectory is added into the adaptive working window, the historical trajectories in the working set that are inconsistent with the subtrajectory in the window are removed; new points keep being added until the support of the subtrajectory in the window drops below a threshold $\theta$, at which point the window is reduced to contain only the latest points, which are more likely to be anomalous, and it does not grow again until the next anomalous point. This process iterates until the end of the trajectory, and the set of points in the adaptive working window gives the anomalous points of the trajectory. The limitations of this method are twofold: 1) it needs to maintain an adaptive working window of the latest points and a working set of historical trajectories to be compared with the ongoing part of the trajectory, which is time-consuming and not efficient enough for anomalous trajectory detection; 2) it considers trajectories in free space, which is not effective enough without road network constraints. Our work differs from these studies in that it detects anomalous subtrajectories with a policy learned via reinforcement learning under road network constraints, without accessing historical trajectories in a database during detection.
\subsection{Other Anomalous Trajectory Detection Studies}
Offline anomalous trajectory detection methods detect anomalies based on complete trajectories (note that all online methods can also handle complete trajectories). Offline methods are usually based on distance or density metrics. A distance-based study~\cite{lee2008trajectory} proposes to detect anomalous trajectory segments by calculating the distance of each segment of the target trajectory to the segments of other trajectories. Another study~\cite{zhang2011ibat} proposes to detect anomalous trajectories by checking their isolation from the normal trajectories of the same itinerary. This method needs to compare the target trajectory with the historical trajectories in a database, which is time-consuming. Other studies, such as~\cite{zhu2015time,lv2017outlier}, propose alternative distance metrics to detect anomalous trajectories by mining trajectory patterns. A learning-based offline study~\cite{song2018anomalous} considers the context information of trajectories using a recurrent neural network (RNN) for anomaly detection. However, this method is supervised and uses labeled data that is not available in real anomaly detection applications. Another learning-based work~\cite{gray2018coupled} proposes to use adversarial learning for anomalous trajectory detection, with the major limitation that it can only handle trajectories of the same length. Thus, these two recent studies are not applicable to real anomalous trajectory detection. Other studies~\cite{ge2010top,yu2014detecting,banerjee2016mantra} target other types of anomalous trajectory detection, such as detecting time-series anomalous trajectories, which will be discussed further.
\subsection{Deep Reinforcement Learning}
In recent years, reinforcement learning has been introduced into anomaly detection: \citet{oh2019sequential} propose a method for sequential anomaly detection, and \citet{huang2018towards} propose a reinforcement learning based method for temporal anomaly detection.
Our work differs from these studies in that (i) it focuses on anomalous subtrajectory detection and outputs the detour without accessing historical trajectories in a database; (ii) it detects anomalous trajectories under road network constraints, which matches real applications; (iii) it uses deep reinforcement learning to refine the noisy labels that we use; and (iv) it contributes a dataset of anomalous trajectories.
\section{RELATED WORK}
\label{sec:related}
\subsection{Online Anomalous Trajectory Detection}
Many recent studies have been proposed for online anomalous trajectory detection.
Online anomalous trajectory detection methods aim to detect anomalies on an ongoing trajectory in an online manner. We discuss these studies in terms of two categories: heuristic-based methods~\cite{ge2010top,yu2014detecting,chen2013iboat,zhang2020continuous} and learning-based methods~\cite{wu2017fast,liu2020online}.
For heuristic-based methods,
Chen et al.~\cite{chen2013iboat} study the online anomalous trajectory detection with the isolation-based method, which aims to check which parts of a detected trajectory that can be isolated from the reference (i.e., normal) trajectories with the same source and destination routes.
Recently, Zhang et al.~\cite{zhang2020continuous} propose to monitor the trajectory similarity based on the discrete Frechet distance between a given reference route and the current partial route at each timestamp. If the deviation exceeds a given threshold at any timestamp, the system alerts that an outlier event of detouring is detected. Our work differs from these studies in that it is based on a road network-enhanced policy learned via reinforcement learning instead of hand-crafted heuristics with many predefined parameters (e.g., the deviation threshold) for detecting anomalies.
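
As an illustration of the monitoring scheme above, the following sketch computes the discrete Frechet distance between the current partial route and a same-length prefix of the reference route at each timestamp and alerts once it exceeds a threshold. The prefix alignment and the threshold value are simplifying assumptions for illustration, not the exact scheme of~\cite{zhang2020continuous}.
\begin{verbatim}
from functools import lru_cache
from math import hypot

def discrete_frechet(p, q):
    # standard dynamic-programming discrete Frechet distance between point sequences
    @lru_cache(maxsize=None)
    def c(i, j):
        d = hypot(p[i][0] - q[j][0], p[i][1] - q[j][1])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(p) - 1, len(q) - 1)

def monitor(reference_route, incoming_points, threshold=0.5):
    # alert at the first timestamp where the partial route deviates beyond the threshold;
    # comparing against a same-length prefix of the reference is a simplification
    partial = []
    for t, point in enumerate(incoming_points):
        partial.append(point)
        if discrete_frechet(reference_route[:len(partial)], partial) > threshold:
            return t
    return None

reference = [(0, 0), (1, 0), (2, 0), (3, 0)]
ongoing = [(0, 0), (1, 0.1), (2, 1.2), (3, 1.5)]   # drifts away from the reference
print(monitor(reference, ongoing))                  # -> 2
\end{verbatim}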
For learning-based methods, a recent study~\cite{liu2020online} proposes to detect anomalous trajectories via a generation scheme: it uses a Gaussian mixture distribution to represent different kinds of normal routes and detects those anomalous trajectories that cannot be well generated based on the given representations of normal routes. Another learning-based method is introduced in~\cite{wu2017fast}, which proposes a probabilistic model to detect trajectory outliers by modeling the distribution of driving behaviours from historical trajectories. The method considers many potential features associated with driving behaviour modeling, including road level, turning angle, etc. In our work, we target a finer-grained setting, i.e., detecting which part of an anomalous trajectory, namely a subtrajectory, is responsible for its anomalousness in an online manner. In contrast, subtrajectory detection is not the focus of these studies.
\subsection{Offline Anomalous Trajectory Detection}
Other proposals on anomalous trajectory detection detect the anomalous trajectory by assuming full access to the whole trajectory data (i.e., offline anomalous trajectory detection), and we review the literature in terms of the heuristic-based methods~\cite{lee2008trajectory,zhang2011ibat,zhu2015time,lv2017outlier,banerjee2016mantra} and learning-based methods~\cite{gray2018coupled,song2018anomalous} as follows.
For heuristic-based methods, some distance or density metrics are usually proposed for detecting anomalies. For example, an early study~\cite{lee2008trajectory} proposes to partition the trajectories and detect the anomalous trajectory segments by computing the distance of each segment in a detected trajectory to the segments in other trajectories. Further, Zhu et al.~\cite{zhu2015time} and Lv et al.~\cite{lv2017outlier} adopt edit distance based metrics to detect anomalies via mining normal trajectory patterns. Another study~\cite{zhang2011ibat} proposes to cluster the trajectories crossing the same itinerary and detect anomalous trajectories via checking their isolation from the cluster. In addition, Ge et al.~\cite{ge2011taxi} develop a taxi driving fraud system, which is equipped with both distance and density metrics to identify fraudulent driving activities.
Other studies detect anomalous trajectories with learning-based methods. Song et al.~\cite{song2018anomalous} adopt a recurrent neural network (RNN) to capture the sequential information of trajectories for detection. However, the proposed model needs to be trained in a supervised manner, and labeled data is usually unavailable in real applications.
Another learning-based work~\cite{gray2018coupled} aims at detecting anomalous trajectories by new moving subjects (i.e., new taxi driver detection), where an adversarial model (i.e., a GAN) is adapted.
It is clear that the methods proposed in this line of research cannot be adopted for the online detection scenario. Again, we target the detection in a finer-grained setting, i.e., subtrajectory anomaly detection via reinforcement learning.
\subsection{Other Orthogonal Work}
Some works~\cite{banerjee2016mantra, li2009temporal, ge2010top, yu2014detecting} focus on other types of anomalous trajectory detection, which are orthogonal to ours. We review them as follows. Banerjee et al.~\cite{banerjee2016mantra} study temporal anomaly detection, which employs travel time estimation and traverses potential subtrajectories in a detected trajectory, where the anomalies are identified if the travel time largely deviates from the expected time. Li et al.~\cite{li2009temporal} detect temporal outliers with a method based on historical similarity trends. In addition, Ge et al.~\cite{ge2010top} study the Top-K evolving outliers and Yu et al.~\cite{yu2014detecting} detect anomalous trajectories in large-scale trajectory streams.
\subsection{Deep Reinforcement Learning}
Deep reinforcement learning aims to guide an agent on how to make sequential decisions to maximize a cumulative reward~\cite{sutton2018reinforcement}, where the agent interacts with a specific environment, which is usually modeled as a Markov decision process (MDP)~\cite{puterman2014markov}. In recent years, reinforcement learning has received much research attention. For example, Oh et al.~\cite{oh2019sequential} explore
inverse reinforcement learning for sequential anomaly detection, and Huang et al.~\cite{huang2018towards} design a deep RL powered anomaly detector for time series detection. Wang et al.~\cite{wang2020efficient} propose to use RL-based algorithms to accelerate subtrajectory similarity search. In this paper, we model a novel road network-enhanced MDP for online anomalous subtrajectory detection (called RL4OASD), and a policy gradient method~\cite{sutton2000policy} is employed to solve the problem. To the best of our knowledge, this is the first deep reinforcement learning based solution of its kind.
\noindent\textbf{22 Nov-Cheng: Comments on the related work. 1. In Section 2.2, [1] is mentioned but not described; 2. Better to use the word "related" to replace "orthogonal".}
\section{RELATED WORK}
\label{sec:related}
We review the related studies, which can be categorized into four groups. The first group is online anomalous trajectory detection, the second group is offline anomalous trajectory detection, the third group covers other types of anomalous trajectory detection, such as temporal anomalous trajectory detection, and the last group is deep reinforcement learning, which is utilized in this paper.
\subsection{Online Anomalous Trajectory Detection}
Online anomalous trajectory detection aims to detect an ongoing trajectory in an online manner. Existing studies propose many methods for online anomalous trajectory detection, which fall into two categories: heuristic-based methods~\cite{ge2010top,yu2014detecting,chen2013iboat,zhang2020continuous} and learning-based methods~\cite{wu2017fast,liu2020online}.
For heuristic-based methods, \citet{chen2013iboat} investigate detecting anomalous trajectories in an online manner via
an isolation-based method, which aims to check which parts of trajectories are isolated from the reference (i.e., normal) trajectories with the same source and destination routes. {\Comment{This method aims to detect anomalous subtrajectories from ongoing trajectories, which is similar to our paper. However, it models reference trajectories based on many parameters. Besides, its performance is evaluated on a small manually labelled dataset with 9 SD pairs only, which fails to reflect the generalization of the method.}}
A recent work~\cite{zhang2020continuous} proposes to calculate the trajectory similarity via the discrete Frechet distance between a given reference route and the current partial route at each timestamp. If the deviation exceeds a given threshold at any timestamp, the system alerts that an outlier event of detouring is detected. {\Comment{There are many predefined parameters involved in this method. Besides, the method of modelling a reference route is not given, which has a direct impact on the task. Our work differs from these heuristic-based studies in that it is based on a road network-enhanced policy learned via reinforcement learning instead of hand-crafted heuristics with many predefined parameters (e.g., the deviation threshold) for detecting anomalies. Besides, we evaluate the performance of our method on \textbf{a large dataset.}}}
For learning-based methods, a recent study~\cite{liu2020online} proposes to detect anomalous trajectories via a generation scheme: it utilizes a Gaussian mixture distribution to represent different kinds of normal routes and detects those anomalous trajectories that cannot be well generated based on the given representations of normal routes. {\Comment{This method aims to detect whether the ongoing trajectory is anomalous or not.}} Another learning-based method~\cite{wu2017fast} proposes a probabilistic model to detect trajectory outliers via modeling the distribution of driving behaviours from historical trajectories. The method involves many potential features that are associated with driving behaviour modeling, including road level, turning angle, etc. {\Comment{This method also only targets whether the online trajectory is anomalous or not.}} In our work, we target a finer-grained setting, i.e., detecting which part of an anomalous trajectory, namely a subtrajectory, is responsible for its anomalousness in an online manner. Nevertheless, subtrajectory detection is not the focus of these studies.
\subsection{Offline Anomalous Trajectory Detection}
Offline anomalous trajectory detection refers to detecting a target trajectory that is already complete, in an offline manner. There are also two types of studies: heuristic-based methods~\cite{lee2008trajectory, zhang2011ibat,zhu2015time,lv2017outlier,banerjee2016mantra} and learning-based methods~\cite{gray2018coupled,song2018anomalous}, as follows.
For heuristic-based methods, heuristic metrics involving distance or density are usually utilized to detect anomalous trajectories. For example, an early study~\cite{lee2008trajectory} proposes a partition-and-detect mechanism for anomalous trajectory detection. The main idea is to partition the trajectories and detect the anomalous trajectory segments by computing the distance of each segment in a detected trajectory to the segments in other trajectories. Further, \citet{zhu2015time} and \citet{lv2017outlier} utilize edit distance metrics to detect anomalous trajectories based on mining normal trajectory patterns. Another study~\cite{zhang2011ibat} proposes to cluster trajectories based on the same itinerary and further detect anomalous trajectories via checking whether the target trajectory is isolated from the cluster. In addition, \citet{ge2011taxi} propose a system to detect fraudulent taxi driving, whose main idea is to detect anomalous trajectories by combining distance and density metrics.
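
To make the distance-based idea concrete, the sketch below flags segments of a target trajectory whose neighborhood support among other trajectories is low. The plain midpoint distance used here is a deliberate simplification of the hybrid segment distance in~\cite{lee2008trajectory}, and the radius and support parameters are illustrative assumptions.
\begin{verbatim}
import math

def segments(traj):
    return list(zip(traj[:-1], traj[1:]))

def seg_dist(s1, s2):
    # simplified segment distance: distance between segment midpoints
    (a, b), (c, d) = s1, s2
    m1 = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    m2 = ((c[0] + d[0]) / 2, (c[1] + d[1]) / 2)
    return math.dist(m1, m2)

def outlying_segments(target, others, radius=0.5, min_support=2):
    # flag segments of `target` close to segments of fewer than `min_support` other trajectories
    flagged = []
    for s in segments(target):
        support = sum(
            any(seg_dist(s, t) <= radius for t in segments(o)) for o in others
        )
        if support < min_support:
            flagged.append(s)
    return flagged

normal = [[(0, 0), (1, 0), (2, 0), (3, 0)] for _ in range(3)]
detour = [(0, 0), (1, 0), (2, 2), (3, 0)]      # the two segments around (2, 2) deviate
print(outlying_segments(detour, normal))        # flags the two deviating segments
\end{verbatim}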
Other studies detect anomalous trajectories with learning-based methods. Song et al.~\cite{song2018anomalous} adopt a recurrent neural network (RNN) to capture the sequential information of trajectories for detection. However, the proposed model needs to be trained in a supervised manner, and labeled data is usually unavailable in real applications.
Another learning-based work~\cite{gray2018coupled} aims at detecting anomalous trajectories by new moving subjects (i.e., new taxi driver detection), where an adversarial model (i.e., a GAN) is adapted.
{\Comment{It is clear that the methods proposed in this line of research cannot be utilized for the online detection scenario. Again, we target online anomalous subtrajectory detection in a finer-grained setting, i.e., subtrajectory anomaly detection via reinforcement learning.}}
\subsection{{\Comment{Other Types of Anomalous Detection Studies}}}
Some works~\cite{banerjee2016mantra, li2009temporal, ge2010top, yu2014detecting} focus on other types of anomalous trajectory detection, which are related to ours. We review them as follows. Banerjee et al.~\cite{banerjee2016mantra} study temporal anomaly detection, which employs travel time estimation and traverses potential subtrajectories in a detected trajectory, where anomalies are identified if the travel time largely deviates from the expected time. \citet{li2009temporal} detect temporal outliers with a method utilizing historical similarity trends. In addition, \citet{ge2010top} investigate the Top-K evolving outliers and \citet{yu2014detecting} detect anomalous trajectories in large-scale trajectory streams.
\subsection{Deep Reinforcement Learning}
Deep reinforcement learning aims to guide an agent to make sequential decisions to maximize a cumulative reward~\cite{sutton2018reinforcement}, as the agent interacts with a specific environment, which is usually modeled as a Markov decision process (MDP)~\cite{puterman2014markov}. In recent years, reinforcement learning has attracted much research attention. For example, \citet{oh2019sequential} explore inverse reinforcement learning for sequential anomaly detection, and \citet{huang2018towards} design a deep RL based anomaly detector for time series detection. \citet{wang2020efficient} propose to use RL-based algorithms to accelerate subtrajectory similarity search. {\Comment{In this paper, we model a novel road network-enhanced MDP for online anomalous subtrajectory detection (called RL4OASD), and a policy gradient method~\cite{sutton2000policy} is adopted for solving the problem. To the best of our knowledge, this is the first deep reinforcement learning based solution for online anomalous subtrajectory detection.}}
\section{RELATED WORK}
\label{sec:related}
\subsection{Online Anomalous {{Trajectory/Subtrajectory}} Detection.}
Online anomalous trajectory detection aims to detect an ongoing trajectory in an online manner. Existing studies propose many methods for this problem, which fall into two categories: heuristic-based methods~\cite{chen2013iboat,zhang2020continuous} and learning-based methods~\cite{wu2017fast,liu2020online}.
For heuristic methods, Chen et al.~\cite{chen2013iboat} investigate the detection of anomalous trajectories in an online manner via the isolation-based method, which aims to check which parts of trajectories are isolated from the reference (i.e., normal) trajectories with the same source and destination routes. This method aims to detect the anomalous subtrajectories from ongoing trajectories, which is similar to our paper. However, this method models reference trajectories based on many manually-set parameters. Besides, the performance of this method is evaluated on a small manually labeled dataset with 9 SD pairs only, which fails to reflect the generalization of this method.
A recent work~\cite{zhang2020continuous} proposes to calculate the trajectory similarity via the discrete Frechet distance between a given reference route and the current partial route at each timestamp. If the deviation exceeds a given threshold at any timestamp, the system alerts that an anomaly event of detouring is detected. There are many predefined parameters involved in this method.
Our work differs from these heuristic-based studies in that it is based on a policy learned via reinforcement learning instead of the hand-crafted heuristic with many predefined parameters (e.g., the deviation threshold) for detecting anomalies. Besides, we evaluate the performance of our method on a large dataset.
For learning-based methods, a recent study~\cite{liu2020online} proposes to detect anomalous trajectories via a generation scheme, which utilizes the Gaussian mixture distribution to represent different kinds of normal routes and detects those anomalous trajectories that cannot be well-generated based on the given representations of normal routes. This method aims to detect whether the ongoing trajectory is anomalous or not. Another learning-based method~\cite{wu2017fast} proposes a probabilistic model to detect trajectory anomalies via modeling the distribution of driving behaviours from historical trajectories. The method involves many potential features that are associated with driving behaviour modeling, including road level and turning angle, etc. This method also only targets whether the online trajectory is anomalous or not.
In our work, we target a finer-grained setting, i.e., detecting which part of an anomalous trajectory, namely a subtrajectory, is responsible for its anomalousness in an online manner. Nevertheless, anomalous subtrajectory detection is not the focus of these studies.
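
As a rough illustration of the generation-scheme idea in~\cite{liu2020online}, the sketch below fits a Gaussian mixture over features of normal routes and flags routes with low likelihood. The 2-D features, the component count, and the cutoff quantile are illustrative stand-ins for the learned route representations used in the original method, not a reproduction of it.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# represent each route by a feature vector, fit a mixture over normal routes,
# and flag candidate routes whose log-likelihood under the mixture is low
rng = np.random.default_rng(1)
normal_routes = np.vstack([
    rng.normal([0, 0], 0.1, size=(50, 2)),     # one cluster of normal routes
    rng.normal([3, 3], 0.1, size=(50, 2)),     # another kind of normal route
])
gmm = GaussianMixture(n_components=2, random_state=0).fit(normal_routes)

candidates = np.array([[0.05, -0.02], [3.1, 2.9], [1.5, 1.5]])   # last one is off-route
log_lik = gmm.score_samples(candidates)
threshold = np.quantile(gmm.score_samples(normal_routes), 0.01)  # assumed cutoff
print(log_lik < threshold)   # -> [False False True]
\end{verbatim}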
\subsection{Offline Anomalous Trajectory Detection.}
Offline anomalous trajectory detection refers to detecting an anomalous trajectory or anomalous subtrajectories of a trajectory, where the trajectory is input in an offline manner. There are also two types of studies: heuristic-based methods~\cite{lee2008trajectory, zhang2011ibat,zhu2015time,lv2017outlier,banerjee2016mantra} and learning-based methods~\cite{gray2018coupled,song2018anomalous}. {{For heuristic-based methods, heuristic metrics involving distance or density are usually utilized to detect anomalous trajectories. For example, an early study~\cite{lee2008trajectory} proposes a partition-and-detect mechanism for anomalous subtrajectory detection. The main idea is to partition the trajectories and detect the anomalous trajectory segments by computing the distance of each segment in a target trajectory to the segments in other trajectories. A recent study~\cite{lv2017outlier} utilizes edit distance metrics to detect anomalous trajectories based on mining normal trajectory patterns. Another study~\cite{zhang2011ibat} proposes to cluster trajectories based on the same itinerary and further detect anomalous trajectories via checking whether the target trajectory is isolated from the cluster. In addition, \citet{ge2011taxi} propose a system to detect fraudulent taxi driving, whose main idea is to detect anomalous trajectories by combining distance and density metrics.
Other studies detect anomalous trajectories with learning-based methods. For example, Song et al.~\cite{song2018anomalous} adopt a recurrent neural network (RNN) to capture the sequential information of trajectories for detection. However, the proposed model needs to be trained in a supervised manner, and labeled data is usually unavailable in real applications. Another learning-based work~\cite{gray2018coupled} aims at detecting anomalous trajectories by new moving subjects (i.e., new taxi driver detection), where an adversarial model (i.e., a GAN) is adapted.}}
\if 0
For heuristic-based methods, some heuristic metrics involving distance or density metrics are usually utilized to detect anomalous trajectories. For example, an early study ~\cite{lee2008trajectory} proposes a partition and detect mechanism to conduct anomalous trajectory detection. The main idea is to partition the trajectories and detect the anomalous trajectory segments by computing the distance of each segment in a detected trajectory to the segments in other trajectories. Further, ~\citet{zhu2015time} and ~\citet{lv2017outlier} utilize edit distance metrics to detect anomalous trajectory based on mining the normal trajectory patterns. Another study~\cite{zhang2011ibat} proposes to cluster trajectories based on the same itinerary and further detect anomalous trajectories via checking whether the target trajectory is the isolated from the the cluster. In addition, ~\cite{ge2011taxi} proposes a system which is utilized to detect fraud taxi driving. The main idea of this method is to detect anomalous trajectories via combining distance and density metrics.
Other studies are proposed to detect anomalous trajectories with learning-based methods. Song et al~\cite{song2018anomalous} adopt recurrent neural network (RNN) to capture the sequential information of trajectories for detection. However, the proposed model needs to be trained in a supervised manner, and the labeled data is usually unavailable in real applications.
Another learning-based work~\cite{gray2018coupled} aims at detecting anomalous trajectories by new moving subjects (i.e., new taxi driver detection) with an adversarial model (i.e., GAN) is adapted.
\fi
{\Comment{It is clear that these methods proposed in this line of research cannot be utilized for the online detection scenario.
}}
\subsection{Other Types of Anomalous Detection Studies.}
Some studies~\cite{banerjee2016mantra,li2009temporal, ge2010top, yu2014detecting, zhu2018sub} focus on other types of anomalous trajectory detection, which are related to ours. We review them as follows. Banerjee et al.~\cite{banerjee2016mantra} study temporal anomaly detection, which employs travel time estimation and traverses potential subtrajectories in a target trajectory, where anomalies are identified if the travel time largely deviates from the expected time. Li et al.~\cite{li2009temporal} detect temporal anomalies with a method utilizing historical similarity trends. In addition, Ge et al.~\cite{ge2010top} investigate the Top-K evolving anomalies, and the studies in~\cite{yu2014detecting, zhu2018sub} detect anomalous trajectories in large-scale trajectory streams.
{{\subsection{Deep Reinforcement Learning.}
Deep reinforcement learning aims to guide an agent to make sequential decisions to maximize a cumulative reward, as the agent interacts with a specific environment, which is usually modeled as a Markov decision process (MDP)~\cite{puterman2014markov}. In recent years, reinforcement learning has attracted much research attention. For example, Oh et al.~\cite{oh2019sequential} explore inverse reinforcement learning for sequential anomaly detection, and Huang et al.~\cite{huang2018towards} design a deep RL-based anomaly detector for time series detection. Wang et al.~\cite{wang2020efficient,wang2021similar} propose to use RL-based algorithms to accelerate subtrajectory or sub-game similarity search. In addition, RL-based solutions are developed to simplify trajectories with different objectives~\cite{wang2021trajectory,wang2021error}.
In this paper, we propose a novel RL-based solution for online anomalous subtrajectory detection (called \texttt{RL4OASD}), and a policy gradient method~\cite{sutton2000policy} is adopted for solving the problem. To the best of our knowledge, this is the first deep reinforcement learning based solution for online anomalous subtrajectory detection.}}
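
For readers unfamiliar with policy gradient methods, the snippet below is a generic REINFORCE-style sketch in which a logistic policy labels each road segment of a trajectory as normal or anomalous and is updated with the gradient of the log-policy weighted by an episode reward. The state features and the reward function are toy assumptions; this is not the \texttt{RL4OASD} architecture.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D = 4                      # dimension of the per-segment state features (assumed)
theta = np.zeros(D)        # parameters of a simple logistic labeling policy

def policy(state):
    p1 = 1.0 / (1.0 + np.exp(-state @ theta))   # probability of the label "anomalous"
    return np.array([1.0 - p1, p1])

def episode(states, reward_fn, lr=0.1):
    # one pass over the trajectory's road segments, followed by a REINFORCE update
    global theta
    actions, grads = [], []
    for s in states:                      # segments are labeled sequentially (online)
        probs = policy(s)
        a = int(rng.choice(2, p=probs))   # 0 = normal, 1 = anomalous
        grads.append((a - probs[1]) * s)  # gradient of log pi(a|s) for the logistic policy
        actions.append(a)
    R = reward_fn(actions)                # episode reward for the produced labeling (assumed)
    for g in grads:
        theta += lr * R * g               # REINFORCE: ascend R * grad log pi
    return actions, R

# toy usage: reward the agent for matching a (noisy) reference labeling
states = rng.normal(size=(6, D))
reference = [0, 0, 1, 1, 0, 0]
reward_fn = lambda a: sum(int(x == y) for x, y in zip(a, reference)) / len(reference)
for _ in range(200):
    labels, R = episode(states, reward_fn)
print(labels, R)
\end{verbatim}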
\section{Response to Reviewer 1's Comments}
\label{subsec:reviewerOneReport}
\begin{comment1}\em
``O1. Further clarification of the model technical novelty.''
\end{comment1}
\noindent\textbf{Response.} Thanks for this valuable comment. We highlight the technical novelty of this work from two aspects.
\underline{First}, we are among the pioneering work to develop a deep reinforcement learning based model that enables online anomalous subtrajectory detection. Our proposed model 1) is capable of automatically identifying anomalous subtrajectories without large efforts in generating hand-crafted features and pre-defined rubrics; 2) does not rely heavily on the limited labeled data and can effectively discover abnormal trajectory patterns under data scarcity. This marks a significant improvement in the methodology to overcome { those issues suffered by existing methods (as summarized in Table~\ref{tab:summary})}.
\underline{Second}, our model is a weakly supervised learning paradigm, where the proposed RSRNet learns accurate road segment observations to define the states of MDP in ASDNet. ASDNet provides refined labels of road segments to train RSRNet. They { iteratively improve} each other, and run in a positive loop to produce superior results. To our best knowledge, the weakly supervised paradigm is the first of its kind.
With the effective model design, we can achieve better detection performance and competitive efficiency (especially for the online detection task which requires real-time response).
\begin{comment1}\em
``O2. Clarifications of i) the calculation for $T_1$ and $T_2$; ii) the process of updating noisy labels.''
\end{comment1}
\noindent\textbf{Response.} Thanks for the detailed comment. We have revised the presentation to make it clear.
The changes are highlighted in blue {{in Section~\ref{sec:RSRNet}}}. { 1) We clarify the fraction calculations. Consider an example where there are 5 trajectories traveling along $T_1$, 4 along $T_2$, and only 1 along $T_3$. Then the fractions of $T_1$ and $T_2$ are calculated as $5/10=0.5$ and $4/10=0.4$, respectively. 2) We clarify that the noisy labels are only utilized to pre-train the representations in RSRNet, which would then provide a warm-start for ASDNet, and the noisy labels are not being updated.}
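
The fraction calculation in the example above can be written down directly; the snippet below merely reproduces the worked example (the route identifiers $T_1$, $T_2$, $T_3$ are the ones used in the example).
\begin{verbatim}
# within one SD pair, the fraction of a route is the number of historical trajectories
# traveling along that route divided by the total number of trajectories in the SD pair
from collections import Counter

route_of_trajectory = ["T1"] * 5 + ["T2"] * 4 + ["T3"] * 1   # 10 trajectories in the SD pair
counts = Counter(route_of_trajectory)
total = sum(counts.values())
fractions = {route: n / total for route, n in counts.items()}
print(fractions)   # {'T1': 0.5, 'T2': 0.4, 'T3': 0.1}
\end{verbatim}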
\begin{comment1}\em
``i) O3. Performance evaluation with more metrics. ii) Model efficiency study with the preprocessing cost.''
\end{comment1}
\noindent\textbf{Response.} Thanks for the comment. 1) We have included a variant of $F_1$-score, { called $TF_1$-score, as a new metric. The intuition of $TF_1$-score is to count only those detected anomalous subtrajectories that are sufficiently aligned with real anomalous subtrajectories.}
The detailed definition is described {{in Section~\ref{sec:setup}}} and the corresponding results are included {{in Table~\ref{tab:baselines}}}.
{ The results based on $TF_1$-score and those based on $F_1$-score show similar trends,}
which demonstrates the generality of our methods.
2) The inference time is widely adopted by existing learning-based methods including~\cite{wu2017fast,liu2020online,song2018anomalous,li2007roam} for efficiency evaluation. Normally, the training cost is a one-time cost, i.e., once a policy is learned, we can use it for detecting many trajectories online. In this revised paper, we have included experimental results on the training time and the effects of the size of the training dataset on the performance of RL4OASD in Table~\ref{tab:train_time}. In general, it takes only around 0.2 hours to learn a satisfactory model.
3) In the revised paper, we have reported the preprocessing cost involving map-matching and noisy labeling {{in Table~\ref{tab:train_time}}}. Overall, it takes less than two minutes to preprocess the dataset, which again is a one-time process.
\section{Response to Reviewer 2's Comments}
\label{subsec:reviewerTwoReport}
\begin{comment2}\em
``O1. Discussion about the cold-start settings of anomalous subtrajectory detection.''
\end{comment2}
\noindent\textbf{Response.} Thank you for the insightful comment. Our method relies on the historical trajectories for defining the normal route feature. The feature is calculated as a relative fraction between 0 and 1. Specifically, the feature of a route is defined to be the number of trajectories along the route over the total number of trajectories within the SD pair.
{ We have conducted experiments} by varying the number of historical trajectories within the SD pairs {{in Table~\ref{tab:droprate}}}. { The results show that our model is robust against sparse data, e.g., its effectiveness} only degrades by 6\% even if 80\% of historical trajectories are dropped. The cold-start problem in anomalous trajectory/subtrajectory detection looks very interesting. We believe that some generative methods, e.g., to generate some routes within the sparse SD pairs, can possibly be leveraged to overcome the issue, which we plan to explore as future work.
\begin{comment2}\em
``O2. Clarifications of selecting the proper length of time period for new training parts.''
\end{comment2}
\noindent\textbf{Response.} Thank you for the comment. In the revised paper, we have studied the effect of the length of time period for the training and provided the detailed discussion {{in Section~\ref{sec:concept_exp}}}. Specifically, we follow the setup in~\cite{liu2020online}, and first study how to select a proper length of time period to start the training with a parameter $\xi$. For example, when $\xi$ is set to 4, it partitions one day into 4 parts, and the proposed online learning model \texttt{RL4OASD-FT} is first trained on Part 1, and kept updating on the other parts of data.
We vary $\xi$ from 1 to 24, and report the average $F_1$-score over all parts and the corresponding training time {{in Figure~\ref{fig:traffic}(a) and (b)}}, respectively. Based on the results, we observe that setting $\xi$ to 8 (i.e., training the model every $24/8=3$ hours) leads to the best effectiveness with an $F_1$-score of 0.867 and an average training time of less than 0.04 hours, which is much smaller than the duration of a time period (i.e., 3 hours). Under the setting of $\xi=8$, we further report the $F_1$-score and training time for each part {{in Figure~\ref{fig:traffic}(c) and (d)}}, respectively. We note that our online learning strategy can tackle the issue of concept drift under varying traffic conditions.
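
The role of $\xi$ can be illustrated with a small helper that maps an hour of the day to its part index; the function below assumes $\xi=8$ by default, i.e., $24/8=3$-hour parts, matching the setting reported above (the helper itself is only illustrative).
\begin{verbatim}
def part_index(hour_of_day, xi=8):
    # xi partitions one day into xi equal parts; parts are numbered from 1
    hours_per_part = 24 / xi
    return int(hour_of_day // hours_per_part) + 1

print([part_index(h) for h in (0, 2.5, 3, 11, 23.9)])   # -> [1, 1, 2, 4, 8]
\end{verbatim}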
\begin{comment2}\em
``O3. Further explanations of the effectiveness of individual components in the proposed RL4OASD model.''
\end{comment2}
\noindent\textbf{Response.} Thank you for the comment. In the revised paper, we have included { some explanations on the effectiveness of our method.}
Please refer to the texts highlighted in blue {{in Section~\ref{sec:overview_rl4oasd}}}.
{ We copy the explanations below for ease of reference.} First, in RSRNet,
both traffic context features (e.g., driving speed, trip duration) and normal route features that are associated with the anomalies are well captured by the model. Second, in ASDNet, the task of labeling anomalous subtrajectories is formulated as an MDP, whose policy is learned in a data-driven manner, instead of using heuristics (e.g., some pre-defined parameters) as the existing studies do.
Third,
RSRNet is first trained with noisy labels for dealing with the cold-start problem.
Then, RSRNet and ASDNet are trained iteratively and collaboratively, where RSRNet uses ASDNet's outputs as labels and ASDNet uses the outputs of RSRNet as features.
\section{Response to Reviewer 3's Comments}
\label{subsec:reviewerThreeReport}
\begin{comment3}\em
O1. Further description of the problem formalization and the criteria for anomalous road segments.
\end{comment3}
\noindent\textbf{Response.} Thanks for the comment. 1) { Yes, a detour would probably be labeled as anomalous since it deviates from the others within the same SD pair}. 2) First, we emphasize that the transition frequency is only used to construct the noisy labels, which are further used to pre-train the model as a warm-start, but are not used as the supervision signal during the joint training of RSRNet and ASDNet. In addition, based on the ablation study in Table~\ref{tab:ablation},
{ with the transition frequency used, it improves the overall effectiveness by around 36\%.}
\begin{comment3}\em
``O2. The discussion on the generalizability of the approach.''
\end{comment3}
\noindent\textbf{Response.} Thanks for the comment. We discuss the generalizability of the approach in two aspects. {\underline{First}}, our {\texttt{RL4OASD}} is a data-driven approach, where we capture two types of features, namely the representations of road segments based on traffic context (e.g., driving speed) and the information captured from the normal routes. These two types of features can be generalized to other trajectories, and captured by the model trained in a data-driven manner.
{\underline{Second}}, our model can be deployed with an online learning strategy, where the model continues to be refined with newly recorded trajectory data.
The effects of online learning are shown in Figure~\ref{fig:traffic}.
\begin{comment3}\em
``O3. The description of the criteria details for labeling anomalous subtrajectories by participants.''
\end{comment3}
\noindent\textbf{Response.} Thanks for the comment. We have provided the detailed instruction to the participants, and please refer to the text highlighted in blue {{in Section~\ref{sec:setup}}}.
\begin{comment3}\em
``O4. The clarification of the arrows in Fig. 2 and the derivation process of $\hat{y}_{i}$ based on $z_i$.''
\end{comment3}
\noindent\textbf{Response.} Thank you for the comment. { We have revised {{Figure~\ref{fig:rl4oasd}}} in the revised paper. To be specific, we use different arrows with annotations to distinguish the operations represented by the arrows. In addition, $\hat{y}_{i}$ is computed from $z_i$ via Softmax.}
\begin{comment3}\em
``O5. Further justifications of the evaluation metrics.''
\end{comment3}
\noindent\textbf{Response.}
In the revised paper, we have included a new metric called $TF_1$-score, { which counts only those Jaccard similarities above a threshold $\phi$}.
We have included the new results based on this additional new metric $TF_1$-score {{in Table~\ref{tab:baselines}}}, which show that our solution outperforms the existing baselines under this new metric.
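
Since the exact definition is deferred to the revised paper, the sketch below shows only one plausible reading of such a thresholded $F_1$: a detected (resp. real) anomalous subtrajectory, represented as a set of road-segment identifiers, is counted only if its Jaccard similarity with some real (resp. detected) subtrajectory exceeds $\phi$. The matching rule and the value of $\phi$ here are assumptions for illustration, not the paper's definition.
\begin{verbatim}
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def tf1(detected, truth, phi=0.5):
    # precision/recall over subtrajectories, counting a match only if Jaccard > phi
    tp = sum(any(jaccard(d, t) > phi for t in truth) for d in detected)
    precision = tp / len(detected) if detected else 0.0
    recall = (sum(any(jaccard(t, d) > phi for d in detected) for t in truth) / len(truth)
              if truth else 0.0)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

detected = [{1, 2, 3}, {9}]
truth = [{1, 2, 3, 4}]
print(round(tf1(detected, truth), 3))   # Jaccard({1,2,3},{1,2,3,4}) = 0.75 > 0.5
\end{verbatim}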
\section{Response to Meta-Reviewer's Report}
We have addressed all revision items. Our responses to the required changes
\begin{itemize}
\item \textbf{Reviewer 1: O1, O2, O3}, can be found in our responses to
Reviewer 1's Comments 1, 2 and 3, respectively.
\item \textbf{Reviewer 2: O1, O2}, can be found in our responses to Reviewer 2's Comments 1 and 2, respectively.
\item \textbf{Reviewer 3: O1, O2, O5}, can be found in our responses to Reviewer 3's Comments 1, 2 and 5, respectively.
\end{itemize}
and to the optional changes
\begin{itemize}
\item \textbf{Reviewer 2: O3}, can be found in our responses to Reviewer 2's Comment 3.
\item \textbf{Reviewer 3: O3, O4}, can be found in our responses to Reviewer 3's Comments 3 and 4, respectively.
\end{itemize}