\section{Introduction}
\label{sec:intro}
\emph{Gender diversity}, or more often the lack thereof, among participants in
software development activities has been thoroughly studied in recent years. In
particular, the presence of, effects of, and countermeasures for \emph{gender
bias} in Free/Open Source Software (FOSS) have received a lot of attention
over the past decade~\cite{david2008fossdevs, qiu2010kdewomen,
nafus2012patches, kuechler2012genderfoss, vasilescu2014gender,
oneil2016debiansurvey, robles2016womeninfoss, terrell2017gender,
zacchiroli2021gender}. \emph{Geographic diversity}, on the other hand, is the
kind of diversity that stems from participants in a global activity coming
from different world regions and cultures.
Geographic diversity in FOSS has received relatively little attention in scholarly
works. In particular, while seminal survey-based and
point-in-time medium-scale studies of the geographic origins of FOSS
contributors exist~\cite{ghosh2005understanding, david2008fossdevs,
barahona2008geodiversity, takhteyev2010ossgeography, robles2014surveydataset,
wachs2021ossgeography}, large-scale longitudinal studies of the geographic
origin of FOSS contributors are still lacking. Such a quantitative
characterization would be useful to inform decisions related to global
development teams~\cite{herbsleb2007globalsweng} and hiring strategies in the
information technology (IT) market, as well as contribute factual information
to the debates on the economic impact and sociology of FOSS around the world.
\paragraph{Contributions}
With this work we contribute to closing this gap by conducting \textbf{the first
longitudinal study of the geographic origin of contributors to public code
over 50 years.} Specifically, we provide a preliminary answer to the
following research question:
\begin{researchquestion}
From which world regions do authors of publicly available commits come,
and how has this changed over the past 50 years?
\label{rq:geodiversity}
\end{researchquestion}
We use as dataset the \SWH/ archive~\cite{swhipres2017} and analyze from it
2.2 billion\xspace commits archived from 160 million\xspace projects and authored by
43 million\xspace authors during the 1971--2021 time period.
We geolocate developers to
\DATAWorldRegions/ world regions, using as signals email country code top-level domains (ccTLDs) and
author (first/last) names compared with name distributions around the world, and UTC offsets
mined from commit metadata.
We find evidence of the early dominance of North America in open source
software, later joined by Europe. After that period, the geographic diversity
in public code has been constantly increasing.
We also identify relevant historical shifts
related to the end of the UNIX wars and the increase of coding literacy in
Central and South Asia, as well as to broader phenomena like colonialism and
people movement across countries (immigration/emigration).
\paragraph{Data availability.}
A replication package for this paper is available from Zenodo at
\url{https://doi.org/10.5281/zenodo.6390355}~\cite{replication-package}.
\section{Related Work}
\label{sec:related}
Both early and recent works~\cite{ghosh2005understanding, david2008fossdevs,
robles2014surveydataset, oneil2016debiansurvey} have characterized the
geography of Free/Open Source Software (FOSS) using \emph{developer surveys},
which provide high-quality answers but are limited in size (2-5\,K developers)
and can be biased by participant sampling.
In 2008 Barahona et al.~\cite{barahona2008geodiversity} conducted a seminal
large-scale (for the time) study on FOSS \emph{geography using mining software
repositories (MSR) techniques}. They analyzed the origin of 1\,M contributors
using the SourceForge user database and mailing list archives over the
1999--2005 period, using as signals information similar to ours: email domains
and UTC offsets.
The studied period (7 years) in~\cite{barahona2008geodiversity} is shorter than
what is studied in the present paper (50 years) and the data sources are
largely different; with that in mind, our results show a slightly larger share of
European vs.~North American contributions.
Another empirical work from 2010 by Takhteyev and
Hilts~\cite{takhteyev2010ossgeography} harvested self-declared geographic
locations of GitHub accounts recursively following their connections,
collecting information for $\approx$\,70\,K GitHub users. A very recent
work~\cite{wachs2021ossgeography} by Wachs et al.~has geolocated half a million
GitHub users, having contributed at least 100 commits each, and who
self-declare locations on their GitHub profiles. While the study is
point-in-time as of 2021, the authors compare their findings
against~\cite{barahona2008geodiversity, takhteyev2010ossgeography} to
characterize the evolution of FOSS geography over the time snapshots taken by
the three studies.
Compared with previous empirical works, our study is much larger scale---having
analyzed 43 million\xspace authors of 2.2 billion\xspace commits from 160 million\xspace
projects---longitudinal over 50 years of public code contributions rather than
point in time, and also more fine-grained (with year-by-year granularity over
the observed period). Methodologically, our study relies on Version Control
System (VCS) commit data rather than platform-declared location information.
Other works---in particular the work by Daniel~\cite{daniel2013ossdiversity}
and, more recently, Rastogi et al.~\cite{rastogi2016geobias,
rastogi2018geobias, prana2021geogenderdiversity}---have studied geographic
\emph{diversity and bias}, i.e., the extent to which the origin of FOSS
developers affects their collaborative coding activities.
In this work we characterized geographic diversity in public code for the first
time at this scale, both in terms of contributors and observation period. We do
not tackle the bias angle, but provide empirical data and findings that can be
leveraged to that end as future work.
\emph{Global software engineering}~\cite{herbsleb2007globalsweng} is the
sub-field of software engineering that has analyzed the challenges of scaling
developer collaboration globally, including the specific concern of how to deal
with geographic diversity~\cite{holmstrom2006globaldev, fraser2014eastwest}.
Decades later the present study provides evidence that can be used, in the
specific case of public code and at a very large scale, to verify which
promises of global software engineering have borne fruit.
\section{Methodology}
\label{sec:method}
\newif\ifgrowthfig \growthfigtrue
\ifgrowthfig
\begin{figure}
\includegraphics[width=\columnwidth]{yearly-commits}
\caption{Yearly public commits over time (log scale).
}
\label{fig:growth}
\end{figure}
\fi
\paragraph{Dataset}
We retrieved from \SWH/~\cite{swh-msr2019-dataset} all commits archived until \DATALastCommitDate/.
They amount to \DATACommitsRaw/ commits, unique by SHA1 identifier, harvested from \DATATotalCommitsInSH/ public projects coming from major development forges (GitHub, GitLab, etc.) and package repositories (Debian, PyPI, NPM, etc.).
Commits in the dataset are by \DATAAuthorsRaw/ authors, unique by $\langle$name, email$\rangle$ pairs.
The dataset came as two relational tables, one for commits and one for authors, with the former referencing the latter via a foreign key.
\iflong
Each row in the commit table contains the following fields: commit SHA1 identifier, author and committer timestamps, author and committer identifiers (referencing the author table).
The distinction between commit authors and committers comes from Git, which allows committing a change authored by someone else.
For this study we focused on authors and ignored committers, as the difference between the two is not relevant for our research questions and the number of commits with a committer other than their author is negligible.
\fi
For each entry in the author table we have author full name and email as two separate strings of raw bytes.
We removed implausible or unusable names that: are not decodable as UTF-8 (\DATAAuthorsRmNondecodable/ author names removed), are email addresses instead of names (\DATAAuthorsRmEmail/ ``names''), consist of only blank characters (\DATAAuthorsRmBlank/), contain more than 10\% non-letters (\DATAAuthorsRmNonletter/), are longer than 100 characters (\DATAAuthorsRmToolong/).
After filtering, about \DATAAuthorsPlausibleApprox/ authors (\DATAAuthorsPlausiblePct/ of the initial dataset) remained for further analysis.
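For illustration, the filtering step can be sketched as follows (not part of the original pipeline; the helper name and the decision not to count whitespace as ``non-letters'' are our assumptions):
\begin{verbatim}
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_plausible_name(raw: bytes) -> bool:
    try:
        name = raw.decode("utf-8")   # drop names not decodable as UTF-8
    except UnicodeDecodeError:
        return False
    stripped = name.strip()
    if not stripped:                 # drop blank-only names
        return False
    if EMAIL_RE.match(stripped):     # drop email addresses used as names
        return False
    if len(name) > 100:              # drop names longer than 100 characters
        return False
    chars = [c for c in stripped if not c.isspace()]
    non_letters = sum(1 for c in chars if not c.isalpha())
    return non_letters <= 0.1 * len(chars)   # drop >10% non-letters
\end{verbatim}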
Note that the amount of public code commits (and authors) contained in the
initial dataset grows exponentially over
time~\cite{swh-provenance-emse}\ifgrowthfig, as shown for commits in
\Cref{fig:growth}\else: from $10^4$ commits in 1971, to $10^6$ in 1998, to
almost $10^9$ in 2020\fi. As a consequence, the observed trends tend to be more
stable in recent decades than in the earliest ones, due to statistics being taken
on exponentially larger populations.
\paragraph{Geolocation}
\begin{figure}
\centering
\includegraphics[clip,trim=6cm 6cm 0 0,width=\linewidth]{subregions-ours}
\caption{The \DATAWorldRegions/ world regions used as geolocation targets.}
\label{fig:worldmap}
\end{figure}
As geolocation targets we use macro world regions derived from the United Nations geoscheme~\cite{un1999geoscheme}.
To avoid domination by large countries (e.g., China or Russia) within macro regions, we merged and split some regions based on geographic proximity and the sharing of preeminent cultural identification features, such as spoken language.
\Cref{fig:worldmap} shows the final list of \DATAWorldRegions/ world regions used as geolocation targets in this study.
Geolocation of commit authors to world regions uses the two complementary techniques introduced in~\cite{icse-seis-2022-gender}, briefly recalled below.
The first one relies on the country code top-level domain (ccTLD) of email addresses extracted from commit metadata, e.g., \texttt{.fr}, \texttt{.ru}, \texttt{.cn}, etc.
We started from the IANA list of Latin character ccTLDs~\cite{wikipedia-cctld} and manually mapped each corresponding territory to a target world region.
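As an illustration of this first technique, a minimal sketch is given below (the \texttt{CCTLD\_TO\_REGION} table is a hypothetical stand-in for the hand-curated IANA mapping; region labels are placeholders):
\begin{verbatim}
CCTLD_TO_REGION = {
    "fr": "Europe",
    "ru": "Russia",
    "cn": "East Asia",
    # ... one entry per Latin-character ccTLD
}

def region_from_email(email: str):
    """Return a world-region guess based on the email ccTLD, or None."""
    domain = email.rsplit("@", 1)[-1].lower().rstrip(".")
    tld = domain.rsplit(".", 1)[-1]
    return CCTLD_TO_REGION.get(tld)
\end{verbatim}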
The second geolocation technique uses the UTC offset of commit timestamps (e.g., UTC-05:00) and author names to determine the most likely world region of the commit author.
For each UTC offset we determine a list of compatible places (country, state, or dependent territory) in the world that, at the time of that commit, had that UTC offset; commit time is key here, as country UTC offsets vary over time due to timezone changes.
To make this determination we use the IANA time zone database~\cite{tzdata}.
Then we assign to each place a score that captures the likelihood that a given author name is characteristic of it.
To this end we use the Forebears dataset of the frequencies of the most common first and family names which, quoting from~\cite{forebear-names}: {\itshape ``provides the approximate incidence of forenames and surnames produced from a database of \num{4 044 546 938} people (55.5\% of living people in 2014). As of September 2019 it covers \num{27 662 801} forenames and \num{27 206 821} surnames in 236 jurisdictions.''}
As in our dataset authors are full name strings (rather than split by first/family name), we first tokenize names (by blanks and case changes) and then lookup individual tokens in both first and family names frequency lists.
For each element found in name lists we multiply the place population\footnotemark{} by the name frequency to obtain a measure that is proportional to the number of persons bearing that name (token) in the specific place.
\footnotetext{To obtain population totals---as the notion of ``place'' is heterogeneous: full countries v.~slices of large countries spanning multiple timezones---we use a mixture of primary sources (e.g., government websites), and non-primary ones (e.g., Wikipedia articles).}
We sum this figure for all elements to obtain a place score, ending up with a list of $\langle$place, score$\rangle$ pairs.
We then partition this list by the world region that a place belongs to and sum the score for all the places in each region to obtain an overall score, corresponding to the likelihood that the commit belongs to a given world region.
We then assign the commit to the world region with the highest score.
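A minimal sketch of this scoring procedure follows (illustrative only: the \texttt{Place} objects, their population figures, name-frequency lookups, and per-date UTC offsets are hypothetical stand-ins for the IANA time zone database, the Forebears lists, and the place metadata described above):
\begin{verbatim}
import re
from collections import defaultdict
from datetime import datetime

def tokenize(full_name: str):
    """Split an author name on blanks and lower/upper case changes."""
    return re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", full_name)

def compatible_places(places, commit_time: datetime, utc_offset_min: int):
    """Places whose UTC offset at commit time matches the commit offset."""
    return [p for p in places
            if p.utc_offset_minutes(commit_time) == utc_offset_min]

def guess_region(full_name, commit_time, utc_offset_min, places):
    region_score = defaultdict(float)
    for place in compatible_places(places, commit_time, utc_offset_min):
        score = 0.0
        for token in tokenize(full_name):
            freq = place.forename_freq(token) + place.surname_freq(token)
            # population * frequency ~ number of bearers of this name token
            score += place.population * freq
        region_score[place.region] += score
    if not region_score:
        return None
    return max(region_score, key=region_score.get)
\end{verbatim}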
The email-based technique suffers from the limited and unbalanced use of ccTLDs: most developers use generic TLDs such as \texttt{.com}, \texttt{.org}, or \texttt{.net}.
Moreover this does not happen uniformly across zones: US-based developers, for example, use the \texttt{.us} ccTLD far more rarely than their European counterparts use theirs.
On the other hand the offset/name-based technique relies on the UTC offset of the commit timestamps.
Due to tool configurations on developer setups, a large number of commits in the dataset have a UTC offset equal to zero.
This issue affects recent commits less (\DATACommitsTZZTwoThousandTwenty/ of commits from 2020 have a zero offset) than older ones (\DATACommitsTZZTwoThousand/ of those from 2000).
As a result the offset/name-based technique could end up detecting a large share of older commits as authored by African developers, and to a lesser extent Europeans.
To counter these issues we combine the two geolocation techniques, applying the offset/name-based technique to all commits with a non-zero UTC offset, and the email-based one to all other commits.
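The combination rule can be sketched as follows (attribute names are hypothetical; \texttt{guess\_region} and \texttt{region\_from\_email} refer to the sketches above):
\begin{verbatim}
def geolocate_commit(commit, places):
    # offset/name-based technique for commits with a non-zero UTC offset,
    # email/ccTLD-based technique otherwise
    if commit.utc_offset_min != 0:
        return guess_region(commit.author_name, commit.time,
                            commit.utc_offset_min, places)
    return region_from_email(commit.author_email)
\end{verbatim}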
\section{Results and Discussion}
\label{sec:results}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{stacked.pdf}
\caption{Ratio of commits (above) and active authors (below) by world zone over the 1971--2020 period.}
\Description[Chart]{Stacked bar chart showing the world zone ratios for commits and authors over the 1971--2020 period.}
\label{fig:results}
\end{figure*}
To answer \cref{rq:geodiversity} we gathered the number of commits and distinct authors per year and per world zone.
We present the obtained results in \Cref{fig:results} as two stacked bar charts, showing yearly breakdowns for commits and authors respectively.
Every bar represents a year and is partitioned in slices showing the commit/author ratio for each of the world regions of \Cref{fig:worldmap} in that year.
To avoid outliers due to sporadic contributors, in the author chart we only consider authors having contributed at least 5 commits in a given year.
When observing trends in the charts, keep in mind that the total numbers of commits and authors grow exponentially over time.
Hence for the first years in the charts, the number of data points in some world regions can be extremely small, with negative consequences on the stability of trends.
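The yearly breakdowns shown in \Cref{fig:results} can be assembled along the following lines (an illustrative sketch, not the actual analysis code; it assumes a pandas DataFrame \texttt{commits} with one row per geolocated commit and hypothetical \texttt{year} and \texttt{region} columns):
\begin{verbatim}
import pandas as pd
import matplotlib.pyplot as plt

def plot_region_shares(commits: pd.DataFrame):
    counts = commits.groupby(["year", "region"]).size().unstack(fill_value=0)
    shares = counts.div(counts.sum(axis=1), axis=0)  # per-year ratios
    shares.plot(kind="bar", stacked=True, width=1.0)
    plt.ylabel("share of commits")
    plt.tight_layout()
    plt.show()
\end{verbatim}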
\paragraph{Geographic diversity over time}
Overall, the general trend appears to be that the \textbf{geographic diversity in public code is increasing}: North America and Europe alternated their ``dominance'' until the middle of the 90s; from that moment on most other world regions show a slow but steady increment.
This trend of increased participation in public code development includes Central and South Asia (comprising India), Russia, Africa, and Central and South America.
Even zones that do not seem to follow this trend, such as Australia and New Zealand, are increasing their participation, albeit more slowly than other zones.
For example, Australia and New Zealand increased the absolute number of their commits by about 3 orders of magnitude from 2000 to the present day.
Another interesting phenomenon that can be observed in both charts is the sudden contraction of contributions from North America in 1995; since the charts depict ratios, this corresponds to other zones, and Europe in particular, increasing their share.
An analysis of the top ten contributors in the years right before the contraction shows that nine of them have \texttt{ucbvax.Berkeley.EDU} as their author email domain; the tenth is Keith Bostic, one of the leading BSD Unix developers, appearing with the email \texttt{bostic}.
No developer with the same email domain appears among the top one hundred contributors in 1996.
This shows the relevance that BSD Unix and the Computer Systems Research Group at the University of California at Berkeley had in the history of open source software.
The group was disbanded in 1995, partially as a consequence of the so-called UNIX wars~\cite{kernighan2019unixhistory}, and this contributed significantly---also because of the relatively low amount of public code circulating at the time---to the sudden drop of contributions from North America in subsequent years.
Descendant UNIX operating systems based on BSD, such as OpenBSD, FreeBSD, and NetBSD, had a smaller impact on world trends due to (i) the increasing amount of open source code coming from elsewhere and (ii) their more geographically diverse developer communities.
Another time frame in which the ratios for Europe and North America are subject to large, sudden changes is 1975--79.
A preliminary analysis shows that these ratios are erratic due to the very limited number of commits in that time period, but we were unable to detect a specific root cause.
Trends for those years should be subject to further studies, in collaboration with software historians.
\paragraph{Colonialism}
Another trend that stands out from the charts is that Africa appears to be well represented.
To assess whether this results from a methodological bias, we double-checked the commits detected as originating from Africa for timezones in the $[0, 3]$ offset range using both the email- and the offset/name-based methods.
The results show that the offset/name-based approach assigns 22.7\% of the commits to Africa whereas the email-based one only assigns 2.7\% of them.
While a deeper investigation is in order, it is our opinion that the phenomenon we are witnessing here is a consequence of colonialism, specifically the adoption of European names in African countries.
For example, the name Eric, derived from Old Norse, is more popular in Ghana than in France or the UK.
This challenges the ability of the offset/name-based method to correctly differentiate between candidate places.
Together with the fact that several African countries have large populations, this can lead the offset/name-based method to detect European names as originating from Africa.
While this cuts both ways, the likelihood that a random person contributes to public code differs greatly between European countries, all of which have a well-developed software industry, and African countries, which do not all share this trait.
\paragraph{Immigration/emigration}
Another area where a similar phenomenon could be at play is the evolution of Central and South America.
Contributions from this macro region appear to be growing steadily.
To assess whether this is the result of a bias introduced by the name-based detection, we analyzed the evolution of the offset/name-based assignment over time for authors whose email domain is among the top-ten US-based entities in terms of overall contributions (estimated in turn by analyzing the most frequent email domains and manually selecting those belonging to US-based entities).
In 1971 no author with an email from top US-based entities is detected as belonging to Central and South America, whereas in 2019 the ratio is 12\%.
Nowadays more than one tenth of the people email-associated with top US-based entities have popular Central and South American names, which we posit as a likely consequence of immigration into the US (emigration from Central and South America).
Since immigration has a much longer history than what we are studying here, what we are witnessing probably includes its long-term consequences, such as second- and third-generation immigrants employed in white-collar jobs like software development.
\section{Limitations and Future Work}
\label{sec:conclusion}
We have performed an exploratory, yet very large scale, empirical study of the geographic diversity in public code commits over time.
We have analyzed 2.2 billion\xspace public commits covering the \DATAYearRange/ time period.
We have geolocated developers to \DATAWorldRegions/ world regions using as signals email domains, timezone offsets, and author names.
Our findings show that the geographic diversity in public code is increasing over time, and markedly so over the past 20--25 years.
Observed trends also co-occur with historical events and macro phenomena like the end of the UNIX wars, increase of coding literacy around the world, colonialism, and immigration.
\medskip
\emph{Limitations.}
This study relies on a combination of two geolocation methods: one based on email domains, another based on commit UTC offsets and author names.
We discussed some of the limitations of either method in \Cref{sec:method}, motivating our decision of restricting the use of the email-based method to commits with a zero UTC offset.
As a consequence, for most commits in the dataset the offset/name-based method is used.
With such method, the frequencies of forenames and surnames are used to rank candidate zones that have a compatible UTC offset at commit time.
A practical consequence of this is that for commits with, say, offset UTC+09:00 the candidate places can be Russia, Japan and Australia, depending on the specific date due to daylight saving time.
Popular forenames and surnames in these regions tend to be quite different so the likelihood of the method to provide a reliable detection is high.
For other offsets the set of popular forenames and surnames from candidate zones can exhibit more substantial overlaps, negatively impacting detection accuracy.
We have discussed some of these cases in \Cref{sec:results}, but others might be lingering in the results, impacting observed trends.
The choice of using the email-based method for commits with zero UTC offset, and the offset/name-based method elsewhere, has allowed us to study all developers not having a country-specific email domain (ccTLD), but comes with the risk of under-representing the world zones that have (in part, and at some times of the year) an actual UTC offset of zero.
A potential bias in this study could be introduced by the fact that the name database used for offset/name-based geolocation only contains names formed using Latin alphabet characters.
We looked for names containing Chinese, Japanese, and Korean characters in the original dataset, finding only a negligible amount of authors who use non-Latin characters in their VCS names, which leads us to believe that the impact of this issue is minimal.
We did not apply identity merging (e.g., using state-of-the-art tools like SortingHat~\cite{moreno2019sortinghat}), but we do not expect this to be a significant issue because: (a) to introduce bias in author trends the distribution of identity merges around the world should be uneven, which seems unlikely; and (b) the observed commit trends (which would be unaffected by identity merging) are very similar to observed author trends.
We did not systematically remove known bot accounts~\cite{lebeuf2018swbots} from the author dataset, but we did check for the presence of software bots among the top committers of each year. We only found limited traces of continuous integration (CI) bots, used primarily to automate merge commits. After removing CI bots from the dataset the observed global trends were unchanged; therefore this paper presents unfiltered data.
\medskip
\emph{Future work.}
To some extent the above limitations are the price to pay to study such a large dataset: there exists a trade-off between large-scale analysis and accuracy.
We plan nonetheless to further investigate and mitigate them in future work.
Multi-method approaches, merging data mining with social science methods, could be applied to address some of the questions raised in this exploratory study.
While they do not scale to the whole dataset, such approaches can be adopted to dig deeper into specific aspects, particularly those related to social phenomena.
Software is a social artifact; it is no wonder that aspects related to sociocultural evolution emerge when analyzing its evolution at this scale.
\clearpage
\section{Introduction}
One of the fundamental ingredients in the theory of non-commutative or
quantum geometry is the notion of a differential calculus.
In the framework of quantum groups the natural notion
is that of a
bicovariant differential calculus as introduced by Woronowicz
\cite{Wor_calculi}. Due to the allowance of non-commutativity
the uniqueness of a canonical calculus is lost.
It is therefore desirable to classify the possible choices.
The most important piece is the space of one-forms or ``first
order differential calculus'' to which we will restrict our attention
in the following. (From this point on we will use the term
``differential calculus'' to denote a
bicovariant first order differential calculus).
Much attention has been devoted to the investigation of differential
calculi on quantum groups $C_q(G)$ of function algebra type for
$G$ a simple Lie group.
Natural differential calculi on matrix quantum groups were obtained by
Jurco \cite{Jur} and
Carow-Watamura et al.\
\cite{CaScWaWe}. A partial classification of calculi of the same
dimension as the natural ones
was obtained by
Schm\"udgen and Sch\"uler \cite{ScSc2}.
More recently, a classification theorem for factorisable
cosemisimple quantum groups was obtained by Majid \cite{Majid_calculi},
covering the general $C_q(G)$ case. A similar result was
obtained later by Baumann and Schmitt \cite{BaSc}.
Also, Heckenberger and Schm\"udgen \cite{HeSc} gave a
complete classification on $C_q(SL(N))$ and $C_q(Sp(N))$.
In contrast, for $G$ not simple or semisimple the differential calculi
on $C_q(G)$
are largely unknown. A particularly basic case is the Lie group $B_+$
associated with the Lie algebra $\lalg{b_+}$ generated by two elements
$X,H$ with the relation $[H,X]=X$. The quantum enveloping algebra
\ensuremath{U_q(\lalg{b_+})}{}
is self-dual, i.e.\ is non-degenerately paired with itself \cite{Drinfeld}.
This has an interesting consequence: \ensuremath{U_q(\lalg{b_+})}{} may be identified with (a
certain algebraic model of) \ensuremath{C_q(B_+)}. The differential calculi on this
quantum group and on its ``classical limits'' \ensuremath{C(B_+)}{} and \ensuremath{U(\lalg{b_+})}{}
will be the main concern of this paper. We pay hereby equal attention
to the dual notion of ``quantum tangent space''.
In section \ref{sec:q} we obtain the complete classification of differential
calculi on \ensuremath{C_q(B_+)}{}. It turns out that (finite
dimensional) differential
calculi are characterised by finite subsets $I\subset\mathbb{N}$.
These
sets determine the decomposition into coirreducible (i.e.\ not
admitting quotients) differential calculi
characterised by single integers. For the coirreducible calculi the
explicit formulas for the commutation relations and braided
derivations are given.
In section \ref{sec:class} we give the complete classification for the
classical function algebra \ensuremath{C(B_+)}{}. It is essentially the same as in the
$q$-deformed setting and we stress this by giving an almost
one-to-one correspondence of differential calculi to those obtained in
the previous section. In contrast, however, the decomposition and
coirreducibility properties do not hold at all. (One may even say that
they are maximally violated). We give the explicit formulas for those
calculi corresponding to coirreducible ones.
More interesting perhaps is the ``dual'' classical limit. I.e.\ we
view \ensuremath{U(\lalg{b_+})}{} as a quantum function algebra with quantum enveloping
algebra \ensuremath{C(B_+)}{}. This is investigated in section \ref{sec:dual}. It
turns out that in this setting we have considerably more freedom in
choosing a
differential calculus since the bicovariance condition becomes much
weaker. This shows that this dual classical limit is in a sense
``unnatural'' as compared to the ordinary classical limit of section
\ref{sec:class}.
However, we can still establish a correspondence of certain
differential calculi to those of section \ref{sec:q}. The
decomposition properties are conserved while the coirreducibility
properties are not.
We give the
formulas for the calculi corresponding to coirreducible ones.
Another interesting aspect of viewing \ensuremath{U(\lalg{b_+})}{} as a quantum function
algebra is the connection to quantum deformed models of space-time and
its symmetries. In particular, the $\kappa$-deformed Minkowski space
coming from the $\kappa$-deformed Poincar\'e algebra
\cite{LuNoRu}\cite{MaRu} is just a simple generalisation of \ensuremath{U(\lalg{b_+})}.
We use this in section \ref{sec:kappa} to give
a natural $4$-dimensional differential calculus. Then we show (in a
formal context) that integration is given by
the usual Lebesgue integral on $\mathbb{R}^n$ after normal ordering.
This is obtained in an intrinsic context different from the standard
$\kappa$-Poincar\'e approach.
A further important motivation for the investigation of differential
calculi on
\ensuremath{U(\lalg{b_+})}{} and \ensuremath{C(B_+)}{} is the relation of those objects to the Planck-scale
Hopf algebra \cite{Majid_Planck}\cite{Majid_book}. This shall be
developed elsewhere.
In the remaining parts of this introduction we will specify our
conventions and provide preliminaries on the quantum group \ensuremath{U_q(\lalg{b_+})}, its
deformations, and differential calculi.
\subsection{Conventions}
Throughout, $\k$ denotes a field of characteristic 0 and
$\k(q)$ denotes the field of rational
functions in one parameter $q$ over $\k$.
$\k(q)$ is our ground field in
the $q$-deformed setting, while $\k$ is the
ground field in the ``classical'' settings.
Within section \ref{sec:q} one could equally well view $\k$ as the ground
field with $q\in\k^*$ not a root of unity. This point of view is
problematic, however, when obtaining ``classical limits'' as
in sections \ref{sec:class} and \ref{sec:dual}.
The positive integers are denoted by $\mathbb{N}$ while the non-negative
integers are denoted by $\mathbb{N}_0$.
We define $q$-integers, $q$-factorials and
$q$-binomials as follows:
\begin{gather*}
[n]_q=\sum_{i=0}^{n-1} q^i\qquad
[n]_q!=[1]_q [2]_q\cdots [n]_q\qquad
\binomq{n}{m}=\frac{[n]_q!}{[m]_q! [n-m]_q!}
\end{gather*}
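For instance (a small sanity check, not in the original text),
\begin{gather*}
\binomq{3}{1}=\frac{[3]_q!}{[1]_q!\,[2]_q!}
=\frac{(1+q)(1+q+q^2)}{1+q}=1+q+q^2=[3]_q
\end{gather*}
which reduces to the ordinary binomial coefficient $\binom{3}{1}=3$ at $q=1$.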
For a function of several variables (among
them $x$) over $\k$ we define
\begin{gather*}
(T_{a,x} f)(x) = f(x+a)\\
(\fdiff_{a,x} f)(x) = \frac{f(x+a)-f(x)}{a}
\end{gather*}
with $a\in\k$ and similarly over $\k(q)$
\begin{gather*}
(Q_{m,x} f)(x) = f(q^m x)\\
(\partial_{q,x} f)(x) = \frac{f(x)-f(qx)}{x(1-q)}
\end{gather*}
with $m\in\mathbb{Z}$.
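As an elementary example (not in the original), applying these operators to monomials gives
\begin{gather*}
(\fdiff_{a,x}\, x^2) = \frac{(x+a)^2-x^2}{a} = 2x+a\qquad
(\partial_{q,x}\, x^n) = \frac{x^n-q^n x^n}{x(1-q)} = [n]_q\, x^{n-1}
\end{gather*}
so that $\partial_{q,x}$ recovers the ordinary derivative of $x^n$ in the limit $q\to 1$.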
We frequently use the notion of a polynomial in an extended
sense. Namely, if we have an algebra with an element $g$ and its
inverse $g^{-1}$ (as
in \ensuremath{U_q(\lalg{b_+})}{}) we will mean by a polynomial in $g,g^{-1}$ a finite power
series in $g$ with exponents in $\mathbb{Z}$. The length of such a polynomial
is the difference between highest and lowest degree.
If $H$ is a Hopf algebra, then $H^{op}$ will denote the Hopf algebra
with the opposite product.
\subsection{\ensuremath{U_q(\lalg{b_+})}{} and its Classical Limits}
\label{sec:intro_limits}
We recall that,
in the framework of quantum groups, the duality between enveloping algebra
$U(\lalg{g})$ of the Lie algebra and algebra of functions $C(G)$ on the Lie
group carries over to $q$-deformations.
In the case of
$\lalg{b_+}$, the
$q$-deformed enveloping algebra \ensuremath{U_q(\lalg{b_+})}{} defined over $\k(q)$ as
\begin{gather*}
U_q(\lalg{b_+})=\k(q)\langle X,g,g^{-1}\rangle \qquad
\text{with relations} \\
g g^{-1}=1 \qquad Xg=qgX \\
\cop X=X\otimes 1 + g\otimes X \qquad
\cop g=g\otimes g \\
\cou (X)=0 \qquad \cou (g)=1 \qquad
\antip X=-g^{-1}X \qquad \antip g=g^{-1}
\end{gather*}
is self-dual. Consequently, it
may alternatively be viewed as the quantum algebra \ensuremath{C_q(B_+)}{} of
functions on the Lie group $B_+$ associated with $\lalg{b_+}$.
It has two classical limits, the enveloping algebra \ensuremath{U(\lalg{b_+})}{}
and the function algebra $C(B_+)$.
The transition to the classical enveloping algebra is achieved by
replacing $q$
by $e^{-t}$ and $g$ by $e^{tH}$ in a formal power series setting in
$t$, introducing a new generator $H$. Now, all expressions are written in
the form $\sum_j a_j t^j$ and only the lowest order in $t$ is kept.
The transition to the classical function algebra on the other hand is
achieved by setting $q=1$.
This may be depicted as follows:
\[\begin{array}{c @{} c @{} c @{} c}
& \ensuremath{U_q(\lalg{b_+})} \cong \ensuremath{C_q(B_+)} && \\
& \diagup \hspace{\stretch{1}} \diagdown && \\
\begin{array}{l} q=e^{-t} \\ g=e^{tH} \end{array} \Big| _{t\to 0}
&& q=1 &\\
\swarrow &&& \searrow \\
\ensuremath{U(\lalg{b_+})} & <\cdots\textrm{dual}\cdots> && \ensuremath{C(B_+)}
\end{array}\]
The self-duality of \ensuremath{U_q(\lalg{b_+})}{} is expressed as a pairing
$\ensuremath{U_q(\lalg{b_+})}\times\ensuremath{U_q(\lalg{b_+})}\to\k$
with
itself:
\[\langle X^n g^m, X^r g^s\rangle =
\delta_{n,r} [n]_q!\, q^{-n(n-1)/2} q^{-ms}
\qquad\forall n,r\in\mathbb{N}_0\: m,s\in\mathbb{Z}\]
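For example (a check we add for illustration), setting $n=r=2$, $m=1$, $s=-1$ gives
\[\langle X^2 g, X^2 g^{-1}\rangle = [2]_q!\, q^{-1}\, q^{1} = 1+q.\]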
In the classical limit this becomes the pairing $\ensuremath{U(\lalg{b_+})}\times\ensuremath{C(B_+)}\to\k$
\begin{equation}
\langle X^n H^m, X^r g^s\rangle =
\delta_{n,r} n!\, s^m\qquad \forall n,m,r\in\mathbb{N}_0\: s\in\mathbb{Z}
\label{eq:pair_class}
\end{equation}
\subsection{Differential Calculi and Quantum Tangent Spaces}
In this section we recall some facts about differential calculi
along the lines of Majid's treatment in \cite{Majid_calculi}.
Following Woronowicz \cite{Wor_calculi}, first order bicovariant differential
calculi on a quantum group $A$ (of
function algebra type) are in one-to-one correspondence to submodules
$M$ of $\ker\cou\subset A$ in the category $^A_A\cal{M}$ of (say) left
crossed modules of $A$ via left multiplication and left adjoint
coaction:
\[
a\triangleright v = av \qquad \mathrm{Ad_L}(v)
=v_{(1)}\antip v_{(3)}\otimes v_{(2)}
\qquad \forall a\in A, v\in A
\]
More precisely, given a crossed submodule $M$, the corresponding
calculus is given by $\Gamma=\ker\cou/M\otimes A$ with $\diff a =
\pi(\cop a - 1\otimes a)$ ($\pi$ the canonical projection).
The right action and coaction on $\Gamma$ are given by
the right multiplication and coproduct on $A$, the left action and
coaction by the tensor product ones with $\ker\cou/M$ as a left
crossed module. In all of what follows, ``differential calculus'' will
mean ``bicovariant first order differential calculus''.
Alternatively \cite{Majid_calculi}, given in addition a quantum group $H$
dually paired with $A$
(which we might think of as being of enveloping algebra type), we can
express the coaction of $A$ on
itself as an action of $H^{op}$ using the pairing:
\[
h\triangleright v = \langle h, v_{(1)} \antip v_{(3)}\rangle v_{(2)}
\qquad \forall h\in H^{op}, v\in A
\]
Thereby we change from the category of (left) crossed $A$-modules to
the category of left modules of the quantum double $A\!\bowtie\! H^{op}$.
In this picture the pairing between $A$ and $H$ descends to a pairing
between $A/\k 1$ (which we may identify with $\ker\cou\subset A$) and
$\ker\cou\subset H$. Further quotienting $A/\k 1$ by $M$ (viewed in
$A/\k 1$) leads to a pairing with the subspace $L\subset\ker\cou H$
that annihilates $M$. $L$ is called a ``quantum tangent space''
and is dual to the differential calculus $\Gamma$ generated by $M$ in
the sense that $\Gamma\cong \Lin(L,A)$ via
\begin{equation}
A/(\k 1+M)\otimes A \to \Lin(L,A)\qquad
v\otimes a \mapsto \langle \cdot, v\rangle a
\label{eq:eval}
\end{equation}
if the pairing between $A/(\k 1+M)$ and $L$ is non-degenerate.
The quantum tangent spaces are obtained directly by dualising the
(left) action of the quantum double on $A$ to a (right) action on
$H$. Explicitly, this is the adjoint action and the coregular action
\[
h \triangleright x = h_{(1)} x \antip h_{(2)} \qquad
a \triangleright x = \langle x_{(1)}, a \rangle x_{(2)}\qquad
\forall h\in H, a\in A^{op},x\in A
\]
where we have converted the right action to a left action by going
from \mbox{$A\!\bowtie\! H^{op}$}-modules to \mbox{$H\!\bowtie\! A^{op}$}-modules.
Quantum tangent spaces are subspaces of $\ker\cou\subset H$ invariant
under the projection of this action to $\ker\cou$ via \mbox{$x\mapsto
x-\cou(x) 1$}. Alternatively, the left action of $A^{op}$ can be
converted to a left coaction of $H$ being the comultiplication (with
subsequent projection onto $H\otimes\ker\cou$).
We can use the evaluation map (\ref{eq:eval})
to define a ``braided derivation'' on elements of the quantum tangent
space via
\[\partial_x:A\to A\qquad \partial_x(a)={\diff a}(x)=\langle
x,a_{(1)}\rangle a_{(2)}\qquad\forall x\in L, a\in A\]
This obeys the braided derivation rule
\[\partial_x(a b)=(\partial_x a) b
+ a_{(2)} \partial_{a_{(1)}\triangleright x}b\qquad\forall x\in L, a\in A\]
Given a right invariant basis $\{\eta_i\}_{i\in I}$ of $\Gamma$ with a
dual basis $\{\phi_i\}_{i\in I}$ of $L$ we have
\[{\diff a}=\sum_{i\in I} \eta_i\cdot \partial_i(a)\qquad\forall a\in A\]
where we denote $\partial_i=\partial_{\phi_i}$. (This can be easily
seen to hold by evaluation against $\phi_i\ \forall i$.)
\section{Classification on \ensuremath{C_q(B_+)}{} and \ensuremath{U_q(\lalg{b_+})}{}}
\label{sec:q}
In this section we completely classify differential calculi on \ensuremath{C_q(B_+)}{}
and, dually, quantum tangent spaces on \ensuremath{U_q(\lalg{b_+})}{}. We start by
classifying the relevant crossed modules and then proceed to a
detailed description of the calculi.
\begin{lem}
\label{lem:cqbp_class}
(a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ensuremath{C_q(B_+)}$ by left
multiplication and left
adjoint coaction are in one-to-one correspondence to
pairs $(P,I)$
where $P\in\k(q)[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is
finite.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$
if $P=1$.
(b) The finite codimensional maximal $M$
correspond to the pairs $(1,\{n\})$ with $n$ the
codimension. The infinite codimensional maximal $M$ are characterised by
$(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any
$k\in\mathbb{N}_0$.
(c) Crossed submodules $M$ of finite
codimension are intersections of maximal ones.
In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to
$(1,\{n\})$.
\end{lem}
\begin{proof}
(a) Let $M\subseteq\ensuremath{C_q(B_+)}$ be a crossed \ensuremath{C_q(B_+)}-submodule by left
multiplication and left adjoint coaction and let
$\sum_n X^n P_n(g) \in M$, where $P_n$ are polynomials in $g,g^{-1}$
(every element of \ensuremath{C_q(B_+)}{} can be expressed in
this form). From the formula for the coaction ((\ref{eq:adl}), see appendix)
we observe that for all $n$ and for all $t\le n$ the element
\[X^t P_n(g) \prod_{s=1}^{n-t} (1-q^{s-n}g)\]
lies in $M$.
In particular
this is true for $t=n$, meaning that elements of constant degree in $X$
lie separately in $M$. It is therefore enough to consider such
elements.
Let now $X^n P(g) \in M$.
By left multiplication $X^n P(g)$ generates any element of the form
$X^k P(g) Q(g)$, where $k\ge n$ and $Q$ is any polynomial in
$g,g^{-1}$. (Note that $Q(q^kg) X^k=X^k Q(g)$.)
We see that $M$ contains the following elements:
\[\begin{array}{ll}
\vdots & \\
X^{n+2} & P(g) \\
X^{n+1} & P(g) \\
X^n & P(g) \\
X^{n-1} & P(g) (1-q^{1-n}g) \\
X^{n-2} & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\
\vdots & \\
X & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g) \\
& P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g)(1-g)
\end{array}
\]
Moreover, if $M$ is generated by $X^n P(g)$ as a module
then these elements generate a basis for $M$ as a vector
space by left
multiplication with polynomials in $g,g^{-1}$. (Observe that the
application of the coaction to any of the elements shown does not
generate elements of new type.)
Now, let $M$ be a given crossed submodule. We pick, among the
elements in $M$ of the form $X^n P(g)$ with $P$ of minimal
length,
one
with lowest degree in $X$. Then certainly the elements listed above are
in $M$. Furthermore for any element of the form $X^k Q(g)$, $Q$ must
contain $P$ as a factor and for $k<n$, $Q$ must contain $P(g) (1-q^{1-n}g)$
as a factor. We continue by picking the smallest $n_2$, so that
$X^{n_2} P(g) (1-q^{1-n}g) \in M$. Certainly $n_2<n$. Again, for any
element $X^l Q(g)$ in $M$ with $l<n_2$, we have that
$P(g) (1-q^{1-n}g) (1-q^{1-n_2}g)$ divides $Q(g)$. We proceed by
induction, until we arrive at degree zero in $X$.
We obtain the following elements generating a basis for $M$ by left
multiplication with polynomials in $g,g^{-1}$ (rename $n_1=n$):
\[ \begin{array}{ll}
\vdots & \\
X^{n_1+1} & P(g) \\
X^{n_1} & P(g) \\
X^{n_1-1} & P(g) (1-q^{1-{n_1}}g) \\
\vdots & \\
X^{n_2} & P(g) (1-q^{1-{n_1}}g) \\
X^{n_2-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g)\\
\vdots & \\
X^{n_3} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) \\
X^{n_3-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) (1-q^{1-n_3}g)\\
\vdots & \\
& P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g) (1-q^{1-n_3}g) \ldots (1-q^{1-n_m}g)
\end{array}
\]
We see that the integers $n_1,\ldots,n_m$ uniquely determine the shape
of this picture. The polynomial $P(g)$ on the other hand can be
shifted (by $g$ and $g^{-1}$) or renormalised. To determine $M$
uniquely we shift and normalise $P$ in such a way that it contains no
negative powers
and has unit constant coefficient. $P$ can then be viewed as a
polynomial $\in\k(q)[g]$.
We see that the codimension of $M$ is the sum of the lengths of the
polynomials in $g$ over all degrees in $X$ in the above
picture. Finite codimension corresponds to $P=1$. In this
case the codimension is the sum
$n_1+\ldots +n_m$.
(b) We observe that polynomials of the form $1-q^{j}g$
have no common divisors for distinct $j$. Therefore,
finite codimensional crossed
submodules are maximal if and only if
there is just one integer ($m=1$). Thus, the maximal left
crossed submodule of
codimension $k$ is generated by $X^k$ and $1-q^{1-k}g$.
For an infinite codimensional crossed submodule we certainly need
$m=0$. Then, the maximality corresponds to irreducibility of
$P$.
(c) This is again due to the distinctness of factors $1-q^j g$.
\end{proof}
\begin{cor}
\label{cor:cqbp_eclass}
(a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C_q(B_+)}$
are in one-to-one correspondence to pairs
$(P,I)$ as in lemma \ref{lem:cqbp_class}
with the additional constraint $(1-g)$ divides $P(g)$ or $1\in I$.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$
if $P=1$.
(b) The finite codimensional maximal $M$
correspond to the pairs
$(1,\{1,n\})$ with $n\ge 2$ the
codimension. The infinite codimensional maximal $M$ correspond to pairs
$(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any
$k\in\mathbb{N}_0$.
(c) Crossed submodules $M$ of finite
codimension are intersections of maximal ones.
In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to
$(1,\{1,n\})$.
\end{cor}
\begin{proof}
First observe that $\sum_n X^n P_n(g)\in \ker\cou$ if and only if
$(1-g)$ divides $P_0(g)$. This is to say that $\ker\cou$
is the crossed submodule corresponding to the pair $(1,\{1\})$ in
lemma \ref{lem:cqbp_class}. We obtain the classification
from the one of lemma \ref{lem:cqbp_class} by intersecting
everything with this crossed submodule. In particular, this reduces
the codimension by one in the finite codimensional case.
\end{proof}
\begin{lem}
\label{lem:uqbp_class}
(a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ via the left adjoint
action and left
regular coaction are in one-to-one correspondence to the set
$3^{\mathbb{N}_0}\times2^{\mathbb{N}}$.
Finite dimensional $L$ are in one-to-one correspondence to
finite sets $I\subset\mathbb{N}$ and $\dim L=\sum_{n\in I}n$.
(b) Finite dimensional irreducible $L$ correspond to $\{n\}$
with $n$ the dimension.
(c) Finite dimensional $L$ are direct sums of irreducible ones. In
particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$.
\end{lem}
\begin{proof}
(a) The action takes the explicit form
\[g\triangleright X^n g^k = q^{-n} X^n g^k\qquad
X\triangleright X^n g^k = X^{n+1}g^k(1-q^{-(n+k)})\]
while the coproduct is
\[\cop(X^n g^k)=\sum_{r=0}^{n} \binomq{n}{r}
q^{-r(n-r)} X^{n-r} g^{k+r}\otimes X^r g^k\]
which we view as a left coaction here.
Let now $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ be a crossed \ensuremath{U_q(\lalg{b_+})}-submodule via this action
and coaction. For $\sum_n X^n P_n(g)\in L$ invariance under
the action by
$g$ clearly means that \mbox{$X^n P_n(g)\in L\ \forall n$}. Then from
invariance under the coaction we can conclude that
if $X^n \sum_j a_j g^j\in L$ we must have
$X^n g^j\in L\ \forall j$.
I.e.\ elements of the form $X^n g^j$ lie separately in $L$ and it is
sufficient to consider such elements. From the coaction we learn that
if $X^n g^j\in L$ we have $X^m g^j\in L\ \forall m\le n$.
The action
by $X$ leads to $X^n g^j\in L \Rightarrow X^{n+1} g^j\in
L$ except if
$n+j=0$. The classification is given by the possible choices we have
for each power in $g$. For every positive integer $j$ we can
choose whether or not to include the span of
$\{ X^n g^j|\forall n\}$ in $L$ and for
every non-positive
integer we can choose to include either the span of $\{ X^n
g^j|\forall n\}$
or just
$\{ X^n g^j|\forall n\le -j\}$ or neither. I.e.\ for positive
integers ($\mathbb{N}$) we have two choices while for non-positive (identified
with $\mathbb{N}_0$) ones we have three choices.
Clearly, the finite dimensional $L$ are those where we choose only to
include finitely many powers of $g$ and also only finitely many powers
of $X$. The latter is only possible for the non-positive powers
of $g$.
By identifying positive integers $n$ with powers $1-n$ of $g$, we
obtain a classification by finite subsets of $\mathbb{N}$.
(b) Irreducibility clearly corresponds to just including one power of $g$
in the finite dimensional case.
(c) The decomposition property is obvious from the discussion.
\end{proof}
\begin{cor}
\label{cor:uqbp_eclass}
(a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ker\cou\subset\ensuremath{U_q(\lalg{b_+})}$ via
the left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
the set $3^{\mathbb{N}}\times2^{\mathbb{N}_0}$.
Finite dimensional $L$ are in one-to-one correspondence to
finite sets
$I\subset\mathbb{N}\setminus\{1\}$ and $\dim L=\sum_{n\in I}n$.
(b) Finite dimensional irreducible $L$ correspond to $\{n\}$
with $n\ge 2$ the dimension.
(c) Finite dimensional $L$ are direct sums of irreducible ones. In
particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$.
\end{cor}
\begin{proof}
Only a small modification of lemma \ref{lem:uqbp_class} is
necessary. Elements of
the form $P(g)$ are replaced by elements of the form
$P(g)-P(1)$. Monomials with non-vanishing degree in $X$ are unchanged.
The choices for elements of degree $0$ in $g$ are reduced to either
including the span of
$\{ X^k |\forall k>0 \}$ in the crossed submodule or not. In
particular, the crossed submodule characterised by \{1\} in lemma
\ref{lem:uqbp_class} is projected out.
\end{proof}
Differential calculi in the original sense of Woronowicz are
classified by corollary \ref{cor:cqbp_eclass} while from the quantum
tangent space
point of view the
classification is given by corollary \ref{cor:uqbp_eclass}.
In the finite dimensional case the duality is strict in the sense of a
one-to-one correspondence.
The infinite dimensional case on the other hand depends strongly on
the algebraic models we use for the function or enveloping
algebras. It is therefore not surprising that in the present purely
algebraic context the classifications are quite different in this
case. We will restrict ourselves to the finite dimensional
case in the following description of the differential calculi.
\begin{thm}
\label{thm:q_calc}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C_q(B_+)}{} and
corresponding quantum tangent spaces $L$ on \ensuremath{U_q(\lalg{b_+})}{} are
in one-to-one correspondence to
finite sets $I\subset\mathbb{N}\setminus\{1\}$. In particular
$\dim\Gamma=\dim L=\sum_{n\in I}n$.
(b) Coirreducible $\Gamma$ and irreducible $L$ correspond to
$\{n\}$ with $n\ge 2$ the dimension.
Such a $\Gamma$ has a
right invariant basis $\eta_0,\dots,\eta_{n-1}$ so that the relations
\begin{gather*}
\diff X=\eta_1+(q^{n-1}-1)\eta_0 X \qquad
\diff g=(q^{n-1}-1)\eta_0 g\\
[a,\eta_0]=\diff a\quad \forall a\in\ensuremath{C_q(B_+)}\\
[g,\eta_i]_{q^{n-1-i}}=0\quad \forall i\qquad
[X,\eta_i]_{q^{n-1-i}}=\begin{cases}
\eta_{i+1} & \text{if}\ i<n-1 \\
0 & \text{if}\ i=n-1
\end{cases}
\end{gather*}
hold, where $[a,b]_p := a b - p b a$. By choosing the dual basis on
the corresponding irreducible $L$ we obtain
the braided derivations
\begin{gather*}
\partial_i\no{f}=
\no{Q_{n-1-i,g} Q_{n-1-i,X} \frac{1}{[i]_q!} (\partial_{q,X})^i f}
\qquad\forall i\ge 1\\
\partial_0\no{f}=
\no{Q_{n-1,g} Q_{n-1,X} f - f}
\end{gather*}
for $f\in \k(q)[X,g,g^{-1}]$ with normal ordering
$\k(q)[X,g,g^{-1}]\to \ensuremath{C_q(B_+)}$ given by \mbox{$g^n X^m\mapsto g^n X^m$}.
(c) Finite dimensional $\Gamma$ and $L$ decompose into direct sums of
coirreducible respectively irreducible ones.
In particular $\Gamma=\oplus_{n\in I}\Gamma^n$ and
$L=\oplus_{n\in I}L^n$ with $\Gamma^n$ and $L^n$ corresponding to $\{n\}$.
\end{thm}
\begin{proof}
(a) We observe that the classifications of lemma
\ref{lem:cqbp_class} and lemma \ref{lem:uqbp_class} or
corollary \ref{cor:cqbp_eclass} and corollary \ref{cor:uqbp_eclass}
are dual to each other in the finite (co){}dimensional case. More
precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$
corresponding to $(1,I)$ in lemma \ref{lem:cqbp_class} is the
annihilator of the crossed
submodule $L$ corresponding to $I$ in lemma \ref{lem:uqbp_class}
and vice versa.
$\ensuremath{C_q(B_+)}/M$ and $L$ are dual spaces with the induced pairing.
For $I\subset\mathbb{N}\setminus\{1\}$ finite this descends to
$M$ corresponding to $(1,I\cup\{1\})$ in corollary
\ref{cor:cqbp_eclass} and $L$ corresponding to $I$ in corollary
\ref{cor:uqbp_eclass}.
For the dimension of $\Gamma$ observe
$\dim\Gamma=\dim{\ker\cou/M}=\codim M$.
(b) Coirreducibility (having no proper quotient) of $\Gamma$
clearly corresponds to maximality of $M$. The statement then follows
from parts (b) of corollaries
\ref{cor:cqbp_eclass} and \ref{cor:uqbp_eclass}. The formulas are
obtained by choosing the basis $\eta_0,\dots,\eta_{n-1}$ of
$\ker\cou/M$ as the equivalence classes of
\[(g-1)/(q^{n-1}-1),X,\dots,X^{n-1}\]
The dual basis of $L$ is then given by
\[g^{1-n}-1, X g^{1-n},\dots, q^{k(k-1)} \frac{1}{[k]_q!} X^k g^{1-n},
\dots,q^{(n-1)(n-2)} \frac{1}{[n-1]_q!} X^{n-1} g^{1-n}\]
(c) The statement follows from corollaries \ref{cor:cqbp_eclass} and
\ref{cor:uqbp_eclass} parts (c) with the observation
\[\ker\cou/M=\ker\cou/{\bigcap_{n\in I}}M^n
=\oplus_{n\in I}\ker\cou/M^n\]
\end{proof}
\begin{cor}
There is precisely one differential calculus on \ensuremath{C_q(B_+)}{} which is
natural in the sense that it
has dimension $2$.
It is coirreducible and obeys the relations
\begin{gather*}
[g,\diff X]=0\qquad [g,\diff g]_q=0\qquad
[X,\diff X]_q=0\qquad [X,\diff g]_q=(q-1)({\diff X}) g
\end{gather*}
with $[a,b]_q:=ab-qba$. In particular we have
\begin{gather*}
\diff\no{f} = {\diff g} \no{\partial_{q,g} f} + {\diff X}
\no{\partial_{q,X} f}\qquad\forall f\in \k(q)[X,g,g^{-1}]
\end{gather*}
\end{cor}
\begin{proof}
This is a special case of theorem \ref{thm:q_calc}.
The formulas follow from (b) with $n=2$.
\end{proof}
\section{Classification in the Classical Limit}
\label{sec:class}
In this section we give the complete classification of differential
calculi and quantum tangent spaces in the classical case of \ensuremath{C(B_+)}{}
along the lines of the previous section.
We pay particular
attention to the relation to the $q$-deformed setting.
The classical limit \ensuremath{C(B_+)}{} of the quantum group \ensuremath{C_q(B_+)}{} is
simply obtained by substituting the parameter $q$ with $1$.
The
classification of left crossed submodules in part (a) of lemma
\ref{lem:cqbp_class} remains
unchanged, as one may check by going through the proof.
In particular, we get a correspondence of crossed modules in the
$q$-deformed setting with crossed modules in the
classical setting
as a map of
pairs $(P,I)\mapsto (P,I)$
that converts polynomials $\k(q)[g]$ to polynomials $\k[g]$ (if
defined) and leaves
sets $I$ unchanged. This is one-to-one in the finite
dimensional case.
However, we did use the distinctness of powers of $q$ in parts (b) and
(c) of lemma
\ref{lem:cqbp_class} and have to account for changing this. The
only place where we used it was in observing that the
factors $1-q^j g$ have no common divisors for distinct $j$. This was
crucial to conclude the maximality (b) of certain finite codimensional
crossed submodules and the intersection property (c).
Now, all those factors become $1-g$.
\begin{cor}
\label{cor:cbp_class}
(a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ensuremath{C(B_+)}$ by left
multiplication and left
adjoint coaction are in one-to-one correspondence to
pairs $(P,I)$
where $P\in\k[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is
finite.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$
if $P=1$.
(b) The infinite codimensional maximal $M$ are characterised by
$(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-g$.
\end{cor}
In the restriction to $\ker\cou\subset\ensuremath{C(B_+)}$ corresponding to corollary
\ref{cor:cqbp_eclass} we observe another difference to the
$q$-deformed setting.
Since the condition for a crossed submodule to lie in $\ker\cou$ is exactly
to have factors $1-g$ in the $X$-free monomials this condition may now
be satisfied more easily. If the characterising polynomial does not
contain this factor it is now sufficient to have just any non-empty
characterising integer set $I$ and it need not contain $1$. Consequently,
the map $(P,I)\mapsto (P,I)$ does not reach all crossed submodules now.
\begin{cor}
\label{cor:cbp_eclass}
(a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C(B_+)}$
are in one-to-one correspondence to pairs
$(P,I)$ as in corollary \ref{cor:cbp_class}
with the additional constraint $(1-g)$ divides $P(g)$ or $I$ non-empty.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$
if $P=1$.
(b) The infinite codimensional maximal $M$ correspond to pairs
$(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-g$.
\end{cor}
Let us now turn to quantum tangent spaces on \ensuremath{U(\lalg{b_+})}{}. Here, the process
to go from the $q$-deformed setting to the classical one is not quite
so straightforward.
\begin{lem}
\label{lem:ubp_class}
Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules $L\subset\ensuremath{U(\lalg{b_+})}$ via the left
adjoint action
and left regular coaction are
in one-to-one correspondence to pairs $(l,I)$ with $l\in\mathbb{N}_0$ and
$I\subset\mathbb{N}$ finite. $\dim L<\infty$ iff $l=0$. In particular $\dim
L=\sum_{n\in I}n$ if $l=0$.
\end{lem}
\begin{proof}
The left adjoint action takes the form
\[
X\triangleright X^n H^m = X^{n+1}(H^m-(H+1)^m) \qquad
H\triangleright X^n H^m = n X^n H^m
\]
while the coaction is
\[
\cop(X^n H^m) = \sum_{i=0}^n \sum_{j=0}^m \binom{n}{i} \binom{m}{j}
X^i H^j\otimes X^{n-i} H^{m-j}
\]
Let $L$ be a crossed submodule invariant under the action and coaction.
The (repeated) action of $H$ separates elements by degree in $X$. It is
therefore sufficient to consider elements of the form $X^n P(H)$, where
$P$ is a polynomial.
By acting with $X$ on an element $X^n P(H)$ we obtain
$X^{n+1}(P(H)-P(H+1))$. Subsequently applying the coaction and
projecting on the left hand side of the tensor product onto $X$ (in
the basis $X^i H^j$ of \ensuremath{U(\lalg{b_+})})
leads to the element $X^n (P(H)-P(H+1))$. Now the degree of
$P(H)-P(H+1)$ is exactly the degree of $P(H)$ minus $1$. Thus we have
polynomials $X^n P_i(H)$ of any degree $i=\deg(P_i)\le \deg(P)$ in $L$
by induction. In particular, $X^n H^m\in L$ for all
$m\le\deg(P)$. It is thus sufficient to consider elements of
the form $X^n H^m$. Given such an element, the coaction generates all
elements of the form $X^i H^j$ with $i\le n, j\le m$.
For given $n$, the characterising datum is the maximal $m$ so
that $X^n H^m\in L$. Due to the coaction this cannot decrease
with decreasing $n$ and due to the action of $X$ this can decrease at
most by $1$ when increasing $n$ by $1$. This leads to the
classification given. For $l\in\mathbb{N}_0$ and $I=\{n_1,\dots,n_m\}\subset\mathbb{N}$ finite, the
corresponding crossed submodule
is generated by
\begin{gather*}
X^{n_m-1} H^{l+m-1}, X^{n_m+n_{m-1}-1} H^{l+m-2},\dots,
X^{(\sum_i n_i)-1} H^{l}\\
\text{and}\qquad
X^{(\sum_i n_i)+k} H^{l-1}\quad \forall k\ge 0\quad\text{if}\quad l>0
\end{gather*}
as a crossed module.
\end{proof}
For the transition from the $q$-deformed (lemma
\ref{lem:uqbp_class}) to the classical case we
observe that the space spanned by $g^{s_1},\dots,g^{s_m}$ with $m$
different integers $s_i\in\mathbb{Z}$ maps to the space spanned by
$1, H, \dots, H^{m-1}$ in the
prescription of the classical limit (as described in section
\ref{sec:intro_limits}). I.e.\ the classical crossed submodule
characterised by an integer $l$ and a finite set $I\subset\mathbb{N}$ comes
from a crossed submodule characterised by this same $I$ and additionally $l$
other integers $j\in\mathbb{Z}$ for which $X^k g^{1-j}$ is included. In
particular, we have a one-to-one correspondence in the finite
dimensional case.
To formulate the analogue of corollary \ref{cor:uqbp_eclass} for the
classical case is essentially straightforward now. However, as for
\ensuremath{C(B_+)}{}, we obtain more crossed submodules than those from the $q$-deformed
setting. This is due to the degeneracy introduced by forgetting the
powers of $g$ and just retaining the number of different powers.
\begin{cor}
\label{cor:ubp_eclass}
(a) Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules
$L\subset\ker\cou\subset\ensuremath{U(\lalg{b_+})}$ via the
left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
pairs $(l,I)$ with $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite where $l\neq 0$
or $I\neq\emptyset$.
$\dim L<\infty$ iff $l=0$. In particular $\dim
L=(\sum_{n\in I}n)-1$ if $l=0$.
\end{cor}
As in the $q$-deformed setting, we give a description of the finite
dimensional differential calculi where we have a strict duality to
quantum tangent spaces.
\begin{prop}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C(B_+)}{} and
finite dimensional quantum tangent spaces $L$ on \ensuremath{U(\lalg{b_+})}{} are
in one-to-one correspondence to non-empty finite sets $I\subset\mathbb{N}$.
In particular $\dim\Gamma=\dim L=(\sum_{n\in I} n)-1$.
The $\Gamma$ with $1\in I$ are in
one-to-one correspondence to the finite dimensional
calculi and quantum tangent spaces of the $q$-deformed setting
(theorem \ref{thm:q_calc}(a)).
(b) The differential calculus $\Gamma$ of dimension $n\ge 2$
corresponding to the
coirreducible one of \ensuremath{C_q(B_+)}{} (theorem \ref{thm:q_calc}(b)) has a right
invariant
basis $\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1+\eta_0 X \qquad
\diff g=\eta_0 g\\
[g, \eta_i]=0\ \forall i \qquad
[X, \eta_i]=\begin{cases}
0 & \text{if}\ i=0\ \text{or}\ i=n-1\\
\eta_{i+1} & \text{if}\ 0<i<n-1
\end{cases}
\end{gather*}
hold. The braided derivations obtained from the dual basis of the
corresponding $L$ are
given by
\begin{gather*}
\partial_i f=\frac{1}{i!}
\left(\frac{\partial}{\partial X}\right)^i f\qquad
\forall i\ge 1\\
\partial_0 f=\left(X \frac{\partial}{\partial X}+
 g \frac{\partial}{\partial g}\right) f
\end{gather*}
for $f\in\ensuremath{C(B_+)}$.
(c) The differential calculus of dimension $n-1$
corresponding to the
one in (b) with $1$ removed from the characterising set is
the same as the one above, except that we set $\eta_0=0$ and
$\partial_0=0$.
\end{prop}
\begin{proof}
(a) We observe that the classifications of corollary
\ref{cor:cbp_class} and lemma \ref{lem:ubp_class} or
corollary \ref{cor:cbp_eclass} and corollary \ref{cor:ubp_eclass}
are dual to each other in the finite (co)dimensional case.
More
precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$
corresponding to $(1,I)$ in corollary \ref{cor:cbp_class} is the
annihilator of the crossed
submodule $L$ corresponding to $(0,I)$ in lemma \ref{lem:ubp_class}
and vice versa.
$\ensuremath{C(B_+)}/M$ and $L$ are dual spaces with the induced pairing.
For non-empty $I$ this descends to
$M$ corresponding to $(1,I)$ in corollary
\ref{cor:cbp_eclass} and $L$ corresponding to $(0,I)$ in corollary
\ref{cor:ubp_eclass}.
For the dimension of $\Gamma$ note
$\dim\Gamma=\dim{\ker\cou/M}=\codim M$.
(b) For $I=\{1,n\}$ we choose in
$\ker\cou\subset\ensuremath{C(B_+)}$ the basis $\eta_0,\dots,\eta_{n-1}$ as the
equivalence classes of
$g-1,X,\dots,X^{n-1}$. The dual basis in $L$
is then $H,X,\dots,\frac{1}{k!}X^k,\dots,\frac{1}{(n-1)!}X^{n-1}$.
This leads to the
formulas given.
(c) For $I=\{n\}$ we get the same as in (b) except that $\eta_0$ and
$\partial_0$ disappear.
\end{proof}
The classical commutative calculus is the special case of (b) with
$n=2$. It is the only calculus of dimension $2$ with
$\diff g\neq 0$. Note that it is not coirreducible.
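Explicitly, specialising the relations of (b) to $n=2$ gives
\begin{gather*}
\diff X=\eta_1+\eta_0 X \qquad \diff g=\eta_0 g\qquad
[g,\eta_i]=[X,\eta_i]=0\ \forall i\\
\partial_1 f=\frac{\partial}{\partial X} f\qquad
\partial_0 f=\left(X \frac{\partial}{\partial X}+
 g \frac{\partial}{\partial g}\right) f
\end{gather*}
recovering the familiar commutative picture.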
\section{The Dual Classical Limit}
\label{sec:dual}
We proceed in this section to the more interesting point of view where
we consider the classical algebras, but with their roles
interchanged. I.e.\ we view \ensuremath{U(\lalg{b_+})}{} as the ``function algebra''
and \ensuremath{C(B_+)}{} as the ``enveloping algebra''. Due to the self-duality of
\ensuremath{U_q(\lalg{b_+})}{}, we can again view the differential calculi and quantum tangent
spaces as classical limits of the $q$-deformed setting investigated in
section \ref{sec:q}.
In this dual setting the bicovariance constraint for differential
calculi becomes much
weaker. In particular, the adjoint action on a classical function
algebra is trivial due to commutativity and the adjoint coaction on a
classical enveloping algebra is trivial due to cocommutativity.
In effect, the correspondence with the
$q$-deformed setting is much weaker than in the ordinary case of
section \ref{sec:class}.
There are many more differential
calculi and quantum tangent spaces than in the $q$-deformed setting.
We will not attempt to classify all of them in the following but
essentially
content ourselves with those objects coming from the $q$-deformed setting.
\begin{lem}
\label{lem:cbp_dual}
Left \ensuremath{C(B_+)}-subcomodules $\subseteq\ensuremath{C(B_+)}$ via the left regular coaction are
$\mathbb{Z}$-graded subspaces of \ensuremath{C(B_+)}{} with $|X^n g^m|=n+m$,
stable under formal derivation in $X$.
By choosing any ordering in \ensuremath{C_q(B_+)}{}, left crossed submodules via left
regular action and adjoint coaction are in one-to-one correspondence
to certain subcomodules of \ensuremath{C(B_+)}{} by setting $q=1$. Direct sums
correspond to direct sums.
This descends to $\ker\cou\subset\ensuremath{C(B_+)}$ by the projection $x\mapsto
x-\cou(x) 1$.
\end{lem}
\begin{proof}
The coproduct on \ensuremath{C(B_+)}{} is
\[\cop(X^n g^k)=\sum_{r=0}^{n} \binom{n}{r}
X^{n-r} g^{k+r}\otimes X^r g^k\]
which we view as a left coaction.
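For instance, in the lowest nontrivial degree,
$\cop(X^2 g)=X^2 g\otimes g+2X g^2\otimes X g+g^3\otimes X^2 g$.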
Projecting on the left hand side of the tensor product onto $g^l$ in a
basis $X^n g^k$, we
observe that coacting on an element
$\sum_{n,k} a_{n,k} X^n g^k$ we obtain elements
$\sum_n a_{n,l-n} X^n g^{l-n}$ for all $l$.
I.e.\ elements of the form
$\sum_n b_n X^n g^{l-n}$ lie
separately in a subcomodule and it is
sufficient to consider such elements. Writing the coaction
on such an element as
\[\sum_t \frac{1}{t!} X^t g^{l-t}\otimes \sum_n b_n
\frac{n!}{(n-t)!} X^{n-t} g^{l-n}\]
we see that the coaction generates all formal derivatives in $X$
of this element. This gives us the classification: \ensuremath{C(B_+)}-subcomodules
$\subseteq\ensuremath{C(B_+)}$ under the left regular coaction are $\mathbb{Z}$-graded
subspaces with $|X^n g^m|=n+m$, stable under formal derivation in
$X$ given by $X^n
g^m \mapsto n X^{n-1} g^m$.
The correspondence with the \ensuremath{C_q(B_+)} case follows from
the trivial observation
that the coproduct of \ensuremath{C(B_+)}{} is the same as that of \ensuremath{C_q(B_+)}{} with $q=1$.
The restriction to $\ker\cou$ is straightforward.
\end{proof}
\begin{lem}
\label{lem:ubp_dual}
The process of obtaining the classical limit \ensuremath{U(\lalg{b_+})}{} from \ensuremath{U_q(\lalg{b_+})}{} is
well defined for subspaces and sends crossed \ensuremath{U_q(\lalg{b_+})}-submodules
$\subset\ensuremath{U_q(\lalg{b_+})}$ by
regular action and adjoint coaction to \ensuremath{U(\lalg{b_+})}-submodules $\subset\ensuremath{U(\lalg{b_+})}$
by regular
action. This map is injective in the finite codimensional
case. Intersections and codimensions are preserved in this case.
This descends to $\ker\cou$.
\end{lem}
\begin{proof}
To obtain the classical limit of a left ideal it is enough to
apply the limiting process (as described in section
\ref{sec:intro_limits}) to the
module generators (We can forget the additional comodule
structure). On the one hand,
any element generated by left multiplication with polynomials in
$g$ corresponds to some element generated by left multiplication with a
polynomial in $H$, that is, there will be no more generators in the
classical setting. On the other hand, left multiplication by a
polynomial in $H$ comes
from left multiplication by the same polynomial in $g-1$, that is,
there will be no fewer generators.
The maximal left crossed \ensuremath{U_q(\lalg{b_+})}-submodule $\subseteq\ensuremath{U_q(\lalg{b_+})}$
by left multiplication and adjoint coaction of
codimension $n$ ($n\ge 1$) is generated as a left ideal by
$\{1-q^{1-n}g,X^n\}$ (see lemma
\ref{lem:cqbp_class}). Applying the limiting process to this
leads to the
left ideal of \ensuremath{U(\lalg{b_+})}{} (which is not maximal for $n\neq 1$) generated by
$\{H+n-1,X^n\}$ having also codimension $n$.
More generally, the picture given for arbitrary finite codimensional left
crossed modules of \ensuremath{U_q(\lalg{b_+})}{} in terms of generators with respect to
polynomials in $g,g^{-1}$ in lemma \ref{lem:cqbp_class} carries over
by replacing factors
$1-q^{1-n}g$ with factors $H+n-1$ leading to generators with
respect to polynomials in $H$. In particular,
intersections go to intersections since the distinctness of
the factors for different $n$ is conserved.
The restriction to $\ker\cou$ is straightforward.
\end{proof}
We are now in a position to give a detailed description of the
differential calculi induced from the $q$-deformed setting by the
limiting process.
\begin{prop}
(a) Certain finite dimensional
differential calculi $\Gamma$ on \ensuremath{U(\lalg{b_+})}{} and quantum tangent spaces $L$
on \ensuremath{C(B_+)}{}
are in one-to-one correspondence to finite dimensional differential
calculi on \ensuremath{U_q(\lalg{b_+})}{} and quantum
tangent spaces on \ensuremath{C_q(B_+)}{}. Intersections correspond to intersections.
(b) In particular,
$\Gamma$ and $L$ corresponding to coirreducible differential calculi
on \ensuremath{U_q(\lalg{b_+})}{} and
irreducible quantum tangent spaces on \ensuremath{C_q(B_+)}{} via the limiting process
are given as follows:
$\Gamma$ has a right invariant basis
$\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1 \qquad \diff H=(1-n)\eta_0 \\
[H, \eta_i]=(1-n+i)\eta_i\quad\forall i\qquad
[X, \eta_i]=\begin{cases}
\eta_{i+1} & \text{if}\ \ i<n-1\\
0 & \text{if}\ \ i=n-1
\end{cases}
\end{gather*}
hold. The braided derivations corresponding to the dual basis of
$L$ are given by
\begin{gather*}
\partial_i\no{f}=\no{T_{1-n+i,H}
\frac{1}{i!}\left(\frac{\partial}{\partial X}\right)^i f}
\qquad\forall i\ge 1\\
\partial_0\no{f}=\no{T_{1-n,H} f - f}
\end{gather*}
for $f\in\k[X,H]$
with the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ via $H^n X^m\mapsto H^n X^m$.
\end{prop}
\begin{proof}
(a) The strict duality between \ensuremath{C(B_+)}-subcomodules $L\subseteq\ker\cou$
given by lemma \ref{lem:cbp_dual} and corollary \ref{cor:uqbp_eclass}
and \ensuremath{U(\lalg{b_+})}-modules $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$ with $M$ given by lemma
\ref{lem:ubp_dual} and
corollary \ref{cor:cqbp_eclass} can be checked explicitly.
It is essentially due to mutual annihilation of factors $H+k$ in
\ensuremath{U(\lalg{b_+})}{} with elements $g^k$ in \ensuremath{C(B_+)}{}.
(b) $L$ is generated by
$\{g^{1-n}-1,Xg^{1-n},\dots,
X^{n-1}g^{1-n}\}$ and
$M$ is generated by $\{H(H+n-1),X(H+n-1),X^n \}$.
The formulas are obtained by denoting with
$\eta_0,\dots,\eta_{n-1}$ the equivalence classes of
$H/(1-n),X,\dots,X^{n-1}$ in $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$.
The dual basis of $L$ is then
\[g^{1-n}-1,X g^{1-n},
\dots,\frac{1}{(n-1)!}X^{n-1}
g^{1-n}\]
\end{proof}
In contrast to the $q$-deformed setting and to the usual classical
setting, the considerable freedom in choosing a calculus leaves us with many
$2$-dimensional calculi. It is not obvious which one we should
consider to be the ``natural'' one. Let us first look at the
$2$-dimensional calculus coming from the $q$-deformed
setting as described in (b). The relations become
\begin{gather*}
[\diff H, a]=\diff a\qquad [\diff X, a]=0\qquad\forall a\in\ensuremath{U(\lalg{b_+})}\\
\diff\no{f} =\diff H \no{\fdiff_{1,H} f}
+ \diff X \no{\frac{\partial}{\partial X} f}
\end{gather*}
for $f\in\k[X,H]$.
We might want to consider calculi which are closer to the classical
theory in the sense that derivatives are not finite differences but
usual derivatives. Let us therefore demand
\[\diff P(H)=\diff H \frac{\partial}{\partial H} P(H)\qquad
\text{and}\qquad
\diff P(X)=\diff X \frac{\partial}{\partial X} P(X)\]
for polynomials $P$ and ${\diff X}\neq 0$ and ${\diff H}\neq 0$.
\begin{prop}
\label{prop:nat_bp}
There is precisely one differential calculus of dimension $2$ meeting
these conditions. It obeys the relations
\begin{gather*}
[a,\diff H]=0\qquad [X,\diff X]=0\qquad [H,\diff X]=\diff X\\
\diff \no{f} =\diff H \no{\frac{\partial}{\partial H} f}
+\diff X \no{\frac{\partial}{\partial X} f}
\end{gather*}
where the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ is given by
$X^n H^m\mapsto X^n H^m$.
\end{prop}
\begin{proof}
Let $M$ be the left ideal corresponding to the calculus. It is easy to
see that for a primitive element $a$ the classical derivation condition
corresponds to $a^2\in M$ and $a\notin M$. In our case $X^2,H^2\in
M$. If we take the
ideal generated from these two elements we obtain an ideal of
$\ker\cou$ of codimension $3$. Now, it is sufficient without loss of
generality to add a generator of the form $\alpha H+\beta X+\gamma
XH$. $\alpha$ and $\beta$ must then be zero in order not
to generate $X$ or $H$ in $M$.
I.e.\ $M$ is generated by $H^2,
XH, X^2$. The relations stated follow.
\end{proof}
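For illustration, the relations are compatible with the Leibniz rule applied to
either ordering of a product: since $HX=XH+X$ in \ensuremath{U(\lalg{b_+})}{},
\begin{gather*}
\diff(XH)=(\diff X)H+X\,\diff H=\diff H\, X+\diff X\, H\\
\diff(HX)=(\diff H)X+H\,\diff X=\diff H\, X+\diff X\, H+\diff X
\end{gather*}
using $[a,\diff H]=0$ and $[H,\diff X]=\diff X$; the two results indeed differ
by $\diff X=\diff(HX-XH)$.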
\section{Remarks on $\kappa$-Minkowski Space and Integration}
\label{sec:kappa}
There is a straightforward generalisation of \ensuremath{U(\lalg{b_+})}.
Let us define the Lie algebra $\lalg b_{n+}$ as generated by
$x_0,\dots, x_{n-1}$ with relations
\[ [x_0,x_i]=x_i\qquad [x_i,x_j]=0\qquad\forall i,j\ge 1\]
Its enveloping algebra \ensuremath{U(\lalg{b}_{n+})}{} is nothing but (rescaled) $\kappa$-Minkowski
space as introduced in \cite{MaRu}. In this section we make some
remarks about its intrinsic geometry.
We have a surjective Lie algebra
homomorphism $b_{n+}\to b_+$ given by
$x_0\mapsto H$ and $x_i\mapsto X$.
This is an isomorphism for $n=2$. The Lie algebra
homomorphism extends to a surjective homomorphism of enveloping
algebras $\ensuremath{U(\lalg{b}_{n+})}\to \ensuremath{U(\lalg{b_+})}$ in the obvious way. This gives rise
to an injective map from the set of submodules of \ensuremath{U(\lalg{b_+})}{} to the set of
submodules of \ensuremath{U(\lalg{b}_{n+})}{} by taking the pre-image. In
particular this induces an injective
map from the set of differential calculi on \ensuremath{U(\lalg{b_+})}{} to the set of
differential calculi on \ensuremath{U(\lalg{b}_{n+})}{} which are invariant under permutations
of the $x_i$, $i\ge 1$.
\begin{cor}
\label{cor:nat_bnp}
There is a natural $n$-dimensional differential calculus on \ensuremath{U(\lalg{b}_{n+})}{}
induced from the one considered in proposition
\ref{prop:nat_bp}.
It obeys the relations
\begin{gather*}
[a,\diff x_0]=0\quad\forall a\in \ensuremath{U(\lalg{b}_{n+})}\qquad [x_i,\diff x_j]=0
\quad [x_0,\diff x_i]=\diff x_i\qquad\forall i,j\ge 1\\
\diff \no{f} =\sum_{\mu=0}^{n-1}\diff x_{\mu}
\no{\frac{\partial}{\partial x_{\mu}} f}
\end{gather*}
where the normal ordering is given by
\[\k[x_0,\dots,x_{n-1}]\to \ensuremath{U(\lalg{b}_{n+})}\quad\text{via}\quad
x_{n-1}^{m_{n-1}}\cdots
x_0^{m_0}\mapsto x_{n-1}^{m_{n-1}}\cdots x_0^{m_0}\]
\end{cor}
\begin{proof}
The calculus is obtained from the ideal generated by
\[x_0^2,x_i x_j, x_i x_0\qquad\forall i,j\ge 1\]
being the pre-image of
$H^2,X^2,XH$ in \ensuremath{U(\lalg{b_+})}{}.
\end{proof}
Let us try to push the analogy with the commutative case further and
take a look at the notion of integration. The natural way to encode
the condition of translation invariance from the classical context
in the quantum group context
is given by the condition
\[(\int\otimes\id)\circ\cop a=1 \int a\qquad\forall a\in A\]
which defines a right integral on a quantum group $A$
\cite{Sweedler}.
(Correspondingly, we have the notion of a left integral.)
Let us
formulate a slightly
weaker version of this equation
in the context of a Hopf algebra $H$ dually paired with
$A$. We write
\[\int (h-\cou(h))\triangleright a = 0\qquad \forall h\in H, a\in A\]
where the action of $H$ on $A$ is the coregular action
$h\triangleright a = a_{(1)}\langle a_{(2)}, h\rangle$
given by the pairing.
In the present context we set $A=\ensuremath{U(\lalg{b}_{n+})}$ and $H=\ensuremath{C(B_{n+})}$. We define the
latter as a generalisation of \ensuremath{C(B_+)}{} with commuting
generators $g,p_1,\dots,p_{n-1}$ and coproducts
\[\cop p_i=p_i\otimes 1+g\otimes p_i\qquad \cop g=g\otimes g\]
This can be identified (upon rescaling) as the momentum sector of the
full $\kappa$-Poincar\'e algebra (with $g=e^{p_0}$).
The pairing is the natural extension of (\ref{eq:pair_class}):
\[\langle x_{n-1}^{m_{n-1}}\cdots x_1^{m_1} x_0^{k},
p_{n-1}^{r_{n-1}}\cdots p_1^{r_1} g^s\rangle
= \delta_{m_{n-1},r_{n-1}}\cdots\delta_{m_1,r_1} m_{n-1}!\cdots m_1!
s^k\]
The resulting coregular
action is conveniently expressed as (see also \cite{MaRu})
\[p_i\triangleright\no{f}=\no{\frac{\partial}{\partial x_i} f}\qquad
g\triangleright\no{f}=\no{T_{1,x_0} f}\]
with $f\in\k[x_0,\dots,x_{n-1}]$.
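For instance, for $f=x_1^2 x_0$ this gives
$p_1\triangleright\no{f}=\no{2x_1 x_0}=2x_1 x_0$ and, with $T_{1,x_0}$ acting
as the unit shift in $x_0$,
$g\triangleright\no{f}=\no{x_1^2(x_0+1)}=x_1^2 x_0+x_1^2$.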
Due to cocommutativity, the notions of left and right integral
coincide. The invariance conditions for integration become
\[\int \no{\frac{\partial}{\partial x_i} f}=0\quad
\forall i\in\{1,\dots,n-1\}
\qquad\text{and}\qquad \int \no{\fdiff_{1,x_0} f}=0\]
The condition on the left is familiar and states the invariance under
infinitesimal translations in the $x_i$. The condition on the right states the
invariance under integer translations in $x_0$. However, we should
remember that we use a certain algebraic model of \ensuremath{C(B_{n+})}{}. We might add,
for example, a generator $p_0$
to \ensuremath{C(B_{n+})}{}
that is dual to $x_0$ and behaves
as the ``logarithm'' of $g$, i.e.\ acts as an infinitesimal
translation in $x_0$. We then have the condition of infinitesimal
translation invariance
\[\int \no{\frac{\partial}{\partial x_{\mu}} f}=0\]
for all $\mu\in\{0,1,\dots,{n-1}\}$.
In the present purely algebraic context these conditions do not make
much sense. In fact they would force the integral to be zero on the
whole algebra. This is not surprising, since we are dealing only with
polynomial functions which would not be integrable in the classical
case either.
In contrast, if we had for example the algebra of smooth functions
in two real variables, the conditions just characterise the usual
Lebesgue integral (up to normalisation).
Let us assume $\k=\mathbb{R}$ and suppose that we have extended the normal
ordering vector
space isomorphism $\mathbb{R}[x_0,\dots,x_{n-1}]\cong \ensuremath{U(\lalg{b}_{n+})}$ to a vector space
isomorphism of some sufficiently large class of functions on $\mathbb{R}^n$ with a
suitable completion $\hat{U}(\lalg{b_{n+}})$ in a functional
analytic framework (embedding \ensuremath{U(\lalg{b}_{n+})}{} in some operator algebra on a
Hilbert space). It is then natural to define the integration on
$\hat{U}(\lalg{b_{n+}})$ by
\[\int \no{f}=\int_{\mathbb{R}^n} f\ dx_0\cdots dx_{n-1}\]
where the right hand side is just the usual Lebesgue integral in $n$
real variables $x_0,\dots,x_{n-1}$. This
integral is unique (up to normalisation) in
satisfying the covariance conditions since, as we have seen,
these correspond
just to the usual translation invariance in the classical case via normal
ordering, for which the Lebesgue integral is the unique solution.
It is also the $q\to 1$ limit of the translation invariant integral on
\ensuremath{U_q(\lalg{b_+})}{} obtained in \cite{Majid_qreg}.
We see that the natural differential calculus in corollary
\ref{cor:nat_bnp} is
compatible with this integration in that the appearing braided
derivations are exactly the actions of the translation generators
$p_{\mu}$. However, we should stress that this calculus is not
covariant under the full $\kappa$-Poincar\'e algebra, since it was
shown in \cite{GoKoMa} that for $n=4$ there is no such
calculus of dimension $4$. Our results therefore indicate a new
intrinsic approach to $\kappa$-Minkowski space that allows a
bicovariant
differential calculus of dimension $4$ and a unique translation
invariant integral by normal ordering and Lebesgue integration.
\section*{Acknowledgements}
I would like to thank S.~Majid for proposing this project,
and for fruitful discussions during the preparation of this paper.
\section{Introduction}
Continuous Engineering (CE) practices,
such as Continuous Integration (CI) and Continuous Deployment (CD),
are gaining prominence in software engineering,
as they help streamline and optimize the way software is built, tested and shipped.
The most salient advantage of CE is the tighter feedback loops:
CE practices help developers build and test their software more frequently,
and make software releases less brittle by enabling more incremental releases.
Nevertheless, a frequently reported barrier to success is the need to effectively analyze
the data that results from the numerous build and test
runs~\cite{Laukkanen2017,Hilton2017,Shahin2017,Debbiche2014,Olsson2012}.
One evident example of this is the handling and
analysis of results from complex end-to-end integration tests
which we focus on in this paper:
CE practices make it easier to run such end-to-end tests,
which include system integration and deployment to production hardware,
and they are critical for ensuring the quality of the end product.
However, since these end-to-end tests by their nature can fail for multiple
reasons, not least in the sense that new product code can make the tests
fail in new ways, it is critical to rapidly diagnose these failures.
In this paper we concern ourselves with how to rapidly analyze a set
of logs resulting from complex CE tasks\footnote{~For simplicity, and without loss of generality,
we will refer to these CE tasks as ``integration tests'' or ``tests'' throughout the paper,
though we acknowledge that they include more than just testing,
such as building the system and deploying it on hardware in a test or staging environment,
and failures can occur in any of these phases.
The proposed approach aims to cover all these situations,
and is evaluated on real-life logs capturing everything from building the system,
to deploying it on production hardware,
and running complex integration and interaction scenarios.}
where the overall outcome of the task (i.e. 'fail' or 'pass') is known,
but where analysts must consult the resulting logs to fully diagnose why the failures occurred.
Since these logs can get large and unwieldy, we
develop a tool that automatically suggests which segments in the logs
are most likely relevant for troubleshooting purposes.
Our method gives each event in the log an interestingness score based
on the overall event frequencies in the test result set: The log
events are in turn clustered based on these scores, and the event
clusters are presented to the user in decreasing order of overall
interestingness. The goal is to enable users to find all relevant
diagnostic information in the first presented event cluster, while having the
option of retrieving additional clusters if needed. An
additional benefit of our method is that the extracted events can help
identify commonly occurring patterns that are symptomatic for specific
errors. Future logs that exhibit the same characteristics can then be
automatically classified as having symptoms of that error.
\head{Contributions} We present Spectrum-Based Log Diagnosis (SBLD), a method for helping developers quickly find the
most relevant segments of a log. Using data from \CiscoNorway{an
industrial partner}, we empirically evaluate SBLD by investigating the following
three questions:
(i) How well does SBLD reduce the \emph{effort needed} to identify all \emph{failure-relevant events} in the log for a failing run?
(ii) How is the \emph{performance} of SBLD affected by \emph{available data}?
(iii) How does SBLD compare to searching for \emph{simple textual patterns} that often occur in failure-relevant events?
\head{Overview}
The rest of the paper is structured as follows: Section~\ref{sec:approach}
explains SBLD and the methodology underlying its event ranking
procedures. Sections~\ref{sec:rqs} and~\ref{sec:expdesign} motivate our research questions
and empirical design. We report and discuss our results in
Section~\ref{sec:resdiscuss}. Section~\ref{sec:relwork} surveys related work,
and we discuss threats to validity in Section~\ref{sec:ttv} before concluding
in Section~\ref{sec:conclusion}.
%
\section{Approach}
\label{sec:approach}
\begin{figure}[b]
\includegraphics[width=0.99\columnwidth]{overview.pdf}
\vspace*{-2ex}
\caption{A visual overview of our approach.}
\label{fig:approach}
\end{figure}
SBLD takes a set of log files from test failures, a set of log files from test successes, and a singular log file from a test failure called the \emph{target log} that the user wants analyzed and produces a list of segments from the target log file that are likely relevant for understanding why the corresponding test run failed.
In the following we explain the workings of SBLD in a stepwise
manner. At each step, we present the technical background needed to
understand how SBLD accomplishes its task. A visual overview of SBLD is
shown in Figure \ref{fig:approach}.
\head{Prerequisites}
First of all, SBLD requires access to a set of log files from failing test runs and a set of log files from successful test runs.
For brevity, we will refer to log files from failing test runs as 'failing logs',
and log files from successful test runs as 'passing logs'.%
\footnote{~Note that we explicitly assume that the outcome of each run is known;
this work is not concerned with determining whether the run was a failure or a success,
but rather with helping identify why the failing runs failed.}
We also require a programmatic way of segmenting each log file
into individually meaningful components. For the dataset used in this
paper these components are \emph{events} in the form of blocks of text
preceded by a date and a time-stamp in a predictable format. Lastly,
we require that run-time specific information such as timestamps,
dynamically generated IP addresses, check-sums and so on are removed
from the logs and replaced with standardized text. We refer to the process of
enforcing these requirements and delineating the log into events as
the \emph{abstraction} step. This enables SBLD to treat events
like ``2019-04-05 19:19:22.441 CEST: Alice calls Bob'' and ``2019-04-07
13:12:11.337 CEST: Alice calls Bob'' as two instances of the same
generic event "Alice calls Bob". The appropriate degree of abstraction
and how to meaningfully delineate a log will be context-dependent
and thus we require the user to perform these steps before using SBLD.
In the current paper we use an abstraction mechanism
and dataset generously provided by \CiscoNorway{our industrial partner}.
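For concreteness, a minimal sketch of this abstraction step is given below. It
assumes one timestamped line per event and only standardizes IP addresses; the
actual event delineation and abstraction rules used in this paper are those
provided by \CiscoNorway{our industrial partner} and are more involved.
\begin{verbatim}
import re

TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \w+: ")
IP_ADDRESS = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def abstract_events(raw_log):
    """Split a raw log into abstracted events (one per timestamped line)."""
    events = []
    for line in raw_log.splitlines():
        if TIMESTAMP.match(line):
            event = TIMESTAMP.sub("", line)        # drop run-time timestamp
            event = IP_ADDRESS.sub("<ip>", event)  # standardize dynamic IPs
            events.append(event)
    return events
\end{verbatim}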
\renewcommand{\Ncf}{\ensuremath{\text{N}_\text{FI}}} %
\renewcommand{\Nuf}{\ensuremath{\text{N}_\text{FE}}} %
\renewcommand{\Ncs}{\ensuremath{\text{N}_\text{PI}}} %
\renewcommand{\Nus}{\ensuremath{\text{N}_\text{PE}}} %
\head{Computing coverage and event relevance} SBLD requires an assumption about what makes an event \emph{relevant}
and a method for computing this relevance. Our method takes inspiration
from Spectrum-Based Fault Localization (SBFL) in which the suspiciousness
or fault-proneness of a program statement is treated as a function of
the number of times the statement was activated in a failing test case,
combined with the number of times it is skipped in a passing test case~\cite{Jones2002,Abreu2007,Abreu2009}.
The four primitives that need to be computed are shown on the right-hand side in Table~\ref{table:measures}.
We treat each abstracted event as a statement and study their occurrences
in the logs like Fault Localization tracks the activation of statements in test cases.
We compute the analysis primitives by devising a binary
\emph{coverage matrix} whose columns represent every unique event
observed in the set of failing and successful logs while each row $r$
represents a log and tracks whether the event at column $c$ occurred in
log $r$ (1), or not (0), as shown in Figure~\ref{fig:approach}.
By computing these primitives, we can rank each event by using an
\emph{interestingness measure} (also referred to as ranking
metric, heuristic, or similarity coefficient~\cite{Wong2016}).
The choice of interestingness measure
is ultimately left to the user, as these are context dependent and
there is no generally optimal choice of interestingness measure~\cite{Yoo2014}.
In this paper we consider a
selection of nine interestingness measures prominent in the literature
and a simple metric that emphasizes the events that exclusively occur
in failing logs in the spirit of the \emph{union model} discussed
by Renieres et al.~\cite{renieres2003:fault}. We
report on the median performance of these interestingness measures with the intention of providing a
representative, yet unbiased, result. The ten measures considered are
precisely defined in Table~\ref{table:measures}.
\begin{table*}
\centering
\begin{tabular}{c@{\hspace{10mm}}c}
{\renewcommand{\arraystretch}{1.7} %
\begin{tabular}{lc}
\toprule
measure & formula \\\midrule
Tarantula \cite{Jones2001,Jones2002} & %
\( \frac{ \frac{ \Ncf }{ \Ncf + \Nuf } }{ \frac{ \Ncf }{ \Ncf + \Nuf } + \frac{ \Ncs }{ \Ncs + \Nus } } \)
\\
Jaccard \cite{Jaccard1912,Chen2002} & %
\( \frac{ \Ncf }{ \Ncf + \Nuf + \Ncs } \)
\\
Ochiai \cite{Ochiai1957,Abreu2006} & %
\( \frac{ \Ncf }{ \sqrt{ ( \Ncf + \Nuf ) \times ( \Ncf + \Ncs ) } } \)
\\
Ochiai2 \cite{Ochiai1957, Naish2011} & %
\( \frac{ \Ncf \times \Nus }{ \sqrt{ ( \Ncf + \Ncs ) \times ( \Nuf + \Nus ) \times ( \Ncf + \Nuf ) \times ( \Ncs + \Nus ) } } \)
\\
Zoltar \cite{Gonzalez2007} & %
\( \frac{ \Ncf }{ \Ncf + \Nuf + \Ncs + \frac { 10000 \times \Nuf \times \Ncs }{ \Ncf } } \)
\\
D$^\star$ \cite{Wong2014} (we use $\star = 2$) & %
\( \frac{ (\Ncf)^\star }{ \Nuf + \Ncs } \)
\\
O$^p$ \cite{Naish2011} & %
\( \Ncf - \frac{ \Ncs }{ \Ncs + \Nus + 1} \)
\\
Wong3 \cite{Wong2007,Wong2010} &
\( \Ncf - h, \text{where~} h = \left\{
\scalebox{.8}{\(\renewcommand{\arraystretch}{1} %
\begin{array}{@{}ll@{}}
\Ncs & \text{if~} \Ncs \leq 2 \\
2 + 0.1(\Ncs - 2) & \text{if~} 2 < \Ncs \leq 10 \\
2.8 + 0.001(\Ncs - 10) & \text{if~} \Ncs > 10 \\
\end{array}\)}
\right. \)
\\
Kulczynski2 \cite{Kulczynski1927,Naish2011} & %
\( \frac{ 1 }{ 2 } \times ( \frac{ \Ncf }{ \Ncf + \Nuf } + \frac{ \Ncf }{ \Ncf + \Ncs } ) \)
\\
Failed only & %
\( \left\{\scalebox{.8}{\(\renewcommand{\arraystretch}{1} %
\begin{array}{@{}ll@{}}
1 & \text{if~} \Ncs = 0 \\
0 & \text{otherwise~} \\
\end{array}\)}
\right. \)
\\
\bottomrule
\end{tabular}} &
\begin{tabular}{lp{2.99cm}}
\toprule
\multicolumn{2}{l}{notation used} \\\midrule
\Ncf & number of \emph{failing} logs \\ & that \emph{include} the event \\
\Nuf & number of \emph{failing} logs \\ & that \emph{exclude} the event \\
\Ncs & number of \emph{passing} logs \\ & that \emph{include} the event \\
\Nus & number of \emph{passing} logs \\ & that \emph{exclude} the event \\
\bottomrule
\end{tabular}
\end{tabular}\vspace*{1ex}
\caption{\label{table:measures}The 10 interestingness measures under consideration in this paper.}
\vspace*{-3ex}
\end{table*}
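For concreteness, the following sketch (not the actual implementation used in
our evaluation) shows how the four primitives can be derived from the coverage
matrix and used to evaluate one of the measures of Table~\ref{table:measures},
here Ochiai:
\begin{verbatim}
import numpy as np

def spectrum_primitives(coverage, is_failing):
    """coverage: binary matrix, one row per log and one column per unique
       event; is_failing: boolean vector marking the failing logs."""
    coverage = np.asarray(coverage)
    is_failing = np.asarray(is_failing, dtype=bool)
    failing, passing = coverage[is_failing], coverage[~is_failing]
    n_fi = failing.sum(axis=0)           # failing logs that include the event
    n_fe = failing.shape[0] - n_fi       # failing logs that exclude the event
    n_pi = passing.sum(axis=0)           # passing logs that include the event
    n_pe = passing.shape[0] - n_pi       # passing logs that exclude the event
    return n_fi, n_fe, n_pi, n_pe

def ochiai(n_fi, n_fe, n_pi, n_pe):
    """One of the interestingness measures considered (Ochiai)."""
    return n_fi / np.sqrt((n_fi + n_fe) * (n_fi + n_pi))
\end{verbatim}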
\head{Analyzing a target log file} Using our database of event scores,
we first identify the events occurring in the target log file and the
interestingness scores associated with these events. Then, we group
similarly scored events together using a clustering algorithm. Finally,
we present the best performing cluster of events to the end user. The
clustering step helps us make a meaningful selection of events rather
than setting an often arbitrary window selection size. Among other
things, it prevents two identically scored events from falling at
opposite sides of the selection threshold. If the user suspects that
the best performing cluster did not report all relevant events, she can
inspect additional event clusters in order of decreasing
aggregate interestingness score. To perform the clustering step we use Hierarchical Agglomerative
Clustering (HAC) with Complete linkage~\cite{manning2008introduction}, where
sub-clusters are merged until the maximal distance between members of
each candidate cluster exceeds some specified threshold. In SBLD,
this threshold is the uncorrected sample standard deviation of the event
scores for the events being clustered.\footnote{~Specifically,
we use the \texttt{numpy.std} procedure from the SciPy framework~\cite{2020SciPy-NMeth},
in which the uncorrected sample standard deviation is given by
$ \sqrt{\frac{1}{N} \sum_{i=1}^{N}\lvert x_{i} - \bar{x} \rvert^2} $ where
$\bar{x}$ is the sample mean of the interestingness scores obtained for the
events in the log being analyzed and $N$ is the number of events in the log.}
This ensures that the ``interestingness-distance'' between two events
in a cluster never exceeds the uncorrected sample standard deviation observed in the set.
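The following sketch illustrates this clustering step with SciPy; ranking the
clusters by their mean score is one possible realization of the aggregate
interestingness score, and the implementation used in our evaluation may
differ in such details:
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_and_rank(scores):
    """Group similarly scored events with complete-linkage HAC, cut at the
       uncorrected sample standard deviation, and rank the resulting clusters
       by their mean interestingness score (highest first)."""
    scores = np.asarray(scores, dtype=float)
    threshold = np.std(scores)  # uncorrected sample standard deviation
    merges = linkage(scores.reshape(-1, 1), method='complete')
    labels = fcluster(merges, t=threshold, criterion='distance')
    clusters = [np.flatnonzero(labels == c) for c in np.unique(labels)]
    return sorted(clusters, key=lambda idx: scores[idx].mean(), reverse=True)
\end{verbatim}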
%
\section{Research Questions}
\label{sec:rqs}
The goal of this paper is to present SBLD and help practitioners make
an informed decision whether SBLD meets their needs. To this end, we have identified
three research questions that encompass several concerns practitioners
are likely to have and that also are of interest to the research community at
large:
\begin{enumerate}[\bfseries RQ1]
\item How well does SBLD reduce the effort needed to identify all
  known-to-be relevant events (``does it work?'')?
\item How is the efficacy of SBLD impacted by increased evidence in the form of
  additional failing and passing logs (``how much data do we need before
  running the analysis?'')?
\item How does SBLD compare to a strategy based on searching for
  common textual patterns with a tool like \texttt{grep} (``is it better than doing the obvious thing?'')?
\end{enumerate}
RQ1 looks at the aggregated performance of SBLD to assess its viability.
With RQ2 we assess how sensitive the performance is to the amount of
available data: How many logs should you have before you can expect the
analysis to yield good results? Is more data unequivocally a good thing?
What type of log is more informative: A passing log or a failing log?
Finally, we compare SBLD's performance to a more traditional method for
finding relevant segments in logs: Using a textual search for strings
one expects to occur near informative segments, like
"failure" and "error". The next section details the dataset used, our
chosen quality measures for assessment and our methodology for answering
each research question.
%
\section{Experimental Design}
\label{sec:expdesign}
\begin{table}
\centering
\caption{The key per-test attributes of our dataset. Two events are considered
distinct if they are treated as separate events after the abstraction
step. A "mixed" event is an event that occurs in logs of both failing and
passing runs.}
\vspace*{-1ex}
\label{table:descriptive}
\renewcommand{\tabcolsep}{0.11cm}\small
\begin{tabular}{rcrrrrrr}
\toprule
& & \# fail & \# pass & distinct & fail-only & mixed & pass-only \\
test & signature & logs & logs & events & events & events & events \\
\midrule
1 & C & 24 & 100 & 36391 & 21870 & 207 & 14314 \\
2 & E & 11 & 25 & 380 & 79 & 100 & 201 \\
3 & E & 11 & 25 & 679 & 174 & 43 & 462 \\
4 & E & 4 & 25 & 227 & 49 & 39 & 139 \\
5 & C & 2 & 100 & 33420 & 2034 & 82 & 31304 \\
6 & C & 19 & 100 & 49155 & 15684 & 893 & 32578 \\
7 & C & 21 & 100 & 37316 & 17881 & 154 & 19281 \\
8 & C & 4 & 100 & 26614 & 3976 & 67 & 22571 \\
9 & C & 21 & 100 & 36828 & 19240 & 228 & 17360 \\
10 & C & 22 & 100 & 110479 & 19134 & 1135 & 90210 \\
11 & E & 5 & 25 & 586 & 95 & 47 & 444 \\
12 & E & 7 & 25 & 532 & 66 & 18 & 448 \\
13 & C & 2 & 100 & 15351 & 2048 & 232 & 13071 \\
14 & C & 3 & 100 & 16318 & 2991 & 237 & 13090 \\
15 & C & 26 & 100 & 60362 & 20964 & 1395 & 38003 \\
16 & C & 12 & 100 & 2206 & 159 & 112 & 1935 \\
17 & E & 8 & 25 & 271 & 58 & 98 & 115 \\
18 & A & 23 & 75 & 3209 & 570 & 156 & 2483 \\
19 & C & 13 & 100 & 36268 & 13544 & 411 & 22313 \\
20 & B & 3 & 19 & 688 & 69 & 31 & 588 \\
21 & B & 22 & 25 & 540 & 187 & 94 & 259 \\
22 & E & 1 & 25 & 276 & 11 & 13 & 252 \\
23 & C & 13 & 100 & 28395 & 13629 & 114 & 14652 \\
24 & E & 7 & 26 & 655 & 117 & 56 & 482 \\
25 & C & 21 & 100 & 44693 & 18461 & 543 & 25689 \\
26 & C & 21 & 100 & 42259 & 19434 & 408 & 22417 \\
27 & C & 21 & 100 & 44229 & 18115 & 396 & 25718 \\
28 & C & 20 & 100 & 43862 & 16922 & 642 & 26298 \\
29 & C & 28 & 100 & 54003 & 24216 & 1226 & 28561 \\
30 & C & 31 & 100 & 53482 & 26997 & 1063 & 25422 \\
31 & C & 27 & 100 & 53092 & 23283 & 463 & 29346 \\
32 & C & 21 & 100 & 55195 & 19817 & 768 & 34610 \\
33 & E & 9 & 25 & 291 & 70 & 30 & 191 \\
34 & D & 2 & 13 & 697 & 76 & 92 & 529 \\
35 & E & 9 & 25 & 479 & 141 & 47 & 291 \\
36 & E & 10 & 75 & 1026 & 137 & 68 & 821 \\
37 & E & 7 & 25 & 7165 & 1804 & 94 & 5267 \\
38 & E & 4 & 25 & 647 & 67 & 49 & 531 \\
39 & G & 47 & 333 & 3350 & 428 & 144 & 2778 \\
40 & G & 26 & 333 & 3599 & 240 & 157 & 3202 \\
41 & G & 26 & 332 & 4918 & 239 & 145 & 4534 \\
42 & C & 17 & 100 & 30411 & 14844 & 348 & 15219 \\
43 & F & 267 & 477 & 10002 & 3204 & 1519 & 5279 \\
44 & C & 9 & 100 & 29906 & 8260 & 274 & 21372 \\
45 & E & 3 & 25 & 380 & 44 & 43 & 293 \\
\bottomrule
\end{tabular}
\vspace*{-2ex}
\end{table}
%
\begin{table}
\centering
\caption{Ground-truth signatures and their occurrences in distinct events.}
\label{table:signature}
\vspace*{-1ex}
\small
\begin{tabular}{ccrrrc}
\toprule
& sub- & fail-only & pass-only & fail \& & failure \\
signature & pattern & events & events & pass & strings* \\
\midrule
A & 1 & 1 & 0 & 0 & yes \\
A & 2 & 2 & 0 & 0 & no \\
B & 1 & 2 & 0 & 0 & yes \\
C & 1 & 21 & 0 & 0 & yes \\
C & 2 & 21 & 0 & 0 & yes \\
D & 1 & 4 & 0 & 0 & yes \\
\textbf{D$^{\#}$} & \textbf{2} & 69 & 267 & 115 & no \\
\textbf{D$^{\#}$} & \textbf{3} & 2 & 10 & 13 & no \\
\textbf{E$^{\#}$} & \textbf{1} & 24 & 239 & 171 & no \\
E & 1 & 1 & 0 & 0 & no \\
E & 2 & 9 & 0 & 0 & no \\
E & 3 & 9 & 0 & 0 & yes \\
E & 4 & 23 & 0 & 0 & yes \\
F & 1 & 19 & 0 & 0 & yes \\
F & 2 & 19 & 0 & 0 & no \\
F & 3 & 19 & 0 & 0 & yes \\
F & 4 & 14 & 0 & 0 & yes \\
G & 1 & 2 & 0 & 0 & yes \\
G & 2 & 1 & 0 & 0 & no \\
G & 3 & 1 & 0 & 0 & no \\
\bottomrule
\multicolumn{6}{l}{* signature contains the lexical patterns 'error', 'fault' or 'fail*'}\\
\multicolumn{6}{l}{$^{\#}$ sub-patterns that were removed to ensure a clean ground truth}
\end{tabular}
\vspace*{-3ex}
\end{table}
\subsection{Dataset and ground truth}
\label{sec:dataset}
Our dataset provided by \CiscoNorway{our industrial partner} consists
of failing and passing log files from 45 different end-to-end integration
tests. In addition to the log text we also have data on when a given
log file was produced. Most test-sets span a time-period of 38 days, while
the largest set (test 43 in Table~\ref{table:descriptive}) spans 112
days. Each failing log is known to exemplify symptoms of one of seven
known errors, and \CiscoNorway{our industrial partner} has given us a
set of regular expressions that help determine which events are relevant
for a given known error. We refer to the set of regular expressions
that identify a known error as a \emph{signature} for that error. These
signatures help us construct a ground truth for our investigation.
Moreover, an important motivation for developing SBLD is to help create
signatures for novel problems: The events highlighted by SBLD should be
characteristic of the observed failure, and the textual contents of the
events can be used in new signature expressions.
Descriptive facts about our dataset is listed in
Table~\ref{table:descriptive} while Table~\ref{table:signature}
summarizes key insights about the signatures used.
Ideally, our ground truth should highlight exactly and \emph{only} the
log events that an end user would find relevant for troubleshooting
an error. However, the signatures used in this investigation were
designed to find sufficient evidence that the \emph{entire log} in
question belongs to a certain error class: the log might contain other
events that a human user would find equally relevant for diagnosing
a problem, but the signature in question might not encompass these
events. Nevertheless, the events that constitute sufficient evidence
for assigning the log to a given error class are presumably relevant
and should be presented as soon as possible to the end user. However,
if our method cannot differentiate between these signature events and
other events we cannot say anything certain about the relevance of
those other events. This fact is reflected in our choice of quality
measures, specifically in how we assess the precision of the approach. This
is explained in detail in the next section.
When producing the ground truth, we first ensured that a log would only be
associated with a signature if the entire log taken as a whole satisfied all
the sub-patterns of that signature. If so, we then determined which events
the patterns were matching on. These events constitute the known-to-be relevant
set of events for a given log. However, we identified some problems with two of the provided
signatures that made them unsuitable for assessing SBLD. Signature \emph{E}
(see Table~\ref{table:signature}) had a sub-pattern that searched for a "starting test"-prefix that necessarily
matches on the first event in all logs due to the structure of the logs.
Similarly, signature \emph{D} contained two sub-patterns that necessarily
match all logs in the set--in this case by searching for whether the test
was run on a given machine, which was true for all logs for the corresponding
test. We therefore elected to remove these sub-patterns from the signatures
before conducting the analysis.
\subsection{Quality Measures}
As a measure of how well SBLD reports all known-to-be relevant log
events, we measure \emph{recall in best cluster}, which we for brevity refer to
as simply \emph{recall}.
This is an adaption of the classic recall measure used in information retrieval,
which tracks the proportion of all relevant events that were retrieved
by the system~\cite{manning2008introduction}.
As our method presents events to the user in a series of ranked clusters,
we ideally want all known-to-be relevant events to appear in the highest ranked cluster.
We therefore track the overall recall obtained as if the first cluster were the only events retrieved.
Note, however, that SBLD ranks all clusters, and a user can retrieve additional clusters if desired.
We explore whether this could improve SBLD's performance on a
specific problematic test-set in Section~\ref{sec:testfourtythree}.
It is trivial to obtain a perfect recall by simply retrieving all events
in the log, but such a method would obviously be of little help to a user
who wants to reduce the effort needed to diagnose failures.
We therefore also track the \emph{effort reduction} (ER), defined as
\[ \text{ER} = 1 - \frac{\text{number of events in first cluster}}{\text{number of events in log}} \]
Much like effective information retrieval systems aim for high recall and
precision, we want our method to score a perfect recall while obtaining the
highest effort reduction possible.
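Both measures are straightforward to compute once the first cluster is known;
a sketch, assuming the first cluster, the full event list of the log, and the
ground-truth relevant events are given as collections of event identifiers:
\begin{verbatim}
def recall_in_best_cluster(first_cluster, relevant_events):
    """Proportion of the known-to-be relevant events in the first cluster."""
    return len(set(first_cluster) & set(relevant_events)) / len(relevant_events)

def effort_reduction(first_cluster, all_events):
    """ER = 1 - (events in first cluster) / (events in the log)."""
    return 1 - len(first_cluster) / len(all_events)
\end{verbatim}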
\subsection{Recording the impact of added data}
To study the impact of added data on SBLD's performance, we need to measure how
SBLD's performance on a target log $t$ is affected by adding an extra
failing log $f$ or a passing log $p$. There are several strategies
for accomplishing this. One way is to try all combinations in the
dataset i.e.\ compute the performance on any $t$ using any choice of
failing and passing logs to produce the interestingness scores. This
approach does not account for the fact that the logs in the data are
produced at different points in time and is also extremely expensive
computationally. We opted instead to order the logs chronologically and
simulate a step-wise increase in data as time progresses, as shown in
Algorithm~\ref{alg:time}.
\begin{algorithm}[b]
\caption{Pseudo-code illustrating how we simulate a step-wise increase in data
as time progresses and account for variability in choice of
interestingness measure.}
\label{alg:time}
\begin{algorithmic}\small
\STATE $F$ is the set of failing logs for a given test
\STATE $P$ is the set of passing logs for a given test
\STATE $M$ is the set of interestingness measures considered
\STATE sort $F$ chronologically
\STATE sort $P$ chronologically
\FOR{$i=0$ to $i=\lvert F \rvert$}
\FOR{$j=0$ to $j=\lvert P \rvert$}
\STATE $f = F[:i]$ \COMMENT{get all elements in F up to and including position i}
\STATE $p = P[:j]$
\FORALL{$l$ in $f$}
\STATE initialize $er\_scores$ as an empty list
\STATE initialize $recall\_scores$ as an empty list
\FORALL{$m$ in $M$}
\STATE perform SBLD on $l$ using $m$ as measure \\ \hspace*{1.75cm} and $f$ and $p$ as spectrum data
\STATE append recorded effort reduction score to $er\_scores$
\STATE append recorded recall score to $recall\_scores$
\ENDFOR
\STATE record median of $er\_scores$
\STATE record median of $recall\_scores$
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Variability in interestingness measures}
\label{sec:imvars}
As mentioned in Section~\ref{sec:approach}, SBLD requires a
choice of interestingness measure for scoring the events,
which can have a considerable impact on SBLD's performance.
Considering that the best choice of interestingness measure is context-dependent
and that there is no global optimum,
it is up to the user to decide which interestingness measure best reflects their
notion of event relevance.
Consequently, we want to empirically study SBLD in a way
that captures the variability introduced by this decision.
To this end, we record the median score obtained by performing SBLD for every possible choice of
interestingness measure from those listed in Table~\ref{table:measures}.
Algorithm~\ref{alg:time} demonstrates the procedure in pseudo-code.
\subsection{Comparing alternatives}
\label{sec:comps}
To answer RQ2 and RQ3, we use pairwise comparisons of
different configurations of SBLD and of a method that searches the logs for textual patterns using a regular expression.
The alternatives are compared
on each individual failing log in the set in a paired fashion. An
important consequence of this is that the statistical comparisons have
no concept of which test the failing log belongs to, and thus the test
for which there is most data has the highest impact on the result of the
comparison.
The pairwise comparisons are conducted using paired Wilcoxon signed-rank
tests~\cite{wilcoxon1945} where the Pratt correction~\cite{Pratt1959}
is used to handle ties. We apply Holm's correction~\cite{Holm1979}
to the obtained p-values to account for the family-wise error
rate arising from multiple comparisons. We declare a comparison
\emph{statistically significant} if the Holm-adjusted p-value is below
$\alpha=0.05$. The Wilcoxon tests check the two-sided null hypothesis of
no difference between the alternatives. We report the Vargha-Delaney $A_{12}$ and
$A_{21}$~\cite{Vargha2000} measures of stochastic superiority to
indicate which alternative is the strongest. Conventionally, $A_{12}=0.56$ is
considered a small difference, $A_{12}=0.64$ is considered a medium difference
and $A_{12}=0.71$ or greater is considered large~\cite{Vargha2000}. Observe
also that $A_{21} = 1 - A_{12}$.
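For illustration, a single such comparison could be carried out as sketched
below; the Holm adjustment is applied afterwards to the collected p-values and
is not shown:
\begin{verbatim}
from scipy.stats import wilcoxon

def compare_alternatives(scores_a, scores_b):
    """Paired two-sided Wilcoxon signed-rank test with Pratt's tie handling,
       plus the Vargha-Delaney A12 measure of stochastic superiority."""
    statistic, p_value = wilcoxon(scores_a, scores_b, zero_method='pratt')
    wins = sum((a > b) + 0.5 * (a == b) for a in scores_a for b in scores_b)
    a12 = wins / (len(scores_a) * len(scores_b))
    return statistic, p_value, a12
\end{verbatim}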
\begin{figure*}
\includegraphics[width=0.8\textwidth]{rq1_boxplot.png}
%
\caption{The overall performance of SBLD in terms of effort reduction
and recall. On many tests, SBLD exhibited perfect recall for
all observations in the inter-quartile range and thus the box collapses to a single line on the $1.0$ mark.\label{fig:rq1boxplot}}
\end{figure*}
\subsection{Analysis procedures}
We implement the SBLD approach in a prototype tool
DAIM (Diagnosis and Analysis using Interestingness Measures),
and use DAIM to empirically evaluate the idea.
\head{RQ1 - overall performance} We investigate the overall performance
of SBLD by analyzing a boxplot for each test in our dataset. Every individual
datum that forms the basis of the plot is the median performance of SBLD over
all choices of interestingness measures for a given set of failing and passing
logs subject to the chronological ordering scheme outlined above.
\head{RQ2 - impact of data} We analyze the impact of added data by
producing and evaluating heatmaps that show the obtained performance
as a function of the number of failing logs (y-axis) and number of
passing logs (x-axis). The color intensity of each tile in the heatmaps
is calculated by taking the median of the scores obtained for each
failing log analyzed with the given number of failing and passing logs
as data for the spectrum inference, wherein the score for each log is
the median over all the interestingness measures considered as outlined in
Section~\ref{sec:imvars}.
Furthermore, we compare three variant configurations
of SBLD that give an overall impression of the influence of added
data. The three configurations considered are \emph{minimal evidence},
\emph{median evidence} and \emph{maximal evidence}, where minimal
evidence uses only events from the log being analyzed and one additional
passing log, median evidence uses the median number of failing and
passing logs available, while maximal evidence uses
all available data for a given test. The comparisons are conducted with the
statistical scheme described above in Section~\ref{sec:comps}.
\head{RQ3 - SBLD versus pattern-based search} To compare SBLD
against a pattern-based search, we record the effort reduction and
recall obtained when only selecting events in the log that match on the
case-insensitive regular expression \texttt{"error|fault|fail*"}, where
the $*$ denotes a wildcard-operator and the $\lvert$ denotes logical
$OR$. This simulates the results that a user would obtain by using
a tool like \texttt{grep} to search for words like 'error' and 'failure'.
Sometimes the ground-truth signature expressions contain words from this
pattern, and we indicate this in Table~\ref{table:signature}. If so, the
regular expression-based method is guaranteed to retrieve the event.
Similarly to RQ2, we compare the three configurations of SBLD described
above (minimum, median and maximal evidence) against the pattern-based
search using the statistical scheme described in Section~\ref{sec:comps}.
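For illustration, the baseline can be realized as sketched below; since we
search within each event, matching on the prefix \texttt{fail} covers the
wildcard part of the pattern:
\begin{verbatim}
import re

FAILURE_PATTERN = re.compile(r"error|fault|fail", re.IGNORECASE)

def pattern_based_selection(events):
    """Baseline: keep every event that matches the failure-related pattern."""
    return [event for event in events if FAILURE_PATTERN.search(event)]
\end{verbatim}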
%
\section{Results and Discussion}
\label{sec:resdiscuss}
This section gradually dissects Figure~\ref{fig:rq1boxplot}, showing a breakdown of SBLD's performance per test for both recall
and effort reduction, Figures \ref{fig:erheat} and \ref{fig:recallheat},
showing SBLD's performance as a function of the number of failing and passing
logs used, as well as Table~\ref{table:comparisons}, which shows the results
of the statistical comparisons we have performed.
\begin{figure*}
\includegraphics[width=\textwidth]{er_heatmap.pdf}
\caption{Effort reduction score obtained when SBLD is run on a given number of failing and passing logs. The tests not listed in this figure all obtained a lowest median effort reduction score of 90\% or greater and are thus not shown for space considerations. \label{fig:erheat}}
\vspace*{-2ex}
\end{figure*}
\begin{table*}
\caption{Statistical comparisons performed in this investigation. The
bold p-values are those for which no statistically significant difference under $\alpha=0.05$
could be established.}
\label{table:comparisons}
{\small%
\begin{tabular}{lllrrrr}
\toprule
variant 1 & variant 2 & quality measure & Wilcoxon statistic & $A_{12}$ & $A_{21}$ & Holm-adjusted p-value\\
\midrule
pattern-based search & minimal evidence & effort reduction & 29568.5 & 0.777 & 0.223 & $\ll$ 0.001 \\
pattern-based search & maximal evidence & effort reduction & 202413.0 & 0.506 & 0.494 & \textbf{1.000} \\
pattern-based search & median evidence & effort reduction & 170870.5 & 0.496 & 0.504 & $\ll$ 0.001 \\
minimal evidence & maximal evidence & effort reduction & 832.0 & 0.145 & 0.855 & $\ll$ 0.001 \\
minimal evidence & median evidence & effort reduction & 2666.0 & 0.125 & 0.875 & $\ll$ 0.001 \\
maximal evidence & median evidence & effort reduction & 164674.0 & 0.521 & 0.479 & \textbf{1.000} \\
pattern-based search & minimal evidence & recall & 57707.0 & 0.610 & 0.390 & $\ll$ 0.001 \\
pattern-based search & maximal evidence & recall & 67296.0 & 0.599 & 0.401 & $\ll$ 0.001 \\
pattern-based search & median evidence & recall & 58663.5 & 0.609 & 0.391 & $\ll$ 0.001 \\
minimal evidence & maximal evidence & recall & 867.5 & 0.481 & 0.519 & $\ll$ 0.001 \\
minimal evidence & median evidence & recall & 909.0 & 0.498 & 0.502 & 0.020 \\
maximal evidence & median evidence & recall & 0.0 & 0.518 & 0.482 & $\ll$ 0.001 \\
\bottomrule
\end{tabular}
%
}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{recall_heatmap.pdf}
\caption{Recall score obtained when SBLD is run on a given number of failing and passing logs. For space
considerations, we only show tests for which the minimum observed
median recall was smaller than 1 (SBLD attained perfect median recall for all configurations in the other tests). \label{fig:recallheat}}
\vspace*{-3ex}
\end{figure}
\subsection{RQ1: The overall performance of SBLD}
Figure~\ref{fig:rq1boxplot} suggests that SBLD's overall performance is strong,
since it obtains near-perfect recall while retaining a high degree of effort
reduction. In terms of recall, SBLD obtains a perfect performance on all except
four tests: 18, 34, 42 and 43, with the lower quartile stationed at perfect recall for all tests
except 43 (which we discuss in detail in Section~\ref{sec:testfourtythree}).
For test 18, only 75 out of 20700 observations ($0.36\%$) obtained a recall score
of $0.5$ while the rest obtained a perfect score. On test 34 (the smallest in our
dataset), 4 out of 39 observations obtained a score of zero recall while the
others obtained perfect recall.
For test 42, 700 out of 15300 ($4.6\%$) observations obtained a score of zero recall while the rest obtained perfect recall.
Hence with the exception of test 43 which is discussed later,
SBLD obtains very strong recall scores overall with only a few outliers.
The performance is also strong in terms of effort reduction, albeit
more varied. To a certain extent this is expected since the attainable
effort reduction on any log will vary with the length of the log and
the number of ground-truth relevant events in the log. As can be seen
in Figure~\ref{fig:rq1boxplot}, most of the observations fall well
over the 75\% mark, with the exceptions being tests 4 and 22. For test
4, Figure~\ref{fig:erheat} suggests that one or more of the latest
passing logs helped SBLD refine the interestingness scores. A similar
but less pronounced effect seems to have happened for test 22. However,
as reported in Table~\ref{table:descriptive}, test 22 consists only of
\emph{one} failing log. Manual inspection reveals that the log consists
of 30 events, of which 11 are fail-only events. Without additional
failing logs, most interestingness measures will give a high score to
all events that are unique to that singular failing log, which is likely
to include many events that are not ground-truth relevant. Reporting 11
out of 30 events to the user yields a meager effort reduction of around
63\%. Nevertheless, the general trend is that SBLD presents a compact set of events to the user, which yields a high effort reduction score.
In summary, the overall performance shows that SBLD
retrieves the majority of all known-to-be-relevant events
in compact clusters, which dramatically reduces the analysis burden for the
end user. The major exception is Test 43, which we return to in
Section~\ref{sec:testfourtythree}.
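For reference, the two per-log scores discussed in this section can be computed as in the
following minimal Python sketch. The function names and the set-based formulation are ours;
the sketch assumes that effort reduction is measured over the distinct abstracted events of a
log, as in the test~22 example above, and it is not the exact implementation used to produce
the figures.
\begin{verbatim}
def recall(retrieved, relevant):
    # Fraction of the ground-truth relevant events that were retrieved.
    if not relevant:
        return 1.0
    return len(set(retrieved) & set(relevant)) / len(set(relevant))

def effort_reduction(retrieved, all_events):
    # Fraction of the log's abstracted events the analyst does not
    # have to inspect after SBLD has made its selection.
    return 1.0 - len(set(retrieved)) / len(set(all_events))
\end{verbatim}
Reporting 11 out of 30 events, as in test~22, indeed yields an effort reduction of
$1 - 11/30 \approx 63\%$.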
\subsection{RQ2: On the impact of evidence}
The heatmaps suggest that the effort reduction is generally not
adversely affected by adding more \emph{passing logs}. If the
assumptions underlying our interestingness measures are correct,
this is to be expected: Each additional passing log either gives us
reason to devalue certain events that co-occur in failing and passing
logs or contains passing-only events that are deemed uninteresting.
Most interestingness measures highly value events that
exclusively occur in failing logs, and additional passing logs help
reduce the number of events that satisfy this criterion. However, since
our method relies on clustering similarly scored events, it is vulnerable to \emph{ties} in interestingness scores. It is possible that an additional passing log introduces ties where there previously were none. This is likely to have an exaggerated effect in situations with
little data, where each additional log can have a dramatic impact on the
interestingness scores. This might explain the gradual dip in effort
reduction seen in Test 34, for which there are only two failing logs.
Adding more failing logs, on the other hand, draws a more nuanced
picture: When the number of failing logs (y-axis) is high relative
to the number of passing logs (x-axis), effort reduction seems to suffer.
Again, while most interestingness measures will prioritize events that
only occur in failing logs, this strategy only works if there is a
sufficient corpus of passing logs to weed out false positives. When
there are far fewer passing than failing logs, many events will be
unique to the failing logs even though they merely reflect a different
valid execution path that the test can take. This is especially true for
complex integration tests like the ones in our dataset, which might test
a system's ability to recover from an error, or in other ways have many
valid execution paths.
The statistical comparisons summarized in Table~\ref{table:comparisons}
suggest that the minimal evidence strategy performs poorly compared to the
median and maximal evidence strategies. This is especially
pronounced for effort reduction, where the Vargha-Delaney
metric scores well over 80\% in favor of the maximal and median strategies. For recall, the difference between the minimal strategy and
the other variants is small, albeit statistically significant. Furthermore,
the jump from minimal evidence to median evidence is much more
pronounced than the jump from median evidence to maximal evidence.
For effort reduction, there is in fact no statistically discernible
difference between the median and maximal strategies. For recall, the maximal strategy seems slightly better, but the $A_{12}$ measure suggests that the magnitude of the difference is small.
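For concreteness, a nonparametric comparison of this kind can be computed, for example, with a
two-sided Mann-Whitney test together with the Vargha-Delaney effect size, as in the Python
sketch below. The sketch is an illustration and not necessarily the exact procedure behind
Table~\ref{table:comparisons}; the array names are placeholders.
\begin{verbatim}
import numpy as np
from scipy.stats import mannwhitneyu, rankdata

def vargha_delaney_a12(scores_a, scores_b):
    # Probability that a value drawn from scores_a exceeds a value
    # drawn from scores_b, with ties counted as 0.5.
    n, m = len(scores_a), len(scores_b)
    ranks = rankdata(np.concatenate([scores_a, scores_b]))
    r1 = ranks[:n].sum()
    return (r1 / n - (n + 1) / 2) / m

def compare(scores_a, scores_b):
    stat, p = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
    return stat, vargha_delaney_a12(scores_a, scores_b), p
\end{verbatim}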
Overall, SBLD seems to benefit from extra data, especially additional passing
logs. Failing logs also help, but depend on a proportional amount of passing
logs for SBLD to fully benefit.
The performance increase when going from minimal data to some data is more pronounced than when going from some data to maximal data. This suggests that there may be diminishing returns to collecting extra logs, but our investigation cannot confirm or refute this.
\subsection{RQ3: SBLD versus simple pattern-search}
In terms of effort reduction, Table~\ref{table:comparisons} shows that
the pattern-based search clearly beats the minimal evidence variant of
SBLD. It does not, however, beat the median and maximal variants: The
comparison to median evidence suggests a statistically significant win
in favor of median evidence, but the effect reported by $A_{12}$ is
so small that it is unlikely to matter in practice. No statistically
significant difference could be established between the pattern-based
search and SBLD with maximal evidence.
In one sense, it is to be expected that the pattern-based search does
well on effort reduction assuming that events containing words like
"fault" and "error" are rare. The fact that the pattern-based search
works so well could indicate that \CiscoNorway{our industrial partner}
has a well-designed logging infrastructure where such words are
rare and occur at relevant positions in the logs. On the other
hand, it is then notable that the median and maximal variants of SBLD perform
comparably on effort reduction without having any concept of the textual
content in the events.
In terms of recall, however, pattern-based search beats all variants of
SBLD in a statistically significant manner, where the effect size of the
differences is small to medium. One likely explanation for this better performance is that the
pattern-based search performs very well on Test 43, which SBLD generally
performs less well on. Since the comparisons are run per failing log and test
43 constitutes 29\% of the failing logs (specifically, 267 out of 910 logs), the
performance of test 43 has a massive impact. We return to test 43 and its
impact on our results in Section~\ref{sec:testfourtythree}.
On the whole, SBLD performs similarly to pattern-based search, obtaining
slightly poorer results on recall for reasons that are likely due
to a particular test we discuss below. At any rate, there is no
contradiction in combining SBLD with a traditional pattern-based search.
Analysts could start by issuing a set of pattern-based searches and
run SBLD afterward if the pattern search returned unhelpful results.
Indeed, an excellent and intended use of SBLD is to suggest candidate
signature patterns that, once proven reliable, can be incorporated in a
regular-expression based search to automatically identify known issues
in future runs.
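To make the baseline concrete, the following Python sketch shows the kind of keyword-driven
search we have in mind; the keyword list is illustrative and not necessarily identical to the
one used in our comparison.
\begin{verbatim}
import re

PATTERNS = [re.compile(p, re.IGNORECASE)
            for p in (r"\berror\b", r"\bfault\b", r"\bfail(ed|ure)?\b")]

def pattern_search(events):
    # Retrieve every event whose textual content matches a keyword.
    return [e for e in events if any(p.search(e) for p in PATTERNS)]
\end{verbatim}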
\subsection{What happens in Test 43?}
\label{sec:testfourtythree}
SBLD's performance is much worse on Test 43 than the other tests, which
warrants a dedicated investigation. The first thing we observed in the
results for Test 43 is that all of the ground-truth-relevant events
occurred \emph{exclusively} in failing logs and were often singular
(11 out of the 33) or infrequent (30 out of 33 events occurred in 10\%
of the failing logs or fewer). Consequently, we observed a strong
performance from the \emph{Tarantula} and \emph{Failed only}-measures
that put a high premium on failure-exclusive events. Most of the
interestingness measures, on the other hand, will prefer an event that
is very frequent in the failing logs and sometimes occur in passing logs
over a very rare event that only occurs in failing logs. This goes a
long way in explaining the poor performance on recall. The abundance of
singular events might also suggest that there is an error in the event
abstraction framework, where several events that should be treated as
instances of the same abstract event are treated as separate events. We
discuss this further in Section~\ref{sec:ttv}.
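For clarity, the two measures that stood out on this test can be sketched as follows. The
Tarantula formula is the standard one from fault localization, adapted to event occurrence
counts over failing and passing logs; the binary failure-exclusivity score is our shorthand
reading of the Failed only measure, so both functions are illustrative rather than verbatim
implementations.
\begin{verbatim}
def tarantula(ef, ep, total_failing, total_passing):
    # ef/ep: number of failing/passing logs in which the event occurs.
    fail_rate = ef / total_failing if total_failing else 0.0
    pass_rate = ep / total_passing if total_passing else 0.0
    if fail_rate + pass_rate == 0:
        return 0.0
    return fail_rate / (fail_rate + pass_rate)

def failed_only(ef, ep, total_failing, total_passing):
    # Full score for events that never occur in passing logs.
    return 1.0 if ef > 0 and ep == 0 else 0.0
\end{verbatim}
Both reward failure-exclusive events, which is exactly what the ground truth of Test 43 favors.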
\begin{sloppypar}%
Another observation we made is that each failing log contained only \emph{two} ground-truth relevant events, which means that the recorded recall can quickly fluctuate between $0$, $0.5$ and $1$.
\end{sloppypar}
Would the overall performance improve by retrieving an additional
cluster? A priori, retrieving an extra cluster would strictly improve
or not change recall since more events are retrieved without removing
the previously retrieved events. Furthermore, retrieving an additional
cluster necessarily decreases the effort reduction. We re-ran the
analysis on Test 43 and collected effort reduction and recall scores
for SBLD when retrieving \emph{two} clusters, and found that the added
cluster increased median recall from $0$ to $0.5$ while the median
effort reduction decreased from $0.97$ to $0.72$. While the proportional
increase in recall is larger than the decrease in effort reduction,
this should in our view not be seen as an improvement: As previously
mentioned, each failing log in this set contains only two ground-truth relevant events and thus recall is expected to fluctuate greatly. Moreover, an effort reduction of $0.72$ implies that one still has to
manually inspect 28\% of the data, which in most information retrieval
contexts is unacceptable. An unfortunate aspect of our analysis in this
regard is that we do not account for event \emph{lengths}: An abstracted
event is treated as one atomic entity, but could in reality vary from a
single line to a stack trace that spans several pages. A better measure
of effort reduction should incorporate a notion of event length to
better reflect the real-world effect of retrieving more events.
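A length-aware variant could, for instance, weight every abstracted event by its size in lines,
as in the following sketch (the weighting is a suggestion we have not evaluated):
\begin{verbatim}
def length_aware_effort_reduction(retrieved, all_events, lines_of):
    # lines_of maps an abstracted event to its length in lines, so that
    # retrieving a multi-page stack trace costs more than a single line.
    kept = sum(lines_of[e] for e in set(retrieved))
    total = sum(lines_of[e] for e in set(all_events))
    return 1.0 - kept / total
\end{verbatim}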
All in all, Test 43 exhibits a challenge that SBLD is not suited for:
It asks SBLD to prioritize rare events that are exclusive to failing
logs over events that frequently occur in failing logs but might
occasionally occur in passing logs. The majority of interestingness
measures supported by SBLD would prioritize the latter category of
events. In a way, this might suggest that SBLD is not suited for finding
\emph{outliers} and rare events: Rather, it is useful for finding
events that are \emph{characteristic} for failures that have occurred
several times - a "recurring suspect", if you will. An avenue for future
research is to explore ways of letting the user combine a search for
"recurring suspects" with the search for outliers.
%
\section{Related Work}
\label{sec:relwork}
We distinguish two main lines of related work:
First, there is other work aimed at automated analysis of log files,
i.e., our problem domain,
and second, there is other work that shares similarities with our technical approach,
i.e., our solution domain.
\head{Automated log analysis}
Automated log analysis originates in \emph{system and network monitoring} for security and administration~\cite{lin1990:error,Oliner2007},
and saw a revival in recent years due to the needs of \emph{modern software development}, \emph{CE} and \emph{DevOps}~\cite{Hilton2017,Laukkanen2017,Debbiche2014,Olsson2012,Shahin2017,candido2019:contemporary}.
A considerable amount of research has focused on automated \emph{log parsing} or \emph{log abstraction},
which aims to reduce and organize log data by recognizing latent structures or templates in the events in a log~\cite{zhu2019:tools,el-masri2020:systematic}.
He et al. analyze the quality of these log parsers and conclude that many of them are not accurate or efficient enough for parsing the logs of modern software systems~\cite{he2018:automated}.
In contrast to these automated approaches,
our study uses a handcrafted log abstracter developed by \CiscoNorway{our industrial collaborator}.
\emph{Anomaly detection} has traditionally been used for intrusion detection and computer security~\cite{liao2013:intrusion,ramaki2016:survey,ramaki2018:systematic}.
Application-level anomaly detection has been investigated for troubleshooting~\cite{chen2004:failure,zhang2019:robust},
and to assess compliance with service-level agreements~\cite{banerjee2010:logbased,He2018,sauvanaud2018:anomaly}.
Gunter et al. present an infrastructure for troubleshooting of large distributed systems, %
by first (distributively) summarizing high volume event streams before submitting those summaries to a centralized anomaly detector.
This helps them achieve the fidelity needed for detailed troubleshooting,
without suffering from the overhead that such detailed instrumentation would bring~\cite{Gunter2007}.
DeepLog by Du et al. enables execution-path and performance anomaly detection in system logs by training a Long Short-Term Memory neural network to model the system's expected behavior from the logs, and using that model to flag events and parameter values in the logs that deviate from the model's expectations~\cite{Du2017}.
Similarly, LogRobust by Zhang et al. performs anomaly detection using a bi-LSTM neural network but also detects events that are likely evolved versions of previously seen events, making the learned model more robust to updates in the target logging infrastructure~\cite{zhang2019:robust}.
In earlier work, we use \emph{log clustering} to reduce the effort needed to process a backlog of failing CE logs
by grouping those logs that failed for similar reasons~\cite{rosenberg2018:use,rosenberg:2018:improving}.
That work builds on earlier research that uses log clustering to identify problems in system logs~\cite{Lin2016,Shang2013}.
Common to these approaches is how the contrast between passing and failing logs is used to improve accuracy,
which is closely related to how SBLD highlights failure-relevant events.
Nagaraj et al.~\cite{nagaraj:2012} explore the use of dependency networks to exploit the contrast between two sets of logs,
one with good and one with bad performance,
to help developers understand which component(s) likely contain the root cause of performance issues.
An often-occurring challenge is the need to (re)construct an interpretable model of a system's execution.
To this end, several authors investigate the combination of log analysis with (static) source code analysis,
where they try to (partially) match events in logs to log statements in the code,
and then use these statements to reconstruct a path through the source code to help determine
what happened in a failed execution~\cite{Xu2009,yuan:2010:sherlog,zhao2014:lprof,schipper2019:tracing}.
Gadler et al. employ Hidden Markov Models to create a model of a system's usage patterns from logged events~\cite{gadler2017:mining}, while
Pettinato et al. model and analyze the behavior of a complex telescope system using Latent Dirichlet Allocation~\cite{pettinato2019:log}.
Other researchers have analyzed the logs for successful and failing builds,
to warn for anti-patterns and decay~\cite{vassallo2019:automated},
give build repair hints~\cite{Vassallo2018},
and automatically repair build scripts~\cite{hassan2018:hirebuild, tarlow2019:learning}.
Opposite to our work,
these techniques exploit the \emph{overlap} in build systems used by many projects to mine patterns that hint at decay or help repair a failing build,
whereas we exploit the \emph{contrast} with passing runs for the same project to highlight failure-relevant events.
\begin{sloppypar}
\head{Fault Localization}
As mentioned, our approach was inspired by Spectrum-Based Fault Localization (SBFL),
where the fault-proneness of a statement is computed as a function of
the number of times that the statement was executed in a failing test case, combined with
the number of times that the statement was skipped in a passing test case~\cite{Jones2002,Chen2002,Abreu2007,Abreu2009,Naish2011}.
This more or less directly translates to the inclusion or exclusion of events in failing and passing logs, respectively,
where the difference is that SBLD adds clustering of the results to enable step-wise presentation of results to the user.
\end{sloppypar}
A recent survey of Software Fault Localization includes the SBFL literature up to 2014~\cite{Wong2016}.
De Souza et al. extend this with SBFL work up to 2017, and add an overview of seminal work on automated debugging from 1950 to 1977~\cite{deSouza2017}.
By reflecting on the information-theoretic foundations of fault localization, Perez proposes the DDU metric,
which can be used to evaluate test suites and predict their diagnostic performance when used in SBFL~\cite{Perez2018}.
One avenue for future work is exploring how a metric like this can be adapted to our context,
and see if it helps to explain what happened with test 43.
A recent evaluation of \emph{pure} SBFL on large-scale software systems found that it under-performs in these situations (only 33--40\% of the bugs are identified within the top-10 ranked results)~\cite{heiden2019:evaluation}.
The authors discuss several directions beyond pure SBFL, such as combining it with dynamic program analysis techniques,
including additional text analysis/IR techniques~\cite{Wang2015a}, mutation-based fault localization,
and using SBFL in an interactive feedback-based process, such as whyline-debugging~\cite{ko2008:debugging}.
Pure SBFL is closely related to the Spectrum-Based Log Diagnosis proposed here,
so we may see similar challenges (in fact, test 43 may already show some of this).
Of the proposed directions to go beyond pure SBFL,
both the inclusion of additional text analysis/IR techniques,
and the application of Spectrum-Based Log Diagnosis in an interactive feedback-based process
are plausible avenues to extend our approach.
Closely related to the latter option,
de Souza et al.~\cite{deSouza2018b} assess guidance and filtering strategies to \emph{contextualize} the fault localization process.
Their results suggest that contextualization by guidance and filtering can improve the effectiveness of SBFL,
by classifying more actual bugs in the top ranked results.
\begin{comment}
Direct comparison~\cite{He2018, jiang2017:what, Jones:2007:DP:1273463.1273468,
Xu2009, Hwa-YouHsu:2008:RIB:1642931.1642994}.
Hsu et
al~\cite{Hwa-YouHsu:2008:RIB:1642931.1642994} discuss methods for extracting
failure signatures as sequences of code executions, which in spirit is rather
similar to what we are trying to accomplish.
An interesting data-structure, the event correlation
graph, is explores in~\cite{Fu2012a}. An FL metric that takes frequencies into
account~\cite{Shu2016}.
\end{comment}
%
\section{Threats to Validity}
\label{sec:ttv}
\head{Construct Validity} %
The signatures that provide our ground truth were devised to determine whether a given log \emph{in its entirety} showed symptoms of a known error.
As discussed in Section~\ref{sec:dataset}, we have used these signatures to detect events that give sufficient evidence for a symptom,
but there may be other events that could be useful to the user that are not part of our ground truth.
We also assume that the logs exhibit exactly the failures described by the signature expression.
In reality, the logs could contain symptoms of multiple failures beyond the ones described by the signature.
Furthermore, we currently do not distinguish between events that consist of a single line of text and events that contain a multi-line stack trace, although these clearly represent different comprehension efforts.
This threat could be addressed by tracking the \emph{length} of the event contents,
and using it to further improve the accuracy of our effort reduction measure.
The choice of clustering algorithm and parameters affects the events retrieved,
but our investigation currently only considers HAC with complete linkage.
While we chose complete linkage to favor compact clusters,
outliers in the dataset could cause unfavorable clustering outcomes.
Furthermore, using the uncorrected sample standard deviation as the threshold criterion may be too lenient if the variance in the scores is high. This threat could be addressed by investigating alternative clustering algorithms and parameter choices.
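For reference, the clustering configuration questioned here corresponds to the following Python
sketch; the helper names are ours and this is not our exact implementation. Alternative
algorithms and thresholds can be explored by changing the linkage method and the threshold
value.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def retrieve_top_cluster(events, scores):
    scores = np.asarray(scores, dtype=float)
    # HAC with complete linkage on the one-dimensional scores.
    z = linkage(scores.reshape(-1, 1), method="complete")
    # Uncorrected sample standard deviation as the distance threshold.
    labels = fcluster(z, t=scores.std(ddof=0), criterion="distance")
    # Return the cluster with the highest mean interestingness score.
    best = max(set(labels), key=lambda c: scores[labels == c].mean())
    return [e for e, c in zip(events, labels) if c == best]
\end{verbatim}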
Moreover, as for the majority of log analysis frameworks, the performance of SBLD strongly depends on the quality of log abstraction.
An error in the abstraction will directly propagate to SBLD:
For example, if abstraction fails to identify two concrete events as being instances of the same generic event,
their aggregated frequencies will be smaller and consequently treated as less interesting by SBLD.
Similarly, the accuracy will suffer if two events that represent distinct generic events are treated as instances of the same generic event.
Future work could investigate alternative log abstraction approaches.
\head{Internal Validity} %
While our heatmaps illustrate the interaction between additional data and SBLD performance,
they are not sufficient to prove a causal relationship between performance and added data.
Our statistical comparisons suggest that a strategy of maximizing data is generally preferable,
but they are not sufficient for discussing the respective contribution of failing or passing logs.
\head{External Validity} %
This investigation is concerned with a single dataset from one industrial partner.
Studies using additional datasets from other contexts are needed to assess the generalizability of SBLD to other domains.
Moreover, while SBLD is made to help users diagnose problems that are not already well understood,
we are assessing it on a dataset of \emph{known} problems.
It could be that these errors, being known, are of a kind that are generally easier to identify than most errors.
Studying SBLD in-situ over time and directly assessing whether end users found it helpful
in diagnosis would better indicate the generalizability of our approach.
%
\section{Concluding Remarks}
\label{sec:conclusion}
\head{Contributions}
This paper presents and evaluates Spectrum-Based Log Diagnosis (SBLD),
a method for automatically identifying segments of failing logs
that are likely to help users diagnose failures.
Our empirical investigation of SBLD addresses the following questions:
(i) How well does SBLD reduce the \emph{effort needed} to identify all \emph{failure-relevant events} in the log for a failing run?
(ii) How is the \emph{performance} of SBLD affected by \emph{available data}?
(iii) How does SBLD compare to searching for \emph{simple textual patterns} that often occur in failure-relevant events?
\head{Results}
In response to (i),
we find that SBLD generally retrieves the failure-relevant events in a compact manner
that effectively reduces the effort needed to identify failure-relevant events.
In response to (ii),
we find that SBLD benefits from additional data, especially more logs from successful runs.
SBLD also benefits from additional logs from failing runs if there is a proportional amount of successful runs in the set.
We also find that the effect of added data is most pronounced when going from little data to \emph{some} data rather than from \emph{some} data to maximal data.
In response to (iii),
we find that SBLD achieves roughly the same effort reduction as traditional search-based methods but obtains slightly lower recall.
We trace the likely cause of this discrepancy on recall to a prominent part of our dataset, whose ground truth emphasizes rare events.
A lesson learned in this regard is that SBLD is not suited for finding statistical outliers but rather \emph{recurring suspects}
that characterize the observed failures.
Furthermore, the investigation highlights that traditional pattern-based search and SBLD can complement each other nicely:
Users can resort to SBLD if they are unhappy with what the pattern-based searches turn
up, and SBLD is an excellent method for finding characteristic textual patterns
that can form the basis of automated failure identification methods.
\head{Conclusions}
We conclude that SBLD shows promise as a method for diagnosing failing runs,
that its performance is positively affected by additional data,
but that it does not outperform textual search on the dataset considered.
\head{Future work}
We see the following directions for future work:
(a) investigate SBLD's performance on other datasets, to better assess generalizability,
(b) explore the impact of alternative log abstraction mechanisms,
(c) explore ways of combining SBLD with outlier detection, to accommodate different user needs,
(d) adapt Perez's DDU metric to our context and see if it can help predict diagnostic efficiency,
(e) experiment with extensions of \emph{pure SBLD} that include additional text analysis/IR techniques or apply it in an interactive feedback-based process, and (f) rigorously assess (extensions of) SBLD in in-situ experiments.
\begin{acks}
We thank Marius Liaaen and Thomas Nornes of Cisco Systems Norway for help with obtaining and understanding the dataset, for developing the log abstraction
mechanisms and for extensive discussions.
This work is supported by the \grantsponsor{RCN}{Research Council of Norway}{https://www.rcn.no} through the
Certus SFI (\grantnum{RCN}{\#203461/030}).
The empirical evaluation was performed on resources provided by \textsc{uninett} Sigma2,
the national infrastructure for high performance computing and data
storage in Norway.
\end{acks}
\printbibliography
\end{document}
| {'timestamp': '2020-08-18T02:18:33', 'yymm': '2008', 'arxiv_id': '2008.06948', 'language': 'en', 'url': 'https://arxiv.org/abs/2008.06948'} |
\section{Introduction}
When granular material in a cubic container is shaken horizontally, one observes experimentally different types of instabilities, i.e., spontaneous formation of ripples in shallow
beds~\cite{StrassburgerBetatSchererRehberg:1996},
liquefaction~\cite{RistowStrassburgerRehberg:1997,Ristow:1997}, convective
motion~\cite{TennakoonBehringer:1997,Jaeger} and recurrent swelling of
shaken material where the period of swelling decouples from the
forcing period~\cite{RosenkranzPoeschel:1996}. Other interesting experimental results concerning simultaneously vertically and horizontally vibrated granular systems~\cite{TennakoonBehringer:1998} and enhanced packing of spheres due to horizontal vibrations~\cite{PouliquenNicolasWeidman:1997} have been reported recently. Horizontally shaken
granular systems have been simulated numerically using cellular
automata~\cite{StrassburgerBetatSchererRehberg:1996} as well as
molecular dynamics
techniques~\cite{RistowStrassburgerRehberg:1997,Ristow:1997,IwashitaEtAl:1988,LiffmanMetcalfeCleary:1997,SaluenaEsipovPoeschel:1997,SPEpre99}.
Theoretical work on horizontal shaking can be found
in~\cite{SaluenaEsipovPoeschel:1997} and the dynamics of a single
particle in a horizontally shaken box has been discussed
in~\cite{DrosselPrellberg:1997}.
\begin{figure}[htbp]
\centerline{\psfig{file=sketch.eps,width=7cm,clip=}}
\caption{Sketch of the simulated system.}
\label{fig:sketch}
\end{figure}
Recently, the effect of convection in a horizontally shaken box filled with granular material has attracted much attention, and presently the effect is studied
experimentally by different
groups~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}.
Unlike the effect of convective motion in vertically shaken granular
material which has been studied intensively experimentally,
analytically and by means of computer simulations
(see, e.g.,~\cite{vertikalEX,JaegerVert,vertikalANA,vertikalMD}), there
exist only a few references on horizontal shaking. Different from the
vertical case, where the ``architecture'' of the convection pattern is
very simple~\cite{BizonEtAl:1998}, in horizontally shaken containers one observes a variety
of different patterns, convecting in different directions, in parallel
as well as perpendicular to the direction of
forcing~\cite{TennakoonBehringer:1997}. Under certain conditions one
observes several convection rolls on top of each other~\cite{Jaeger}.
An impression of the complicated convection can be found in the
internet~\cite{movies}.
Whereas the properties of convection in vertically sha\-ken systems
can be reproduced by two dimensional molecular dynamics simulations
with good reliability, for the case of horizontal motion the results
of simulations are inconsistent with the experimental results: in {\em
all} experimental investigations it was reported that the material
flows downwards close to the vertical
walls~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996,movies},
but reported numerical simulations systematically show surface rolls
in the opposite direction accompanying the more realistic deeper rolls, or
even replacing them completely~\cite{LiffmanMetcalfeCleary:1997}.
Our investigation is thus concerned with the convection pattern, i.e. the
number and direction of the convection rolls in a two dimensional
molecular dynamics simulation. We will show that the choice of the
dissipative material parameters has a crucial influence on the convection pattern
and, in particular, that the type of convection rolls observed experimentally
can be
reproduced by using sufficiently high dissipation constants.
\section{Numerical Model}
The system under consideration is sketched in Fig.~\ref{fig:sketch}:
we simulate a two-dimensional vertical cross section of a three-dimensional
container.
This rectangular section of width $L=100$ (all units in cgs system), and
infinite height, contains $N=1000$ spherical particles. The system is
periodically driven by an external oscillator $x(t) = A \sin (2\pi f
t)$ along a horizontal plane. For the effect we want to show, a
working frequency $f=10$ and an amplitude $A=4$ are selected.
These values give an acceleration amplitude of approximately $16 g$.
Lower accelerations affect the intensity of the
convection but do not change the basic features of the convection
pattern which we want to discuss.
As has been shown in~\cite{SPEpre99},
past the fluidization point, a much better indicator of the convective
state is the dimensionless velocity $A 2\pi f/ \sqrt{Lg}$. This means
that in small containers motion saturates earlier, hence, results for
different container lengths at the same values of the acceleration amplitude
cannot be compared directly. Our acceleration amplitude $\approx 16g$ corresponds to
$\approx 3g$ in a 10 cm container (provided that the frequency is the same
and particle sizes have been
scaled by the same amount).
The radii of the particles of density $2$ are homogeneously
distributed in the interval $[0.6, 1.4]$. The rough inner walls of the
container are simulated by attaching additional particles of the same
radii and material properties (this simulation technique is similar to ``real''
experiments, e.g.~\cite{JaegerVert}).
For the molecular dynamics simulations, we apply a modified
soft-particle model by Cundall and Strack~\cite{CundallStrack:1979}:
Two particles $i$ and $j$, with radii $R_i$ and $R_j$ and at positions
$\vec{r}_i$ and $\vec{r}_j$, interact if their compression $\xi_{ij}=
R_i+R_j-\left|\vec{r}_i -\vec{r}_j\right|$ is positive. In this case
the colliding spheres feel the force
$F_{ij}^{N} \vec{n}^N + F_{ij}^{S} \vec{n}^S$,
with $\vec{n}^N$ and $\vec{n}^S$ being the unit vectors in normal and shear
direction. The normal force acting between colliding spheres reads
\begin{equation}
F_{ij}^N = \frac{Y\sqrt{R^{\,\mbox{\it\footnotesize\it eff}}_{ij}}}{1-\nu^2}
~\left(\frac{2}{3}\xi_{ij}^{3/2} + B \sqrt{\xi_{ij}}\,
\frac{d {\xi_{ij}}}{dt} \right)
\label{normal}
\end{equation}
where $Y$ is the Young modulus, $\nu$ is the Poisson ratio and $B$
is a material constant which characterizes the dissipative
character of the material~\cite{BSHP}.
\begin{equation}
R^{\,\mbox{\it\footnotesize\it
eff}}_{ij} = \left(R_i R_j\right)/\left(R_i + R_j\right)
\end{equation}
is the
effective radius. For a strict derivation of (\ref{normal})
see~\cite{BSHP,KuwabaraKono}.
For the shear force we apply the model by Haff and Werner~\cite{HaffWerner}
\begin{equation}
F_{ij}^S = \mbox{sign}\left({v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right)
\min \left\{\gamma_s m_{ij}^{\,\mbox{\it\footnotesize\it eff}}
\left|{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right|~,~\mu
\left|F_{ij}^N\right| \right\}
\label{shear}
\end{equation}
with the effective mass $m_{ij}^{\,\mbox{\it\footnotesize\it eff}} =
\left(m_i m_j\right)/\left(m_i + m_j\right)$ and the relative velocity
at the point of contact
\begin{equation}
{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}} = \left(\dot{\vec{r}}_i -
\dot{\vec{r}}_j\right)\cdot \vec{n}^S + R_i {\Omega}_i + R_j {\Omega}_j ~.
\end{equation}
$\Omega_i$ and $\Omega_j$ are the angular velocities of the particles.
The resulting torques $M_i$ and $M_j$ acting upon the particles are
$M_i = F_{ij}^S R_i$ and $M_j = - F_{ij}^S R_j$. Eq.~(\ref{shear})
takes into account that the particles slide upon each other for the
case that the Coulomb condition $\mu \left| F_{ij}^N \right| < \left| F_{ij}^S \right|$ holds, otherwise they feel some viscous friction.
By means of $\gamma _{n} \equiv BY/(1-\nu ^2)$ and $\gamma _{s}$,
normal and shear damping coefficients, energy loss during particle
contact is taken into account~\cite{restitution}.
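For reference, the pairwise contact forces of Eqs.~(\ref{normal}) and (\ref{shear}) can be
written compactly as in the following Python sketch (in cgs units, with the parameter values
listed below and $B$ chosen for the case $\gamma_n = 10^4$; the bookkeeping of positions,
overlaps and torques is omitted, and the function is only meant as an illustration of the
force law):
\begin{verbatim}
import numpy as np

def contact_forces(xi, dxi_dt, v_rel_shear, r_eff, m_eff,
                   Y_eff=1.0e8, B=1.0e-4, gamma_s=1.0e3, mu=0.5):
    # Normal force, Eq. (1): Hertz elasticity plus viscous damping,
    # with Y_eff = Y/(1-nu^2); only meaningful for xi > 0 (overlap).
    f_n = Y_eff * np.sqrt(r_eff) * (2.0 / 3.0 * xi**1.5
                                    + B * np.sqrt(xi) * dxi_dt)
    # Shear force, Eq. (3): viscous friction capped by the Coulomb
    # limit mu*|F_N|.
    f_s = np.sign(v_rel_shear) * min(gamma_s * m_eff * abs(v_rel_shear),
                                     mu * abs(f_n))
    return f_n, f_s
\end{verbatim}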
The equations of motion for translation and rotation have been solved
using a Gear predictor-corrector scheme of sixth order
(e.g.~\cite{AllenTildesley:1987}).
The values of the coefficients used in simulations are $Y/(1-\nu
^2)=1\times 10^{8}$, $\gamma _{s}=1\times 10^{3}$, $ \mu =0.5$. For
the effect we want to show, the coefficient $\gamma _{n}$ takes values within the range
$\left[10^2,10^4\right]$.
\section{Results}
The mechanisms for convection under horizontal shaking have been
discussed in \cite{LiffmanMetcalfeCleary:1997}. Now we can show that
these mechanisms can be better understood by taking into account the
particular role of dissipation in this problem. The most striking
consequence of varying the normal damping coefficient is the change
in organization of the convective pattern, i.e. the direction and
number of rolls in the stationary regime. This is shown in
Fig.~\ref{fig1}, which has been obtained after averaging particle
displacements over 200 cycles
(2 snapshots per cycle).
The asymmetry of compression and expansion of particles close to
the walls (where the material is highly compressible) explains
the large transverse velocities shown in the figure.
Note, however, that the upward and downward motion at the walls cannot be altered
by this particular averaging procedure.
The first frame shows a convection pattern with only two rolls, where
the arrows indicate that the grains slide down the walls, with at most
a slight expansion of the material at the surface.
There are no surface rolls.
This is very
similar to what has been observed in
experiments\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}.
In this case, dissipation is high enough to damp most of the sloshing
induced by the vertical walls, and not even the grains just below the
surface can overcome the pressure gradient directed downwards.
For lower damping, we see the development of surface rolls, which coexist with the inner rolls circulating in the opposite direction. Some
energy is now available for upward motion when the walls compress the
material fluidized during the opening of the wall ``gap'' (empty space
which is created alternately during the shaking motion). This is the
case reported in \cite{LiffmanMetcalfeCleary:1997}. The last frames
demonstrate how the original rolls vanish at the same time that the
surface rolls grow, occupying a significant part of the system.
Another feature shown in the figure is the thin layer of material involving
three particle rows close to the bottom, which performs a different kind
of motion. This effect, which can be seen in all frames,
is due to the presence of the constraining boundaries
but has not been analyzed separately.
\onecolumn
\begin{figure}
\centerline{\psfig{file=fric1nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric2nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric3nn.eps,width=5.7cm,clip=}}
\centerline{\psfig{file=fric4nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric5nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric6nn.eps,width=5.7cm,clip=}}
\centerline{\psfig{file=fric7nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric8nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric9nn.eps,width=5.7cm,clip=}}
\vspace{0.3cm}
\caption{Velocity field obtained after cycle averaging of
particle displacements, for different values of the normal damping
coefficient, $\gamma_n$. The first one is $1\times 10^4$, and for
obtaining each subsequent frame the coefficient has been divided by
two. The frames are ordered from left to right and from top to
bottom. The cell size for averaging is approximately one particle diameter.}
\label{fig1}
\vspace*{-0.2cm}
\end{figure}
\twocolumn
With decreasing normal damping $\gamma_n$ there are two transitions
observable in Fig.~\ref{fig1}, meaning that the convection pattern changes
qualitatively at these two particular values of $\gamma_n$:
The first transition leads to the appearance of two surface rolls
lying on top of the bulk cells and circulating in the opposite direction.
The second transition eliminates the bulk rolls. A more detailed analysis of
the displacement fields (Fig.~\ref{fig2})
allows us to locate the transitions much more precisely.
In Fig.~\ref{fig2} we have represented in grey-scale the horizontal and
vertical components of the displacement vectors pictured in
Fig.~\ref{fig1} but in a denser sampling, analyzing data from 30 simulations
corresponding to
values of the normal damping coefficient within the interval [50,10000].
For horizontal displacements, we have chosen vertical sections
at some representative position in horizontal direction
($x=30$). For the vertical displacements, vertical sections of the
leftmost part of the container were selected ($x=10$), see
Fig.~\ref{fig2}, lower part.
\begin{figure}
\centerline{\psfig{file=vx.eps,width=4.5cm,clip=}\hspace{-0.5cm}
\psfig{file=vy.eps,width=4.5cm,clip=}
\centerline{\psfig{file=sectionn.eps,height=4.2cm,bbllx=7pt,bblly=16pt,bburx=507pt,bbury=544pt,clip=}}
\vspace*{0.2cm}
\caption{Horizontal (left) and vertical (right) displacements at
selected positions of the frames in Fig.~\ref{fig1} (see the text
for details), for decreasing normal damping and as a function of
depth. White indicates strongest flow along positive axis directions
(up,right), and black the corresponding negative ones. The black region
at the bottom of the left picture corresponds to the complex boundary
effect observed in Fig.~\ref{fig1}, involving only two particle layers.
The
figure below shows a typical convection pattern together with the sections
at $x=10$ and $x=30$ at which the displacements were recorded.}
\label{fig2}
\vspace*{-0.1cm}
\end{figure}
The horizontal axis shows the values of the normal damping
coefficient scaled logarithmically in decreasing sequence. The
vertical axis represents the position in vertical direction, with the
free surface of the system located at $y \approx 60$. One observes first
that white surface shades, complemented by subsurface black ones,
appear quite clearly at about $\gamma_n = 2000$ in Fig.~\ref{fig2}
(left), indicating the appearance of surface rolls. On the other
hand, Fig.~\ref{fig2} (right) shows a black area (indicative of
downward flow along the vertical wall) that vanishes at
$\gamma_n \approx 200$ (at this point the grey shade represents vanishing vertical velocity).
The dashed lines in Fig.~\ref{fig2} lead the eye to identify the transition values.
In the interval $ 200 \lesssim \gamma_n
\lesssim 2000$ surface and inner rolls coexist, rotating in opposite
directions.
One can analyze the situation in terms of the restitution coefficient.
\ From Eq. (\ref{normal}), the equation of motion for the displacement
$\xi_{ij}$ can be integrated and the relative energy loss in a
collision $\eta=(E_0-E)/E_0$ (with $E$ and $E_0$ being the energy of
the relative motion of the particles) can be evaluated approximately.
Up to the lowest order in the expansion parameter, one
finds~\cite{Thomas-Thorsten}
\begin{equation}
\eta = 1.78 \left( \frac{\tau}{\ell} v_0\right)^{1/5}\;,
\label{energyloss}
\end{equation}
where $v_0$ is the relative initial velocity in normal direction, and
$\tau$, $\ell$, time and length scales associated with the problem
(see~\cite{Thomas-Thorsten} for details),
\begin{equation}
\tau = \frac{3}{2} B\; ,~~~~~~~~~
\ell = \left(\frac{1}{3} \frac{m_{ij}^{\,\mbox{\it\footnotesize\it eff}}
}{\sqrt{R^{\,\mbox{\it\footnotesize\it eff}}_{ij}}
B \gamma_{n}}\right)^{2}.
\end{equation}
For $\gamma_n = 10^4$ (the highest value analyzed) and the values of
the parameters specified above ($v_0 \approx A 2\pi f$ for collisions
with the incoming wall), $B= 10^{-4}$ and $\eta$ is typically
50\%. This means that after three more collisions the particle leaves with too little energy to overcome the height of a single particle in the gravity field. For $\gamma_n = 10^3$ and the other
parameters kept constant, $B=10^{-5}$ and $\eta$ has been
reduced to 5\%, so that the number of collisions needed for the particle to have its kinetic energy reduced to the same residual fraction increases roughly by an order of magnitude. On the other
hand, given the weak dependence of Eq. (\ref{energyloss}) on the
velocity, one expects that the transitions shown in Fig.~\ref{fig2}
will depend also weakly on the amplitude of the shaking velocity. The reduction of the
inelasticity $\eta$ by an order of magnitude seems enough for
particles to ``climb'' the walls and develop the characteristic
surface rolls observed in numerical simulations.
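The two percentages quoted above can be reproduced with a short back-of-the-envelope
computation, sketched below in Python. The sketch assumes a grain of mean radius $1$ and
density $2$ hitting a fixed wall particle of equal radius, so that the effective mass is
approximately the grain mass and the effective radius is $0.5$; these choices are ours and are
only meant to illustrate the orders of magnitude.
\begin{verbatim}
import numpy as np

m_eff = 2.0 * 4.0 / 3.0 * np.pi * 1.0**3   # grain mass, ~8.4 g
r_eff = 0.5                                # effective radius in cm
v0 = 4.0 * 2.0 * np.pi * 10.0              # A*2*pi*f, ~251 cm/s

for gamma_n in (1.0e4, 1.0e3):
    B = gamma_n / 1.0e8                    # gamma_n = B*Y/(1-nu^2)
    tau = 1.5 * B
    ell = (m_eff / (3.0 * np.sqrt(r_eff) * B * gamma_n)) ** 2
    eta = 1.78 * (tau * v0 / ell) ** 0.2
    print(gamma_n, round(eta, 2))          # ~0.53 and ~0.05
\end{verbatim}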
\section{Discussion}
We have shown that the value of the normal damping coefficient
influences the convective pattern of horizontally shaken granular
materials. By means of molecular dynamics simulations in two
dimensions we can reproduce the pattern observed in real experiments,
which corresponds to a situation of comparatively high damping,
characterized by inelasticity parameters $\eta$ larger than 5\%. For
lower damping, the upper layers of the material develop additional
surface rolls as has been reported previously. As normal damping
decreases, the lower rolls descend and finally disappear completely at
inelasticities of the order of 1\%.
\begin{acknowledgement}
The authors want to thank R. P. Behringer, H. M. Jaeger, M. Medved,
and D. Rosenkranz for providing experimental results prior to
publication and V. Buchholtz, S. E. Esipov, and L. Schimansky-Geier
for discussion. The calculations have been done on the parallel
machine {\it KATJA} (http://summa.physik.hu-berlin.de/KATJA/) of the
medical department {\em Charit\'e} of the Humboldt University Berlin.
The work was supported by Deut\-sche Forschungsgemeinschaft through
grant Po 472/3-2.
\end{acknowledgement}
| {'timestamp': '2002-03-19T12:47:20', 'yymm': '9807', 'arxiv_id': 'cond-mat/9807071', 'language': 'en', 'url': 'https://arxiv.org/abs/cond-mat/9807071'} |
"\\section{\\label{sec:intro}Introduction}\n \nDemonstration of non-abelian exchange statistics is o(...TRUNCATED) | "{'timestamp': '2022-10-20T02:16:28', 'yymm': '2210', 'arxiv_id': '2210.10650', 'language': 'en', 'u(...TRUNCATED) |
"\\section{Introduction}\n\nOver the last decade, imaging atmospheric Cherenkov telescopes\n(IACTs) (...TRUNCATED) | "{'timestamp': '1998-07-13T09:54:01', 'yymm': '9807', 'arxiv_id': 'astro-ph/9807119', 'language': 'e(...TRUNCATED) |
"\\section{Introduction}\n\\label{sec:introduction}\nA plethora of observations have led to confirm (...TRUNCATED) | "{'timestamp': '2021-11-08T02:04:43', 'yymm': '2111', 'arxiv_id': '2111.03152', 'language': 'en', 'u(...TRUNCATED) |
"\n\n\\section{Introduction} \\label{sec:introduction} \\input{introduction}\n\\section{Related Wor(...TRUNCATED) | "{'timestamp': '2016-06-17T02:01:41', 'yymm': '1606', 'arxiv_id': '1606.04992', 'language': 'en', 'u(...TRUNCATED) |
"\\section{introduction}\nRecent discovery of Weyl semimetals (WSMs)~\\cite{Lv2015TaAs,Xu2015TaAs,Ya(...TRUNCATED) | "{'timestamp': '2016-08-18T02:05:38', 'yymm': '1608', 'arxiv_id': '1608.03404', 'language': 'en', 'u(...TRUNCATED) |
"\\section{Introduction}\n\nConformal invariance was first recognised to be of physical interest whe(...TRUNCATED) | "{'timestamp': '2019-04-24T02:04:30', 'yymm': '1904', 'arxiv_id': '1904.10101', 'language': 'en', 'u(...TRUNCATED) |
Dataset Card for the RedPajama 1B-Token Sample
Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset. This HuggingFace repo contains a 1B-token sample of the RedPajama dataset. The full dataset has the following token counts and is available for download:
Dataset | Token Count |
---|---|
Commoncrawl | 878 Billion |
C4 | 175 Billion |
GitHub | 59 Billion |
Books | 26 Billion |
ArXiv | 28 Billion |
Wikipedia | 24 Billion |
StackExchange | 20 Billion |
Total | 1.2 Trillion |
A full set of scripts to recreate the dataset from scratch can be found here.
Languages
Primarily English, though the Wikipedia slice contains multiple languages.
Dataset Structure
The dataset structure is as follows:
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
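As an illustration, a record can be loaded with the `datasets` library as shown below; the repository id used here is an assumption and should be replaced with the id of this repo if it differs.

```python
from datasets import load_dataset

# Assumed repository id; substitute the actual id of this repo.
sample = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", split="train")

example = sample[0]
print(example["text"][:200])  # beginning of the document text
print(example["meta"])        # source-specific metadata
```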
Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
Source Data
Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official cc_net
pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality files and only keep projects that are distributed under the MIT, BSD, or Apache license.
Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains text in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments and other formatting boilerplate have been removed.
Gutenberg and Books3
The PG19 subset of the Gutenberg Project and Books3 datasets are downloaded from Huggingface. After downloading, we use simhash to remove near duplicates.
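The core idea behind simhash is sketched below; this is a minimal illustration (word-level tokens and MD5 are arbitrary choices here), not the exact implementation used for this dataset. Documents whose fingerprints differ in only a few bits are treated as near duplicates.

```python
import hashlib

def simhash(text, bits=64):
    # Each token votes +1/-1 on every bit position of its hash;
    # the sign pattern of the totals is the document fingerprint.
    counts = [0] * bits
    for token in text.split():
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, c in enumerate(counts) if c > 0)

def hamming_distance(a, b):
    return bin(a ^ b).count("1")
```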
ArXiv
ArXiv data is downloaded from Amazon S3 in the arxiv requester-pays bucket. We only keep LaTeX source files and remove preambles, comments, macros and bibliographies.
Stackexchange
The Stack Exchange split of the dataset is downloaded from the Internet Archive. Here we only keep the posts from the 28 largest sites, remove HTML tags, group the posts into question-answer pairs, and order answers by their score.