Titles | Abstracts | Years | Categories |
---|---|---|---|
A Semantic Distance Metric Learning approach for Lexical Semantic Change
Detection | Detecting temporal semantic changes of words is an important task for various
NLP applications that must make time-sensitive predictions. The Lexical Semantic
Change Detection (SCD) task considers the problem of predicting whether a given
target word, $w$, changes its meaning between two different text corpora, $C_1$
and $C_2$. For this purpose, we propose a supervised two-staged SCD method that
uses existing Word-in-Context (WiC) datasets. In the first stage, for a target
word $w$, we learn two sense-aware encoders that represent the meaning of $w$
in a given sentence selected from a corpus. Next, in the second stage, we learn
a sense-aware distance metric that compares the semantic representations of a
target word across all of its occurrences in $C_1$ and $C_2$. Experimental
results on multiple benchmark datasets for SCD show that our proposed method
consistently outperforms all previously proposed SCD methods for multiple
languages, establishing a novel state-of-the-art for SCD. Interestingly, our
findings imply that there are specialised dimensions that carry information
related to semantic changes of words in the sense-aware embedding space. Source
code is available at https://github.com/a1da4/svp-sdml.
| 2,024 | Computation and Language |
Benchmarking zero-shot stance detection with FlanT5-XXL: Insights from
training data, prompting, and decoding strategies into its near-SoTA
performance | We investigate the performance of LLM-based zero-shot stance detection on
tweets. Using FlanT5-XXL, an instruction-tuned open-source LLM, with the
SemEval 2016 Tasks 6A, 6B, and P-Stance datasets, we study the performance and
its variations under different prompts and decoding strategies, as well as the
potential biases of the model. We show that the zero-shot approach can match or
outperform state-of-the-art benchmarks, including fine-tuned models. We provide
various insights into its performance, including the sensitivity to instructions
and prompts, the decoding strategies, the perplexity of the prompts, and the
negations and oppositions present in prompts. Finally, we verify that the LLM
has not been trained on the test datasets, and identify a positivity bias which may
partially explain the performance differences across decoding strategies.
| 2,024 | Computation and Language |
CASIMIR: A Corpus of Scientific Articles enhanced with Multiple
Author-Integrated Revisions | Writing a scientific article is a challenging task, as it is a highly codified
and specific genre; consequently, proficiency in written communication is
essential for effectively conveying research findings and ideas. In this
article, we propose an original textual resource on the revision step of the
writing process of scientific articles. This new dataset, called CASIMIR,
contains the multiple revised versions of 15,646 scientific articles from
OpenReview, along with their peer reviews. Pairs of consecutive versions of an
article are aligned at sentence-level while keeping paragraph location
information as metadata for supporting future revision studies at the discourse
level. Each pair of revised sentences is enriched with automatically extracted
edits and associated revision intentions. To assess the initial quality of the
dataset, we conducted a qualitative study of several state-of-the-art text
revision approaches and compared various evaluation metrics. Our experiments
led us to question the relevance of the current evaluation methods for the text
revision task.
| 2,024 | Computation and Language |
EUROPA: A Legal Multilingual Keyphrase Generation Dataset | Keyphrase generation has primarily been explored within the context of
academic research articles, with a particular focus on scientific domains and
the English language. In this work, we present EUROPA, a dataset for
multilingual keyphrase generation in the legal domain. It is derived from legal
judgments of the Court of Justice of the European Union (EU), and contains
instances in all 24 EU official languages. We run multilingual models on our
corpus and analyze the results, showing room for improvement on a
domain-specific multilingual corpus such as the one we present.
| 2,024 | Computation and Language |
Extracting Polymer Nanocomposite Samples from Full-Length Documents | This paper investigates the use of large language models (LLMs) for
extracting sample lists of polymer nanocomposites (PNCs) from full-length
materials science research papers. The challenge lies in the complex nature of
PNC samples, which have numerous attributes scattered throughout the text. The
complexity of annotating detailed information on PNCs limits the availability
of data, making conventional document-level relation extraction techniques
impractical due to the challenge in creating comprehensive named entity span
annotations. To address this, we introduce a new benchmark and an evaluation
technique for this task and explore different prompting strategies in a
zero-shot manner. We also incorporate self-consistency to improve the
performance. Our findings show that even advanced LLMs struggle to extract all
of the samples from an article. Finally, we analyze the errors encountered in
this process, categorizing them into three main challenges, and discuss
potential strategies for future research to overcome them.
| 2,024 | Computation and Language |
Gender Bias in Large Language Models across Multiple Languages | With the growing deployment of large language models (LLMs) across various
applications, assessing the influence of gender biases embedded in LLMs becomes
crucial. The topic of gender bias within the realm of natural language
processing (NLP) has gained considerable focus, particularly in the context of
English. Nonetheless, the investigation of gender bias in languages other than
English is still relatively under-explored and insufficiently analyzed. In this
work, we examine gender bias in LLM-generated outputs across different languages.
We use three measurements: 1) gender bias in selecting descriptive words given
the gender-related context; 2) gender bias in selecting gender-related pronouns
(she/he) given the descriptive words; and 3) gender bias in the topics of
LLM-generated dialogues. We investigate the outputs of the GPT series of LLMs
in various languages using our three measurement methods. Our findings reveal
significant gender biases across all the languages we examined.
| 2,024 | Computation and Language |
DPP-Based Adversarial Prompt Searching for Language Models | Language models risk generating mindless and offensive content, which hinders
their safe deployment. Therefore, it is crucial to discover and modify
potential toxic outputs of pre-trained language models before deployment. In
this work, we elicit toxic content by automatically searching for a prompt that
directs pre-trained language models towards the generation of a specific target
output. The problem is challenging due to the discrete nature of textual data
and the considerable computational resources required for a single forward pass
of the language model. To combat these challenges, we introduce Auto-regressive
Selective Replacement Ascent (ASRA), a discrete optimization algorithm that
selects prompts based on both quality and similarity with a determinantal point
process (DPP). Experimental results on six different pre-trained language
models demonstrate the efficacy of ASRA for eliciting toxic content.
Furthermore, our analysis reveals a strong correlation between the success rate
of ASRA attacks and the perplexity of target outputs, while indicating limited
association with the quantity of model parameters.
| 2,024 | Computation and Language |
Semi-Instruct: Bridging Natural-Instruct and Self-Instruct for Code
Large Language Models | Instruction tuning plays a pivotal role in Code Large Language Models (Code
LLMs) for the task of program synthesis. Presently, two dominant paradigms for
collecting tuning data are natural-instruct (human-written) and self-instruct
(automatically generated). Natural-instruct includes diverse and correct codes
but lacks instruction-code pairs and contains improper code formats such as nested
single-line codes. In contrast, self-instruct automatically generates properly
paired data. However, it suffers from low diversity due to generating
duplicates and cannot ensure the correctness of codes. To bridge the two
paradigms, we propose \textbf{Semi-Instruct}. It first converts diverse but
improper codes from natural-instruct into proper instruction-code pairs through
a method similar to self-instruct. To verify the correctness of generated
codes, we design a novel way to construct test cases by generating cases'
inputs and executing correct codes from natural-instruct to get outputs.
Finally, diverse and correct instruction-code pairs are retained for
instruction tuning. Experiments show that semi-instruct is significantly better
than natural-instruct and self-instruct. Furthermore, the performance steadily
improves as data scale increases.
| 2,024 | Computation and Language |
Self-Consistent Reasoning-based Aspect-Sentiment Quad Prediction with
Extract-Then-Assign Strategy | In the task of aspect sentiment quad prediction (ASQP), generative methods
for predicting sentiment quads have shown promising results. However, they
still suffer from imprecise predictions and limited interpretability, caused by
data scarcity and inadequate modeling of the quadruplet composition process. In
this paper, we propose Self-Consistent Reasoning-based Aspect-sentiment
quadruple Prediction (SCRAP), optimizing its model to generate reasonings and
the corresponding sentiment quadruplets in sequence. SCRAP adopts the
Extract-Then-Assign reasoning strategy, which closely mimics human cognition.
In the end, SCRAP significantly improves the model's ability to handle complex
reasoning tasks and correctly predict quadruplets through consistency voting,
resulting in enhanced interpretability and accuracy in ASQP.
| 2,024 | Computation and Language |
Post-decoder Biasing for End-to-End Speech Recognition of Multi-turn
Medical Interview | The end-to-end (E2E) approach is gradually replacing hybrid models for automatic
speech recognition (ASR) tasks. However, the optimization of E2E models lacks
an intuitive method for handling decoding shifts, especially in scenarios with
a large number of domain-specific rare words that hold specific important
meanings. Furthermore, the absence of knowledge-intensive speech datasets in
academia has been a significant limiting factor, and the commonly used speech
corpora exhibit significant disparities from realistic conversations. To address
these challenges, we present Medical Interview (MED-IT), a multi-turn
consultation speech dataset that contains a substantial number of
knowledge-intensive named entities. We also explore methods to enhance the
recognition performance of rare words for E2E models. We propose a novel
approach, post-decoder biasing, which constructs a transform probability matrix
based on the distribution of training transcriptions. This guides the model to
prioritize recognizing words in the biasing list. In our experiments, for
subsets of rare words appearing in the training speech between 10 and 20 times,
and between 1 and 5 times, the proposed method achieves a relative improvement
of 9.3% and 5.1%, respectively.
| 2,024 | Computation and Language |
Cross-Lingual Learning vs. Low-Resource Fine-Tuning: A Case Study with
Fact-Checking in Turkish | The rapid spread of misinformation through social media platforms has raised
concerns regarding its impact on public opinion. While misinformation is
prevalent in other languages, the majority of research in this field has
concentrated on the English language. Hence, there is a scarcity of datasets
for other languages, including Turkish. To address this concern, we have
introduced the FCTR dataset, consisting of 3238 real-world claims. This dataset
spans multiple domains and incorporates evidence collected from three Turkish
fact-checking organizations. Additionally, we aim to assess the effectiveness
of cross-lingual transfer learning for low-resource languages, with a
particular focus on Turkish. We demonstrate in-context learning (zero-shot and
few-shot) performance of large language models in this context. The
experimental results indicate that the dataset has the potential to advance
research in the Turkish language.
| 2,024 | Computation and Language |
Rethinking Tokenization: Crafting Better Tokenizers for Large Language
Models | Tokenization significantly influences the performance of language models (LMs). This
paper traces the evolution of tokenizers from word-level to subword-level,
analyzing how they balance tokens and types to enhance model adaptability while
controlling complexity. Despite subword tokenizers like Byte Pair Encoding
(BPE) overcoming many word tokenizer limitations, they encounter difficulties
in handling non-Latin languages and depend heavily on extensive training data
and computational resources to grasp the nuances of multiword expressions
(MWEs). This article argues that tokenizers, more than mere technical tools,
should draw inspiration from cognitive science research on human language
processing. This study then introduces the "Principle of Least Effort" from
cognitive science, which holds that humans naturally seek to reduce cognitive
effort, and discusses the benefits of this principle for tokenizer development.
Based on this principle, the paper proposes the Less-is-Better (LiB) model as
a new approach to LLM tokenization. The LiB model can autonomously learn an
integrated vocabulary consisting of subwords, words, and MWEs, which
effectively reduces both the number of tokens and the number of types. Comparative
evaluations show that the LiB tokenizer outperforms existing word and BPE
tokenizers, presenting an innovative method for tokenizer development, and
hinting at the possibility of future cognitive science-based tokenizers being
more efficient.
| 2,024 | Computation and Language |
LLMs for Targeted Sentiment in News Headlines: Exploring Different
Levels of Prompt Prescriptiveness | News headlines often evoke sentiment by intentionally portraying entities in
particular ways, making targeted sentiment analysis (TSA) of headlines a
worthwhile but difficult task. Fine-tuned encoder models show satisfactory TSA
performance, but their background knowledge is limited, and they require a
labeled dataset. LLMs offer a potentially universal solution for TSA due to
their broad linguistic and world knowledge along with in-context learning
abilities, yet their performance is heavily influenced by prompt design.
Drawing parallels with annotation paradigms for subjective tasks, we explore
the influence of prompt design on the performance of LLMs for TSA of news
headlines. We evaluate the predictive accuracy of state-of-the-art LLMs using
prompts with different levels of prescriptiveness, ranging from plain zero-shot
to elaborate few-shot prompts matching annotation guidelines. Recognizing the
subjective nature of TSA, we evaluate the ability of LLMs to quantify
predictive uncertainty via calibration error and correlation to human
inter-annotator agreement. We find that, except for few-shot prompting,
calibration and F1-score improve with increased prescriptiveness, but the
optimal level depends on the model.
| 2,024 | Computation and Language |
Hierarchical Indexing for Retrieval-Augmented Opinion Summarization | We propose a method for unsupervised abstractive opinion summarization that
combines the attributability and scalability of extractive approaches with the
coherence and fluency of Large Language Models (LLMs). Our method, HIRO, learns
an index structure that maps sentences to a path through a semantically
organized discrete hierarchy. At inference time, we populate the index and use
it to identify and retrieve clusters of sentences containing popular opinions
from input reviews. Then, we use a pretrained LLM to generate a readable
summary that is grounded in these extracted evidential clusters. The modularity
of our approach allows us to evaluate its efficacy at each stage. We show that
HIRO learns an encoding space that is more semantically structured than prior
work, and generates summaries that are more representative of the opinions in
the input reviews. Human evaluation confirms that HIRO generates more coherent,
detailed and accurate summaries that are significantly preferred by annotators
compared to prior work.
| 2,024 | Computation and Language |
Your Model Is Not Predicting Depression Well And That Is Why: A Case
Study of PRIMATE Dataset | This paper addresses the quality of annotations in mental health datasets
used for NLP-based depression level estimation from social media texts. While
previous research relies on social media-based datasets annotated with binary
categories, i.e. depressed or non-depressed, recent datasets such as D2S and
PRIMATE aim for nuanced annotations using PHQ-9 symptoms. However, most of
these datasets rely on crowd workers without the domain knowledge for
annotation. Focusing on the PRIMATE dataset, our study reveals concerns
regarding annotation validity, particularly for the "lack of interest or
pleasure" symptom. Through reannotation by a mental health professional, we
introduce finer labels and textual spans as evidence, identifying a notable
number of false positives. Our refined annotations, to be released under a Data
Use Agreement, offer a higher-quality test set for anhedonia detection. This
study underscores the necessity of addressing annotation quality issues in
mental health datasets, advocating for improved methodologies to enhance NLP
model reliability in mental health assessments.
| 2,024 | Computation and Language |
LUCID: LLM-Generated Utterances for Complex and Interesting Dialogues | Virtual assistants are poised to take a dramatic leap forward in terms of
their dialogue capabilities, spurred by recent advances in transformer-based
Large Language Models (LLMs). Yet a major bottleneck to achieving genuinely
transformative task-oriented dialogue capabilities remains the scarcity of high
quality and linguistically sophisticated data. Existing datasets, while
impressive in scale, have limited domain coverage and contain few genuinely
challenging conversational phenomena; those which are present are typically
unlabelled, making it difficult to assess the strengths and weaknesses of
models without time-consuming and costly human evaluation. Moreover, creating
high quality dialogue data has until now required considerable human input,
limiting both the scale of these datasets and the ability to rapidly bootstrap
data for a new target domain. We aim to overcome these issues with LUCID, a
modularised and highly automated LLM-driven data generation system that
produces realistic, diverse and challenging dialogues. We use LUCID to generate
a seed dataset of 4,277 multi-domain, multi-intent conversations across 100
intents to demonstrate its capabilities. The generated conversations include a
wide range of challenging phenomena and diverse user behaviour, conveniently
identifiable via a set of turn-level tags. Finally, we provide separate test
sets for seen and unseen intents, allowing for convenient out-of-distribution
evaluation. We release both the data generation code and the dataset itself.
| 2,024 | Computation and Language |
Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of
Machine Cognition | Recent advances in LLMs have sparked a debate on whether they understand
text. In this position paper, we argue that opponents in this debate hold
different definitions for understanding, and particularly differ in their view
on the role of consciousness. To substantiate this claim, we propose a thought
experiment involving an open-source chatbot $Z$ which excels on every possible
benchmark, seemingly without subjective experience. We ask whether $Z$ is
capable of understanding, and show that different schools of thought within
seminal AI research seem to answer this question differently, uncovering their
terminological disagreement. Moving forward, we propose two distinct working
definitions for understanding which explicitly acknowledge the question of
consciousness, and draw connections with a rich literature in philosophy,
psychology and neuroscience.
| 2,024 | Computation and Language |
PoTeC: A German Naturalistic Eye-tracking-while-reading Corpus | The Potsdam Textbook Corpus (PoTeC) is a naturalistic
eye-tracking-while-reading corpus containing data from 75 participants reading
12 scientific texts. PoTeC is the first naturalistic eye-tracking-while-reading
corpus that contains eye-movements from domain-experts as well as novices in a
within-participant manipulation: It is based on a 2x2x2 fully-crossed factorial
design which includes the participants' level of study and the participants'
discipline of study as between-subject factors and the text domain as a
within-subject factor. The participants' reading comprehension was assessed by
a series of text comprehension questions and their domain knowledge was tested
by text-independent background questions for each of the texts. The materials
are annotated for a variety of linguistic features at different levels. We
envision PoTeC to be used for a wide range of studies including but not limited
to analyses of expert and non-expert reading strategies. The corpus and all the
accompanying data at all stages of the preprocessing pipeline and all code used
to preprocess the data are made available via GitHub:
https://github.com/DiLi-Lab/PoTeC.
| 2,024 | Computation and Language |
Surveying the Dead Minds: Historical-Psychological Text Analysis with
Contextualized Construct Representation (CCR) for Classical Chinese | In this work, we develop a pipeline for historical-psychological text
analysis in classical Chinese. Humans have produced texts in various languages
for thousands of years; however, most of the computational literature is
focused on contemporary languages and corpora. The emerging field of historical
psychology relies on computational techniques to extract aspects of psychology
from historical corpora using new methods developed in natural language
processing (NLP). The present pipeline, called Contextualized Construct
Representations (CCR), combines expert knowledge in psychometrics (i.e.,
psychological surveys) with text representations generated via
transformer-based language models to measure psychological constructs such as
traditionalism, norm strength, and collectivism in classical Chinese corpora.
Considering the scarcity of available data, we propose an indirect supervised
contrastive learning approach and build the first Chinese historical psychology
corpus (C-HI-PSY) to fine-tune pre-trained models. We evaluate the pipeline to
demonstrate its superior performance compared with other approaches. The CCR
method outperforms word-embedding-based approaches across all of our tasks and
exceeds prompting with GPT-4 in most tasks. Finally, we benchmark the pipeline
against objective, external data to further verify its validity.
| 2,024 | Computation and Language |
ROME: Memorization Insights from Text, Probability and Hidden State in
Large Language Models | Probing the memorization of large language models holds significant
importance. Previous works have established metrics for quantifying
memorization, explored various influencing factors, such as data duplication,
model size, and prompt length, and evaluated memorization by comparing model
outputs with training corpora. However, the training corpora are of enormous
scale and their pre-processing is time-consuming. To explore memorization without
accessing training data, we propose a novel approach, named ROME, wherein
memorization is explored by comparing disparities between memorized and
non-memorized samples. Specifically, the selected samples are first categorized
into memorized and non-memorized groups, and the examples in the two groups are
then compared from the perspectives of text, probability, and hidden state.
Experimental findings reveal disparities in factors including word length,
part of speech, word frequency, and mean and variance, to name a few.
| 2,024 | Computation and Language |
Large Language Models for Simultaneous Named Entity Extraction and
Spelling Correction | Language Models (LMs), such as BERT, have been shown to perform well on the
task of identifying Named Entities (NE) in text. A BERT LM is typically used as
a classifier to classify individual tokens in the input text, or to classify
spans of tokens, as belonging to one of a set of possible NE categories.
In this paper, we hypothesise that decoder-only Large Language Models (LLMs)
can also be used generatively to extract both the NE, as well as potentially
recover the correct surface form of the NE, where any spelling errors that were
present in the input text get automatically corrected.
We fine-tune two BERT LMs as baselines, as well as eight open-source LLMs, on
the task of producing NEs from text that was obtained by applying Optical
Character Recognition (OCR) to images of Japanese shop receipts; in this work,
we do not attempt to find or evaluate the location of NEs in the text.
We show that the best fine-tuned LLM performs as well as, or slightly better
than, the best fine-tuned BERT LM, although the differences are not
significant. However, the best LLM is also shown to correct OCR errors in some
cases, as initially hypothesised.
| 2,024 | Computation and Language |
Standardizing the Measurement of Text Diversity: A Tool and a
Comparative Analysis of Scores | The diversity across outputs generated by large language models shapes the
perception of their quality and utility. Prompt leaks, templated answer
structure, and canned responses across different interactions are readily
noticed by people, but there is no standard score to measure this aspect of
model behavior. In this work we empirically investigate diversity scores on
English texts. We find that computationally efficient compression algorithms
capture information similar to what is measured by slow-to-compute $n$-gram
overlap homogeneity scores. Further, a combination of measures -- compression
ratios, self-repetition of long $n$-grams, Self-BLEU, and BERTScore -- is
sufficient to report, as these measures have low mutual correlation with each other. The
applicability of scores extends beyond analysis of generative models; for
example, we highlight applications on instruction-tuning datasets and
human-produced texts. We release a diversity score package to facilitate
research and invite consistency across reports.
| 2,024 | Computation and Language |
Modeling the Quality of Dialogical Explanations | Explanations are pervasive in our lives. Mostly, they occur in dialogical
form where an {\em explainer} discusses a concept or phenomenon of interest
with an {\em explainee}. Leaving the explainee with a clear understanding is
not straightforward due to the knowledge gap between the two participants.
Previous research looked at the interaction of explanation moves, dialogue
acts, and topics in successful dialogues with expert explainers. However,
daily-life explanations often fail, raising the question of what makes a
dialogue successful. In this work, we study explanation dialogues in terms of
the interactions between the explainer and explainee and how they correlate
with the quality of explanations in terms of a successful understanding on the
explainee's side. In particular, we first construct a corpus of 399 dialogues
from the Reddit forum {\em Explain Like I am Five} and annotate it for
interaction flows and explanation quality. We then analyze the interaction
flows, comparing them to those appearing in expert dialogues. Finally, we
encode the interaction flows using two language models that can handle long
inputs, and we provide empirical evidence for the effectiveness boost gained
through the encoding in predicting the success of explanation dialogues.
| 2,024 | Computation and Language |
A Bit of a Problem: Measurement Disparities in Dataset Sizes Across
Languages | How should text dataset sizes be compared across languages? Even for
content-matched (parallel) corpora, UTF-8 encoded text can require a
dramatically different number of bytes for different languages. In our work, we
define the byte premium between two languages as the ratio of bytes used to
encode content-matched text in those languages. We compute byte premiums for
1155 languages, and we use linear regressions to estimate byte premiums for
other languages. We release a tool to obtain byte premiums for any two
languages, enabling comparisons of dataset sizes across languages for more
equitable multilingual model development and data practices.
| 2,024 | Computation and Language |
Self-Consistent Decoding for More Factual Open Responses | Self-consistency has emerged as a powerful method for improving the accuracy
of short answers generated by large language models. As previously defined, it
only concerns the accuracy of a final answer parsed from generated text. In
this work, we extend the idea to open response generation, by integrating
voting into the decoding method. Each output sentence is selected from among
multiple samples, conditioning on the previous selections, based on a simple
token overlap score. We compare this "Sample & Select" method to greedy
decoding, beam search, nucleus sampling, and the recently introduced
hallucination avoiding decoders of DoLA, P-CRR, and S-CRR. We show that Sample
& Select improves factuality by a 30% relative margin against these decoders in
NLI-based evaluation on the subsets of CNN/DM and XSum used in the FRANK
benchmark, while maintaining comparable ROUGE-1 F1 scores against reference
summaries. We collect human verifications of the generated summaries,
confirming the factual superiority of our method.
| 2,024 | Computation and Language |
Few-Shot Relation Extraction with Hybrid Visual Evidence | The goal of few-shot relation extraction is to predict relations between named
entities in a sentence when only a few labeled instances are available for
training. Existing few-shot relation extraction methods focus on uni-modal
information such as text only. This reduces performance when there are no clear
contexts between the named entities described in the text. We propose a multi-modal
few-shot relation extraction model (MFS-HVE) that leverages both textual and
visual semantic information to learn a multi-modal representation jointly. The
MFS-HVE includes semantic feature extractors and multi-modal fusion components.
The MFS-HVE semantic feature extractors are developed to extract both textual
and visual features. The visual features include global image features and
local object features within the image. The MFS-HVE multi-modal fusion unit
integrates information from various modalities using image-guided attention,
object-guided attention, and hybrid feature attention to fully capture the
semantic interaction between visual regions of images and relevant texts.
Extensive experiments conducted on two public datasets demonstrate that
semantic visual information significantly improves the performance of few-shot
relation prediction.
| 2,024 | Computation and Language |
Dialect prejudice predicts AI decisions about people's character,
employability, and criminality | Hundreds of millions of people now interact with language models, with uses
ranging from serving as a writing aid to informing hiring decisions. Yet these
language models are known to perpetuate systematic racial prejudices, making
their judgments biased in problematic ways about groups like African Americans.
While prior research has focused on overt racism in language models, social
scientists have argued that racism with a more subtle character has developed
over time. It is unknown whether this covert racism manifests in language
models. Here, we demonstrate that language models embody covert racism in the
form of dialect prejudice: we extend research showing that Americans hold
raciolinguistic stereotypes about speakers of African American English and find
that language models have the same prejudice, exhibiting covert stereotypes
that are more negative than any human stereotypes about African Americans ever
experimentally recorded, although closest to the ones from before the civil
rights movement. By contrast, the language models' overt stereotypes about
African Americans are much more positive. We demonstrate that dialect prejudice
has the potential for harmful consequences by asking language models to make
hypothetical decisions about people, based only on how they speak. Language
models are more likely to suggest that speakers of African American English be
assigned less prestigious jobs, be convicted of crimes, and be sentenced to
death. Finally, we show that existing methods for alleviating racial bias in
language models such as human feedback training do not mitigate the dialect
prejudice, but can exacerbate the discrepancy between covert and overt
stereotypes, by teaching language models to superficially conceal the racism
that they maintain on a deeper level. Our findings have far-reaching
implications for the fair and safe employment of language technology.
| 2,024 | Computation and Language |
Mitigating Reversal Curse via Semantic-aware Permutation Training | While large language models (LLMs) have achieved impressive performance
across diverse tasks, recent studies showcase that causal LLMs suffer from the
"reversal curse". It is a typical example that the model knows "A's father is
B", but is unable to reason "B's child is A". This limitation poses a challenge
to the advancement of artificial general intelligence (AGI), as it suggests a
gap in the models' ability to comprehend and apply bidirectional reasoning. In
this paper, we first conduct substantial evaluation and identify that the root
cause of the reversal curse lies in the different word order between the
training and inference stage, namely, the poor ability of causal language
models to predict antecedent words within the training data. Accordingly,
permutation on the training data is considered as a potential solution, since
this can make the model predict antecedent words or tokens. However, previous
permutation methods may disrupt complete phrases or entities, thereby posing
challenges for the model to comprehend and learn from training data. To address
this issue, we propose Semantic-aware Permutation Training (SPT), which
segments the training sentences into semantic units
(i.e., entities or phrases) with an assistant language model and permutes
these units before feeding them into the model. Extensive experiments demonstrate
that SPT effectively mitigates the reversal curse since the performance on
reversed questions approximates that on the forward ones, and significantly
advances the performance of existing works.
| 2,024 | Computation and Language |
PRECISE Framework: GPT-based Text For Improved Readability, Reliability,
and Understandability of Radiology Reports For Patient-Centered Care | This study introduces and evaluates the PRECISE framework, utilizing OpenAI's
GPT-4 to enhance patient engagement by providing clearer and more accessible
chest X-ray reports at a sixth-grade reading level. The framework was tested on
500 reports, demonstrating significant improvements in readability,
reliability, and understandability. Statistical analyses confirmed the
effectiveness of the PRECISE approach, highlighting its potential to foster
patient-centric care delivery in healthcare decision-making.
| 2,024 | Computation and Language |
$\textit{L+M-24}$: Building a Dataset for Language + Molecules @ ACL
2024 | Language-molecule models have emerged as an exciting direction for molecular
discovery and understanding. However, training these models is challenging due
to the scarcity of molecule-language pair datasets. At this point, datasets
have been released which are 1) small and scraped from existing databases, 2)
large but noisy and constructed by performing entity linking on the scientific
literature, and 3) built by converting property prediction datasets to natural
language using templates. In this document, we detail the $\textit{L+M-24}$
dataset, which has been created for the Language + Molecules Workshop shared
task at ACL 2024. In particular, $\textit{L+M-24}$ is designed to focus on
three key benefits of natural language in molecule design: compositionality,
functionality, and abstraction.
| 2,024 | Computation and Language |
Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large
Language Models | Humor is a fundamental facet of human cognition and interaction. Yet, despite
recent advances in natural language processing, humor detection remains a
challenging task that is complicated by the scarcity of datasets that pair
humorous texts with similar non-humorous counterparts. In our work, we
investigate whether large language models (LLMs) can generate synthetic data
for humor detection via editing texts. We benchmark LLMs on an existing human
dataset and show that current LLMs display an impressive ability to `unfun'
jokes, as judged by humans and as measured on the downstream task of humor
detection. We extend our approach to a code-mixed English-Hindi humor dataset,
where we find that GPT-4's synthetic data is highly rated by bilingual
annotators and provides challenging adversarial examples for humor classifiers.
| 2,024 | Computation and Language |
Executing Natural Language-Described Algorithms with Large Language
Models: An Investigation | Executing computer programs described in natural language has long been a
pursuit of computer science. With the advent of enhanced natural language
understanding capabilities exhibited by large language models (LLMs), the path
toward this goal has been illuminated. In this paper, we seek to examine the
capacity of present-day LLMs to comprehend and execute algorithms outlined in
natural language. We established an algorithm test set sourced from
Introduction to Algorithms, a well-known textbook that contains many
representative, widely used algorithms. To systematically assess LLMs' code
execution abilities, we selected 30 algorithms, generated 300 random-sampled
instances in total, and evaluated whether popular LLMs can understand and
execute these algorithms. Our findings reveal that LLMs, notably GPT-4, can
effectively execute programs described in natural language, as long as no heavy
numeric computation is involved. We believe our findings contribute to
evaluating LLMs' code execution abilities and will encourage further
investigation into and application of the computational power of LLMs.
| 2,024 | Computation and Language |
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | Large language models (LLMs) are displaying emergent abilities for math
reasoning tasks, and there is growing attention on enhancing the ability of
open-source LLMs through supervised fine-tuning (SFT). In this paper, we aim to
explore a general data strategy for supervised data to help optimize and expand
math reasoning ability. Firstly, we determine the ability boundary of reasoning
path augmentation by identifying these paths' minimal optimal set. Secondly, we
validate that different abilities of the model can be cumulatively enhanced by a
Mix of Minimal Optimal Sets of the corresponding types of data, while our MMOS
models achieve SOTA performance across a series of base models at much lower
construction cost. Besides, we point out that GSM-HARD is not really hard and that
today's LLMs no longer lack numerical robustness. We also provide an Auto
Problem Generator for robustness testing and educational applications. Our code
and data are publicly available at https://github.com/cyzhh/MMOS.
| 2,024 | Computation and Language |
Brain-Inspired Two-Stage Approach: Enhancing Mathematical Reasoning by
Imitating Human Thought Processes | Although large language models demonstrate emergent abilities in solving math
word problems, they still struggle with complex multi-step mathematical
reasoning. To improve model performance on mathematical reasoning tasks,
previous work has conducted supervised fine-tuning on open-source models by
improving the quality and quantity of data. In this paper, we propose a novel
approach, named Brain, to imitate human thought processes to enhance
mathematical reasoning abilities, using the Frontal Lobe Model to generate
plans, and then employing the Parietal Lobe Model to generate and execute code
to obtain answers. First, we achieve SOTA performance in comparison with Code
LLaMA 7B based models through this method. Secondly, we find that plans can be
explicitly extracted from natural language, code, or formal language. Our code
and data are publicly available at https://github.com/cyzhh/Brain.
| 2,024 | Computation and Language |
Uncovering Customer Issues through Topological Natural Language Analysis | E-commerce companies deal with a high volume of customer service requests
daily. While a simple annotation system is often used to summarize the topics
of customer contacts, thoroughly exploring each specific issue can be
challenging. This presents a critical concern, especially during an emerging
outbreak where companies must quickly identify and address specific issues. To
tackle this challenge, we propose a novel machine learning algorithm that
leverages natural language techniques and topological data analysis to monitor
emerging and trending customer issues. Our approach involves an end-to-end deep
learning framework that simultaneously tags the primary question sentence of
each customer's transcript and generates sentence embedding vectors. We then
whiten the embedding vectors and use them to construct an undirected graph.
From there, we define trending and emerging issues based on the topological
properties of each transcript. We have validated our results through various
methods and found that they are highly consistent with news sources.
| 2,024 | Computation and Language |
IPED: An Implicit Perspective for Relational Triple Extraction based on
Diffusion Model | Relational triple extraction is a fundamental task in the field of
information extraction, and a promising framework based on table filling has
recently gained attention as a potential baseline for entity relation
extraction. However, inherent shortcomings such as redundant information and
incomplete triple recognition remain problematic. To address these challenges,
we propose an Implicit Perspective for relational triple Extraction based on
Diffusion model (IPED), an innovative approach for extracting relational
triples. Our classifier-free solution adopts an implicit strategy using block
coverage to complete the tables, avoiding the limitations of explicit tagging
methods. Additionally, we introduce a generative model structure, the
block-denoising diffusion model, to collaborate with our implicit perspective
and effectively circumvent redundant information disruptions. Experimental
results on two popular datasets demonstrate that IPED achieves state-of-the-art
performance while gaining superior inference speed and low computational
complexity. To support future research, we have made our source code publicly
available online.
| 2,024 | Computation and Language |
Abdelhak at SemEval-2024 Task 9: Decoding Brainteasers, The Efficacy of
Dedicated Models Versus ChatGPT | This study introduces a dedicated model aimed at solving the BRAINTEASER task
(SemEval-2024 Task 9), a novel challenge designed to assess models' lateral thinking capabilities
through sentence and word puzzles. Our model demonstrates remarkable efficacy,
securing Rank 1 in sentence puzzle solving during the test phase with an
overall score of 0.98. Additionally, we explore the comparative performance of
ChatGPT, specifically analyzing how variations in temperature settings affect
its ability to engage in lateral thinking and problem-solving. Our findings
indicate a notable performance disparity between the dedicated model and
ChatGPT, underscoring the potential of specialized approaches in enhancing
creative reasoning in AI.
| 2,024 | Computation and Language |
LoRA Meets Dropout under a Unified Framework | With their remarkable capabilities, large language models (LLMs) have emerged
as essential elements in numerous NLP applications, while parameter-efficient
finetuning, especially LoRA, has gained popularity as a lightweight approach
for model customization. Meanwhile, various dropout methods, initially designed
for full finetuning with all the parameters updated, alleviate overfitting
associated with excessive parameter redundancy. Hence, a possible contradiction
arises between the negligible trainable parameters of LoRA and the effectiveness of
previous dropout methods, which has been largely overlooked. To fill this gap,
we first confirm that parameter-efficient LoRA is also overfitting-prone. We
then revisit transformer-specific dropout methods, and establish their
equivalence and distinctions mathematically and empirically. Building upon this
comparative analysis, we introduce a unified framework for a comprehensive
investigation, which instantiates these methods based on dropping position,
structural pattern, and compensation measure. Through this framework, we reveal
their new preferences and performance comparisons when only limited trainable
parameters are involved. This framework also allows us to amalgamate the
most favorable aspects into a novel dropout method named HiddenKey. Extensive
experiments verify the remarkable superiority and sufficiency of HiddenKey
across multiple models and tasks, which highlights it as the preferred approach
for high-performance and parameter-efficient finetuning of LLMs.
| 2,024 | Computation and Language |
UrbanGPT: Spatio-Temporal Large Language Models | Spatio-temporal prediction aims to forecast and gain insights into the
ever-changing dynamics of urban environments across both time and space. Its
purpose is to anticipate future patterns, trends, and events in diverse facets
of urban life, including transportation, population movement, and crime rates.
Although numerous efforts have been dedicated to developing neural network
techniques for accurate predictions on spatio-temporal data, it is important to
note that many of these methods heavily depend on having sufficient labeled
data to generate precise spatio-temporal representations. Unfortunately, the
issue of data scarcity is pervasive in practical urban sensing scenarios.
Consequently, it becomes necessary to build a spatio-temporal model with strong
generalization capabilities across diverse spatio-temporal learning scenarios.
Taking inspiration from the remarkable achievements of large language models
(LLMs), our objective is to create a spatio-temporal LLM that can exhibit
exceptional generalization capabilities across a wide range of downstream urban
tasks. To achieve this objective, we present the UrbanGPT, which seamlessly
integrates a spatio-temporal dependency encoder with the instruction-tuning
paradigm. This integration enables LLMs to comprehend the complex
inter-dependencies across time and space, facilitating more comprehensive and
accurate predictions under data scarcity. To validate the effectiveness of our
approach, we conduct extensive experiments on various public datasets, covering
different spatio-temporal prediction tasks. The results consistently
demonstrate that our UrbanGPT, with its carefully designed architecture,
consistently outperforms state-of-the-art baselines. These findings highlight
the potential of building large language models for spatio-temporal learning,
particularly in zero-shot scenarios where labeled data is scarce.
| 2,024 | Computation and Language |
RAM-EHR: Retrieval Augmentation Meets Clinical Predictions on Electronic
Health Records | We present RAM-EHR, a Retrieval AugMentation pipeline to improve clinical
predictions on Electronic Health Records (EHRs). RAM-EHR first collects
multiple knowledge sources, converts them into text format, and uses dense
retrieval to obtain information related to medical concepts. This strategy
addresses the difficulties associated with complex names for the concepts.
RAM-EHR then augments the local EHR predictive model co-trained with
consistency regularization to capture complementary information from patient
visits and summarized knowledge. Experiments on two EHR datasets show the
efficacy of RAM-EHR over previous knowledge-enhanced baselines (3.4% gain in
AUROC and 7.2% gain in AUPR), emphasizing the effectiveness of the summarized
knowledge from RAM-EHR for clinical prediction tasks. The code will be
published at \url{https://github.com/ritaranx/RAM-EHR}.
| 2,024 | Computation and Language |
DenseMamba: State Space Models with Dense Hidden Connection for
Efficient Large Language Models | Large language models (LLMs) face a daunting challenge due to the excessive
computational and memory requirements of the commonly used Transformer
architecture. While state space models (SSMs) are a new type of foundational
network architecture offering lower computational complexity, their performance
has yet to fully rival that of Transformers. This paper introduces DenseSSM, a
novel approach to enhance the flow of hidden information between layers in
SSMs. By selectively integrating shallow-layer hidden states into deeper layers,
DenseSSM retains fine-grained information crucial for the final output. Despite
the added dense connections, DenseSSM still maintains training parallelizability
and inference efficiency. The proposed method is widely applicable to
various SSM types such as RetNet and Mamba. With similar model size, DenseSSM
achieves significant improvements, exemplified by DenseRetNet outperforming the
original RetNet with up to 5% accuracy improvement on public benchmarks. Code
is available at https://github.com/WailordHe/DenseSSM.
| 2,024 | Computation and Language |
Social Media as a Sensor: Analyzing Twitter Data for Breast Cancer
Medication Effects Using Natural Language Processing | Breast cancer is a significant public health concern and is the leading cause
of cancer-related deaths among women. Despite advances in breast cancer
treatments, medication non-adherence remains a major problem. As electronic
health records do not typically capture patient-reported outcomes that may
reveal information about medication-related experiences, social media presents
an attractive resource for enhancing our understanding of the patients'
treatment experiences. In this paper, we developed natural language processing
(NLP) based methodologies to study information posted by an automatically
curated breast cancer cohort from social media. We employed a transformer-based
classifier to identify breast cancer patients/survivors on X (Twitter) based on
their self-reported information, and we collected longitudinal data from their
profiles. We then designed a multi-layer rule-based model to develop a breast
cancer therapy-associated side effect lexicon and detect patterns of medication
usage and associated side effects among breast cancer patients. 1,454,637 posts
were available from 583,962 unique users, of which 62,042 were detected as
breast cancer members using our transformer-based model. 198 cohort members
mentioned breast cancer medications with tamoxifen as the most common. Our side
effect lexicon identified well-known side effects of hormone and chemotherapy.
Furthermore, it discovered subjective feelings towards cancer and medications,
which may suggest a pre-clinical phase of side effects or emotional distress.
This analysis highlighted not only the utility of NLP techniques in
unstructured social media data to identify self-reported breast cancer posts,
medication usage patterns, and treatment side effects but also the richness of
social data on such clinical questions.
| 2,024 | Computation and Language |
Information Flow Routes: Automatically Interpreting Language Models at
Scale | Information flows along routes inside the network via mechanisms implemented in
the model. These routes can be represented as graphs where nodes correspond to
token representations and edges to operations inside the network. We
automatically build these graphs in a top-down manner, for each prediction
leaving only the most important nodes and edges. In contrast to the existing
workflows relying on activation patching, we do this through attribution: this
allows us to efficiently uncover existing circuits with just a single forward
pass. Additionally, the applicability of our method is far beyond patching: we
do not need a human to carefully design prediction templates, and we can
extract information flow routes for any prediction (not just the ones among the
allowed templates). As a result, we can talk about model behavior in general,
for specific types of predictions, or different domains. We experiment with
Llama 2 and show that some attention heads, e.g. previous token heads and
subword merging heads, play an important role overall. Next, we find similarities
in Llama 2 behavior when handling tokens of the same part of speech. Finally,
we show that some model components can be specialized on domains such as coding
or multilingual texts.
| 2,024 | Computation and Language |
Comparing effectiveness of regularization methods on text
classification: Simple and complex model in data shortage situation | Text classification is the task of assigning a document to a predefined
class. However, it is expensive to acquire enough labeled documents or to label
them. In this paper, we study the regularization methods' effects on various
classification models when only a few labeled data are available. We compare a
simple but effective word embedding-based model with complex
models (CNN and BiLSTM). In supervised learning, adversarial training can
further regularize the model. When an unlabeled dataset is available, we can
regularize the model using semi-supervised learning methods such as the Pi
model and virtual adversarial training. We evaluate the regularization effects
on four text classification datasets (AG news, DBpedia, Yahoo! Answers, Yelp
Polarity), using only 0.1% to 0.5% of the original labeled training documents.
The simple model performs relatively well in fully supervised learning, but
with the help of adversarial training and semi-supervised learning, both simple
and complex models can be regularized, showing better results for complex
models. Although the simple model is robust to overfitting, a complex model
with well-designed prior beliefs can also be robust to overfitting.
| 2,024 | Computation and Language |
LLMGuard: Guarding Against Unsafe LLM Behavior | Although the rise of Large Language Models (LLMs) in enterprise settings
brings new opportunities and capabilities, it also brings challenges, such as
the risk of generating inappropriate, biased, or misleading content that
violates regulations and can raise legal concerns. To alleviate this, we present
"LLMGuard", a tool that monitors user interactions with an LLM application and
flags content against specific behaviours or conversation topics. To do this
robustly, LLMGuard employs an ensemble of detectors.
| 2,024 | Computation and Language |
Self-Refinement of Language Models from External Proxy Metrics Feedback | It is often desirable for Large Language Models (LLMs) to capture multiple
objectives when providing a response. In document-grounded response generation,
for example, agent responses are expected to be relevant to a user's query
while also being grounded in a given document. In this paper, we introduce
Proxy Metric-based Self-Refinement (ProMiSe), which enables an LLM to refine
its own initial response along key dimensions of quality guided by external
metrics feedback, yielding an overall better final response. ProMiSe leverages
feedback on response quality through principle-specific proxy metrics, and
iteratively refines its response one principle at a time. We apply ProMiSe to
open source language models Flan-T5-XXL and Llama-2-13B-Chat, to evaluate its
performance on document-grounded question answering datasets, MultiDoc2Dial and
QuAC, demonstrating that self-refinement improves response quality. We further
show that fine-tuning Llama-2-13B-Chat on the synthetic dialogue data generated
by ProMiSe yields significant performance improvements over the zero-shot
baseline as well as a supervised fine-tuned model on human annotated data.
| 2,024 | Computation and Language |
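A minimal sketch of principle-by-principle self-refinement guided by external proxy metrics, in the spirit of ProMiSe; the `generate` and `proxy_score` callables, the prompt templates, and the threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of proxy-metric-guided self-refinement (not the authors' code).
def refine_response(generate, proxy_score, question, document, principles,
                    max_rounds=3, threshold=0.8):
    """Iteratively refine a response one quality principle at a time."""
    response = generate(f"Answer the question using the document.\n"
                        f"Document: {document}\nQuestion: {question}")
    for principle in principles:                  # e.g. "relevance", "groundedness"
        for _ in range(max_rounds):
            score = proxy_score(principle, question, document, response)
            if score >= threshold:                # good enough on this principle
                break
            response = generate(
                f"Improve the answer with respect to {principle}.\n"
                f"Document: {document}\nQuestion: {question}\n"
                f"Current answer: {response}")
    return response
```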
Deep Learning Detection Method for Large Language Models-Generated
Scientific Content | Large Language Models (LLMs), such as GPT-3 and BERT, reshape how textual
content is written and communicated. These models have the potential to
generate scientific content that is indistinguishable from that written by
humans. Hence, LLMs carry severe consequences for the scientific community,
which relies on the integrity and reliability of publications. This research
paper presents a novel ChatGPT-generated scientific text detection method,
AI-Catcher. AI-Catcher integrates two deep learning models, multilayer
perceptron (MLP) and convolutional neural networks (CNN). The MLP learns the
feature representations of the linguistic and statistical features. The CNN
extracts high-level representations of the sequential patterns from the textual
content. AI-Catcher is a multimodal model that fuses hidden patterns derived
from MLP and CNN. In addition, we collect AIGTxt, a new ChatGPT-generated
scientific text dataset intended to support AI-generated text detection tools. AIGTxt
contains 3000 records collected from published academic articles across ten
domains and divided into three classes: Human-written, ChatGPT-generated, and
Mixed text. Several experiments are conducted to evaluate the performance of
AI-Catcher. The comparative results demonstrate the capability of AI-Catcher to
distinguish between human-written and ChatGPT-generated scientific text more
accurately than alternative methods. On average, AI-Catcher improved accuracy
by 37.4%.
| 2,024 | Computation and Language |
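A hedged PyTorch sketch of an MLP + CNN fusion classifier of the kind AI-Catcher describes; the feature dimensionality, embedding size, kernel width, and layer sizes are assumptions, only the two-branch fusion and the three-class output follow the abstract.

```python
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    def __init__(self, n_stat_features=32, vocab_size=30522, emb_dim=128, n_classes=3):
        super().__init__()
        # MLP branch over hand-crafted linguistic/statistical features
        self.mlp = nn.Sequential(nn.Linear(n_stat_features, 64), nn.ReLU(),
                                 nn.Linear(64, 32), nn.ReLU())
        # CNN branch over the token sequence
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 64, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveMaxPool1d(1)
        # Fusion of the two hidden representations
        self.classifier = nn.Linear(32 + 64, n_classes)

    def forward(self, stat_features, token_ids):
        h_mlp = self.mlp(stat_features)                                   # (B, 32)
        h_cnn = self.pool(torch.relu(
            self.conv(self.emb(token_ids).transpose(1, 2)))).squeeze(-1)  # (B, 64)
        return self.classifier(torch.cat([h_mlp, h_cnn], dim=-1))

logits = FusionDetector()(torch.randn(4, 32), torch.randint(0, 30522, (4, 256)))
```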
CLLMs: Consistency Large Language Models | Parallel decoding methods such as Jacobi decoding show promise for more
efficient LLM inference, as they break the sequential nature of the LLM decoding
process and transform it into parallelizable computation. However, in
practice, it achieves little speedup compared to traditional autoregressive
(AR) decoding, primarily because Jacobi decoding seldom accurately predicts
more than one token in a single fixed-point iteration step. To address this, we
develop a new approach aimed at realizing fast convergence from any state to
the fixed point on a Jacobi trajectory. This is accomplished by refining the
target LLM to consistently predict the fixed point given any state as input.
Extensive experiments demonstrate the effectiveness of our method, showing
2.4$\times$ to 3.4$\times$ improvements in generation speed while preserving
generation quality across both domain-specific and open-domain benchmarks.
| 2,024 | Computation and Language |
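A rough illustration of the Jacobi fixed-point decoding loop that CLLMs builds on (not the paper's consistency-training procedure); `next_tokens` is a stand-in for a greedy next-token call on token-id lists, and in practice all positions are re-predicted in one batched forward pass rather than one call per position.

```python
# Illustrative sketch of Jacobi (fixed-point) decoding with a stand-in LLM call.
def jacobi_decode(next_tokens, prompt, n_tokens, pad_id=0, max_iters=50):
    """Refine a whole block of token guesses in parallel until it reaches a fixed point."""
    guess = [pad_id] * n_tokens                      # initial (arbitrary) guesses
    for _ in range(max_iters):
        # one parallel step: position i is re-predicted from prompt + current guesses < i
        new_guess = [next_tokens(prompt + guess[:i]) for i in range(n_tokens)]
        if new_guess == guess:                       # fixed point == greedy AR output
            break
        guess = new_guess
    return guess
```

The fixed point coincides with greedy autoregressive decoding; the paper's contribution is training the model so that this loop converges in far fewer iterations.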
EyeGPT: Ophthalmic Assistant with Large Language Models | Artificial intelligence (AI) has gained significant attention in healthcare
consultation due to its potential to improve clinical workflow and enhance
medical communication. However, owing to the complex nature of medical
information, large language models (LLM) trained with general world knowledge
might not possess the capability to tackle medical-related tasks at an expert
level. Here, we introduce EyeGPT, a specialized LLM designed specifically for
ophthalmology, using three optimization strategies including role-playing,
finetuning, and retrieval-augmented generation. In particular, we propose a
comprehensive evaluation framework that encompasses a diverse dataset, covering
various subspecialties of ophthalmology, different users, and diverse inquiry
intents. Moreover, we considered multiple evaluation metrics, including
accuracy, understandability, trustworthiness, empathy, and the proportion of
hallucinations. By assessing the performance of different EyeGPT variants, we
identify the most effective one, which exhibits comparable levels of
understandability, trustworthiness, and empathy to human ophthalmologists (all
Ps>0.05). Overall, our study provides valuable insights for future research,
facilitating comprehensive comparisons and evaluations of different strategies
for developing specialized LLMs in ophthalmology. The potential benefits
include enhancing the patient experience in eye care and optimizing
ophthalmologists' services.
| 2,024 | Computation and Language |
NewsBench: Systematic Evaluation of LLMs for Writing Proficiency and
Safety Adherence in Chinese Journalistic Editorial Applications | This study presents NewsBench, a novel benchmark framework developed to
evaluate the capability of Large Language Models (LLMs) in Chinese Journalistic
Writing Proficiency (JWP) and their Safety Adherence (SA), addressing the gap
between journalistic ethics and the risks associated with AI utilization.
Comprising 1,267 tasks across 5 editorial applications, 7 aspects (including
safety and journalistic writing with 4 detailed facets), and spanning 24 news
topic domains, NewsBench employs two GPT-4-based automatic evaluation
protocols validated by human assessment. Our comprehensive analysis of 11 LLMs
highlighted GPT-4 and ERNIE Bot as top performers, yet revealed a relative
deficiency in journalistic ethics adherence during creative writing tasks. These
findings underscore the need for enhanced ethical guidance in AI-generated
journalistic content, marking a step forward in aligning AI capabilities with
journalistic standards and safety considerations.
| 2,024 | Computation and Language |
SoftTiger: A Clinical Foundation Model for Healthcare Workflows | We release and introduce SoftTiger, a clinical large language model (CLaM)
designed as a foundation model for healthcare workflows. The narrative and
unstructured nature of clinical notes is a major obstacle for healthcare
intelligentization. We address a critical problem of structuring clinical notes
into clinical data, according to international interoperability standards. We
collect and annotate data for three critical subtasks, namely, international
patient summary, clinical impression, and medical encounter. We then perform
supervised fine-tuning of a state-of-the-art LLM using public and credentialed clinical data.
The training is orchestrated in a way that the target model can first support
basic clinical tasks such as abbreviation expansion and temporal information
extraction, and then learn to perform more complex downstream clinical tasks
such as impression and encounter summary. Moreover, we address several
modeling challenges in the healthcare context, e.g., an extra-long context window.
Our blind pairwise evaluation shows that SoftTiger outperforms other popular
open-source models and GPT-3.5, is comparable to Gemini-pro, and shows only a mild
gap to GPT-4. We believe that LLMs may become a stepping stone towards healthcare
digitalization and democratization. Therefore, we publicly release SoftTiger
models at scales of 13 billion and 70 billion parameters, as well as datasets
and code for our innovative scalable evaluation, hopefully, making a
significant contribution to the healthcare industry.
| 2,024 | Computation and Language |
Word Order and World Knowledge | Word order is an important concept in natural language, and in this work, we
study how word order affects the induction of world knowledge from raw text
using language models. We use word analogies to probe for such knowledge.
Specifically, in addition to the natural word order, we first extract texts in
six fixed word orders from five languages and then pretrain
the language models on these texts. Finally, we analyze the experimental
results of the fixed word orders on word analogies and show that i) certain
fixed word orders consistently outperform or underperform others, though the
specifics vary across languages, and ii) the Wov2Lex hypothesis does not hold in
pre-trained language models, and the natural word order typically yields
mediocre results. The source code will be made publicly available at
https://github.com/lshowway/probing_by_analogy.
| 2,024 | Computation and Language |
Margin Discrepancy-based Adversarial Training for Multi-Domain Text
Classification | Multi-domain text classification (MDTC) endeavors to harness available
resources from correlated domains to enhance the classification accuracy of the
target domain. Presently, most MDTC approaches that embrace adversarial
training and the shared-private paradigm exhibit cutting-edge performance.
Unfortunately, these methods face a non-negligible challenge: the absence of
theoretical guarantees in the design of MDTC algorithms. The dearth of
theoretical underpinning poses a substantial impediment to the advancement of
MDTC algorithms. To tackle this problem, we first provide a theoretical
analysis of MDTC by decomposing the MDTC task into multiple domain adaptation
tasks. We incorporate the margin discrepancy as the measure of domain
divergence and establish a new generalization bound based on Rademacher
complexity. Subsequently, we propose a margin discrepancy-based adversarial
training (MDAT) approach for MDTC, in accordance with our theoretical analysis.
To validate the efficacy of the proposed MDAT method, we conduct empirical
studies on two MDTC benchmarks. The experimental results demonstrate that our
MDAT approach surpasses state-of-the-art baselines on both datasets.
| 2,024 | Computation and Language |
DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large
Language Models | Although large language models (LLMs) have achieved significant success in recent
years, the hallucination issue remains a challenge, and numerous benchmarks have been
proposed to detect hallucinations. Nevertheless, some of these benchmarks
are not naturally generated by LLMs but are intentionally induced. Also, many
merely focus on the factuality hallucination while ignoring the faithfulness
hallucination. Additionally, although dialogue pattern is more widely utilized
in the era of LLMs, current benchmarks only concentrate on sentence-level and
passage-level hallucination. In this study, we propose DiaHalu, the first
dialogue-level hallucination evaluation benchmark to our knowledge. Initially,
we integrate the collected topics into system prompts and facilitate a dialogue
between two ChatGPT-3.5 instances. Subsequently, we manually modify the contents that do
not adhere to human language conventions and then have LLMs re-generate,
simulating authentic human-machine interaction scenarios. Finally, professional
scholars annotate all the samples in the dataset. DiaHalu covers four common
multi-turn dialogue domains and five hallucination subtypes, extended from
factuality and faithfulness hallucination. Experiments with several well-known
LLMs and detection methods on the dataset show that DiaHalu is a challenging
benchmark, holding significant value for further research.
| 2,024 | Computation and Language |
MediSwift: Efficient Sparse Pre-trained Biomedical Language Models | Large language models (LLMs) are typically trained on general source data for
various domains, but a recent surge in domain-specific LLMs has shown their
potential to outperform general-purpose models in domain-specific tasks (e.g.,
biomedicine). Although domain-specific pre-training enhances efficiency and
leads to smaller models, the computational costs of training these LLMs remain
high, posing budgeting challenges. We introduce MediSwift, a suite of
biomedical LMs that leverage sparse pre-training on domain-specific biomedical
text data. By inducing up to 75% weight sparsity during the pre-training phase,
MediSwift achieves a 2-2.5x reduction in training FLOPs. Notably, all sparse
pre-training was performed on the Cerebras CS-2 system, which is specifically
designed to realize the acceleration benefits from unstructured weight
sparsity, thereby significantly enhancing the efficiency of the MediSwift
models. Through subsequent dense fine-tuning and strategic soft prompting,
MediSwift models outperform existing LLMs up to 7B parameters on biomedical
tasks, setting new benchmarks with respect to the efficiency-accuracy trade-off on tasks such as
PubMedQA. Our results show that sparse pre-training, along with dense
fine-tuning and soft prompting, offers an effective method for creating
high-performing, computationally efficient models in specialized domains.
| 2,024 | Computation and Language |
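A toy numpy illustration of inducing unstructured magnitude-based weight sparsity at the level the abstract mentions (75%); the actual sparse pre-training recipe and its hardware acceleration on the Cerebras CS-2 are not reflected here.

```python
import numpy as np

def sparsify(weights, sparsity=0.75):
    """Zero out the smallest-magnitude fraction of weights; return weights and mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]          # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.random.randn(512, 512)
w_sparse, mask = sparsify(w)
print(f"density: {mask.mean():.2f}")              # roughly 0.25 after 75% sparsity
```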
AutoRD: An Automatic and End-to-End System for Rare Disease Knowledge
Graph Construction Based on Ontologies-enhanced Large Language Models | Objectives: Our objective is to create an end-to-end system called AutoRD,
which automates extracting information from clinical text about rare diseases.
We have conducted various tests to evaluate the performance of AutoRD and
highlighted its strengths and limitations in this paper.
Materials and Methods: Our system, AutoRD, is a software pipeline involving
data preprocessing, entity extraction, relation extraction, entity calibration,
and knowledge graph construction. We implement this using large language models
and medical knowledge graphs developed from open-source medical ontologies. We
quantitatively evaluate our system on entity extraction, relation extraction,
and the performance of knowledge graph construction.
Results: AutoRD achieves an overall F1 score of 47.3%, a 14.4% improvement
compared to the base LLM. In detail, AutoRD achieves an overall entity
extraction F1 score of 56.1% (rare_disease: 83.5%, disease: 35.8%,
symptom_and_sign: 46.1%, anaphor: 67.5%) and an overall relation extraction F1
score of 38.6% (produces: 34.7%, increases_risk_of: 12.4%, is_a: 37.4%,
is_acronym: 44.1%, is_synonym: 16.3%, anaphora: 57.5%). Our qualitative
experiment also demonstrates that the performance in constructing the knowledge
graph is commendable.
Discussion: AutoRD demonstrates the potential of LLM applications in rare
disease detection. This improvement is attributed to several design choices, including
the integration of ontologies-enhanced LLMs.
Conclusion: AutoRD is an automated end-to-end system for extracting rare
disease information from text to build knowledge graphs. It uses
ontologies-enhanced LLMs for a robust medical knowledge base. The superior
performance of AutoRD is validated by experimental evaluations, demonstrating
the potential of LLMs in healthcare.
| 2,024 | Computation and Language |
MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM
Hallucination Detection | In Natural Language Generation (NLG), contemporary Large Language Models
(LLMs) face several challenges, such as generating fluent yet inaccurate
outputs and reliance on fluency-centric metrics. This often leads to neural
networks exhibiting "hallucinations". The SHROOM challenge focuses on
automatically identifying these hallucinations in the generated text. To tackle
these issues, we introduce two key components, a data augmentation pipeline
incorporating LLM-assisted pseudo-labelling and sentence rephrasing, and a
voting ensemble from three models pre-trained on Natural Language Inference
(NLI) tasks and fine-tuned on diverse datasets.
| 2,024 | Computation and Language |
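A minimal sketch of the hard-voting ensemble idea over three NLI-based detectors; the detector callables below are trivial stand-ins returning 0 (no hallucination) or 1 (hallucination), not the fine-tuned models used in the shared task.

```python
from collections import Counter

def vote(detectors, premise, hypothesis):
    """Majority vote over binary hallucination predictions."""
    preds = [d(premise, hypothesis) for d in detectors]
    return Counter(preds).most_common(1)[0][0]

# usage with trivial stand-in detectors
detectors = [lambda p, h: 1, lambda p, h: 0, lambda p, h: 1]
print(vote(detectors, "source text", "generated text"))   # -> 1
```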
LocalRQA: From Generating Data to Locally Training, Testing, and
Deploying Retrieval-Augmented QA Systems | Retrieval-augmented question-answering systems combine retrieval techniques
with large language models to provide answers that are more accurate and
informative. Many existing toolkits allow users to quickly build such systems
using off-the-shelf models, but they fall short in supporting researchers and
developers to customize the model training, testing, and deployment process. We
propose LocalRQA, an open-source toolkit that features a wide selection of
model training algorithms, evaluation methods, and deployment tools curated
from the latest research. As a showcase, we build QA systems using online
documentation obtained from Databricks and Faire's websites. We find that 7B models
trained and deployed using LocalRQA reach performance similar to systems built on
OpenAI's text-ada-002 and GPT-4-turbo.
| 2,024 | Computation and Language |
Merging Text Transformer Models from Different Initializations | Recent work on one-shot permutation-based model merging has shown impressive
low- or zero-barrier mode connectivity between models from completely different
initializations. However, this line of work has not yet extended to the
Transformer architecture, despite its dominant popularity in the language
domain. Therefore, in this work, we investigate the extent to which separate
Transformer minima learn similar features, and propose a model merging
technique to investigate the relationship between these minima in the loss
landscape. The specifics of the architecture, like its residual connections,
multi-headed attention, and discrete, sequential input, require specific
interventions in order to compute model permutations that remain within the
same functional equivalence class. In merging these models with our method, we
consistently find lower loss barriers between minima compared to model
averaging for several models trained on a masked-language modeling task or
fine-tuned on a language understanding benchmark. Our results show that the
minima of these models are less sharp and isolated than previously understood,
and provide a basis for future work on merging separately trained Transformer
models.
| 2,024 | Computation and Language |
Formulation Comparison for Timeline Construction using LLMs | Constructing a timeline requires identifying the chronological order of
events in an article. In prior timeline construction datasets, temporal orders
are typically annotated by either event-to-time anchoring or event-to-event
pairwise ordering, both of which suffer from missing temporal information. To
mitigate the issue, we develop a new evaluation dataset, TimeSET, consisting of
single-document timelines with document-level order annotation. TimeSET
features saliency-based event selection and partial ordering, which enable a
practical annotation workload. Aiming to build better automatic timeline
construction systems, we propose a novel evaluation framework to compare
multiple task formulations with TimeSET by prompting open LLMs, i.e., Llama 2
and Flan-T5. Considering that identifying temporal orders of events is a core
subtask in timeline construction, we further benchmark open LLMs on existing
event temporal ordering datasets to gain a robust understanding of their
capabilities. Our experiments show that (1) NLI formulation with Flan-T5
demonstrates a strong performance among others, while (2) timeline construction
and event temporal ordering are still challenging tasks for few-shot LLMs. Our
code and data are available at https://github.com/kimihiroh/timeset.
| 2,024 | Computation and Language |
Predictions from language models for multiple-choice tasks are not
robust under variation of scoring methods | This paper systematically compares different methods of deriving item-level
predictions of language models for multiple-choice tasks. It compares scoring
methods for answer options based on free generation of responses, various
probability-based scores, a Likert-scale style rating method, and embedding
similarity. In a case study on pragmatic language interpretation, we find that
LLM predictions are not robust under variation of method choice, both within a
single LLM and across different LLMs. As this variability entails pronounced
researcher degrees of freedom in reporting results, knowledge of the
variability is crucial to secure robustness of results and research integrity.
| 2,024 | Computation and Language |
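A small illustration of how two common probability-based scoring rules can already disagree on the same multiple-choice item; the per-token log-probabilities below are made-up numbers, not model outputs.

```python
import numpy as np

options = {
    "A": np.array([-1.2, -0.4, -0.9, -0.3]),   # per-token log-probs of option A
    "B": np.array([-0.8, -0.7]),               # per-token log-probs of option B
}

def rank(score_fn):
    return max(options, key=lambda o: score_fn(options[o]))

print("sum log-prob :", rank(np.sum))    # favors the shorter option (B)
print("mean log-prob:", rank(np.mean))   # length-normalized score favors A
```

With these toy numbers the summed score prefers option B while the length-normalized score prefers option A, which is exactly the kind of method-dependence the paper reports.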
Attribute Structuring Improves LLM-Based Evaluation of Clinical Text
Summaries | Summarizing clinical text is crucial in health decision-support and clinical
research. Large language models (LLMs) have shown the potential to generate
accurate clinical text summaries, but still struggle with issues regarding
grounding and evaluation, especially in safety-critical domains such as health.
Holistically evaluating text summaries is challenging because they may contain
unsubstantiated information. Here, we explore a general mitigation framework
using Attribute Structuring (AS), which structures the summary evaluation
process. It decomposes the evaluation process into a grounded procedure that
uses an LLM for relatively simple structuring and scoring tasks, rather than
the full task of holistic summary evaluation. Experiments show that AS
consistently improves the correspondence between human annotations and
automated metrics in clinical text summarization. Additionally, AS yields
interpretations in the form of a short text span corresponding to each output,
which enables efficient human auditing, paving the way towards trustworthy
evaluation of clinical information in resource-constrained scenarios. We
release our code, prompts, and an open-source benchmark at
https://github.com/microsoft/attribute-structuring.
| 2,024 | Computation and Language |
Peacock: A Family of Arabic Multimodal Large Language Models and
Benchmarks | Multimodal large language models (MLLMs) have proven effective in a wide
range of tasks requiring complex reasoning and linguistic comprehension.
However, due to a lack of high-quality multimodal resources in languages other
than English, success of MLLMs remains relatively limited to English-based
settings. This poses significant challenges in developing comparable models for
other languages, including even those with large speaker populations such as
Arabic. To alleviate this challenge, we introduce a comprehensive family of
Arabic MLLMs, dubbed \textit{Peacock}, with strong vision and language
capabilities. Through comprehensive qualitative and quantitative analysis, we
demonstrate the solid performance of our models on various visual reasoning
tasks and further show their emerging dialectal potential. Additionally, we
introduce \textit{Henna}, a new benchmark specifically designed for assessing
MLLMs on aspects related to Arabic culture, laying the first stone for
culturally-aware Arabic MLLMs. The GitHub repository for the \textit{Peacock}
project is available at \url{https://github.com/UBC-NLP/peacock}.
| 2,024 | Computation and Language |
Reading Subtext: Evaluating Large Language Models on Short Story
Summarization with Writers | We evaluate recent Large Language Models (LLMs) on the challenging task of
summarizing short stories, which can be lengthy, and include nuanced subtext or
scrambled timelines. Importantly, we work directly with authors to ensure that
the stories have not been shared online (and therefore are unseen by the
models), and to obtain informed evaluations of summary quality using judgments
from the authors themselves. Through quantitative and qualitative analysis
grounded in narrative theory, we compare GPT-4, Claude-2.1, and LLama-2-70B. We
find that all three models make faithfulness mistakes in over 50% of summaries
and struggle to interpret difficult subtext. However, at their best, the models
can provide thoughtful thematic analysis of stories. We additionally
demonstrate that LLM judgments of summary quality do not match the feedback
from the writers.
| 2,024 | Computation and Language |
FaiMA: Feature-aware In-context Learning for Multi-domain Aspect-based
Sentiment Analysis | Multi-domain aspect-based sentiment analysis (ABSA) seeks to capture
fine-grained sentiment across diverse domains. While existing research narrowly
focuses on single-domain applications constrained by methodological limitations
and data scarcity, the reality is that sentiment naturally traverses multiple
domains. Although large language models (LLMs) offer a promising solution for
ABSA, they are difficult to integrate effectively with established techniques,
including graph-based models and linguistic features, because modifying their internal
architecture is not easy. To alleviate this problem, we propose a novel
framework, Feature-aware In-context Learning for Multi-domain ABSA (FaiMA). The
core insight of FaiMA is to utilize in-context learning (ICL) as a
feature-aware mechanism that facilitates adaptive learning in multi-domain ABSA
tasks. Specifically, we employ a multi-head graph attention network as a text
encoder optimized by heuristic rules for linguistic, domain, and sentiment
features. Through contrastive learning, we optimize sentence representations by
focusing on these diverse features. Additionally, we construct an efficient
indexing mechanism, allowing FaiMA to stably retrieve highly relevant examples
across multiple dimensions for any given input. To evaluate the efficacy of
FaiMA, we build the first multi-domain ABSA benchmark dataset. Extensive
experimental results demonstrate that FaiMA achieves significant performance
improvements in multiple domains compared to baselines, increasing F1 by 2.07%
on average. Source code and data sets are anonymously available at
https://github.com/SupritYoung/FaiMA.
| 2,024 | Computation and Language |
LLMCRIT: Teaching Large Language Models to Use Criteria | Humans follow criteria when they execute tasks, and these criteria are
directly used to assess the quality of task completion. Therefore, having
models learn to use criteria to provide feedback can help humans or models to
perform tasks better. However, existing research in this field tends to
consider only a limited set of criteria or quality assessment aspects. To fill
this gap, we propose a general framework that enables large language models
(LLMs) to use comprehensive criteria for a task in delivering natural language
feedback on task execution. In particular, we present a model-in-the-loop
framework that semi-automatically derives criteria from collected guidelines
for different writing tasks and constructs in-context demonstrations for each
criterion. We choose three tasks from real-world scenarios to operationalize
this idea: paper introduction writing, Python code writing, and Reddit post
writing, and evaluate our feedback generation framework using different LLMs.
The results reveal the fine-grained effects of incorporating criteria and
demonstrations and provide valuable insights on how to teach LLMs to use
criteria more effectively.
| 2,024 | Computation and Language |
LAB: Large-Scale Alignment for ChatBots | This work introduces LAB (Large-scale Alignment for chatBots), a novel
methodology designed to overcome the scalability challenges in the
instruction-tuning phase of large language model (LLM) training. Leveraging a
taxonomy-guided synthetic data generation process and a multi-phase tuning
framework, LAB significantly reduces reliance on expensive human annotations
and proprietary models like GPT-4. We demonstrate that LAB-trained models can
achieve competitive performance across several benchmarks compared to models
trained with traditional human-annotated or GPT-4 generated synthetic data.
LAB thus offers a scalable, cost-effective solution for enhancing LLM
capabilities and instruction-following behaviors without the drawbacks of
catastrophic forgetting, marking a step forward in the efficient training of
LLMs for a wide range of applications.
| 2,024 | Computation and Language |
Distilling Text Style Transfer With Self-Explanation From LLMs | Text Style Transfer (TST) seeks to alter the style of text while retaining
its core content. Given the constraints of limited parallel datasets for TST,
we propose CoTeX, a framework that leverages large language models (LLMs)
alongside chain-of-thought (CoT) prompting to facilitate TST. CoTeX distills
the complex rewriting and reasoning capabilities of LLMs into more streamlined
models capable of working with both non-parallel and parallel data. Through
experimentation across four TST datasets, CoTeX is shown to surpass traditional
supervised fine-tuning and knowledge distillation methods, particularly in
low-resource settings. We conduct a comprehensive evaluation, comparing CoTeX
against current unsupervised, supervised, in-context learning (ICL) techniques,
and instruction-tuned LLMs. Furthermore, CoTeX distinguishes itself by offering
transparent explanations for its style transfer process.
| 2,024 | Computation and Language |
MulCogBench: A Multi-modal Cognitive Benchmark Dataset for Evaluating
Chinese and English Computational Language Models | Pre-trained computational language models have recently made remarkable
progress in harnessing the language abilities which were considered unique to
humans. Their success has raised interest in whether these models represent and
process language like humans. To answer this question, this paper proposes
MulCogBench, a multi-modal cognitive benchmark dataset collected from native
Chinese and English participants. It encompasses a variety of cognitive data,
including subjective semantic ratings, eye-tracking, functional magnetic
resonance imaging (fMRI), and magnetoencephalography (MEG). To assess the
relationship between language models and cognitive data, we conducted a
similarity-encoding analysis which decodes cognitive data based on its pattern
similarity with textual embeddings. Results show that language models share
significant similarities with human cognitive data and the similarity patterns
are modulated by the data modality and stimuli complexity. Specifically,
context-aware models outperform context-independent models as language stimulus
complexity increases. The shallow layers of context-aware models are better
aligned with the high-temporal-resolution MEG signals whereas the deeper layers
show more similarity with the high-spatial-resolution fMRI. These results
indicate that language models have a delicate relationship with brain language
representations. Moreover, the results between Chinese and English are highly
consistent, suggesting the generalizability of these findings across languages.
| 2,024 | Computation and Language |
ParallelPARC: A Scalable Pipeline for Generating Natural-Language
Analogies | Analogy-making is central to human cognition, allowing us to adapt to novel
situations -- an ability that current AI systems still lack. Most analogy
datasets today focus on simple analogies (e.g., word analogies); datasets
including complex types of analogies are typically manually curated and very
small. We believe that this holds back progress in computational analogy. In
this work, we design a data generation pipeline, ParallelPARC (Parallel
Paragraph Creator) leveraging state-of-the-art Large Language Models (LLMs) to
create complex, paragraph-based analogies, as well as distractors, both simple
and challenging. We demonstrate our pipeline and create ProPara-Logy, a dataset
of analogies between scientific processes. We publish a gold-set, validated by
humans, and a silver-set, generated automatically. We test LLMs' and humans'
analogy recognition in binary and multiple-choice settings, and find that
humans outperform the best models (~13% gap) after light supervision. We
demonstrate that our silver-set is useful for training models. Lastly, we show
challenging distractors confuse LLMs, but not humans. We hope our pipeline will
encourage research in this emerging field.
| 2,024 | Computation and Language |
A Survey of AI-generated Text Forensic Systems: Detection, Attribution,
and Characterization | We have witnessed lately a rapid proliferation of advanced Large Language
Models (LLMs) capable of generating high-quality text. While these LLMs have
revolutionized text generation across various domains, they also pose
significant risks to the information ecosystem, such as the potential for
generating convincing propaganda, misinformation, and disinformation at scale.
This paper offers a review of AI-generated text forensic systems, an emerging
field addressing the challenges of LLM misuses. We present an overview of the
existing efforts in AI-generated text forensics by introducing a detailed
taxonomy, focusing on three primary pillars: detection, attribution, and
characterization. These pillars enable a practical understanding of
AI-generated text, from identifying AI-generated content (detection) and
determining the specific AI model involved (attribution) to grouping the
underlying intents of the text (characterization). Furthermore, we explore
available resources for AI-generated text forensics research and discuss the
evolving challenges and future directions of forensic systems in an AI era.
| 2,024 | Computation and Language |
BootTOD: Bootstrap Task-oriented Dialogue Representations by Aligning
Diverse Responses | Pre-trained language models have been successful in many scenarios. However,
their usefulness in task-oriented dialogues is limited due to the intrinsic
linguistic differences between general text and task-oriented dialogues.
Current task-oriented dialogue pre-training methods rely on a contrastive
framework, which faces challenges such as selecting true positives and hard
negatives, as well as lacking diversity. In this paper, we propose a novel
dialogue pre-training model called BootTOD. It learns task-oriented dialogue
representations via a self-bootstrapping framework. Unlike contrastive
counterparts, BootTOD aligns context and context+response representations and
dismisses the requirements of contrastive pairs. BootTOD also uses multiple
appropriate response targets to model the intrinsic one-to-many diversity of
human conversations. Experimental results show that BootTOD outperforms strong
TOD baselines on diverse downstream dialogue tasks.
| 2,024 | Computation and Language |
STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient
Fine-Tuning of Large Language Models | Though Large Language Models (LLMs) have demonstrated the powerful
capabilities of few-shot learning through prompting methods, supervised
training is still necessary for complex reasoning tasks. Because of their
extensive parameters and memory consumption, both Parameter-Efficient
Fine-Tuning (PEFT) methods and Memory-Efficient Fine-Tuning methods have been
proposed for LLMs. Nevertheless, the issue of heavy annotated-data consumption,
which Data-Efficient Fine-Tuning aims to address, remains unexplored. One obvious way is
to combine the PEFT method with active learning. However, the experimental
results show that such a combination is not trivial and yields inferior
results. Through probe experiments, such observation might be explained by two
main reasons: uncertainty gap and poor model calibration. Therefore, in this
paper, we propose a novel approach to effectively integrate uncertainty-based
active learning and LoRA. Specifically, for the uncertainty gap, we introduce a
dynamic uncertainty measurement that combines the uncertainty of the base model
and the uncertainty of the full model during the iteration of active learning.
For poor model calibration, we incorporate the regularization method during
LoRA training to keep the model from being over-confident, and the Monte-Carlo
dropout mechanism is employed to enhance the uncertainty estimation.
Experimental results show that the proposed approach outperforms existing
baseline models on three complex reasoning tasks.
| 2,024 | Computation and Language |
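A hedged sketch of a dynamic uncertainty score that interpolates between the base model's and the LoRA-adapted model's predictive entropy across active-learning iterations; the linear interpolation schedule is an assumption for illustration, not the paper's exact rule.

```python
import numpy as np

def entropy(probs):
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def dynamic_uncertainty(p_base, p_full, iteration, total_iterations):
    """Early rounds lean on the base model; later rounds trust the adapted model."""
    lam = iteration / max(total_iterations, 1)
    return (1 - lam) * entropy(p_base) + lam * entropy(p_full)

p_base = np.array([0.5, 0.3, 0.2])       # base-model class probabilities
p_full = np.array([0.9, 0.05, 0.05])     # LoRA-adapted model class probabilities
print(dynamic_uncertainty(p_base, p_full, iteration=1, total_iterations=5))
```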
DINER: Debiasing Aspect-based Sentiment Analysis with Multi-variable
Causal Inference | Though notable progress has been made, neural-based aspect-based sentiment
analysis (ABSA) models are prone to learn spurious correlations from annotation
biases, resulting in poor robustness on adversarial data transformations. Among
the debiasing solutions, causal inference-based methods have attracted much
research attention, which can be mainly categorized into causal intervention
methods and counterfactual reasoning methods. However, most of the present
debiasing methods focus on single-variable causal inference, which is not
suitable for ABSA with two input variables (the target aspect and the review).
In this paper, we propose a novel framework based on multi-variable causal
inference for debiasing ABSA. In this framework, different types of biases are
tackled based on different causal intervention methods. For the review branch,
the bias is modeled as indirect confounding from context, where backdoor
adjustment intervention is employed for debiasing. For the aspect branch, the
bias is described as a direct correlation with labels, where counterfactual
reasoning is adopted for debiasing. Extensive experiments demonstrate the
effectiveness of the proposed method compared to various baselines on two
widely used real-world aspect robustness test sets.
| 2,024 | Computation and Language |
Balancing Exploration and Exploitation in LLM using Soft RLLF for
Enhanced Negation Understanding | Finetuning approaches in NLP often focus on exploitation rather than
exploration, which may lead to suboptimal models. Given the vast search space
of natural language, this limited exploration can restrict their performance in
complex, high-stakes domains, where accurate negation understanding and logical
reasoning abilities are crucial. To address this issue, we leverage
Reinforcement Learning from Logical Feedback (RLLF) to create an effective
balance between exploration and exploitation in LLMs. Our approach employs an
appropriate benchmark dataset for training and evaluation, highlighting the
importance of exploration in enhancing negation understanding capabilities. We
compare the performance of our RLLF-enhanced LLMs with baseline models trained
without RLLF, demonstrating the value of this balanced approach. Furthermore,
we showcase the potential of our method in legal AI applications by employing
transfer learning and evaluating its impact on negation understanding. Our
experimental results exhibit the effectiveness of balancing exploration and
exploitation with RLLF in improving LLMs' negation capabilities. This has
implications for the development of more accurate, reliable, and logically
consistent language models in high-stakes domains.
| 2,024 | Computation and Language |
A Compositional Typed Semantics for Universal Dependencies | Languages may encode similar meanings using different sentence structures.
This makes it a challenge to provide a single set of formal rules that can
derive meanings from sentences in many languages at once. To overcome the
challenge, we can take advantage of language-general connections between
meaning and syntax, and build on cross-linguistically parallel syntactic
structures. We introduce UD Type Calculus, a compositional, principled, and
language-independent system of semantic types and logical forms for lexical
items which builds on a widely-used language-general dependency syntax
framework. We explain the essential features of UD Type Calculus, which all
involve giving dependency relations denotations just like those of words. These
allow UD-TC to derive correct meanings for sentences with a wide range of
syntactic structures by making use of dependency labels. Finally, we present
evaluation results on a large existing corpus of sentences and their logical
forms, showing that UD-TC can produce meanings comparable with our baseline.
| 2,024 | Computation and Language |
RAGged Edges: The Double-Edged Sword of Retrieval-Augmented Chatbots | Large language models (LLMs) like ChatGPT demonstrate the remarkable progress
of artificial intelligence. However, their tendency to hallucinate -- generate
plausible but false information -- poses a significant challenge. This issue is
critical, as seen in recent court cases where ChatGPT's use led to citations of
non-existent legal rulings. This paper explores how Retrieval-Augmented
Generation (RAG) can counter hallucinations by integrating external knowledge
with prompts. We empirically evaluate RAG against standard LLMs using prompts
designed to induce hallucinations. Our results show that RAG increases accuracy
in some cases, but can still be misled when prompts directly contradict the
model's pre-trained understanding. These findings highlight the complex nature
of hallucinations and the need for more robust solutions to ensure LLM
reliability in real-world applications. We offer practical recommendations for
RAG deployment and discuss implications for the development of more trustworthy
LLMs.
| 2,024 | Computation and Language |
Machine Translation in the Covid domain: an English-Irish case study for
LoResMT 2021 | Translation models for the specific domain of translating Covid data from
English to Irish were developed for the LoResMT 2021 shared task. Domain
adaptation techniques, using a Covid-adapted generic 55k corpus from the
Directorate General of Translation, were applied. Fine-tuning, mixed
fine-tuning and combined dataset approaches were compared with models trained
on an extended in-domain dataset. As part of this study, an English-Irish
dataset of Covid related data, from the Health and Education domains, was
developed. The highest-performing model used a Transformer architecture trained
with an extended in-domain Covid dataset. In the context of this study, we have
demonstrated that extending an 8k in-domain baseline dataset by just 5k lines
improved the BLEU score by 27 points.
| 2,021 | Computation and Language |
DMoERM: Recipes of Mixture-of-Experts for Effective Reward Modeling | The performance of the reward model (RM) is a critical factor in improving
the effectiveness of the large language model (LLM) during alignment
fine-tuning. There remain two challenges in RM training: 1) training the same
RM using various categories of data may cause its generalization performance to
suffer from multi-task disturbance, and 2) the human annotation consistency
rate is generally only $60\%$ to $75\%$, causing training data to contain a lot
of noise. To tackle these two challenges, we introduced the idea of
Mixture-of-Experts (MoE) into the field of RM for the first time. We propose
the Double-Layer MoE RM (DMoERM). The outer layer MoE is a sparse model. After
classifying an input into task categories, we route it to the corresponding
inner layer task-specific model. The inner layer MoE is a dense model. We
decompose the specific task into multiple capability dimensions and
individually fine-tune a LoRA expert on each one. Their outputs are then
synthesized by an MLP to compute the final rewards. To minimize costs, we call
a public LLM API to obtain the capability preference labels. The validation on
manually labeled datasets confirms that our model attains superior consistency
with human preference and outstrips advanced generative approaches. Meanwhile,
through BoN sampling and RL experiments, we demonstrate that our model
outperforms state-of-the-art ensemble methods of RM and mitigates the
overoptimization problem. Our code and dataset are available at:
https://github.com/quanshr/DMoERM-v1.
| 2,024 | Computation and Language |
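A conceptual sketch of the two-layer routing in a double-layer MoE reward model: an outer sparse router picks a task category, a dense set of capability-dimension experts scores the input, and an aggregator fuses the scores into one reward. The router, experts, and aggregator below are toy stand-ins, not the released model.

```python
import numpy as np

def reward(x, task_router, inner_experts, aggregator):
    """Route the input to one task-specific inner MoE, then fuse expert scores."""
    task = task_router(x)                                             # outer, sparse
    scores = np.array([expert(x) for expert in inner_experts[task]])  # inner, dense
    return aggregator(scores)                                         # e.g. MLP -> scalar

# toy usage with made-up routing and experts
task_router = lambda x: "chat" if len(x) < 100 else "code"
inner_experts = {"chat": [len, lambda x: x.count("!")],
                 "code": [lambda x: x.count("def"), len]}
aggregator = lambda s: float(np.mean(s))
print(reward("hello!", task_router, inner_experts, aggregator))
```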
API Is Enough: Conformal Prediction for Large Language Models Without
Logit-Access | This study aims to address the pervasive challenge of quantifying uncertainty
in large language models (LLMs) without logit-access. Conformal Prediction
(CP), known for its model-agnostic and distribution-free features, is a desired
approach for various LLMs and data distributions. However, existing CP methods
for LLMs typically assume access to the logits, which are unavailable for some
API-only LLMs. In addition, logits are known to be miscalibrated, potentially
leading to degraded CP performance. To tackle these challenges, we introduce a
novel CP method that (1) is tailored for API-only LLMs without logit-access;
(2) minimizes the size of prediction sets; and (3) ensures a statistical
guarantee of the user-defined coverage. The core idea of this approach is to
formulate nonconformity measures using both coarse-grained (i.e., sample
frequency) and fine-grained uncertainty notions (e.g., semantic similarity).
Experimental results on both close-ended and open-ended Question Answering
tasks show our approach can mostly outperform the logit-based CP baselines.
| 2,024 | Computation and Language |
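A hedged sketch of split conformal prediction without logits: the nonconformity score combines a coarse sample-frequency term with a fine-grained semantic-similarity term, and a calibration quantile defines the prediction set. The `similarity` callable and the mixing weight are illustrative assumptions.

```python
import numpy as np

def nonconformity(candidate, samples, similarity, alpha_mix=0.5):
    freq = samples.count(candidate) / len(samples)                # coarse-grained signal
    sim = np.mean([similarity(candidate, s) for s in samples])    # fine-grained signal
    return 1.0 - (alpha_mix * freq + (1 - alpha_mix) * sim)

def conformal_threshold(calibration_scores, coverage=0.9):
    n = len(calibration_scores)
    q = np.ceil((n + 1) * coverage) / n                           # finite-sample correction
    return np.quantile(calibration_scores, min(q, 1.0))

def prediction_set(candidates, samples, similarity, threshold):
    return [c for c in candidates
            if nonconformity(c, samples, similarity) <= threshold]
```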
Emotion Analysis in NLP: Trends, Gaps and Roadmap for Future Directions | Emotions are a central aspect of communication. Consequently, emotion
analysis (EA) is a rapidly growing field in natural language processing (NLP).
However, there is no consensus on scope, direction, or methods. In this paper,
we conduct a thorough review of 154 relevant NLP publications from the last
decade. Based on this review, we address four different questions: (1) How are
EA tasks defined in NLP? (2) What are the most prominent emotion frameworks and
which emotions are modeled? (3) Is the subjectivity of emotions considered in
terms of demographics and cultural factors? and (4) What are the primary NLP
applications for EA? We take stock of trends in EA and tasks, emotion
frameworks used, existing datasets, methods, and applications. We then discuss
four lacunae: (1) the absence of demographic and cultural aspects does not
account for the variation in how emotions are perceived, but instead assumes
they are universally experienced in the same manner; (2) the poor fit of
emotion categories from the two main emotion theories to the task; (3) the lack
of standardized EA terminology hinders gap identification, comparison, and
future goals; and (4) the absence of interdisciplinary research isolates EA
from insights in other fields. Our work will enable more focused research into
EA and a more holistic approach to modeling emotions in NLP.
| 2,024 | Computation and Language |
IntactKV: Improving Large Language Model Quantization by Keeping Pivot
Tokens Intact | Large language models (LLMs) excel in natural language processing but demand
intensive computation. To mitigate this, various quantization methods have been
explored, yet they compromise LLM performance. This paper unveils a previously
overlooked type of outlier in LLMs. Such outliers are found to allocate most of
the attention scores to the initial tokens of the input, termed pivot tokens, which
is crucial to the performance of quantized LLMs. Given that, we propose
IntactKV to generate the KV cache of pivot tokens losslessly from the
full-precision model. The approach is simple and easy to combine with existing
quantization solutions. Besides, IntactKV can be calibrated as additional LLM
parameters to boost the quantized LLMs further. Mathematical analysis also
proves that IntactKV effectively reduces the upper bound of quantization error.
Empirical results show that IntactKV brings consistent improvement and achieves
lossless weight-only INT4 quantization on various downstream tasks, leading to
the new state-of-the-art for LLM quantization.
| 2,024 | Computation and Language |
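A purely conceptual sketch of the IntactKV idea: the KV cache for the first ("pivot") tokens is produced losslessly by the full-precision model, and the quantized model continues from that cache. The `fp_model`/`quant_model` objects and their `prefill`/`decode` methods are hypothetical stand-ins, not any real library API.

```python
def generate_with_intact_kv(fp_model, quant_model, input_ids, n_pivot=4):
    # 1) build a lossless KV cache for the pivot tokens with the full-precision model
    pivot_cache = fp_model.prefill(input_ids[:n_pivot])
    # 2) the quantized model consumes the remaining prompt tokens on top of that cache
    cache = quant_model.prefill(input_ids[n_pivot:], past_kv=pivot_cache)
    # 3) decode as usual with the quantized model
    return quant_model.decode(cache)
```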
Mitigating Catastrophic Forgetting in Large Language Models with
Self-Synthesized Rehearsal | Large language models (LLMs) suffer from catastrophic forgetting during
continual learning. Conventional rehearsal-based methods rely on previous
training data to retain the model's ability, which may not be feasible in
real-world applications. When conducting continual learning based on a
publicly-released LLM checkpoint, the availability of the original training
data may be non-existent. To address this challenge, we propose a framework
called Self-Synthesized Rehearsal (SSR) that uses the LLM to generate synthetic
instances for rehearsal. Concretely, we first employ the base LLM for
in-context learning to generate synthetic instances. Subsequently, we utilize
the latest LLM to refine the instance outputs based on the synthetic inputs,
preserving its acquired ability. Finally, we select diverse high-quality
synthetic instances for rehearsal in future stages. Experimental results
demonstrate that SSR achieves superior or comparable performance compared to
conventional rehearsal-based approaches while being more data-efficient.
Besides, SSR effectively preserves the generalization capabilities of LLMs in
general domains.
| 2,024 | Computation and Language |
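A hedged sketch of the Self-Synthesized Rehearsal loop: the base LLM drafts synthetic inputs via in-context prompting, the latest LLM refines the outputs to preserve acquired abilities, and a diverse high-quality subset is kept for rehearsal. All callables, prompts, and filters are stand-ins for illustration.

```python
def build_rehearsal_set(base_llm, latest_llm, demos, n_candidates=100, n_keep=20,
                        is_high_quality=lambda inp, out: True,
                        select_diverse=lambda pool, k: pool[:k]):
    pool = []
    for _ in range(n_candidates):
        # 1) in-context generation of a synthetic instance with the *base* model
        synthetic_input = base_llm(f"Here are some examples:\n{demos}\nWrite a new one:")
        # 2) the *latest* model re-answers it, preserving abilities acquired so far
        refined_output = latest_llm(synthetic_input)
        if is_high_quality(synthetic_input, refined_output):
            pool.append((synthetic_input, refined_output))
    # 3) keep a diverse high-quality subset to mix into future training stages
    return select_diverse(pool, n_keep)
```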
Accelerating Greedy Coordinate Gradient via Probe Sampling | Safety of Large Language Models (LLMs) has become a central issue given their
rapid progress and wide applications. Greedy Coordinate Gradient (GCG) is shown
to be effective in constructing prompts containing adversarial suffixes to
break presumably safe LLMs, but the optimization of GCG is time-consuming
and limits its practicality. To reduce the time cost of GCG and enable more
comprehensive studies of LLM safety, in this work, we study a new algorithm
called $\texttt{Probe sampling}$ to accelerate the GCG algorithm. At the core
of the algorithm is a mechanism that dynamically determines how similar a
smaller draft model's predictions are to the target model's predictions for
prompt candidates. When the target model is similar to the draft model, we rely
heavily on the draft model to filter out a large number of potential prompt
candidates to reduce the computation time. Probe sampling achieves up to $5.6$
times speedup using Llama2-7b and leads to equal or improved attack success
rate (ASR) on the AdvBench.
| 2,024 | Computation and Language |
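An illustrative sketch of the probe-sampling idea: score a small probe set of adversarial-suffix candidates with both the draft and target models, and let their agreement decide how aggressively the cheap draft model filters candidates before the expensive target model scores them. The rank-agreement measure and the agreement-to-budget rule below are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.stats import spearmanr

def probe_filter(candidates, draft_loss, target_loss, probe_size=16, keep_min=8):
    probe = candidates[:probe_size]
    agreement, _ = spearmanr([draft_loss(c) for c in probe],
                             [target_loss(c) for c in probe])
    # high agreement -> trust the draft model and keep fewer candidates
    keep = max(keep_min, int(len(candidates) * (1 - max(agreement, 0.0))))
    ranked = sorted(candidates, key=draft_loss)
    return ranked[:keep]     # only these are scored by the expensive target model
```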
A comprehensive cross-language framework for harmful content detection
with the aid of sentiment analysis | In today's digital world, social media plays a significant role in
facilitating communication and content sharing. However, the exponential rise
in user-generated content has led to challenges in maintaining a respectful
online environment. In some cases, users have taken advantage of anonymity in
order to use harmful language, which can negatively affect the user experience
and pose serious social problems. Recognizing the limitations of manual
moderation, automatic detection systems have been developed to tackle this
problem. Nevertheless, several obstacles persist, including the absence of a
universal definition for harmful language, inadequate datasets across
languages, the need for detailed annotation guidelines, and most importantly, a
comprehensive framework. This study aims to address these challenges by
introducing, for the first time, a detailed framework adaptable to any
language. This framework encompasses various aspects of harmful language
detection. A key component of the framework is the development of a general and
detailed annotation guideline. Additionally, the integration of sentiment
analysis represents a novel approach to enhancing harmful language detection.
Also, a definition of harmful language based on the review of different related
concepts is presented. To demonstrate the effectiveness of the proposed
framework, its implementation in a challenging low-resource language is
conducted. We collected a Persian dataset and applied the annotation guideline
for harmful language detection and sentiment analysis. Next, we present baseline
experiments utilizing machine and deep learning methods to set benchmarks.
Results prove the framework's high performance, achieving an accuracy of 99.4%
in offensive language detection and 66.2% in sentiment analysis.
| 2,024 | Computation and Language |
Greed is All You Need: An Evaluation of Tokenizer Inference Methods | While subword tokenizers such as BPE and WordPiece are typically used to
build vocabularies for NLP models, the method of decoding text into a sequence
of tokens from these vocabularies is often left unspecified, or ill-suited to
the method in which they were constructed. We provide a controlled analysis of
seven tokenizer inference methods across four different algorithms and three
vocabulary sizes, performed on a novel intrinsic evaluation suite we curated
for English, combining measures rooted in morphology, cognition, and
information theory. We show that for the most commonly used tokenizers, greedy
inference performs surprisingly well; and that SaGe, a recently-introduced
contextually-informed tokenizer, outperforms all others on morphological
alignment.
| 2,024 | Computation and Language |
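A minimal greedy (longest-match-first) tokenizer inference routine over a fixed vocabulary, the decoding strategy the paper finds surprisingly strong; the toy vocabulary is made up and real tokenizers also handle byte-level fallback and special tokens.

```python
def greedy_tokenize(text, vocab):
    tokens, i = [], 0
    while i < len(text):
        # take the longest vocabulary entry that matches at position i
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:                      # no match: fall back to a single character
            tokens.append(text[i])
            i += 1
    return tokens

vocab = {"un", "believ", "able", "token", "ization"}
print(greedy_tokenize("unbelievable", vocab))   # ['un', 'believ', 'able']
```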
Improving the Validity of Automatically Generated Feedback via
Reinforcement Learning | Automatically generating feedback via large language models (LLMs) in
intelligent tutoring systems and online learning platforms has the potential to
improve the learning outcomes of many students. However, both feedback
generation and evaluation are challenging: feedback content has to be valid
especially in subjects like math, which requires models to understand the
problem, the solution, and where the student's error lies. Feedback also has to
be pedagogically valid to reflect effective tutoring strategies, such as
explaining possible misconceptions and encouraging the student, among other
desirable features. In this work, we address both problems of automatically
generating and evaluating feedback while considering both correctness and
alignment. First, we propose a rubric for evaluating math feedback and show
that GPT-4 is able to effectively use it to annotate human-written and
LLM-generated feedback. Second, we propose a framework for feedback generation
that optimizes both correctness and alignment using reinforcement learning
(RL). Specifically, we use GPT-4's annotations to create preferences over
feedback pairs in an augmented dataset for training via direct preference
optimization (DPO). We show that our methods significantly increase the
correctness and alignment of generated feedback with Llama 2, an open-source
LLM, qualitatively analyze our generation and evaluation systems using case
studies, and outline several areas for future work.
| 2,024 | Computation and Language |
VBART: The Turkish LLM | We present VBART, the first Turkish sequence-to-sequence Large Language
Models (LLMs) pre-trained on a large corpus from scratch. VBART models are compact
LLMs based on good ideas leveraged from BART and mBART models and come in two
sizes, Large and XLarge. Fine-tuned VBART models surpass the prior
state-of-the-art results in abstractive text summarization, title generation,
text paraphrasing, question answering and question generation tasks. They allow
fine-tuning for future text generation tasks and datasets, carving a new path
for Turkish Natural Language Processing (NLP) research. Our work shows that
having a pre-trained LLM for Turkish outperforms multilingual models up to 3x larger,
improving existing results and providing efficient models for training and
inference. Moreover, we show that our monolingual tokenizer is 7x more
efficient than OpenAI's multilingual tokenizer. Last but not least, we
introduce a method to enlarge an existing pre-trained LLM and question the
relevance of the Chinchilla Scaling Law to sequence-to-sequence masked language
models. Our fine-tuned models, tokenizer and cleaned web corpus of 135 GB are
publicly available at huggingface.co/vngrs-ai.
| 2,024 | Computation and Language |
VNLP: Turkish NLP Package | In this work, we present VNLP: the first dedicated, complete, open-source,
well-documented, lightweight, production-ready, state-of-the-art Natural
Language Processing (NLP) package for the Turkish language. It contains a wide
variety of tools, ranging from the simplest tasks, such as sentence splitting
and text normalization, to the more advanced ones, such as text and token
classification models. Its token classification models are based on "Context
Model", a novel architecture that is both an encoder and an auto-regressive
model. NLP tasks solved by VNLP models include but are not limited to Sentiment
Analysis, Named Entity Recognition, Morphological Analysis \& Disambiguation
and Part-of-Speech Tagging. Moreover, it comes with pre-trained word embeddings
and corresponding SentencePiece Unigram tokenizers. VNLP has an open-source
GitHub repository, ReadtheDocs documentation, PyPi package for convenient
installation, Python and command-line API and a demo page to test all the
functionality. Consequently, our main contribution is a complete, compact,
easy-to-install and easy-to-use NLP package for Turkish.
| 2,024 | Computation and Language |
LM4OPT: Unveiling the Potential of Large Language Models in Formulating
Mathematical Optimization Problems | In the rapidly evolving field of natural language processing, the translation
of linguistic descriptions into mathematical formulation of optimization
problems presents a formidable challenge, demanding intricate understanding and
processing capabilities from Large Language Models (LLMs). This study compares
prominent LLMs, including GPT-3.5, GPT-4, and Llama-2-7b, in zero-shot and
one-shot settings for this task. Our findings show GPT-4's superior
performance, particularly in the one-shot scenario. A central part of this
research is the introduction of `LM4OPT,' a progressive fine-tuning framework
for Llama-2-7b that utilizes noisy embeddings and specialized datasets.
However, this research highlights a notable gap in the contextual understanding
capabilities of smaller models such as Llama-2-7b compared to larger
counterparts, especially in processing lengthy and complex input contexts. Our
empirical investigation, utilizing the NL4Opt dataset, unveils that GPT-4
surpasses the baseline performance established by previous research, achieving
an F1-score of 0.63, solely based on the problem description in natural
language, and without relying on any additional named entity information.
GPT-3.5 follows closely, and both outperform the fine-tuned Llama-2-7b. These
findings not only benchmark the current capabilities of LLMs in a novel
application area but also lay the groundwork for future improvements in
mathematical formulation of optimization problems from natural language input.
| 2,024 | Computation and Language |
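The abstract above mentions fine-tuning with noisy embeddings. The sketch below shows one common way to realize this, a NEFTune-style uniform perturbation of token embeddings, offered as an assumption rather than the paper's exact recipe.

```python
# Minimal sketch (assumption, not the paper's implementation): add uniform noise to the
# token embeddings during fine-tuning, scaled by alpha / sqrt(seq_len * dim).
import torch

def add_embedding_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """embeddings: (batch, seq_len, dim) output of the model's embedding layer."""
    _, seq_len, dim = embeddings.shape
    scale = alpha / (seq_len * dim) ** 0.5
    noise = torch.empty_like(embeddings).uniform_(-scale, scale)
    return embeddings + noise  # apply only in training mode, not at inference time
```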
Improving Cross-lingual Representation for Semantic Retrieval with
Code-switching | Semantic Retrieval (SR) has become an indispensable part of the FAQ system in
the task-oriented question-answering (QA) dialogue scenario. The demands for a
cross-lingual smart-customer-service system for an e-commerce platform or some
particular business conditions have been increasing recently. Most previous
studies exploit cross-lingual pre-trained models (PTMs) for multi-lingual
knowledge retrieval directly, while some others also leverage the continual
pre-training before fine-tuning PTMs on the downstream tasks. However, regardless
of which scheme is used, previous work fails to inform the PTMs about features of
the downstream task, i.e., the PTMs are trained without any signals related to SR.
To this end, in this work, we propose an Alternative
Cross-lingual PTM for SR via code-switching. We are the first to utilize the
code-switching approach for cross-lingual SR. In addition, we introduce a novel
code-switched continual pre-training step instead of directly applying the PTMs to the
SR tasks. The experimental results show that our proposed approach consistently
outperforms the previous SOTA methods on SR and semantic textual similarity
(STS) tasks with three business corpora and four open datasets in 20+
languages.
| 2,024 | Computation and Language |
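As a concrete illustration of the code-switching idea described above, here is a minimal sketch (with a toy bilingual dictionary; not the paper's pipeline) that randomly replaces tokens with their translations to build code-switched continual pre-training data.

```python
# Minimal sketch: replace a fraction of tokens with bilingual-dictionary translations
# so the pre-training data mixes languages within a sentence (code-switching).
import random

def code_switch(sentence, bilingual_dict, ratio=0.5, seed=None):
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        key = tok.lower()
        if key in bilingual_dict and rng.random() < ratio:
            out.append(bilingual_dict[key])  # switch to the target-language word
        else:
            out.append(tok)
    return " ".join(out)

toy_dict = {"shipping": "envío", "refund": "reembolso"}  # hypothetical entries
print(code_switch("how long does shipping take before a refund", toy_dict, seed=0))
```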
Evaluating and Mitigating Number Hallucinations in Large Vision-Language
Models: A Consistency Perspective | Large vision language models have demonstrated remarkable efficacy in
addressing challenges related to both textual and visual content. Nevertheless,
these models are susceptible to various hallucinations. In this paper, we focus
on a new form of hallucination, specifically termed as number hallucination,
which denotes instances where models fail to accurately identify the quantity
of objects in an image. We establish a dataset and employ evaluation metrics to
assess number hallucination, revealing a pronounced prevalence of this issue
across mainstream large vision language models (LVLMs). Additionally, we delve
into a thorough analysis of number hallucination, examining the inner and outer
inconsistency problems from two related perspectives. We assert that this
inconsistency is one cause of number hallucination and propose a consistency
training method as a means to alleviate such hallucination, which achieves an
average improvement of 8% compared with the direct fine-tuning method.
| 2,024 | Computation and Language |
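The inconsistency analysis above can be made concrete with a simple metric; the sketch below (an illustrative measure, not the paper's exact protocol) scores an image as consistent only if paraphrased counting prompts all yield the same answer.

```python
# Minimal sketch: an image counts as "consistent" if all paraphrased counting prompts
# produce the same number; the overall rate is a rough proxy for inner consistency.
def consistency_rate(answers_per_image):
    """answers_per_image[i] = list of counts returned for image i under several prompts."""
    consistent = sum(1 for answers in answers_per_image if len(set(answers)) == 1)
    return consistent / len(answers_per_image)

print(consistency_rate([[3, 3, 3], [2, 4, 2]]))  # -> 0.5
```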
Automatic Question-Answer Generation for Long-Tail Knowledge | Pretrained Large Language Models (LLMs) have gained significant attention for
addressing open-domain Question Answering (QA). While they exhibit high
accuracy in answering questions related to common knowledge, LLMs encounter
difficulties in learning about uncommon long-tail knowledge (tail entities).
Since manually constructing QA datasets demands substantial human resources,
the types of existing QA datasets are limited, leaving us with a scarcity of
datasets to study the performance of LLMs on tail entities. In this paper, we
propose an automatic approach to generate specialized QA datasets for tail
entities and present the associated research challenges. We conduct extensive
experiments by employing pretrained LLMs on our newly generated long-tail QA
datasets, comparing their performance with and without external resources
including Wikipedia and Wikidata knowledge graphs.
| 2,024 | Computation and Language |
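One simple way to automate QA generation for tail entities, in the spirit of the abstract above, is to instantiate question templates from knowledge-graph triples; the sketch below is illustrative only, and the relation names and templates are assumptions rather than the paper's pipeline.

```python
# Minimal sketch: turn (subject, relation, object) triples about tail entities into QA pairs
# via hand-written templates. Relation names here are hypothetical placeholders.
TEMPLATES = {
    "place_of_birth": "Where was {subject} born?",
    "occupation": "What is the occupation of {subject}?",
}

def triple_to_qa(subject, relation, obj):
    template = TEMPLATES.get(relation)
    return (template.format(subject=subject), obj) if template else None

print(triple_to_qa("Alan Smithee", "place_of_birth", "Los Angeles"))
```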
Right for Right Reasons: Large Language Models for Verifiable
Commonsense Knowledge Graph Question Answering | Knowledge Graph Question Answering (KGQA) methods seek to answer Natural
Language questions using the relational information stored in Knowledge Graphs
(KGs). With the recent advancements of Large Language Models (LLMs) and their
remarkable reasoning abilities, there is a growing trend to leverage them for
KGQA. However, existing methodologies have only focused on answering factual
questions, e.g., "In which city was Silvio Berlusconi's first wife born?",
leaving questions involving commonsense reasoning that real-world users may
pose more often, e.g., "Do I need separate visas to see the Venus of Willendorf
and attend the Olympics this summer?" unaddressed. In this work, we first
observe that existing LLM-based methods for KGQA struggle with hallucination on
such questions, especially on queries targeting long-tail entities (e.g.,
non-mainstream and recent entities), thus hindering their applicability in
real-world settings, especially since their reasoning processes are not
easily verifiable. In response, we propose Right for Right Reasons (R3), a
commonsense KGQA methodology that allows for a verifiable reasoning procedure
by axiomatically surfacing intrinsic commonsense knowledge of LLMs and
grounding every factual reasoning step on KG triples. Through experimental
evaluations across three different tasks--question answering, claim
verification, and preference matching--our findings showcase R3 as a superior
approach, outperforming existing methodologies and notably reducing instances
of hallucination and reasoning errors.
| 2,024 | Computation and Language |
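The key property of R3, as described above, is that factual reasoning steps are grounded on KG triples; the following sketch (an illustrative check, not the R3 implementation) flags any step that is not backed by a triple in the KG.

```python
# Minimal sketch: a reasoning step is "grounded" only if it appears verbatim as a triple
# in the knowledge graph; ungrounded steps are flagged as unverifiable.
def grounded_steps(steps, kg_triples):
    return [(step, step in kg_triples) for step in steps]

kg = {("Venus of Willendorf", "located in", "Vienna")}
steps = [("Venus of Willendorf", "located in", "Vienna"),
         ("2024 Summer Olympics", "held in", "Paris")]
print(grounded_steps(steps, kg))  # the second step is not backed by this toy KG
```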
CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring
Commonsense Reasoning and Long-Tail Knowledge | Knowledge graph question answering (KGQA) is a well-established field that
seeks to provide factual answers to natural language (NL) questions by
leveraging knowledge graphs (KGs). However, existing KGQA datasets suffer from
two significant limitations: (1) no existing KGQA dataset requires commonsense
reasoning to arrive at an answer and (2) existing KGQA datasets focus on
popular entities for which large language models (LLMs) can directly answer
without hallucinating and without leveraging the KG. In this work, we seek a
novel KGQA dataset that supports commonsense reasoning and focuses on long-tail
entities (e.g., non-mainstream and recent entities) where LLMs frequently
hallucinate, and thus create the need for novel methodologies that leverage the
KG for factual and attributable commonsense inference. We create a novel
Commonsense Reasoning (CR) and Long-Tail (LT) KGQA dataset with two subtasks --
question answering and claim verification -- that address both limitations (1)
and (2). We construct CR-LT-KGQA by building extensions to existing reasoning
datasets StrategyQA and CREAK over Wikidata. While existing KGQA methods are
not applicable due to their lack of commonsense inference support, a baseline
evaluation of LLMs on CR-LT-KGQA demonstrates a high rate of hallucination.
Thus, CR-LT-KGQA poses significant challenges for hallucination-prone LLMs,
hence paving the way for future commonsense KGQA research to provide accurate
and factual answers for long-tail entities in the era of LLMs.
| 2,024 | Computation and Language |
What Is Missing in Multilingual Visual Reasoning and How to Fix It | NLP models today strive for supporting multiple languages and modalities,
improving accessibility for diverse users. In this paper, we evaluate their
multilingual, multimodal capabilities by testing on a visual reasoning task. We
observe that proprietary systems like GPT-4V currently obtain the best performance
on this task, but open models lag behind. Surprisingly, GPT-4V exhibits
similar performance between English and other languages, indicating the
potential for equitable system development across languages. Our analysis of
model failures reveals three key aspects that make this task challenging:
multilinguality, complex reasoning, and multimodality. To address these
challenges, we propose three targeted interventions: a translate-test
approach to tackle multilinguality, a visual programming approach to break down
complex reasoning, and a novel method that leverages image captioning to
address multimodality. Our interventions achieve the best open performance on
this task in a zero-shot setting, boosting the open model LLaVA by 13.4%, while
also slightly improving GPT-4V's performance.
| 2,024 | Computation and Language |
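Of the three interventions above, the translate-test approach is the simplest to illustrate; the sketch below uses hypothetical translate() and vlm_answer() interfaces, which are assumptions rather than the paper's code.

```python
# Minimal sketch of translate-test: translate a non-English question into English first,
# then query the vision-language model. `translate` and `vlm_answer` are assumed interfaces.
def translate_test(image, question, source_lang, translate, vlm_answer):
    question_en = question if source_lang == "en" else translate(question, source_lang, "en")
    return vlm_answer(image, question_en)
```

Design-wise, this pushes the multilingual burden onto a dedicated translation system so that the vision-language model only ever sees English prompts.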
OVEL: Large Language Model as Memory Manager for Online Video Entity
Linking | In recent years, multi-modal entity linking (MEL) has garnered increasing
attention in the research community due to its significance in numerous
multi-modal applications. Video, as a popular means of information
transmission, has become prevalent in people's daily lives. However, most
existing MEL methods primarily focus on linking textual and visual mentions or
mentions in offline videos to entities in multi-modal knowledge bases, with
limited efforts devoted to linking mentions within online video content. In
this paper, we propose a task called Online Video Entity Linking (OVEL), aiming
to establish connections between mentions in online videos and a knowledge base
with high accuracy and timeliness. To facilitate research on OVEL, we
specifically concentrate on live delivery scenarios and construct a live
delivery entity linking dataset called LIVE. In addition, we propose an evaluation
metric that considers timeliness, robustness, and accuracy. Furthermore, to
effectively handle the OVEL task, we leverage a memory block managed by a Large
Language Model and retrieve entity candidates from the knowledge base to
augment LLM performance on memory management. The experimental results demonstrate
the effectiveness and efficiency of our method.
| 2,024 | Computation and Language |
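The memory-management idea above can be sketched as follows; the llm() interface and the prompt wording are assumptions for illustration, not the paper's system.

```python
# Minimal sketch: keep a bounded memory block for a live stream; each new transcript
# segment is folded into the memory by an LLM so later entity linking can use it.
def update_memory(memory, new_segment, llm, max_chars=2000):
    prompt = (
        f"Current memory:\n{memory}\n\n"
        f"New live transcript segment:\n{new_segment}\n\n"
        f"Rewrite the memory in at most {max_chars} characters, keeping details that help "
        f"link product mentions to knowledge-base entities."
    )
    return llm(prompt)[:max_chars]  # truncate defensively in case the LLM overruns
```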
Fine Tuning vs. Retrieval Augmented Generation for Less Popular
Knowledge | Large language models (LLMs) memorize a vast amount of factual knowledge,
exhibiting strong performance across diverse tasks and domains. However, it has
been observed that the performance diminishes when dealing with less-popular or
low-frequency concepts and entities, for example in domain-specific
applications. The two prominent approaches to enhancing the performance of LLMs
on low-frequency topics are Retrieval Augmented Generation (RAG) and
fine-tuning (FT) over synthetic data. This paper explores and evaluates the
impact of RAG and FT on customizing LLMs for handling low-frequency entities on
the question answering task. Our findings indicate that FT significantly boosts the
performance across entities of varying popularity, especially in the most and
least popular groups, while RAG surpasses other methods. Additionally, the
success of both RAG and FT approaches is amplified by advancements in retrieval
and data augmentation techniques. We release our data and code at
https://github.com/informagi/RAGvsFT.
| 2,024 | Computation and Language |
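To make the RAG side of the comparison concrete, here is a minimal sketch using the rank_bm25 package; the retriever, toy corpus, and prompt format are assumptions, since the abstract does not specify them.

```python
# Minimal sketch: BM25 retrieval over a toy corpus, with the retrieved passage prepended
# to the question as context for the LLM (the generation step itself is omitted).
from rank_bm25 import BM25Okapi

corpus = [
    "Willendorf is a village in Lower Austria where the Venus of Willendorf was found.",
    "The Venus of Willendorf figurine is held at the Naturhistorisches Museum in Vienna.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

question = "Where is the Venus of Willendorf held?"
top_docs = bm25.get_top_n(question.lower().split(), corpus, n=1)
prompt = "Context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {question}\nAnswer:"
print(prompt)
```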
Controlling Cloze-test Question Item Difficulty with PLM-based Surrogate
Models for IRT Assessment | Item difficulty plays a crucial role in adaptive testing. However, few works
have focused on generating questions of varying difficulty levels, especially
for multiple-choice (MC) cloze tests. We propose training pre-trained language
models (PLMs) as surrogate models to enable item response theory (IRT)
assessment, avoiding the need for human test subjects. We also propose two
strategies to control the difficulty levels of both the gaps and the
distractors, using ranking rules to reduce invalid distractors. Experiments
on a benchmark dataset demonstrate that our proposed framework and methods can
effectively control and evaluate the difficulty levels of MC cloze tests.
| 2,024 | Computation and Language |
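For readers unfamiliar with IRT, the sketch below shows a standard two-parameter logistic (2PL) item response model of the kind such surrogate-based assessment builds on; the abstract does not specify which IRT variant the paper uses, so this choice is an assumption.

```python
# Minimal sketch of the 2PL IRT model: the probability that a test taker with ability
# theta answers correctly an item with discrimination a and difficulty b.
import math

def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(round(p_correct(theta=0.0, a=1.0, b=1.0), 3))  # harder item (b > theta) -> p < 0.5
```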
Answerability in Retrieval-Augmented Open-Domain Question Answering | The performance of Open-Domain Question Answering (ODQA) retrieval systems
can exhibit sub-optimal behavior, providing text excerpts with varying degrees
of irrelevance. Unfortunately, many existing ODQA datasets lack examples
specifically targeting the identification of irrelevant text excerpts. Previous
attempts to address this gap have relied on a simplistic approach of pairing
questions with random text excerpts. This paper aims to investigate the
effectiveness of models trained using this randomized strategy, uncovering an
important limitation in their ability to generalize to irrelevant text excerpts
with high semantic overlap. As a result, we observed a substantial decrease in
predictive accuracy, from 98% to 1%. To address this limitation, we discovered
an efficient approach for training models to recognize such excerpts. By
leveraging unanswerable pairs from the SQuAD 2.0 dataset, our models achieve
nearly perfect (~100%) accuracy when confronted with these challenging text
excerpts.
| 2,024 | Computation and Language |
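The training signal described above, unanswerable question-context pairs from SQuAD 2.0, can be harvested as in the sketch below, using the Hugging Face datasets library; the labeling scheme is an assumption, not the authors' exact preprocessing.

```python
# Minimal sketch: collect SQuAD 2.0 questions whose answer list is empty (unanswerable)
# and pair them with their contexts as negative examples for an answerability classifier.
from datasets import load_dataset

squad_v2 = load_dataset("squad_v2", split="train")
unanswerable = squad_v2.filter(lambda ex: len(ex["answers"]["text"]) == 0)

pairs = [(ex["question"], ex["context"], 0) for ex in unanswerable]  # 0 = unanswerable
print(len(pairs), pairs[0][0])
```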