Titles | Abstracts | Years | Categories |
---|---|---|---|
$Se^2$: Sequential Example Selection for In-Context Learning | The remarkable capability of large language models (LLMs) for in-context
learning (ICL) needs to be activated by demonstration examples. Prior work has
extensively explored the selection of examples for ICL, predominantly following
the "select then organize" paradigm, such approaches often neglect the internal
relationships between examples and exist an inconsistency between the training
and inference. In this paper, we formulate the problem as a
$\textit{se}$quential $\textit{se}$lection problem and introduce $Se^2$, a
sequential-aware method that leverages the LLM's feedback on varying context,
aiding in capturing inter-relationships and sequential information among
examples, significantly enriching the contextuality and relevance of ICL
prompts. Meanwhile, we utilize beam search to seek and construct example
sequences, enhancing both quality and diversity. Extensive experiments across
23 NLP tasks from 8 distinct categories illustrate that $Se^2$ markedly
surpasses competitive baselines and achieves 42% relative improvement over
random selection. Further in-depth analysis shows the effectiveness of the proposed
strategies, highlighting $Se^2$'s exceptional stability and adaptability across
various scenarios. Our code will be released to facilitate future research.
| 2024 | Computation and Language |
Beyond Probabilities: Unveiling the Misalignment in Evaluating Large
Language Models | Large Language Models (LLMs) have demonstrated remarkable capabilities across
various applications, fundamentally reshaping the landscape of natural language
processing (NLP) research. However, recent evaluation frameworks often rely on
the output probabilities of LLMs for predictions, primarily due to
computational constraints, diverging from real-world LLM usage scenarios. While
widely employed, the efficacy of these probability-based evaluation strategies
remains an open research question. This study aims to scrutinize the validity
of such probability-based evaluation methods within the context of using LLMs
for Multiple Choice Questions (MCQs), highlighting their inherent limitations.
Our empirical investigation reveals that the prevalent probability-based
evaluation method inadequately aligns with generation-based prediction.
Furthermore, current evaluation frameworks typically assess LLMs through
predictive tasks based on output probabilities rather than directly generating
responses, owing to computational limitations. We illustrate that these
probability-based approaches do not effectively correspond with generative
predictions. The outcomes of our study can enhance the understanding of LLM
evaluation methodologies and provide insights for future research in this
domain.
| 2024 | Computation and Language |
Calibrating Large Language Models with Sample Consistency | Accurately gauging the confidence level of Large Language Models' (LLMs)
predictions is pivotal for their reliable application. However, LLMs are often
inherently uncalibrated and elude conventional calibration techniques due to
their proprietary nature and massive scale. In this work, we explore the
potential of deriving confidence from the distribution of multiple randomly
sampled model generations, via three measures of consistency. We perform an
extensive evaluation across various open and closed-source models on nine
reasoning datasets. Results show that consistency-based calibration methods
outperform existing post-hoc approaches. Meanwhile, we find that factors such
as intermediate explanations, model scaling, and larger sample sizes enhance
calibration, while instruction-tuning makes calibration more difficult.
Moreover, confidence scores obtained from consistency have the potential to
enhance model performance. Finally, we offer practical guidance on choosing
suitable consistency metrics for calibration, tailored to the characteristics
of various LMs.
| 2024 | Computation and Language |
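The consistency-based calibration described in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation: `sample_answers` is a hypothetical stand-in for any LLM sampling routine, and the agreement-with-majority measure shown is only the simplest of the consistency variants one could use.

```python
from collections import Counter
from typing import Callable, List

def consistency_confidence(
    sample_answers: Callable[[str, int], List[str]],
    question: str,
    num_samples: int = 20,
) -> tuple[str, float]:
    """Derive a confidence score from agreement among sampled generations.

    `sample_answers(question, n)` is assumed to return n independently
    sampled final answers (e.g., decoded with temperature > 0). The
    prediction is the majority answer; its confidence is the fraction of
    samples agreeing with it -- one simple consistency measure.
    """
    answers = sample_answers(question, num_samples)
    counts = Counter(answers)
    prediction, votes = counts.most_common(1)[0]
    confidence = votes / len(answers)
    return prediction, confidence

# Toy usage with a stubbed sampler (a real setup would call an LLM).
if __name__ == "__main__":
    stub = lambda q, n: ["42"] * 17 + ["41", "42", "40"]
    print(consistency_confidence(stub, "What is 6 * 7?"))  # ('42', 0.9)
```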
Leveraging Collection-Wide Similarities for Unsupervised Document
Structure Extraction | Document collections of various domains, e.g., legal, medical, or financial,
often share some underlying collection-wide structure, which captures
information that can aid both human users and structure-aware models. We
propose to identify the typical structure of documents within a collection,
which requires capturing recurring topics across the collection while
abstracting over arbitrary header paraphrases, and grounding each topic to its
respective document locations. These requirements pose several challenges:
headers that mark recurring topics frequently differ in phrasing, certain
section headers are unique to individual documents and do not reflect the
typical structure, and the order of topics can vary between documents.
Subsequently, we develop an unsupervised graph-based method which leverages
both inter- and intra-document similarities, to extract the underlying
collection-wide structure. Our evaluations on three diverse domains in both
English and Hebrew indicate that our method extracts meaningful collection-wide
structure, and we hope that future work will leverage our method for
multi-document applications and structure-aware models.
| 2024 | Computation and Language |
What Linguistic Features and Languages are Important in LLM Translation? | Large Language Models (LLMs) demonstrate strong capability across multiple
tasks, including machine translation. Our study focuses on evaluating Llama2's
machine translation capabilities and exploring how translation depends on
languages in its training data. Our experiments show that the 7B Llama2 model
yields a BLEU score above 10 for all languages it has seen, but not always for
languages it has not seen. For unseen languages, the largest gains come from
scaling up the model rather than from using chat versions or increasing the
shot count. Furthermore, our linguistic distance analysis reveals that syntactic
similarity is not always the primary linguistic factor in determining
translation quality. Interestingly, we discovered that under specific
circumstances, some languages, despite having significantly less training data
than English, exhibit strong correlations comparable to English. Our
discoveries here give new perspectives for the current landscape of LLMs,
raising the possibility that LLMs centered around languages other than English
may offer a more effective foundation for a multilingual model.
| 2024 | Computation and Language |
SYNFAC-EDIT: Synthetic Imitation Edit Feedback for Factual Alignment in
Clinical Summarization | Large Language Models (LLMs) such as GPT and Llama have demonstrated
significant achievements in summarization tasks but struggle with factual
inaccuracies, a critical issue in clinical NLP applications where errors could
lead to serious consequences. To counter the high costs and limited
availability of expert-annotated data for factual alignment, this study
introduces an innovative pipeline that utilizes GPT-3.5 and GPT-4 to generate
high-quality feedback aimed at enhancing factual consistency in clinical note
summarization. Our research primarily focuses on edit feedback, mirroring the
practical scenario in which medical professionals refine AI system outputs
without the need for additional annotations. Despite GPT's proven expertise in
various clinical NLP tasks, such as the Medical Licensing Examination, there is
scant research on its capacity to deliver expert-level edit feedback for
improving the generation quality of weaker LMs or LLMs. This work leverages GPT's
advanced capabilities in clinical NLP to offer expert-level edit feedback.
Through the use of two distinct alignment algorithms (DPO and SALT) based on
GPT edit feedback, our goal is to reduce hallucinations and align closely with
medical facts, endeavoring to narrow the divide between AI-generated content
and factual accuracy. This highlights the substantial potential of GPT edits in
enhancing the alignment of clinical factuality.
| 2024 | Computation and Language |
Large Language Models are Vulnerable to Bait-and-Switch Attacks for
Generating Harmful Content | The risks derived from large language models (LLMs) generating deceptive and
damaging content have been the subject of considerable research, but even safe
generations can lead to problematic downstream impacts. In our study, we shift
the focus to how even safe text coming from LLMs can be easily turned into
potentially dangerous content through Bait-and-Switch attacks. In such attacks,
the user first prompts LLMs with safe questions and then employs a simple
find-and-replace post-hoc technique to manipulate the outputs into harmful
narratives. The alarming efficacy of this approach in generating toxic content
highlights a significant challenge in developing reliable safety guardrails for
LLMs. In particular, we stress that focusing on the safety of the verbatim LLM
outputs is insufficient and that we also need to consider post-hoc
transformations.
| 2024 | Computation and Language |
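As a rough illustration of the Bait-and-Switch mechanism described in the abstract above (not the paper's exact protocol), the attack needs nothing more than a benign prompt followed by a post-hoc find-and-replace on the model's safe output. The substitution pairs below are invented for illustration.

```python
def bait_and_switch(safe_output: str, substitutions: dict[str, str]) -> str:
    """Post-hoc find-and-replace that turns a safe generation into a
    different (potentially harmful or false) narrative. No further model
    access is needed after the benign prompt has been answered."""
    text = safe_output
    for bait, switch in substitutions.items():
        text = text.replace(bait, switch)
    return text

# Illustrative only: a benign output about a fictional product is rewritten
# to target a different (also fictional) subject after generation.
safe = "AcmeWidget has been reported to cause issues in rare cases."
print(bait_and_switch(safe, {"AcmeWidget": "BrandX"}))
```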
Distinctive Image Captioning: Leveraging Ground Truth Captions in CLIP
Guided Reinforcement Learning | Training image captioning models using teacher forcing results in very
generic samples, whereas more distinctive captions can be very useful in
retrieval applications or to produce alternative texts describing images for
accessibility. Reinforcement Learning (RL) makes it possible to use the
cross-modal retrieval similarity score between the generated caption and the
input image as a reward to guide training, leading to more distinctive
captions. Recent studies show
that pre-trained cross-modal retrieval models can be used to provide this
reward, completely eliminating the need for reference captions. However, we
argue in this paper that Ground Truth (GT) captions can still be useful in this
RL framework. We propose a new image captioning model training strategy that
makes use of GT captions in different ways. Firstly, they can be used to train
a simple MLP discriminator that serves as a regularization to prevent reward
hacking and ensures the fluency of generated captions, resulting in a textual
GAN setup extended for multimodal inputs. Secondly, they can serve as
additional trajectories in the RL strategy, resulting in a teacher forcing loss
weighted by the similarity of the GT to the image. This objective acts as an
additional learning signal grounded to the distribution of the GT captions.
Thirdly, they can serve as strong baselines when added to the pool of captions
used to compute the proposed contrastive reward to reduce the variance of
gradient estimate. Experiments on MS-COCO demonstrate the value of the
proposed training strategy for producing highly distinctive captions while
maintaining high writing quality.
| 2024 | Computation and Language |
Making Reasoning Matter: Measuring and Improving Faithfulness of
Chain-of-Thought Reasoning | Large language models (LLMs) have been shown to perform better when asked to
reason step-by-step before answering a question. However, it is unclear to what
degree the model's final answer is faithful to the stated reasoning steps. In
this paper, we perform a causal mediation analysis on twelve LLMs to examine
how intermediate reasoning steps generated by the LLM influence the final
outcome and find that LLMs do not reliably use their intermediate reasoning
steps when generating an answer. To address this issue, we introduce FRODO, a
framework to tailor small-sized LMs to generate correct reasoning steps and
robustly reason over these steps. FRODO consists of an inference module that
learns to generate correct reasoning steps using an implicit causal reward
function and a reasoning module that learns to faithfully reason over these
intermediate inferences using a counterfactual and causal preference objective.
Our experiments show that FRODO significantly outperforms four competitive
baselines. Furthermore, FRODO improves the robustness and generalization
ability of the reasoning LM, yielding higher performance on out-of-distribution
test sets. Finally, we find that FRODO's rationales are more faithful to its
final answer predictions than standard supervised fine-tuning.
| 2024 | Computation and Language |
Measuring Social Biases in Masked Language Models by Proxy of Prediction
Quality | Social and political scientists often aim to discover and measure distinct
biases from text data representations (embeddings). Innovative
transformer-based language models produce contextually-aware token embeddings
and have achieved state-of-the-art performance for a variety of natural
language tasks, but have been shown to encode unwanted biases for downstream
applications. In this paper, we evaluate the social biases encoded by
transformers trained with the masked language modeling (MLM) objective. We
propose proxy functions, applied within an iterative masking experiment, that
measure the quality of the models' predictions and assess the preference of
MLMs towards disadvantaged and advantaged groups. We compare bias estimations with those
produced by other evaluation methods using two benchmark datasets, finding
relatively high religious and disability biases across considered MLMs and low
gender bias in one dataset relative to the other. Our measures outperform
others in their agreement with human annotators. We extend previous work by
evaluating social biases introduced after re-training an MLM under the masked
language modeling objective (w.r.t. the model's pre-trained base), and find
that the proposed measures produce more accurate estimations of transformers'
relative preference for biased sentences than existing approaches.
| 2024 | Computation and Language |
Can You Learn Semantics Through Next-Word Prediction? The Case of
Entailment | Do LMs infer the semantics of text from co-occurrence patterns in their
training data? Merrill et al. (2022) argue that, in theory, probabilities
predicted by an optimal LM encode semantic information about entailment
relations, but it is unclear whether neural LMs trained on corpora learn
entailment in this way because of strong idealizing assumptions made by Merrill
et al. In this work, we investigate whether their theory can be used to decode
entailment judgments from neural LMs. We find that a test similar to theirs can
decode entailment relations between natural sentences, well above random
chance, though not perfectly, across many datasets and LMs. This suggests LMs
implicitly model aspects of semantics to predict semantic effects on sentence
co-occurrence patterns. However, we find the test that predicts entailment in
practice works in the opposite direction to the theoretical test. We thus
revisit the assumptions underlying the original test, finding its derivation
did not adequately account for redundancy in human-written text. We argue that
correctly accounting for redundancy related to explanations might derive the
observed flipped test and, more generally, improve linguistic theories of human
speakers.
| 2024 | Computation and Language |
Towards Building Multilingual Language Model for Medicine | In this paper, we aim to develop an open-source, multilingual language model
for medicine that benefits a wider, linguistically diverse audience from
different regions. We present contributions in the following aspects: first,
for multilingual medical-specific adaptation, we construct a new multilingual
medical corpus, termed MMedC, containing approximately 25.5B tokens across 6
main languages, which enables auto-regressive training of existing general
LLMs. Second, to monitor the development of multilingual LLMs in medicine, we
propose a new multilingual medical multi-choice question-answering benchmark
with rationales, termed MMedBench. Third, we assess a number of popular,
open-source large language models (LLMs) on our benchmark, along with those
further auto-regressively trained on MMedC. As a result, our final model,
termed MMedLM 2, with only 7B
parameters, achieves superior performance compared to all other open-source
models, even rivaling GPT-4 on MMedBench. We will make the resources publicly
available, including code, model weights, and datasets.
| 2024 | Computation and Language |
Analysing The Impact of Sequence Composition on Language Model
Pre-Training | Most language model pre-training frameworks concatenate multiple documents
into fixed-length sequences and use causal masking to compute the likelihood of
each token given its context; this strategy is widely adopted due to its
simplicity and efficiency. However, to this day, the influence of the
pre-training sequence composition strategy on the generalisation properties of
the model remains under-explored. In this work, we find that applying causal
masking can lead to the inclusion of distracting information from previous
documents during pre-training, which negatively impacts the performance of the
models on language modelling and downstream tasks. In intra-document causal
masking, the likelihood of each token is only conditioned on the previous
tokens in the same document, eliminating potential distracting information from
previous documents and significantly improving performance. Furthermore, we
find that concatenating related documents can reduce some potential
distractions during pre-training, and our proposed efficient retrieval-based
sequence construction method, BM25Chunk, can improve in-context learning
(+11.6\%), knowledge memorisation (+9.8\%), and context utilisation (+7.2\%)
abilities of language models without sacrificing efficiency.
| 2024 | Computation and Language |
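A minimal sketch of the intra-document causal masking discussed in the abstract above: given document boundaries inside a packed training sequence, each token may attend only to earlier tokens from the same document. This is a generic reconstruction rather than the paper's code, and `doc_ids` is an assumed per-token document index.

```python
import numpy as np

def intra_document_causal_mask(doc_ids: np.ndarray) -> np.ndarray:
    """Build a boolean attention mask for one packed sequence.

    doc_ids[i] is the index of the document that token i belongs to.
    mask[i, j] is True iff token i may attend to token j, i.e. j <= i
    (causal) and both tokens come from the same document.
    """
    seq_len = doc_ids.shape[0]
    causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    same_doc = doc_ids[:, None] == doc_ids[None, :]
    return causal & same_doc

# Two documents of lengths 3 and 2 packed into one sequence of length 5.
doc_ids = np.array([0, 0, 0, 1, 1])
print(intra_document_causal_mask(doc_ids).astype(int))
```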
Hallucinations or Attention Misdirection? The Path to Strategic Value
Extraction in Business Using Large Language Models | Large Language Models with transformer architecture have revolutionized the
domain of text generation, setting unprecedented benchmarks. Despite their
impressive capabilities, LLMs have been criticized for generating outcomes that
deviate from factual accuracy or display logical inconsistencies, phenomena
commonly referred to as hallucinations. This term, however, has often been
misapplied to any results deviating from the instructor's expectations, which
this paper defines as attention misdirection rather than true hallucinations.
Understanding the distinction between hallucinations and attention misdirection
becomes increasingly relevant in business contexts, where the ramifications of
such errors can significantly impact the value extraction from these inherently
pre-trained models. This paper highlights the best practices of the PGI
(Persona, Grouping, and Intelligence) method, a strategic framework that
achieved a remarkable error rate of only 3.15 percent across 4,000 responses
generated by GPT in response to a real business challenge. It emphasizes that
by equipping experimentation with knowledge, businesses can unlock
opportunities for innovation through the use of these natively pre-trained
models. This reinforces the notion that strategic application grounded in a
skilled team can maximize the benefits of emergent technologies such as the
LLMs.
| 2024 | Computation and Language |
Can Watermarks Survive Translation? On the Cross-lingual Consistency of
Text Watermark for Large Language Models | Text watermarking technology aims to tag and identify content produced by
large language models (LLMs) to prevent misuse. In this study, we introduce the
concept of ''cross-lingual consistency'' in text watermarking, which assesses
the ability of text watermarks to maintain their effectiveness after being
translated into other languages. Preliminary empirical results from two LLMs
and three watermarking methods reveal that current text watermarking
technologies lack consistency when texts are translated into various languages.
Based on this observation, we propose a Cross-lingual Watermark Removal Attack
(CWRA) to bypass watermarking by first obtaining a response from an LLM in a
pivot language, which is then translated into the target language. CWRA can
effectively remove watermarks by reducing the Area Under the Curve (AUC) from
0.95 to 0.67 without performance loss. Furthermore, we analyze two key factors
that contribute to the cross-lingual consistency in text watermarking and
propose a defense method that increases the AUC from 0.67 to 0.88 under CWRA.
| 2024 | Computation and Language |
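The Cross-lingual Watermark Removal Attack described above reduces to a two-step pipeline: query the watermarked LLM in a pivot language, then translate the response into the target language. The sketch below only shows that control flow; `generate_watermarked` and `translate` are hypothetical placeholders for a watermarked LLM endpoint and any machine translation system.

```python
from typing import Callable

def cwra(
    prompt_in_target_lang: str,
    pivot_lang: str,
    target_lang: str,
    translate: Callable[[str, str, str], str],   # (text, src, tgt) -> text
    generate_watermarked: Callable[[str], str],  # watermarked LLM call
) -> str:
    """Cross-lingual Watermark Removal Attack (control flow only).

    1. Translate the user's request into a pivot language.
    2. Obtain the (watermarked) response in the pivot language.
    3. Translate the response into the target language; token-level
       watermark signals typically do not survive this last step.
    """
    pivot_prompt = translate(prompt_in_target_lang, target_lang, pivot_lang)
    pivot_response = generate_watermarked(pivot_prompt)
    return translate(pivot_response, pivot_lang, target_lang)
```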
OlympiadBench: A Challenging Benchmark for Promoting AGI with
Olympiad-Level Bilingual Multimodal Scientific Problems | Recent advancements have seen Large Language Models (LLMs) and Large
Multimodal Models (LMMs) surpassing general human capabilities in various
tasks, approaching the proficiency level of human experts across multiple
domains. With traditional benchmarks becoming less challenging for these
models, new rigorous challenges are essential to gauge their advanced
abilities. In this work, we present OlympiadBench, an Olympiad-level bilingual
multimodal scientific benchmark, featuring 8,952 problems from Olympiad-level
mathematics and physics competitions, including the Chinese college entrance
exam. Each problem is detailed with expert-level annotations for step-by-step
reasoning. Evaluating top-tier models on OlympiadBench, we implement a
comprehensive assessment methodology to accurately evaluate model responses.
Notably, the best-performing model, GPT-4V, attains an average score of 17.23%
on OlympiadBench, with a mere 11.28% in physics, highlighting the benchmark's
rigor and the intricacy of physical reasoning. Our analysis of GPT-4V's errors
points out prevalent issues with hallucinations, knowledge omissions, and
logical fallacies. We hope that our challenging benchmark can serve as a
valuable resource for helping future AGI research endeavors.
| 2024 | Computation and Language |
Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on
Zero-shot LLM Assessment | Large Language Models (LLMs) are powerful zero-shot assessors and are
increasingly used in real-world situations such as for written exams or
benchmarking systems. Despite this, no existing work has analyzed the
vulnerability of judge-LLMs against adversaries attempting to manipulate
outputs. This work presents the first study on the adversarial robustness of
assessment LLMs, where we search for short universal phrases that when appended
to texts can deceive LLMs to provide high assessment scores. Experiments on
SummEval and TopicalChat demonstrate that both LLM-scoring and pairwise
LLM-comparative assessment are vulnerable to simple concatenation attacks,
where in particular LLM-scoring is very susceptible and can yield maximum
assessment scores irrespective of the input text quality. Interestingly, such
attacks are transferable and phrases learned on smaller open-source LLMs can be
applied to larger closed-source models, such as GPT-3.5. This highlights the
pervasive nature of the adversarial vulnerabilities across different judge-LLM
sizes, families and methods. Our findings raise significant concerns on the
reliability of LLMs-as-a-judge methods, and underscore the importance of
addressing vulnerabilities in LLM assessment methods before deployment in
high-stakes real-world scenarios.
| 2024 | Computation and Language |
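The concatenation attack studied above amounts to appending a short universal phrase to the text being judged. The sketch below only illustrates how such an attack is applied at evaluation time, not how the phrase is searched for; `judge_score` is a hypothetical wrapper around an assessor LLM, and any attack phrase passed in would be a placeholder, not one from the paper.

```python
from typing import Callable

def attacked_score(text: str,
                   attack_phrase: str,
                   judge_score: Callable[[str], float]) -> tuple[float, float]:
    """Return (clean score, score after appending a universal attack phrase).

    `judge_score` is assumed to wrap an assessor prompt such as
    "Rate the following summary from 1 to 10: ..." sent to a judge LLM.
    A robust judge keeps the two scores close regardless of the phrase.
    """
    clean = judge_score(text)
    attacked = judge_score(text + " " + attack_phrase)
    return clean, attacked
```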
On Leveraging Encoder-only Pre-trained Language Models for Effective
Keyphrase Generation | This study addresses the application of encoder-only Pre-trained Language
Models (PLMs) in keyphrase generation (KPG) amidst the broader availability of
domain-tailored encoder-only models compared to encoder-decoder models. We
investigate three core inquiries: (1) the efficacy of encoder-only PLMs in KPG,
(2) optimal architectural decisions for employing encoder-only PLMs in KPG, and
(3) a performance comparison between in-domain encoder-only and encoder-decoder
PLMs across varied resource settings. Our findings, derived from extensive
experimentation in two domains, reveal that with encoder-only PLMs, although KPE
with Conditional Random Fields slightly excels in identifying present
keyphrases, the KPG formulation renders a broader spectrum of keyphrase
predictions. Additionally, prefix-LM fine-tuning of encoder-only PLMs emerges
as a strong and data-efficient strategy for KPG, outperforming general-domain
seq2seq PLMs. We also identify a favorable parameter allocation towards model
depth rather than width when employing encoder-decoder architectures
initialized with encoder-only PLMs. The study sheds light on the potential of
utilizing encoder-only PLMs for advancing KPG systems and provides a groundwork
for future KPG methods. Our code and pre-trained checkpoints are released at
https://github.com/uclanlp/DeepKPG.
| 2024 | Computation and Language |
Improving Language Understanding from Screenshots | An emerging family of language models (LMs), capable of processing both text
and images within a single visual view, holds the promise of unlocking complex tasks
such as chart understanding and UI navigation. We refer to these models as
screenshot language models. Despite their appeal, existing screenshot LMs
substantially lag behind text-only models on language understanding tasks. To
close this gap, we adopt a simplified setting where the model inputs are
plain-text-rendered screenshots, and we focus on improving the text ability of
screenshot LMs. We propose a novel Patch-and-Text Prediction (PTP) objective,
which masks and recovers both image patches of screenshots and text within
screenshots. We also conduct extensive ablation studies on masking rates and
patch sizes, as well as designs for improving training stability. Our
pre-trained model, while solely taking visual inputs, achieves comparable
performance with BERT on 6 out of 8 GLUE tasks (within 2%) and improves up to
8% over prior work. Additionally, we extend PTP to train autoregressive
screenshot LMs and demonstrate its effectiveness--our models can significantly
reduce perplexity by utilizing the screenshot context. Together, we hope our
findings can inspire future research on developing powerful screenshot LMs and
extending their reach to broader applications.
| 2024 | Computation and Language |
LexC-Gen: Generating Data for Extremely Low-Resource Languages with
Large Language Models and Bilingual Lexicons | Data scarcity in low-resource languages can be addressed with word-to-word
translations from labeled task data in high-resource languages using bilingual
lexicons. However, bilingual lexicons often have limited lexical overlap with
task data, which results in poor translation coverage and lexicon utilization.
We propose lexicon-conditioned data generation (LexC-Gen), a method that
generates low-resource-language classification task data at scale.
Specifically, LexC-Gen first uses high-resource-language words from bilingual
lexicons to generate lexicon-compatible task data, and then it translates them
into low-resource languages with bilingual lexicons via word translation.
Across 17 extremely low-resource languages, LexC-Gen generated data is
competitive with expert-translated gold data, and yields on average 5.6 and 8.9
points improvement over existing lexicon-based word translation methods on
sentiment analysis and topic classification tasks respectively. We show that
conditioning on bilingual lexicons is the key component of LexC-Gen. LexC-Gen
is also practical -- it only needs a single GPU to generate data at scale. It
works well with open-access LLMs, and its cost is one-fifth of the cost of
GPT-4-based multilingual data generation.
| 2024 | Computation and Language |
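Of the two LexC-Gen stages described above, the second (word-to-word translation with a bilingual lexicon) is simple enough to sketch directly; the first stage, lexicon-conditioned generation, is an LLM prompt and is omitted. The lexicon below is a made-up toy example, not a real resource, and this is not the authors' code.

```python
def word_to_word_translate(sentence: str, lexicon: dict[str, str]) -> str:
    """Translate a high-resource-language sentence into a low-resource
    language word by word, keeping words that the bilingual lexicon does
    not cover (a common fallback in lexicon-based translation)."""
    translated = [lexicon.get(tok.lower(), tok) for tok in sentence.split()]
    return " ".join(translated)

# Toy bilingual lexicon (invented entries for illustration only).
toy_lexicon = {"good": "bonum", "movie": "pellicula", "very": "valde"}
print(word_to_word_translate("Very good movie", toy_lexicon))
# -> "valde bonum pellicula"
```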
Cost-Efficient Subjective Task Annotation and Modeling through Few-Shot
Annotator Adaptation | In subjective NLP tasks, where a single ground truth does not exist, the
inclusion of diverse annotators becomes crucial as their unique perspectives
significantly influence the annotations. In realistic scenarios, the annotation
budget often becomes the main determinant of the number of perspectives (i.e.,
annotators) included in the data and subsequent modeling. We introduce a novel
framework for annotation collection and modeling in subjective tasks that aims
to minimize the annotation budget while maximizing the predictive performance
for each annotator. Our framework has a two-stage design: first, we rely on a
small set of annotators to build a multitask model, and second, we augment the
model for a new perspective by strategically annotating a few samples per
annotator. To test our framework at scale, we introduce and release a unique
dataset, Moral Foundations Subjective Corpus, of 2000 Reddit posts annotated by
24 annotators for moral sentiment. We demonstrate that our framework surpasses
the previous SOTA in capturing the annotators' individual perspectives with as
little as 25% of the original annotation budget on two datasets. Furthermore,
our framework results in more equitable models, reducing the performance
disparity among annotators.
| 2024 | Computation and Language |
FanOutQA: Multi-Hop, Multi-Document Question Answering for Large
Language Models | One type of question that is commonly found in day-to-day scenarios is
``fan-out'' questions, complex multi-hop, multi-document reasoning questions
that require finding information about a large number of entities. However,
there exist few resources to evaluate this type of question-answering
capability among large language models. To evaluate complex reasoning in LLMs
more fully, we present FanOutQA, a high-quality dataset of fan-out
question-answer pairs and human-annotated decompositions with English Wikipedia
as the knowledge base. We formulate three benchmark settings across our dataset
and benchmark 7 LLMs, including GPT-4, LLaMA 2, Claude-2.1, and Mixtral-8x7B,
finding that contemporary models still have room to improve reasoning over
inter-document dependencies in a long context. We provide our dataset and
open-source tools to run models to encourage evaluation at https://fanoutqa.com
| 2024 | Computation and Language |
Reinforcement Learning with Dynamic Multi-Reward Weighting for
Multi-Style Controllable Generation | Style is an integral component of text that expresses a diverse set of
information, including interpersonal dynamics (e.g. formality) and the author's
emotions or attitudes (e.g. disgust). Humans often employ multiple styles
simultaneously. An open question is how large language models can be explicitly
controlled so that they weave together target styles when generating text: for
example, to produce text that is both negative and non-toxic. Previous work
investigates the controlled generation of a single style, or else controlled
generation of a style and other attributes. In this paper, we expand this into
controlling multiple styles simultaneously. Specifically, we investigate
various formulations of multiple style rewards for a reinforcement learning
(RL) approach to controlled multi-style generation. These reward formulations
include calibrated outputs from discriminators and dynamic weighting by
discriminator gradient magnitudes. We find that dynamic weighting generally
outperforms static weighting approaches, and we explore its effectiveness in 2-
and 3-style control, even compared to strong baselines like plug-and-play
models. All code and data for RL pipelines with multiple style attributes will
be publicly available.
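The dynamic weighting idea in the abstract above can be sketched as combining per-style rewards with weights derived from the corresponding discriminators' gradient magnitudes. The normalization used below is an assumption for illustration, not the paper's exact formulation, and the numbers in the example are invented.

```python
import numpy as np

def dynamic_reward(style_rewards: np.ndarray, grad_norms: np.ndarray) -> float:
    """Combine per-style rewards using weights derived from discriminator
    gradient magnitudes (one plausible normalization; the paper may differ).

    style_rewards[k]: calibrated reward from the k-th style discriminator.
    grad_norms[k]:    gradient magnitude of that discriminator w.r.t. the
                      generation, used as a proxy for how much style k
                      currently needs attention.
    """
    weights = grad_norms / (grad_norms.sum() + 1e-8)
    return float(np.dot(weights, style_rewards))

# Toy example: the second style's discriminator has the largest gradient,
# so it dominates the combined reward (numbers invented for illustration).
print(dynamic_reward(np.array([0.9, 0.2, 0.7]), np.array([0.5, 2.0, 0.5])))
```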
| 2024 | Computation and Language |
MM-Soc: Benchmarking Multimodal Large Language Models in Social Media
Platforms | Social media platforms are hubs for multimodal information exchange,
encompassing text, images, and videos, making it challenging for machines to
comprehend the information or emotions associated with interactions in online
spaces. Multimodal Large Language Models (MLLMs) have emerged as a promising
solution to address these challenges, yet struggle with accurately interpreting
human emotions and complex contents like misinformation. This paper introduces
MM-Soc, a comprehensive benchmark designed to evaluate MLLMs' understanding of
multimodal social media content. MM-Soc compiles prominent multimodal datasets
and incorporates a novel large-scale YouTube tagging dataset, targeting a range
of tasks from misinformation detection, hate speech detection, and social
context generation. Through our exhaustive evaluation on ten size-variants of
four open-source MLLMs, we have identified significant performance disparities,
highlighting the need for advancements in models' social understanding
capabilities. Our analysis reveals that, in a zero-shot setting, various types
of MLLMs generally exhibit difficulties in handling social media tasks.
However, MLLMs demonstrate performance improvements post fine-tuning,
suggesting potential pathways for improvement.
| 2024 | Computation and Language |
Can Similarity-Based Domain-Ordering Reduce Catastrophic Forgetting for
Intent Recognition? | Task-oriented dialogue systems are expected to handle a constantly expanding
set of intents and domains even after they have been deployed to support more
and more functionalities. To live up to this expectation, it becomes critical
to mitigate the catastrophic forgetting problem (CF) that occurs in continual
learning (CL) settings for a task such as intent recognition. While existing
dialogue systems research has explored replay-based and regularization-based
methods to this end, the effect of domain ordering on the CL performance of
intent recognition models remains unexplored. If understood well, domain
ordering has the potential to be an orthogonal technique that can be leveraged
alongside existing techniques such as experience replay. Our work fills this
gap by comparing the impact of three domain-ordering strategies (min-sum path,
max-sum path, random) on the CL performance of a generative intent recognition
model. Our findings reveal that the min-sum path strategy outperforms the
others in reducing catastrophic forgetting when training on the 220M T5-Base
model. However, this advantage diminishes with the larger 770M T5-Large model.
These results underscore the potential of domain ordering as a complementary
strategy for mitigating catastrophic forgetting in continually learning intent
recognition models, particularly in resource-constrained scenarios.
| 2024 | Computation and Language |
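The min-sum path ordering compared above can be made concrete with a small sketch: given pairwise distances between domains (for example 1 minus some similarity of their representations, an assumption here), pick the training order whose sum of consecutive distances is smallest. Brute force suffices for the handful of domains typical in such benchmarks; this is a generic reconstruction, not the paper's code.

```python
from itertools import permutations
from typing import Sequence

def min_sum_path_order(names: Sequence[str],
                       dist: Sequence[Sequence[float]]) -> list[str]:
    """Return the domain ordering minimizing the sum of distances between
    consecutive domains. dist[i][j] is a symmetric pairwise distance.
    Exhaustive search: fine for the small domain counts used in CL setups."""
    best_order, best_cost = None, float("inf")
    for perm in permutations(range(len(names))):
        cost = sum(dist[perm[k]][perm[k + 1]] for k in range(len(perm) - 1))
        if cost < best_cost:
            best_order, best_cost = perm, cost
    return [names[i] for i in best_order]

# Toy distances between three domains (invented numbers).
domains = ["banking", "travel", "media"]
d = [[0.0, 0.2, 0.9],
     [0.2, 0.0, 0.4],
     [0.9, 0.4, 0.0]]
print(min_sum_path_order(domains, d))  # -> ['banking', 'travel', 'media']
```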
TOOLVERIFIER: Generalization to New Tools via Self-Verification | Teaching language models to use tools is an important milestone towards
building general assistants, but remains an open problem. While there has been
significant progress on learning to use specific tools via fine-tuning,
language models still struggle with learning how to robustly use new tools from
only a few demonstrations. In this work we introduce a self-verification method
which distinguishes between close candidates by self-asking contrastive
questions during (1) tool selection; and (2) parameter generation. We construct
synthetic, high-quality, self-generated data for this goal using Llama-2 70B,
which we intend to release publicly. Extensive experiments on 4 tasks from the
ToolBench benchmark, consisting of 17 unseen tools, demonstrate an average
improvement of 22% over few-shot baselines, even in scenarios where the
distinctions between candidate tools are finely nuanced.
| 2024 | Computation and Language |
Bangla AI: A Framework for Machine Translation Utilizing Large Language
Models for Ethnic Media | Ethnic media, which caters to diaspora communities in host nations, serves as
a vital platform for these communities to both produce content and access
information. Rather than utilizing the language of the host nation, ethnic
media delivers news in the language of the immigrant community. For instance,
in the USA, Bangla ethnic media presents news in Bangla rather than English.
This research delves into the prospective integration of large language models
(LLM) and multi-lingual machine translation (MMT) within the ethnic media
industry. It centers on the transformative potential of using LLM in MMT in
various facets of news translation, searching, and categorization. The paper
outlines a theoretical framework elucidating the integration of LLM and MMT
into the news searching and translation processes for ethnic media.
Additionally, it briefly addresses the potential ethical challenges associated
with the incorporation of LLM and MMT in news translation procedures.
| 2024 | Computation and Language |
Learning to Reduce: Optimal Representations of Structured Data in
Prompting Large Language Models | Large Language Models (LLMs) have been widely used as general-purpose AI
agents showing comparable performance on many downstream tasks. However,
existing work shows that it is challenging for LLMs to integrate structured
data (e.g. KG, tables, DBs) into their prompts; LLMs need to either understand
long text data or select the most relevant evidence prior to inference, and
neither approach is trivial.
In this paper, we propose a framework, Learning to Reduce, that fine-tunes a
language model to generate a reduced version of an input context, given a task
description and context input. The model learns to reduce the input context
using On-Policy Reinforcement Learning and aims to improve the reasoning
performance of a fixed LLM. Experimental results illustrate that our model not
only achieves comparable accuracies in selecting the relevant evidence from an
input context, but also shows generalizability on different datasets. We
further show that our model helps improve the LLM's performance on downstream
tasks especially when the context is long.
| 2024 | Computation and Language |
Towards Understanding Counseling Conversations: Domain Knowledge and
Large Language Models | Understanding the dynamics of counseling conversations is an important task,
yet it is a challenging NLP problem despite recent advances in
Transformer-based pre-trained language models. This paper proposes a systematic
approach to examine the efficacy of domain knowledge and large language models
(LLMs) in better representing conversations between a crisis counselor and a
help seeker. We empirically show that state-of-the-art language models such as
Transformer-based models and GPT models fail to predict the conversation
outcome. To provide richer context to conversations, we incorporate
human-annotated domain knowledge and LLM-generated features; simple integration
of domain knowledge and LLM features improves the model performance by
approximately 15%. We argue that both domain knowledge and LLM-generated
features can be exploited to better characterize counseling conversations when
they are used as an additional context to conversations.
| 2024 | Computation and Language |
Assisting in Writing Wikipedia-like Articles From Scratch with Large
Language Models | We study how to apply large language models to write grounded and organized
long-form articles from scratch, with comparable breadth and depth to Wikipedia
pages. This underexplored problem poses new challenges at the pre-writing
stage, including how to research the topic and prepare an outline prior to
writing. We propose STORM, a writing system for the Synthesis of Topic Outlines
through Retrieval and Multi-perspective Question Asking. STORM models the
pre-writing stage by (1) discovering diverse perspectives in researching the
given topic, (2) simulating conversations where writers carrying different
perspectives pose questions to a topic expert grounded on trusted Internet
sources, and (3) curating the collected information to create an outline.
For evaluation, we curate FreshWiki, a dataset of recent high-quality
Wikipedia articles, and formulate outline assessments to evaluate the
pre-writing stage. We further gather feedback from experienced Wikipedia
editors. Compared to articles generated by an outline-driven
retrieval-augmented baseline, more of STORM's articles are deemed to be
organized (by a 25% absolute increase) and broad in coverage (by 10%). The
expert feedback also helps identify new challenges for generating grounded long
articles, such as source bias transfer and over-association of unrelated facts.
| 2024 | Computation and Language |
Content Conditional Debiasing for Fair Text Embedding | Mitigating biases in machine learning models has gained increasing attention
in Natural Language Processing (NLP). Yet, only a few studies focus on fair
text embeddings, which are crucial yet challenging for real-world applications.
In this paper, we propose a novel method for learning fair text embeddings. We
achieve fairness while preserving utility by ensuring conditional
independence between sensitive attributes and text embeddings conditioned on
the content. Specifically, we enforce that embeddings of texts with different
sensitive attributes but identical content maintain the same distance toward
the embedding of their corresponding neutral text. Furthermore, we address the
issue of lacking proper training data by using Large Language Models (LLMs) to
augment texts into different sensitive groups. Our extensive evaluations
demonstrate that our approach effectively improves fairness while preserving
the utility of embeddings, representing a pioneering effort in achieving
conditional independence for fair text embeddings.
| 2024 | Computation and Language |
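The distance constraint described above can be written as a simple loss sketch: for a counterfactual pair of texts that differ only in the sensitive attribute, penalize any gap between their distances to the embedding of the corresponding neutral text. This is a generic reconstruction of the stated objective, not the authors' training code; a real setup would add this term to a task or contrastive loss.

```python
import numpy as np

def content_conditional_gap(emb_a: np.ndarray,
                            emb_b: np.ndarray,
                            emb_neutral: np.ndarray) -> float:
    """Debiasing penalty for one counterfactual pair.

    emb_a / emb_b: embeddings of two texts with identical content but
    different sensitive attributes (e.g., swapped gendered terms).
    emb_neutral:   embedding of the attribute-neutral version of the text.
    The penalty is the squared difference of their distances to the
    neutral embedding, which is zero when both are equally far from it.
    """
    d_a = np.linalg.norm(emb_a - emb_neutral)
    d_b = np.linalg.norm(emb_b - emb_neutral)
    return float((d_a - d_b) ** 2)

# Toy 3-d embeddings for illustration only.
print(content_conditional_gap(np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 1.0, 0.0]),
                              np.array([0.0, 0.0, 0.0])))  # 0.0
```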
Framing in the Presence of Supporting Data: A Case Study in U.S.
Economic News | The mainstream media has much leeway in what it chooses to cover and how it
covers it. These choices have real-world consequences on what people know and
their subsequent behaviors. However, the lack of objective measures to evaluate
editorial choices makes research in this area particularly difficult. In this
paper, we argue that there are newsworthy topics where objective measures exist
in the form of supporting data and propose a computational framework to analyze
editorial choices in this setup. We focus on the economy because the reporting
of economic indicators presents us with a relatively easy way to determine both
the selection and framing of various publications. Their values provide a
ground truth of how the economy is doing relative to how the publications
choose to cover it. To do this, we define frame prediction as a set of
interdependent tasks. At the article level, we learn to identify the reported
stance towards the general state of the economy. Then, for every numerical
quantity reported in the article, we learn to identify whether it corresponds
to an economic indicator and whether it is being reported in a positive or
negative way. To perform our analysis, we track six American publishers and
each article that appeared in the top 10 slots of their landing page between
2015 and 2023.
| 2024 | Computation and Language |
Eagle: Ethical Dataset Given from Real Interactions | Recent studies have demonstrated that large language models (LLMs) have
ethics-related problems such as social biases, lack of moral reasoning, and
generation of offensive content. The existing evaluation metrics and methods to
address these ethical challenges use datasets intentionally created by
instructing humans to create instances including ethical problems. Therefore,
the data does not reflect prompts that users actually provide when utilizing
LLM services in everyday contexts. This may not lead to the development of safe
LLMs that can address ethical challenges arising in real-world applications. In
this paper, we create Eagle datasets extracted from real interactions between
ChatGPT and users that exhibit social biases, toxicity, and immoral problems.
Our experiments show that Eagle captures complementary aspects, not covered by
existing datasets proposed for evaluation and mitigation of such ethical
challenges. Our code is publicly available at
https://huggingface.co/datasets/MasahiroKaneko/eagle.
| 2024 | Computation and Language |
Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form
Medical Question Answering Applications and Beyond | Uncertainty estimation plays a pivotal role in ensuring the reliability of
safety-critical human-AI interaction systems, particularly in the medical
domain. However, a general method for quantifying the uncertainty of free-form
answers has yet to be established in open-ended medical question-answering (QA)
tasks, where irrelevant words and sequences with limited semantic information
can be the primary source of uncertainty due to the presence of generative
inequality. In this paper, we propose the Word-Sequence Entropy (WSE), which
calibrates the uncertainty proportion at both the word and sequence levels
according to the semantic relevance, with greater emphasis placed on keywords
and more relevant sequences when performing uncertainty quantification. We
compare WSE with 6 baseline methods on 5 free-form medical QA datasets,
utilizing 7 "off-the-shelf" large language models (LLMs), and show that WSE
exhibits superior performance on accurate uncertainty measurement under two
standard criteria for correctness evaluation (e.g., WSE outperforms the existing
state-of-the-art method by 3.23% AUROC on the MedQA dataset). Additionally, in
terms of the potential for real-world medical QA applications, we achieve a
significant enhancement in the performance of LLMs when employing sequences
with lower uncertainty, identified by WSE, as final answers (e.g., +6.36%
accuracy improvement on the COVID-QA dataset), without requiring any additional
task-specific fine-tuning or architectural modifications.
| 2024 | Computation and Language |
Can Large Language Models Detect Misinformation in Scientific News
Reporting? | Scientific facts are often spun in the popular press with the intent to
influence public opinion and action, as was evidenced during the COVID-19
pandemic. Automatic detection of misinformation in the scientific domain is
challenging because of the distinct styles of writing in these two media types
and is still in its nascence. Most research on the validity of scientific
reporting treats this problem as a claim verification challenge. In doing so,
significant expert human effort is required to generate appropriate claims. Our
solution bypasses this step and addresses a more real-world scenario where such
explicit, labeled claims may not be available. The central research question of
this paper is whether it is possible to use large language models (LLMs) to
detect misinformation in scientific reporting. To this end, we first present a
new labeled dataset SciNews, containing 2.4k scientific news stories drawn from
trusted and untrustworthy sources, paired with related abstracts from the
CORD-19 database. Our dataset includes both human-written and LLM-generated
news articles, making it more comprehensive in terms of capturing the growing
trend of using LLMs to generate popular press articles. Then, we identify
dimensions of scientific validity in science news articles and explore how this
can be integrated into the automated detection of scientific misinformation. We
propose several baseline architectures using LLMs to automatically detect false
representations of scientific findings in the popular press. For each of these
architectures, we use several prompt engineering strategies including
zero-shot, few-shot, and chain-of-thought prompting. We also test these
architectures and prompting strategies on GPT-3.5, GPT-4, and Llama2-7B,
Llama2-13B.
| 2024 | Computation and Language |
Qsnail: A Questionnaire Dataset for Sequential Question Generation | The questionnaire is a professional research methodology used for both
qualitative and quantitative analysis of human opinions, preferences,
attitudes, and behaviors. However, designing and evaluating questionnaires
demands significant effort due to their intricate and complex structure.
Questionnaires entail a series of questions that must conform to intricate
constraints involving the questions, options, and overall structure.
Specifically, the questions should be relevant and specific to the given
research topic and intent. The options should be tailored to the questions,
ensuring they are mutually exclusive, complete, and ordered sensibly.
Moreover, the sequence of questions should follow a logical order, grouping
similar topics together. As a result, automatically generating questionnaires
presents a significant challenge and this area has received limited attention
primarily due to the scarcity of high-quality datasets. To address these
issues, we present Qsnail, the first dataset specifically constructed for the
questionnaire generation task, which comprises 13,168 human-written
questionnaires gathered from online platforms. We further conduct experiments
on Qsnail, and the results reveal that retrieval models and traditional
generative models do not fully align with the given research topic and intents.
Large language models, while more closely related to the research topic and
intents, exhibit significant limitations in terms of diversity and specificity.
Despite enhancements through the chain-of-thought prompt and finetuning,
questionnaires generated by language models still fall short of human-written
questionnaires. Therefore, questionnaire generation is challenging and needs to
be further explored. The dataset is available at:
https://github.com/LeiyanGithub/qsnail.
| 2024 | Computation and Language |
Can Language Models Act as Knowledge Bases at Scale? | Large language models (LLMs) have demonstrated remarkable proficiency in
understanding and generating responses to complex queries through large-scale
pre-training. However, the efficacy of these models in memorizing and reasoning
over large-scale structured knowledge, especially world knowledge that
explicitly covers abundant factual information, remains questionable. Addressing
this gap, our research investigates whether LLMs can effectively store, recall,
and reason with knowledge on a large scale comparable to latest knowledge bases
(KBs) such as Wikidata. Specifically, we focus on three crucial aspects to
study the viability: (1) the efficiency of LLMs with different sizes in
memorizing the exact knowledge in the large-scale KB; (2) the flexibility of
recalling the memorized knowledge in response to natural language queries; (3)
the capability to infer new knowledge through reasoning. Our findings indicate
that while LLMs hold promise as large-scale KBs capable of retrieving and
responding with flexibility, enhancements in their reasoning capabilities are
necessary to fully realize their potential.
| 2024 | Computation and Language |
GATE X-E : A Challenge Set for Gender-Fair Translations from
Weakly-Gendered Languages | Neural Machine Translation (NMT) continues to improve in quality and
adoption, yet the inadvertent perpetuation of gender bias remains a significant
concern. Despite numerous studies on gender bias in translations into English
from weakly-gendered languages, there are no benchmarks for evaluating this
phenomenon or for assessing mitigation strategies. To address this gap, we
introduce GATE X-E, an extension to the GATE (Rarrick et al., 2023) corpus,
that consists of human translations from Turkish, Hungarian, Finnish, and
Persian into English. Each translation is accompanied by feminine, masculine,
and neutral variants. The dataset, which contains between 1250 and 1850
instances for each of the four language pairs, features natural sentences with
a wide range of sentence lengths and domains, challenging translation rewriters
on various linguistic phenomena. Additionally, we present a translation gender
rewriting solution built with GPT-4 and use GATE X-E to evaluate it. We open
source our contributions to encourage further research on gender debiasing.
| 2024 | Computation and Language |
Mitigating the Linguistic Gap with Phonemic Representations for Robust
Multilingual Language Understanding | Approaches to improving multilingual language understanding often require
multiple languages during the training phase, rely on complicated training
techniques, and -- importantly -- struggle with significant performance gaps
between high-resource and low-resource languages. We hypothesize that the
performance gaps between languages are affected by linguistic gaps between
those languages and provide a novel solution for robust multilingual language
modeling by employing phonemic representations (specifically, using phonemes as
input tokens to LMs rather than subwords). We present quantitative evidence
from three cross-lingual tasks that demonstrate the effectiveness of phonemic
representation, which is further justified by a theoretical analysis of the
cross-lingual performance gap.
| 2024 | Computation and Language |
CEV-LM: Controlled Edit Vector Language Model for Shaping Natural
Language Generations | As large-scale language models become the standard for text generation, there
is a greater need to tailor the generations to be more or less concise,
targeted, and informative, depending on the audience/application. Existing
control approaches primarily adjust the semantic (e.g., emotion, topics),
structural (e.g., syntax tree, parts-of-speech), and lexical (e.g.,
keyword/phrase inclusion) properties of text, but are insufficient to
accomplish complex objectives such as pacing, which controls the complexity and
readability of the text. In this paper, we introduce CEV-LM - a lightweight,
semi-autoregressive language model that utilizes constrained edit vectors to
control three complementary metrics (speed, volume, and circuitousness) that
quantify the shape of text (e.g., pacing of content). We study an extensive set
of state-of-the-art CTG models and find that CEV-LM provides significantly more
targeted and precise control of these three metrics while preserving semantic
content, using less training data, and containing fewer parameters.
| 2024 | Computation and Language |
Leveraging Large Language Models for Concept Graph Recovery and Question
Answering in NLP Education | In the domain of Natural Language Processing (NLP), Large Language Models
(LLMs) have demonstrated promise in text-generation tasks. However, their
educational applications, particularly for domain-specific queries, remain
underexplored. This study investigates LLMs' capabilities in educational
scenarios, focusing on concept graph recovery and question-answering (QA). We
assess LLMs' zero-shot performance in creating domain-specific concept graphs
and introduce TutorQA, a new expert-verified NLP-focused benchmark for
scientific graph reasoning and QA. TutorQA consists of five tasks with 500 QA
pairs. To tackle TutorQA queries, we present CGLLM, a pipeline integrating
concept graphs with LLMs for answering diverse questions. Our results indicate
that LLMs' zero-shot concept graph recovery is competitive with supervised
methods, showing an average 3% F1 score improvement. In TutorQA tasks, LLMs
achieve up to 26% F1 score enhancement. Moreover, human evaluation and analysis
show that CGLLM generates answers with more fine-grained concepts.
| 2024 | Computation and Language |
Mitigating Biases of Large Language Models in Stance Detection with
Calibration | Large language models (LLMs) have achieved remarkable progress in many
natural language processing tasks. However, our experiment reveals that, in
stance detection tasks, LLMs may generate biased stances due to spurious
sentiment-stance correlation and preference towards certain individuals and
topics, thus harming their performance. Therefore, in this paper, we propose to
Mitigate Biases of LLMs in stance detection with Calibration (MB-Cal), in
which a novel gated calibration network is devised to mitigate the biases in
the stance reasoning results of LLMs. Further, to make the calibration more
accurate and generalizable, we construct counterfactual augmented data to
rectify stance biases. Experimental results on in-target and zero-shot stance
detection tasks show that the proposed MB-Cal can effectively mitigate biases
of LLMs, achieving state-of-the-art results.
| 2024 | Computation and Language |
Multi-modal Stance Detection: New Datasets and Model | Stance detection is a challenging task that aims to identify public opinion
from social media platforms with respect to specific targets. Previous work on
stance detection largely focused on pure texts. In this paper, we study
multi-modal stance detection for tweets consisting of texts and images, which
are prevalent in today's fast-growing social media platforms where people often
post multi-modal messages. To this end, we create five new multi-modal stance
detection datasets of different domains based on Twitter, in which each example
consists of a text and an image. In addition, we propose a simple yet effective
Targeted Multi-modal Prompt Tuning framework (TMPT), where target information
is leveraged to learn multi-modal stance features from textual and visual
modalities. Experimental results on our three benchmark datasets show that the
proposed TMPT achieves state-of-the-art performance in multi-modal stance
detection.
| 2,024 | Computation and Language |
Hint-before-Solving Prompting: Guiding LLMs to Effectively Utilize
Encoded Knowledge | Large Language Models (LLMs) have recently showcased remarkable
generalizability in various domains. Despite their extensive knowledge, LLMs
still face challenges in efficiently utilizing encoded knowledge to develop
accurate and logical reasoning processes. To mitigate this problem, we
introduce Hint-before-Solving Prompting (HSP), which guides the model to
generate hints (e.g., specific knowledge or key ideas) for solving the problem
and then generate solutions containing intermediate reasoning steps. Since HSP
is orthogonal to prompting methods (e.g., Chain-of-Thought (CoT)), we applied
HSP to CoT, Least-to-Most, Plan-and-Solve, and Standard promptings. The results
of extensive experiments on 6 reasoning benchmarks and 4 open-source LLMs
demonstrate that HSP can effectively improve the accuracy of reasoning tasks:
(1) By applying high-quality hint-enhanced HSP to CoT prompting,
Llama2-70B-Chat shows an improvement of 9.7. (2) Beyond exploring training-free
LLM capabilities, we built the HSPMATH dataset based on HSP and fine-tuned
Llemma-7B, reaching 64.3 accuracy, surpassing GPT-3.5 and WizardMath-13B. We
make our code and dataset publicly available at
\url{https://github.com/jinlanfu/HSP}.
| 2,024 | Computation and Language |
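A minimal sketch of the two-stage prompting pattern described in the abstract above: first ask the model for hints, then ask for a step-by-step solution conditioned on those hints. The prompt wording, the `generate` stub, and the example question are illustrative assumptions rather than the paper's actual templates.

```python
# Sketch of Hint-before-Solving Prompting (HSP); the prompt wording and the
# `generate` stub are illustrative assumptions, not the paper's templates.

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real API or local model."""
    return f"<model output for: {prompt.splitlines()[0]}>"

def hint_before_solving(question: str) -> str:
    # Stage 1: elicit hints (specific knowledge or key ideas) for the problem.
    hint_prompt = (
        f"Question: {question}\n"
        "Before solving, list the key knowledge or ideas needed as hints."
    )
    hints = generate(hint_prompt)

    # Stage 2: solve step by step, conditioned on the self-generated hints
    # (here combined with Chain-of-Thought-style reasoning).
    solve_prompt = (
        f"Question: {question}\n"
        f"Hints: {hints}\n"
        "Using the hints, reason step by step and state the final answer."
    )
    return generate(solve_prompt)

print(hint_before_solving("A train travels 60 km in 45 minutes. What is its average speed in km/h?"))
```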
Assessing generalization capability of text ranking models in Polish | Retrieval-augmented generation (RAG) is becoming an increasingly popular
technique for integrating internal knowledge bases with large language models.
In a typical RAG pipeline, three models are used, responsible for the
retrieval, reranking, and generation stages. In this article, we focus on the
reranking problem for the Polish language, examining the performance of
rerankers and comparing their results with available retrieval models. We
conduct a comprehensive evaluation of existing models and those trained by us,
utilizing a benchmark of 41 diverse information retrieval tasks for the Polish
language. The results of our experiments show that most models struggle with
out-of-domain generalization. However, a combination of an effective
optimization method and a large training dataset allows for building rerankers that are both
compact in size and capable of generalization. The best of our models
establishes a new state-of-the-art for reranking in the Polish language,
outperforming existing models with up to 30 times more parameters.
| 2,024 | Computation and Language |
Triad: A Framework Leveraging a Multi-Role LLM-based Agent to Solve
Knowledge Base Question Answering | Recent progress with LLM-based agents has shown promising results across
various tasks. However, their use in answering questions from knowledge bases
remains largely unexplored. Implementing a KBQA system using traditional
methods is challenging due to the shortage of task-specific training data and
the complexity of creating task-focused model structures. In this paper, we
present Triad, a unified framework that utilizes an LLM-based agent with three
roles for KBQA tasks. The agent is assigned three roles to tackle different
KBQA subtasks: agent as a generalist for mastering various subtasks, as a
decision maker for the selection of candidates, and as an advisor for answering
questions with knowledge. Our KBQA framework is executed in four phases,
involving the collaboration of the agent's multiple roles. We evaluated the
performance of our framework using three benchmark datasets, and the results
show that our framework outperforms state-of-the-art systems on the LC-QuAD and
YAGO-QA benchmarks, yielding F1 scores of 11.8% and 20.7%, respectively.
| 2,024 | Computation and Language |
Understanding and Patching Compositional Reasoning in LLMs | LLMs have marked a revolutionary shift, yet they falter when faced with
compositional reasoning tasks. Our research embarks on a quest to uncover the
root causes of compositional reasoning failures of LLMs, uncovering that most
of them stem from the improperly generated or leveraged implicit reasoning
results. Inspired by our empirical findings, we resort to Logit Lens and an
intervention experiment to dissect the inner hidden states of LLMs. This deep
dive reveals that implicit reasoning results indeed surface within middle
layers and play a causative role in shaping the final explicit reasoning
results. Our exploration further locates multi-head self-attention (MHSA)
modules within these layers, which emerge as the linchpins in accurate
generation and leveraging of implicit reasoning results. Grounded in the above
findings, we develop CREME, a lightweight method to patch errors in
compositional reasoning via editing the located MHSA modules. Our empirical
evidence stands testament to CREME's effectiveness, paving the way for
autonomously and continuously enhancing compositional reasoning capabilities in
language models.
| 2,024 | Computation and Language |
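The Logit Lens inspection mentioned in the abstract above can be illustrated with a small numpy sketch: an intermediate hidden state is passed through a final LayerNorm and the unembedding matrix to read off which tokens a middle layer already favors. The dimensions and random weights are stand-ins for a real transformer, not the models studied in the paper.

```python
import numpy as np

# Logit-lens style readout of a middle-layer hidden state (illustrative only:
# random weights stand in for a real model's final LayerNorm and unembedding).
d_model, vocab_size = 64, 1000
rng = np.random.default_rng(0)

h_mid = rng.normal(size=d_model)                    # hidden state at a middle layer
gamma, beta = np.ones(d_model), np.zeros(d_model)   # final LayerNorm parameters
W_U = rng.normal(size=(d_model, vocab_size))        # unembedding matrix

def layer_norm(x, gamma, beta, eps=1e-5):
    return gamma * (x - x.mean()) / np.sqrt(x.var() + eps) + beta

logits = layer_norm(h_mid, gamma, beta) @ W_U       # project into vocabulary space
top5 = np.argsort(logits)[-5:][::-1]
print("token ids the middle layer already favors:", top5)
```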
INSTRUCTIR: A Benchmark for Instruction Following of Information
Retrieval Models | Despite the critical need to align search targets with users' intention,
retrievers often only prioritize query information without delving into the
users' intended search context. Enhancing the capability of retrievers to
understand intentions and preferences of users, akin to language model
instructions, has the potential to yield more aligned search targets. Prior
studies restrict the application of instructions in information retrieval to a
task description format, neglecting the broader context of diverse and evolving
search scenarios. Furthermore, the prevailing benchmarks utilized for
evaluation lack explicit tailoring to assess instruction-following ability,
thereby hindering progress in this field. In response to these limitations, we
propose a novel benchmark, INSTRUCTIR, specifically designed to evaluate
instruction-following ability in information retrieval tasks. Our approach
focuses on user-aligned instructions tailored to each query instance,
reflecting the diverse characteristics inherent in real-world search scenarios.
Through experimental analysis, we observe that retrievers fine-tuned to follow
task-style instructions, such as INSTRUCTOR, can underperform compared to their
non-instruction-tuned counterparts. This underscores potential overfitting
issues inherent in constructing retrievers trained on existing
instruction-aware retrieval datasets.
| 2,024 | Computation and Language |
AURA: Natural Language Reasoning for Aleatoric Uncertainty in Rationales | Rationales behind answers not only explain model decisions but also help language
models reason well on complex reasoning tasks. However, obtaining impeccable
rationales is often impossible. Besides, it is non-trivial to estimate the
degree to which the rationales are faithful enough to encourage model
performance. Thus, such reasoning tasks often compel models to output correct
answers under undesirable rationales and are sub-optimal compared to what the
models are fully capable of. In this work, we propose how to deal with
imperfect rationales causing aleatoric uncertainty. We first define the
ambiguous rationales with entropy scores of given rationales, using model prior
beliefs as informativeness. We then guide models to select one of two different
reasoning models according to the ambiguity of rationales. We empirically show
that our proposed method remains robustly superior under adversarial rationale
quality and in low-resource settings.
| 2,024 | Computation and Language |
Rule or Story, Which is a Better Commonsense Expression for Talking with
Large Language Models? | Building machines with commonsense has been a longstanding challenge in NLP
due to the reporting bias of commonsense rules and the exposure bias of
rule-based commonsense reasoning. In contrast, humans convey and pass down
commonsense implicitly through stories. This paper investigates the inherent
commonsense ability of large language models (LLMs) expressed through
storytelling. We systematically investigate and compare stories and rules for
retrieving and leveraging commonsense in LLMs. Experimental results on 28
commonsense QA datasets show that stories outperform rules as the expression
for retrieving commonsense from LLMs, exhibiting higher generation confidence
and commonsense accuracy. Moreover, stories are the more effective commonsense
expression for answering questions regarding daily events, while rules are more
effective for scientific questions. This aligns with the reporting bias of
commonsense in text corpora. We further show that the correctness and relevance
of commonsense stories can be further improved via iterative self-supervised
fine-tuning. These findings emphasize the importance of using appropriate
language to express, retrieve, and leverage commonsense for LLMs, highlighting
a promising direction for better exploiting their commonsense abilities.
| 2,024 | Computation and Language |
Rethinking Scientific Summarization Evaluation: Grounding Explainable
Metrics on Facet-aware Benchmark | The summarization capabilities of pretrained and large language models (LLMs)
have been widely validated in general areas, but their use in scientific
corpus, which involves complex sentences and specialized knowledge, has been
less assessed. This paper presents conceptual and experimental analyses of
scientific summarization, highlighting the inadequacies of traditional
evaluation methods, such as $n$-gram, embedding comparison, and QA,
particularly in providing explanations, grasping scientific concepts, or
identifying key content. Subsequently, we introduce the Facet-aware Metric
(FM), employing LLMs for advanced semantic matching to evaluate summaries based
on different aspects. This facet-aware approach offers a thorough evaluation of
abstracts by decomposing the evaluation task into simpler subtasks. Recognizing
the absence of an evaluation benchmark in this domain, we curate a Facet-based
scientific summarization Dataset (FD) with facet-level annotations. Our
findings confirm that FM offers a more logical approach to evaluating
scientific summaries. In addition, fine-tuned smaller models can compete with
LLMs in scientific contexts, while LLMs have limitations in learning from
in-context information in scientific domains. This suggests an area for future
enhancement of LLMs.
| 2,024 | Computation and Language |
Small Language Model Is a Good Guide for Large Language Model in Chinese
Entity Relation Extraction | Recently, large language models (LLMs) have been successful in relational
extraction (RE) tasks, especially in the few-shot learning. An important
problem in the field of RE is long-tailed data, while not much attention is
currently paid to this problem using LLM approaches. Therefore, in this paper,
we propose SLCoLM, a model collaboration framework, to mitigate the data
long-tail problem. In our framework, we use the
``\textit{Training-Guide-Predict}'' strategy to combine the strengths of
pre-trained language models (PLMs) and LLMs, where a task-specific PLM
framework acts as a tutor, transfers task knowledge to the LLM, and guides the
LLM in performing RE tasks. Our experiments on an RE dataset rich in relation
types show that the approach in this paper facilitates RE of long-tail relation
types.
| 2,024 | Computation and Language |
Novi jezi\v{c}ki modeli za srpski jezik | The paper will briefly present the development history of transformer-based
language models for the Serbian language. Several new models for text
generation and vectorization, trained on the resources of the Society for
Language Resources and Technologies, will also be presented. Ten selected
vectorization models for Serbian, including two new ones, will be compared on
four natural language processing tasks. The paper analyzes which models are
best for each selected task, how their size and the size of their training
sets affect performance on those tasks, and what the optimal setting is for
training the best language models for the Serbian language.
| 2,024 | Computation and Language |
Enhancing Temporal Knowledge Graph Forecasting with Large Language
Models via Chain-of-History Reasoning | Temporal Knowledge Graph (TKG) forecasting aims to predict future facts based
on given histories. Most recent graph-based models excel at capturing
structural information within TKGs but lack semantic comprehension abilities.
Nowadays, with the surge of LLMs, the LLM-based TKG prediction model has
emerged. However, the existing LLM-based model exhibits three shortcomings: (1)
It only focuses on the first-order history for prediction while ignoring
high-order historical information, resulting in the provided information for
LLMs being extremely limited. (2) LLMs struggle with optimal reasoning
performance under heavy historical information loads. (3) For TKG prediction,
the temporal reasoning capability of LLM alone is limited. To address the first
two challenges, we propose Chain-of-History (CoH) reasoning which explores
high-order histories step-by-step, achieving effective utilization of
high-order historical information for LLMs on TKG prediction. To address the
third issue, we design CoH as a plug-and-play module to enhance the performance
of graph-based models for TKG prediction. Extensive experiments on three
datasets and backbones demonstrate the effectiveness of CoH.
| 2,024 | Computation and Language |
On the Tip of the Tongue: Analyzing Conceptual Representation in Large
Language Models with Reverse-Dictionary Probe | Probing and enhancing large language models' reasoning capacity remains a
crucial open question. Here we re-purpose the reverse dictionary task as a case
study to probe LLMs' capacity for conceptual inference. We use in-context
learning to guide the models to generate the term for an object concept implied
in a linguistic description. Models robustly achieve high accuracy in this
task, and their representation space encodes information about object
categories and fine-grained features. Further experiments suggest that the
conceptual inference ability as probed by the reverse-dictionary task predicts
the models' general reasoning performance across multiple benchmarks, despite
similar syntactic generalization behaviors across models. Explorative analyses
suggest that prompting LLMs with description$\Rightarrow$word examples may
induce generalization beyond surface-level differences in task construals and
facilitate models on broader commonsense reasoning problems.
| 2,024 | Computation and Language |
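The in-context setup described above can be approximated by a few description => word demonstrations followed by a query; the demonstrations and the arrow format below are assumptions for illustration, not the paper's exact prompt.

```python
# Build a reverse-dictionary prompt: description => word demonstrations,
# then a query description whose target word the model should produce.
demos = [
    ("a large African animal with a very long neck", "giraffe"),
    ("an instrument that measures atmospheric pressure", "barometer"),
    ("a person who compiles dictionaries", "lexicographer"),
]
query = "a round object that is thrown, kicked, or hit in games"

prompt = "\n".join(f"{description} => {word}" for description, word in demos)
prompt += f"\n{query} =>"
print(prompt)   # feed this to an LLM and read off the generated term
```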
Transferring BERT Capabilities from High-Resource to Low-Resource
Languages Using Vocabulary Matching | Pre-trained language models have revolutionized the natural language
understanding landscape, most notably BERT (Bidirectional Encoder
Representations from Transformers). However, a significant challenge remains
for low-resource languages, where limited data hinders the effective training
of such models. This work presents a novel approach to bridge this gap by
transferring BERT capabilities from high-resource to low-resource languages
using vocabulary matching. We conduct experiments on the Silesian and Kashubian
languages and demonstrate the effectiveness of our approach to improve the
performance of BERT models even when the target language has minimal training
data. Our results highlight the potential of the proposed technique to
effectively train BERT models for low-resource languages, thus democratizing
access to advanced language understanding models.
| 2,024 | Computation and Language |
Tug-of-War Between Knowledge: Exploring and Resolving Knowledge
Conflicts in Retrieval-Augmented Language Models | Retrieval-augmented language models (RALMs) have demonstrated significant
potential in refining and expanding their internal memory by retrieving
evidence from external sources. However, RALMs will inevitably encounter
knowledge conflicts when integrating their internal memory with external
sources. Knowledge conflicts can ensnare RALMs in a tug-of-war between
knowledge, limiting their practical applicability. In this paper, we focus on
exploring and resolving knowledge conflicts in RALMs. First, we present an
evaluation framework for assessing knowledge conflicts across various
dimensions. Then, we investigate the behavior and preference of RALMs from the
following two perspectives: (1) Conflicts between internal memory and external
sources: We find that stronger RALMs exhibit the Dunning-Kruger effect,
persistently favoring their faulty internal memory even when correct evidence
is provided. Besides, RALMs exhibit an availability bias towards common
knowledge; (2) Conflicts between truthful, irrelevant and misleading evidence:
We reveal that RALMs follow the principle of majority rule, leaning towards
placing trust in evidence that appears more frequently. Moreover, we find that
RALMs exhibit confirmation bias, and are more willing to choose evidence that
is consistent with their internal memory. To solve the challenge of knowledge
conflicts, we propose a method called Conflict-Disentangle Contrastive Decoding
(CD2) to better calibrate the model's confidence. Experimental results
demonstrate that our CD2 can effectively resolve knowledge conflicts in RALMs.
| 2,024 | Computation and Language |
J-UniMorph: Japanese Morphological Annotation through the Universal
Feature Schema | We introduce a Japanese Morphology dataset, J-UniMorph, developed based on
the UniMorph feature schema. This dataset addresses the unique and rich verb
forms characteristic of the language's agglutinative nature. J-UniMorph
distinguishes itself from the existing Japanese subset of UniMorph, which is
automatically extracted from Wiktionary. On average, the Wiktionary Edition
features around 12 inflected forms for each word and is primarily dominated by
denominal verbs (i.e., [noun] +suru (do-PRS)). Morphologically, this form is
equivalent to the verb suru (do). In contrast, J-UniMorph explores a much
broader and more frequently used range of verb forms, offering 118 inflected
forms for each word on average. It includes honorifics, a range of politeness
levels, and other linguistic nuances, emphasizing the distinctive
characteristics of the Japanese language. This paper presents detailed
statistics and characteristics of J-UniMorph, comparing it with the Wiktionary
Edition. We publicly release J-UniMorph and its interactive visualizer,
aiming to support cross-linguistic research and various
applications.
| 2,024 | Computation and Language |
KoCoSa: Korean Context-aware Sarcasm Detection Dataset | Sarcasm is a way of verbal irony where someone says the opposite of what they
mean, often to ridicule a person, situation, or idea. It is often difficult to
detect sarcasm in dialogue, since detection must reflect the
context (i.e., dialogue history). In this paper, we introduce a new dataset for
the Korean dialogue sarcasm detection task, KoCoSa (Korean Context-aware
Sarcasm Detection Dataset), which consists of 12.8K daily Korean dialogues and
the labels for this task on the last response. To build the dataset, we propose
an efficient sarcasm detection dataset generation pipeline: 1) generating new
sarcastic dialogues from source dialogues with large language models, 2)
automatic and manual filtering of abnormal and toxic dialogues, and 3) human
annotation for the sarcasm detection task. We also provide a simple but
effective baseline for the Korean sarcasm detection task trained on our
dataset. Experimental results on the dataset show that our baseline system
outperforms strong baselines, including large language models such as GPT-3.5, in
the Korean sarcasm detection task. We show that the sarcasm detection task
relies deeply on the existence of sufficient context. We will release the
dataset at https://anonymous.4open.science/r/KoCoSa-2372.
| 2,024 | Computation and Language |
A Language Model's Guide Through Latent Space | Concept guidance has emerged as a cheap and simple way to control the
behavior of language models by probing their hidden representations for concept
vectors and using them to perturb activations at inference time. While the
focus of previous work has largely been on truthfulness, in this paper we
extend this framework to a richer set of concepts such as appropriateness,
humor, creativity and quality, and explore to what degree current detection and
guidance strategies work in these challenging settings. To facilitate
evaluation, we develop a novel metric for concept guidance that takes into
account both the success of concept elicitation as well as the potential
degradation in fluency of the guided model. Our extensive experiments reveal
that while some concepts such as truthfulness more easily allow for guidance
with current techniques, novel concepts such as appropriateness or humor either
remain difficult to elicit, need extensive tuning to work, or even experience
confusion. Moreover, we find that probes with optimal detection accuracies do
not necessarily make for the optimal guides, contradicting previous
observations for truthfulness. Our work warrants a deeper investigation into
the interplay between detectability, guidability, and the nature of the
concept, and we hope that our rich experimental test-bed for guidance research
inspires stronger follow-up approaches.
| 2,024 | Computation and Language |
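As a toy illustration of the activation-perturbation idea in the abstract above, the sketch below adds a scaled concept vector to one layer's output through a PyTorch forward hook. The layer, the random concept vector, and the strength alpha are placeholders, not the probes or models used in the paper.

```python
import torch
import torch.nn as nn

# Steer a layer's activations along a concept direction at inference time.
torch.manual_seed(0)
layer = nn.Linear(16, 16)
concept_vector = torch.randn(16)   # e.g. a probe direction for "humor" (random here)
alpha = 4.0                        # guidance strength

def steer(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output.
    return output + alpha * concept_vector

handle = layer.register_forward_hook(steer)
x = torch.randn(1, 16)
steered = layer(x)                 # activations shifted toward the concept
handle.remove()
plain = layer(x)
print(torch.norm(steered - plain))  # ~ alpha * ||concept_vector||
```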
Do LLMs Implicitly Determine the Suitable Text Difficulty for Users? | Education that suits the individual learning level is necessary to improve
students' understanding. The first step in achieving this purpose by using
large language models (LLMs) is to adjust the textual difficulty of the
response to students. This work analyzes how LLMs can implicitly adjust text
difficulty between user input and its generated text. To conduct the
experiments, we created a new dataset from Stack-Overflow to explore the
performance of question-answering-based conversation. Experimental results on
the Stack-Overflow dataset and the TSCC dataset, including multi-turn
conversations, show that LLMs can implicitly handle text difficulty between user
input and its generated response. We also observed that some LLMs can surpass
humans in handling text difficulty and the importance of instruction-tuning.
| 2,024 | Computation and Language |
Annotation and Classification of Relevant Clauses in
Terms-and-Conditions Contracts | In this paper, we propose a new annotation scheme to classify different types
of clauses in Terms-and-Conditions contracts with the ultimate goal of
supporting legal experts to quickly identify and assess problematic issues in
this type of legal document. To this end, we built a small corpus of
Terms-and-Conditions contracts and finalized an annotation scheme of 14
categories, eventually reaching an inter-annotator agreement of 0.92. Then, for
11 of them, we experimented with binary classification tasks using few-shot
prompting with a multilingual T5 and two fine-tuned versions of two BERT-based
LLMs for Italian. Our experiments showed the feasibility of automatic
classification of our categories by reaching accuracies ranging from .79 to .95
on validation tasks.
| 2,024 | Computation and Language |
NLAS-multi: A Multilingual Corpus of Automatically Generated Natural
Language Argumentation Schemes | Some of the major limitations identified in the areas of argument mining,
argument generation, and natural language argument analysis are related to the
complexity of annotating argumentatively rich data, the limited size of these
corpora, and the constraints imposed by the different languages and domains
in which these data are annotated. To address these limitations, in this paper
we present the following contributions: (i) an effective methodology for the
automatic generation of natural language arguments in different topics and
languages, (ii) the largest publicly available corpus of natural language
argumentation schemes, and (iii) a set of solid baselines and fine-tuned models
for the automatic identification of argumentation schemes.
| 2,024 | Computation and Language |
Is ChatGPT the Future of Causal Text Mining? A Comprehensive Evaluation
and Analysis | Causality is fundamental in human cognition and has drawn attention in
diverse research fields. With growing volumes of textual data, discerning
causalities within text data is crucial, and causal text mining plays a pivotal
role in extracting meaningful patterns. This study conducts comprehensive
evaluations of ChatGPT's causal text mining capabilities. Firstly, we introduce
a benchmark that extends beyond general English datasets, including
domain-specific and non-English datasets. We also provide an evaluation
framework to ensure fair comparisons between ChatGPT and previous approaches.
Finally, our analysis outlines the limitations and future challenges in
employing ChatGPT for causal text mining. Specifically, our analysis reveals
that ChatGPT serves as a good starting point for various datasets. However,
when equipped with a sufficient amount of training data, previous models still
surpass ChatGPT's performance. Additionally, ChatGPT suffers from the tendency
to falsely recognize non-causal sequences as causal sequences. These issues
become even more pronounced with advanced versions of the model, such as GPT-4.
In addition, we highlight the constraints of ChatGPT in handling complex
causality types, including both intra/inter-sentential and implicit causality.
The model also faces challenges with effectively leveraging in-context learning
and domain adaptation. We release our code to support further research and
development in this field.
| 2,024 | Computation and Language |
Does the Generator Mind its Contexts? An Analysis of Generative Model
Faithfulness under Context Transfer | The present study introduces the knowledge-augmented generator, which is
specifically designed to produce information that remains grounded in
contextual knowledge, regardless of alterations in the context. Previous
research has predominantly focused on examining hallucinations stemming from
static input, such as in the domains of summarization or machine translation.
However, our investigation delves into the faithfulness of generative question
answering in the presence of dynamic knowledge. Our objective is to explore the
existence of hallucinations arising from parametric memory when contextual
knowledge undergoes changes, while also analyzing the underlying causes for
their occurrence. In order to efficiently address this issue, we propose a
straightforward yet effective measure for detecting such hallucinations.
Intriguingly, our investigation uncovers that all models exhibit a tendency to
generate previous answers as hallucinations. To gain deeper insights into the
underlying causes of this phenomenon, we conduct a series of experiments that
verify the critical role played by context in hallucination, both during
training and testing, from various perspectives.
| 2,024 | Computation and Language |
INSTRAUG: Automatic Instruction Augmentation for Multimodal Instruction
Fine-tuning | Fine-tuning large language models (LLMs) on multi-task instruction-following
data has been proven to be a powerful learning paradigm for improving their
zero-shot capabilities on new tasks. Recent works about high-quality
instruction-following data generation and selection require large amounts of human
labor to conceive model-understandable instructions for the given tasks and
carefully filter the LLM-generated data. In this work, we introduce an
automatic instruction augmentation method named INSTRAUG in multimodal tasks.
It starts from a handful of basic and straightforward meta instructions but can
expand an instruction-following dataset by 30 times. Results on two popular
multimodal instruction-following benchmarks MULTIINSTRUCT and InstructBLIP show
that INSTRAUG can significantly improve the alignment of multimodal large
language models (MLLMs) across 12 multimodal tasks, which is even equivalent to
the benefits of scaling up training data multiple times.
| 2,024 | Computation and Language |
Noise-BERT: A Unified Perturbation-Robust Framework with Noise Alignment
Pre-training for Noisy Slot Filling Task | In a realistic dialogue system, the input information from users is often
subject to various types of input perturbations, which affects the slot-filling
task. Although rule-based data augmentation methods have achieved satisfactory
results, they fail to exhibit the desired generalization when faced with
unknown noise disturbances. In this study, we address the challenges posed by
input perturbations in slot filling by proposing Noise-BERT, a unified
Perturbation-Robust Framework with Noise Alignment Pre-training. Our framework
incorporates two Noise Alignment Pre-training tasks: Slot Masked Prediction and
Sentence Noisiness Discrimination, aiming to guide the pre-trained language
model in capturing accurate slot information and noise distribution. During
fine-tuning, we employ a contrastive learning loss to enhance the semantic
representation of entities and labels. Additionally, we introduce an
adversarial attack training strategy to improve the model's robustness.
Experimental results demonstrate the superiority of our proposed approach over
state-of-the-art models, and further analysis confirms its effectiveness and
generalization ability.
| 2,024 | Computation and Language |
"My Answer is C": First-Token Probabilities Do Not Match Text Answers in
Instruction-Tuned Language Models | The open-ended nature of language generation makes the evaluation of
autoregressive large language models (LLMs) challenging. One common evaluation
approach uses multiple-choice questions (MCQ) to limit the response space. The
model is then evaluated by ranking the candidate answers by the log probability
of the first token prediction. However, first-tokens may not consistently
reflect the final response output, due to the model's diverse response styles such
as starting with "Sure" or refusing to answer. Consequently, MCQ evaluation is
not indicative of model behaviour when interacting with users. But by how much?
We evaluate how aligned first-token evaluation is with the text output along
several dimensions, namely final option choice, refusal rate, choice
distribution and robustness under prompt perturbation. Our results show that
the two approaches are severely misaligned on all dimensions, reaching mismatch
rates over 60%. Models heavily fine-tuned on conversational or safety data are
especially impacted. Crucially, models remain misaligned even when we
increasingly constrain prompts, i.e., force them to start with an option letter
or example template. Our findings i) underscore the importance of inspecting
the text output, too, and ii) caution against relying solely on first-token
evaluation.
| 2,024 | Computation and Language |
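The mismatch the abstract above measures can be checked per question roughly as below: rank options by first-token log-probability, parse the option letter out of the generated text, and count disagreements. The log-probabilities, the generation, and the parsing regex are made-up examples.

```python
import re

# Compare first-token evaluation with the answer parsed from generated text.
first_token_logprobs = {"A": -2.1, "B": -0.7, "C": -1.3, "D": -3.0}  # toy values
generated_text = "Sure! After considering the options, my answer is C."

first_token_choice = max(first_token_logprobs, key=first_token_logprobs.get)

match = re.search(r"\b(?:answer is|answer:)\s*([ABCD])\b", generated_text)
text_choice = match.group(1) if match else None   # None ~ refusal / unparsable

print(first_token_choice, text_choice, first_token_choice == text_choice)
# -> B C False : this question counts as a mismatch between the two protocols
```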
Malaysian English News Decoded: A Linguistic Resource for Named Entity
and Relation Extraction | Standard English and Malaysian English exhibit notable differences, posing
challenges for natural language processing (NLP) tasks on Malaysian English.
Unfortunately, most of the existing datasets are mainly based on standard
English and therefore inadequate for improving NLP tasks in Malaysian English.
An experiment using state-of-the-art Named Entity Recognition (NER) solutions
on Malaysian English news articles highlights that they cannot handle
morphosyntactic variations in Malaysian English. To the best of our knowledge,
there is no annotated dataset available to improve the model. To address
these issues, we constructed a Malaysian English News (MEN) dataset, which
contains 200 news articles that are manually annotated with entities and
relations. We then fine-tuned the spaCy NER tool and validated that having a
dataset tailor-made for Malaysian English could improve the performance of NER
in Malaysian English significantly. This paper presents our effort in the data
acquisition, annotation methodology, and thorough analysis of the annotated
dataset. To validate the quality of the annotation, inter-annotator agreement
was used, followed by adjudication of disagreements by a subject matter expert.
Upon completion of these tasks, we managed to develop a dataset with 6,061
entities and 3,268 relation instances. Finally, we discuss the spaCy fine-tuning
setup and analyze the NER performance. This unique dataset will contribute
significantly to the advancement of NLP research in Malaysian English, allowing
researchers to accelerate their progress, particularly in NER and relation
extraction. The dataset and annotation guideline have been published on GitHub.
| 2,024 | Computation and Language |
Towards Unified Task Embeddings Across Multiple Models: Bridging the Gap
for Prompt-Based Large Language Models and Beyond | Task embedding, a meta-learning technique that captures task-specific
information, has become prevalent, especially in areas such as multi-task
learning, model editing, and interpretability. However, it faces challenges
with the emergence of prompt-guided Large Language Models (LLMs) operating in a
gradient-free manner. Existing task embedding methods rely on fine-tuned,
task-specific language models, which hinders the adaptability of task
embeddings across diverse models, especially prompt-based LLMs. To unleash the
power of task embedding in the era of LLMs, we propose a framework for unified
task embeddings (FUTE), harmonizing task embeddings from various models,
including smaller language models and LLMs with varied prompts, within a single
vector space. Such uniformity enables the comparison and analysis of
similarities amongst different models, extending the scope and utility of
existing task embedding methods in addressing multi-model scenarios, whilst
maintaining their performance to be comparable to architecture-specific
methods.
| 2,024 | Computation and Language |
Daisy-TTS: Simulating Wider Spectrum of Emotions via Prosody Embedding
Decomposition | We often verbally express emotions in a multifaceted manner: they may vary in
their intensities and may be expressed not just as a single emotion but as a mixture of
emotions. This wide spectrum of emotions is well-studied in the structural
model of emotions, which represents variety of emotions as derivative products
of primary emotions with varying degrees of intensity. In this paper, we
propose an emotional text-to-speech design to simulate a wider spectrum of
emotions grounded on the structural model. Our proposed design, Daisy-TTS,
incorporates a prosody encoder to learn emotionally-separable prosody embedding
as a proxy for emotion. This emotion representation allows the model to
simulate: (1) Primary emotions, as learned from the training samples, (2)
Secondary emotions, as a mixture of primary emotions, (3) Intensity-level, by
scaling the emotion embedding, and (4) Emotions polarity, by negating the
emotion embedding. Through a series of perceptual evaluations, Daisy-TTS
demonstrated overall higher emotional speech naturalness and emotion
perceivability compared to the baseline.
| 2,024 | Computation and Language |
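The four operations listed in the abstract above reduce to simple vector arithmetic over emotion embeddings; the sketch below uses random vectors as stand-ins for the prosody embeddings Daisy-TTS actually learns, so it only illustrates the interface, not the model.

```python
import numpy as np

# Emotion-embedding arithmetic: mixture, intensity scaling, and polarity negation.
rng = np.random.default_rng(0)
joy, trust = rng.normal(size=32), rng.normal(size=32)   # (1) primary emotions

secondary = (joy + trust) / 2    # (2) secondary emotion as a mixture, e.g. "love"
intense_joy = 1.5 * joy          # (3) intensity by scaling the embedding
opposite_joy = -joy              # (4) polarity by negating the embedding
print(secondary.shape, np.linalg.norm(intense_joy) / np.linalg.norm(joy))
```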
Balanced Data Sampling for Language Model Training with Clustering | Data plays a fundamental role in the training of Large Language Models
(LLMs). While attention has been paid to the collection and composition of
datasets, determining the data sampling strategy in training remains an open
question. Most LLMs are trained with a simple strategy, random sampling.
However, this sampling strategy ignores the unbalanced nature of training data
distribution, which can be sub-optimal. In this paper, we propose ClusterClip
Sampling to balance the text distribution of training data for better model
training. Specifically, ClusterClip Sampling utilizes data clustering to
reflect the data distribution of the training set and balances the common
samples and rare samples during training based on the cluster results. A
repetition clip operation is introduced to mitigate the overfitting issue caused
by samples from certain clusters. Extensive experiments validate the
effectiveness of ClusterClip Sampling, which outperforms random sampling and
other cluster-based sampling variants under various training datasets and large
language models.
| 2,024 | Computation and Language |
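A rough sketch of cluster-balanced sampling with a repetition clip, in the spirit of the abstract above: cluster document embeddings, upweight rare clusters, and cap how often any single sample may repeat. The embeddings, cluster count, clip threshold, and weighting scheme are toy assumptions; the paper's exact formulation may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))                 # stand-in document embeddings
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

cluster_sizes = np.bincount(labels)
weights = 1.0 / cluster_sizes[labels]           # rare clusters get sampled more often
weights /= weights.sum()

max_repeats = 4                                 # repetition clip per sample
counts = np.zeros(len(X), dtype=int)
batch = []
while len(batch) < 200:
    i = rng.choice(len(X), p=weights)
    if counts[i] < max_repeats:                 # skip over-repeated samples
        counts[i] += 1
        batch.append(i)
print(len(batch), counts.max())
```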
Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt
Politeness on LLM Performance | We investigate the impact of politeness levels in prompts on the performance
of large language models (LLMs). Polite language in human communications often
garners more compliance and effectiveness, while rudeness can cause aversion,
impacting response quality. We consider that LLMs mirror human communication
traits, suggesting they align with human cultural norms. We assess the impact
of politeness in prompts on LLMs across English, Chinese, and Japanese tasks.
We observed that impolite prompts often result in poor performance, but overly
polite language does not guarantee better outcomes. The best politeness level
differs across languages. This phenomenon suggests that LLMs not
only reflect human behavior but are also influenced by language, particularly
in different cultural contexts. Our findings highlight the need to factor in
politeness for cross-cultural natural language processing and LLM usage.
| 2,024 | Computation and Language |
Whose LLM is it Anyway? Linguistic Comparison and LLM Attribution for
GPT-3.5, GPT-4 and Bard | Large Language Models (LLMs) are capable of generating text that is similar
to or surpasses human quality. However, it is unclear whether LLMs tend to
exhibit distinctive linguistic styles akin to how human authors do. Through a
comprehensive linguistic analysis, we compare the vocabulary, Part-Of-Speech
(POS) distribution, dependency distribution, and sentiment of texts generated
by three of the most popular LLMs today (GPT-3.5, GPT-4, and Bard) in response to diverse
inputs. The results point to significant linguistic variations which, in turn,
enable us to attribute a given text to its LLM origin with a favorable 88\%
accuracy using a simple off-the-shelf classification model. Theoretical and
practical implications of this intriguing finding are discussed.
| 2,024 | Computation and Language |
Domain Generalization via Causal Adjustment for Cross-Domain Sentiment
Analysis | Domain adaptation has been widely adopted for cross-domain sentiment analysis
to transfer knowledge from the source domain to the target domain. However,
most methods are proposed under the assumption that the target (test) domain is
known, making them fail to generalize well on unknown test data that is not
always available in practice. In this paper, we focus on the problem of domain
generalization for cross-domain sentiment analysis. Specifically, we propose a
backdoor adjustment-based causal model to disentangle the domain-specific and
domain-invariant representations that play essential roles in tackling domain
shift. First, we rethink the cross-domain sentiment analysis task in a causal
view to model the causal-and-effect relationships among different variables.
Then, to learn an invariant feature representation, we remove the effect of
domain confounders (e.g., domain knowledge) using the backdoor adjustment. A
series of experiments over many homologous and diverse datasets show the great
performance and robustness of our model by comparing it with the
state-of-the-art domain generalization baselines.
| 2,024 | Computation and Language |
Less is More: Mitigating Multimodal Hallucination from an EOS Decision
Perspective | Large Multimodal Models (LMMs) often suffer from multimodal hallucinations,
wherein they may create content that is not present in the visual inputs. In
this paper, we explore a new angle of this issue: overly detailed training data
hinders the model's ability to timely terminate generation, leading to
continued outputs beyond visual perception limits. By investigating how the
model decides to terminate generation with EOS, the special end-of-sentence
token, we find that the model assesses the completeness of the entire sequence
by comparing the generated text with the image. This observation suggests that
the model possesses an inherent potential of making proper EOS decisions based
on its visual perception to avoid overly lengthy outputs. To take advantage of
such potential, we explore two methods to mitigate multimodal hallucinations: a
training objective that enables the model to reduce hallucinations by learning
from regular instruction data, and a data filtering strategy to prevent harmful
training data from exacerbating model hallucinations. Both methods
significantly reduce the hallucinations of LMMs, without requiring
any additional data or knowledge.
| 2,024 | Computation and Language |
LLMs with Industrial Lens: Deciphering the Challenges and Prospects -- A
Survey | Large language models (LLMs) have become the secret ingredient driving
numerous industrial applications, showcasing their remarkable versatility
across a diverse spectrum of tasks. From natural language processing and
sentiment analysis to content generation and personalized recommendations,
their unparalleled adaptability has facilitated widespread adoption across
industries. This transformative shift driven by LLMs underscores the need to
explore the underlying associated challenges and avenues for enhancement in
their utilization. In this paper, our objective is to unravel and evaluate the
obstacles and opportunities inherent in leveraging LLMs within an industrial
context. To this end, we conduct a survey involving a group of industry
practitioners, develop four research questions derived from the insights
gathered, and examine 68 industry papers to address these questions and derive
meaningful conclusions.
| 2,024 | Computation and Language |
LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named
Entity Recognition | Despite the impressive capabilities of large language models (LLMs), their
performance on information extraction tasks is still not entirely satisfactory.
However, their remarkable rewriting capabilities and extensive world knowledge
offer valuable insights to improve these tasks. In this paper, we propose
$LLM-DA$, a novel data augmentation technique based on LLMs for the few-shot
NER task. To overcome the limitations of existing data augmentation methods
that compromise semantic integrity and address the uncertainty inherent in
LLM-generated text, we leverage the distinctive characteristics of the NER task
by augmenting the original data at both the contextual and entity levels. Our
approach involves employing 14 contextual rewriting strategies, designing
entity replacements of the same type, and incorporating noise injection to
enhance robustness. Extensive experiments demonstrate the effectiveness of our
approach in enhancing NER model performance with limited data. Furthermore,
additional analyses provide further evidence supporting the assertion that the
quality of the data we generate surpasses that of other existing methods.
| 2,024 | Computation and Language |
Two Counterexamples to Tokenization and the Noiseless Channel | In Tokenization and the Noiseless Channel (Zouhar et al., 2023a), R\'enyi
efficiency is suggested as an intrinsic mechanism for evaluating a tokenizer:
for NLP tasks, the tokenizer which leads to the highest R\'enyi efficiency of
the unigram distribution should be chosen. The R\'enyi efficiency is thus
treated as a predictor of downstream performance (e.g., predicting BLEU for a
machine translation task), without the expensive step of training multiple
models with different tokenizers. Although useful, the predictive power of this
metric is not perfect, and the authors note there are additional qualities of a
good tokenization scheme that R\'enyi efficiency alone cannot capture.
We describe two variants of BPE tokenization which can arbitrarily increase
R\'enyi efficiency while decreasing the downstream model performance. These
counterexamples expose cases where R\'enyi efficiency fails as an intrinsic
tokenization metric and thus give insight for building more accurate
predictors.
| 2,024 | Computation and Language |
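For reference, the Rényi efficiency of a tokenizer's unigram distribution is usually computed as the Rényi entropy normalized by the maximum entropy log|V|; the helper below follows that standard definition, with a toy distribution and alpha = 2.5 chosen for illustration rather than taken from the paper.

```python
import numpy as np

def renyi_efficiency(p, alpha=2.5):
    """H_alpha(p) / log|V|, with H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    h_alpha = np.log(np.power(p, alpha).sum()) / (1.0 - alpha)
    return h_alpha / np.log(len(p))

uniform = np.ones(100) / 100                      # maximally even unigram distribution
skewed = np.array([0.5] + [0.5 / 99] * 99)        # one dominant token
print(renyi_efficiency(uniform), renyi_efficiency(skewed))   # 1.0 vs. a lower value
```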
The Impact of Word Splitting on the Semantic Content of Contextualized
Word Representations | When deriving contextualized word representations from language models, a
decision needs to be made on how to obtain one for out-of-vocabulary (OOV)
words that are segmented into subwords. What is the best way to represent these
words with a single vector, and are these representations of worse quality than
those of in-vocabulary words? We carry out an intrinsic evaluation of
embeddings from different models on semantic similarity tasks involving OOV
words. Our analysis reveals, among other interesting findings, that the quality
of representations of words that are split is often, but not always, worse than
that of the embeddings of known words. Their similarity values, however, must
be interpreted with caution.
| 2,024 | Computation and Language |
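One common way to obtain the single vector discussed above is to mean-pool the subword embeddings of a split word; the sketch below uses random vectors in place of real contextualized representations and a simple cosine similarity, as in intrinsic semantic-similarity evaluations.

```python
import numpy as np

rng = np.random.default_rng(0)
subword_vectors = rng.normal(size=(3, 768))    # e.g. pieces of "un ##drink ##able"
word_vector = subword_vectors.mean(axis=0)     # single vector for the OOV word

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

known_word_vector = rng.normal(size=768)       # embedding of an in-vocabulary word
print(cosine(word_vector, known_word_vector))  # similarity score for evaluation
```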
Cleaner Pretraining Corpus Curation with Neural Web Scraping | The web contains large-scale, diverse, and abundant information to satisfy
the information-seeking needs of humans. Through meticulous data collection,
preprocessing, and curation, webpages can be used as a fundamental data
resource for language model pretraining. However, when confronted with the
progressively revolutionized and intricate nature of webpages,
rule-based/feature-based web scrapers are becoming increasingly inadequate.
This paper presents a simple, fast, and effective Neural web Scraper
(NeuScraper) to help extract primary and clean text contents from webpages.
Experimental results show that NeuScraper surpasses the baseline scrapers by
achieving more than a 20% improvement, demonstrating its potential in
extracting higher-quality data to facilitate the language model pretraining.
All of the code is available at https://github.com/OpenMatch/NeuScraper.
| 2,024 | Computation and Language |
ConceptMath: A Bilingual Concept-wise Benchmark for Measuring
Mathematical Reasoning of Large Language Models | This paper introduces ConceptMath, a bilingual (English and Chinese),
fine-grained benchmark that evaluates concept-wise mathematical reasoning of
Large Language Models (LLMs). Unlike traditional benchmarks that evaluate
general mathematical reasoning with an average accuracy, ConceptMath
systematically organizes math problems under a hierarchy of math concepts, so
that mathematical reasoning can be evaluated at different granularity with
concept-wise accuracies. Based on our ConceptMath, we evaluate a broad range
of LLMs, and we observe that existing LLMs, though achieving high average accuracies
on traditional benchmarks, exhibit significant performance variations across
different math concepts and may even fail catastrophically on the most basic
ones. Besides, we also introduce an efficient fine-tuning strategy to enhance
the weaknesses of existing LLMs. Finally, we hope ConceptMath could guide the
developers to understand the fine-grained mathematical abilities of their
models and facilitate the growth of foundation models.
| 2,024 | Computation and Language |
Middleware for LLMs: Tools Are Instrumental for Language Agents in
Complex Environments | The applications of large language models (LLMs) have expanded well beyond
the confines of text processing, signaling a new era where LLMs are envisioned
as generalist language agents capable of operating within complex real-world
environments. These environments are often highly expansive, making it
impossible for the LLM to process them within its short-term memory. Motivated
by recent research on extending the capabilities of LLMs with tools, this paper
investigates the intriguing potential of tools to augment LLMs in handling such
complexity. To this end, we design customized tools to aid in the proactive
exploration within these massive environments. Such tools can serve as a
middleware layer shielding the LLM from environmental complexity. In two
representative complex environments -- knowledge bases (KBs) and databases --
we demonstrate the significant potential of augmenting language agents with
tools in complex environments. Notably, equipped with these tools, GPT-4
achieves 2.8X the performance of the best baseline in tasks requiring access to
database content and 2.2X in KB tasks. Our findings illuminate the path for
advancing language agents in complex real-world applications.
| 2,024 | Computation and Language |
Is Cognition and Action Consistent or Not: Investigating Large Language
Model's Personality | In this study, we investigate the reliability of Large Language Models (LLMs)
in professing human-like personality traits through responses to personality
questionnaires. Our goal is to evaluate the consistency between LLMs' professed
personality inclinations and their actual "behavior", examining the extent to
which these models can emulate human-like personality patterns. Through a
comprehensive analysis of LLM outputs against established human benchmarks, we
seek to understand the cognition-action divergence in LLMs and propose
hypotheses for the observed results based on psychological theories and
metrics.
| 2,024 | Computation and Language |
UFO: a Unified and Flexible Framework for Evaluating Factuality of Large
Language Models | Large language models (LLMs) may generate text that lacks consistency with
human knowledge, leading to factual inaccuracies or \textit{hallucination}.
Existing research for evaluating the factuality of LLMs involves extracting
fact claims using an LLM and verifying them against a predefined fact source.
However, these evaluation metrics are task-specific and not scalable, and the
substitutability of fact sources in different tasks is under-explored. To
address these challenges, we categorize four available fact sources:
human-written evidence, reference documents, search engine results, and LLM
knowledge, along with five text generation tasks containing six representative
datasets. Then, we propose \texttt{UFO}, an LLM-based unified and flexible
evaluation framework to verify facts against plug-and-play fact sources. We
implement five evaluation scenarios based on this framework. Experimental
results show that for most QA tasks, human-written evidence and reference
documents are crucial, and they can substitute for each other in
retrieval-augmented QA tasks. In news fact generation tasks, search engine
results and LLM knowledge are essential. Our dataset and code are available at
\url{https://github.com/WaldenRUC/UFO}.
| 2,024 | Computation and Language |
Unveiling Linguistic Regions in Large Language Models | Large Language Models (LLMs) have demonstrated considerable cross-lingual
alignment and generalization ability. Current research primarily focuses on
improving LLMs' cross-lingual generalization capabilities. However, there is
still a lack of research on the intrinsic mechanisms of how LLMs achieve
cross-lingual alignment. From the perspective of region partitioning, this
paper conducts several investigations on the linguistic competence of LLMs. We
discover a core region in LLMs that corresponds to linguistic competence,
accounting for approximately 1% of the total model parameters. Removing this
core region by setting parameters to zero results in a significant performance
decrease across 30 different languages. Furthermore, this core region exhibits
significant dimensional dependency, with perturbations to even a single parameter on
specific dimensions leading to a loss of linguistic competence. Moreover, we
discover that distinct regions exist for different monolingual families, and
disruption to these specific regions substantially reduces the LLMs'
proficiency in those corresponding languages. Our research also indicates that
freezing the core linguistic region during further pre-training can mitigate
the issue of catastrophic forgetting (CF), a common occurrence observed during
further pre-training of LLMs. Overall, exploring the LLMs' functional regions
provides insights into the foundation of their intelligence.
| 2,024 | Computation and Language |
COMPASS: Computational Mapping of Patient-Therapist Alliance Strategies
with Language Modeling | The therapeutic working alliance is a critical factor in predicting the
success of psychotherapy treatment. Traditionally, working alliance assessment
relies on questionnaires completed by both therapists and patients. In this
paper, we present COMPASS, a novel framework to directly infer the therapeutic
working alliance from the natural language used in psychotherapy sessions. Our
approach utilizes advanced large language models to analyze transcripts of
psychotherapy sessions and compare them with distributed representations of
statements in the working alliance inventory. Analyzing a dataset of over 950
sessions covering diverse psychiatric conditions, we demonstrate the
effectiveness of our method in microscopically mapping patient-therapist
alignment trajectories and providing interpretability for clinical psychiatry
and in identifying emerging patterns related to the condition being treated. By
employing various neural topic modeling techniques in combination with
generative language prompting, we analyze the topical characteristics of
different psychiatric conditions and incorporate temporal modeling to capture
the evolution of topics at a turn-level resolution. This combined framework
enhances the understanding of therapeutic interactions, enabling timely
feedback for therapists regarding conversation quality and providing
interpretable insights to improve the effectiveness of psychotherapy.
| 2,024 | Computation and Language |
InfFeed: Influence Functions as a Feedback to Improve the Performance of
Subjective Tasks | Influence functions have recently emerged as an apparatus for achieving
explainability for deep neural models by quantifying the perturbation of
individual training instances that might impact a test prediction. Our objectives
in this paper are twofold. First, we incorporate influence functions as
feedback into the model to improve its performance. Second, in a dataset
extension exercise, we use influence functions to automatically identify data
points that were initially `silver' annotated by some existing method and
need to be cross-checked (and corrected) by annotators to improve model
performance. To meet these objectives, we introduce InfFeed,
which uses influence functions to compute the influential instances for a
target instance. Toward the first objective, we adjust the label of the target
instance based on its influencer(s) label. In doing this, InfFeed outperforms
the state-of-the-art baselines (including LLMs) by a maximum macro F1-score
margin of almost 4% for hate speech classification, 3.5% for stance
classification, 3% for irony detection, and 2% for sarcasm detection. Toward the
second objective we show that manually re-annotating only those silver
annotated data points in the extension set that have a negative influence can
immensely improve the model performance bringing it very close to the scenario
where all the data points in the extension set have gold labels. This allows
for huge reduction of the number of data points that need to be manually
annotated since out of the silver annotated extension dataset, the influence
function scheme picks up ~1/1000 points that need manual correction.
| 2,024 | Computation and Language |
An LLM-Enhanced Adversarial Editing System for Lexical Simplification | Lexical Simplification (LS) aims to simplify text at the lexical level.
Existing methods rely heavily on annotated data, making it challenging to apply
in low-resource scenarios. In this paper, we propose a novel LS method without
parallel corpora. This method employs an Adversarial Editing System with
guidance from a confusion loss and an invariance loss to predict lexical edits
in the original sentences. Meanwhile, we introduce an innovative LLM-enhanced
loss to enable the distillation of knowledge from Large Language Models (LLMs)
into a small-size LS system. From that, complex words within sentences are
masked and a Difficulty-aware Filling module is crafted to replace masked
positions with simpler words. At last, extensive experimental results and
analyses on three benchmark LS datasets demonstrate the effectiveness of our
proposed method.
| 2,024 | Computation and Language |
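A stripped-down version of the mask-and-fill idea can be sketched as follows. It flags "complex" words with a simple corpus-frequency heuristic (using the wordfreq package, an assumption of this sketch) and fills the mask with a generic BERT masked-LM candidate that is more frequent than the original word, whereas the actual system learns the edits adversarially and distills knowledge from an LLM.

```python
# Illustrative mask-and-fill lexical simplification, not the paper's method.
from transformers import pipeline
from wordfreq import zipf_frequency  # frequency resource assumed for this sketch

fill = pipeline("fill-mask", model="bert-base-uncased")

def simplify(sentence: str, difficulty_threshold: float = 3.5) -> str:
    words = sentence.split()
    for i, word in enumerate(words):
        # Treat rare words as "complex" and mask them.
        if zipf_frequency(word.lower(), "en") < difficulty_threshold:
            masked = " ".join(words[:i] + ["[MASK]"] + words[i + 1:])
            for cand in fill(masked):
                token = cand["token_str"].strip()
                # Difficulty-aware filling: accept only a more frequent substitute.
                if zipf_frequency(token, "en") > zipf_frequency(word.lower(), "en"):
                    words[i] = token
                    break
    return " ".join(words)

print(simplify("The committee will promulgate the new regulations."))
```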
IEPile: Unearthing Large-Scale Schema-Based Information Extraction
Corpus | Large Language Models (LLMs) demonstrate remarkable potential across various
domains; however, they exhibit a significant performance gap in Information
Extraction (IE). High-quality instruction data is vital for
enhancing the specific capabilities of LLMs, yet current IE datasets tend to
be small in scale, fragmented, and lack a standardized schema. To this end, we
introduce IEPile, a comprehensive bilingual (English and Chinese) IE
instruction corpus, which contains approximately 0.32B tokens. We construct
IEPile by collecting and cleaning 33 existing IE datasets, and introduce
schema-based instruction generation to unearth a large-scale corpus.
Experimental results on LLaMA and Baichuan demonstrate that using IEPile can
enhance the performance of LLMs for IE, especially in zero-shot
generalization. We open-source the resource and pre-trained models, hoping to
provide valuable support to the NLP community.
| 2,024 | Computation and Language |
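Schema-based instruction generation essentially rewrites an IE record as an instruction whose prompt enumerates the target schema. The sketch below shows one plausible template; the field names and prompt wording are illustrative and do not reproduce the actual IEPile format.

```python
# Turn an IE record into a schema-based instruction-tuning example (illustrative format).
import json

def build_instruction(text: str, schema: list[str], extractions: dict) -> dict:
    prompt = (
        "You are an information extraction assistant.\n"
        f"Schema (types to extract): {', '.join(schema)}\n"
        f"Text: {text}\n"
        "Return a JSON object mapping each schema type to the mentions found."
    )
    return {"instruction": prompt, "output": json.dumps(extractions, ensure_ascii=False)}

example = build_instruction(
    text="Marie Curie won the Nobel Prize in Physics in 1903.",
    schema=["person", "award", "year"],
    extractions={"person": ["Marie Curie"], "award": ["Nobel Prize in Physics"], "year": ["1903"]},
)
print(example["instruction"])
print(example["output"])
```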
Efficient and Effective Vocabulary Expansion Towards Multilingual Large
Language Models | This report introduces \texttt{EEVE-Korean-v1.0}, a Korean adaptation of
large language models that exhibits remarkable capabilities in both English and
Korean text understanding. Building on recent highly capable but
English-centric LLMs, such as SOLAR-10.7B and Phi-2, where non-English texts
are inefficiently processed with English-centric tokenizers, we present an
efficient and effective vocabulary expansion (EEVE) method, which encompasses
parameter freezing and subword initialization. In contrast to previous efforts
that assume new embeddings require trillions of training tokens, we show that
our method can significantly boost non-English proficiency within just 2
billion tokens. Surpassing most instruction-tuned LLMs on the Open Ko-LLM
Leaderboard, as of January 2024, our model \texttt{EEVE-Korean-10.8B-v1.0}
ranks as the leading Korean pre-trained model in the open-source community,
according to Hugging Face's leaderboard. We open-source our models on
Huggingface to empower the open research community in various languages.
| 2,024 | Computation and Language |
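The two ingredients named in the abstract, subword initialization and parameter freezing, can be sketched with the Hugging Face API as below. The base model here is a small placeholder rather than SOLAR-10.7B or Phi-2, the added tokens are arbitrary examples, and the staged training schedule of EEVE is omitted.

```python
# Sketch of vocabulary expansion: add tokens, initialize their embeddings from the
# mean of their old-tokenizer subword embeddings, and freeze everything except the
# embedding matrices for the first training stage. Illustrative, not EEVE itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

new_tokens = ["안녕하세요", "감사합니다"]  # example Korean tokens to add
# How the *old* tokenizer splits each new token, recorded before extending it.
old_subword_ids = [tokenizer(t, add_special_tokens=False)["input_ids"] for t in new_tokens]

num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Subword initialization: each new row starts as the mean of its subword embeddings.
emb = model.get_input_embeddings().weight
with torch.no_grad():
    for i, sub_ids in enumerate(old_subword_ids):
        new_id = len(tokenizer) - num_added + i
        emb[new_id] = emb[sub_ids].mean(dim=0)

# Parameter freezing: train only the (tied) embedding / output matrices at first.
for name, param in model.named_parameters():
    param.requires_grad = "wte" in name or "lm_head" in name
```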
Dependency Annotation of Ottoman Turkish with Multilingual BERT | This study introduces a pretrained large language model-based annotation
methodology for the first dependency treebank in Ottoman Turkish. Our
experimental results show that iteratively i) pseudo-annotating data using a
multilingual BERT-based parsing model, ii) manually correcting the
pseudo-annotations, and iii) fine-tuning the parsing model with the corrected
annotations speeds up and simplifies the challenging dependency annotation
process. The resulting treebank, which will be part of the Universal
Dependencies (UD) project, will facilitate automated analysis of Ottoman
Turkish documents, unlocking the linguistic richness embedded in this
historical heritage.
| 2,024 | Computation and Language |
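The iterative annotation loop (pseudo-annotate, correct, fine-tune) can be written down as a simple control flow; the helper functions below are hypothetical stand-ins for a multilingual-BERT-based parser, human correction, and parser fine-tuning.

```python
# Skeleton of the iterative treebank-building loop; all helpers are placeholders.
def pseudo_annotate(parser, sentences):
    return [parser(s) for s in sentences]     # draft dependency trees from the parser

def manually_correct(draft_trees):
    return draft_trees                        # placeholder: humans fix the drafts

def fine_tune(parser, gold_trees):
    return parser                             # placeholder: retrain on corrections

def build_treebank(parser, batches):
    treebank = []
    for sentences in batches:                 # one annotation round per batch
        drafts = pseudo_annotate(parser, sentences)
        gold = manually_correct(drafts)
        parser = fine_tune(parser, gold)      # the parser improves each round
        treebank.extend(gold)
    return treebank, parser
```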
Scaling Efficient LLMs | Trained LLMs are typically sparse in that most of the parameters are zero,
raising questions about efficiency. In response, we inquire into efficient LLMs,
i.e., those with the fewest parameters that achieve the desired accuracy on a
training corpus. Specifically, we compare theoretical and empirical estimates
for training loss at current scale to obtain upper and lower bounds on the
number of unique sequences in a natural training corpus as a function of its
size. Our result implies that (1) to double the number of skills represented in a
training corpus, the corpus must scale roughly three- to five-fold; (2)
for efficient LLMs, the number of parameters $N$ and the size $D$ of a natural
training corpus scale as $N \sim D^{0.58}$; and (3) if the number of parameters of
an LLM is smaller than the number of unique sequences in the training corpus,
scaling up can uncover emergent skills.
| 2,024 | Computation and Language |
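Taking the reported exponent at face value, the $N \sim D^{0.58}$ relation implies that an efficient model needs to grow only sub-linearly with the corpus; the snippet below simply evaluates the ratio (the proportionality constant is not given, so only relative growth is meaningful).

```python
# Numerical illustration of N ~ D^0.58: relative parameter growth for a given
# relative corpus growth (only ratios are meaningful without the constant).
alpha = 0.58

def param_growth(corpus_growth: float) -> float:
    return corpus_growth ** alpha

for g in (2, 4, 10):
    print(f"corpus x{g:>2}  ->  parameters x{param_growth(g):.2f}")
# corpus x 2  ->  parameters x1.49
# corpus x 4  ->  parameters x2.23
# corpus x10  ->  parameters x3.80
```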
MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language
Models in Multi-Turn Dialogues | The advent of Large Language Models (LLMs) has drastically enhanced dialogue
systems. However, comprehensively evaluating the dialogue abilities of LLMs
remains a challenge. Previous benchmarks have primarily focused on single-turn
dialogues or provided coarse-grained and incomplete assessments of multi-turn
dialogues, overlooking the complexity and fine-grained nuances of real-life
dialogues. To address this issue, we introduce MT-Bench-101, specifically
designed to evaluate the fine-grained abilities of LLMs in multi-turn
dialogues. By conducting a detailed analysis of real multi-turn dialogue data,
we construct a three-tier hierarchical ability taxonomy comprising 4208 turns
across 1388 multi-turn dialogues in 13 distinct tasks. We then evaluate 21
popular LLMs based on MT-Bench-101, conducting comprehensive analyses from both
ability and task perspectives and observing differing trends in LLMs'
performance across dialogue turns within various tasks. Further analysis
indicates that neither utilizing common alignment techniques nor chat-specific
designs has led to obvious enhancements in the multi-turn abilities of LLMs.
Extensive case studies suggest that our designed tasks accurately assess the
corresponding multi-turn abilities.
| 2,024 | Computation and Language |
2D Matryoshka Sentence Embeddings | Common approaches rely on fixed-length embedding vectors from language models
as sentence embeddings for downstream tasks such as semantic textual similarity
(STS). Such methods are limited in their flexibility due to unknown
computational constraints and budgets across various applications. Matryoshka
Representation Learning (MRL) (Kusupati et al., 2022) encodes information at
finer granularities, i.e., with lower embedding dimensions, to adaptively
accommodate ad hoc tasks. Similar accuracy can be achieved with a smaller
embedding size, leading to speedups in downstream tasks. Despite its improved
efficiency, MRL still requires traversing all Transformer layers before
obtaining the embedding, which remains the dominant factor in time and memory
consumption. This prompts consideration of whether the fixed number of
Transformer layers affects representation quality and whether using
intermediate layers for sentence representation is feasible. In this paper, we
introduce a novel sentence embedding model called Two-dimensional Matryoshka
Sentence Embedding (2DMSE). It supports elastic settings for both embedding
sizes and Transformer layers, offering greater flexibility and efficiency than
MRL. We conduct extensive experiments on STS tasks and downstream applications.
The experimental results demonstrate the effectiveness of our proposed model in
dynamically supporting different embedding sizes and Transformer layers,
allowing it to be highly adaptable to various scenarios.
| 2,024 | Computation and Language |
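The elastic layer-and-dimension idea can be illustrated without the trained 2DMSE model: take a mean-pooled sentence embedding from an intermediate Transformer layer and keep only a prefix of its dimensions, Matryoshka-style. The model, layer index, and dimension below are arbitrary choices for the sketch.

```python
# Illustrative elastic embedding: intermediate layer + truncated dimensions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str, layer: int = 6, dim: int = 256) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    hidden = outputs.hidden_states[layer]          # (1, seq_len, 768) at that layer
    mask = inputs["attention_mask"].unsqueeze(-1)  # mean-pool over real tokens only
    sent = (hidden * mask).sum(1) / mask.sum(1)
    sent = sent[:, :dim]                           # Matryoshka-style truncation
    return torch.nn.functional.normalize(sent, dim=-1)

a, b = embed("A cat sits on the mat."), embed("A kitten rests on the rug.")
print("cosine similarity:", float(a @ b.T))
```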
Zero-shot cross-lingual transfer in instruction tuning of large language
models | Instruction tuning (IT) is widely used to teach pretrained large language
models (LLMs) to follow arbitrary instructions, but is under-studied in
multilingual settings. In this work, we conduct a systematic study of zero-shot
cross-lingual transfer in IT, when an LLM is instruction-tuned on English-only
data and then tested on user prompts in other languages. We investigate the
influence of model configuration choices and devise a multi-facet evaluation
strategy for multilingual instruction following. We find that cross-lingual
transfer does happen successfully in IT even if all stages of model training
are English-centric, but only if multilinguality is taken into account in
hyperparameter tuning and with large enough IT data. English-trained LLMs are
capable of generating correct-language, comprehensive and helpful responses in
the other languages, but suffer from low factuality and may occasionally have
fluency errors.
| 2,024 | Computation and Language |
Enhancing Systematic Decompositional Natural Language Inference Using
Informal Logic | Contemporary language models enable new opportunities for structured
reasoning with text, such as the construction and evaluation of intuitive,
proof-like textual entailment trees without relying on brittle formal logic.
However, progress in this direction has been hampered by a long-standing lack
of a clear protocol for determining what valid compositional entailment is.
This absence leads to noisy datasets and limits the performance gains of modern
neuro-symbolic engines. To address these problems, we formulate a consistent
and theoretically grounded approach to annotating decompositional entailment
datasets, and evaluate its impact on LLM-based textual inference. We find that
our resulting dataset, RDTE (Recognizing Decompositional Textual Entailment),
has a substantially higher internal consistency (+9%) than prior
decompositional entailment datasets, suggesting that RDTE is a significant step
forward in the long-standing problem of forming a clear protocol for discerning
entailment. We also find that training an RDTE-oriented entailment classifier
via knowledge distillation and employing it in a modern neuro-symbolic
reasoning engine significantly improves results (both accuracy and proof
quality) over other entailment classifier baselines, illustrating the practical
benefit of this advance for textual inference.
| 2,024 | Computation and Language |
Not All Experts are Equal: Efficient Expert Pruning and Skipping for
Mixture-of-Experts Large Language Models | A pivotal advancement in the progress of large language models (LLMs) is the
emergence of the Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs,
MoE LLMs can achieve higher performance with fewer parameters, but it is still
hard to deploy them due to their immense parameter sizes. Different from
previous weight pruning methods that rely on specifically designed hardware,
this paper mainly aims to enhance the deployment efficiency of MoE LLMs by
introducing plug-and-play expert-level sparsification techniques. Specifically,
we propose, for the first time to the best of our knowledge, post-training approaches
for task-agnostic and task-specific expert pruning and skipping of MoE LLMs,
tailored to improve deployment efficiency while maintaining model performance
across a wide range of tasks. Extensive experiments show that our proposed
methods can simultaneously reduce model sizes and increase the inference speed,
while maintaining satisfactory performance. Data and code will be available at
https://github.com/Lucky-Lance/Expert_Sparsity.
| 2,024 | Computation and Language |
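To make the two notions of expert-level sparsification concrete, the toy sketch below ranks experts by how often a router selects them on calibration data (pruning) and, at inference, runs only the kept experts whose routing probability clears a threshold (skipping). The frequency criterion and threshold are illustrative, not the importance measures used in the paper.

```python
# Toy expert pruning (drop least-used experts) and dynamic skipping (per token).
import torch

torch.manual_seed(0)
num_experts, hidden, tokens = 8, 16, 1000
router = torch.nn.Linear(hidden, num_experts)  # stand-in for a trained MoE router
calib = torch.randn(tokens, hidden)            # calibration activations

probs = router(calib).softmax(dim=-1)          # (tokens, experts)
top2 = probs.topk(2, dim=-1).indices           # usual top-2 routing
usage = torch.bincount(top2.flatten(), minlength=num_experts)

keep = usage.argsort(descending=True)[:6]      # prune the 2 least-used experts
print("experts kept:", sorted(keep.tolist()))

# Skipping: for a new token, run only kept experts whose probability exceeds 0.1.
token = torch.randn(1, hidden)
p = router(token).softmax(dim=-1)[0, keep]
active = keep[p > 0.1]
print("experts run for this token:", sorted(active.tolist()))
```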
Identifying Multiple Personalities in Large Language Models with
External Evaluation | As Large Language Models (LLMs) are rapidly integrated into everyday human
applications, many societal and ethical concerns are raised regarding the behavior
of LLMs. One of the ways to comprehend LLMs' behavior is to analyze their
personalities. Many recent studies quantify LLMs' personalities using
self-assessment tests that are created for humans. Yet many critiques question
the applicability and reliability of these self-assessment tests when applied
to LLMs. In this paper, we investigate LLM personalities using an alternate
personality measurement method, which we refer to as the external evaluation
method, where instead of prompting LLMs with multiple-choice questions on the
Likert scale, we evaluate LLMs' personalities by analyzing their responses
toward open-ended situational questions using an external machine learning
model. We first fine-tune a Llama2-7B model as an MBTI personality predictor
that outperforms state-of-the-art models, and use it as the tool to analyze LLMs'
responses. Then, we prompt the LLMs with situational questions and ask them to
generate Twitter posts and comments, respectively, in order to assess their
personalities when playing two different roles. Using the external personality
evaluation method, we identify that the obtained personality types for LLMs are
significantly different when generating posts versus comments, whereas humans
show a consistent personality profile in these two different situations. This
shows that LLMs can exhibit different personalities based on different
scenarios, thus highlighting a fundamental difference between personality in
LLMs and humans. With our work, we call for a re-evaluation of personality
definition and measurement in LLMs.
| 2,024 | Computation and Language |
RelayAttention for Efficient Large Language Model Serving with Long
System Prompts | Practical large language model (LLM) services may involve a long system
prompt, which specifies the instructions, examples, and knowledge documents of
the task and is reused across numerous requests. However, the long system
prompt causes throughput/latency bottlenecks as the cost of generating the next
token grows w.r.t. the sequence length. This paper aims to improve the
efficiency of LLM services that involve long system prompts. Our key
observation is that handling these system prompts requires heavily redundant
memory accesses in existing causal attention computation algorithms.
Specifically, for batched requests, the cached hidden states (i.e., key-value
pairs) of system prompts are transferred from off-chip DRAM to on-chip SRAM
multiple times, each corresponding to an individual request. To eliminate this
redundancy, we propose RelayAttention, an attention algorithm that allows
reading these hidden states from DRAM exactly once for a batch of input tokens.
RelayAttention is a free lunch: it maintains the generation quality while
requiring no model retraining, as it is based on a mathematical reformulation
of causal attention. Code is available at
\url{https://github.com/rayleizhu/vllm-ra}.
| 2,024 | Computation and Language |
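The mathematical reformulation behind this kind of prefix sharing can be checked numerically: attention over the concatenated [system ; request] key-value cache equals a normalizer-weighted combination of attention over each chunk separately. The toy check below verifies the identity for a single query; the actual speedup comes from batching the system-prompt pass across requests, which this sketch does not model.

```python
# Verify: attention over [K_sys;K_req] == softmax-normalizer-weighted mix of the
# per-chunk attentions. Illustrative identity check, not the RelayAttention kernel.
import torch

torch.manual_seed(0)
d, n_sys, n_req = 64, 32, 8
q = torch.randn(1, d)
K_sys, V_sys = torch.randn(n_sys, d), torch.randn(n_sys, d)
K_req, V_req = torch.randn(n_req, d), torch.randn(n_req, d)
scale = d ** -0.5

# Reference: ordinary attention over the concatenated KV cache.
scores = torch.cat([q @ K_sys.T, q @ K_req.T], dim=-1) * scale
ref = scores.softmax(-1) @ torch.cat([V_sys, V_req], dim=0)

# Two-pass form: attend to the shared system KV and the per-request KV separately.
s_sys, s_req = (q @ K_sys.T) * scale, (q @ K_req.T) * scale
out_sys, out_req = s_sys.softmax(-1) @ V_sys, s_req.softmax(-1) @ V_req
z_sys, z_req = s_sys.exp().sum(-1, keepdim=True), s_req.exp().sum(-1, keepdim=True)
combined = (z_sys * out_sys + z_req * out_req) / (z_sys + z_req)

print(torch.allclose(ref, combined, atol=1e-5))  # True
```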