Titles | Abstracts | Years | Categories
---|---|---|---|
CriticBench: Benchmarking LLMs for Critique-Correct Reasoning | The ability of Large Language Models (LLMs) to critique and refine their
reasoning is crucial for their application in evaluation, feedback provision,
and self-improvement. This paper introduces CriticBench, a comprehensive
benchmark designed to assess LLMs' abilities to critique and rectify their
reasoning across a variety of tasks. CriticBench encompasses five reasoning
domains: mathematical, commonsense, symbolic, coding, and algorithmic. It
compiles 15 datasets and incorporates responses from three LLM families.
Utilizing CriticBench, we evaluate and dissect the performance of 17 LLMs in
generation, critique, and correction reasoning, i.e., GQC reasoning. Our
findings reveal: (1) a linear relationship in GQC capabilities, with
critique-focused training markedly enhancing performance; (2) a task-dependent
variation in correction effectiveness, with logic-oriented tasks being more
amenable to correction; (3) GQC knowledge inconsistencies that decrease as
model size increases; and (4) an intriguing inter-model critiquing dynamic,
where stronger models are better at critiquing weaker ones, while weaker models
can surprisingly surpass stronger ones in their self-critique. We hope these
insights into the nuanced critique-correct reasoning of LLMs will foster
further research in LLM critique and self-improvement.
| 2,024 | Computation and Language |
Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity
Tracking | Fine-tuning on generalized tasks such as instruction following, code
generation, and mathematics has been shown to enhance language models'
performance on a range of tasks. Nevertheless, explanations of how such
fine-tuning influences the internal computations in these models remain
elusive. We study how fine-tuning affects the internal mechanisms implemented
in language models. As a case study, we explore the property of entity
tracking, a crucial facet of language comprehension, where models fine-tuned on
mathematics have substantial performance gains. We identify the mechanism that
enables entity tracking and show that (i) both the original model and its
fine-tuned versions primarily use the same circuit to implement entity tracking. In
fact, the original model's entity tracking circuit, applied to the fine-tuned
versions, performs better than the full original model. (ii) The circuits of all
the models implement roughly the same functionality: Entity tracking is
performed by tracking the position of the correct entity in both the original
model and its fine-tuned versions. (iii) The performance boost in the fine-tuned
models is primarily attributable to their improved ability to handle the augmented
positional information. To uncover these findings, we employ Path Patching;
DCM, which automatically detects model components responsible for specific
semantics; and CMAP, a new approach for patching activations across models to
reveal improved mechanisms. Our findings suggest that fine-tuning enhances,
rather than fundamentally alters, the mechanistic operation of the model.
| 2,024 | Computation and Language |
PALO: A Polyglot Large Multimodal Model for 5B People | In pursuit of more inclusive Vision-Language Models (VLMs), this study
introduces a Large Multilingual Multimodal Model called PALO. PALO offers
visual reasoning capabilities in 10 major languages, including English,
Chinese, Hindi, Spanish, French, Arabic, Bengali, Russian, Urdu, and Japanese,
that span a total of ~5B people (65% of the world population). Our approach
involves a semi-automated translation pipeline to adapt the multimodal
instruction dataset from English to the target languages using a fine-tuned
Large Language Model, thereby ensuring high linguistic fidelity while allowing
scalability due to minimal manual effort. The incorporation of diverse
instruction sets helps us boost overall performance across multiple languages
especially those that are underrepresented like Hindi, Arabic, Bengali, and
Urdu. The resulting models are trained across three scales (1.7B, 7B and 13B
parameters) to show the generalization and scalability where we observe
substantial improvements compared to strong baselines. We also propose the
first multilingual multimodal benchmark for the forthcoming approaches to
evaluate their vision-language reasoning capabilities across languages. Code:
https://github.com/mbzuai-oryx/PALO.
| 2,024 | Computation and Language |
Orca-Math: Unlocking the potential of SLMs in Grade School Math | Mathematical word problem-solving has long been recognized as a complex task
for small language models (SLMs). A recent study hypothesized that the smallest
model size needed to achieve over 80% accuracy on the GSM8K benchmark is 34
billion parameters. To reach this level of performance with smaller models,
researchers often train SLMs to generate Python code or use tools to help avoid
calculation errors. Additionally, they employ ensembling, where outputs of up
to 100 model runs are combined to arrive at a more accurate result. Result
selection is done using consensus, majority vote, or a separate verifier model
used in conjunction with the SLM. Ensembling provides a substantial boost in
accuracy but at a significant cost increase with multiple calls to the model
(e.g., Phi-GSM uses top-48 to boost the performance from 68.2 to 81.5).
In this work, we present Orca-Math, a 7-billion-parameter SLM based on the
Mistral-7B, which achieves 86.81% on GSM8k without the need for multiple model
calls or the use of verifiers, code execution or any other external tools. Our
approach has the following key elements: (1) A high quality synthetic dataset
of 200K math problems created using a multi-agent setup where agents
collaborate to create the data; (2) an iterative learning technique that
enables the SLM to practice solving problems, receive feedback on its solutions
and learn from preference pairs incorporating the SLM solutions and the
feedback. When trained with Supervised Fine-Tuning alone, Orca-Math achieves
81.50% on the GSM8K pass@1 metric. With iterative preference learning, Orca-Math
achieves 86.81% pass@1. Orca-Math surpasses the performance of significantly
larger models such as LLAMA-2-70B, WizardMath-70B, Gemini-Pro, and ChatGPT-3.5. It
also significantly outperforms other smaller models while using much smaller
data (hundreds of thousands vs. millions of problems).
| 2,024 | Computation and Language |
CliqueParcel: An Approach For Batching LLM Prompts That Jointly
Optimizes Efficiency And Faithfulness | Large language models (LLMs) have become pivotal in recent research. However,
during the inference process, LLMs still require substantial resources. In this
paper, we propose CliqueParcel, a method designed to improve the efficiency of
LLMs via prompt batching. Existing strategies to optimize inference efficiency
often compromise on output quality, leading to a discounted output problem.
This issue might result in reduced accuracy or outputs that are less detailed.
CliqueParcel is our answer to this challenge. While ensuring accuracy and
minimizing deviations from the original outputs (i.e., faithfulness), our
method significantly improves efficiency during inference.
To lay the groundwork, we first redefine efficiency measurements by excluding
the reduction in running time due to shorter lengths. Then, we provide a
comprehensive trade-off between efficiency and faithfulness to clarify the
nature of the 'discounted output' problem. Within the CliqueParcel framework,
we suggest multiple batching sub-methods and discuss the specific scenarios in
which they can be applied. During evaluation, CliqueParcel is tested on eight
widely recognized datasets, which can be classified into three types: reading
comprehension, open-source question-answering, and reasoning. Our experiments
explore the performance of CliqueParcel, including efficiency, faithfulness,
and the trade-off between them. This work provides novel insights into
inference efficiency and demonstrates promising performance.
| 2,024 | Computation and Language |
MSynFD: Multi-hop Syntax aware Fake News Detection | The proliferation of social media platforms has fueled the rapid
dissemination of fake news, posing threats to society. Existing
methods use multimodal data or contextual information to enhance the detection
of fake news by analyzing news content and/or its social context. However,
these methods often overlook essential textual news content (articles) and
heavily rely on sequential modeling and global attention to extract semantic
information. These existing methods fail to handle the complex, subtle twists
in news articles, such as syntax-semantics mismatches and prior biases, leading
to lower performance and potential failure when modalities or social context
are missing. To bridge these significant gaps, we propose a novel multi-hop
syntax aware fake news detection (MSynFD) method, which incorporates
complementary syntax information to deal with subtle twists in fake news.
Specifically, we introduce a syntactical dependency graph and design a
multi-hop subgraph aggregation mechanism to capture multi-hop syntax. It
extends the effect of word perception, leading to effective noise filtering and
adjacent relation enhancement. Subsequently, a sequential relative
position-aware Transformer is designed to capture the sequential information,
together with an elaborate keyword debiasing module to mitigate the prior bias.
Extensive experimental results on two public benchmark datasets verify the
effectiveness and superior performance of our proposed MSynFD over
state-of-the-art detection models.
| 2,024 | Computation and Language |
MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge
Editing | Multimodal knowledge editing represents a critical advancement in enhancing
the capabilities of Multimodal Large Language Models (MLLMs). Despite its
potential, current benchmarks predominantly focus on coarse-grained knowledge,
leaving the intricacies of fine-grained (FG) multimodal entity knowledge
largely unexplored. This gap presents a notable challenge, as FG entity
recognition is pivotal for the practical deployment and effectiveness of MLLMs
in diverse real-world scenarios. To bridge this gap, we introduce MIKE, a
comprehensive benchmark and dataset specifically designed for FG multimodal
entity knowledge editing. MIKE encompasses a suite of tasks tailored to assess
different perspectives, including Vanilla Name Answering, Entity-Level Caption,
and Complex-Scenario Recognition. In addition, a new form of knowledge editing,
Multi-step Editing, is introduced to evaluate the editing efficiency. Through
our extensive evaluations, we demonstrate that the current state-of-the-art
methods face significant challenges in tackling our proposed benchmark,
underscoring the complexity of FG knowledge editing in MLLMs. Our findings
spotlight the urgent need for novel approaches in this domain, setting a clear
agenda for future research and development efforts within the community.
| 2,024 | Computation and Language |
Stealthy Attack on Large Language Model based Recommendation | Recently, the powerful large language models (LLMs) have been instrumental in
propelling the progress of recommender systems (RS). However, while these
systems have flourished, their susceptibility to security threats has been
largely overlooked. In this work, we reveal that the introduction of LLMs into
recommendation models presents new security vulnerabilities due to their
emphasis on the textual content of items. We demonstrate that attackers can
significantly boost an item's exposure by merely altering its textual content
during the testing phase, without requiring direct interference with the
model's training process. Additionally, the attack is notably stealthy, as it
does not affect the overall recommendation performance and the modifications to
the text are subtle, making it difficult for users and platforms to detect. Our
comprehensive experiments across four mainstream LLM-based recommendation
models demonstrate the superior efficacy and stealthiness of our approach. Our
work unveils a significant security gap in LLM-based recommendation systems and
paves the way for future research on protecting these systems.
| 2,024 | Computation and Language |
An Empirical Categorization of Prompting Techniques for Large Language
Models: A Practitioner's Guide | Due to rapid advancements in the development of Large Language Models (LLMs),
programming these models with prompts has recently gained significant
attention. However, the sheer number of available prompt engineering techniques
creates an overwhelming landscape for practitioners looking to utilize these
tools. For the most efficient and effective use of LLMs, it is important to
compile a comprehensive list of prompting techniques and establish a
standardized, interdisciplinary categorization framework. In this survey, we
examine some of the most well-known prompting techniques from both academic and
practical viewpoints and classify them into seven distinct categories. We
present an overview of each category, aiming to clarify their unique
contributions and showcase their practical applications in real-world examples
in order to equip fellow practitioners with a structured framework for
understanding and categorizing prompting techniques tailored to their specific
domains. We believe that this approach will help simplify the complex landscape
of prompt engineering and enable more effective utilization of LLMs in various
applications. By providing practitioners with a systematic approach to prompt
categorization, we aim to assist in navigating the intricacies of effective
prompt design for conversational pre-trained LLMs and inspire new possibilities
in their respective fields.
| 2,024 | Computation and Language |
RFBES at SemEval-2024 Task 8: Investigating Syntactic and Semantic
Features for Distinguishing AI-Generated and Human-Written Texts | Nowadays, the usage of Large Language Models (LLMs) has increased, and LLMs
have been used to generate texts in different languages and for different
tasks. Additionally, due to the participation of remarkable companies such as
Google and OpenAI, LLMs are now more accessible, and people can easily use
them. However, an important issue is how to distinguish AI-generated texts from
human-written ones. In this article, we investigate the problem of
AI-generated text detection from two different aspects: semantics and syntax.
Finally, we presented an AI model that can distinguish AI-generated texts from
human-written ones with high accuracy on both multilingual and monolingual
tasks using the M4 dataset. According to our results, using a semantic approach
would be more helpful for detection. However, there is a lot of room for
improvement in the syntactic approach, and it would be a good approach for
future work.
| 2,024 | Computation and Language |
RJUA-MedDQA: A Multimodal Benchmark for Medical Document Question
Answering and Clinical Reasoning | Recent advancements in Large Language Models (LLMs) and Large Multi-modal
Models (LMMs) have shown potential in various medical applications, such as
Intelligent Medical Diagnosis. Although impressive results have been achieved,
we find that existing benchmarks do not reflect the complexity of real medical
reports and specialized in-depth reasoning capabilities. In this work, we
introduce RJUA-MedDQA, a comprehensive benchmark in the field of medical
specialization, which poses several challenges: comprehensively interpreting
image content across diverse challenging layouts, possessing numerical
reasoning ability to identify abnormal indicators, and demonstrating clinical
reasoning ability to provide statements of disease diagnosis, status, and advice
based on medical contexts. We carefully design the data generation pipeline and
propose the Efficient Structural Restoration Annotation (ESRA) method, aimed
at restoring textual and tabular content in medical report images. This method
substantially enhances annotation efficiency, doubling the productivity of each
annotator, and yields a 26.8% improvement in accuracy. We conduct extensive
evaluations, including few-shot assessments of 5 LMMs which are capable of
solving Chinese medical QA tasks. To further investigate the limitations and
potential of current LMMs, we conduct comparative experiments on a set of
strong LLMs by using image-text generated by the ESRA method. We report the
performance of baselines and offer several observations: (1) the overall
performance of existing LMMs is still limited; (2) LMMs are nonetheless more robust to
low-quality and diverse-structured images than LLMs; and (3) reasoning
across context and image content presents significant challenges. We hope this
benchmark helps the community make progress on these challenging tasks in
multi-modal medical document understanding and facilitate its application in
healthcare.
| 2,024 | Computation and Language |
Text Diffusion with Reinforced Conditioning | Diffusion models have demonstrated exceptional capability in generating
high-quality images, videos, and audio. Due to their adaptiveness in iterative
refinement, they provide a strong potential for achieving better
non-autoregressive sequence generation. However, existing text diffusion models
still fall short in their performance due to a challenge in handling the
discreteness of language. This paper thoroughly analyzes text diffusion models
and uncovers two significant limitations: degradation of self-conditioning
during training and misalignment between training and sampling. Motivated by
our findings, we propose a novel Text Diffusion model called TREC, which
mitigates the degradation with Reinforced Conditioning and the misalignment by
Time-Aware Variance Scaling. Our extensive experiments demonstrate the
competitiveness of TREC against autoregressive, non-autoregressive, and
diffusion baselines. Moreover, qualitative analysis shows its advanced ability
to fully utilize the diffusion process in refining samples.
| 2,024 | Computation and Language |
Purifying Large Language Models by Ensembling a Small Language Model | The emerging success of large language models (LLMs) heavily relies on
collecting abundant training data from external (untrusted) sources. Despite
substantial efforts devoted to data cleaning and curation, well-constructed
LLMs have been reported to suffer from copyright infringement, data poisoning,
and/or privacy violations, which would impede practical deployment of LLMs. In
this study, we propose a simple and easily implementable method for purifying
LLMs from the negative effects caused by uncurated data, namely, through
ensembling LLMs with benign and small language models (SLMs). Aside from
theoretical guarantees, we perform comprehensive experiments to empirically
confirm the efficacy of ensembling LLMs with SLMs, which can effectively
preserve the performance of LLMs while mitigating issues such as copyright
infringement, data poisoning, and privacy violations.
| 2,024 | Computation and Language |
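The preceding abstract describes purifying an LLM by ensembling it with a small, benign language model, but does not spell out the ensembling rule. The sketch below is a minimal illustration only, assuming a simple weighted average of the two models' next-token distributions; the function name, the mixing weight alpha, and the toy vocabulary are hypothetical, and the paper's actual method may differ.

```python
# Minimal sketch of ensembling an (untrusted) LLM with a benign SLM at
# decoding time, assuming a weighted average of next-token probabilities.
import numpy as np

def ensemble_next_token(p_llm: np.ndarray, p_slm: np.ndarray, alpha: float = 0.5) -> int:
    """Combine next-token distributions from an LLM and a small benign model.

    p_llm, p_slm: probability vectors over the same vocabulary.
    alpha: weight given to the LLM; 1 - alpha goes to the SLM.
    """
    p_mix = alpha * p_llm + (1.0 - alpha) * p_slm
    p_mix /= p_mix.sum()          # renormalize for numerical safety
    return int(np.argmax(p_mix))  # greedy pick from the ensembled distribution

# Toy 4-token vocabulary: the LLM puts most of its mass on token 3 (think of a
# memorized or poisoned continuation) that the benign SLM does not support, so
# the mixture shifts the greedy choice toward token 1.
p_llm = np.array([0.05, 0.15, 0.10, 0.70])
p_slm = np.array([0.10, 0.65, 0.20, 0.05])
print(ensemble_next_token(p_llm, p_slm, alpha=0.5))  # -> 1
```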
Stick to your Role! Stability of Personal Values Expressed in Large
Language Models | The standard way to study Large Language Models (LLMs) through benchmarks or
psychology questionnaires is to provide many different queries from similar
minimal contexts (e.g. multiple-choice questions). However, due to LLMs' highly
context-dependent nature, conclusions from such minimal-context evaluations may
not be very informative about the model's behavior in deployment (where it will
be exposed to many new contexts). We argue that context-dependence should be
studied as another dimension of LLM comparison alongside others such as
cognitive abilities, knowledge, or model size. In this paper, we present a
case-study about the stability of value expression over different contexts
(simulated conversations on different topics), and as measured using a standard
psychology questionnaire (PVQ) and a behavioral downstream task. We consider 19
open-sourced LLMs from five families. Reusing methods from psychology, we study
Rank-order stability on the population (interpersonal) level, and Ipsative
stability on the individual (intrapersonal) level. We explore two settings:
with and without instructing LLMs to simulate particular personalities. We
observe similar trends in the stability of models and model families - Mixtral,
Mistral and Qwen families being more stable than LLaMa-2 and Phi - over those
two settings, two different simulated populations, and even in the downstream
behavioral task. When instructed to simulate particular personas, LLMs exhibit
low Rank-Order stability, and this stability further diminishes with
conversation length. This highlights the need for future research directions on
LLMs that can coherently simulate a diversity of personas, as well as how
context-dependence can be studied in more thorough and efficient ways. This
paper provides a foundational step in that direction, and, to our knowledge, it
is the first study of value stability in LLMs.
| 2,024 | Computation and Language |
Same Task, More Tokens: the Impact of Input Length on the Reasoning
Performance of Large Language Models | This paper explores the impact of extending input lengths on the capabilities
of Large Language Models (LLMs). Despite recent advancements in LLMs,
their performance consistency across different input lengths is not well
understood. We investigate this aspect by introducing a novel QA reasoning
framework, specifically designed to assess the impact of input length. We
isolate the effect of input length using multiple versions of the same sample,
each being extended with padding of different lengths, types and locations. Our
findings show a notable degradation in LLMs' reasoning performance at much
shorter input lengths than their technical maximum. We show that the
degradation trend appears in every version of our dataset, although at
different intensities. Additionally, our study reveals that traditional
perplexity metrics do not correlate with LLMs' performance in long-input
reasoning tasks. We analyse our results and identify failure modes that can
serve as useful guides for future research, potentially informing strategies to
address the limitations observed in LLMs.
| 2,024 | Computation and Language |
Asynchronous and Segmented Bidirectional Encoding for NMT | With the rapid advancement of Neural Machine Translation (NMT), enhancing
translation efficiency and quality has become a focal point of research.
Despite the commendable performance of general models such as the Transformer
in various aspects, they still fall short in processing long sentences and
fully leveraging bidirectional contextual information. This paper introduces an
improved model based on the Transformer, implementing an asynchronous and
segmented bidirectional decoding strategy aimed at elevating translation
efficiency and accuracy. Compared to traditional unidirectional translations
from left-to-right or right-to-left, our method demonstrates heightened
efficiency and improved translation quality, particularly in handling long
sentences. Experimental results on the IWSLT2017 dataset confirm the
effectiveness of our approach in accelerating translation and increasing
accuracy, especially surpassing traditional unidirectional strategies in long
sentence translation. Furthermore, this study analyzes the impact of sentence
length on decoding outcomes and explores the model's performance in various
scenarios. The findings of this research not only provide an effective encoding
strategy for the NMT field but also pave new avenues and directions for future
studies.
| 2,024 | Computation and Language |
CHATATC: Large Language Model-Driven Conversational Agents for
Supporting Strategic Air Traffic Flow Management | Generative artificial intelligence (AI) and large language models (LLMs) have
gained rapid popularity through publicly available tools such as ChatGPT. The
adoption of LLMs for personal and professional use is fueled by the natural
interactions between human users and computer applications such as ChatGPT,
along with powerful summarization and text generation capabilities. Given the
widespread use of such generative AI tools, in this work we investigate how
these tools can be deployed in a non-safety critical, strategic traffic flow
management setting. Specifically, we train an LLM, CHATATC, based on a large
historical data set of Ground Delay Program (GDP) issuances, spanning 2000-2023
and consisting of over 80,000 GDP implementations, revisions, and
cancellations. We test the query and response capabilities of CHATATC,
documenting successes (e.g., providing correct GDP rates, durations, and
reasons) and shortcomings (e.g., superlative questions). We also detail the
design of a graphical user interface for future users to interact and
collaborate with the CHATATC conversational agent.
| 2,024 | Computation and Language |
SQL-CRAFT: Text-to-SQL through Interactive Refinement and Enhanced
Reasoning | Modern LLMs have become increasingly powerful, but they are still facing
challenges in specialized tasks such as Text-to-SQL. We propose SQL-CRAFT, a
framework to advance LLMs' SQL generation Capabilities through inteRActive
reFinemenT and enhanced reasoning. We leverage an Interactive Correction Loop
(IC-Loop) for LLMs to interact with databases automatically, as well as
Python-enhanced reasoning. We conduct experiments on two Text-to-SQL datasets,
Spider and Bird, with performance improvements of up to 5.7% compared to the
naive prompting method. Moreover, our method surpasses the current
state-of-the-art on the Spider Leaderboard, demonstrating the effectiveness of
our framework.
| 2,024 | Computation and Language |
HumanEval on Latest GPT Models -- 2024 | In 2023, we are using the latest models of GPT-4 to advance program
synthesis. The large language models have significantly improved the
state-of-the-art for this purpose. To make these advancements more accessible,
we have created a repository that connects these models to HumanEval. This
dataset was initially developed to be used with a language model called CODEGEN
on natural and programming language data. The utility of these trained models
is showcased by demonstrating their competitive performance in zero-shot Python
code generation on HumanEval tasks compared to previous state-of-the-art
solutions. Additionally, this gives way to developing more multi-step paradigm
synthesis. This benchmark features 160 diverse problem sets factorized into
multi-step prompts that our analysis shows significantly improve program
synthesis over single-turn inputs. All code is open source at
https://github.com/daniel442li/gpt-human-eval .
| 2,024 | Computation and Language |
NL2Formula: Generating Spreadsheet Formulas from Natural Language
Queries | Writing formulas on spreadsheets, such as Microsoft Excel and Google Sheets,
is a widespread practice among users performing data analysis. However,
crafting formulas on spreadsheets remains a tedious and error-prone task for
many end-users, particularly when dealing with complex operations. To alleviate
the burden associated with writing spreadsheet formulas, this paper introduces
a novel benchmark task called NL2Formula, with the aim of generating executable
formulas that are grounded on a spreadsheet table, given a Natural Language
(NL) query as input. To accomplish this, we construct a comprehensive dataset
consisting of 70,799 paired NL queries and corresponding spreadsheet formulas,
covering 21,670 tables and 37 types of formula functions. We realize the
NL2Formula task by providing a sequence-to-sequence baseline implementation
called fCoder. Experimental results validate the effectiveness of fCoder,
demonstrating its superior performance compared to the baseline models.
Furthermore, we also compare fCoder with an initial GPT-3.5 model (i.e.,
text-davinci-003). Lastly, through in-depth error analysis, we identify
potential challenges in the NL2Formula task and advocate for further
investigation.
| 2,024 | Computation and Language |
A Dual-Prompting for Interpretable Mental Health Language Models | Despite the increasing demand for AI-based mental health monitoring tools,
their practical utility for clinicians is limited by the lack of
interpretability. The CLPsych 2024 Shared Task (Chim et al., 2024) aims to
enhance the interpretability of Large Language Models (LLMs), particularly in
mental health analysis, by providing evidence of suicidality through linguistic
content. We propose a dual-prompting approach: (i) Knowledge-aware evidence
extraction by leveraging the expert identity and a suicide dictionary with a
mental health-specific LLM; and (ii) Evidence summarization by employing an
LLM-based consistency evaluator. Comprehensive experiments demonstrate the
effectiveness of combining domain-specific information, revealing performance
improvements and the approach's potential to aid clinicians in assessing mental
state progression.
| 2,024 | Computation and Language |
An LLM Maturity Model for Reliable and Transparent Text-to-Query | Recognizing the imperative to address the reliability and transparency issues
of Large Language Models (LLMs), this work proposes an LLM maturity model
tailored for text-to-query applications. This maturity model seeks to fill the
existing void in evaluating LLMs in such applications by incorporating
dimensions beyond mere correctness or accuracy. Moreover, this work introduces
a real-world use case from the law enforcement domain and showcases QueryIQ, an
LLM-powered, domain-specific text-to-query assistant to expedite user workflows
and reveal hidden relationships in data.
| 2,024 | Computation and Language |
Comparing Inferential Strategies of Humans and Large Language Models in
Deductive Reasoning | Deductive reasoning plays a pivotal role in the formulation of sound and
cohesive arguments. It allows individuals to draw conclusions that logically
follow, given the truth value of the information provided. Recent progress in
the domain of large language models (LLMs) has showcased their capability in
executing deductive reasoning tasks. Nonetheless, a significant portion of
research primarily assesses the accuracy of LLMs in solving such tasks, often
overlooking a deeper analysis of their reasoning behavior. In this study, we
draw upon principles from cognitive psychology to examine inferential
strategies employed by LLMs, through a detailed evaluation of their responses
to propositional logic problems. Our findings indicate that LLMs display
reasoning patterns akin to those observed in humans, including strategies like
$\textit{supposition following}$ or $\textit{chain construction}$. Moreover,
our research demonstrates that the architecture and scale of the model
significantly affect its preferred method of reasoning, with more advanced
models tending to adopt strategies more frequently than less sophisticated
ones. Importantly, we assert that a model's accuracy, that is the correctness
of its final conclusion, does not necessarily reflect the validity of its
reasoning process. This distinction underscores the necessity for more nuanced
evaluation procedures in the field.
| 2,024 | Computation and Language |
Is the System Message Really Important to Jailbreaks in Large Language
Models? | The rapid evolution of Large Language Models (LLMs) has rendered them
indispensable in modern society. While security measures are typically in place
to align LLMs with human values prior to release, recent studies have unveiled
a concerning phenomenon named "jailbreak." This term refers to the unexpected
and potentially harmful responses generated by LLMs when prompted with
malicious questions. Existing research focuses on generating jailbreak prompts,
but our study aims to answer a different question: Is the system message really
important to jailbreaks in LLMs? To address this question, we conducted
experiments on a stable GPT version, gpt-3.5-turbo-0613, generating jailbreak
prompts with varying system messages: short, long, and none. Our experiments show
that different system messages have distinct levels of resistance to jailbreak.
Additionally, we explore the transferability of jailbreaks across
LLMs. This finding underscores the significant impact system messages can have
on mitigating LLMs jailbreak. To generate system messages that are more
resistant to jailbreak prompts, we propose System Messages Evolutionary
Algorithms (SMEA). Through SMEA, we can obtain robust populations of system messages
that demonstrate up to 98.9% resistance against jailbreak prompts. Our research
not only bolsters LLM security but also raises the bar for jailbreaks,
fostering advancements in this field of study.
| 2,024 | Computation and Language |
ChatEL: Entity Linking with Chatbots | Entity Linking (EL) is an essential and challenging task in natural language
processing that seeks to link some text representing an entity within a
document or sentence with its corresponding entry in a dictionary or knowledge
base. Most existing approaches focus on creating elaborate contextual models
that look for clues in the words surrounding the entity-text to help solve the
linking problem. Although these fine-tuned language models tend to work, they
can be unwieldy, difficult to train, and do not transfer well to other domains.
Fortunately, Large Language Models (LLMs) like GPT provide a highly-advanced
solution to the problems inherent in EL models, but naive prompts to
LLMs alone do not work well. In the present work, we define ChatEL, which is a
three-step framework to prompt LLMs to return accurate results. Overall the
ChatEL framework improves the average F1 performance across 10 datasets by more
than 2%. Finally, a thorough error analysis shows that in many instances the
ground truth labels were actually incorrect, while the labels predicted by ChatEL
were correct. This indicates that the quantitative results presented
in this paper may be a conservative estimate of the actual performance. All
data and code are available as an open-source package on GitHub at
https://github.com/yifding/In_Context_EL.
| 2,024 | Computation and Language |
Ranking Large Language Models without Ground Truth | Evaluation and ranking of large language models (LLMs) has become an
important problem with the proliferation of these models and their impact.
Evaluation methods either require human responses which are expensive to
acquire or use pairs of LLMs to evaluate each other which can be unreliable. In
this paper, we provide a novel perspective where, given a dataset of prompts
(viz. questions, instructions, etc.) and a set of LLMs, we rank them without
access to any ground truth or reference responses. Inspired by real life, where
both an expert and a knowledgeable person can identify a novice, our main idea
is to consider triplets of models, where each one of them evaluates the other
two, correctly identifying the worst model in the triplet with high
probability. We also analyze our idea and provide sufficient conditions for it
to succeed. Applying this idea repeatedly, we propose two methods to rank LLMs.
In experiments on different generative tasks (summarization, multiple-choice,
and dialog), our methods reliably recover close to true rankings without
reference data. This points to a viable low-resource mechanism for practical
use.
| 2,024 | Computation and Language |
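The abstract above sketches the triplet idea in words; the code below is a rough, self-contained illustration of one plausible reading, in which each model in a triplet judges its two peers and the model ranked lower by both peers is flagged as the worst. The judge interface, the loss-counting aggregation, and the toy length-based judge are assumptions, not the paper's exact procedure.

```python
# Toy version of the triplet idea: every model in a triplet judges the answers
# of the other two, and the model accumulating the most peer "losses" is
# declared the worst of the triplet.
from typing import Callable, Dict, List

# judge(evaluator, answer_a, answer_b) -> "a" or "b", indicating which answer
# the evaluator model prefers for the same prompt.
Judge = Callable[[str, str, str], str]

def worst_in_triplet(models: List[str], answers: Dict[str, str], judge: Judge) -> str:
    """Return the model judged worst by its two peers in a triplet."""
    losses = {m: 0 for m in models}
    for evaluator in models:
        a, b = [m for m in models if m != evaluator]
        preferred = judge(evaluator, answers[a], answers[b])
        loser = b if preferred == "a" else a
        losses[loser] += 1
    return max(losses, key=losses.get)

# Toy judge: prefer the longer answer (a stand-in for a real LLM judgment).
toy_judge = lambda evaluator, ans_a, ans_b: "a" if len(ans_a) >= len(ans_b) else "b"
answers = {"m1": "a detailed answer", "m2": "short", "m3": "a fairly detailed answer"}
print(worst_in_triplet(["m1", "m2", "m3"], answers, toy_judge))  # -> m2
```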
Evaluation of a semi-autonomous attentive listening system with takeover
prompting | The handling of communication breakdowns and loss of engagement is an
important aspect of spoken dialogue systems, particularly for chatting systems
such as attentive listening, where the user is mostly speaking. We presume that
a human is best equipped to handle this task and rescue the flow of
conversation. To this end, we propose a semi-autonomous system, where a remote
operator can take control of an autonomous attentive listening system in
real-time. In order to make human intervention easy and consistent, we
introduce automatic detection of low interest and engagement to provide
explicit takeover prompts to the remote operator. We implement this
semi-autonomous system which detects takeover points for the operator and
compare it to fully tele-operated and fully autonomous attentive listening
systems. We find that the semi-autonomous system is generally perceived more
positively than the autonomous system. The results suggest that identifying
points of conversation when the user starts to lose interest may help us
improve a fully autonomous dialogue system.
| 2,024 | Computation and Language |
DyVal 2: Dynamic Evaluation of Large Language Models by Meta Probing
Agents | Evaluation of large language models (LLMs) has raised great concerns in the
community due to the issue of data contamination. Existing work designed
evaluation protocols using well-defined algorithms for specific tasks, which
cannot be easily extended to diverse scenarios. Moreover, current evaluation
benchmarks can only provide the overall benchmark results and cannot support a
fine-grained and multifaceted analysis of LLMs' abilities. In this paper, we
propose meta probing agents (MPA), a general dynamic evaluation protocol
inspired by psychometrics to evaluate LLMs. MPA is the key component of DyVal
2, which naturally extends the previous DyVal (Zhu et al., 2023). MPA designs
the probing and judging agents to automatically transform an original
evaluation problem into a new one following psychometric theory on three basic
cognitive abilities: language understanding, problem solving, and domain
knowledge. These basic abilities are also dynamically configurable, allowing
multifaceted analysis. We conducted extensive evaluations using MPA and found
that most LLMs achieve poorer performance, indicating room for improvement. Our
multifaceted analysis demonstrated a strong correlation between the basic
abilities and an implicit Matthew effect with respect to model size, i.e., larger models
exhibit stronger correlations among these abilities. MPA can also be used as a data
augmentation approach to enhance LLMs.
| 2,024 | Computation and Language |
Effects of term weighting approach with and without stop words removing
on Arabic text classification | Classifying text is a method for categorizing documents into pre-established
groups. Text documents must be prepared and represented in a way that is
appropriate for the algorithms used for data mining prior to classification. As
a result, a number of term weighting strategies have been created in the
literature to enhance text categorization algorithms' functionality. This study
compares the effects of binary and term-frequency feature weighting
methodologies on text classification, both when stop words are
eliminated and when they are not. To assess the effects
of these feature weighting approaches on classification results in terms of
accuracy, recall, precision, and F-measure values, we used an Arabic data set
made up of 322 documents divided into six main topics (agriculture, economy,
health, politics, science, and sport), each of which contains 50 documents,
with the exception of the health category, which contains 61 documents. The
results demonstrate that for all metrics, the term frequency feature weighting
approach with stop word removal outperforms the binary approach, while for
accuracy, recall, and F-Measure, the binary approach outperforms the TF
approach without stop word removal. However, for precision, the two approaches
produce results that are very similar. Additionally, it is clear from the data
that, using the same term weighting approach, stop word removal increases
classification accuracy.
| 2,024 | Computation and Language |
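As a minimal, hedged illustration of the two weighting schemes compared in the preceding abstract (binary presence versus raw term frequency), the scikit-learn sketch below builds both feature matrices and shows where stop word removal plugs in. The English toy documents and the built-in English stop word list are stand-ins; the study's Arabic preprocessing and classifier are not reproduced.

```python
# Binary vs. term-frequency feature weighting, with optional stop word removal.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the economy grows and the market grows",
    "the team wins the match",
]

# Term-frequency weighting: raw counts of each term per document.
tf_vec = CountVectorizer()
X_tf = tf_vec.fit_transform(docs)

# Binary weighting: 1 if the term occurs at all, 0 otherwise.
bin_vec = CountVectorizer(binary=True)
X_bin = bin_vec.fit_transform(docs)

# Stop word removal, the second factor studied above; an Arabic pipeline would
# use a custom stop word list instead of the built-in English one.
X_tf_nostop = CountVectorizer(stop_words="english").fit_transform(docs)

print(tf_vec.get_feature_names_out())
print(X_tf.toarray())     # "grows" and "the" counted per occurrence
print(X_bin.toarray())    # same matrix with counts clipped to 0/1
print(X_tf_nostop.shape)  # fewer columns once stop words are dropped
```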
LLM Based Multi-Agent Generation of Semi-structured Documents from
Semantic Templates in the Public Administration Domain | In the last years' digitalization process, the creation and management of
documents in various domains, particularly in Public Administration (PA), have
become increasingly complex and diverse. This complexity arises from the need
to handle a wide range of document types, often characterized by
semi-structured forms. Semi-structured documents present a fixed set of data
without a fixed format. As a consequence, a template-based solution cannot be
used, as understanding a document requires the extraction of the data
structure. The recent introduction of Large Language Models (LLMs) has enabled
the creation of customized text output satisfying user requests. In this work,
we propose a novel approach that combines the LLMs with prompt engineering and
multi-agent systems for generating new documents compliant with a desired
structure. The main contribution of this work concerns replacing the commonly
used manual prompting with a task description generated by semantic retrieval
from an LLM. The potential of this approach is demonstrated through a series of
experiments and case studies, showcasing its effectiveness in real-world PA
scenarios.
| 2,024 | Computation and Language |
Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts
Against Open-source LLMs | Large Language Models (LLMs), used in creative writing, code generation, and
translation, generate text based on input sequences but are vulnerable to
jailbreak attacks, where crafted prompts induce harmful outputs. Most jailbreak
prompt methods combine jailbreak templates with the questions to be asked in order
to create jailbreak prompts. However, existing jailbreak prompt designs
generally suffer from excessive semantic differences, resulting in an inability
to resist defenses that use simple semantic metrics as thresholds. Jailbreak
prompts are semantically more varied than the original questions used for
queries. In this paper, we introduce a Semantic Mirror Jailbreak (SMJ) approach
that bypasses LLMs by generating jailbreak prompts that are semantically
similar to the original question. We model the search for jailbreak prompts
that satisfy both semantic similarity and jailbreak validity as a
multi-objective optimization problem and employ a standardized set of genetic
algorithms for generating eligible prompts. Compared to the baseline
AutoDAN-GA, SMJ achieves attack success rates (ASR) that are at most 35.4%
higher without ONION defense and 85.2% higher with ONION defense. SMJ's better
performance in all three semantic meaningfulness metrics of Jailbreak Prompt,
Similarity, and Outlier, also means that SMJ is resistant to defenses that use
those metrics as thresholds.
| 2,024 | Computation and Language |
Technical Report on the Checkfor.ai AI-Generated Text Classifier | We present the CheckforAI text classifier, a transformer-based neural network
trained to distinguish text written by large language models from text written
by humans. CheckforAI outperforms zero-shot methods such as DetectGPT as well
as leading commercial AI detection tools with over 9 times lower error rates on
a comprehensive benchmark comprising ten text domains (student writing,
creative writing, scientific writing, books, encyclopedias, news, email,
scientific papers, short-form Q&A) and 8 open- and closed-source large language
models. We propose a training algorithm, hard negative mining with synthetic
mirrors, that enables our classifier to achieve orders of magnitude lower false
positive rates on high-data domains such as reviews. Finally, we show that
CheckforAI is not biased against nonnative English speakers and generalizes to
domains and models unseen during training.
| 2,024 | Computation and Language |
Distillation Contrastive Decoding: Improving LLMs Reasoning with
Contrastive Decoding and Distillation | We propose a straightforward approach called Distillation Contrastive
Decoding (DCD) to enhance the reasoning capabilities of Large Language Models
(LLMs) during inference. In contrast to previous approaches that relied on
smaller amateur models or analysis of hidden state differences, DCD employs
Contrastive Chain-of-thought Prompting and advanced distillation techniques,
including Dropout and Quantization. This approach effectively addresses the
limitations of Contrastive Decoding (CD), which typically requires both an
expert and an amateur model, thus increasing computational resource demands. By
integrating contrastive prompts with distillation, DCD obviates the need for an
amateur model and reduces memory usage. Our evaluations demonstrate that DCD
significantly enhances LLM performance across a range of reasoning benchmarks,
surpassing both CD and existing methods in the GSM8K and StrategyQA datasets.
| 2,024 | Computation and Language |
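The preceding abstract contrasts DCD with standard Contrastive Decoding (CD), which needs both an expert and an amateur model. To make that baseline concrete, the sketch below implements the usual CD scoring rule (expert-minus-amateur log-probabilities under an expert plausibility cutoff); it is not the paper's DCD, and the alpha value and toy distributions are illustrative assumptions.

```python
# Minimal sketch of vanilla Contrastive Decoding (CD): score the next token by
# contrasting an expert model's log-probabilities against an amateur model's,
# restricted to tokens the expert itself finds plausible. DCD replaces the
# separate amateur model, which is not shown here.
import numpy as np

def contrastive_decode_step(logp_expert: np.ndarray,
                            logp_amateur: np.ndarray,
                            alpha: float = 0.1) -> int:
    """Pick the next token by expert-vs-amateur log-prob contrast.

    alpha controls the plausibility cutoff relative to the expert's best token.
    """
    # Plausibility mask: keep tokens whose expert probability is within a
    # factor alpha of the expert's most likely token.
    cutoff = np.log(alpha) + logp_expert.max()
    plausible = logp_expert >= cutoff
    # Contrastive score: reward tokens the expert likes more than the amateur.
    scores = np.where(plausible, logp_expert - logp_amateur, -np.inf)
    return int(np.argmax(scores))

# Toy 4-token vocabulary example.
logp_expert = np.log(np.array([0.50, 0.30, 0.15, 0.05]))
logp_amateur = np.log(np.array([0.45, 0.10, 0.30, 0.15]))
print(contrastive_decode_step(logp_expert, logp_amateur))  # -> 1
```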
What's in a Name? Auditing Large Language Models for Race and Gender
Bias | We employ an audit design to investigate biases in state-of-the-art large
language models, including GPT-4. In our study, we prompt the models for advice
involving a named individual across a variety of scenarios, such as during car
purchase negotiations or election outcome predictions. We find that the advice
systematically disadvantages names that are commonly associated with racial
minorities and women. Names associated with Black women receive the least
advantageous outcomes. The biases are consistent across 42 prompt templates and
several models, indicating a systemic issue rather than isolated incidents.
While providing numerical, decision-relevant anchors in the prompt can
successfully counteract the biases, qualitative details have inconsistent
effects and may even increase disparities. Our findings underscore the
importance of conducting audits at the point of LLM deployment and
implementation to mitigate their potential for harm against marginalized
communities.
| 2,024 | Computation and Language |
Driving Generative Agents With Their Personality | This research explores the potential of Large Language Models (LLMs) to
utilize psychometric values, specifically personality information, within the
context of video game character development. Affective Computing (AC) systems
quantify a Non-Player character's (NPC) psyche, and an LLM can take advantage
of the system's information by using the values for prompt generation. The
research shows an LLM can consistently represent a given personality profile,
thereby enhancing the human-like characteristics of game characters.
Repurposing a human examination, the International Personality Item Pool (IPIP)
questionnaire, to evaluate an LLM shows that the model can accurately generate
content concerning the personality provided. Results show that improved LLMs,
such as the latest GPT-4 model, can consistently utilize and interpret
a personality to represent behavior.
| 2,024 | Computation and Language |
Automatic Histograms: Leveraging Language Models for Text Dataset
Exploration | Making sense of unstructured text datasets is perennially difficult, yet
increasingly relevant with Large Language Models. Data workers often rely on
dataset summaries, especially distributions of various derived features. Some
features, like toxicity or topics, are relevant to many datasets, but many
interesting features are domain specific: instruments and genres for a music
dataset, or diseases and symptoms for a medical dataset. Accordingly, data
workers often run custom analyses for each dataset, which is cumbersome and
difficult. We present AutoHistograms, a visualization tool leveraging LLMs.
AutoHistograms automatically identifies relevant features, visualizes them with
histograms, and allows the user to interactively query the dataset for
categories of entities and create new histograms. In a user study with 10 data
workers, we observe that participants can quickly identify insights and
explore the data using AutoHistograms, and conceptualize a broad range of
applicable use cases. Together, this tool and user study contribute to the
growing field of LLM-assisted sensemaking tools.
| 2,024 | Computation and Language |
A Study on the Vulnerability of Test Questions against ChatGPT-based
Cheating | ChatGPT is a chatbot that can answer text prompts fairly accurately, even
performing very well on postgraduate-level questions. Many educators have found
that their take-home or remote tests and exams are vulnerable to ChatGPT-based
cheating because students may directly use answers provided by tools like
ChatGPT. In this paper, we try to provide an answer to an important question:
how well ChatGPT can answer test questions and how we can detect whether the
questions of a test can be answered correctly by ChatGPT. We generated
ChatGPT's responses to the MedMCQA dataset, which contains over 10,000 medical
school entrance exam questions. We analyzed the responses and uncovered certain
types of questions ChatGPT answers more inaccurately than others. In addition,
we have created a basic natural language processing model to single out the
questions most vulnerable to ChatGPT in a collection of questions or a sample
exam. Our tool can be used by test-makers to avoid ChatGPT-vulnerable test
questions.
| 2,023 | Computation and Language |
COBIAS: Contextual Reliability in Bias Assessment | Large Language Models (LLMs) are trained on inherently biased data. Previous
works on debiasing models rely on benchmark datasets to measure model
performance. However, these datasets suffer from several pitfalls due to the
extremely subjective understanding of bias, highlighting a critical need for
contextual exploration. We propose understanding the context of user inputs
with consideration of the diverse situations in which input statements are
possible. This approach would allow for frameworks that foster bias awareness
rather than guardrails that hurt user engagement. Our contribution is twofold:
(i) we create a dataset of 2287 stereotyped statements augmented with points
for adding context; (ii) we develop the Context-Oriented Bias Indicator and
Assessment Score (COBIAS) to assess statements' contextual reliability in
measuring bias. Our metric is a significant predictor of the contextual
reliability of bias-benchmark datasets ($\chi^2=71.02$, $p<2.2 \cdot 10^{-16}$).
COBIAS can be used to create reliable datasets, resulting in an improvement in
bias mitigation works.
| 2,024 | Computation and Language |
Vygotsky Distance: Measure for Benchmark Task Similarity | Evaluation plays a significant role in modern natural language processing.
Most modern NLP benchmarks consist of arbitrary sets of tasks that neither
guarantee any generalization potential for the model once applied outside the
test set nor try to minimize the resource consumption needed for model
evaluation. This paper presents a theoretical instrument and a practical
algorithm to calculate similarity between benchmark tasks; we call this
similarity measure "Vygotsky distance". The core idea of this similarity
measure is that it is based on the relative performance of the "students" on a
given task, rather than on the properties of the task itself. If two tasks are
close to each other in terms of Vygotsky distance, the models tend to have
similar relative performance on them. Thus, knowing the Vygotsky distance between
tasks, one can significantly reduce the number of evaluation tasks while
maintaining a high validation quality. Experiments on various benchmarks,
including GLUE, SuperGLUE, CLUE, and RussianSuperGLUE, demonstrate that a vast
majority of NLP benchmarks could be at least 40% smaller in terms of the tasks
included. Most importantly, Vygotsky distance could also be used for the
validation of new tasks thus increasing the generalization potential of the
future NLP models.
| 2,024 | Computation and Language |
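The abstract above defines Vygotsky distance only informally, as a similarity based on the relative performance of the same "students" on two tasks. The sketch below is one plausible toy instantiation of that idea using Spearman rank correlation of model scores; the paper's actual formula may differ, and the score vectors are made up.

```python
# Rough illustration of a task-similarity measure in the spirit described
# above: two tasks are close when the same set of models is ranked similarly
# on both. Here the similarity is simply Spearman rank correlation turned into
# a distance; this is not necessarily the paper's exact definition.
from scipy.stats import spearmanr

def rank_based_task_distance(scores_task_a, scores_task_b):
    """Distance between two tasks from models' scores (aligned by model)."""
    rho, _ = spearmanr(scores_task_a, scores_task_b)
    return 1.0 - rho  # 0 when rankings agree perfectly, up to 2 when reversed

# Scores of five models on two hypothetical benchmark tasks.
task_a = [0.62, 0.71, 0.55, 0.80, 0.68]
task_b = [0.40, 0.52, 0.35, 0.61, 0.50]
print(rank_based_task_distance(task_a, task_b))  # -> 0.0 (identical rankings)
```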
LLMBind: A Unified Modality-Task Integration Framework | While recent progress in multimodal large language models tackles various
modality tasks, they posses limited integration capabilities for complex
multi-modality tasks, consequently constraining the development of the field.
In this work, we take the initiative to explore and propose LLMBind, a
unified framework for modality task integration, which binds Large Language
Models and corresponding pre-trained task models with task-specific tokens.
Consequently, LLMBind can interpret inputs and produce outputs in versatile
combinations of image, text, video, and audio. Specifically, we introduce a
Mixture-of-Experts technique to enable effective learning for different
multimodal tasks through collaboration among diverse experts. Furthermore, we
create a multi-task dataset comprising 400k instruction data, which unlocks the
ability for interactive visual generation and editing tasks. Extensive
experiments show the effectiveness of our framework across various tasks,
including image, video, audio generation, image segmentation, and image
editing. More encouragingly, our framework can be easily extended to other
modality tasks, showcasing the promising potential of creating a unified AI
agent for modeling universal modalities.
| 2,024 | Computation and Language |
Data Augmentation is Dead, Long Live Data Augmentation | Textual data augmentation (DA) is a prolific field of study where novel
techniques to create artificial data are regularly proposed, and that has
demonstrated great efficiency on small data settings, at least for text
classification tasks. In this paper, we challenge those results, showing that
classical data augmentation is simply a way of performing better fine-tuning,
and that spending more time fine-tuning before applying data augmentation
negates its effect. This is a significant contribution as it answers several
questions that were left open in recent years, namely: which DA technique
performs best (all of them, as long as they generate data close enough to the
training set so as not to impair training) and why DA showed positive results
(it facilitates training of the network). We furthermore show that zero- and few-shot
data generation via conversational agents such as ChatGPT or LLama2 can
increase performances, concluding that this form of data augmentation does
still work, even if classical methods do not.
| 2,024 | Computation and Language |
Chain-of-Thought Unfaithfulness as Disguised Accuracy | Understanding the extent to which Chain-of-Thought (CoT) generations align
with a large language model's (LLM) internal computations is critical for
deciding whether to trust an LLM's output. As a proxy for CoT faithfulness,
arXiv:2307.13702 propose a metric that measures a model's dependence on its CoT
for producing an answer. Within a single family of proprietary models, they
find that LLMs exhibit a scaling-then-inverse-scaling relationship between
model size and their measure of faithfulness, and that a 13 billion parameter
model exhibits increased faithfulness compared to models ranging from 810
million to 175 billion parameters in size. We evaluate whether these results
generalize as a property of all LLMs. We replicate their experimental setup
with three different families of models and, under specific conditions,
successfully reproduce the scaling trends for CoT faithfulness they report.
However, we discover that simply changing the order of answer choices in the
prompt can reduce the metric by 73 percentage points. The faithfulness metric
is also highly correlated ($R^2$ = 0.91) with accuracy, raising doubts about
its validity as a construct for evaluating faithfulness.
| 2,024 | Computation and Language |
A Usage-centric Take on Intent Understanding in E-Commerce | Identifying and understanding user intents is a pivotal task for E-Commerce.
Despite its popularity, intent understanding has not been consistently defined
or accurately benchmarked. In this paper, we focus on predicative user intents
as "how a customer uses a product", and pose intent understanding as a natural
language reasoning task, independent of product ontologies. We identify two
weaknesses of FolkScope, the SOTA E-Commerce Intent Knowledge Graph, that limit
its capacity to reason about user intents and to recommend diverse useful
products. Following these observations, we introduce a Product Recovery
Benchmark including a novel evaluation framework and an example dataset. We
further validate the above FolkScope weaknesses on this benchmark.
| 2,024 | Computation and Language |
Tokenization counts: the impact of tokenization on arithmetic in
frontier LLMs | Tokenization, the division of input text into input tokens, is an often
overlooked aspect of the large language model (LLM) pipeline and could be the
source of useful or harmful inductive biases. Historically, LLMs have relied on
byte pair encoding, without care to specific input domains. With the increased
use of LLMs for reasoning, various number-specific tokenization schemes have
been adopted, with popular models like LLaMa and PaLM opting for single-digit
tokenization, while GPT-3.5 and GPT-4 have separate tokens for every 1-, 2-, and
3-digit number. In this work, we study the effect this choice has on numerical
reasoning through the use of arithmetic tasks. We consider left-to-right and
right-to-left tokenization for GPT-3.5 and -4, finding that right-to-left
tokenization (enforced by comma-separating numbers at inference time) leads to
substantially improved performance. Furthermore, we find that model errors when using
standard left-to-right tokenization follow stereotyped error patterns,
suggesting that model computations are systematic rather than approximate. We
show that the model is able to convert between tokenizations easily, thus
allowing chain-of-thought-inspired approaches to recover performance on
left-to-right tokenized inputs. We also find the gap between tokenization
directions decreases when models are scaled, possibly indicating that larger
models are better able to override this tokenization-dependent inductive bias.
In summary, our work performs the first study of how number tokenization
choices lead to differences in model performance on arithmetic tasks,
accompanied by a thorough analysis of error patterns. We hope this work
inspires practitioners to more carefully ablate number tokenization-related
choices when working towards general models of numerical reasoning.
| 2,024 | Computation and Language |
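The inference-time intervention described in the abstract above, forcing right-to-left digit grouping by comma-separating numbers, is simple to reproduce. A minimal sketch (the regex-based helper below is an illustrative assumption, not the authors' released code; note it also normalizes away leading zeros):

```python
import re

def format_r2l(text: str) -> str:
    """Comma-separate every integer so digits are grouped from the right,
    e.g. "1234567" -> "1,234,567" (this also drops leading zeros)."""
    return re.sub(r"\d+", lambda m: f"{int(m.group()):,}", text)

# A prompt preprocessed this way nudges a tokenizer that has 1-3 digit tokens
# toward right-to-left chunking of each number.
print(format_r2l("What is 1234567 + 89012?"))  # What is 1,234,567 + 89,012?
```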
Re-Examine Distantly Supervised NER: A New Benchmark and a Simple
Approach | This paper delves into Named Entity Recognition (NER) under the framework of
Distant Supervision (DS-NER), where the main challenge lies in the compromised
quality of labels due to inherent errors such as false positives, false
negatives, and positive type errors. We critically assess the efficacy of
current DS-NER methodologies using a real-world benchmark dataset named QTL,
revealing that their performance often does not meet expectations. To tackle
the prevalent issue of label noise, we introduce a simple yet effective
approach, Curriculum-based Positive-Unlabeled Learning (CuPUL), which
strategically starts with "easy", cleaner samples during the training process
to enhance model resilience to noisy samples. Our empirical results highlight
the capability of CuPUL to significantly reduce the impact of noisy labels and
outperform existing methods. The QTL dataset and our code are available on GitHub.
| 2,024 | Computation and Language |
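The curriculum idea in CuPUL, starting training on "easy", cleaner samples and only later admitting noisier ones, can be illustrated with a small scheduling sketch. The confidence-based easiness score, the seed model interface, and the linear schedule below are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def curriculum_batches(examples, seed_model, num_stages=5):
    """Yield progressively larger training pools, cleanest-looking samples first."""
    # Easiness proxy: the seed model's max class probability on each example.
    scores = np.array([seed_model.predict_proba(x).max() for x in examples])
    order = np.argsort(-scores)  # most confident (likely cleanest) first
    for stage in range(1, num_stages + 1):
        cutoff = int(len(examples) * stage / num_stages)
        yield [examples[i] for i in order[:cutoff]]
```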
Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich
Reasoning | While large language models (LLMs) have the capability to iteratively reflect
on their own outputs, recent studies have observed their struggles with
knowledge-rich problems without access to external resources. In addition to
the inefficiency of LLMs in self-assessment, we also observe that LLMs struggle
to revisit their predictions despite receiving explicit negative feedback.
Therefore, we propose Mirror, a Multiple-perspective self-reflection method for
knowledge-rich reasoning, to avoid getting stuck at a particular reflection
iteration. Mirror enables LLMs to reflect on clues from multiple perspectives,
achieved through a heuristic interaction between a Navigator and a Reasoner. It
guides agents toward diverse yet plausibly reliable reasoning trajectories
without access to ground truth by encouraging (1) diversity of directions
generated by the Navigator and (2) agreement among strategically induced
perturbations in responses generated by the Reasoner. Experiments on five
reasoning datasets demonstrate Mirror's superiority over several
contemporary self-reflection approaches. Additionally, the ablation
studies clearly indicate that our strategies alleviate the aforementioned
challenges.
| 2,024 | Computation and Language |
MultiLS: A Multi-task Lexical Simplification Framework | Lexical Simplification (LS) automatically replaces difficult-to-read words
with easier alternatives while preserving a sentence's original meaning. LS is a
precursor to Text Simplification with the aim of improving text accessibility
for various target demographics, including children, second language learners,
and individuals with reading disabilities or low literacy. Several datasets exist
for LS. These LS datasets specialize in one or two sub-tasks within the LS
pipeline. However, as of this moment, no single LS dataset has been developed
that covers all LS sub-tasks. We present MultiLS, the first LS framework that
allows for the creation of a multi-task LS dataset. We also present MultiLS-PT,
the first dataset to be created using the MultiLS framework. We demonstrate the
potential of MultiLS-PT by carrying out all LS sub-tasks: (1) lexical
complexity prediction (LCP), (2) substitute generation, and (3) substitute
ranking for Portuguese. Model performances are reported, ranging from
transformer-based models to more recent large language models (LLMs).
| 2,024 | Computation and Language |
GenCeption: Evaluate Multimodal LLMs with Unlabeled Unimodal Data | Multimodal Large Language Models (MLLMs) are commonly evaluated using costly
annotated multimodal benchmarks. However, these benchmarks often struggle to
keep pace with the rapidly advancing requirements of MLLM evaluation. We
propose GenCeption, a novel and annotation-free MLLM evaluation framework that
merely requires unimodal data to assess inter-modality semantic coherence and
inversely reflects the models' inclination to hallucinate. Analogous to the
popular DrawCeption game, GenCeption initiates with a non-textual sample and
undergoes a series of iterative description and generation steps. Semantic
drift across iterations is quantified using the GC@T metric. Our empirical
findings validate GenCeption's efficacy, showing strong correlations with
popular MLLM benchmarking results. GenCeption may be extended to mitigate
training data contamination by utilizing ubiquitous, previously unseen unimodal
data.
| 2,024 | Computation and Language |
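The iterative describe-then-generate loop behind GenCeption can be sketched as follows. The `describe_image`, `generate_image`, and `embed_image` callables are placeholders for the MLLM under test, a text-to-image model, and an image encoder; averaging cosine similarity to the seed image is only an approximation of the GC@T metric:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def genception_score(seed_image, describe_image, generate_image, embed_image, T=5):
    """Run T describe/generate iterations and track drift from the seed image."""
    ref = embed_image(seed_image)
    image, sims = seed_image, []
    for _ in range(T):
        caption = describe_image(image)   # MLLM under evaluation
        image = generate_image(caption)   # any text-to-image generator
        sims.append(cosine(ref, embed_image(image)))
    return sum(sims) / len(sims)          # higher = less semantic drift
```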
tinyBenchmarks: evaluating LLMs with fewer examples | The versatility of large language models (LLMs) has led to the creation of
diverse benchmarks that thoroughly test a variety of language models'
abilities. These benchmarks consist of tens of thousands of examples, making
evaluation of LLMs very expensive. In this paper, we investigate strategies to
reduce the number of evaluations needed to assess the performance of an LLM on
several key benchmarks. For example, we show that to accurately estimate the
performance of an LLM on MMLU, a popular multiple-choice QA benchmark
consisting of 14K examples, it is sufficient to evaluate this LLM on 100
curated examples. We release evaluation tools and tiny versions of popular
benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical
analysis demonstrates that these tools and tiny benchmarks are sufficient to
reliably and efficiently reproduce the original evaluation results.
| 2,024 | Computation and Language |
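One way to shrink a benchmark to roughly 100 informative examples, in the spirit of the abstract above, is to cluster example embeddings and keep one "anchor" per cluster. This is only a simplified sketch (`embed_example` is a placeholder and the plain average is a naive estimator); the released tinyBenchmarks tooling uses more refined performance estimators:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_anchors(examples, embed_example, k=100, seed=0):
    """Pick the example nearest to each of k cluster centres as an anchor set."""
    X = np.stack([embed_example(e) for e in examples])
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(X)
    anchors = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
        anchors.append(examples[idx[dist.argmin()]])
    return anchors

def estimate_accuracy(model_is_correct, anchors):
    """Naive estimate: average correctness over the anchors only."""
    return sum(model_is_correct(a) for a in anchors) / len(anchors)
```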
Divide-or-Conquer? Which Part Should You Distill Your LLM? | Recent methods have demonstrated that Large Language Models (LLMs) can solve
reasoning tasks better when they are encouraged to solve subtasks of the main
task first. In this paper, we devise a similar strategy that breaks down
reasoning tasks into a problem decomposition phase and a problem solving phase,
and show that this strategy is able to outperform a single-stage solution.
Further, we hypothesize that the decomposition should be easier to distill into
a smaller model compared to the problem solving because the latter requires
large amounts of domain knowledge while the former only requires learning
general problem solving strategies. We propose methods to distill these two
capabilities and evaluate their impact on reasoning outcomes and inference
cost. We find that we can distill the problem decomposition phase and at the
same time achieve good generalization across tasks, datasets, and models.
However, it is harder to distill the problem solving capability without losing
performance and the resulting distilled model struggles with generalization.
These results indicate that by using smaller, distilled problem decomposition
models in combination with problem solving LLMs we can achieve reasoning with
cost-efficient inference and local adaptation.
| 2,024 | Computation and Language |
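The two-phase strategy described above, a (potentially small, distilled) decomposer followed by a (larger) solver, can be sketched as a simple pipeline. The prompt wording and the `call_decomposer` / `call_solver` functions are illustrative placeholders:

```python
def solve_with_decomposition(question, call_decomposer, call_solver):
    """Phase 1: decompose into subquestions; Phase 2: solve them, then compose."""
    plan = call_decomposer(
        "Break the problem into numbered subquestions:\n" + question
    ).splitlines()
    notes = []
    for sub in (s for s in plan if s.strip()):
        history = "\n".join(notes)
        answer = call_solver(f"Known so far:\n{history}\n\nAnswer this: {sub}")
        notes.append(f"{sub} -> {answer}")
    history = "\n".join(notes)
    return call_solver(
        f"Question: {question}\nIntermediate results:\n{history}\nGive the final answer."
    )
```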
CommVQA: Situating Visual Question Answering in Communicative Contexts | Current visual question answering (VQA) models tend to be trained and
evaluated on image-question pairs in isolation. However, the questions people
ask are dependent on their informational needs and prior knowledge about the
image content. To evaluate how situating images within naturalistic contexts
shapes visual questions, we introduce CommVQA, a VQA dataset consisting of
images, image descriptions, real-world communicative scenarios where the image
might appear (e.g., a travel website), and follow-up questions and answers
conditioned on the scenario. We show that CommVQA poses a challenge for current
models. Providing contextual information to VQA models improves performance
broadly, highlighting the relevance of situating systems within a communicative
scenario.
| 2,024 | Computation and Language |
How Important Is Tokenization in French Medical Masked Language Models? | Subword tokenization has become the prevailing standard in the field of
natural language processing (NLP) over recent years, primarily due to the
widespread utilization of pre-trained language models. This shift began with
Byte-Pair Encoding (BPE) and was later followed by the adoption of
SentencePiece and WordPiece. While subword tokenization consistently
outperforms character and word-level tokenization, the precise factors
contributing to its success remain unclear. Key aspects such as the optimal
segmentation granularity for diverse tasks and languages, the influence of data
sources on tokenizers, and the role of morphological information in
Indo-European languages remain insufficiently explored. This is particularly
pertinent for biomedical terminology, characterized by specific rules governing
morpheme combinations. Despite the agglutinative nature of biomedical
terminology, existing language models do not explicitly incorporate this
knowledge, leading to inconsistent tokenization strategies for common terms. In
this paper, we seek to delve into the complexities of subword tokenization in
the French biomedical domain across a variety of NLP tasks and pinpoint areas where
further enhancements can be made. We analyze classical tokenization algorithms,
including BPE and SentencePiece, and introduce an original tokenization
strategy that integrates morpheme-enriched word segmentation into existing
tokenization methods.
| 2,024 | Computation and Language |
Ar-Spider: Text-to-SQL in Arabic | In Natural Language Processing (NLP), one of the most important tasks is
text-to-SQL semantic parsing, which focuses on enabling users to interact with
the database in a more natural manner. In recent years, text-to-SQL has made
significant progress, but most work has been English-centric. In this paper, we
introduce Ar-Spider, the first Arabic cross-domain text-to-SQL dataset. Due
to the unique nature of the language, two major challenges have been
encountered, namely schema-linguistic and SQL-structural challenges. In order
to handle these issues and conduct the experiments, we adopt two baseline
models, LGESQL [4] and S2SQL [12], both of which are tested with two
cross-lingual models to alleviate the effects of the schema-linguistic and SQL
structure linking challenges. The baselines demonstrate decent single-language
performance on our Arabic text-to-SQL dataset, Ar-Spider, achieving 62.48% for
S2SQL and 65.57% for LGESQL, only 8.79% below the highest results achieved by
the baselines when trained on the English dataset. To achieve better performance on
Arabic text-to-SQL, we propose the context similarity relationship (CSR)
approach, which yields a significant overall performance increase of
about 1.52% for S2SQL and 1.06% for LGESQL and closes the gap between the Arabic
and English languages to 7.73%.
| 2,024 | Computation and Language |
Unintended Impacts of LLM Alignment on Global Representation | Before being deployed for user-facing applications, developers align Large
Language Models (LLMs) to user preferences through a variety of procedures,
such as Reinforcement Learning From Human Feedback (RLHF) and Direct Preference
Optimization (DPO). Current evaluations of these procedures focus on benchmarks
of instruction following, reasoning, and truthfulness. However, human
preferences are not universal, and aligning to specific preference sets may
have unintended effects. We explore how alignment impacts performance along
three axes of global representation: English dialects, multilingualism, and
opinions from and about countries worldwide. Our results show that current
alignment procedures create disparities between English dialects and global
opinions. We find alignment improves capabilities in several languages. We
conclude by discussing design decisions that led to these unintended impacts
and recommendations for more equitable preference tuning.
| 2,024 | Computation and Language |
KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large
Language Models | Automatic evaluation methods for large language models (LLMs) are hindered by
data contamination, leading to inflated assessments of their effectiveness.
Existing strategies, which aim to detect contaminated texts, focus on
quantifying contamination status instead of accurately gauging model
performance. In this paper, we introduce KIEval, a Knowledge-grounded
Interactive Evaluation framework, which incorporates an LLM-powered
"interactor" role for the first time to accomplish a dynamic
contamination-resilient evaluation. Starting with a question in a conventional
LLM benchmark involving domain-specific knowledge, KIEval utilizes dynamically
generated, multi-round, and knowledge-focused dialogues to determine whether a
model's response is merely a recall of benchmark answers or demonstrates deep
comprehension and the ability to apply knowledge in more complex conversations. Extensive
experiments on seven leading LLMs across five datasets validate KIEval's
effectiveness and generalization. We also reveal that data contamination brings
no benefit, or even has a negative effect on, models' real-world applicability and
understanding, and that existing contamination detection methods for LLMs can only
identify contamination in pre-training but not during supervised fine-tuning.
| 2,024 | Computation and Language |
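The dynamic, multi-round evaluation loop described above can be sketched with three LLM roles. `interactor`, `candidate`, and `judge` are placeholder callables and the prompts are illustrative; this is not KIEval's released implementation:

```python
def kieval_dialogue(question, interactor, candidate, judge, rounds=3):
    """Interactor deepens a benchmark question over several turns; judge scores depth."""
    transcript = [("user", question), ("assistant", candidate(question))]
    for _ in range(rounds):
        history = "\n".join(f"{role}: {text}" for role, text in transcript)
        follow_up = interactor(
            "Given the dialogue below, ask one deeper follow-up question that "
            "tests understanding of the underlying knowledge.\n" + history
        )
        transcript.append(("user", follow_up))
        transcript.append(("assistant", candidate(follow_up)))
    history = "\n".join(f"{role}: {text}" for role, text in transcript)
    return judge("Rate this dialogue for knowledge depth on a 1-5 scale:\n" + history)
```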
CARBD-Ko: A Contextually Annotated Review Benchmark Dataset for
Aspect-Level Sentiment Classification in Korean | This paper explores the challenges posed by aspect-based sentiment
classification (ABSC) within pretrained language models (PLMs), with a
particular focus on contextualization and hallucination issues. In order to
tackle these challenges, we introduce CARBD-Ko (a Contextually Annotated Review
Benchmark Dataset for Aspect-Based Sentiment Classification in Korean), a
benchmark dataset that incorporates aspects and dual-tagged polarities to
distinguish between aspect-specific and aspect-agnostic sentiment
classification. The dataset consists of sentences annotated with specific
aspects, aspect polarity, aspect-agnostic polarity, and the intensity of
aspects. To address the issue of dual-tagged aspect polarities, we propose a
novel approach employing a Siamese Network. Our experimental findings highlight
the inherent difficulties in accurately predicting dual polarities and
underscore the significance of contextualized sentiment analysis models. The
CARBD-Ko dataset serves as a valuable resource for future research endeavors in
aspect-level sentiment classification.
| 2,024 | Computation and Language |
Unlocking the Power of Large Language Models for Entity Alignment | Entity Alignment (EA) is vital for integrating diverse knowledge graph (KG)
data, playing a crucial role in data-driven AI applications. Traditional EA
methods primarily rely on comparing entity embeddings, but their effectiveness
is constrained by the limited input KG data and the capabilities of the
representation learning techniques. Against this backdrop, we introduce ChatEA,
an innovative framework that incorporates large language models (LLMs) to
improve EA. To address the constraints of limited input KG data, ChatEA
introduces a KG-code translation module that translates KG structures into a
format understandable by LLMs, thereby allowing LLMs to utilize their extensive
background knowledge to improve EA accuracy. To overcome the over-reliance on
entity embedding comparisons, ChatEA implements a two-stage EA strategy that
capitalizes on LLMs' capability for multi-step reasoning in a dialogue format,
thereby enhancing accuracy while preserving efficiency. Our experimental
results affirm ChatEA's superior performance, highlighting LLMs' potential in
facilitating EA tasks.
| 2,024 | Computation and Language |
ToMBench: Benchmarking Theory of Mind in Large Language Models | Theory of Mind (ToM) is the cognitive capability to perceive and ascribe
mental states to oneself and others. Recent research has sparked a debate over
whether large language models (LLMs) exhibit a form of ToM. However, existing
ToM evaluations are hindered by challenges such as constrained scope,
subjective judgment, and unintended contamination, yielding inadequate
assessments. To address this gap, we introduce ToMBench with three key
characteristics: a systematic evaluation framework encompassing 8 tasks and 31
abilities in social cognition, a multiple-choice question format to support
automated and unbiased evaluation, and a build-from-scratch bilingual inventory
to strictly avoid data leakage. Based on ToMBench, we conduct extensive
experiments to evaluate the ToM performance of 10 popular LLMs across tasks and
abilities. We find that even the most advanced LLMs like GPT-4 lag behind human
performance by over 10 percentage points, indicating that LLMs have not achieved a
human-level theory of mind yet. Our aim with ToMBench is to enable an efficient
and effective evaluation of LLMs' ToM capabilities, thereby facilitating the
development of LLMs with inherent social intelligence.
| 2,024 | Computation and Language |
Interpreting Context Look-ups in Transformers: Investigating
Attention-MLP Interactions | In this paper, we investigate the interplay between attention heads and
specialized "next-token" neurons in the Multilayer Perceptron that predict
specific tokens. By prompting an LLM like GPT-4 to explain these model
internals, we can elucidate attention mechanisms that activate certain
next-token neurons. Our analysis identifies attention heads that recognize
contexts relevant to predicting a particular token, activating the associated
neuron through the residual connection. We focus specifically on heads in
earlier layers consistently activating the same next-token neuron across
similar prompts. Exploring these differential activation patterns reveals that
heads that specialize in distinct linguistic contexts are tied to generating
certain tokens. Overall, our method combines neural explanations and probing
isolated components to illuminate how attention enables context-dependent,
specialized processing in LLMs.
| 2,024 | Computation and Language |
On the Multi-turn Instruction Following for Conversational Web Agents | Web agents powered by Large Language Models (LLMs) have demonstrated
remarkable abilities in planning and executing multi-step interactions within
complex web-based environments, fulfilling a wide range of web navigation
tasks. Despite these advancements, the potential for LLM-powered agents to
effectively engage with sequential user instructions in real-world scenarios
has not been fully explored. In this work, we introduce a new task of
Conversational Web Navigation, which necessitates sophisticated interactions
that span multiple turns with both the users and the environment, supported by
a specially developed dataset named Multi-Turn Mind2Web (MT-Mind2Web). To
tackle the limited context length of LLMs and the context-dependency issue of
the conversational tasks, we further propose a novel framework, named
self-reflective memory-augmented planning (Self-MAP), which employs memory
utilization and self-reflection techniques. Extensive experiments are conducted
to benchmark the MT-Mind2Web dataset, and validate the effectiveness of the
proposed method.
| 2,024 | Computation and Language |
ColBERT-XM: A Modular Multi-Vector Representation Model for Zero-Shot
Multilingual Information Retrieval | State-of-the-art neural retrievers predominantly focus on high-resource
languages like English, which impedes their adoption in retrieval scenarios
involving other languages. Current approaches circumvent the lack of
high-quality labeled data in non-English languages by leveraging multilingual
pretrained language models capable of cross-lingual transfer. However, these
models require substantial task-specific fine-tuning across multiple languages,
often perform poorly in languages with minimal representation in the
pretraining corpus, and struggle to incorporate new languages after the
pretraining phase. In this work, we present a novel modular dense retrieval
model that learns from the rich data of a single high-resource language and
effectively zero-shot transfers to a wide array of languages, thereby
eliminating the need for language-specific labeled data. Our model, ColBERT-XM,
demonstrates competitive performance against existing state-of-the-art
multilingual retrievers trained on more extensive datasets in various
languages. Further analysis reveals that our modular approach is highly
data-efficient, effectively adapts to out-of-distribution data, and
significantly reduces energy consumption and carbon emissions. By demonstrating
its proficiency in zero-shot scenarios, ColBERT-XM marks a shift towards more
sustainable and inclusive retrieval systems, enabling effective information
accessibility in numerous languages. We publicly release our code and models
for the community.
| 2,024 | Computation and Language |
Fine-tuning Large Language Models for Domain-specific Machine
Translation | Large language models (LLMs) have made significant progress in machine
translation (MT). However, their potential in domain-specific MT remains
under-explored. Current LLM-based MT systems still face several challenges.
First, for LLMs with in-context learning, their effectiveness is highly
sensitive to input translation examples, and processing them can increase
inference costs. They often require extra post-processing due to
over-generation. Second, LLMs with fine-tuning on domain-specific data often
require high training costs for domain adaptation, and may weaken the zero-shot
MT capabilities of LLMs due to over-specialization. The aforementioned methods
can struggle to translate rare words in domain transfer scenarios. To address
these challenges, this paper proposes a prompt-oriented fine-tuning method,
denoted as LlamaIT, to effectively and efficiently fine-tune a general-purpose
LLM for domain-specific MT tasks. First, we construct a task-specific
mix-domain dataset, which is then used to fine-tune the LLM with LoRA. This can
eliminate the need for input translation examples, post-processing, or
over-specialization. By zero-shot prompting with instructions, we adapt the MT
tasks to the target domain at inference time. To further elicit the MT
capability for rare words, we construct new prompts by incorporating
domain-specific bilingual vocabulary. We also conduct extensive experiments on
both publicly available and self-constructed datasets. The results show that
our LlamaIT can significantly enhance the domain-specific MT capabilities of
the LLM, meanwhile preserving its zero-shot MT capabilities.
| 2,024 | Computation and Language |
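A prompt-oriented LoRA fine-tune of a general-purpose decoder LLM, followed by zero-shot instruction prompting with a small bilingual glossary, could look roughly like the sketch below. The model name, LoRA hyperparameters, and the glossary-augmented prompt are illustrative assumptions, not the paper's released configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "meta-llama/Llama-2-7b-hf"          # placeholder: any decoder-only LLM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # a common choice for LLaMA-style models
)
model = get_peft_model(model, lora)        # only the LoRA weights are trained
model.print_trainable_parameters()
# ... fine-tune here on a mixed-domain, instruction-formatted parallel corpus ...

# Zero-shot, domain-adapted inference with a glossary-augmented instruction.
prompt = ("Translate into German (medical domain). Glossary: 'stent' -> 'Stent'.\n"
          "Sentence: The stent was implanted yesterday.\nTranslation:")
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```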
Gotcha! Don't trick me with unanswerable questions! Self-aligning Large
Language Models for Responding to Unknown Questions | Despite the remarkable abilities of Large Language Models (LLMs) to answer
questions, they often display a considerable level of overconfidence even when
the question does not have a definitive answer. To avoid providing hallucinated
answers to these unknown questions, existing studies typically investigate
approaches to refusing to answer these questions. In this work, we propose a
novel and scalable self-alignment method that utilizes the LLM itself to enhance
its ability to respond to different types of unknown questions, making it capable of
not only refusing to answer but also explaining why unknown
questions are unanswerable. Specifically, the Self-Align method first
employs a two-stage class-aware self-augmentation approach to generate a large
amount of unknown question-response data. Then we conduct disparity-driven
self-curation to select qualified data for fine-tuning the LLM itself for
aligning the responses to unknown questions as desired. Experimental results on
two datasets across four types of unknown questions validate the superiority of
the Self-Align method over existing baselines in terms of three types of task
formulation.
| 2,024 | Computation and Language |
Infusing Hierarchical Guidance into Prompt Tuning: A Parameter-Efficient
Framework for Multi-level Implicit Discourse Relation Recognition | Multi-level implicit discourse relation recognition (MIDRR) aims at
identifying hierarchical discourse relations among arguments. Previous methods
achieve improvements by fine-tuning PLMs. However, due to data
scarcity and the task gap, the pre-trained feature space cannot be accurately
tuned to the task-specific space, which even aggravates the collapse of the
vanilla space. Moreover, the need to comprehend hierarchical semantics for MIDRR
makes the conversion even harder. In this paper, we propose a prompt-based
Parameter-Efficient Multi-level IDRR (PEMI) framework to solve the above
problems. First, we leverage parameter-efficient prompt tuning to drive the
inputted arguments to match the pre-trained space and realize the approximation
with few parameters. Furthermore, we propose a hierarchical label refining
(HLR) method for the prompt verbalizer to deeply integrate hierarchical
guidance into the prompt tuning. Finally, our model achieves comparable results
on PDTB 2.0 and 3.0 using about 0.1% trainable parameters compared with
baselines and the visualization demonstrates the effectiveness of our HLR
method.
| 2,024 | Computation and Language |
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables
Parameter-Efficient Transfer Learning | Parameter-efficient fine-tuning (PEFT) has emerged as an effective method for
adapting pre-trained language models to various tasks efficiently. Recently,
there has been a growing interest in transferring knowledge from one or
multiple tasks to the downstream target task to achieve performance
improvements. However, current approaches typically either train adapters on
individual tasks or distill shared knowledge from source tasks, failing to
fully exploit task-specific knowledge and the correlation between source and
target tasks. To overcome these limitations, we propose PEMT, a novel
parameter-efficient fine-tuning framework based on multi-task transfer
learning. PEMT extends the mixture-of-experts (MoE) framework to capture the
transferable knowledge as a weighted combination of adapters trained on source
tasks. These weights are determined by a gated unit, measuring the correlation
between the target and each source task using task description prompt vectors.
To fully exploit the task-specific knowledge, we also propose the Task Sparsity
Loss to improve the sparsity of the gated unit. We conduct experiments on a
broad range of tasks over 17 datasets. The experimental results demonstrate that
PEMT yields stable improvements over full fine-tuning and over state-of-the-art
PEFT and knowledge-transfer methods on various tasks. The results highlight
the effectiveness of our method which is capable of sufficiently exploiting the
knowledge and correlation features across multiple tasks.
| 2,024 | Computation and Language |
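The gating idea behind PEMT, weighting source-task adapters by how well their task-description prompt vectors match the target task, can be sketched in a few lines of PyTorch. The dot-product gate, the bottleneck adapters, and the entropy-based sparsity penalty below are simplifying assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class GatedAdapterMixture(nn.Module):
    def __init__(self, hidden_dim, bottleneck, num_source_tasks, prompt_dim):
        super().__init__()
        self.adapters = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, bottleneck), nn.ReLU(),
                          nn.Linear(bottleneck, hidden_dim))
            for _ in range(num_source_tasks)
        ])
        # One description prompt vector per source task, plus one for the target task.
        self.source_prompts = nn.Parameter(torch.randn(num_source_tasks, prompt_dim))
        self.target_prompt = nn.Parameter(torch.randn(prompt_dim))

    def forward(self, hidden):                              # (batch, seq, hidden_dim)
        weights = torch.softmax(self.source_prompts @ self.target_prompt, dim=0)
        mixed = sum(w * adapter(hidden) for w, adapter in zip(weights, self.adapters))
        return hidden + mixed, weights                      # residual connection

def sparsity_loss(weights, eps=1e-8):
    """Entropy of the gate; minimizing it pushes the mixture toward sparsity."""
    return -(weights * (weights + eps).log()).sum()
```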
AttributionBench: How Hard is Automatic Attribution Evaluation? | Modern generative search engines enhance the reliability of large language
model (LLM) responses by providing cited evidence. However, evaluating the
answer's attribution, i.e., whether every claim within the generated responses
is fully supported by its cited evidence, remains an open problem. This
verification, traditionally dependent on costly human evaluation, underscores
the urgent need for automatic attribution evaluation methods. To bridge the gap
in the absence of standardized benchmarks for these methods, we present
AttributionBench, a comprehensive benchmark compiled from various existing
attribution datasets. Our extensive experiments on AttributionBench reveal the
challenges of automatic attribution evaluation, even for state-of-the-art LLMs.
Specifically, our findings show that even a fine-tuned GPT-3.5 only achieves
around 80% macro-F1 under a binary classification formulation. A detailed
analysis of more than 300 error cases indicates that a majority of failures
stem from the model's inability to process nuanced information, and from the
discrepancy between the information the model has access to and that which human
annotators have.
| 2,024 | Computation and Language |
Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question
Answering with Large Language Models | This study explores the realm of knowledge-base question answering (KBQA).
KBQA is considered a challenging task, particularly in parsing intricate
questions into executable logical forms. Traditional semantic parsing
(SP)-based methods require extensive data annotations, which result in
significant costs. Recently, the advent of few-shot in-context learning,
powered by large language models (LLMs), has showcased promising capabilities.
Yet, fully leveraging LLMs to parse questions into logical forms in
low-resource scenarios poses a substantial challenge. To tackle these hurdles,
we introduce Interactive-KBQA, a framework designed to generate logical forms
through direct interaction with knowledge bases (KBs). Within this framework,
we have developed three generic APIs for KB interaction. For each category of
complex question, we devised exemplars to guide LLMs through the reasoning
processes. Our method achieves competitive results on the WebQuestionsSP,
ComplexWebQuestions, KQA Pro, and MetaQA datasets with a minimal number of
examples (shots). Importantly, our approach supports manual intervention,
allowing for the iterative refinement of LLM outputs. By annotating a dataset
with step-wise reasoning processes, we showcase our model's adaptability and
highlight its potential for contributing significant enhancements to the field.
| 2,024 | Computation and Language |
Improving Sentence Embeddings with an Automatically Generated NLI
Dataset | Decoder-based large language models (LLMs) have shown high performance on
many tasks in natural language processing. This is also true for sentence
embedding learning, where a decoder-based model, PromptEOL, has achieved the
best performance on semantic textual similarity (STS) tasks. However, PromptEOL
relies heavily on fine-tuning with a manually annotated natural language
inference (NLI) dataset. We aim to improve sentence embeddings learned in an
unsupervised setting by automatically generating an NLI dataset with an LLM and
using it to fine-tune PromptEOL. In experiments on STS tasks, the proposed
method achieved an average Spearman's rank correlation coefficient of 82.21
with respect to human evaluation, thus outperforming existing methods without
using large, manually annotated datasets.
| 2,024 | Computation and Language |
Self-Adaptive Reconstruction with Contrastive Learning for Unsupervised
Sentence Embeddings | The unsupervised sentence embedding task aims to convert sentences into semantic
vector representations. Most previous works directly use the sentence
representations derived from pretrained language models. However, due to the
token bias in pretrained language models, the models cannot capture the
fine-grained semantics in sentences, which leads to poor predictions. To
address this issue, we propose a novel Self-Adaptive Reconstruction Contrastive
Sentence Embeddings (SARCSE) framework, which reconstructs all tokens in
sentences with an AutoEncoder to help the model preserve more fine-grained
semantics during token aggregation. In addition, we propose a self-adaptive
reconstruction loss to alleviate the token bias towards frequency. Experimental
results show that SARCSE gains significant improvements compared with the
strong baseline SimCSE on the 7 STS tasks.
| 2,024 | Computation and Language |
Machine Unlearning of Pre-trained Large Language Models | This study investigates the concept of the `right to be forgotten' within the
context of large language models (LLMs). We explore machine unlearning as a
pivotal solution, with a focus on pre-trained models--a notably
under-researched area. Our research delineates a comprehensive framework for
machine unlearning in pre-trained LLMs, encompassing a critical analysis of
seven diverse unlearning methods. Through rigorous evaluation using curated
datasets from arXiv, books, and GitHub, we establish a robust benchmark for
unlearning performance, demonstrating that these methods are over $10^5$ times
more computationally efficient than retraining. Our results show that
integrating gradient ascent with gradient descent on in-distribution data
improves hyperparameter robustness. We also provide detailed guidelines for
efficient hyperparameter tuning in the unlearning process. Our findings advance
the discourse on ethical AI practices, offering substantive insights into the
mechanics of machine unlearning for pre-trained LLMs and underscoring the
potential for responsible AI development.
| 2,024 | Computation and Language |
Entity-level Factual Adaptiveness of Fine-tuning based Abstractive
Summarization Models | Abstractive summarization models often generate factually inconsistent
content, particularly when the parametric knowledge of the model conflicts with
the knowledge in the input document. In this paper, we analyze the robustness
of fine-tuning based summarization models to knowledge conflict, which we
call factual adaptiveness. We utilize pre-trained language models to construct
evaluation sets and find that factual adaptiveness is not strongly correlated
with factual consistency on original datasets. Furthermore, we introduce a
controllable counterfactual data augmentation method where the degree of
knowledge conflict within the augmented data can be adjusted. Our
experimental results on two pre-trained language models (PEGASUS and BART) and
two fine-tuning datasets (XSum and CNN/DailyMail) demonstrate that our method
enhances factual adaptiveness while achieving factual consistency on original
datasets on par with the contrastive learning baseline.
| 2,024 | Computation and Language |
Biomedical Entity Linking as Multiple Choice Question Answering | Although biomedical entity linking (BioEL) has made significant progress with
pre-trained language models, challenges still exist for fine-grained and
long-tailed entities. To address these challenges, we present BioELQA, a novel
model that treats Biomedical Entity Linking as Multiple Choice Question
Answering. BioELQA first obtains candidate entities with a fast retriever,
jointly presents the mention and candidate entities to a generator, and then
outputs the predicted symbol associated with its chosen entity. This
formulation enables explicit comparison of different candidate entities, thus
capturing fine-grained interactions between mentions and entities, as well as
among entities themselves. To improve generalization for long-tailed entities,
we retrieve similar labeled training instances as clues and concatenate the
input with retrieved instances for the generator. Extensive experimental
results show that BioELQA outperforms state-of-the-art baselines on several
datasets.
| 2,024 | Computation and Language |
DeMPT: Decoding-enhanced Multi-phase Prompt Tuning for Making LLMs Be
Better Context-aware Translators | Generally, the decoder-only large language models (LLMs) are adapted to
context-aware neural machine translation (NMT) in a concatenating way, where
LLMs take the concatenation of the source sentence (i.e., intra-sentence
context) and the inter-sentence context as the input, and then generate the
target tokens sequentially. This adaptation strategy, i.e., concatenation mode,
considers intra-sentence and inter-sentence contexts with the same priority,
despite an apparent difference between the two kinds of contexts. In this
paper, we propose an alternative adaptation approach, named Decoding-enhanced
Multi-phase Prompt Tuning (DeMPT), to make LLMs discriminately model and
utilize the inter- and intra-sentence context and more effectively adapt LLMs
to context-aware NMT. First, DeMPT divides the context-aware NMT process into
three separate phases. During each phase, different continuous prompts are
introduced to make LLMs discriminately model various information. Second, DeMPT
employs a heuristic way to further discriminately enhance the utilization of
the source-side inter- and intra-sentence information at the final decoding
phase. Experiments show that our approach significantly outperforms the
concatenation method, and further improves the performance of LLMs in discourse
modeling.
| 2,024 | Computation and Language |
Fine-Grained Detoxification via Instance-Level Prefixes for Large
Language Models | Impressive results have been achieved in natural language processing (NLP)
tasks through the training of large language models (LLMs). However, these
models occasionally produce toxic content such as insults, threats, and
profanity in response to certain prompts, thereby constraining their practical
utility. To tackle this issue, various finetuning-based and decoding-based
approaches have been utilized to mitigate toxicity. However, these methods
typically necessitate additional costs such as high-quality training data or
auxiliary models. In this paper, we propose fine-grained detoxification via
instance-level prefixes (FGDILP) to mitigate toxic text without additional
cost. Specifically, FGDILP contrasts the contextualized representation in
attention space using a positive prefix-prepended prompt against multiple
negative prefix-prepended prompts at the instance level. This allows for
constructing fine-grained subtoxicity vectors, which enables collaborative
detoxification by fusing them to correct the normal generation process when
provided with a raw prompt. We validate that FGDILP enables controlled text
generation with regard to toxicity at both the utterance and context levels.
Our method surpasses prompt-based baselines in detoxification, although at a
slight cost to generation fluency and diversity.
| 2,024 | Computation and Language |
GPT-HateCheck: Can LLMs Write Better Functional Tests for Hate Speech
Detection? | Online hate detection suffers from biases incurred in data sampling,
annotation, and model pre-training. Therefore, measuring the averaged
performance over all examples in held-out test data is inadequate. Instead, we
must identify specific model weaknesses and be informed when it is more likely
to fail. A recent proposal in this direction is HateCheck, a suite for testing
fine-grained model functionalities on synthesized data generated using
templates of the kind "You are just a [slur] to me." However, despite enabling
more detailed diagnostic insights, the HateCheck test cases are often generic
and have simplistic sentence structures that do not match the real-world data.
To address this limitation, we propose GPT-HateCheck, a framework to generate
more diverse and realistic functional tests from scratch by instructing large
language models (LLMs). We employ an additional natural language inference
(NLI) model to verify the generations. Crowd-sourced annotation demonstrates
that the generated test cases are of high quality. Using the new functional
tests, we can uncover model weaknesses that would be overlooked using the
original HateCheck dataset.
| 2,024 | Computation and Language |
Chitchat as Interference: Adding User Backstories to Task-Oriented
Dialogues | During task-oriented dialogues (TODs), human users naturally introduce
chitchat that is beyond the immediate scope of the task, interfering with the
flow of the conversation. To address this issue without the need for expensive
manual data creation, we use few-shot prompting with Llama-2-70B to enhance the
MultiWOZ dataset with user backstories, a typical example of chitchat
interference in TODs. We assess the impact of this addition by testing two
models: one trained solely on TODs and another trained on TODs with a
preliminary chitchat interaction. Our analysis reveals that our enriched
dataset poses a significant challenge to these systems. Moreover, we
demonstrate that our dataset can be effectively used for training purposes,
enabling a system to consistently acknowledge the user's backstory while also
successfully moving the task forward in the same turn, as confirmed by human
evaluation. These findings highlight the benefits of generating novel
chitchat-TOD scenarios to test TOD systems more thoroughly and improve their
resilience to natural user interferences.
| 2,024 | Computation and Language |
DEEM: Dynamic Experienced Expert Modeling for Stance Detection | Recent work has made a preliminary attempt to use large language models
(LLMs) to solve the stance detection task, showing promising results. However,
considering that stance detection usually requires detailed background
knowledge, the vanilla reasoning method may neglect the domain knowledge to
make a professional and accurate analysis. Thus, there is still room for
improving LLMs' reasoning, especially in leveraging the generation
capability of LLMs to simulate specific experts (i.e., multi-agents) to detect
the stance. In this paper, different from existing multi-agent works that
require detailed descriptions and use fixed experts, we propose a Dynamic
Experienced Expert Modeling (DEEM) method which can leverage the generated
experienced experts and let LLMs reason in a semi-parametric way, making the
experts more generalizable and reliable. Experimental results demonstrate that
DEEM consistently achieves the best results on three standard benchmarks,
outperforms methods with self-consistency reasoning, and reduces the bias of
LLMs.
| 2,024 | Computation and Language |
MemoryPrompt: A Light Wrapper to Improve Context Tracking in Pre-trained
Language Models | Transformer-based language models (LMs) track contextual information through
large, hard-coded input windows. We introduce MemoryPrompt, a leaner approach
in which the LM is complemented by a small auxiliary recurrent network that
passes information to the LM by prefixing its regular input with a sequence of
vectors, akin to soft prompts, without requiring LM finetuning. Tested on a
task designed to probe an LM's ability to keep track of multiple fact updates, a
MemoryPrompt-augmented LM outperforms much larger LMs that have access to the
full input history. We also test MemoryPrompt on a long-distance dialogue
dataset, where its performance is comparable to that of a model conditioned on
the entire conversation history. In both experiments we also observe that,
unlike full-finetuning approaches, MemoryPrompt does not suffer from
catastrophic forgetting when adapted to new tasks, thus not disrupting the
generalist capabilities of the underlying LM.
| 2,024 | Computation and Language |
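A minimal version of the wrapper described above is a small recurrent module that digests each new text chunk and emits a handful of soft-prompt vectors to prepend to the frozen LM's input embeddings. Sizes, pooling, and the GRU choice are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MemoryPrompter(nn.Module):
    def __init__(self, embed_dim, memory_dim=256, num_prompt_vectors=4):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, memory_dim, batch_first=True)
        self.to_prompts = nn.Linear(memory_dim, num_prompt_vectors * embed_dim)
        self.num_prompt_vectors = num_prompt_vectors
        self.embed_dim = embed_dim

    def forward(self, chunk_embeds, memory=None):
        # chunk_embeds: (batch, seq, embed_dim) embeddings of the newest text chunk.
        _, memory = self.rnn(chunk_embeds, memory)
        prompts = self.to_prompts(memory[-1])
        prompts = prompts.view(-1, self.num_prompt_vectors, self.embed_dim)
        return prompts, memory

# Usage with a frozen Hugging Face LM (illustrative):
#   token_embeds = lm.get_input_embeddings()(input_ids)
#   lm(inputs_embeds=torch.cat([prompts, token_embeds], dim=1), ...)
```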
Let's Rectify Step by Step: Improving Aspect-based Sentiment Analysis
with Diffusion Models | Aspect-Based Sentiment Analysis (ABSA) stands as a crucial task in predicting
the sentiment polarity associated with identified aspects within text. However,
a notable challenge in ABSA lies in precisely determining the aspects'
boundaries (start and end indices), especially for long ones, due to users'
colloquial expressions. We propose DiffusionABSA, a novel diffusion model
tailored for ABSA, which extracts the aspects progressively step by step.
Particularly, DiffusionABSA gradually adds noise to the aspect terms in the
training process, subsequently learning a denoising process that progressively
restores these terms in a reverse manner. To estimate the boundaries, we design
a denoising neural network enhanced by a syntax-aware temporal attention
mechanism to chronologically capture the interplay between aspects and
surrounding text. Empirical evaluations conducted on eight benchmark datasets
underscore the compelling advantages offered by DiffusionABSA when compared
against robust baseline models. Our code is publicly available at
https://github.com/Qlb6x/DiffusionABSA.
| 2,024 | Computation and Language |
Causal Graph Discovery with Retrieval-Augmented Generation based Large
Language Models | Causal graph recovery is essential in the field of causal inference.
Traditional methods are typically knowledge-based or statistical
estimation-based, which are limited by data collection biases and individuals'
knowledge about factors affecting the relations between variables of interest.
The advance of large language models (LLMs) provides opportunities to address
these problems. We propose a novel method that utilizes the extensive knowledge
contained within a large corpus of scientific literature to deduce causal
relationships in general causal graph recovery tasks. This method leverages
Retrieval-Augmented Generation (RAG) based LLMs to systematically analyze and
extract pertinent information from a comprehensive collection of research
papers. Our method first retrieves relevant text chunks from the aggregated
literature. Then, the LLM is tasked with identifying and labelling potential
associations between factors. Finally, we present a method to aggregate the
associational relationships to build a causal graph. We demonstrate that our method
is able to construct high-quality causal graphs on the well-known SACHS dataset
solely from the literature.
| 2,024 | Computation and Language |
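The retrieve-label-aggregate pipeline can be sketched as below. `retrieve` and `ask_llm` are placeholder callables for the RAG retriever and the LLM, and majority voting over passages is just one simple aggregation rule:

```python
from collections import Counter
from itertools import permutations
import networkx as nx

def recover_causal_graph(variables, retrieve, ask_llm, min_votes=2):
    """Ask the LLM about each ordered pair, grounded in retrieved passages."""
    votes = Counter()
    for a, b in permutations(variables, 2):
        for chunk in retrieve(f"{a} {b}"):
            answer = ask_llm(
                f"Based on this passage:\n{chunk}\n"
                f"Does {a} causally affect {b}? Answer yes or no."
            )
            if answer.strip().lower().startswith("yes"):
                votes[(a, b)] += 1
    graph = nx.DiGraph()
    graph.add_nodes_from(variables)
    graph.add_edges_from(edge for edge, n in votes.items() if n >= min_votes)
    return graph
```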
How (un)ethical are instruction-centric responses of LLMs? Unveiling the
vulnerabilities of safety guardrails to harmful queries | In this study, we tackle a growing concern around the safety and ethical use
of large language models (LLMs). Despite their potential, these models can be
tricked into producing harmful or unethical content through various
sophisticated methods, including 'jailbreaking' techniques and targeted
manipulation. Our work zeroes in on a specific issue: the extent to which LLMs can
be led astray by asking them to generate responses that are instruction-centric,
such as pseudocode, a program, or a software snippet, as opposed to vanilla
text. To investigate this question, we introduce TechHazardQA, a dataset
containing complex queries which should be answered in both text and
instruction-centric formats (e.g., pseudocodes), aimed at identifying triggers
for unethical responses. We query a series of LLMs -- Llama-2-13b, Llama-2-7b,
Mistral-V2 and Mistral 8X7B -- and ask them to generate both text and
instruction-centric responses. For evaluation we report the harmfulness score
metric as well as judgements from GPT-4 and humans. Overall, we observe that
asking LLMs to produce instruction-centric responses enhances the unethical
response generation by ~2-38% across the models. As an additional objective, we
investigate the impact of model editing using the ROME technique, which further
increases the propensity for generating undesirable content. In particular,
asking edited LLMs to generate instruction-centric responses further increases
the unethical response generation by ~3-16% across the different models.
| 2,024 | Computation and Language |
ArabianGPT: Native Arabic GPT-based Large Language Model | The predominance of English and Latin-based large language models (LLMs) has
led to a notable deficit in native Arabic LLMs. This discrepancy is accentuated
by the prevalent inclusion of English tokens in existing Arabic models,
detracting from their efficacy in processing native Arabic's intricate
morphology and syntax. Consequently, there is a theoretical and practical
imperative for developing LLMs predominantly focused on Arabic linguistic
elements. To address this gap, this paper proposes ArabianGPT, a series of
transformer-based models within the ArabianLLM suite designed explicitly for
Arabic. These models, including ArabianGPT-0.1B and ArabianGPT-0.3B, vary in
size and complexity, aligning with the nuanced linguistic characteristics of
Arabic. The AraNizer tokenizer, integral to these models, addresses the unique
morphological aspects of Arabic script, ensuring more accurate text processing.
Empirical results from fine-tuning the models on tasks like sentiment analysis
and summarization demonstrate significant improvements. For sentiment analysis,
the fine-tuned ArabianGPT-0.1B model achieved a remarkable accuracy of 95%, a
substantial increase from the base model's 56%. Similarly, in summarization
tasks, fine-tuned models showed enhanced F1 scores, indicating improved
precision and recall in generating concise summaries. Comparative analysis of
fine-tuned ArabianGPT models against their base versions across various
benchmarks reveals nuanced differences in performance, with fine-tuning
positively impacting specific tasks like question answering and summarization.
These findings underscore the efficacy of fine-tuning in aligning ArabianGPT
models more closely with specific NLP tasks, highlighting the potential of
tailored transformer architectures in advancing Arabic NLP.
| 2,024 | Computation and Language |
Ranking Entities along Conceptual Space Dimensions with LLMs: An
Analysis of Fine-Tuning Strategies | Conceptual spaces represent entities in terms of their primitive semantic
features. Such representations are highly valuable but they are notoriously
difficult to learn, especially when it comes to modelling perceptual and
subjective features. Distilling conceptual spaces from Large Language Models
(LLMs) has recently emerged as a promising strategy. However, existing work has
been limited to probing pre-trained LLMs using relatively simple zero-shot
strategies. We focus in particular on the task of ranking entities according to
a given conceptual space dimension. Unfortunately, we cannot directly fine-tune
LLMs on this task, because ground truth rankings for conceptual space
dimensions are rare. We therefore use more readily available features as
training data and analyse whether the ranking capabilities of the resulting
models transfer to perceptual and subjective features. We find that this is
indeed the case, to some extent, but having perceptual and subjective features
in the training data seems essential for achieving the best results. We
furthermore find that pointwise ranking strategies are competitive against
pairwise approaches, in defiance of common wisdom.
| 2,024 | Computation and Language |
NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data | Large Language Models (LLMs) have shown impressive abilities in data
annotation, opening the way for new approaches to solve classic NLP problems.
In this paper, we show how to use LLMs to create NuNER, a compact language
representation model specialized in the Named Entity Recognition (NER) task.
NuNER can be fine-tuned to solve downstream NER problems in a data-efficient
way, outperforming similar-sized foundation models in the few-shot regime and
competing with much larger LLMs. We find that the size and entity-type
diversity of the pre-training dataset are key to achieving good performance. We
view NuNER as a member of the broader family of task-specific foundation
models, recently unlocked by LLMs.
| 2,024 | Computation and Language |
Dual Encoder: Exploiting the Potential of Syntactic and Semantic for
Aspect Sentiment Triplet Extraction | Aspect Sentiment Triplet Extraction (ASTE) is an emerging task in fine-grained
sentiment analysis. Recent studies have employed Graph Neural Networks (GNN) to
model the syntax-semantic relationships inherent in triplet elements. However,
they have yet to fully tap into the vast potential of syntactic and semantic
information within the ASTE task. In this work, we propose a \emph{Dual
Encoder: Exploiting the potential of Syntactic and Semantic} model (D2E2S),
which maximizes the syntactic and semantic relationships among words.
Specifically, our model utilizes a dual-channel encoder with a BERT channel to
capture semantic information, and an enhanced LSTM channel for comprehensive
syntactic information capture. Subsequently, we introduce the heterogeneous
feature interaction module to capture intricate interactions between dependency
syntax and attention semantics, and to dynamically select vital nodes. We
leverage the synergy of these modules to harness the significant potential of
syntactic and semantic information in ASTE tasks. Testing on public benchmarks,
our D2E2S model surpasses the current state-of-the-art (SOTA), demonstrating its
effectiveness.
| 2,024 | Computation and Language |
A Data-Centric Approach To Generate Faithful and High Quality Patient
Summaries with Large Language Models | Patients often face difficulties in understanding their hospitalizations,
while healthcare workers have limited resources to provide explanations. In
this work, we investigate the potential of large language models to generate
patient summaries based on doctors' notes and study the effect of training data
on the faithfulness and quality of the generated summaries. To this end, we
develop a rigorous labeling protocol for hallucinations, and have two medical
experts annotate 100 real-world summaries and 100 generated summaries. We show
that fine-tuning on hallucination-free data effectively reduces hallucinations
from 2.60 to 1.55 per summary for Llama 2, while preserving relevant
information. Although the effect is still present, it is much smaller for GPT-4
when prompted with five examples (0.70 to 0.40). We also conduct a qualitative
evaluation using hallucination-free and improved training data. GPT-4 shows
very good results even in the zero-shot setting. We find that common
quantitative metrics do not correlate well with faithfulness and quality.
Finally, we test GPT-4 for automatic hallucination detection, which yields
promising results.
| 2,024 | Computation and Language |
Repetition Improves Language Model Embeddings | Recent approaches to improving the extraction of text embeddings from
autoregressive large language models (LLMs) have largely focused on
improvements to data, backbone pretrained language models, or improving
task-differentiation via instructions. In this work, we address an
architectural limitation of autoregressive models: token embeddings cannot
contain information from tokens that appear later in the input. To address this
limitation, we propose a simple approach, "echo embeddings," in which we repeat
the input twice in context and extract embeddings from the second occurrence.
We show that echo embeddings of early tokens can encode information about later
tokens, allowing us to maximally leverage high-quality LLMs for embeddings. On
the MTEB leaderboard, echo embeddings improve over classical embeddings by over
9% zero-shot and by around 0.7% when fine-tuned. Echo embeddings with a
Mistral-7B model achieve state-of-the-art compared to prior open source models
that do not leverage synthetic fine-tuning data.
| 2,024 | Computation and Language |
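The core trick above, presenting the sentence twice and pooling over the second occurrence so its tokens can attend to the whole sentence, is easy to sketch with Hugging Face transformers. The model name, repetition template, boundary handling, and mean pooling below are simplifications of the paper's setup:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"   # placeholder: any decoder-only LM
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torch_dtype=torch.float16)

def echo_embed(sentence: str) -> torch.Tensor:
    prefix = f"Rewrite the sentence: {sentence}\nRewritten sentence: "
    full = prefix + sentence                        # the sentence appears a second time
    # Approximate where the echo starts by tokenizing the prefix on its own.
    prefix_len = len(tok(prefix)["input_ids"])
    enc = tok(full, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    return hidden[prefix_len:].mean(dim=0)          # pool over the echo only
```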
Leveraging Domain Knowledge for Efficient Reward Modelling in RLHF: A
Case-Study in E-Commerce Opinion Summarization | Reinforcement Learning from Human Feedback (RLHF) has become a dominating
strategy in steering Language Models (LMs) towards human values/goals. The key
to the strategy is employing a reward model ($\varphi$) that can reflect the
latent reward model of humans. While this strategy has proven to be
effective, the training methodology requires a lot of human preference
annotation (usually on the order of tens of thousands of samples) to train $\varphi$.
Such large-scale preference annotation can be justified if the reward model
can be used ubiquitously. However, human values/goals are subjective and depend
on the nature of the task. This poses a challenge in collecting diverse
preferences for downstream applications. To address this, we propose a novel
methodology to infuse domain knowledge into $\varphi$, which reduces the amount
of preference annotation required. We validate our approach in E-Commerce
Opinion Summarization, with a significant reduction in dataset size (just $940$
samples) while advancing the state-of-the-art. Our contributions include a
novel Reward Modelling technique, a new dataset (PromptOpinSumm) for Opinion
Summarization, and a human preference dataset (OpinPref). The proposed
methodology opens avenues for efficient RLHF, making it more adaptable to
diverse applications with varying human values. We release the artifacts for
usage under MIT License.
| 2,024 | Computation and Language |
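The abstract does not spell out how domain knowledge is infused into the reward model, so the following is only a hedged illustration of the general idea: a small learned head combines an encoder-derived score with hand-crafted domain features (e.g., aspect coverage for opinion summarization), leaving fewer parameters to fit from preference pairs. All names and the mixing scheme are assumptions.

```python
import torch
import torch.nn as nn

class DomainInformedRewardModel(nn.Module):
    """Hypothetical reward model mixing a text score with domain features."""
    def __init__(self, encoder_dim: int, num_domain_features: int):
        super().__init__()
        self.text_head = nn.Linear(encoder_dim, 1)            # scores pooled text repr
        self.domain_head = nn.Linear(num_domain_features, 1)  # scores domain features
        self.mix = nn.Parameter(torch.tensor(0.5))            # learned interpolation

    def forward(self, text_repr, domain_feats):
        r_text = self.text_head(text_repr).squeeze(-1)
        r_domain = self.domain_head(domain_feats).squeeze(-1)
        return self.mix * r_text + (1 - self.mix) * r_domain

def preference_loss(r_chosen, r_rejected):
    # Standard Bradley-Terry pairwise loss used in reward modelling.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
```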
Prejudice and Caprice: A Statistical Framework for Measuring Social
Discrimination in Large Language Models | The growing integration of large language models (LLMs) into social
operations amplifies their impact on decisions in crucial areas such as
economics, law, education, and healthcare, raising public concerns about these
models' discrimination-related safety and reliability. However, prior
discrimination-measuring frameworks solely assess the average discriminatory
behavior of LLMs and often prove inadequate because they overlook an additional
factor that drives discrimination, namely the variation of LLMs' predictions across
diverse contexts. In this work, we present the Prejudice-Caprice Framework
(PCF) that comprehensively measures discrimination in LLMs by considering both
their consistently biased preference and preference variation across diverse
contexts. Specifically, we mathematically dissect the aggregated contextualized
discrimination risk of LLMs into prejudice risk, originating from LLMs'
persistent prejudice, and caprice risk, stemming from their generation
inconsistency. In addition, we utilize a data-mining approach to gather
preference-detecting probes from sentence skeletons, devoid of attribute
indications, to approximate LLMs' applied contexts. While initially intended
for assessing discrimination in LLMs, our proposed PCF facilitates the
comprehensive and flexible measurement of any inductive biases, including
knowledge alongside prejudice, across various modality models. We apply our
discrimination-measuring framework to 12 common LLMs, yielding intriguing
findings: i) modern LLMs demonstrate significant pro-male stereotypes, ii)
LLMs' exhibited discrimination correlates with several social and economic
factors, iii) prejudice risk dominates the overall discrimination risk and
follows a normal distribution, and iv) caprice risk contributes minimally to
the overall risk but follows a fat-tailed distribution, suggesting that it is
a wild risk requiring enhanced surveillance.
| 2,024 | Computation and Language |
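A hedged sketch of one natural reading of the prejudice/caprice decomposition above: measure a signed preference gap per probe context, and split the aggregated risk E[gap^2] into the squared mean gap (persistent prejudice) plus the variance across contexts (caprice). The paper's exact estimators and weighting may differ; this is only an illustration of the mean/variance split.

```python
import numpy as np

def prejudice_caprice(gaps) -> dict:
    # gaps: one signed preference gap per probe context, e.g.
    # P(male-associated completion) - P(female-associated completion).
    gaps = np.asarray(gaps, dtype=float)
    prejudice = gaps.mean() ** 2   # persistent, direction-consistent bias
    caprice = gaps.var()           # context-to-context inconsistency
    return {
        "prejudice_risk": prejudice,
        "caprice_risk": caprice,
        "aggregate_risk": prejudice + caprice,  # equals np.mean(gaps ** 2)
    }

# Example with three hypothetical sentence-skeleton probes.
print(prejudice_caprice([0.12, 0.30, 0.21]))
```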
API-BLEND: A Comprehensive Corpora for Training and Benchmarking API
LLMs | There is a growing need for Large Language Models (LLMs) to effectively use
tools and external Application Programming Interfaces (APIs) to plan and
complete tasks. As such, there is tremendous interest in methods that can
acquire sufficient quantities of training and test data that involve calls to
tools / APIs. Two lines of research have emerged as the predominant strategies
for addressing this challenge. The first has focused on synthetic data
generation techniques, while the second has involved curating task-adjacent
datasets which can be transformed into API / Tool-based tasks. In this paper,
we focus on the task of identifying, curating, and transforming existing
datasets and, in turn, introduce API-BLEND, a large corpus for training and
systematic testing of tool-augmented LLMs. The datasets mimic real-world
scenarios involving API-tasks such as API / tool detection, slot filling, and
sequencing of the detected APIs. We demonstrate the utility of the API-BLEND
dataset for both training and benchmarking purposes.
| 2,024 | Computation and Language |
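To make the three API-task types named above concrete (API/tool detection, slot filling, and sequencing), here is an illustrative shape such a training instance might take. The field names and values are hypothetical, not the released API-BLEND schema.

```python
# Hypothetical tool-use instance pairing an utterance with the APIs to call,
# their slot values, and the order in which they should be invoked.
example = {
    "utterance": "Book me a table for two in Rome tomorrow and text Anna the address.",
    "apis": [
        {"name": "restaurant_booking",
         "slots": {"party_size": "2", "city": "Rome", "date": "tomorrow"}},
        {"name": "send_sms",
         "slots": {"recipient": "Anna", "content": "<booking address>"}},
    ],
    "call_order": ["restaurant_booking", "send_sms"],  # sequencing of detected APIs
}
```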
Large Scale Generative AI Text Applied to Sports and Music | We address the problem of scaling up the production of media content,
including commentary and personalized news stories, for large-scale sports and
music events worldwide. Our approach relies on generative AI models to
transform a large volume of multimodal data (e.g., videos, articles, real-time
scoring feeds, statistics, and fact sheets) into coherent and fluent text.
Based on this approach, we introduce, for the first time, an AI commentary
system, which was deployed to produce automated narrations for highlight
packages at the 2023 US Open, Wimbledon, and Masters tournaments. In the same
vein, our solution was extended to create personalized content for ESPN Fantasy
Football and stories about music artists for the Grammy awards. These
applications were built using a common software architecture and achieved a 15x
speed improvement with an average Rouge-L of 82.00 and perplexity of 6.6. Our
work was successfully deployed at the aforementioned events, supporting 90
million fans around the world with 8 billion page views, continuously pushing
the bounds on what is possible at the intersection of sports, entertainment,
and AI.
| 2,024 | Computation and Language |
Beware of Words: Evaluating the Lexical Richness of Conversational Large
Language Models | The performance of conversational Large Language Models (LLMs) in general,
and of ChatGPT in particular, is currently being evaluated on many different
tasks, from logical reasoning or maths to answering questions on a myriad of
topics. Instead, much less attention is being devoted to the study of the
linguistic features of the texts generated by these LLMs. This is surprising
since LLMs are models for language, and understanding how they use the language
is important. Indeed, conversational LLMs are poised to have a significant
impact on the evolution of languages as they may eventually dominate the
creation of new text. This means that, for example, if conversational LLMs do
not use a word, it may become less and less frequent and eventually stop being
used altogether. Therefore, evaluating the linguistic features of the text they
produce and how those depend on the model parameters is the first step toward
understanding the potential impact of conversational LLMs on the evolution of
languages. In this paper, we consider the evaluation of the lexical richness of
the text generated by LLMs and how it depends on the model parameters. A
methodology is presented and used to conduct a comprehensive evaluation of
lexical richness using ChatGPT as a case study. The results show how lexical
richness depends on the version of ChatGPT and some of its parameters, such as
the presence penalty, or on the role assigned to the model. The dataset and
tools used in our analysis are released under open licenses with the goal of
drawing the much-needed attention to the evaluation of the linguistic features
of LLM-generated text.
| 2,024 | Computation and Language |
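The abstract does not name the specific lexical-richness metrics it uses, so the following is only a minimal sketch of one common choice, the type-token ratio, applied to outputs generated under different settings (e.g., presence penalty). The example texts are invented purely for illustration.

```python
import re

def type_token_ratio(text: str) -> float:
    # Ratio of distinct word types to total tokens; higher means richer vocabulary.
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Compare richness of hypothetical outputs under two presence-penalty settings.
low_penalty_output = "the cat sat on the mat and the cat slept"
high_penalty_output = "the cat sat on a mat, then dozed beneath an open window"
print(type_token_ratio(low_penalty_output), type_token_ratio(high_penalty_output))
```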
Detecting misinformation through Framing Theory: the Frame Element-based
Model | In this paper, we delve into the rapidly evolving challenge of misinformation
detection, with a specific focus on the nuanced manipulation of narrative
frames - an under-explored area within the AI community. The potential for
Generative AI models to generate misleading narratives underscores the urgency
of this problem. Drawing from communication and framing theories, we posit that
the presentation or 'framing' of accurate information can dramatically alter
its interpretation, potentially leading to misinformation. We highlight this
issue through real-world examples, demonstrating how shifts in narrative frames
can transmute fact-based information into misinformation. To tackle this
challenge, we propose an innovative approach leveraging the power of
pre-trained Large Language Models and deep neural networks to detect
misinformation originating from accurate facts portrayed under different
frames. These advanced AI techniques offer unprecedented capabilities in
identifying complex patterns within unstructured data critical for examining
the subtleties of narrative frames. The objective of this paper is to bridge a
significant research gap in the AI domain, providing valuable insights and
methodologies for tackling framing-induced misinformation, thus contributing to
the advancement of responsible and trustworthy AI technologies. Extensive
experiments are conducted, and the results explicitly demonstrate the distinct
impact of individual elements of framing theory, supporting the rationale for
applying framing theory to improve performance in misinformation detection.
| 2,024 | Computation and Language |
PCA-Bench: Evaluating Multimodal Large Language Models in
Perception-Cognition-Action Chain | We present PCA-Bench, a multimodal decision-making benchmark for evaluating
the integrated capabilities of Multimodal Large Language Models (MLLMs).
Departing from previous benchmarks focusing on simplistic tasks and individual
model capability, PCA-Bench introduces three complex scenarios: autonomous
driving, domestic robotics, and open-world games. Given task instructions and
diverse contexts, the model is required to seamlessly integrate multiple
capabilities of Perception, Cognition, and Action in a reasoning chain to make
accurate decisions. Moreover, PCA-Bench features error localization
capabilities, scrutinizing model inaccuracies in areas such as perception,
knowledge, or reasoning. This enhances the reliability of deploying MLLMs. To
balance accuracy and efficiency in evaluation, we propose PCA-Eval, an
automatic evaluation protocol, and assess 10 prevalent MLLMs. The results
reveal significant performance disparities between open-source models and
powerful proprietary models like GPT-4 Vision. To address this, we introduce
Embodied-Instruction-Evolution (EIE), an automatic framework for synthesizing
instruction tuning examples in multimodal embodied environments. EIE generates
7,510 training examples in PCA-Bench and enhances the performance of
open-source MLLMs, occasionally surpassing GPT-4 Vision (+3% in decision
accuracy), thereby validating the effectiveness of EIE. Our findings suggest
that robust MLLMs like GPT-4 Vision show promise for decision-making in embodied
agents, opening new avenues for MLLM research.
| 2,024 | Computation and Language |
Evaluating the Performance of ChatGPT for Spam Email Detection | Email continues to be a pivotal and extensively utilized communication medium
within professional and commercial domains. Nonetheless, the prevalence of spam
emails poses a significant challenge for users, disrupting their daily routines
and diminishing productivity. Consequently, accurately identifying and
filtering spam based on content has become crucial for cybersecurity. Recent
advancements in natural language processing, particularly with large language
models like ChatGPT, have shown remarkable performance in tasks such as
question answering and text generation. However, its potential in spam
identification remains underexplored. To fill this gap, this study attempts
to evaluate ChatGPT's capabilities for spam identification in both English and
Chinese email datasets. We employ ChatGPT for spam email detection using
in-context learning, which requires a prompt instruction and a few
demonstrations. We also investigate how the training example size affects the
performance of ChatGPT. For comparison, we also implement five popular
benchmark methods, including naive Bayes, support vector machines (SVM),
logistic regression (LR), feedforward dense neural networks (DNN), and BERT
classifiers. Through extensive experiments, we find that ChatGPT performs
significantly worse than deep supervised learning methods on the large English
dataset, while it delivers superior performance on the low-resource Chinese
dataset, even outperforming BERT in this case.
| 2,024 | Computation and Language |
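A hedged sketch of the in-context-learning setup described above: a prompt instruction plus a few labelled demonstrations sent to a ChatGPT model. The model name, message wording, and demonstrations are assumptions for illustration, not the paper's exact prompts.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_email(body: str, demonstrations: list[tuple[str, str]]) -> str:
    messages = [{"role": "system",
                 "content": "Classify each email as 'spam' or 'ham'. Answer with one word."}]
    # Few-shot demonstrations, each as a user/assistant turn.
    for demo_body, label in demonstrations:
        messages.append({"role": "user", "content": demo_body})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": body})
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return response.choices[0].message.content.strip().lower()

demos = [("WIN A FREE CRUISE!!! Reply now", "spam"),
         ("Minutes from Tuesday's project meeting attached", "ham")]
print(classify_email("Your account has been selected for a $500 gift card", demos))
```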
Prompting LLMs to Compose Meta-Review Drafts from Peer-Review Narratives
of Scholarly Manuscripts | One of the most important yet onerous tasks in the academic peer-reviewing
process is composing meta-reviews, which involves understanding the core
contributions, strengths, and weaknesses of a scholarly manuscript based on
peer-review narratives from multiple experts and then summarizing those
multiple experts' perspectives into a concise holistic overview. Given the
latest major developments in generative AI, especially Large Language Models
(LLMs), it is very compelling to rigorously study the utility of LLMs in
generating such meta-reviews in an academic peer-review setting. In this paper,
we perform a case study with three popular LLMs, i.e., GPT-3.5, LLaMA2, and
PaLM2, to automatically generate meta-reviews by prompting them with different
types/levels of prompts based on the recently proposed TELeR taxonomy. Finally,
we perform a detailed qualitative study of the meta-reviews generated by the
LLMs and summarize our findings and recommendations for prompting LLMs for this
complex task.
| 2,024 | Computation and Language |
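A hedged sketch of prompting an LLM to draft a meta-review from several peer reviews. The two prompt variants only loosely mirror the idea behind the TELeR taxonomy (prompts of increasing specificity); the level definitions and wording below are illustrative assumptions, not the taxonomy's actual levels.

```python
def build_prompt(reviews: list[str], detailed: bool = False) -> str:
    header = "You are an area chair. Write a meta-review of the manuscript."
    if detailed:
        # A more specific (higher-level) prompt variant.
        header += (" Summarize the core contributions, the strengths and weaknesses"
                   " raised by the reviewers, points of agreement and disagreement,"
                   " and end with a recommendation, in under 200 words.")
    body = "\n\n".join(f"Review {i + 1}:\n{r}" for i, r in enumerate(reviews))
    return f"{header}\n\n{body}\n\nMeta-review:"

print(build_prompt(["Strong results but limited novelty.",
                    "Novel idea; experiments could be broader."], detailed=True))
```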
Alternating Weak Triphone/BPE Alignment Supervision from Hybrid Model
Improves End-to-End ASR | In this paper, alternating weak triphone/BPE alignment supervision is
proposed to improve end-to-end model training. Towards this end, triphone and
BPE alignments are extracted using a pre-existing hybrid ASR system. A
regularization effect is then obtained from cross-entropy-based intermediate auxiliary
losses computed on these alignments, at a mid-layer representation of the encoder
for triphone alignments and at the encoder for BPE alignments. Weak supervision
is achieved through strong label smoothing with a parameter of 0.5. Experimental
results on TED-LIUM 2 indicate that either triphone or BPE alignment based weak
supervision improves ASR performance over standard CTC auxiliary loss.
Moreover, their combination lowers the word error rate further. We also
investigate the alternation of the two auxiliary tasks during model training,
and additional performance gain is observed. Overall, the proposed techniques
result in over 10% relative error rate reduction over a CTC-regularized
baseline system.
| 2,024 | Computation and Language |
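A sketch of the auxiliary-loss idea above: cross-entropy against frame-level triphone alignments at a mid-encoder layer and against BPE alignments at the encoder, with strong label smoothing (0.5) providing the "weak" supervision, alternated across training steps. Tensor shapes, projection layers, and the per-step alternation schedule are illustrative assumptions.

```python
import torch
import torch.nn as nn

ce_weak = nn.CrossEntropyLoss(label_smoothing=0.5)  # weak supervision via smoothing

def auxiliary_loss(mid_feats, enc_feats, triphone_proj, bpe_proj,
                   triphone_targets, bpe_targets, step: int) -> torch.Tensor:
    # mid_feats / enc_feats: (batch, frames, dim) encoder activations
    # *_targets: (batch, frames) frame-level alignment labels from a hybrid system
    if step % 2 == 0:  # alternate the auxiliary task between training steps
        logits = triphone_proj(mid_feats)   # (batch, frames, n_triphones)
        targets = triphone_targets
    else:
        logits = bpe_proj(enc_feats)        # (batch, frames, n_bpe_units)
        targets = bpe_targets
    return ce_weak(logits.flatten(0, 1), targets.flatten())

# total_loss = ctc_loss + aux_weight * auxiliary_loss(...)
```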
Selective "Selective Prediction": Reducing Unnecessary Abstention in
Vision-Language Reasoning | Prior work on selective prediction minimizes incorrect predictions from
vision-language models (VLMs) by allowing them to abstain from answering when
uncertain. However, when deploying a vision-language system with low tolerance
for inaccurate predictions, selective prediction may be over-cautious and
abstain too frequently, even on many correct predictions. We introduce
ReCoVERR, an inference-time algorithm to reduce the over-abstention of a
selective vision-language system without decreasing prediction accuracy. When
the VLM makes a low-confidence prediction, instead of abstaining, ReCoVERR tries
to find relevant clues in the image that provide additional evidence for the
prediction. ReCoVERR uses an LLM to pose related questions to the VLM and collects
high-confidence evidence; if enough evidence confirms the prediction, the
system answers instead of abstaining. ReCoVERR enables two VLMs,
BLIP2 and InstructBLIP, to answer up to 20% more questions on the A-OKVQA task
than vanilla selective prediction without decreasing system accuracy, thus
improving overall system reliability.
| 2,024 | Computation and Language |
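A hedged sketch of a ReCoVERR-style inference loop. The callables `vlm_answer` and `llm_ask`, the confidence threshold, and the simple evidence count are hypothetical stand-ins; the paper's actual verification of whether evidence confirms the prediction is more involved and is omitted here.

```python
def recoverr_answer(image, question, vlm_answer, llm_ask,
                    conf_threshold: float = 0.8, evidence_needed: int = 2):
    answer, confidence = vlm_answer(image, question)   # (prediction, confidence)
    if confidence >= conf_threshold:
        return answer                                   # confident: answer directly
    # Low confidence: gather additional visual evidence instead of abstaining.
    evidence = []
    for probe in llm_ask(question, answer):             # related questions from an LLM
        probe_answer, probe_conf = vlm_answer(image, probe)
        if probe_conf >= conf_threshold:
            evidence.append((probe, probe_answer))
    if len(evidence) >= evidence_needed:                # enough confirming clues found
        return answer
    return None                                         # still uncertain: abstain
```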
Language-Based User Profiles for Recommendation | Most conventional recommendation methods (e.g., matrix factorization)
represent user profiles as high-dimensional vectors. Unfortunately, these
vectors lack interpretability and steerability, and often perform poorly in
cold-start settings. To address these shortcomings, we explore the use of user
profiles that are represented as human-readable text. We propose the
Language-based Factorization Model (LFM), which is essentially an
encoder/decoder model where both the encoder and the decoder are large language
models (LLMs). The encoder LLM generates a compact natural-language profile of
the user's interests from the user's rating history. The decoder LLM uses this
summary profile to complete predictive downstream tasks. We evaluate our LFM
approach on the MovieLens dataset, comparing it against matrix factorization
and an LLM model that directly predicts from the user's rating history. In
cold-start settings, we find that our method can have higher accuracy than
matrix factorization. Furthermore, we find that generating a compact and
human-readable summary often performs comparably with or better than direct LLM
prediction, while enjoying better interpretability and shorter model input
length. Our results motivate a number of future research directions and
potential improvements.
| 2,024 | Computation and Language |
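A hedged sketch of the two-stage flow described above: an encoder LLM compresses the rating history into a short natural-language profile, and a decoder LLM uses that profile for a downstream prediction. The `call_llm` helper and the prompt wording are hypothetical, not an interface from the paper.

```python
def build_profile(rating_history: list[tuple[str, int]], call_llm) -> str:
    # Encoder step: summarize the user's rating history as readable text.
    history = "\n".join(f"{title}: {stars}/5" for title, stars in rating_history)
    return call_llm("Summarize this user's movie tastes in 2-3 sentences:\n" + history)

def predict_rating(profile: str, candidate_movie: str, call_llm) -> str:
    # Decoder step: predict from the compact, human-readable profile only.
    prompt = (f"User profile: {profile}\n"
              f"On a scale of 1-5, how would this user rate '{candidate_movie}'? "
              f"Answer with a single number.")
    return call_llm(prompt)
```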
Fine-Grained Self-Endorsement Improves Factuality and Reasoning | This work studies improving large language model (LLM) generations at
inference time by mitigating fact-conflicting hallucinations. Particularly, we
propose a self-endorsement framework that leverages the fine-grained fact-level
comparisons across multiple sampled responses. Compared with prior ensemble
methods (Wang et al., 2022; Chen et al., 2023) that perform response-level
selection, our approach can better alleviate hallucinations, especially for
long-form generation tasks. Our approach can broadly benefit smaller and
open-source LLMs as it mainly conducts simple content-based comparisons.
Experiments on Biographies show that our method can effectively improve the
factuality of generations with simple and intuitive prompts across different
scales of LLMs. Besides, comprehensive analyses on TriviaQA and GSM8K
demonstrate the potential of self-endorsement for broader application.
| 2,024 | Computation and Language |
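A hedged sketch of fact-level self-endorsement as described above: decompose each sampled response into atomic facts, keep only facts that enough of the other samples also support, and regenerate the answer from the endorsed facts. The helpers `extract_facts`, `supports`, and `rewrite_from_facts` are hypothetical LLM-backed functions, and the majority threshold is an assumption.

```python
def self_endorse(responses: list[str], extract_facts, supports, rewrite_from_facts,
                 min_ratio: float = 0.5) -> str:
    endorsed = []
    for i, response in enumerate(responses):
        for fact in extract_facts(response):            # atomic facts in this sample
            others = [r for j, r in enumerate(responses) if j != i]
            votes = sum(supports(other, fact) for other in others)
            if votes / max(len(others), 1) >= min_ratio:   # endorsed by enough samples
                endorsed.append(fact)
    return rewrite_from_facts(endorsed)                 # compose the final answer
```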