Titles | Abstracts | Years | Categories
---|---|---|---|
On the use of Silver Standard Data for Zero-shot Classification Tasks in
Information Extraction | The superior performance of supervised classification methods in the
information extraction (IE) area heavily relies on a large amount of gold
standard data. Recent zero-shot classification methods converted the task to
other NLP tasks (e.g., textual entailment) and used off-the-shelf models of
these NLP tasks to directly perform inference on the test data without using a
large amount of IE annotation data. A potentially valuable by-product of these
methods is large-scale silver standard data, i.e., data pseudo-labeled by
the off-the-shelf models of other NLP tasks. However, there is no further
investigation into the use of these data. In this paper, we propose a new
framework, Clean-LaVe, which aims to utilize silver standard data to enhance
the zero-shot performance. Clean-LaVe includes four phases: (1) Obtaining
silver data; (2) Identifying relatively clean data from silver data; (3)
Finetuning the off-the-shelf model using clean data; (4) Inference on the test
data. The experimental results show that Clean-LaVe outperforms the baseline
by 5% and 6% on the TACRED and Wiki80 datasets in zero-shot relation
classification, by 3%-7% on Smile (Korean and Polish) in zero-shot
cross-lingual relation classification, and by 8% on ACE05-E+ in zero-shot
event argument classification. The code is shared at
https://github.com/wjw136/Clean_LaVe.git.
| 2024 | Computation and Language |
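Phase (2) above, identifying relatively clean data within the silver data, lends itself to a small illustration. Below is a minimal Python sketch of one plausible confidence-based selection heuristic; the function names, the keep-ratio strategy, and the confidence criterion are illustrative assumptions, not Clean-LaVe's actual selection algorithm.

```python
import numpy as np

def select_clean_subset(probs: np.ndarray, labels: np.ndarray, keep_ratio: float = 0.1):
    """Keep the pseudo-labeled examples the model is most confident about.

    probs:  (n_examples, n_classes) class probabilities from the off-the-shelf model
    labels: (n_examples,) pseudo-labels (argmax predictions) -- the silver data
    Returns indices of the assumed-clean subset.
    """
    # Confidence of each pseudo-label; this simple criterion is an assumption
    # here, not Clean-LaVe's exact rule.
    confidence = probs[np.arange(len(labels)), labels]
    n_keep = max(1, int(keep_ratio * len(labels)))
    return np.argsort(-confidence)[:n_keep]

# Toy usage: 4 silver examples over 3 relation classes.
probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.35, 0.25],
                  [0.1, 0.8, 0.1],
                  [0.34, 0.33, 0.33]])
labels = probs.argmax(axis=1)
print(select_clean_subset(probs, labels, keep_ratio=0.5))  # -> [0 2]
```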
Editing Factual Knowledge and Explanatory Ability of Medical Large
Language Models | Model editing aims to precisely modify the behaviours of large language
models (LLMs) on specific knowledge while keeping irrelevant knowledge
unchanged. It has been proven effective in resolving hallucination and
out-of-date issues in LLMs. As a result, it can boost the application of LLMs
in many critical domains (e.g., the medical domain), where hallucination is not
tolerable. In this paper, we propose two model editing studies and validate
them in the medical domain: (1) directly editing factual medical knowledge
and (2) editing the explanations of facts. Meanwhile, we observe that current
model editing methods struggle with the specialization and complexity of
medical knowledge. Therefore, we propose MedLaSA, a novel Layer-wise Scalable
Adapter strategy for medical model editing. It employs causal tracing to
identify the precise location of knowledge in neurons and then introduces
scalable adapters into the dense layers of LLMs. These adapters are assigned
scaling values based on the corresponding specific knowledge. To evaluate the
editing impact, we build two benchmark datasets and introduce a series of
challenging and comprehensive metrics. Extensive experiments on medical LLMs
demonstrate the editing efficiency of MedLaSA, without affecting irrelevant
knowledge that is not edited.
| 2024 | Computation and Language |
Assessing the Efficacy of Grammar Error Correction: A Human Evaluation
Approach in the Japanese Context | In this study, we evaluated the performance of the state-of-the-art sequence
tagging grammar error detection and correction model (SeqTagger) using Japanese
university students' writing samples. With an automatic annotation toolkit,
ERRANT, we first evaluated SeqTagger's performance on error correction with
human expert correction as the benchmark. Then a human-annotated approach was
adopted to evaluate SeqTagger's performance in error detection using a subset
of the writing dataset. Results indicated a precision of 63.66% and a recall of
20.19% for error correction in the full dataset. For the subset, after manual
exclusion of irrelevant errors such as semantic and mechanical ones, the model
showed an adjusted precision of 97.98% and an adjusted recall of 42.98% for
error detection, indicating the model's high precision but also its
conservativeness. Thematic analysis on errors undetected by the model revealed
that determiners and articles, especially the latter, were predominant.
Specifically, in terms of context-independent errors, the model occasionally
overlooked basic ones and faced challenges with overly erroneous or complex
structures. Meanwhile, context-dependent errors, notably those related to tense
and noun number, as well as those possibly influenced by the students' first
language (L1), remained particularly challenging.
| 2024 | Computation and Language |
Small But Funny: A Feedback-Driven Approach to Humor Distillation | The emergence of Large Language Models (LLMs) has brought to light promising
language generation capabilities, particularly in performing tasks like complex
reasoning and creative writing. Consequently, distillation through imitation of
teacher responses has emerged as a popular technique to transfer knowledge from
LLMs to more accessible, Small Language Models (SLMs). While this works well
for simpler tasks, there is a substantial performance gap on tasks requiring
intricate language comprehension and creativity, such as humor generation. We
hypothesize that this gap may stem from the fact that creative tasks might be
hard to learn by imitation alone, and we explore whether an approach involving
supplementary guidance from the teacher could yield higher performance. To
address this, we study the effect of assigning a dual role to the LLM: as a
"teacher" generating data, as well as a "critic" evaluating the student's
performance. Our experiments on humor generation reveal that the incorporation
of feedback significantly narrows the performance gap between SLMs and their
larger counterparts compared to merely relying on imitation. As a result, our
research highlights the potential of using feedback as an additional dimension
to data when transferring complex language abilities via distillation.
| 2024 | Computation and Language |
Exploring Multilingual Human Value Concepts in Large Language Models: Is
Value Alignment Consistent, Transferable and Controllable across Languages? | Prior research in representation engineering has revealed that LLMs encode
concepts within their representation spaces, predominantly centered around
English. In this study, we extend this philosophy to a multilingual scenario,
delving into multilingual human value concepts in LLMs. Through our
comprehensive exploration covering 7 types of human values, 16 languages and 3
LLM series with distinct multilinguality, we empirically substantiate the
existence of multilingual human values in LLMs. Further cross-lingual analysis
on these concepts discloses 3 traits arising from language resource
disparities: cross-lingual inconsistency, distorted linguistic relationships,
and unidirectional cross-lingual transfer between high- and low-resource
languages, all in terms of human value concepts. Additionally, we validate the
feasibility of cross-lingual control over value alignment capabilities of LLMs,
leveraging the dominant language as a source language. Drawing from our
findings on multilingual value alignment, we prudently provide suggestions on
the composition of multilingual data for LLM pre-training: including a limited
number of dominant languages for cross-lingual alignment transfer while
avoiding their excessive prevalence, and keeping a balanced distribution of
non-dominant languages. We hope that our findings will contribute to
enhancing the safety and utility of multilingual AI.
| 2024 | Computation and Language |
Saving the legacy of Hero Ibash: Evaluating Four Language Models for
Aminoacian | This study assesses four cutting-edge language models in the underexplored
Aminoacian language. Through evaluation, it scrutinizes their adaptability,
effectiveness, and limitations in text generation, semantic coherence, and
contextual understanding. Uncovering insights into these models' performance in
a low-resource language, this research pioneers pathways to bridge linguistic
gaps. By offering benchmarks and understanding challenges, it lays groundwork
for future advancements in natural language processing, aiming to elevate the
applicability of language models in similar linguistic landscapes, marking a
significant step toward inclusivity and progress in language technology.
| 2024 | Computation and Language |
Cause and Effect: Can Large Language Models Truly Understand Causality? | With the rise of Large Language Models (LLMs), it has become crucial to
understand their capabilities and limitations in deciphering and explaining the
complex web of causal relationships that language entails. Current methods use
either explicit or implicit causal reasoning, yet there is a strong need for a
unified approach combining both to tackle a wide array of causal relationships
more effectively. This research proposes the Context-Aware Reasoning
Enhancement with Counterfactual Analysis (CARE-CA) framework to
enhance causal reasoning and explainability. The proposed framework
incorporates an explicit causal detection module with ConceptNet and
counterfactual statements, as well as implicit causal detection through LLMs.
Our framework goes one step further with a layer of counterfactual explanations
to accentuate LLMs' understanding of causality. The knowledge from ConceptNet
enhances the performance of multiple causal reasoning tasks such as causal
discovery, causal identification and counterfactual reasoning. The
counterfactual sentences add explicit knowledge of "not caused by" scenarios.
By combining these powerful modules, our model aims to provide a deeper
understanding of causal relationships, enabling enhanced interpretability.
Evaluation on benchmark datasets shows improved performance across all metrics,
such as accuracy, precision, recall, and F1 scores. We also introduce
CausalNet, a new dataset accompanied by our code, to facilitate further
research in this domain.
| 2024 | Computation and Language |
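CARE-CA's explicit causal detection module draws on ConceptNet. As a hedged illustration of what such a lookup can involve, the sketch below queries the public ConceptNet 5 REST API for /r/Causes edges; the endpoint and relation are part of the real API, but how CARE-CA actually consumes these edges is an assumption here.

```python
import requests

def conceptnet_causes(concept: str, limit: int = 5):
    """Fetch 'Causes' edges for an English concept from the public ConceptNet 5 API."""
    url = "https://api.conceptnet.io/query"
    params = {"start": f"/c/en/{concept}", "rel": "/r/Causes", "limit": limit}
    edges = requests.get(url, params=params, timeout=10).json().get("edges", [])
    # Each edge links a start node to an end node it is asserted to cause.
    return [(e["start"]["label"], e["end"]["label"]) for e in edges]

print(conceptnet_causes("exercise"))  # e.g. [('exercise', 'sweat'), ...]
```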
Learning Intrinsic Dimension via Information Bottleneck for Explainable
Aspect-based Sentiment Analysis | Gradient-based explanation methods are increasingly used to interpret neural
models in natural language processing (NLP) due to their high fidelity. Such
methods determine word-level importance using dimension-level gradient values
through a norm function, often presuming equal significance for all gradient
dimensions. However, in the context of Aspect-based Sentiment Analysis (ABSA),
our preliminary research suggests that only specific dimensions are pertinent.
To address this, we propose the Information Bottleneck-based Gradient
(\texttt{IBG}) explanation framework for ABSA. This framework leverages an
information bottleneck to refine word embeddings into a concise intrinsic
dimension, maintaining essential features and omitting unrelated information.
Comprehensive tests show that our \texttt{IBG} approach considerably improves
both the models' performance and interpretability by identifying
sentiment-aware features.
| 2024 | Computation and Language |
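The gradient-based baseline that IBG builds on can be made concrete. Below is a minimal PyTorch sketch of word-level importance as the norm over dimension-level gradients, using an invented toy classifier; it shows the quantity IBG refines, not the IBG bottleneck itself.

```python
import torch

# Toy sentence classifier: mean-pooled embeddings -> linear head (illustrative only).
vocab_size, dim, n_classes = 100, 16, 2
emb = torch.nn.Embedding(vocab_size, dim)
head = torch.nn.Linear(dim, n_classes)

token_ids = torch.tensor([[5, 42, 7, 13]])          # one 4-token sentence
x = emb(token_ids)                                   # (1, 4, dim)
x.retain_grad()                                      # keep gradients on embeddings
logits = head(x.mean(dim=1))
logits[0, logits.argmax()].backward()                # gradient of the predicted class

# Word-level importance: L2 norm over the gradient's embedding dimensions.
# IBG's point is that this norm weights all dimensions equally, which may be suboptimal.
importance = x.grad.norm(dim=-1).squeeze(0)          # (4,)
print(importance)
```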
Unsupervised Information Refinement Training of Large Language Models
for Retrieval-Augmented Generation | Retrieval-augmented generation (RAG) enhances large language models (LLMs) by
incorporating additional information from retrieval. However, studies have
shown that LLMs still face challenges in effectively using the retrieved
information, sometimes ignoring it or being misled by it. A key reason is that
LLM training does not explicitly teach the model how to utilize
retrieved texts of varied quality. In this paper, we propose a novel
perspective that considers the role of LLMs in RAG as ``Information Refiner'',
which means that regardless of correctness, completeness, or usefulness of
retrieved texts, LLMs can consistently integrate knowledge within the retrieved
texts and model parameters to generate texts that are more concise,
accurate, and complete than the retrieved texts. To this end, we propose an
information refinement training method named InFO-RAG that optimizes LLMs for
RAG in an unsupervised manner. InFO-RAG is low-cost and general across various
tasks. Extensive experiments on zero-shot prediction of 11 datasets in diverse
tasks including Question Answering, Slot-Filling, Language Modeling, Dialogue,
and Code Generation show that InFO-RAG improves the performance of LLaMA2 by an
average of 9.39\% relative points. InFO-RAG also shows advantages in in-context
learning and robustness of RAG.
| 2024 | Computation and Language |
Cutting Off the Head Ends the Conflict: A Mechanism for Interpreting and
Mitigating Knowledge Conflicts in Language Models | Recently, retrieval augmentation and tool augmentation have demonstrated a
remarkable capability to expand the internal memory boundaries of language
models (LMs) by providing external context. However, internal memory and
external context inevitably clash, leading to knowledge conflicts within LMs.
In this paper, we aim to interpret the mechanism of knowledge conflicts through
the lens of information flow, and then mitigate conflicts by precise
interventions at the pivotal point. We find there are some attention heads with
opposite effects in the later layers, where memory heads can recall knowledge
from internal memory, and context heads can retrieve knowledge from external
context. Moreover, we reveal that the pivotal point at which knowledge
conflicts emerge in LMs is the integration of inconsistent information flows by
memory heads and context heads. Inspired by these insights, we propose a novel
method called Pruning Head via PatH PatcHing (PH3), which can efficiently
mitigate knowledge conflicts by pruning conflicting attention heads without
updating model parameters. PH3 can flexibly control eight LMs to use internal
memory ($\uparrow$ 44.0%) or external context ($\uparrow$ 38.5%). Moreover, PH3
can also improve the performance of LMs on open-domain QA tasks. We also
conduct extensive experiments to demonstrate the cross-model, cross-relation,
and cross-format generalization of our method.
| 2024 | Computation and Language |
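The core mechanism of PH3, pruning attention heads without updating parameters, can be sketched with a per-head mask. The toy module below is an illustrative assumption: PH3's actual contribution is selecting which heads to prune via path patching, which is not reproduced here.

```python
import torch

class ToyMHA(torch.nn.Module):
    """Minimal multi-head self-attention with a per-head pruning mask."""
    def __init__(self, dim=64, n_heads=8):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.qkv = torch.nn.Linear(dim, 3 * dim)
        self.out = torch.nn.Linear(dim, dim)
        # 1 = keep head, 0 = prune head; stored as a buffer, so no parameters change.
        self.register_buffer("head_mask", torch.ones(n_heads))

    def forward(self, x):
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, t, self.n_heads, self.head_dim)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim**0.5, dim=-1)
        heads = attn @ v                                  # (b, heads, t, head_dim)
        heads = heads * self.head_mask.view(1, -1, 1, 1)  # zero out pruned heads
        return self.out(heads.transpose(1, 2).reshape(b, t, -1))

mha = ToyMHA()
mha.head_mask[[2, 5]] = 0.0   # illustrative indices; PH3 selects heads via path patching
out = mha(torch.randn(1, 10, 64))
```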
Evaluating Quantized Large Language Models | Post-training quantization (PTQ) has emerged as a promising technique to
reduce the cost of large language models (LLMs). Specifically, PTQ can
effectively mitigate memory consumption and reduce computational overhead in
LLMs. To meet the requirements of both high efficiency and performance across
diverse scenarios, a comprehensive evaluation of quantized LLMs is essential to
guide the selection of quantization methods. This paper presents a thorough
evaluation of the effect of PTQ on weights, activations, and the KV cache
across 11 model families, including OPT, LLaMA2, Falcon,
Bloomz, Mistral, ChatGLM, Vicuna, LongChat, StableLM, Gemma, and Mamba, with
parameters ranging from 125M to 180B. The evaluation encompasses five types of
tasks: basic NLP, emergent ability, trustworthiness, dialogue, and long-context
tasks. Moreover, we also evaluate the state-of-the-art (SOTA) quantization
methods to demonstrate their applicability. Based on the extensive experiments,
we systematically summarize the effect of quantization, provide recommendations
to apply quantization techniques, and point out future directions.
| 2024 | Computation and Language |
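For readers unfamiliar with the mechanics under evaluation, here is a minimal sketch of symmetric per-channel round-to-nearest int8 weight quantization, the simplest form of PTQ; the methods surveyed in the paper are considerably more sophisticated.

```python
import torch

def quantize_int8_per_channel(w: torch.Tensor):
    """Symmetric per-output-channel round-to-nearest int8 quantization of a weight matrix."""
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0          # one scale per row
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q.to(torch.float32) * scale

w = torch.randn(4, 16)
q, scale = quantize_int8_per_channel(w)
err = (w - dequantize(q, scale)).abs().max()
print(f"max reconstruction error: {err:.4f}")   # small but nonzero -- hence the evaluation
```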
MIKO: Multimodal Intention Knowledge Distillation from Large Language
Models for Social-Media Commonsense Discovery | Social media has become a ubiquitous tool for connecting with others, staying
updated with news, expressing opinions, and finding entertainment. However,
understanding the intention behind social media posts remains challenging due
to the implicitness of those intentions, the need for
cross-modality understanding of both text and images, and the presence of noisy
information such as hashtags, misspelled words, and complicated abbreviations.
To address these challenges, we present MIKO, a Multimodal Intention Knowledge
DistillatiOn framework that collaboratively leverages a Large Language Model
(LLM) and a Multimodal Large Language Model (MLLM) to uncover users'
intentions. Specifically, we use an MLLM to interpret the image and an LLM to
extract key information from the text and finally instruct the LLM again to
generate intentions. By applying MIKO to publicly available social media
datasets, we construct an intention knowledge base featuring 1,372K intentions
rooted in 137,287 posts. We conduct a two-stage annotation to verify the
quality of the generated knowledge and benchmark the performance of widely used
LLMs for intention generation. We further apply MIKO to a sarcasm detection
dataset and distill a student model to demonstrate the downstream benefits of
applying intention knowledge.
| 2024 | Computation and Language |
Challenges in Pre-Training Graph Neural Networks for Context-Based Fake
News Detection: An Evaluation of Current Strategies and Resource Limitations | Pre-training of neural networks has recently revolutionized the field of
Natural Language Processing (NLP) and had previously demonstrated its effectiveness
in computer vision. At the same time, advances around the detection of fake
news were mainly driven by the context-based paradigm, where different types of
signals (e.g. from social media) form graph-like structures that hold
contextual information beyond the news article to be classified. We propose to
merge these two developments by applying pre-training of Graph Neural Networks
(GNNs) in the domain of context-based fake news detection. Our experiments
provide an evaluation of different pre-training strategies for graph-based
misinformation detection and demonstrate that transfer learning does not
currently lead to significant improvements over training a model from scratch in the
domain. We argue that a major current issue is the lack of suitable large-scale
resources that can be used for pre-training.
| 2024 | Computation and Language |
Clustering and Ranking: Diversity-preserved Instruction Selection
through Expert-aligned Quality Estimation | With contributions from the open-source community, a vast amount of
instruction tuning (IT) data has emerged. Given the significant resource
allocation required by training and evaluating models, it is advantageous to
have an efficient method for selecting high-quality IT data. However, existing
methods for instruction data selection have limitations such as relying on
fragile external APIs, being affected by biases in GPT models, or reducing the
diversity of the selected instruction dataset. In this paper, we propose an
industry-friendly, expert-aligned, and diversity-preserving instruction data
selection method: Clustering and Ranking (CaR). CaR consists of two steps. The
first step involves ranking instruction pairs using a scoring model that is
well aligned with expert preferences (achieving an accuracy of 84.25%). The
second step involves preserving dataset diversity through a clustering
process. In our experiments, CaR selected a subset containing only 1.96% of
Alpaca's IT data, yet the resulting AlpaCaR model trained on this subset
outperforms Alpaca by an average of 32.1% in GPT-4 evaluations. Furthermore,
our method utilizes small models (355M parameters) and requires only 11.2% of
the monetary cost compared to existing methods, making it easily deployable in
industrial scenarios.
| 2024 | Computation and Language |
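The two-step score-then-cluster pattern of CaR can be sketched as follows. The quality scores and embeddings below are random stand-ins; CaR's actual scoring model is trained to match expert preferences, and its clustering configuration may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 32))   # stand-in instruction-pair embeddings
scores = rng.random(1000)                  # stand-in quality scores from a scoring model

# Step 1: rank by quality. Step 2: keep the best item per cluster to preserve diversity.
k = 20
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
selected = [max(np.flatnonzero(clusters == c), key=lambda i: scores[i]) for c in range(k)]
print(len(selected), "diverse, high-quality examples selected")
```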
DANSK and DaCy 2.6.0: Domain Generalization of Danish Named Entity
Recognition | Named entity recognition is one of the cornerstones of Danish NLP, essential
for language technology applications within both industry and research.
However, Danish NER is inhibited by a lack of available datasets. As a
consequence, no current models are capable of fine-grained named entity
recognition, nor have they been evaluated for potential generalizability issues
across datasets and domains. To alleviate these limitations, this paper
introduces: 1) DANSK, a named entity dataset providing high-granularity
tagging as well as within-domain evaluation of models across a diverse set of
domains; 2) DaCy 2.6.0, which includes three generalizable models with
fine-grained annotation; and 3) an evaluation of current state-of-the-art
models' ability to generalize across domains. The evaluation of existing and
new models revealed notable performance discrepancies across domains, which
should be addressed within the field. Shortcomings of the annotation quality of
the dataset and its impact on model training and evaluation are also discussed.
Despite these limitations, we advocate for the use of the new dataset DANSK
alongside further work on the generalizability within Danish NER.
| 2024 | Computation and Language |
LLM Task Interference: An Initial Study on the Impact of Task-Switch in
Conversational History | With the recent emergence of powerful instruction-tuned large language models
(LLMs), various helpful conversational Artificial Intelligence (AI) systems
have been deployed across many applications. When prompted by users, these AI
systems successfully perform a wide range of tasks as part of a conversation.
To provide some sort of memory and context, such approaches typically condition
their output on the entire conversational history. Although this sensitivity to
the conversational history can often lead to improved performance on subsequent
tasks, we find that performance can in fact also be negatively impacted if
there is a task-switch. To the best of our knowledge, our work makes the first
attempt to formalize the study of such vulnerabilities and interference of
tasks in conversational LLMs caused by task-switches in the conversational
history. Our experiments across 5 datasets with 15 task switches using popular
LLMs reveal that many of the task-switches can lead to significant performance
degradation.
| 2024 | Computation and Language |
Improving Open-Ended Text Generation via Adaptive Decoding | Current language models decode text token by token according to a probability
distribution, and determining the appropriate candidates for the next token is
crucial to ensure generation quality. This study introduces adaptive decoding,
a mechanism that empowers language models to dynamically determine a sensible
candidate set during generation. Specifically, we introduce an
entropy-based metric called confidence and conceptualize determining the
optimal candidate set as a confidence-increasing process. The rationality of
including a token in the candidate set is assessed by leveraging the increment
of confidence, enabling the model to determine the most suitable candidate set
adaptively. The experimental results reveal that our method achieves higher
MAUVE and diversity in story generation tasks and maintains certain coherence,
underscoring its superiority over existing algorithms. The code is available at
https://github.com/zwhong714/adaptive_decoding.
| 2024 | Computation and Language |
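A hedged reading of the entropy-based mechanism described above: take confidence as normalized negative entropy of the distribution restricted to the top-k candidates, and pick the k that maximizes it. This is an illustrative interpretation of the abstract, not the paper's exact formulation.

```python
import torch

def adaptive_candidate_set(logits: torch.Tensor, max_k: int = 50):
    """Grow the candidate set in probability order, keeping the most confident size.

    Confidence is modeled here as 1 - H(p_k)/log(k): normalized negative entropy
    of the distribution renormalized over the top-k tokens (an illustrative choice).
    """
    probs = torch.softmax(logits, dim=-1)
    top_p, top_idx = probs.sort(descending=True)
    best_conf, best_k = -1.0, 1
    for k in range(2, max_k + 1):
        p_k = top_p[:k] / top_p[:k].sum()                 # renormalize over top-k
        entropy = -(p_k * p_k.log()).sum()
        conf = 1.0 - entropy / torch.log(torch.tensor(float(k)))
        if conf > best_conf:
            best_conf, best_k = conf.item(), k
    return top_idx[:best_k]

logits = torch.randn(32000)          # one step of next-token logits
candidates = adaptive_candidate_set(logits)
print(len(candidates), "tokens admitted to the candidate set")
```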
CogBench: a large language model walks into a psychology lab | Large language models (LLMs) have significantly advanced the field of
artificial intelligence. Yet, evaluating them comprehensively remains
challenging. We argue that this is partly due to the predominant focus on
performance metrics in most benchmarks. This paper introduces CogBench, a
benchmark that includes ten behavioral metrics derived from seven cognitive
psychology experiments. This novel approach offers a toolkit for phenotyping
LLMs' behavior. We apply CogBench to 35 LLMs, yielding a rich and diverse
dataset. We analyze this data using statistical multilevel modeling techniques,
accounting for the nested dependencies among fine-tuned versions of specific
LLMs. Our study highlights the crucial role of model size and reinforcement
learning from human feedback (RLHF) in improving performance and aligning with
human behavior. Interestingly, we find that open-source models are less
risk-prone than proprietary models and that fine-tuning on code does not
necessarily enhance LLMs' behavior. Finally, we explore the effects of
prompt-engineering techniques. We discover that chain-of-thought prompting
improves probabilistic reasoning, while take-a-step-back prompting fosters
model-based behaviors.
| 2024 | Computation and Language |
Learning or Self-aligning? Rethinking Instruction Fine-tuning | Instruction Fine-tuning~(IFT) is a critical phase in building large language
models~(LLMs). Previous works mainly focus on the IFT's role in the transfer of
behavioral norms and the learning of additional world knowledge. However, the
understanding of the underlying mechanisms of IFT remains significantly
limited. In this paper, we design a knowledge intervention framework to
decouple the potential underlying factors of IFT, thereby enabling individual
analysis of different factors. Surprisingly, our experiments reveal that
attempting to learn additional world knowledge through IFT often struggles to
yield positive impacts and can even lead to markedly negative effects. Further,
we discover that maintaining internal knowledge consistency before and after
IFT is a critical factor for achieving successful IFT. Our findings reveal the
underlying mechanisms of IFT and provide robust support for some very recent
and potential future work.
| 2024 | Computation and Language |
Towards Generalist Prompting for Large Language Models by Mental Models | Large language models (LLMs) have demonstrated impressive performance on many
tasks. However, to achieve optimal performance, specially designed prompting
methods are still needed. These methods either rely on task-specific few-shot
examples that require a certain level of domain knowledge, or are designed to
be simple but only perform well on a few types of tasks. In this work, we
attempt to introduce the concept of generalist prompting, which operates on the
design principle of achieving optimal or near-optimal performance on a wide
range of tasks while eliminating the need for manual selection and
customization of prompts tailored to specific problems. Furthermore, we propose
MeMo (Mental Models), an innovative prompting method that is simple in design
yet effectively fulfills the criteria of generalist prompting. MeMo distills
the cores of various prompting methods into individual mental models and allows
LLMs to autonomously select the most suitable mental models for the problem,
achieving or approaching state-of-the-art results on diverse tasks such
as STEM, logical reasoning, and commonsense reasoning in zero-shot settings. We
hope that the insights presented herein will stimulate further exploration of
generalist prompting methods for LLMs.
| 2024 | Computation and Language |
A BiRGAT Model for Multi-intent Spoken Language Understanding with
Hierarchical Semantic Frames | Previous work on spoken language understanding (SLU) mainly focuses on
single-intent settings, where each input utterance merely contains one user
intent. This configuration significantly limits the surface form of user
utterances and the capacity of output semantics. In this work, we first propose
MIVS, a Multi-Intent dataset collected from a realistic in-Vehicle dialogue
System. The target semantic frame is organized in a 3-layer
hierarchical structure to tackle the alignment and assignment problems in
multi-intent cases. Accordingly, we devise a BiRGAT model to encode the
hierarchy of ontology items, the backbone of which is a dual relational graph
attention network. Coupled with the 3-way pointer-generator decoder, our method
outperforms traditional sequence labeling and classification-based schemes by a
large margin.
| 2024 | Computation and Language |
Hierarchical Multimodal Pre-training for Visually Rich Webpage
Understanding | The growing prevalence of visually rich documents, such as webpages and
scanned/digital-born documents (images, PDFs, etc.), has led to increased
interest in automatic document understanding and information extraction across
academia and industry. Although various document modalities, including image,
text, layout, and structure, facilitate human information retrieval, the
interconnected nature of these modalities presents challenges for neural
networks. In this paper, we introduce WebLM, a multimodal pre-training network
designed to address the limitations of solely modeling text and structure
modalities of HTML in webpages. Instead of processing document images as
unified natural images, WebLM integrates the hierarchical structure of document
images to enhance the understanding of markup-language-based documents.
Additionally, we propose several pre-training tasks to model the interaction
among text, structure, and image modalities effectively. Empirical results
demonstrate that the pre-trained WebLM significantly surpasses previous
state-of-the-art pre-trained models across several webpage understanding tasks.
The pre-trained models and code are available at
https://github.com/X-LANCE/weblm.
| 2024 | Computation and Language |
Retrieval-based Full-length Wikipedia Generation for Emergent Events | In today's fast-paced world, the growing demand to quickly generate
comprehensive and accurate Wikipedia documents for emerging events is both
crucial and challenging. However, previous efforts in Wikipedia generation have
often fallen short of meeting real-world requirements. Some approaches focus
solely on generating segments of a complete Wikipedia document, while others
overlook the importance of faithfulness in generation or fail to consider the
influence of the pre-training corpus. In this paper, we simulate a real-world
scenario where structured full-length Wikipedia documents are generated for
emergent events using input retrieved from web sources. To ensure that Large
Language Models (LLMs) have not been trained on corpora related to the events
in question, we select events that occurred recently and introduce a new
benchmark Wiki-GenBen, which consists of 309 events paired with their
corresponding retrieved web pages for generating evidence. Additionally, we
design a comprehensive set of systematic evaluation metrics and baseline
methods, to evaluate the capability of LLMs in generating factual full-length
Wikipedia documents. The data and code are open-sourced at WikiGenBench.
| 2024 | Computation and Language |
A Survey on Neural Question Generation: Methods, Applications, and
Prospects | In this survey, we present a detailed examination of the advancements in
Neural Question Generation (NQG), a field leveraging neural network techniques
to generate relevant questions from diverse inputs like knowledge bases, texts,
and images. The survey begins with an overview of NQG's background,
encompassing the task's problem formulation, prevalent benchmark datasets,
established evaluation metrics, and notable applications. It then methodically
classifies NQG approaches into three predominant categories: structured NQG,
which utilizes organized data sources, unstructured NQG, focusing on more
loosely structured inputs like texts or visual content, and hybrid NQG, drawing
on diverse input modalities. This classification is followed by an in-depth
analysis of the distinct neural network models tailored for each category,
discussing their inherent strengths and potential limitations. The survey
culminates with a forward-looking perspective on the trajectory of NQG,
identifying emergent research trends and prospective developmental paths.
Accompanying this survey is a curated collection of related research papers,
datasets and codes, systematically organized on Github, providing an extensive
reference for those delving into NQG.
| 2024 | Computation and Language |
Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the
Key? | Recent progress suggests that multi-agent discussion
improves the reasoning abilities of LLMs. In this work, we reevaluate this
claim through systematic experiments, where we propose a novel group discussion
framework to enrich the set of discussion mechanisms. Interestingly, our
results show that a single-agent LLM with strong prompts can achieve almost the
same performance as the best existing discussion approach on a wide range of
reasoning tasks and backbone LLMs. We observe that the multi-agent discussion
performs better than a single agent only when there is no demonstration in the
prompt. Further study reveals the common interaction mechanisms of LLMs during
the discussion.
| 2024 | Computation and Language |
Towards Better Understanding of Contrastive Sentence Representation
Learning: A Unified Paradigm for Gradient | Sentence Representation Learning (SRL) is a crucial task in Natural Language
Processing (NLP), where contrastive Self-Supervised Learning (SSL) is currently
a mainstream approach. However, the reasons behind its remarkable effectiveness
remain unclear. Specifically, in other research fields, contrastive SSL shares
similarities in both theory and practical performance with non-contrastive SSL
(e.g., alignment & uniformity, Barlow Twins, and VICReg). However, in SRL,
contrastive SSL outperforms non-contrastive SSL significantly. Therefore, two
questions arise: First, what commonalities enable various contrastive losses to
achieve superior performance in SRL? Second, how can we make non-contrastive
SSL, which is similar to contrastive SSL but ineffective in SRL, effective? To
address these questions, we start from the perspective of gradients and
discover that four effective contrastive losses can be integrated into a
unified paradigm, which depends on three components: the Gradient Dissipation,
the Weight, and the Ratio. Then, we conduct an in-depth analysis of the roles
these components play in optimization and experimentally demonstrate their
significance for model performance. Finally, by adjusting these components, we
enable non-contrastive SSL to achieve outstanding performance in SRL.
| 2024 | Computation and Language |
Is Crowdsourcing Breaking Your Bank? Cost-Effective Fine-Tuning of
Pre-trained Language Models with Proximal Policy Optimization | Wide usage of ChatGPT has highlighted the potential of reinforcement learning
from human feedback. However, its training pipeline relies on manual ranking, a
resource-intensive process. To reduce labor costs, we propose a self-supervised
text ranking approach that applies Proximal Policy Optimization (PPO) to fine-tune
language models while eliminating the need for human annotators. Our method
begins with probabilistic sampling to encourage a language model to generate
diverse responses for each input. We then employ TextRank and ISODATA
algorithms to rank and cluster these responses based on their semantics.
Subsequently, we construct a reward model to learn the rank and optimize our
generative policy. Our experimental results, conducted using two language
models on three tasks, demonstrate that the models trained by our method
considerably outperform baselines regarding BLEU, GLEU, and METEOR scores.
Furthermore, our manual evaluation shows that our ranking results exhibit a
remarkably high consistency with human rankings. This research significantly
reduces training costs of proximal policy-guided models and demonstrates the
potential for self-correction of language models.
| 2024 | Computation and Language |
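The TextRank step of the ranking pipeline above can be sketched as PageRank-style power iteration over a response-similarity graph. The similarity matrix below is a toy stand-in, and the ISODATA clustering stage is omitted.

```python
import numpy as np

def textrank(sim: np.ndarray, damping: float = 0.85, iters: int = 100):
    """PageRank-style centrality over a response-similarity graph (TextRank)."""
    np.fill_diagonal(sim, 0.0)                      # no self-similarity edges
    col_sums = sim.sum(axis=0, keepdims=True)
    transition = sim / np.where(col_sums == 0, 1, col_sums)  # column-stochastic
    n = sim.shape[0]
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * transition @ scores
    return scores

# Toy cosine-similarity matrix for 4 sampled responses.
sim = np.array([[1.0, 0.8, 0.2, 0.1],
                [0.8, 1.0, 0.3, 0.2],
                [0.2, 0.3, 1.0, 0.6],
                [0.1, 0.2, 0.6, 1.0]])
print(np.argsort(-textrank(sim)))   # response indices, most central first
```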
How to think step-by-step: A mechanistic understanding of
chain-of-thought reasoning | Despite superior reasoning prowess demonstrated by Large Language Models
(LLMs) with Chain-of-Thought (CoT) prompting, a lack of understanding prevails
around the internal mechanisms of the models that facilitate CoT generation.
This work investigates the neural sub-structures within LLMs that manifest CoT
reasoning from a mechanistic point of view. From an analysis of LLaMA-2 7B
applied to multistep reasoning over fictional ontologies, we demonstrate that
LLMs deploy multiple parallel pathways of answer generation for step-by-step
reasoning. These parallel pathways provide sequential answers from the input
question context as well as the generated CoT. We observe a striking functional
rift in the middle layers of the LLM. Token representations in the initial half
remain strongly biased towards the pretraining prior, with in-context information
taking over abruptly in the later half. This internal phase shift manifests in
different functional components: attention heads that write the answer token
predominantly appear in the later half, attention heads that move information
along ontological relationships appear exclusively in the initial half, and so
on. To the best of our knowledge, this is the first attempt towards mechanistic
investigation of CoT reasoning in LLMs.
| 2024 | Computation and Language |
Learning to Generate Instruction Tuning Datasets for Zero-Shot Task
Adaptation | We introduce Bonito, an open-source model for conditional task generation:
the task of converting unannotated text into task-specific training datasets
for instruction tuning. Our goal is to enable zero-shot task adaptation of
large language models on users' specialized, private data. We train Bonito on a
new large-scale dataset with 1.65M examples created by remixing existing
instruction tuning datasets into meta-templates. The meta-templates for a
dataset produce training examples where the input is the unannotated text and
the task attribute and the output consists of the instruction and the response.
We use Bonito to generate synthetic tasks for seven datasets from specialized
domains across three task types -- yes-no question answering, extractive
question answering, and natural language inference -- and adapt language
models. We show that Bonito significantly improves the average performance of
pretrained and instruction-tuned models over the de facto self-supervised
baseline. For example, adapting Mistral-Instruct-v2 and instruction tuned
variants of Mistral and Llama2 with Bonito improves the strong zero-shot
performance by 22.1 F1 points whereas the next word prediction objective undoes
some of the benefits of instruction tuning and reduces the average performance
by 0.8 F1 points. We conduct additional experiments with Bonito to understand
the effects of the domain, the size of the training set, and the choice of
alternative synthetic task generators. Overall, we show that learning with
synthetic instruction tuning datasets is an effective way to adapt language
models to new domains. The model, dataset, and code are available at
https://github.com/BatsResearch/bonito.
| 2024 | Computation and Language |
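A hedged illustration of the meta-template format described above: the input couples unannotated text with a task attribute, and the output packs an instruction and response. The template wording and special markers are invented for illustration, not Bonito's actual meta-templates.

```python
# Input side: the unannotated passage and a task attribute.
passage = "The mitochondrion is the powerhouse of the cell."
task_type = "yes-no question answering"

model_input = f"<|task|> {task_type} <|context|> {passage}"

# Output side: the model is trained to emit an instruction/response pair like this,
# which then becomes synthetic instruction-tuning data for the target domain.
model_output = (
    "Instruction: Based on the context, is the mitochondrion "
    "described as the powerhouse of the cell?\n"
    "Response: Yes."
)
print(model_input, model_output, sep="\n---\n")
```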
Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems
in Commonsense Reasoning | Large language models exhibit high-level commonsense reasoning abilities,
especially with enhancement methods like Chain-of-Thought (CoT). However, we
find these CoT-like methods lead to a considerable number of originally correct
answers turning wrong, which we define as the Toxic CoT problem. To interpret
and mitigate this problem, we first utilize attribution tracing and causal
tracing methods to probe the internal working mechanism of the LLM during CoT
reasoning. Through comparisons, we show that the model loses information
from the question in the shallow attention layers when generating
rationales or answers. Based on these probing findings, we design a novel method
called RIDERS (Residual decodIng and sERial-position Swap), which compensates
for the information deficit in the model from both decoding and serial-position
perspectives. Through extensive experiments on multiple commonsense reasoning
benchmarks, we validate that this method not only significantly eliminates
Toxic CoT problems (decreased by 23.6%), but also effectively improves the
model's overall commonsense reasoning performance (increased by 5.5%).
| 2024 | Computation and Language |
VerifiNER: Verification-augmented NER via Knowledge-grounded Reasoning
with Large Language Models | Recent approaches in domain-specific named entity recognition (NER), such as
biomedical NER, have shown remarkable advances. However, they still lack
faithfulness, producing erroneous predictions. We assume that knowledge of
entities can be useful in verifying the correctness of the predictions. Despite
the usefulness of knowledge, resolving such errors with knowledge is
nontrivial, since the knowledge itself does not directly indicate the
ground-truth label. To this end, we propose VerifiNER, a post-hoc verification
framework that identifies errors from existing NER methods using knowledge and
revises them into more faithful predictions. Our framework leverages the
reasoning abilities of large language models to adequately ground on knowledge
and the contextual information in the verification process. We validate
the effectiveness of VerifiNER through extensive experiments on biomedical
datasets. The results suggest that VerifiNER can successfully verify errors
from existing models as a model-agnostic approach. Further analyses on
out-of-domain and low-resource settings show the usefulness of VerifiNER in
real-world applications.
| 2024 | Computation and Language |
Tokenization Is More Than Compression | Tokenization is a foundational step in Natural Language Processing (NLP)
tasks, bridging raw text and language models. Existing tokenization approaches
like Byte-Pair Encoding (BPE) originate from the field of data compression, and
it has been suggested that the effectiveness of BPE stems from its ability to
condense text into a relatively small number of tokens. We test the hypothesis
that fewer tokens lead to better downstream performance by introducing
PathPiece, a new tokenizer that segments a document's text into the minimum
number of tokens for a given vocabulary. Through extensive experimentation we
find that this hypothesis does not hold, casting doubt on our understanding of
the reasons for effective tokenization. To examine which other factors play a
role, we evaluate design decisions across all three phases of tokenization:
pre-tokenization, vocabulary construction, and segmentation, offering new
insights into the design of effective tokenizers. Specifically, we illustrate
the importance of pre-tokenization and the benefits of using BPE to initialize
vocabulary construction. We train 64 language models with varying tokenization,
ranging in size from 350M to 2.4B parameters, all of which are made publicly
available.
| 2024 | Computation and Language |
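Segmenting text into the minimum number of tokens for a given vocabulary, PathPiece's objective, reduces to dynamic programming over the string. A minimal sketch of that objective follows; PathPiece's byte-level handling and vocabulary-construction choices are not reproduced.

```python
def min_token_segmentation(text: str, vocab: set[str], max_len: int = 16):
    """Segment `text` into the fewest vocabulary tokens via dynamic programming."""
    n = len(text)
    best = [None] * (n + 1)   # best[i] = (token_count, backpointer, token) for text[:i]
    best[0] = (0, None, None)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            if best[j] is not None and text[j:i] in vocab:
                cand = (best[j][0] + 1, j, text[j:i])
                if best[i] is None or cand[0] < best[i][0]:
                    best[i] = cand
    if best[n] is None:
        raise ValueError("text cannot be segmented with this vocabulary")
    tokens, i = [], n
    while i > 0:                      # walk backpointers to recover the segmentation
        _, j, tok = best[i]
        tokens.append(tok)
        i = j
    return tokens[::-1]

vocab = {"un", "believ", "able", "u", "n", "b", "unbeliev", "believable"}
print(min_token_segmentation("unbelievable", vocab))   # ['un', 'believable']
```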
The First Place Solution of WSDM Cup 2024: Leveraging Large Language
Models for Conversational Multi-Doc QA | Conversational multi-doc question answering aims to answer specific questions
based on the retrieved documents as well as the contextual conversations. In
this paper, we introduce our winning approach for the "Conversational Multi-Doc
QA" challenge in WSDM Cup 2024, which exploits the superior natural language
understanding and generation capability of Large Language Models (LLMs). We
first adapt LLMs to the task, then devise a hybrid training strategy to make
the most of in-domain unlabeled data. Moreover, an advanced text embedding
model is adopted to filter out potentially irrelevant documents and several
approaches are designed and compared for the model ensemble. Equipped with all
these techniques, our solution finally ranked 1st place in WSDM Cup 2024,
surpassing its rivals by a large margin. The source code has been released at
https://github.com/zhangzhao219/WSDM-Cup-2024.
| 2024 | Computation and Language |
Decomposed Prompting: Unveiling Multilingual Linguistic Structure
Knowledge in English-Centric Large Language Models | Despite the predominance of English in their training data, English-centric
Large Language Models (LLMs) like GPT-3 and LLaMA display a remarkable ability
to perform multilingual tasks, raising questions about the depth and nature of
their cross-lingual capabilities. This paper introduces the decomposed
prompting approach to probe the linguistic structure understanding of these
LLMs in sequence labeling tasks. Diverging from the single text-to-text prompt,
our method generates for each token of the input sentence an individual prompt
which asks for its linguistic label. We assess our method on the Universal
Dependencies part-of-speech tagging dataset for 38 languages, utilizing both
English-centric and multilingual LLMs. Our findings show that decomposed
prompting surpasses the iterative prompting baseline in efficacy and efficiency
under zero- and few-shot settings. Further analysis reveals the influence of
evaluation methods and the use of instructions in prompts. Our multilingual
investigation shows that English-centric language models perform better on
average than multilingual models. Our study offers insights into the
multilingual transferability of English-centric LLMs, contributing to the
understanding of their multilingual linguistic knowledge.
| 2024 | Computation and Language |
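The decomposed prompting scheme above is mechanically simple: one prompt per token, each asking only for that token's label. The prompt template in the sketch below is an illustrative assumption, not the paper's exact wording.

```python
def decomposed_pos_prompts(sentence: str, tagset: list[str]) -> list[str]:
    """One prompt per token, each asking only for that token's POS label."""
    tokens = sentence.split()
    template = (
        "Sentence: {sentence}\n"
        "What is the part-of-speech tag of the word \"{token}\" "
        "in this sentence? Choose one of: {tags}.\nAnswer:"
    )
    return [
        template.format(sentence=sentence, token=tok, tags=", ".join(tagset))
        for tok in tokens
    ]

upos = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "DET", "ADP", "PUNCT"]
for prompt in decomposed_pos_prompts("The cat sleeps", upos):
    print(prompt, end="\n\n")   # each prompt is sent to the LLM independently
```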
Can GPT Improve the State of Prior Authorization via Guideline Based
Automated Question Answering? | Health insurance companies have a defined process called prior authorization
(PA) which is a health plan cost-control process that requires doctors and
other healthcare professionals to get clearance in advance from a health plan
before performing a particular procedure on a patient in order to be eligible
for payment coverage. For health insurance companies, approving PA requests for
patients in the medical domain is a time-consuming and challenging task. One of
those key challenges is validating if a request matches up to certain criteria
such as age, gender, etc. In this work, we evaluate whether GPT can validate
numerous key factors, in turn helping health plans reach a decision drastically
faster. We frame it as a question answering task, prompting GPT to answer a
question from the patient's electronic health record. We experiment with different
conventional prompting techniques as well as introduce our own novel prompting
technique. Moreover, we report a qualitative human assessment of the natural
language generation outputs from our approach. Results show that our method
achieves superior performance with the mean weighted F1 score of 0.61 as
compared to its standard counterparts.
| 2024 | Computation and Language |
Emotion Classification in Low and Moderate Resource Languages | It is important to be able to analyze the emotional state of people around
the globe. There are 7100+ active languages spoken around the world and
building emotion classification for each language is labor intensive.
Particularly for low-resource and endangered languages, building emotion
classification can be quite challenging. We present a cross-lingual emotion
classifier, where we train an emotion classifier with resource-rich languages
(i.e. \textit{English} in our work) and transfer the learning to low and
moderate resource languages. We compare and contrast two approaches of transfer
learning from a high-resource language to a low or moderate-resource language.
One approach projects annotations from a high-resource language to low- and
moderate-resource languages via parallel corpora, and the other uses direct
transfer from the high-resource language to the other languages. We show the
efficacy of our approaches on 6 languages: Farsi, Arabic, Spanish, Ilocano,
Odia, and Azerbaijani. Our results indicate that our approaches outperform
random baselines and transfer emotions across languages successfully. For all
languages, the direct cross-lingual transfer of emotion yields better results.
We also create annotated emotion-labeled resources for four languages: Farsi,
Azerbaijani, Ilocano and Odia.
| 2024 | Computation and Language |
Leveraging Diverse Modeling Contexts with Collaborating Learning for
Neural Machine Translation | Autoregressive (AR) and Non-autoregressive (NAR) models are two types of
generative models for Neural Machine Translation (NMT). AR models predict
tokens in a word-by-word manner and can effectively capture the distribution of
real translations. NAR models predict tokens by extracting bidirectional
contextual information which can improve the inference speed but they suffer
from performance degradation. Previous works utilized AR models to enhance NAR
models by reducing the training data's complexity, or incorporated global
information into AR models by virtue of NAR models. However, these
methods only take advantage of the contextual information of a single type of
model while neglecting the diversity in the contextual information that can be
provided by different types of models. In this paper, we propose a novel
generic collaborative learning method, DCMCL, where AR and NAR models are
treated as collaborators instead of teachers and students. To hierarchically
leverage the bilateral contextual information, token-level mutual learning and
sequence-level contrastive learning are adopted between AR and NAR models.
Extensive experiments on four widely used benchmarks show that the proposed
DCMCL method can simultaneously improve both AR and NAR models with up to 1.38
and 2.98 BLEU scores respectively, and can also outperform the current
best-unified model with up to 0.97 BLEU scores for both AR and NAR decoding.
| 2024 | Computation and Language |
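The token-level mutual learning component described above can be sketched as a symmetric KL term that pulls each model's per-token distribution toward the other's. Shapes and the unweighted sum below are illustrative; the sequence-level contrastive component and the AR/NAR architectures themselves are omitted.

```python
import torch
import torch.nn.functional as F

def token_mutual_learning_loss(ar_logits: torch.Tensor, nar_logits: torch.Tensor):
    """Symmetric token-level KL between AR and NAR next-token distributions.

    Both inputs: (batch, seq_len, vocab). Each model treats the other's
    (detached) distribution as a soft target, so they learn collaboratively.
    """
    ar_logp = F.log_softmax(ar_logits, dim=-1)
    nar_logp = F.log_softmax(nar_logits, dim=-1)
    kl_ar_to_nar = F.kl_div(nar_logp, ar_logp.detach().exp(), reduction="batchmean")
    kl_nar_to_ar = F.kl_div(ar_logp, nar_logp.detach().exp(), reduction="batchmean")
    return kl_ar_to_nar + kl_nar_to_ar

ar_logits = torch.randn(2, 7, 1000, requires_grad=True)
nar_logits = torch.randn(2, 7, 1000, requires_grad=True)
loss = token_mutual_learning_loss(ar_logits, nar_logits)
loss.backward()   # gradients flow into both models
```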
Beyond Natural Language: LLMs Leveraging Alternative Formats for
Enhanced Reasoning and Communication | Natural language (NL) has long been the predominant format for human
cognition and communication, and by extension, has been similarly pivotal in
the development and application of Large Language Models (LLMs). Yet, besides
NL, LLMs have seen various non-NL formats during pre-training, such as code and
logical expression. NL's status as the optimal format for LLMs, particularly in
single-LLM reasoning and multi-agent communication, has not been thoroughly
examined. In this work, we challenge the default use of NL by exploring the
utility of non-NL formats in these contexts. We show that allowing LLMs to
autonomously select the most suitable format before reasoning or communicating
leads to a 3.3 to 5.7\% improvement in reasoning efficiency for different LLMs,
and up to a 72.7\% reduction in token usage in multi-agent communication, all
while maintaining communicative effectiveness. Our comprehensive analysis
further reveals that LLMs can devise a format from limited task instructions
and that the devised format is effectively transferable across different LLMs.
Intriguingly, the structured communication format decided by LLMs exhibits
notable parallels with established agent communication languages, suggesting a
natural evolution towards efficient, structured communication in agent
communication. Our code is released at
\url{https://github.com/thunlp/AutoForm}.
| 2024 | Computation and Language |
HOP to the Next Tasks and Domains for Continual Learning in NLP | Continual Learning (CL) aims to learn a sequence of problems (i.e., tasks and
domains) by transferring knowledge acquired on previous problems, whilst
avoiding forgetting of past ones. Different from previous approaches which
focused on CL for one NLP task or domain in a specific use-case, in this paper,
we address a more general CL setting to learn from a sequence of problems in a
unique framework. Our method, HOP, permits hopping across tasks and domains by
addressing the CL problem along three directions: (i) we employ a set of
adapters to generalize a large pre-trained model to unseen problems, (ii) we
compute high-order moments over the distribution of embedded representations to
distinguish independent and correlated statistics across different tasks and
domains, (iii) we process this enriched information with auxiliary heads
specialized for each end problem. An extensive experimental campaign on 4 NLP
applications, 5 benchmarks, and 2 CL setups demonstrates the effectiveness of
HOP.
| 2024 | Computation and Language |
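Direction (ii) above, computing high-order moments over the distribution of embedded representations, can be sketched directly. The moment set below (mean, variance, skewness, kurtosis) is a natural reading of "high-order moments"; how HOP routes these statistics to its adapters and auxiliary heads is not reproduced.

```python
import torch

def high_order_moments(embeddings: torch.Tensor, eps: float = 1e-6):
    """Per-dimension moments of a (batch, dim) embedding distribution.

    Returns a (4, dim) tensor: mean, variance, skewness, kurtosis.
    """
    mean = embeddings.mean(dim=0)
    centered = embeddings - mean
    var = centered.pow(2).mean(dim=0)
    std = var.clamp_min(eps).sqrt()
    skew = (centered / std).pow(3).mean(dim=0)
    kurt = (centered / std).pow(4).mean(dim=0)
    return torch.stack([mean, var, skew, kurt])

feats = torch.randn(128, 768)            # a batch of embedded representations
stats = high_order_moments(feats)        # (4, 768) distributional signature
print(stats.shape)
```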
Meta-Task Prompting Elicits Embedding from Large Language Models | In this work, we introduce a new unsupervised embedding method, Meta-Task
Prompting with Explicit One-Word Limitation (MetaEOL), for generating
high-quality sentence embeddings from Large Language Models (LLMs) without the
need for model fine-tuning or task-specific engineering. Leveraging meta-task
prompting, MetaEOL guides LLMs to produce embeddings through a series of
carefully designed prompts that address multiple representational aspects. Our
comprehensive experiments demonstrate that embeddings averaged from various
meta-tasks yield competitive performance on Semantic Textual Similarity (STS)
benchmarks and excel in downstream tasks, surpassing contrastive-trained
models. Our findings suggest a new scaling law for embedding generation,
offering a versatile, resource-efficient approach for embedding extraction
across diverse sentence-centric scenarios.
| 2024 | Computation and Language |
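A hedged sketch of the MetaEOL recipe described above: several meta-task prompts each demand a one-word summary, the final token's last-layer hidden state serves as an embedding, and embeddings are averaged across meta-tasks. The prompt wordings are illustrative, and gpt2 stands in for the larger LLMs the paper targets.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # stand-in; MetaEOL targets larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)

meta_prompts = [  # illustrative meta-tasks, each demanding a one-word answer
    'For sentiment classification, summarize "{s}" in one word:',
    'For topic identification, summarize "{s}" in one word:',
    'To find a paraphrase, summarize "{s}" in one word:',
]

def meta_eol_embedding(sentence: str) -> torch.Tensor:
    vecs = []
    for p in meta_prompts:
        ids = tok(p.format(s=sentence), return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        vecs.append(out.hidden_states[-1][0, -1])   # last token, last layer
    return torch.stack(vecs).mean(dim=0)            # average across meta-tasks

emb = meta_eol_embedding("A cat sat on the mat.")
print(emb.shape)   # e.g. torch.Size([768]) for gpt2
```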
NewsQs: Multi-Source Question Generation for the Inquiring Mind | We present NewsQs (news-cues), a dataset that provides question-answer pairs
for multiple news documents. To create NewsQs, we augment a traditional
multi-document summarization dataset with questions automatically generated by
a T5-Large model fine-tuned on FAQ-style news articles from the News On the Web
corpus. We show that fine-tuning a model with control codes produces questions
that are judged acceptable more often than the same model without them as
measured through human evaluation. We use a QNLI model with high correlation
with human annotations to filter our data. We release our final dataset of
high-quality questions, answers, and document clusters as a resource for future
work in query-based multi-document summarization.
| 2024 | Computation and Language |
Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware
Classification | Employing Large Language Models (LLM) in various downstream applications such
as classification is crucial, especially for smaller companies lacking the
expertise and resources required for fine-tuning a model. Fairness in LLMs
helps ensure inclusivity and equal representation across factors such as race
and gender, and promotes responsible AI deployment. As the use of LLMs has become
increasingly prevalent, it is essential to assess whether LLMs can generate
fair outcomes when subjected to considerations of fairness. In this study, we
introduce a framework outlining fairness regulations aligned with various
fairness definitions, with each definition being modulated by varying degrees
of abstraction. We explore the configuration for in-context learning and the
procedure for selecting in-context demonstrations using RAG, while
incorporating fairness rules into the process. Experiments conducted with
different LLMs indicate that GPT-4 delivers superior results in terms of both
accuracy and fairness compared to other models. This work is one of the early
attempts to achieve fairness in prediction tasks by utilizing LLMs through
in-context learning.
| 2024 | Computation and Language |
Large Language Models and Games: A Survey and Roadmap | Recent years have seen an explosive increase in research on large language
models (LLMs), and accompanying public engagement on the topic. While starting
as a niche area within natural language processing, LLMs have shown remarkable
potential across a broad range of applications and domains, including games.
This paper surveys the current state of the art across the various applications
of LLMs in and for games, and identifies the different roles LLMs can take
within a game. Importantly, we discuss underexplored areas and promising
directions for future uses of LLMs in games and we reconcile the potential and
limitations of LLMs within the games domain. As the first comprehensive survey
and roadmap at the intersection of LLMs and games, we are hopeful that this
paper will serve as the basis for groundbreaking research and innovation in
this exciting new field.
| 2024 | Computation and Language |
FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability | This paper presents FoFo, a pioneering benchmark for evaluating large
language models' (LLMs) ability to follow complex, domain-specific formats, a
crucial yet underexamined capability for their application as AI agents.
Despite LLMs' advancements, existing benchmarks fail to assess their
format-following proficiency adequately. FoFo fills this gap with a diverse
range of real-world formats and instructions, developed through an AI-Human
collaborative method. Our evaluation across both open-source (e.g., Llama 2,
WizardLM) and closed-source (e.g., GPT-4, PALM2, Gemini) LLMs highlights three
key findings: open-source models significantly lag behind closed-source ones in
format adherence; LLMs' format-following performance is independent of their
content generation quality; and LLMs' format proficiency varies across
different domains. These insights suggest the need for specialized tuning for
format-following skills and highlight FoFo's role in guiding the selection of
domain-specific AI agents. FoFo is released here at
https://github.com/SalesforceAIResearch/FoFo.
| 2,024 | Computation and Language |
Simple linear attention language models balance the recall-throughput
tradeoff | Recent work has shown that attention-based language models excel at recall,
the ability to ground generations in tokens previously seen in context.
However, the efficiency of attention-based models is bottlenecked during
inference by the KV-cache's aggressive memory consumption. In this work, we
explore whether we can improve language model efficiency (e.g. by reducing
memory consumption) without compromising on recall. By applying experiments and
theory to a broad set of architectures, we identify a key tradeoff between a
model's state size and recall ability. We show that efficient alternatives to
attention (e.g. H3, Mamba, RWKV) maintain a fixed-size recurrent state, but
struggle at recall. We propose BASED, a simple architecture combining linear and
sliding window attention. By varying the BASED window size and linear attention
feature dimension, we can dial the state size and traverse the Pareto frontier
of the recall-memory tradeoff curve, recovering the full quality of attention
on one end and the small state size of attention-alternatives on the other. We
train language models up to 1.3b parameters and show that BASED matches the
strongest sub-quadratic models (e.g. Mamba) in perplexity and outperforms them
on real-world recall-intensive tasks by 6.22 accuracy points. Implementations
of linear attention are often less efficient than optimized standard attention
implementations. To make BASED competitive, we develop IO-aware algorithms that
enable 24x higher throughput on language generation than FlashAttention-2, when
generating 1024 tokens using 1.3b parameter models. Code for this work is
provided at: https://github.com/HazyResearch/based.
| 2,024 | Computation and Language |
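To make the recall-memory tradeoff concrete, the following toy sketch (our illustration, not the paper's implementation) combines the two attention branches the abstract names: causal linear attention, whose running-sum state is fixed-size, and sliding-window softmax attention, which is exact but local. All sizes and the additive mixing rule are assumptions.

```python
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v):
    # Feature map phi(x) = elu(x) + 1 keeps scores positive (a common choice).
    q, k = F.elu(q) + 1, F.elu(k) + 1
    # Prefix sums give a fixed-size state instead of a growing KV-cache.
    kv = torch.cumsum(torch.einsum("tnd,tne->tnde", k, v), dim=0)  # (T, H, D, D)
    z = torch.cumsum(k, dim=0)                                     # (T, H, D)
    num = torch.einsum("tnd,tnde->tne", q, kv)
    den = torch.einsum("tnd,tnd->tn", q, z).clamp(min=1e-6)
    return num / den.unsqueeze(-1)

def sliding_window_attention(q, k, v, window=64):
    T = q.shape[0]
    scores = torch.einsum("tnd,snd->nts", q, k) / q.shape[-1] ** 0.5
    i = torch.arange(T)
    # Mask future tokens and tokens older than the window.
    mask = (i[:, None] < i[None, :]) | (i[:, None] - i[None, :] >= window)
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.einsum("nts,snd->tnd", scores.softmax(-1), v)

T, H, D = 128, 4, 32
q, k, v = (torch.randn(T, H, D) for _ in range(3))
out = causal_linear_attention(q, k, v) + sliding_window_attention(q, k, v)
print(out.shape)  # torch.Size([128, 4, 32])
```

Growing `window` or the feature dimension enlarges the state and should improve recall; shrinking them recovers the small-state regime, per the tradeoff the abstract describes.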
RORA: Robust Free-Text Rationale Evaluation | Free-text rationales play a pivotal role in explainable NLP, bridging the
knowledge and reasoning gaps behind a model's decision-making. However, due to
the diversity of potential reasoning paths and a corresponding lack of
definitive ground truth, their evaluation remains a challenge. Existing
evaluation metrics rely on the degree to which a rationale supports a target
label, but we find these fall short in evaluating rationales that inadvertently
leak the labels. To address this problem, we propose RORA, a Robust free-text
Rationale evaluation against label leakage. RORA quantifies the new information
supplied by a rationale to justify the label. This is achieved by assessing the
conditional V-information \citep{hewitt-etal-2021-conditional} with a
predictive family robust against leaky features that can be exploited by a
small model. RORA consistently outperforms existing approaches in evaluating
human-written, synthetic, or model-generated rationales, particularly
demonstrating robustness against label leakage. We also show that RORA aligns
well with human judgment, providing a more reliable and accurate measurement
across diverse free-text rationales.
| 2,024 | Computation and Language |
Learning to Compress Prompt in Natural Language Formats | Large language models (LLMs) excel at a wide range of natural
language processing tasks, but their abilities are constrained by degraded
performance on long contexts, slow inference speed, and high computational
cost. Deploying LLMs with precise and informative context
helps users process large-scale datasets more effectively and cost-efficiently.
Existing works rely on compressing long prompt contexts into soft prompts.
However, soft prompt compression encounters limitations in transferability
across different LLMs, especially API-based LLMs. To this end, this work aims
to compress lengthy prompts in the form of natural language with LLM
transferability. This poses two challenges: (i) Natural Language (NL) prompts
are incompatible with back-propagation, and (ii) NL prompts lack flexibility in
imposing length constraints. In this work, we propose a Natural Language Prompt
Encapsulation (Nano-Capsulator) framework compressing original prompts into NL
formatted Capsule Prompt while maintaining the prompt utility and
transferability. Specifically, to tackle the first challenge, the
Nano-Capsulator is optimized by a reward function that interacts with the
proposed semantics-preserving loss. To address the second challenge, the
Nano-Capsulator is optimized by a reward function featuring length constraints.
Experimental results demonstrate that the Capsule Prompt can reduce 81.4% of
the original length, decrease inference latency up to 4.5x, and save 80.1% of
budget overheads while providing transferability across diverse LLMs and
different datasets.
| 2,024 | Computation and Language |
Fine-Tuned Machine Translation Metrics Struggle in Unseen Domains | We introduce a new, extensive multidimensional quality metrics (MQM)
annotated dataset covering 11 language pairs in the biomedical domain. We use
this dataset to investigate whether machine translation (MT) metrics which are
fine-tuned on human-generated MT quality judgements are robust to domain shifts
between training and inference. We find that fine-tuned metrics exhibit a
substantial performance drop in the unseen domain scenario relative to metrics
that rely on the surface form, as well as pre-trained metrics which are not
fine-tuned on MT quality judgments.
| 2,024 | Computation and Language |
How Much Annotation is Needed to Compare Summarization Models? | Modern instruction-tuned models have become highly capable in text generation
tasks such as summarization, and are expected to be released at a steady pace.
In practice one may now wish to choose confidently, but with minimal effort,
the best performing summarization model when applied to a new domain or
purpose. In this work, we empirically investigate the test sample size
necessary to select a preferred model in the context of news summarization.
Empirical results reveal that comparative evaluation converges quickly for both
automatic and human evaluation, with clear preferences for a system emerging
from under 100 examples. The human preference data allows us to quantify how
well automatic scores can reproduce preference rankings across a variety of
downstream summarization tasks. We find that, while automatic metrics are
stable at smaller sample sizes, only some automatic metrics are able to
moderately predict model win rates according to human preference.
| 2,024 | Computation and Language |
Advancing Generative AI for Portuguese with Open Decoder Gerv\'asio PT* | To advance the neural decoding of Portuguese, in this paper we present a
fully open Transformer-based, instruction-tuned decoder model that sets a new
state of the art in this respect. To develop this decoder, which we named
Gerv\'asio PT*, a strong LLaMA~2 7B model was used as a starting point, and its
further improvement through additional training was done over language
resources that include new instruction data sets of Portuguese prepared for
this purpose, which are also contributed in this paper. All versions of
Gerv\'asio are open source and distributed for free under an open license,
including for either research or commercial usage, and can be run on
consumer-grade hardware, thus seeking to contribute to the advancement of
research and innovation in language technology for Portuguese.
| 2,024 | Computation and Language |
On the Decision-Making Abilities in Role-Playing using Large Language
Models | Large language models (LLMs) are now increasingly utilized for role-playing
tasks, especially in impersonating domain-specific experts, primarily through
role-playing prompts. When interacting in real-world scenarios, the
decision-making abilities of a role significantly shape its behavioral
patterns. In this paper, we concentrate on evaluating the decision-making
abilities of LLMs post role-playing, thereby validating the efficacy of
role-playing. Our goal is to provide metrics and guidance for enhancing the
decision-making abilities of LLMs in role-playing tasks. Specifically, we first
use LLMs to generate virtual role descriptions corresponding to the 16
personality types of Myers-Briggs Type Indicator (abbreviated as MBTI)
representing a segmentation of the population. Then we design specific
quantitative operations to evaluate the decision-making abilities of LLMs post
role-playing from four aspects: adaptability, exploration$\&$exploitation
trade-off ability, reasoning ability, and safety. Finally, we analyze the
association between the performance of decision-making and the corresponding
MBTI types through GPT-4. Extensive experiments demonstrate stable differences
in the four aspects of decision-making abilities across distinct roles,
signifying a robust correlation between decision-making abilities and the roles
emulated by LLMs. These results underscore that LLMs can effectively
impersonate varied roles while embodying their genuine sociological
characteristics.
| 2,024 | Computation and Language |
How do Large Language Models Handle Multilingualism? | Large language models (LLMs) demonstrate remarkable performance across a
spectrum of languages. In this work, we delve into the question: How do LLMs
handle multilingualism? We introduce a framework that depicts LLMs' processing
of multilingual inputs: In the first several layers, LLMs understand the
question, converting multilingual inputs into English to facilitate the
task-solving phase. In the intermediate layers, LLMs engage in problem-solving
by thinking in English and incorporating multilingual knowledge to obtain
factual content, leveraging the self-attention and feed-forward structures,
respectively. In the last several layers, LLMs generate responses that align
with the original language of the query. In addition, we investigate the
existence of language-specific neurons when processing a certain language. To
detect neurons activated by the input language, even without labels, we
design a Parallel Language-specific Neuron Detection
($\texttt{PLND}$) method that effectively measures the significance of neurons
when handling multilingual inputs. By comprehensive ablation analysis through
deactivating neurons of different layers and structures, we verify the
framework that we propose. Additionally, we demonstrate that we can utilize
such a framework to effectively enhance the multilingual ability with much less
training effort.
| 2,024 | Computation and Language |
Utilizing Local Hierarchy with Adversarial Training for Hierarchical
Text Classification | Hierarchical text classification (HTC) is a challenging subtask of
multi-label classification due to its complex taxonomic structure. Nearly all
recent HTC works focus on how the labels are structured but ignore the
sub-structure of the ground-truth labels for each input text, which
contains rich label co-occurrence information. In this work, we introduce
this local hierarchy with an adversarial framework. We propose HiAdv, a
framework that can fit nearly all HTC models and optimize them with the
local hierarchy as auxiliary information. We test on two typical HTC models and
find that HiAdv is effective in all scenarios and is adept at dealing with
complex taxonomic hierarchies. Further experiments demonstrate that the
promotion of our framework indeed comes from the local hierarchy and the local
hierarchy is beneficial for rare classes which have insufficient training data.
| 2,024 | Computation and Language |
When does word order matter and when doesn't it? | Language models (LMs) may appear insensitive to word order changes in natural
language understanding (NLU) tasks. In this paper, we propose that linguistic
redundancy can explain this phenomenon, whereby word order and other linguistic
cues such as case markers provide overlapping and thus redundant information.
Our hypothesis is that models exhibit insensitivity to word order when the
order provides redundant information, and the degree of insensitivity varies
across tasks. We quantify how informative word order is using mutual
information (MI) between unscrambled and scrambled sentences. Our results show
that the less informative word order is, the more consistent the
model's predictions are between unscrambled and scrambled sentences. We also
find that the effect varies across tasks: for some tasks, like SST-2, LMs'
predictions are almost always consistent with the original ones even if the
Pointwise-MI (PMI) changes, while for others, like RTE, the consistency is near
random when the PMI gets lower, i.e., word order is really important.
| 2,024 | Computation and Language |
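One plausible way to operationalize the PMI measurement above is to compare an autoregressive LM's log-probability of the original sentence with and without its scrambled version as conditioning context. The estimator below is our assumption for illustration, not the paper's exact recipe.

```python
import random
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(text, prefix=""):
    # Sum of token log-probs of `text`, optionally conditioned on `prefix`.
    full = tok(prefix + text, return_tensors="pt").input_ids
    n_prefix = tok(prefix, return_tensors="pt").input_ids.shape[1] if prefix else 0
    with torch.no_grad():
        logits = model(full).logits
    logp = logits.log_softmax(-1)[0, :-1].gather(1, full[0, 1:, None]).squeeze(1)
    return logp[max(n_prefix - 1, 0):].sum().item()

random.seed(0)
sentence = "the cat sat on the mat"
words = sentence.split()
random.shuffle(words)
scrambled = " ".join(words)
# PMI(original; scrambled) ~ log p(orig | scrambled) - log p(orig)
pmi = log_prob(sentence, prefix=scrambled + " . ") - log_prob(sentence)
print(f"PMI estimate: {pmi:.2f} nats")
```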
Reducing Hallucinations in Entity Abstract Summarization with
Facts-Template Decomposition | Entity abstract summarization aims to generate a coherent description of a
given entity based on a set of relevant Internet documents. Pretrained language
models (PLMs) have achieved significant success in this task, but they may
suffer from hallucinations, i.e. generating non-factual information about the
entity. To address this issue, we decompose the summary into two components:
Facts that represent the factual information about the given entity, which PLMs
are prone to fabricate; and Template that comprises generic content with
designated slots for facts, which PLMs can generate competently. Based on the
facts-template decomposition, we propose SlotSum, an explainable framework for
entity abstract summarization. SlotSum first creates the template and then
predicts the fact for each template slot based on the input documents.
Benefiting from our facts-template decomposition, SlotSum can easily locate
errors and further rectify hallucinated predictions with external knowledge. We
construct a new dataset WikiFactSum to evaluate the performance of SlotSum.
Experimental results demonstrate that SlotSum can generate summaries that are
significantly more factual with credible external knowledge.
| 2,024 | Computation and Language |
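A toy illustration of the facts-template decomposition described above; the template, slot names, and fact values are all invented for this sketch.

```python
# The template carries generic phrasing; slot values are predicted separately
# and can be corrected against external knowledge before filling.
template = "[NAME] is a [OCCUPATION] born on [BIRTH_DATE] in [BIRTH_PLACE]."
predicted_facts = {
    "NAME": "Jane Doe",
    "OCCUPATION": "marine biologist",
    "BIRTH_DATE": "12 March 1975",
    "BIRTH_PLACE": "Lisbon",
}

summary = template
for slot, value in predicted_facts.items():
    summary = summary.replace(f"[{slot}]", value)
print(summary)
# Hallucinated slot values can be rectified by overwriting predicted_facts
# with entries from a trusted knowledge base before filling the template.
```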
Principal Component Analysis as a Sanity Check for Bayesian
Phylolinguistic Reconstruction | Bayesian approaches to reconstructing the evolutionary history of languages
rely on the tree model, which assumes that these languages descended from a
common ancestor and underwent modifications over time. However, this assumption
can be violated to different extents due to contact and other factors.
Understanding the degree to which this assumption is violated is crucial for
validating the accuracy of phylolinguistic inference. In this paper, we propose
a simple sanity check: projecting a reconstructed tree onto a space generated
by principal component analysis. By using both synthetic and real data, we
demonstrate that our method effectively visualizes anomalies, particularly in
the form of jogging.
| 2,024 | Computation and Language |
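A minimal sketch of the proposed sanity check with made-up data: project languages (rows of a binary trait matrix) into PCA space and draw the reconstructed tree's edges on top. Edges that zig-zag across the plane would flag potential violations of the tree model; the trait matrix and edges below are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
languages = ["A", "B", "C", "D", "E"]
traits = rng.integers(0, 2, size=(5, 40))      # hypothetical binary traits
coords = PCA(n_components=2).fit_transform(traits)

# Hypothetical reconstructed tree, as edges between observed languages.
edges = [("A", "B"), ("A", "C"), ("C", "D"), ("C", "E")]
idx = {lang: i for i, lang in enumerate(languages)}

plt.scatter(coords[:, 0], coords[:, 1])
for lang, (x, y) in zip(languages, coords):
    plt.annotate(lang, (x, y))
for u, v in edges:
    (x0, y0), (x1, y1) = coords[idx[u]], coords[idx[v]]
    plt.plot([x0, x1], [y0, y1], "k--", linewidth=0.8)
plt.xlabel("PC1"); plt.ylabel("PC2")
plt.savefig("pca_tree_check.png")
```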
Updating Language Models with Unstructured Facts: Towards Practical
Knowledge Editing | Knowledge editing aims to inject knowledge updates into language models to
keep them correct and up-to-date. However, its current evaluation strategies
are notably impractical: they solely update with well-curated structured facts
(triplets with subjects, relations, and objects), whereas real-world knowledge
updates commonly emerge in unstructured texts like news articles. In this
paper, we propose a new benchmark, Unstructured Knowledge Editing (UKE). It
evaluates editing performance directly using unstructured texts as knowledge
updates, termed unstructured facts. Hence UKE avoids the laborious construction
of structured facts and enables efficient and responsive knowledge editing,
becoming a more practical benchmark. We conduct extensive experiments on newly
built datasets and demonstrate that UKE poses a significant challenge to
state-of-the-art knowledge editing methods, resulting in their critical
performance declines. We further show that this challenge persists even if we
extract triplets as structured facts. Our analysis discloses key insights to
motivate future research in UKE for more practical knowledge editing.
| 2,024 | Computation and Language |
AdaMergeX: Cross-Lingual Transfer with Large Language Models via
Adaptive Adapter Merging | As an effective alternative to the direct fine-tuning on target tasks in
specific languages, cross-lingual transfer addresses the challenges of limited
training data by decoupling ''task ability'' and ''language ability'' by
fine-tuning on the target task in the source language and another selected task
in the target language, respectively. However, they fail to fully separate the
task ability from the source language or the language ability from the chosen
task. In this paper, we acknowledge the mutual reliance between task ability
and language ability and direct our attention toward the gap between the target
language and the source language on tasks. As the gap removes the impact of
tasks, we assume that it remains consistent across tasks. Based on this
assumption, we propose a new cross-lingual transfer method called
$\texttt{AdaMergeX}$ that utilizes adaptive adapter merging. By introducing a
reference task, we can determine that the divergence of adapters fine-tuned on
the reference task in both languages follows the same distribution as the
divergence of adapters fine-tuned on the target task in both languages. Hence,
we can obtain target adapters by combining the other three adapters.
Furthermore, we propose a structure-adaptive adapter merging method. Our
empirical results demonstrate that our approach yields new and effective
cross-lingual transfer, outperforming existing methods across all settings.
| 2,024 | Computation and Language |
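A weight-space sketch of the abstract's core idea: the target-task/target-language adapter is recovered from the other three. The paper's actual merging is structure-adaptive; the simple additive rule below is our illustrative assumption.

```python
import torch

def merge_adapters(task_src, ref_tgt, ref_src):
    # target-task/target-lang = target-task/source-lang
    #   + (reference-task/target-lang - reference-task/source-lang)
    return {name: task_src[name] + (ref_tgt[name] - ref_src[name])
            for name in task_src}

shape = (8, 768)  # hypothetical LoRA-style adapter weights
task_src = {"lora_A": torch.randn(shape)}
ref_tgt = {"lora_A": torch.randn(shape)}
ref_src = {"lora_A": torch.randn(shape)}
task_tgt = merge_adapters(task_src, ref_tgt, ref_src)
print(task_tgt["lora_A"].shape)
```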
Inappropriate Pause Detection In Dysarthric Speech Using Large-Scale
Speech Recognition | Dysarthria, a common issue among stroke patients, severely impacts speech
intelligibility. Inappropriate pauses are crucial indicators in severity
assessment and speech-language therapy. We propose to extend a large-scale
speech recognition model for inappropriate pause detection in dysarthric
speech. To this end, we propose task design, labeling strategy, and a speech
recognition model with an inappropriate pause prediction layer. First, we treat
pause detection as speech recognition, using an automatic speech recognition
(ASR) model to convert speech into text with pause tags. According to the newly
designed task, we label pause locations at the text level and their
appropriateness. We collaborate with speech-language pathologists to establish
labeling criteria, ensuring high-quality annotated data. Finally, we extend the
ASR model with an inappropriate pause prediction layer for end-to-end
inappropriate pause detection. Moreover, we propose a task-tailored metric for
evaluating inappropriate pause detection independent of ASR performance. Our
experiments show that the proposed method better detects inappropriate pauses
in dysarthric speech than baselines. (Inappropriate Pause Error Rate: 14.47%)
| 2,024 | Computation and Language |
SemEval 2024 -- Task 10: Emotion Discovery and Reasoning its Flip in
Conversation (EDiReF) | We present SemEval-2024 Task 10, a shared task centred on identifying
emotions and finding the rationale behind their flips within monolingual
English and Hindi-English code-mixed dialogues. This task comprises three
distinct subtasks - emotion recognition in conversation for code-mixed
dialogues, emotion flip reasoning for code-mixed dialogues, and emotion flip
reasoning for English dialogues. Participating systems were tasked to
automatically execute one or more of these subtasks. The datasets for these
tasks comprise manually annotated conversations focusing on emotions and
triggers for emotion shifts (The task data is available at
https://github.com/LCS2-IIITD/EDiReF-SemEval2024.git). A total of 84
participants engaged in this task, with the most adept systems attaining
F1-scores of 0.70, 0.79, and 0.76 for the respective subtasks. This paper
summarises the results and findings from 24 teams alongside their system
descriptions.
| 2,024 | Computation and Language |
PopALM: Popularity-Aligned Language Models for Social Media Trendy
Response Prediction | Social media platforms exhibit millions of events every day. To
preliminarily predict the mainstream public reaction to these events, we study
trendy response prediction to automatically generate top-liked user replies to
social media events. While previous works focus on generating responses without
factoring in popularity, we propose Popularity-Aligned Language Models (PopALM)
to distinguish responses liked by a larger audience through reinforcement
learning. Recognizing the noisy labels from user "likes", we tailor-make
curriculum learning in proximal policy optimization (PPO) to help models
capture the essential samples for easy-to-hard training. In experiments, we
build a large-scale Weibo dataset for trendy response prediction, and its
results show that PopALM can help boost the performance of advanced language
models.
| 2,024 | Computation and Language |
Exploring the Efficacy of Large Language Models in Summarizing Mental
Health Counseling Sessions: A Benchmark Study | Comprehensive summaries of sessions enable an effective continuity in mental
health counseling, facilitating informed therapy planning. Yet, manual
summarization presents a significant challenge, diverting experts' attention
from the core counseling process. This study evaluates the effectiveness of
state-of-the-art Large Language Models (LLMs) in selectively summarizing
various components of therapy sessions through aspect-based summarization,
aiming to benchmark their performance. We introduce MentalCLOUDS, a
counseling-component guided summarization dataset consisting of 191 counseling
sessions with summaries focused on three distinct counseling components (aka
counseling aspects). Additionally, we assess the capabilities of 11
state-of-the-art LLMs in addressing the task of component-guided summarization
in counseling. The generated summaries are evaluated quantitatively using
standard summarization metrics and verified qualitatively by mental health
professionals. Our findings demonstrate the superior performance of
task-specific LLMs such as MentalLlama, Mistral, and MentalBART in terms of
standard quantitative metrics such as Rouge-1, Rouge-2, Rouge-L, and BERTScore
across all aspects of counseling components. Further, expert evaluation reveals
that Mistral outperforms both MentalLlama and MentalBART based on six parameters
-- affective attitude, burden, ethicality, coherence, opportunity costs, and
perceived effectiveness. However, these models share the same weakness: all show
room for improvement on the opportunity costs and
perceived effectiveness parameters.
| 2,024 | Computation and Language |
Pointing out the Shortcomings of Relation Extraction Models with
Semantically Motivated Adversarials | In recent years, large language models have achieved state-of-the-art
performance across various NLP tasks. However, investigations have shown that
these models tend to rely on shortcut features, leading to inaccurate
predictions and causing the models to be unreliable at generalization to
out-of-distribution (OOD) samples. For instance, in the context of relation
extraction (RE), we would expect a model to identify the same relation
independently of the entities involved in it. For example, consider the
sentence "Leonardo da Vinci painted the Mona Lisa" expressing the
created(Leonardo_da_Vinci, Mona_Lisa) relation. If we substitute "Leonardo da
Vinci" with "Barack Obama", then the sentence still expresses the created
relation. A robust model is supposed to detect the same relation in both cases.
In this work, we describe several semantically-motivated strategies to generate
adversarial examples by replacing entity mentions and investigate how
state-of-the-art RE models perform under pressure. Our analyses show that the
performance of these models significantly deteriorates on the modified datasets
(avg. of -48.5% in F1), which indicates that these models rely to a great
extent on shortcuts, such as surface forms (or patterns therein) of entities,
without making full use of the information present in the sentences.
| 2,024 | Computation and Language |
Controllable Preference Optimization: Toward Controllable
Multi-Objective Alignment | Alignment in artificial intelligence pursues the consistency between model
responses and human preferences as well as values. In practice, the
multifaceted nature of human preferences inadvertently introduces what is known
as the "alignment tax" -a compromise where enhancements in alignment within one
objective (e.g.,harmlessness) can diminish performance in others
(e.g.,helpfulness). However, existing alignment techniques are mostly
unidirectional, leading to suboptimal trade-offs and poor flexibility over
various objectives. To navigate this challenge, we argue the prominence of
grounding LLMs with evident preferences. We introduce controllable preference
optimization (CPO), which explicitly specifies preference scores for different
objectives, thereby guiding the model to generate responses that meet the
requirements. Our experimental analysis reveals that the aligned models can
provide responses that match various preferences among the "3H" (helpfulness,
honesty, harmlessness) desiderata. Furthermore, by introducing diverse data and
alignment goals, we surpass baseline methods in aligning with single
objectives, hence mitigating the impact of the alignment tax and achieving
Pareto improvements in multi-objective alignment.
| 2,024 | Computation and Language |
Survey in Characterization of Semantic Change | Live languages continuously evolve to integrate the cultural change of human
societies. This evolution manifests through neologisms (new words) or
\textbf{semantic changes} of words (new meaning to existing words).
Understanding the meaning of words is vital for interpreting texts coming from
different cultures (regionalism or slang), domains (e.g., technical terms), or
periods. In computer science, these words are relevant to computational
linguistics algorithms such as translation, information retrieval, question
answering, etc. Semantic changes can potentially impact the quality of the
outcomes of these algorithms. Therefore, it is important to understand and
characterize these changes formally. The study of this impact is a recent
problem that has attracted the attention of the computational linguistics
community. Several approaches propose methods to detect semantic changes with
good precision, but more effort is needed to characterize how the meaning of
words changes and to reason about how to reduce the impact of semantic change.
This survey provides an understandable overview of existing approaches to the
\textit{characterization of semantic changes} and also formally defines three
classes of characterizations: whether the meaning of a word becomes more general or
narrow (change in dimension); whether the word is used in a more pejorative or
positive/ameliorated sense (change in orientation); and whether there is a trend to
use the word in, for instance, a metaphoric or metonymic context (change in
relation). We summarize the main aspects of the selected publications in a
table and discussed the needs and trends in the research activities on semantic
change characterization.
| 2,024 | Computation and Language |
TEncDM: Understanding the Properties of Diffusion Model in the Space of
Language Model Encodings | Drawing inspiration from the success of diffusion models in various domains,
numerous research papers have proposed methods for adapting them to text data.
Despite these efforts, none of them has managed to achieve the quality of
large language models. In this paper, we conduct a comprehensive analysis of
key components of the text diffusion models and introduce a novel approach
named Text Encoding Diffusion Model (TEncDM). Instead of the commonly used
token embedding space, we train our model in the space of the language model
encodings. Additionally, we propose to use a Transformer-based decoder that
utilizes contextual information for text reconstruction. We also analyse
self-conditioning and find that it increases the magnitude of the model
outputs, allowing the reduction of the number of denoising steps at the
inference stage. Evaluation of TEncDM on two downstream text generation tasks,
QQP and XSum, demonstrates its superiority over existing non-autoregressive
models.
| 2,024 | Computation and Language |
Whispers that Shake Foundations: Analyzing and Mitigating False Premise
Hallucinations in Large Language Models | Large Language Models (LLMs) have shown impressive capabilities but still
suffer from the issue of hallucinations. A significant type of this issue is
the false premise hallucination, which we define as the phenomenon when LLMs
generate hallucinated text when confronted with false premise questions. In
this paper, we perform a comprehensive analysis of the false premise
hallucination and elucidate its internal working mechanism: a small subset of
attention heads (which we designate as false premise heads) disturb the
knowledge extraction process, leading to the occurrence of false premise
hallucination. Based on our analysis, we propose \textbf{FAITH} (\textbf{F}alse
premise \textbf{A}ttention head constra\textbf{I}ning for mi\textbf{T}igating
\textbf{H}allucinations), a novel and effective method to mitigate false
premise hallucinations. It constrains the false premise attention heads during
the model inference process. Impressively, extensive experiments demonstrate
that constraining only approximately $1\%$ of the attention heads in the model
yields a notable increase of nearly $20\%$ of model performance.
| 2,024 | Computation and Language |
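A sketch of the core intervention as we read the abstract: suppress a designated set of attention heads at inference time. Hugging Face models accept a `head_mask` (1 = keep, 0 = silence); identifying which heads are "false premise heads" is the paper's analysis, so the indices below are placeholders.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

false_premise_heads = [(3, 5), (7, 1)]  # hypothetical (layer, head) pairs
head_mask = torch.ones(model.config.n_layer, model.config.n_head)
for layer, head in false_premise_heads:
    head_mask[layer, head] = 0.0  # constrain this head during inference

inputs = tok("Why is the Great Wall visible from the Moon?", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, head_mask=head_mask)
print(out.logits.shape)
```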
How to Understand "Support"? An Implicit-enhanced Causal Inference
Approach for Weakly-supervised Phrase Grounding | Weakly-supervised Phrase Grounding (WPG) is an emerging task of inferring the
fine-grained phrase-region matching, while merely leveraging the coarse-grained
sentence-image pairs for training. However, existing studies on WPG largely
ignore the implicit phrase-region matching relations, which are crucial for
evaluating the capability of models in understanding the deep multimodal
semantics. To this end, this paper proposes an Implicit-Enhanced Causal
Inference (IECI) approach to address the challenges of modeling the implicit
relations and highlighting them beyond the explicit. Specifically, this
approach leverages both the intervention and counterfactual techniques to
tackle the above two challenges respectively. Furthermore, a high-quality
implicit-enhanced dataset is annotated to evaluate IECI and detailed
evaluations show the great advantages of IECI over the state-of-the-art
baselines. Particularly, we observe an interesting finding that IECI
outperforms the advanced multimodal LLMs by a large margin on this
implicit-enhanced dataset, which may facilitate more research to evaluate the
multimodal LLMs in this direction.
| 2,024 | Computation and Language |
Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale
Annotations | Rationales in the form of manually annotated input spans usually serve as
ground truth when evaluating explainability methods in NLP. They are, however,
time-consuming and often biased by the annotation process. In this paper, we
debate whether human gaze, in the form of webcam-based eye-tracking recordings,
poses a valid alternative when evaluating importance scores. We evaluate the
additional information provided by gaze data, such as total reading times, gaze
entropy, and decoding accuracy with respect to human rationale annotations. We
compare WebQAmGaze, a multilingual dataset for information-seeking QA, with
attention and explainability-based importance scores for 4 different
multilingual Transformer-based language models (mBERT, distil-mBERT, XLMR, and
XLMR-L) and 3 languages (English, Spanish, and German). Our pipeline can easily
be applied to other tasks and languages. Our findings suggest that gaze data
offers valuable linguistic insights that could be leveraged to infer task
difficulty and further show a comparable ranking of explainability methods to
that of human rationales.
| 2,024 | Computation and Language |
Teaching Large Language Models an Unseen Language on the Fly | Existing large language models struggle to support numerous low-resource
languages, particularly the extremely low-resource ones where there is minimal
training data available for effective parameter updating. We thus investigate
whether LLMs can learn a new language on the fly solely through prompting. To
study this question, we collect a research suite for Zhuang, a language
supported by no LLMs currently. We introduce \textsc{DiPMT++}, a framework for
adapting LLMs to unseen languages by in-context learning. Using a dictionary
and only 5K parallel sentences, \textsc{DiPMT++} significantly enhances the
performance of GPT-4 from 0 to 16 BLEU for Chinese-to-Zhuang translation and
achieves 32 BLEU for Zhuang-to-Chinese translation. Furthermore, we demonstrate
the practical utility of this framework in aiding humans to translate
completely unseen languages, which could contribute to the preservation of
linguistic diversity.
| 2,024 | Computation and Language |
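A schematic of dictionary-plus-exemplars prompting in the spirit of the abstract; the instruction wording, dictionary entries, and sentence pairs are placeholders, not the paper's actual prompt or data.

```python
def build_prompt(source_sentence, dictionary, parallel_examples):
    lines = ["Translate from Zhuang to Chinese.", "", "Dictionary hints:"]
    # Attach a gloss for every source word found in the dictionary.
    for word in source_sentence.split():
        if word in dictionary:
            lines.append(f"  {word} -> {dictionary[word]}")
    lines.append("")
    lines.append("Examples:")
    for src, tgt in parallel_examples:
        lines.append(f"  Zhuang: {src}\n  Chinese: {tgt}")
    lines.append("")
    lines.append(f"Zhuang: {source_sentence}\nChinese:")
    return "\n".join(lines)

dictionary = {"mwngz": "<gloss-1>", "ndei": "<gloss-2>"}     # placeholder entries
examples = [("<zhuang sentence>", "<chinese translation>")]  # placeholder pair
print(build_prompt("mwngz ndei", dictionary, examples))
```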
Improving Legal Judgement Prediction in Romanian with Long Text Encoders | In recent years, the entire field of Natural Language Processing (NLP) has
enjoyed impressive novel results, achieving almost human-like performance on a
variety of tasks. The Legal NLP domain has also been part of this process, as it
has seen impressive growth. However, general-purpose models are not readily
applicable to the legal domain. Due to the nature of the domain (e.g. specialized
vocabulary, long documents), specific models and methods are often needed for
Legal NLP. In this work we investigate both specialized and general models for
predicting the final ruling of a legal case, a task known as Legal Judgment
Prediction (LJP). We particularly focus on methods to extend the sequence length
of Transformer-based models to better understand the long documents present in
legal corpora. Extensive experiments on 4 LJP datasets in Romanian, originating
from 2 sources with significantly different sizes and document lengths, show
that specialized models and handling long texts are critical for a good
performance.
| 2,024 | Computation and Language |
PeLLE: Encoder-based language models for Brazilian Portuguese based on
open data | In this paper we present PeLLE, a family of large language models based on
the RoBERTa architecture, for Brazilian Portuguese, trained on curated, open
data from the Carolina corpus. Aiming at reproducible results, we describe
details of the pretraining of the models. We also evaluate PeLLE models against
a set of existing multilingual and PT-BR refined pretrained Transformer-based
LLM encoders, contrasting performance of large versus smaller-but-curated
pretrained models in several downstream tasks. We conclude that several tasks
perform better with larger models, but some tasks benefit from
smaller but curated pretraining data.
| 2,024 | Computation and Language |
Memory-Augmented Generative Adversarial Transformers | Conversational AI systems that rely on Large Language Models, like
Transformers, have difficulty interweaving external data (like facts) with the
language they generate. Vanilla Transformer architectures are not designed for
answering factual questions with high accuracy. This paper investigates a
possible route for addressing this problem. We propose to extend the standard
Transformer architecture with an additional memory bank holding extra
information (such as facts drawn from a knowledge base), and an extra attention
layer for addressing this memory. We add this augmented memory to a Generative
Adversarial Network-inspired Transformer architecture. This setup allows for
implementing arbitrary felicity conditions on the generated language of the
Transformer. We first demonstrate how this machinery can be deployed for
handling factual questions in goal-oriented dialogues. Secondly, we demonstrate
that our approach can be useful for applications like {\it style adaptation} as
well: the adaptation of utterances according to certain stylistic (external)
constraints, like social properties of human interlocutors in dialogues.
| 2,024 | Computation and Language |
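A minimal sketch of the memory extension described in the abstract: a standard self-attention block followed by an extra cross-attention layer that reads from a bank of fact embeddings. Sizes are illustrative and the GAN-style training loop is omitted.

```python
import torch
import torch.nn as nn

class MemoryAugmentedBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_memory=128):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Learnable memory bank; in practice it would hold encoded facts.
        self.memory = nn.Parameter(torch.randn(n_memory, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        h, _ = self.self_attn(x, x, x)
        x = self.norm1(x + h)
        mem = self.memory.unsqueeze(0).expand(x.shape[0], -1, -1)
        h, _ = self.mem_attn(x, mem, mem)  # extra layer addressing the memory
        return self.norm2(x + h)

block = MemoryAugmentedBlock()
print(block(torch.randn(2, 10, 256)).shape)  # torch.Size([2, 10, 256])
```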
Let LLMs Take on the Latest Challenges! A Chinese Dynamic Question
Answering Benchmark | How to better evaluate the capabilities of Large Language Models (LLMs) is
the focal point and hot topic in current LLMs research. Previous work has noted
that due to the extremely high cost of iterative updates of LLMs, they are
often unable to answer the latest dynamic questions well. To promote the
improvement of Chinese LLMs' ability to answer dynamic questions, in this
paper, we introduce CDQA, a Chinese Dynamic QA benchmark containing
question-answer pairs related to the latest news on the Chinese Internet. We
obtain high-quality data through a pipeline that combines humans and models,
and carefully classify the samples according to the frequency of answer changes
to facilitate a more fine-grained observation of LLMs' capabilities. We have
also evaluated and analyzed mainstream and advanced Chinese LLMs on CDQA.
Extensive experiments and valuable insights suggest that our proposed CDQA is
challenging and worthy of further study. We believe that the benchmark we
provide will become one of the key data resources for improving LLMs' Chinese
question-answering ability in the future.
| 2,024 | Computation and Language |
GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of
LLMs as Mathematical Problem Solvers | Large language models (LLMs) have achieved impressive performance across
various mathematical reasoning benchmarks. However, there are increasing
debates regarding whether these models truly understand and apply mathematical
knowledge or merely rely on shortcuts for mathematical reasoning. One essential
and frequently observed piece of evidence is that when math questions are slightly
changed, LLMs can behave incorrectly. This motivates us to evaluate the
robustness of LLMs' math reasoning capability by testing a wide range of
question variations. We introduce the adversarial grade school math
(\datasetname) dataset, an extension of GSM8K augmented with various
mathematical perturbations. Our experiments on 25 LLMs and 4 prompting
techniques show that while LLMs exhibit different levels of math reasoning
abilities, their performances are far from robust. In particular, even for
problems that have been solved in GSM8K, LLMs can make mistakes when new
statements are added or the question targets are altered. We also explore
whether more robust performance can be achieved by composing existing prompting
methods, in which we try an iterative method that generates and verifies each
intermediate thought based on its reasoning goal and calculation result. Code
and data are available at \url{https://github.com/qtli/GSM-Plus}.
| 2,024 | Computation and Language |
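An illustrative numeric-substitution perturbation in the spirit of the benchmark. The actual dataset uses several carefully designed perturbation types; this regex-based tweak is our simplification, and the question text is a placeholder.

```python
import random
import re

def perturb_numbers(question, rng):
    # Replace the first number in the question with a nearby value,
    # so a shortcut-reliant model may keep producing the old answer.
    def bump(match):
        value = int(match.group())
        return str(value + rng.randint(1, 9))
    return re.sub(r"\d+", bump, question, count=1)

rng = random.Random(0)
q = "Natalia sold clips to 48 of her friends in April. How many in total?"
print(perturb_numbers(q, rng))
```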
Robust Guidance for Unsupervised Data Selection: Capturing Perplexing
Named Entities for Domain-Specific Machine Translation | Employing extensive datasets enables the training of multilingual machine
translation models; however, these models often fail to accurately translate
sentences within specialized domains. Although obtaining and translating
domain-specific data incurs high costs, it is indispensable for high-quality
translations. Hence, finding the most 'effective' data with an unsupervised
setting becomes a practical strategy for reducing labeling costs. Recent
research indicates that this effective data could be found by selecting
'properly difficult data' based on its volume. This means the data should not
be excessively challenging or overly simplistic, especially if the amount of
data is limited. However, we found that establishing a criterion for
unsupervised data selection remains challenging, as the 'proper difficulty'
might vary based on the data domain being trained on. We introduce a novel
unsupervised data selection method, 'Capturing Perplexing Named Entities',
which adopts the maximum inference entropy in translated named entities as a
selection measure. The motivation was that named entities in domain-specific
data are considered the most complex portion of the data and should be
predicted with high confidence. When verified with the 'Korean-English Parallel
Corpus of Specialized Domains,' our method served as a robust guidance for
unsupervised data selection, in contrast to existing methods.
| 2,024 | Computation and Language |
PlanGPT: Enhancing Urban Planning with Tailored Language Model and
Efficient Retrieval | In the field of urban planning, general-purpose large language models often
struggle to meet the specific needs of planners. Tasks like generating urban
planning texts, retrieving related information, and evaluating planning
documents pose unique challenges. To enhance the efficiency of urban
professionals and overcome these obstacles, we introduce PlanGPT, the first
specialized Large Language Model tailored for urban and spatial planning.
Developed through collaborative efforts with institutions like the Chinese
Academy of Urban Planning, PlanGPT leverages a customized local database
retrieval framework, domain-specific fine-tuning of base models, and advanced
tooling capabilities. Empirical tests demonstrate that PlanGPT has achieved
advanced performance, delivering responses of superior quality precisely
tailored to the intricacies of urban planning.
| 2,024 | Computation and Language |
WanJuan-CC: A Safe and High-Quality Open-sourced English Webtext Dataset | This paper presents WanJuan-CC, a safe and high-quality open-sourced English
webtext dataset derived from Common Crawl data. The study addresses the
challenges of constructing large-scale pre-training datasets for language
models, which require vast amounts of high-quality data. A comprehensive
process was designed to handle Common Crawl data, including extraction,
heuristic rule filtering, fuzzy deduplication, content safety filtering, and
data quality filtering. From approximately 68 billion original English
documents, we obtained 2.22T Tokens of safe data and selected 1.0T Tokens of
high-quality data as part of WanJuan-CC. We have open-sourced 100B Tokens from
this dataset. The paper also provides statistical information related to data
quality, enabling users to select appropriate data according to their needs. To
evaluate the quality and utility of the dataset, we trained 1B-parameter and
3B-parameter models using WanJuan-CC and another dataset, RefinedWeb. Results
show that WanJuan-CC performs better on validation datasets and downstream
tasks.
| 2,024 | Computation and Language |
Compact Speech Translation Models via Discrete Speech Units Pretraining | Using Self-Supervised Learning (SSL) as model initialization is now common to
obtain strong results in Speech Translation (ST). However, SSL models also impose a
large memory footprint, hindering on-device deployment. In this paper, we
leverage the SSL models by pretraining smaller models on their Discrete Speech
Units (DSU). We pretrain encoder-decoder models on 1) Filterbank-to-DSU and 2)
DSU-to-Translation data, and take the encoder from 1) and the decoder from 2)
to initialise a new model, finetuning this on limited speech-translation data.
The final model becomes compact by using the DSU pretraining to distil the
knowledge of the SSL model. Our method has several benefits over using DSU as
model inputs, such as shorter inference pipeline and robustness over (DSU)
tokenization. In contrast to ASR pretraining, it does not require transcripts,
making it applicable to low-resource settings. Evaluation on CoVoST-2 X-En
shows that our method is >$0.5$ BLEU better than an ST model that directly
finetunes the SSL model, at only half the model size, and on a par with ASR
pretraining.
| 2,024 | Computation and Language |
Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge | The democratization of pre-trained language models through open-source
initiatives has rapidly advanced innovation and expanded access to cutting-edge
technologies. However, this openness also brings significant security risks,
including backdoor attacks, where hidden malicious behaviors are triggered by
specific inputs, compromising natural language processing (NLP) system
integrity and reliability. This paper suggests that merging a backdoored model
with other homogeneous models can remediate backdoor vulnerabilities even if
such models are not entirely secure. In our experiments, we explore various
models (BERT-Base, RoBERTa-Large, Llama2-7B, and Mistral-7B) and datasets
(SST-2, OLID, AG News, and QNLI). Compared to multiple advanced defensive
approaches, our method offers an effective and efficient inference-stage
defense against backdoor attacks without additional resources or specific
knowledge. Our approach consistently outperforms the other advanced baselines,
leading to an average of 75% reduction in the attack success rate. Since model
merging has been an established approach for improving model performance, the
extra advantage it provides regarding defense can be seen as a cost-free bonus.
| 2,024 | Computation and Language |
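A minimal sketch of the defense as described: uniformly average the weights of the (possibly backdoored) model with homogeneous models fine-tuned on other data. The checkpoint file names are placeholders.

```python
import torch

def merge_state_dicts(state_dicts):
    # Uniform parameter averaging across homogeneous checkpoints.
    merged = {}
    for name in state_dicts[0]:
        merged[name] = torch.stack([sd[name].float() for sd in state_dicts]).mean(0)
    return merged

# e.g., three homogeneous classifiers, one of which may be backdoored
paths = ["suspect.pt", "clean_a.pt", "clean_b.pt"]  # hypothetical files
state_dicts = [torch.load(p, map_location="cpu") for p in paths]
torch.save(merge_state_dicts(state_dicts), "merged.pt")
```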
Prompting Explicit and Implicit Knowledge for Multi-hop Question
Answering Based on Human Reading Process | Pre-trained language models (PLMs) leverage chains-of-thought (CoT) to
simulate human reasoning and inference processes, achieving proficient
performance in multi-hop QA. However, a gap persists between PLMs' reasoning
abilities and those of humans when tackling complex problems. Psychological
studies suggest a vital connection between explicit information in passages and
human prior knowledge during reading. Nevertheless, current research has given
insufficient attention to linking input passages and PLMs' pre-training-based
knowledge from the perspective of human cognition studies. In this study, we
introduce a Prompting Explicit and Implicit knowledge (PEI) framework, which
uses prompts to connect explicit and implicit knowledge, aligning with human
reading process for multi-hop QA. We consider the input passages as explicit
knowledge, employing them to elicit implicit knowledge through unified prompt
reasoning. Furthermore, our model incorporates type-specific reasoning via
prompts, a form of implicit knowledge. Experimental results show that PEI
performs comparably to the state-of-the-art on HotpotQA. Ablation studies
confirm the efficacy of our model in bridging and integrating explicit and
implicit knowledge.
| 2,024 | Computation and Language |
OpenMedLM: Prompt engineering can out-perform fine-tuning in medical
question-answering with open-source large language models | LLMs have become increasingly capable at accomplishing a range of
specialized tasks and can be utilized to expand equitable access to medical
knowledge. Most medical LLMs have involved extensive fine-tuning, leveraging
specialized medical data and significant, thus costly, amounts of computational
power. Many of the top performing LLMs are proprietary and their access is
limited to very few research groups. However, open-source (OS) models represent
a key area of growth for medical LLMs due to significant improvements in
performance and an inherent ability to provide the transparency and compliance
required in healthcare. We present OpenMedLM, a prompting platform which
delivers state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks.
We evaluated a range of OS foundation LLMs (7B-70B) on four medical benchmarks
(MedQA, MedMCQA, PubMedQA, MMLU medical-subset). We employed a series of
prompting strategies, including zero-shot, few-shot, chain-of-thought (random
selection and kNN selection), and ensemble/self-consistency voting. We found
that OpenMedLM delivers OS SOTA results on three common medical LLM benchmarks,
surpassing the previous best performing OS models that leveraged
computationally costly extensive fine-tuning. The model delivers a 72.6%
accuracy on the MedQA benchmark, outperforming the previous SOTA by 2.4%, and
achieves 81.7% accuracy on the MMLU medical-subset, establishing itself as the
first OS LLM to surpass 80% accuracy on this benchmark. Our results highlight
medical-specific emergent properties in OS LLMs which have not been
documented elsewhere to date, and showcase the benefits of further leveraging
prompt engineering to improve the performance of accessible LLMs for medical
applications.
| 2,024 | Computation and Language |
On the Scaling Laws of Geographical Representation in Language Models | Language models have long been shown to embed geographical information in
their hidden representations. This line of work has recently been revisited by
extending this result to Large Language Models (LLMs). In this paper, we
propose to fill the gap between well-established and recent literature by
observing how geographical knowledge evolves when scaling language models. We
show that geographical knowledge is observable even for tiny models, and that
it scales consistently as we increase the model size. Notably, we observe that
larger language models cannot mitigate the geographical bias that is inherent
to the training data.
| 2,024 | Computation and Language |
$\texttt{COSMIC}$: Mutual Information for Task-Agnostic Summarization
Evaluation | Assessing the quality of summarizers poses significant challenges. In
response, we propose a novel task-oriented evaluation approach that assesses
summarizers based on their capacity to produce summaries that are useful for
downstream tasks, while preserving task outcomes. We theoretically establish a
direct relationship between the resulting error probability of these tasks and
the mutual information between source texts and generated summaries. We
introduce $\texttt{COSMIC}$ as a practical implementation of this metric,
demonstrating its strong correlation with human judgment-based metrics and its
effectiveness in predicting downstream task performance. Comparative analyses
against established metrics like $\texttt{BERTScore}$ and $\texttt{ROUGE}$
highlight the competitive performance of $\texttt{COSMIC}$.
| 2,024 | Computation and Language |
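One way to operationalize the abstract's quantity is an InfoNCE-style lower bound on the mutual information between source and summary embeddings over a batch of N pairs (MI >= log N - loss). The encoder choice and this particular estimator are our assumptions, not necessarily the paper's implementation.

```python
import torch
import torch.nn.functional as F

def infonce_mi_lower_bound(src_emb, sum_emb, temperature=0.1):
    src = F.normalize(src_emb, dim=-1)
    summ = F.normalize(sum_emb, dim=-1)
    logits = src @ summ.T / temperature  # (N, N) similarity matrix
    labels = torch.arange(len(src))      # matching pairs sit on the diagonal
    loss = F.cross_entropy(logits, labels)
    return torch.log(torch.tensor(float(len(src)))) - loss

src_emb = torch.randn(64, 384)  # e.g., sentence-encoder embeddings of sources
sum_emb = src_emb + 0.1 * torch.randn(64, 384)  # toy correlated "summaries"
print(infonce_mi_lower_bound(src_emb, sum_emb).item())
```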
Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period
of Large Language Models | Ensuring the trustworthiness of large language models (LLMs) is crucial. Most
studies concentrate on fully pre-trained LLMs to better understand and improve
LLMs' trustworthiness. In this paper, to reveal the untapped potential of
pre-training, we pioneer the exploration of LLMs' trustworthiness during this
period, focusing on five key dimensions: reliability, privacy, toxicity,
fairness, and robustness. To begin with, we apply linear probing to LLMs. The
high probing accuracy suggests that \textit{LLMs in early pre-training can
already distinguish concepts in each trustworthiness dimension}. Therefore, to
further uncover the hidden possibilities of pre-training, we extract steering
vectors from an LLM's pre-training checkpoints to enhance the LLM's
trustworthiness. Finally, inspired by~\citet{choi2023understanding} that mutual
information estimation is bounded by linear probing accuracy, we also probe
LLMs with mutual information to investigate the dynamics of trustworthiness
during pre-training. We are the first to observe a similar two-phase
phenomenon: fitting and compression~\citep{shwartz2017opening}. This research
provides an initial exploration of trustworthiness modeling during LLM
pre-training, seeking to unveil new insights and spur further developments in
the field. We will make our code publicly accessible at
\url{https://github.com/ChnQ/TracingLLM}.
| 2,024 | Computation and Language |
TV-TREES: Multimodal Entailment Trees for Neuro-Symbolic Video Reasoning | It is challenging to perform question-answering over complex, multimodal
content such as television clips. This is in part because current
video-language models rely on single-modality reasoning, have lowered
performance on long inputs, and lack interpretability. We propose TV-TREES, the
first multimodal entailment tree generator. TV-TREES serves as an approach to
video understanding that promotes interpretable joint-modality reasoning by
producing trees of entailment relationships between simple premises directly
entailed by the videos and higher-level conclusions. We then introduce the task
of multimodal entailment tree generation to evaluate the reasoning quality of
such methods. Our method's experimental results on the challenging TVQA dataset
demonstrate interpretable, state-of-the-art zero-shot performance on full video
clips, illustrating a best-of-both-worlds contrast to black-box methods.
| 2,024 | Computation and Language |
Loose LIPS Sink Ships: Asking Questions in Battleship with
Language-Informed Program Sampling | Questions combine our mastery of language with our remarkable facility for
reasoning about uncertainty. How do people navigate vast hypothesis spaces to
pose informative questions given limited cognitive resources? We study these
tradeoffs in a classic grounded question-asking task based on the board game
Battleship. Our language-informed program sampling (LIPS) model uses large
language models (LLMs) to generate natural language questions, translate them
into symbolic programs, and evaluate their expected information gain. We find
that with a surprisingly modest resource budget, this simple Monte Carlo
optimization strategy yields informative questions that mirror human
performance across varied Battleship board scenarios. In contrast, LLM-only
baselines struggle to ground questions in the board state; notably, GPT-4V
provides no improvement over non-visual baselines. Our results illustrate how
Bayesian models of question-asking can leverage the statistics of language to
capture human priors, while highlighting some shortcomings of pure LLMs as
grounded reasoners.
| 2,024 | Computation and Language |
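A sketch of the expected-information-gain step: with a uniform prior over sampled board hypotheses and deterministic answers, the EIG of a question equals the entropy of its answer distribution. The hypothesis space and question programs below are toy stand-ins, not the paper's Battleship setup.

```python
import math
from collections import Counter

def expected_information_gain(question, hypotheses):
    # Entropy of the answer distribution under a uniform prior.
    answers = Counter(question(h) for h in hypotheses)
    total = sum(answers.values())
    return -sum((c / total) * math.log2(c / total) for c in answers.values())

# Toy hypothesis space: possible lengths of a hidden ship.
hypotheses = [2, 2, 3, 3, 3, 4, 5]
q_is_long = lambda ship: ship >= 4  # "Is the ship at least 4 tiles long?"
q_exact = lambda ship: ship         # "How long is the ship?"
print(expected_information_gain(q_is_long, hypotheses))  # ~0.86 bits
print(expected_information_gain(q_exact, hypotheses))    # ~1.84 bits
```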
Query-OPT: Optimizing Inference of Large Language Models via Multi-Query
Instructions in Meeting Summarization | This work focuses on the task of query-based meeting summarization in which
the summary of a context (meeting transcript) is generated in response to a
specific query. When using Large Language Models (LLMs) for this task, a new
call to the LLM inference endpoint/API is required for each new query even if
the context stays the same. However, repeated calls to the LLM inference
endpoints would significantly increase the costs of using them in production,
making LLMs impractical for many real-world use cases. To address this problem,
in this paper, we investigate whether combining the queries for the same input
context in a single prompt to minimize repeated calls can be successfully used
in meeting summarization. In this regard, we conduct extensive experiments by
comparing the performance of various popular LLMs: GPT-4, PaLM-2, LLaMA-2,
Mistral, and FLAN-T5 in single-query and multi-query settings. We observe that
while most LLMs tend to respond to the multi-query instructions, almost all of
them (except GPT-4), even after fine-tuning, could not properly generate the
response in the required output format. We conclude that while multi-query
prompting could be useful to optimize the inference costs by reducing calls to
the inference endpoints/APIs for the task of meeting summarization, this
capability to reliably generate the response in the expected format is
limited to certain LLMs.
| 2,024 | Computation and Language |
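A sketch of a multi-query instruction of the kind the paper studies: all queries for one transcript are packed into a single call with an explicit output schema. The instruction wording and schema are illustrative assumptions.

```python
import json

def build_multi_query_prompt(transcript, queries):
    schema = json.dumps({f"query_{i+1}": "<summary>" for i in range(len(queries))})
    numbered = "\n".join(f"{i+1}. {q}" for i, q in enumerate(queries))
    return (
        "Meeting transcript:\n"
        f"{transcript}\n\n"
        "Answer every query below with a summary of the transcript.\n"
        f"{numbered}\n\n"
        f"Respond with JSON exactly matching this schema: {schema}"
    )

queries = ["Summarize the budget discussion.", "List the action items."]
print(build_multi_query_prompt("<transcript text>", queries))
```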
Resonance RoPE: Improving Context Length Generalization of Large
Language Models | This paper addresses the challenge of train-short-test-long (TSTL) scenarios
in Large Language Models (LLMs) equipped with Rotary Position Embedding (RoPE),
where models pre-trained on shorter sequences face difficulty with
out-of-distribution (OOD) token positions in longer sequences. We introduce
Resonance RoPE, a novel approach designed to narrow the generalization gap in
TSTL scenarios by refining the interpolation of RoPE features for OOD
positions, significantly improving the model performance without additional
online computational costs. Furthermore, we present PosGen, a new synthetic
benchmark specifically designed for fine-grained behavior analysis in TSTL
scenarios, aiming to isolate the constantly increasing difficulty of token
generation on long contexts from the challenges of recognizing new token
positions. Our experiments on synthetic tasks show that after applying
Resonance RoPE, Transformers recognize OOD positions better and more robustly.
Our extensive LLM experiments also show superior performance after applying
Resonance RoPE to the current state-of-the-art RoPE scaling method, YaRN, on
both upstream language modeling tasks and a variety of downstream long-text
applications.
| 2,024 | Computation and Language |
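For readers unfamiliar with the failure mode being targeted, here is a plain-NumPy sketch of vanilla RoPE, not of Resonance RoPE itself: positions beyond the training length produce rotation angles the model never saw during training, which is the OOD gap the paper's refinement addresses.

```python
import numpy as np

def rope_angles(positions: np.ndarray, dim: int, base: float = 10000.0) -> np.ndarray:
    """Standard RoPE: feature pair i rotates with frequency base^(-2i/dim).
    Positions past the training length yield unseen angles (the TSTL gap)."""
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)  # (dim/2,)
    return np.outer(positions, inv_freq)              # (seq, dim/2)

def apply_rope(x: np.ndarray, angles: np.ndarray) -> np.ndarray:
    """Rotate consecutive feature pairs of x by position-dependent angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```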
PROC2PDDL: Open-Domain Planning Representations from Texts | Planning in a text-based environment continues to be a major challenge for AI
systems. Recent approaches have used language models to predict a planning
domain definition (e.g., PDDL) but have only been evaluated in closed-domain
simulated environments. To address this, we present Proc2PDDL, the first
dataset containing open-domain procedural texts paired with expert-annotated
PDDL representations. Using this dataset, we evaluate state-of-the-art models
on defining the preconditions and effects of actions. We show that Proc2PDDL is
highly challenging, with GPT-3.5's success rate close to 0% and GPT-4's around
35%. Our analysis shows both syntactic and semantic errors, indicating LMs'
deficiency in both generating domain-specific programs and reasoning about
events. We hope this analysis and dataset help future progress towards
integrating the best of LMs and formal planning.
| 2,024 | Computation and Language |
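The syntactic/semantic error split above suggests a two-stage evaluation. Below is a toy syntactic screen for a model-predicted PDDL action, with the caveat that a real evaluation would use a full PDDL parser or planner; this only catches shallow breakage.

```python
def check_pddl_action_syntax(action_str: str) -> list:
    """Crude syntactic screen for a predicted PDDL action block: balanced
    parentheses and the presence of the required top-level keywords."""
    errors = []
    depth = 0
    for ch in action_str:
        depth += (ch == "(") - (ch == ")")
        if depth < 0:
            errors.append("unbalanced ')'")
            break
    if depth > 0:
        errors.append("unclosed '('")
    for key in (":action", ":parameters", ":precondition", ":effect"):
        if key not in action_str:
            errors.append(f"missing {key}")
    return errors

# Toy predicted action for a procedural text about boiling water:
pred = "(:action boil-water :parameters (?p - pot) :precondition (has ?p) :effect (boiled ?p))"
print(check_pddl_action_syntax(pred))  # -> []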
FAC$^2$E: Better Understanding Large Language Model Capabilities by
Dissociating Language and Cognition | Large language models (LLMs) are primarily evaluated by overall performance
on various text understanding and generation tasks. However, such a paradigm
fails to comprehensively differentiate the fine-grained language and cognitive
skills, leaving LLMs' capabilities insufficiently interpreted.
In this paper, we present FAC$^2$E, a framework for Fine-grAined and
Cognition-grounded LLMs' Capability Evaluation. Specifically, we formulate
LLMs' evaluation in a multi-dimensional and explainable manner by dissociating
the language-related capabilities from the cognition-related ones. In addition,
through extracting the intermediate reasoning from LLMs, we further break down
the process of applying a specific capability into three sub-steps: recalling
relevant knowledge, utilizing knowledge, and solving problems. Finally,
FAC$^2$E evaluates each sub-step of each fine-grained capability, providing a
two-faceted diagnosis for LLMs. Utilizing FAC$^2$E, we identify a common
shortfall in knowledge utilization among models and propose a straightforward,
knowledge-enhanced method to mitigate this issue. Our results not only showcase
promising performance enhancements but also highlight a direction for future
LLM advancements.
| 2,024 | Computation and Language |
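A schematic reading of the three sub-steps, offered as our own reconstruction rather than the FAC$^2$E implementation: prompt for each sub-step separately and score each output with some judge. Both `llm` and `score` are hypothetical callables, and the prompt templates are illustrative.

```python
from typing import Callable, Dict

SUB_STEPS = {
    "recall":  "List the knowledge relevant to this problem:\n{q}",
    "utilize": "Given this knowledge:\n{k}\nExplain how it applies to:\n{q}",
    "solve":   "Using the reasoning above, answer:\n{q}",
}

def evaluate_capability(q: str, llm: Callable[[str], str],
                        score: Callable[[str, str], float]) -> Dict[str, float]:
    """Score recalling, utilizing, and solving separately, yielding a
    per-sub-step diagnosis instead of a single end-task number."""
    knowledge = llm(SUB_STEPS["recall"].format(q=q))
    applied = llm(SUB_STEPS["utilize"].format(k=knowledge, q=q))
    answer = llm(SUB_STEPS["solve"].format(q=q))
    return {name: score(name, out) for name, out in
            [("recall", knowledge), ("utilize", applied), ("solve", answer)]}
```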
Prompting ChatGPT for Translation: A Comparative Analysis of Translation
Brief and Persona Prompts | Prompt engineering in LLMs has shown potential for improving translation
quality. However, the potential of incorporating translation concepts in prompt
design remains largely underexplored. Against this backdrop, this paper
discusses the effectiveness of incorporating the conceptual tool of translation
brief and the personas of translator and author into prompt design for
translation tasks in ChatGPT. Findings suggest that, although certain elements
are constructive in facilitating human-to-human communication for translation
tasks, their effectiveness in improving translation quality in ChatGPT is
limited. This accentuates the need for more exploratory research on how
translation theorists and practitioners can develop the current set of
conceptual tools, rooted in the human-to-human communication paradigm, for
translation purposes in this emerging workflow involving human-machine
interaction.
| 2,024 | Computation and Language |
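To illustrate the kind of prompt design being compared, here is a sketch that composes an optional translation brief and translator persona into one prompt; the field names and wording are illustrative assumptions, not the paper's exact prompts.

```python
from typing import Dict, Optional

def build_translation_prompt(source_text: str, source_lang: str, target_lang: str,
                             brief: Optional[Dict[str, str]] = None,
                             persona: Optional[str] = None) -> str:
    """Compose a prompt from an optional persona and translation brief."""
    parts = []
    if persona:
        parts.append(f"You are {persona}.")
    if brief:
        parts.append("Translation brief:\n" +
                     "\n".join(f"- {k}: {v}" for k, v in brief.items()))
    parts.append(f"Translate the following {source_lang} text into {target_lang}:\n"
                 f"{source_text}")
    return "\n\n".join(parts)

# Example: persona plus a two-field brief (field names are illustrative).
prompt = build_translation_prompt(
    "<source text>", "Chinese", "English",
    brief={"audience": "general web readers", "purpose": "online publication"},
    persona="an experienced literary translator",
)
```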
EROS: Entity-Driven Controlled Policy Document Summarization | Privacy policy documents play a crucial role in educating individuals about
the collection, usage, and protection of users' personal data by organizations.
However, they are notorious for their lengthy, complex, and convoluted language,
especially involving privacy-related entities. Hence, they pose a significant
challenge to users who attempt to comprehend an organization's data usage policy.
In this paper, we propose to enhance the interpretability and readability of
policy documents by using controlled abstractive summarization -- we require
the generated summaries to include critical privacy-related entities (e.g.,
data and medium) and the organization's rationale (e.g., target and reason) in
collecting those entities. To achieve this, we develop PD-Sum, a
policy-document summarization dataset with marked privacy-related entity
labels. Our proposed model, EROS, identifies critical entities through a
span-based entity extraction model and employs them to control the information
content of the summaries using proximal policy optimization (PPO). Comparisons
show encouraging improvements over various baselines. Furthermore, we furnish
qualitative and human evaluations to establish the efficacy of EROS.
| 2,024 | Computation and Language |
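As a toy version of the control signal described above, the sketch below rewards a summary for covering the extracted privacy-critical entities; EROS's actual PPO reward is richer, so this only conveys the idea of entity-conditioned control.

```python
from typing import List

def entity_coverage_reward(summary: str, critical_entities: List[str]) -> float:
    """Fraction of the extracted privacy-critical entities that survive into
    the generated summary -- a simple reward a PPO policy could maximize."""
    if not critical_entities:
        return 1.0
    hits = sum(e.lower() in summary.lower() for e in critical_entities)
    return hits / len(critical_entities)

print(entity_coverage_reward(
    "We collect your email address for marketing.",
    ["email address", "marketing"],
))  # -> 1.0
```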
Ensemble-Based Unsupervised Discontinuous Constituency Parsing by Tree
Averaging | We address unsupervised discontinuous constituency parsing, where we observe
a high variance in the performance of the only previous model. We propose to
build an ensemble of different runs of the existing discontinuous parser by
averaging the predicted trees, to stabilize and boost performance. To begin
with, we provide a comprehensive computational complexity analysis (in terms of P
and NP-completeness) for tree averaging under different setups of binarity and
continuity. We then develop an efficient exact algorithm to tackle the task,
which runs in a reasonable time for all samples in our experiments. Results on
three datasets show our method outperforms all baselines in all metrics; we
also provide in-depth analyses of our approach.
| 2,024 | Computation and Language |
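A simplified stand-in for the averaging step, stated as our own approximation: represent each predicted tree as a set of constituents (a discontinuous constituent is just a non-contiguous index set) and pick the medoid, i.e., the predicted tree that agrees most with the other runs. The paper's exact algorithm searches over all valid trees rather than only the predicted ones.

```python
from typing import FrozenSet, List, Set

# A tree as a set of constituents; a discontinuous constituent is simply a
# non-contiguous set of token indices.
Tree = Set[FrozenSet[int]]

def agreement(t: Tree, runs: List[Tree]) -> int:
    """How many of t's constituents also appear in the other runs' trees."""
    return sum(len(t & other) for other in runs)

def medoid_tree(runs: List[Tree]) -> Tree:
    """Pick the predicted tree with maximal total constituent agreement."""
    return max(runs, key=lambda t: agreement(t, runs))
```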
EBBS: An Ensemble with Bi-Level Beam Search for Zero-Shot Machine
Translation | The ability of zero-shot translation emerges when we train a multilingual
model with certain translation directions; the model can then directly
translate in unseen directions. Alternatively, zero-shot translation can be
accomplished by pivoting through a third language (e.g., English). In our work,
we observe that both direct and pivot translations are noisy and achieve less
satisfactory performance. We propose EBBS, an ensemble method with a novel
bi-level beam search algorithm, where each ensemble component explores its own
prediction step by step at the lower level, while the components are synchronized by a "soft
voting" mechanism at the upper level. Results on two popular multilingual
translation datasets show that EBBS consistently outperforms direct and pivot
translations as well as existing ensemble techniques. Further, we can distill
the ensemble's knowledge back to the multilingual model to improve inference
efficiency; remarkably, our EBBS-based distillation does not sacrifice, and can
even improve, translation quality.
| 2,024 | Computation and Language |
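A greedy, single-beam caricature of the upper-level "soft voting" idea: at each step, the components' next-token distributions are averaged and the consensus token is committed. The real EBBS maintains a lower-level beam per component; `components` here are hypothetical callables returning a probability vector over the vocabulary.

```python
import numpy as np
from typing import Callable, List

def soft_vote_decode(components: List[Callable[[List[int]], np.ndarray]],
                     bos: int, eos: int, max_len: int = 64) -> List[int]:
    """Greedy simplification of EBBS's upper level: average the components'
    next-token distributions (soft voting) and commit to the consensus token."""
    seq = [bos]
    for _ in range(max_len):
        probs = np.mean([c(seq) for c in components], axis=0)  # soft voting
        nxt = int(np.argmax(probs))
        seq.append(nxt)
        if nxt == eos:
            break
    return seq
```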
TELEClass: Taxonomy Enrichment and LLM-Enhanced Hierarchical Text
Classification with Minimal Supervision | Hierarchical text classification aims to categorize each document into a set
of classes in a label taxonomy. Most earlier works focus on fully or
semi-supervised methods that require a large amount of human-annotated data,
which is costly and time-consuming to acquire. To alleviate human effort, in
this paper, we work on hierarchical text classification with minimal
supervision: the sole class name of each node.
Recently, large language models (LLMs) have shown competitive performance on various
tasks through zero-shot prompting, but this method performs poorly in the
hierarchical setting, because including the large, structured label space in a
prompt is ineffective. On the other hand, previous
weakly-supervised hierarchical text classification methods only utilize the raw
taxonomy skeleton and ignore the rich information hidden in the text corpus
that can serve as additional class-indicative features. To tackle the above
challenges, we propose TELEClass, Taxonomy Enrichment and LLM-Enhanced
weakly-supervised hierarchical text classification, which (1) automatically
enriches the label taxonomy with class-indicative topical terms mined from the
corpus to facilitate classifier training and (2) utilizes LLMs for both data
annotation and creation tailored for the hierarchical label space. Experiments
show that TELEClass can outperform previous weakly-supervised hierarchical text
classification methods and LLM-based zero-shot prompting methods on two public
datasets.
| 2,024 | Computation and Language |
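One plausible form of the taxonomy-enrichment step, offered as an assumption rather than the paper's method: rank terms that are frequent in a class's weakly matched documents but rare elsewhere, and attach the top terms to that taxonomy node as extra class-indicative features.

```python
from collections import Counter
from math import log
from typing import Dict, List

def class_indicative_terms(docs_by_class: Dict[str, List[List[str]]],
                           top_k: int = 10) -> Dict[str, List[str]]:
    """docs_by_class: tokenized documents weakly matched to each class name.
    Score = popularity within the class times distinctiveness against others."""
    class_counts = {c: Counter(t for d in docs for t in d)
                    for c, docs in docs_by_class.items()}
    total = Counter()
    for counts in class_counts.values():
        total.update(counts)
    enriched = {}
    for c, counts in class_counts.items():
        scored = {t: n * log(n / (total[t] - n + 1) + 1) for t, n in counts.items()}
        enriched[c] = sorted(scored, key=scored.get, reverse=True)[:top_k]
    return enriched
```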
"Flex Tape Can't Fix That": Bias and Misinformation in Edited Language
Models | Model editing has emerged as a cost-effective strategy to update knowledge
stored in language models. However, model editing can have unintended
consequences after edits are applied: information unrelated to the edits can
also be changed, and other general behaviors of the model can be wrongly
altered. In this work, we investigate how model editing methods unexpectedly
amplify model biases post-edit. We introduce a novel benchmark dataset,
Seesaw-CF, for measuring bias-related harms of model editing and conduct the
first in-depth investigation of how different weight-editing methods impact
model bias. Specifically, we focus on biases with respect to demographic
attributes such as race, geographic origin, and gender, as well as qualitative
flaws in long-form texts generated by edited language models. We find that
edited models exhibit, to various degrees, more biased behavior as they become
less confident in attributes for Asian, African, and South American subjects.
Furthermore, edited models amplify sexism and xenophobia in text generations
while remaining seemingly coherent and logical. Finally, editing facts about
place of birth, country of citizenship, or gender has particularly negative
effects on the model's knowledge about unrelated features like field of work.
| 2,024 | Computation and Language |
AXOLOTL: Fairness through Assisted Self-Debiasing of Large Language
Model Outputs | Pre-trained Large Language Models (LLMs) have significantly advanced natural
language processing capabilities but are susceptible to biases present in their
training data, leading to unfair outcomes in various applications. While
numerous strategies have been proposed to mitigate bias, they often require
extensive computational resources and may compromise model performance. In this
work, we introduce AXOLOTL, a novel post-processing framework, which operates
agnostically across tasks and models, leveraging public APIs to interact with
LLMs without direct access to internal parameters. Through a three-step process
resembling zero-shot learning, AXOLOTL identifies biases, proposes resolutions,
and guides the model to self-debias its outputs. This approach minimizes
computational costs and preserves model performance, making AXOLOTL a promising
tool for debiasing LLM outputs with broad applicability and ease of use.
| 2,024 | Computation and Language |
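The three-step loop in the abstract maps naturally onto three black-box API calls. The sketch below follows that outline with `llm` as a hypothetical completion function and illustrative prompt wording; it is not AXOLOTL's actual prompts or control flow.

```python
from typing import Callable

def self_debias(text: str, llm: Callable[[str], str]) -> str:
    """Three-step, API-only loop: identify bias, propose a resolution, then
    guide the model to rewrite its own output (no access to model internals)."""
    bias = llm(f"Identify any social bias in the following text. "
               f"Reply 'NONE' if there is none.\n\n{text}")
    if bias.strip().upper() == "NONE":
        return text
    fix = llm(f"Text:\n{text}\n\nIdentified bias:\n{bias}\n\n"
              f"Propose how to remove this bias while preserving the meaning.")
    return llm(f"Rewrite the text applying this resolution.\n\n"
               f"Text:\n{text}\n\nResolution:\n{fix}")
```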
Improving Socratic Question Generation using Data Augmentation and
Preference Optimization | The Socratic method is a way of guiding students toward solving a problem
independently, without directly revealing the solution. Although
this method has been shown to significantly improve student learning outcomes,
it remains a complex labor-intensive task for instructors. Large language
models (LLMs) can be used to augment human effort by automatically generating
Socratic questions for students. However, existing methods that involve
prompting these LLMs sometimes produce invalid outputs, e.g., those that
directly reveal the solution to the problem or provide irrelevant or premature
questions. To alleviate this problem, inspired by reinforcement learning with
AI feedback (RLAIF), we first propose a data augmentation method to enrich
existing Socratic questioning datasets with questions that are invalid in
specific ways. Next, we propose a method to optimize open-source LLMs such as
Llama 2 to prefer ground-truth questions over generated invalid ones, using
direct preference optimization (DPO). Our experiments on a Socratic questions
dataset for student code debugging show that a DPO-optimized 7B Llama 2 model
can effectively avoid generating invalid questions, and as a result,
outperforms existing state-of-the-art prompting methods.
| 2,024 | Computation and Language |
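For reference, the standard DPO objective the abstract relies on, in PyTorch: the policy is pushed to prefer ground-truth (chosen) Socratic questions over augmented invalid (rejected) ones, relative to a frozen reference model. The function signature is ours; the loss itself is the published DPO formulation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor, policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor, ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO: maximize the margin between the policy's (reference-
    adjusted) log-probabilities of chosen vs. rejected responses."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```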
Transcription and translation of videos using fine-tuned XLSR Wav2Vec2
on custom dataset and mBART | This research addresses the challenge of training an ASR model for
personalized voices with minimal data. Utilizing just 14 minutes of custom
audio from a YouTube video, we employ Retrieval-Based Voice Conversion (RVC) to
create a custom Common Voice 16.0 corpus. Subsequently, a Cross-lingual
Self-supervised Representations (XLSR) Wav2Vec2 model is fine-tuned on this
dataset. The developed web-based GUI efficiently transcribes and translates
input Hindi videos. By integrating XLSR Wav2Vec2 and mBART, the system aligns
the translated text with the video timeline, delivering an accessible solution
for multilingual video content transcription and translation for personalized
voices.
| 2,024 | Computation and Language |
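A minimal Hugging Face `transformers` sketch of the described two-stage pipeline: greedy CTC transcription with a fine-tuned XLSR Wav2Vec2, followed by mBART-50 Hindi-to-English translation. The ASR checkpoint name is a placeholder, since the paper's fine-tuned model is assumed rather than known here.

```python
import torch
from transformers import (MBart50TokenizerFast, MBartForConditionalGeneration,
                          Wav2Vec2ForCTC, Wav2Vec2Processor)

# Placeholder checkpoint standing in for the paper's fine-tuned Hindi model.
asr_processor = Wav2Vec2Processor.from_pretrained("your-org/xlsr-wav2vec2-hindi-custom")
asr_model = Wav2Vec2ForCTC.from_pretrained("your-org/xlsr-wav2vec2-hindi-custom")
mt_name = "facebook/mbart-large-50-many-to-many-mmt"
mt_tokenizer = MBart50TokenizerFast.from_pretrained(mt_name)
mt_model = MBartForConditionalGeneration.from_pretrained(mt_name)

def transcribe_and_translate(waveform, sampling_rate: int = 16000) -> str:
    """waveform: 1-D float array of the (Hindi) audio track."""
    # 1) ASR: greedy CTC decoding of the fine-tuned XLSR Wav2Vec2 output.
    inputs = asr_processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = asr_model(inputs.input_values).logits
    hindi_text = asr_processor.batch_decode(torch.argmax(logits, dim=-1))[0]
    # 2) MT: mBART-50 Hindi -> English.
    mt_tokenizer.src_lang = "hi_IN"
    encoded = mt_tokenizer(hindi_text, return_tensors="pt")
    generated = mt_model.generate(
        **encoded, forced_bos_token_id=mt_tokenizer.lang_code_to_id["en_XX"])
    return mt_tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
```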