Titles | Abstracts | Years | Categories |
---|---|---|---|
Addressing Order Sensitivity of In-Context Demonstration Examples in
Causal Language Models | In-context learning has become a popular paradigm in natural language
processing. However, its performance can be significantly influenced by the
order of in-context demonstration examples. In this paper, we found that causal
language models (CausalLMs) are more sensitive to this order compared to prefix
language models (PrefixLMs). We attribute this phenomenon to the
auto-regressive attention masks within CausalLMs, which restrict each token
from accessing information from subsequent tokens. This results in different
receptive fields for samples at different positions, thereby leading to
representation disparities across positions. To tackle this challenge, we
introduce an unsupervised fine-tuning method, termed the Information-Augmented
and Consistency-Enhanced approach. This approach utilizes contrastive learning
to align representations of in-context examples across different positions and
introduces a consistency loss to ensure similar representations for inputs with
different permutations. This enhances the model's predictive consistency across
permutations. Experimental results on four benchmarks suggest that our proposed
method can reduce the sensitivity to the order of in-context examples and
exhibit robust generalizability, particularly when demonstrations are sourced
from a pool different from that used in the training phase, or when the number
of in-context examples differs from what is used during training.
| 2024 | Computation and Language |
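
The consistency loss sketched in the abstract above can be illustrated with a short, hedged example: a minimal sketch assuming a symmetric formulation in which the mean prediction over permutations serves as the alignment target. The paper's exact loss may differ, and `permutation_consistency_loss` is a hypothetical name.

```python
import torch
import torch.nn.functional as F

def permutation_consistency_loss(logits_per_perm):
    """Consistency loss across predictions for different demonstration orders.

    logits_per_perm: list of [batch, num_classes] logits, one entry per
    permutation of the same in-context examples.
    """
    log_probs = [F.log_softmax(l, dim=-1) for l in logits_per_perm]
    # The mean distribution over permutations serves as the shared target.
    target = torch.stack([lp.exp() for lp in log_probs]).mean(dim=0)
    # KL(target || p_k) for each permutation k, averaged over permutations.
    return torch.stack(
        [F.kl_div(lp, target, reduction="batchmean") for lp in log_probs]
    ).mean()

# Toy usage: three orderings of the same 4-way classification batch.
perms = [torch.randn(8, 4) for _ in range(3)]
print(permutation_consistency_loss(perms))
```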
Exploring Failure Cases in Multimodal Reasoning About Physical Dynamics | In this paper, we present an exploration of LLMs' abilities to solve problems
requiring physical reasoning in situated environments. We construct a simple
simulated environment and demonstrate examples of where, in a zero-shot
setting, both text and multimodal LLMs display atomic world knowledge about
various objects but fail to compose this knowledge in correct solutions for an
object manipulation and placement task. We also use BLIP, a vision-language
model trained with more sophisticated cross-modal attention, to identify cases
relevant to object physical properties that the model fails to ground.
Finally, we present a procedure for discovering the relevant properties of
objects in the environment and propose a method to distill this knowledge back
into the LLM.
| 2024 | Computation and Language |
Leveraging ChatGPT in Pharmacovigilance Event Extraction: An Empirical
Study | With the advent of large language models (LLMs), there has been growing
interest in exploring their potential for medical applications. This research
aims to investigate the ability of LLMs, specifically ChatGPT, in the context
of pharmacovigilance event extraction, whose main goal is to identify
and extract adverse events or potential therapeutic events from textual medical
sources. We conduct extensive experiments to assess the performance of ChatGPT
in the pharmacovigilance event extraction task, employing various prompts and
demonstration selection strategies. The findings show that while ChatGPT
achieves reasonable performance with appropriate demonstration selection
strategies, it still falls short compared to fully fine-tuned small models.
Additionally, we explore the potential of leveraging ChatGPT for data
augmentation. However, our investigation reveals that the inclusion of
synthesized data into fine-tuning may lead to a decrease in performance,
possibly attributed to noise in the ChatGPT-generated labels. To mitigate this,
we explore different filtering strategies and find that, with the proper
approach, more stable performance can be achieved, although consistent
improvement remains elusive.
| 2024 | Computation and Language |
Foot In The Door: Understanding Large Language Model Jailbreaking via
Cognitive Psychology | Large Language Models (LLMs) have gradually become the gateway for people to
acquire new knowledge. However, attackers can break the model's security
protection ("jail") to access restricted information, which is called
"jailbreaking." Previous studies have shown the weakness of current LLMs when
confronted with such jailbreaking attacks. Nevertheless, comprehension of the
intrinsic decision-making mechanism within the LLMs upon receipt of jailbreak
prompts is noticeably lacking. Our research provides a psychological
explanation of the jailbreak prompts. Drawing on cognitive consistency theory,
we argue that the key to jailbreak is guiding the LLM to achieve cognitive
coordination in an erroneous direction. Further, we propose an automatic
black-box jailbreaking method based on the Foot-in-the-Door (FITD) technique.
This method progressively induces the model to answer harmful questions via
multi-step incremental prompts. We instantiated a prototype system to evaluate
the jailbreaking effectiveness on 8 advanced LLMs, yielding an average success
rate of 83.9%. This study offers a psychological perspective on, and
explanatory insights into, the intrinsic decision-making logic of LLMs.
| 2024 | Computation and Language |
Query Augmentation by Decoding Semantics from Brain Signals | Query augmentation is a crucial technique for refining semantically imprecise
queries. Traditionally, query augmentation relies on extracting information
from initially retrieved, potentially relevant documents. If the quality of the
initially retrieved documents is low, then the effectiveness of query
augmentation would be limited as well. We propose Brain-Aug, which enhances a
query by incorporating semantic information decoded from brain signals.
Brain-Aug generates the continuation of the original query with a prompt
constructed with brain signal information and a ranking-oriented inference
approach. Experimental results on fMRI (functional magnetic resonance imaging)
datasets show that Brain-Aug produces semantically more accurate queries,
leading to improved document ranking performance. Such improvement brought by
brain signals is particularly notable for ambiguous queries.
| 2024 | Computation and Language |
Making Pre-trained Language Models Better Continual Few-Shot Relation
Extractors | Continual Few-shot Relation Extraction (CFRE) is a practical problem that
requires the model to continuously learn novel relations while avoiding
forgetting old ones, given only a few labeled training examples. The primary challenges are
catastrophic forgetting and overfitting. This paper harnesses prompt learning
to explore the implicit capabilities of pre-trained language models to address
the above two challenges, thereby making language models better continual
few-shot relation extractors. Specifically, we propose a Contrastive Prompt
Learning framework, which designs prompt representations to acquire more
generalized knowledge that can be easily adapted to both old and new categories,
and uses margin-based contrastive learning to focus on hard samples, thereby
alleviating catastrophic forgetting and overfitting. To further remedy
overfitting in low-resource scenarios, we introduce an effective memory
augmentation strategy that employs well-crafted prompts to guide ChatGPT in
generating diverse samples. Extensive experiments demonstrate that our method
outperforms state-of-the-art methods by a large margin and significantly
mitigates catastrophic forgetting and overfitting in low-resource scenarios.
| 2024 | Computation and Language |
GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models
Evaluation | Large Vision-Language Models (LVLMs) have demonstrated great abilities in
image perception and language understanding. However, existing multimodal
benchmarks focus on primary perception abilities and commonsense knowledge
which are insufficient to reflect the comprehensive capabilities of LVLMs. We
propose GAOKAO-MM, a multimodal benchmark based on the Chinese College Entrance
Examination (GAOKAO), comprising 8 subjects and 12 types of images, such as
diagrams, function graphs, maps and photos. GAOKAO-MM derives from native
Chinese context and sets human-level requirements for the model's abilities,
including perception, understanding, knowledge and reasoning. We evaluate 10
LVLMs and find that the accuracies of all of them are lower than 50%, with
GPT-4-Vision (48.1%), Qwen-VL-Plus (41.2%) and Gemini-Pro-Vision (35.1%) ranking
in the top three positions. The results of our multi-dimensional analysis
indicate that LVLMs remain a moderate distance from Artificial General
Intelligence (AGI) and provide insights facilitating the development of
multilingual LVLMs.
| 2024 | Computation and Language |
HD-Eval: Aligning Large Language Model Evaluators Through Hierarchical
Criteria Decomposition | Large language models (LLMs) have emerged as a promising alternative to
expensive human evaluations. However, the alignment and coverage of LLM-based
evaluations are often limited by the scope and potential bias of the evaluation
prompts and criteria. To address this challenge, we propose HD-Eval, a novel
framework that iteratively aligns LLM-based evaluators with human preference
via Hierarchical Criteria Decomposition. HD-Eval mirrors the evaluation
mindset of human experts and enhances the alignment of LLM-based
evaluators by decomposing a given evaluation task into finer-grained criteria,
aggregating them according to estimated human preferences, pruning
insignificant criteria with attribution, and further decomposing significant
criteria. By integrating these steps within an iterative alignment training
process, we obtain a hierarchical decomposition of criteria that
comprehensively captures aspects of natural language at multiple levels of
granularity. Implemented as a white box, the human preference-guided aggregator
is efficient to train and more explainable than relying solely on prompting,
and its independence from model parameters makes it applicable to closed-source
LLMs. Extensive experiments on three evaluation domains demonstrate the
superiority of HD-Eval in further aligning state-of-the-art evaluators and
providing deeper insights into the explanation of evaluation results and the
task itself.
| 2024 | Computation and Language |
Dental Severity Assessment through Few-shot Learning and SBERT
Fine-tuning | Dental diseases have a significant impact on a considerable portion of the
population, leading to various health issues that can detrimentally affect
individuals' overall well-being. The integration of automated systems in oral
healthcare has become increasingly crucial. Machine learning approaches offer a
viable solution to address challenges such as diagnostic difficulties,
inefficiencies, and errors in oral disease diagnosis. These methods prove
particularly useful when physicians struggle to predict or diagnose diseases at
their early stages. In this study, thirteen different machine learning, deep
learning, and large language models were employed to determine the severity
level of oral health issues based on radiologists' reports. The results
revealed that the Few-shot learning with SBERT and Multi-Layer Perceptron model
outperformed all other models across various experiments, achieving a best
accuracy of 94.1%. Consequently, this model
exhibits promise as a reliable tool for evaluating the severity of oral
diseases, enabling patients to receive more effective treatment and aiding
healthcare professionals in making informed decisions regarding resource
allocation and the management of high-risk patients.
| 2024 | Computation and Language |
Chimera: A Lossless Decoding Method for Accelerating Large Language
Models Inference by Fusing all Tokens | Large language models (LLMs) have demonstrated remarkable capabilities across
various tasks. However, their widespread application is hindered by the
resource-intensive decoding process. To address this challenge, current
approaches have incorporated additional decoding heads to enable parallel
prediction of multiple subsequent tokens, thereby achieving inference
acceleration. Nevertheless, the accuracy of these decoding heads falls short of
that of the auto-regressive decoding approach.
In light of these limitations, we propose Chimera, a novel framework
specifically designed for speculative sampling. Within this framework, we
introduce a lightweight draft model that effectively utilizes previously
generated tokens to predict subsequent words. To ensure both accuracy and
efficiency, we present two strategies within the lightweight draft model.
Firstly, we focus on capturing short-range dependencies at the bottom layer.
Secondly, we leverage the readily available representations from the original
LLM. Through empirical evaluation on the Vicuna and LLaMA-2 series, Chimera
demonstrates impressive results, achieving an average latency speedup ratio of
2.7x compared to the vanilla auto-regressive decoding approach. This highlights
the potential of our proposed framework in significantly improving the
efficiency of large language models during the decoding process.
| 2024 | Computation and Language |
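
For readers unfamiliar with speculative sampling, the generic draft-then-verify loop that frameworks such as Chimera build on can be sketched as follows. This is a simplified greedy-acceptance sketch, not Chimera's actual algorithm; `target_step` and `draft_propose` are placeholder callables.

```python
def speculative_decode(target_step, draft_propose, prompt_ids, max_new=64):
    """Generic draft-then-verify loop behind speculative sampling.

    target_step(ids)   -> next-token id under the full (target) model.
    draft_propose(ids) -> a short list of draft token ids from a light model.
    Greedy acceptance below is a simplification of the stochastic acceptance
    rule; accepted draft tokens let the target model verify several positions
    per pass instead of decoding them one by one.
    """
    ids = list(prompt_ids)
    limit = len(prompt_ids) + max_new
    while len(ids) < limit:
        drafted = draft_propose(ids)
        n_accepted = 0
        for tok in drafted:
            if target_step(ids) == tok:   # verification (batchable in practice)
                ids.append(tok)
                n_accepted += 1
            else:
                break
        if n_accepted < len(drafted) or not drafted:
            ids.append(target_step(ids))  # target model supplies the fix-up token
    return ids[:limit]

# Toy models over integer tokens: the target echoes last token + 1.
target = lambda ids: ids[-1] + 1
draft = lambda ids: [ids[-1] + 1, ids[-1] + 3]   # second guess is usually wrong
print(speculative_decode(target, draft, [0], max_new=5))  # [0, 1, 2, 3, 4, 5]
```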
Look Before You Leap: Problem Elaboration Prompting Improves
Mathematical Reasoning in Large Language Models | Large language models~(LLMs) have exhibited impressive performance across NLP
tasks. However, they still face challenges in complex reasoning tasks and can be
sensitive to input context. Although significant effort has been invested in
enhancing the reasoning process and improving the robustness of prefix prompts, the
crucial role of problem context has been overlooked. In this study, we propose
a new approach to improve the mathematical capacities of LLMs, named Problem
Elaboration Prompting~(PEP). Specifically, PEP decomposes and elucidates the
problem context before reasoning, thus enhancing the global context modeling
and reducing parsing difficulties. Experiments demonstrate
promising performance on complex reasoning and indicate a beneficial impact
on ill-formed problems. For instance, with the GPT-3.5
model~(\texttt{text-davinci-003}), we observed a 9.93\% improvement with greedy
decoding and 8.80\% improvement with self-consistency on GSM8k compared to the
standard CoT. With ChatGPT~(\texttt{turbo}) and PEP, we achieve SOTA
performance on SVAMP (86.2\%) and GSM8k (90.98\%).
| 2024 | Computation and Language |
Measuring Bargaining Abilities of LLMs: A Benchmark and A
Buyer-Enhancement Method | Bargaining is an important and unique part of negotiation between humans. As
LLM-driven agents learn to negotiate and act like real humans, how to evaluate
agents' bargaining abilities remains an open problem. For the first time, we
formally describe the bargaining task as an asymmetric incomplete-information
game, defining the gains of the Buyer and Seller across multiple bargaining
processes. This allows us to quantitatively assess an agent's performance in
the bargaining task. We collected a real product price dataset, AmazonHistoryPrice,
and conducted evaluations of various LLM agents' bargaining abilities. We find
that playing a Buyer is much harder than playing a Seller, and that increasing
model size cannot effectively improve the Buyer's performance. To address this challenge,
we propose a novel approach called OG-Narrator that integrates a deterministic
Offer Generator to control the price range of Buyer's offers, and an LLM
Narrator to create natural language sentences for generated offers.
Experimental results show that OG-Narrator improves the buyer's deal rates from
26.67% to 88.88% and increases profits tenfold across all
baselines, even for a model that has not been aligned.
| 2024 | Computation and Language |
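
The division of labor in OG-Narrator can be illustrated with a minimal sketch. The fixed concession schedule and function names below are illustrative assumptions; the abstract only specifies that a deterministic generator controls the price range of the Buyer's offers while an LLM narrates them.

```python
def offer_generator(list_price, round_idx, start_ratio=0.5, step_ratio=0.1):
    """Deterministic Buyer offer: start low, concede a fixed step each round.

    The fixed-ratio schedule is an illustrative assumption; OG-Narrator only
    requires that offers come from a controllable, deterministic generator."""
    ratio = min(1.0, start_ratio + step_ratio * round_idx)
    return round(list_price * ratio, 2)

def narrate(offer, llm=None):
    """The LLM 'Narrator' turns a numeric offer into natural language;
    falls back to a template when no model client is supplied."""
    prompt = f"As a polite buyer, make an offer of ${offer} in one sentence."
    return llm(prompt) if llm else f"Would you take ${offer}? That's where I am right now."

# Toy usage: four bargaining rounds against a $100 listing.
for t in range(4):
    print(narrate(offer_generator(100.0, t)))
```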
A Theoretical Result on the Inductive Bias of RNN Language Models | Recent work by Hewitt et al. (2020) provides a possible interpretation of the
empirical success of recurrent neural networks (RNNs) as language models (LMs).
It shows that RNNs can efficiently represent bounded hierarchical structures
that are prevalent in human language.
This suggests that RNNs' success might be linked to their ability to model
hierarchy.
However, a closer inspection of Hewitt et al.'s (2020) construction shows
that it is not limited to hierarchical LMs, posing the question of what
\emph{other classes} of LMs can be efficiently represented by RNNs.
To this end, we generalize their construction to show that RNNs can
efficiently represent a larger class of LMs: Those that can be represented by a
pushdown automaton with a bounded stack and a generalized stack update
function.
This is analogous to an automaton that keeps a memory of a fixed number of
symbols and updates the memory with a simple update mechanism.
Altogether, this efficiency in representing a diverse class of
non-hierarchical LMs suggests a lack of concrete cognitive and
human-language-centered inductive biases in RNNs.
| 2024 | Computation and Language |
Linguistic Intelligence in Large Language Models for Telecommunications | Large Language Models (LLMs) have emerged as a significant advancement in the
field of Natural Language Processing (NLP), demonstrating remarkable
capabilities in language generation and other language-centric tasks. Despite
their evaluation across a multitude of analytical and reasoning tasks in
various scientific domains, a comprehensive exploration of their knowledge and
understanding within the realm of natural language tasks in the
telecommunications domain is still needed. This study, therefore, seeks to
evaluate the knowledge and understanding capabilities of LLMs within this
domain. To achieve this, we conduct an exhaustive zero-shot evaluation of four
prominent LLMs: Llama-2, Falcon, Mistral, and Zephyr. These models require fewer
resources than ChatGPT, making them suitable for resource-constrained
environments. Their performance is compared with state-of-the-art, fine-tuned
models. To the best of our knowledge, this is the first work to extensively
evaluate and compare the understanding of LLMs across multiple language-centric
tasks in this domain. Our evaluation reveals that zero-shot LLMs can achieve
performance levels comparable to the current state-of-the-art fine-tuned
models. This indicates that pretraining on extensive text corpora equips LLMs
with a degree of specialization, even within the telecommunications domain. We
also observe that no single LLM consistently outperforms others, and the
performance of different LLMs can fluctuate. Although their performance lags
behind fine-tuned models, our findings underscore the potential of LLMs as a
valuable resource for understanding various aspects of this field that lack
large annotated data.
| 2024 | Computation and Language |
Prompt Perturbation Consistency Learning for Robust Language Models | Large language models (LLMs) have demonstrated impressive performance on a
number of natural language processing tasks, such as question answering and
text summarization. However, their performance on sequence labeling tasks such
as intent classification and slot filling (IC-SF), which is a central component
in personal assistant systems, lags significantly behind discriminative models.
Furthermore, there is a lack of substantive research on the robustness of LLMs
to various perturbations in the input prompts. The contributions of this paper
are three-fold. First, we show that fine-tuning sufficiently large LLMs can
produce IC-SF performance comparable to discriminative models. Next, we
systematically analyze the performance deterioration of those fine-tuned models
due to three distinct yet relevant types of input perturbations - oronyms,
synonyms, and paraphrasing. Finally, we propose an efficient mitigation
approach, Prompt Perturbation Consistency Learning (PPCL), which works by
regularizing the divergence between losses from clean and perturbed samples.
Our experiments demonstrate that PPCL can recover on average 59% and 69% of the
performance drop for IC and SF tasks, respectively. Furthermore, PPCL beats the
data augmentation approach while using ten times fewer augmented data samples.
| 2024 | Computation and Language |
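
The PPCL objective, regularizing the divergence between losses from clean and perturbed samples, admits a compact sketch. Treating that divergence as the absolute difference of the two scalar losses, and the weighting scheme, are our simplifying assumptions.

```python
import torch

def ppcl_loss(clean_loss, perturbed_loss, task_weight=1.0, reg_weight=0.5):
    """PPCL-style objective (sketch): supervised losses on clean and perturbed
    prompts plus a penalty on their divergence. Using the absolute difference
    of the two scalar losses as that divergence is a simplifying assumption."""
    consistency = (clean_loss - perturbed_loss).abs()
    return task_weight * (clean_loss + perturbed_loss) + reg_weight * consistency

# Toy usage with scalar cross-entropy losses from the two forward passes.
print(ppcl_loss(torch.tensor(0.40), torch.tensor(0.65)))  # tensor(1.1750)
```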
MATHWELL: Generating Educational Math Word Problems at Scale | Math word problems are critical K-8 educational tools, but writing them is
time-consuming and requires domain expertise. We suggest that language models
can support K-8 math education by automatically generating problems at scale.
To be educational, generated problems must be 1) solvable, 2) accurate, and 3)
appropriate. Existing datasets are unlabeled for these criteria, making them
ill-suited for training problem generators. We introduce MATHWELL, a Llama-2
(70B) model iteratively finetuned to generate K-8 math word problems using data
from expert annotation. Using MATHWELL, we generate the largest English word
problem dataset with Program of Thought (PoT) rationales to date, containing
20,490 problems. Domain experts score 3,484 of these and find that MATHWELL has
a 40% higher share of problems with executable solutions that meet all criteria
than alternatives; 74% of its problems with executable solutions are
solvable, accurate, and appropriate. We release our model, data, and
annotations.
| 2024 | Computation and Language |
SportQA: A Benchmark for Sports Understanding in Large Language Models | A deep understanding of sports, a field rich in strategic and dynamic
content, is crucial for advancing Natural Language Processing (NLP). This holds
particular significance in the context of evaluating and advancing Large
Language Models (LLMs), given the existing gap in specialized benchmarks. To
bridge this gap, we introduce SportQA, a novel benchmark specifically designed
for evaluating LLMs in the context of sports understanding. SportQA encompasses
over 70,000 multiple-choice questions across three distinct difficulty levels,
each targeting different aspects of sports knowledge from basic historical
facts to intricate, scenario-based reasoning tasks. We conducted a thorough
evaluation of prevalent LLMs, mainly utilizing few-shot learning paradigms
supplemented by chain-of-thought (CoT) prompting. Our results reveal that while
LLMs exhibit competent performance in basic sports knowledge, they struggle
with more complex, scenario-based sports reasoning, lagging behind human
expertise. The introduction of SportQA marks a significant step forward in NLP,
offering a tool for assessing and enhancing sports understanding in LLMs.
| 2024 | Computation and Language |
SemEval-2024 Task 8: Weighted Layer Averaging RoBERTa for Black-Box
Machine-Generated Text Detection | This document contains the details of the authors' submission to the
proceedings of SemEval 2024's Task 8: Multigenerator, Multidomain, and
Multilingual Black-Box Machine-Generated Text Detection Subtask A (monolingual)
and B. Detection of machine-generated text is becoming an increasingly
important task, with the advent of large language models (LLMs). In this
document, we lay out the techniques we utilized for this task, along
with the results obtained.
| 2024 | Computation and Language |
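
The weighted layer averaging named in the title is a standard pooling technique that can be sketched as follows; the exact pooling in the submission may differ, and this module is an illustrative assumption.

```python
import torch
import torch.nn as nn

class WeightedLayerAverage(nn.Module):
    """Learned softmax-weighted average over a transformer's layer outputs,
    a common way to pool RoBERTa hidden states for classification."""

    def __init__(self, num_layers):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states):
        # hidden_states: [num_layers, batch, seq_len, dim]
        weights = torch.softmax(self.scores, dim=0).view(-1, 1, 1, 1)
        return (weights * hidden_states).sum(dim=0)   # [batch, seq_len, dim]

# Toy usage: RoBERTa-base exposes 13 hidden-state tensors (embeddings + 12 layers).
pooled = WeightedLayerAverage(13)(torch.randn(13, 2, 16, 768))
print(pooled.shape)   # torch.Size([2, 16, 768])
```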
MultiContrievers: Analysis of Dense Retrieval Representations | Dense retrievers compress source documents into (possibly lossy) vector
representations, yet there is little analysis of what information is lost
versus preserved, and how it affects downstream tasks. We conduct the first
analysis of the information captured by dense retrievers compared to the
language models they are based on (e.g., BERT versus Contriever). We use 25
MultiBERT checkpoints as randomized initialisations to train MultiContrievers,
a set of 25 contriever models. We test whether specific pieces of information
-- such as gender and occupation -- can be extracted from contriever vectors of
Wikipedia-like documents. We measure this extractability via
information-theoretic probing. We then examine the relationship of extractability to
performance and gender bias, as well as the sensitivity of these results to
many random initialisations and data shuffles. We find that (1) contriever
models have significantly increased extractability, but extractability usually
correlates poorly with benchmark performance; (2) gender bias is present, but is
not caused by the contriever representations; and (3) there is high sensitivity to
both random initialisation and data shuffling, suggesting that future
retrieval research should test across a wider spread of both.
| 2024 | Computation and Language |
Evaluating Prompting Strategies for Grammatical Error Correction Based
on Language Proficiency | The writing examples of English language learners may be different from those
of native speakers. Given that there are significant differences in second
language (L2) learners' error types across proficiency levels, this paper
attempts to reduce overcorrection by examining the interaction between LLM's
performance and L2 language proficiency. Our method focuses on zero-shot and
few-shot prompting and fine-tuned models for GEC for learners of English as a
foreign language at different proficiency levels. We investigate GEC results
and find that overcorrection happens primarily in advanced language learners'
writing (proficiency C) rather than proficiency A (a beginner level) and
proficiency B (an intermediate level). Fine-tuned LLMs, and even few-shot
prompting with writing examples of English learners, actually tend to exhibit
decreased recall measures. To make our claim concrete, we conduct a
comprehensive examination of GEC outcomes and their evaluation results based on
language proficiency.
| 2024 | Computation and Language |
Frustratingly Simple Prompting-based Text Denoising | This paper introduces a novel perspective on the automated essay scoring
(AES) task, challenging the conventional view of the ASAP dataset as a static
entity. Employing simple text denoising techniques using prompting, we explore
the dynamic potential within the dataset. While acknowledging the previous
emphasis on building regression systems, our paper underscores how making minor
changes to a dataset through text denoising can enhance the final results.
| 2024 | Computation and Language |
Generalization or Memorization: Data Contamination and Trustworthy
Evaluation for Large Language Models | Recent statements about the impressive capabilities of large language models
(LLMs) are usually supported by evaluating on open-access benchmarks.
Given the vast size and wide-ranging sources of LLMs' training data, that data
could explicitly or implicitly include test data, making LLMs more
susceptible to data contamination. However, due to the opacity of training
data, the black-box access of models, and the rapid growth of synthetic
training data, detecting and mitigating data contamination for LLMs faces
significant challenges. In this paper, we propose CDD, which stands for
Contamination Detection via output Distribution for LLMs. CDD requires only
sampled texts to detect data contamination, by identifying the peakedness
of the LLM's output distribution. To mitigate the impact of data contamination in
evaluation, we also present TED: Trustworthy Evaluation via output
Distribution, based on the correction of LLM's output distribution. To
facilitate this study, we introduce two benchmarks, i.e., DetCon and ComiEval,
for data contamination detection and contamination mitigation evaluation tasks.
Extensive experimental results show that CDD achieves the average relative
improvements of 21.8\%-30.2\% over other contamination detection approaches in
terms of Accuracy, F1 Score, and AUC metrics, and can effectively detect
contamination caused by variants of the test data. TED significantly mitigates
inflated performance improvements of up to 66.9\% attributed to data contamination across
24 settings and 21 contamination degrees. In real-world applications, we reveal
that ChatGPT exhibits a high potential to suffer from data contamination on
the HumanEval benchmark.
| 2024 | Computation and Language |
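
The idea of detecting contamination from the peakedness of the output distribution can be illustrated with a toy sketch. Using the mode frequency of repeated samples as the peakedness statistic is our simplification; CDD's actual statistic is defined over the LLM's output distribution.

```python
from collections import Counter

def peakedness(samples):
    """Fraction of repeated generations that coincide with the mode answer.

    Mode frequency is used here as a stand-in peakedness statistic; a value
    near 1.0 on a benchmark item hints that the model may have memorized it.
    """
    counts = Counter(samples)
    return max(counts.values()) / len(samples)

# A contaminated item tends to yield near-identical samples across decodes.
print(peakedness(["42", "42", "42", "42", "41"]))  # 0.8
print(peakedness(["a", "b", "c", "d", "e"]))       # 0.2
```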
Direct Punjabi to English speech translation using discrete units | Speech-to-speech translation is yet to reach the same level of coverage as
text-to-text translation systems. Current speech technology covers only a small
fraction of the over 7,000 languages spoken worldwide, leaving more
than half of the population deprived of such technology and shared experiences.
With voice-assisted technology (such as social robots and speech-to-text apps)
and auditory content (such as podcasts and lectures) on the rise, ensuring that
the technology is available for all is more important than ever. Speech
translation can play a vital role in mitigating technological disparity and
creating a more inclusive society. With a motive to contribute towards speech
translation research for low-resource languages, our work presents a direct
speech-to-speech translation model from Punjabi, an Indic language, to
English. Additionally, we explore the performance of using a
discrete representation of speech called discrete acoustic units as input to
the Transformer-based translation model. The model, abbreviated as Unit-to-Unit
Translation (U2UT), takes a sequence of discrete units of the source language
(the language being translated from) and outputs a sequence of discrete units
of the target language (the language being translated to). Our results show
that the U2UT model performs better than the Speech-to-Unit Translation (S2UT)
model by 3.69 BLEU points.
| 2024 | Computation and Language |
Likelihood-based Mitigation of Evaluation Bias in Large Language Models | Large Language Models (LLMs) are widely used to evaluate natural language
generation tasks as automated metrics. However, the likelihood, a measure of
how plausible an LLM finds a sentence, can vary due to superficial differences in
sentences, such as word order and sentence structure. A likelihood bias may
therefore arise if LLMs are used for evaluation: they
might overrate sentences with higher likelihoods while underrating those with
lower likelihoods. In this paper, we investigate the presence and impact of
likelihood bias in LLM-based evaluators. We also propose a method to mitigate
the likelihood bias. Our method utilizes highly biased instances as few-shot
examples for in-context learning. Our experiments in evaluating the
data-to-text and grammatical error correction tasks reveal that several LLMs we
test display a likelihood bias. Furthermore, our proposed method successfully
mitigates this bias, also improving evaluation performance (in terms of
correlation of models with human scores) significantly.
| 2024 | Computation and Language |
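
The likelihood at the center of this bias is straightforward to compute with standard tooling. A minimal sketch using Hugging Face transformers, with GPT-2 as an example model (the paper's actual evaluators and prompts are not reproduced here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sentence_log_likelihood(text, model, tokenizer):
    """Average token log-likelihood of `text` under a causal LM, the quantity
    whose sensitivity to superficial form (e.g., word order) drives the bias."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()   # loss is mean NLL, so negate for log-likelihood

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
print(sentence_log_likelihood("The cat sat on the mat.", lm, tok))
print(sentence_log_likelihood("On the mat the cat sat.", lm, tok))
```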
$C^3$: Confidence Calibration Model Cascade for Inference-Efficient
Cross-Lingual Natural Language Understanding | Cross-lingual natural language understanding (NLU) is a critical task in
natural language processing (NLP). Recent advancements have seen multilingual
pre-trained language models (mPLMs) significantly enhance the performance of
these tasks. However, mPLMs necessitate substantial resources and incur high
computational costs during inference, posing challenges for deployment in
real-world and real-time systems. Existing model cascade methods seek to
enhance inference efficiency by greedily selecting the lightest model capable
of processing the current input from a variety of models, based on model
confidence scores. Nonetheless, deep models tend to exhibit overconfidence, and
confidence distributions vary across languages. This leads to the emission of
confident but incorrect predictions by smaller models, hindering their ability
to generalize effectively across test languages. In this study, we introduce a
confidence calibration model cascade ($C^3$) method. This approach, simple yet
effective, involves calibration prior to cascade inference, thereby enhancing
cascade accuracy through more reliable predictions. Extensive experiments
conducted on three cross-lingual benchmarks demonstrate that $C^3$
significantly outperforms all state-of-the-art baselines.
| 2024 | Computation and Language |
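
The calibrate-then-cascade idea can be sketched compactly. Temperature scaling as the calibration step, and the specific threshold rule, are our assumptions; $C^3$ only requires that calibration precede cascade inference.

```python
import numpy as np

def calibrated_cascade(x, models, temperatures, threshold=0.8):
    """Confidence-calibrated model cascade (sketch).

    models: callables from cheapest to most expensive, each returning a
    1-D numpy array of class logits for input x.
    temperatures: per-model temperatures fit on held-out data.
    """
    probs = None
    for model, temp in zip(models, temperatures):
        logits = model(x) / temp                # calibration step
        probs = np.exp(logits - logits.max())   # numerically stable softmax
        probs /= probs.sum()
        if probs.max() >= threshold:            # confident enough: exit early
            return int(probs.argmax())
    return int(probs.argmax())                  # otherwise trust the largest model

# Toy usage: a small model that is unsure and a large model that is confident.
small = lambda x: np.array([0.2, 0.1, 0.0])
large = lambda x: np.array([4.0, 0.1, 0.0])
print(calibrated_cascade("input", [small, large], [1.5, 1.0]))  # 0
```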
From Noise to Clarity: Unraveling the Adversarial Suffix of Large
Language Model Attacks via Translation of Text Embeddings | The safety defenses of large language models (LLMs) remain limited
because dangerous prompts are manually curated to cover just a few known attack
types, which fails to keep pace with emerging varieties. Recent studies found
that attaching suffixes to harmful instructions can bypass the defenses of LLMs
and lead to dangerous outputs. This method, while effective, leaves a gap in
understanding the underlying mechanics of such adversarial suffixes due to their
non-readability, and it can be relatively easily detected by common defense
methods such as perplexity filters. To cope with this challenge, in this paper,
we propose an Adversarial Suffixes Embedding Translation Framework (ASETF)
that translates the unreadable adversarial suffixes into coherent,
readable text, which makes it easier to understand and analyze the reasons
behind harmful content generation by large language models. We conducted
experiments on LLMs such as LLaMA-2 and Vicuna, using the harmful instructions
from the AdvBench dataset. The results indicate that our method achieves a much
better attack success rate than existing techniques, while significantly
enhancing the textual fluency of the prompts. In addition, our approach can be
generalized into a broader method for generating transferable adversarial
suffixes that can successfully attack multiple LLMs, even black-box LLMs, such
as ChatGPT and Gemini. As a result, the prompts generated through our method
exhibit enriched semantic diversity, which potentially provides more
adversarial examples for LLM defense methods.
| 2024 | Computation and Language |
TMT: Tri-Modal Translation between Speech, Image, and Text by Processing
Different Modalities as Different Languages | The capability to jointly process multi-modal information is becoming an
essential capability. However, the limited amount of paired multi-modal data and the
large computational requirements of multi-modal learning hinder
development. We propose a novel Tri-Modal Translation (TMT) model that
translates between arbitrary modalities spanning speech, image, and text. We
introduce a novel viewpoint, where we interpret different modalities as
different languages, and treat multi-modal translation as a well-established
machine translation problem. To this end, we tokenize speech and image data
into discrete tokens, which provide a unified interface across modalities and
significantly decrease the computational cost. In the proposed TMT, a
multi-modal encoder-decoder conducts the core translation, whereas
modality-specific processing is conducted only within the tokenization and
detokenization stages. We evaluate the proposed TMT on all six modality
translation tasks. TMT outperforms single model counterparts consistently,
demonstrating that unifying tasks is beneficial not only for practicality but
also for performance.
| 2024 | Computation and Language |
HiGPT: Heterogeneous Graph Language Model | Heterogeneous graph learning aims to capture complex relationships and
diverse relational semantics among entities in a heterogeneous graph to obtain
meaningful representations for nodes and edges. Recent advancements in
heterogeneous graph neural networks (HGNNs) have achieved state-of-the-art
performance by considering relation heterogeneity and using specialized message
functions and aggregation rules. However, existing frameworks for heterogeneous
graph learning have limitations in generalizing across diverse heterogeneous
graph datasets. Most of these frameworks follow the "pre-train" and "fine-tune"
paradigm on the same dataset, which restricts their capacity to adapt to new
and unseen data. This raises the question: "Can we generalize heterogeneous
graph models to be well-adapted to diverse downstream learning tasks with
distribution shifts in both node token sets and relation type heterogeneity?"
To tackle these challenges, we propose HiGPT, a general large graph model with a
heterogeneous graph instruction-tuning paradigm. Our framework enables learning
from arbitrary heterogeneous graphs without the need for any fine-tuning
process from downstream datasets. To handle distribution shifts in
heterogeneity, we introduce an in-context heterogeneous graph tokenizer that
captures semantic relationships in different heterogeneous graphs, facilitating
model adaptation. We incorporate a large corpus of heterogeneity-aware graph
instructions into our HiGPT, enabling the model to effectively comprehend
complex relation heterogeneity and distinguish between various types of graph
tokens. Furthermore, we introduce the Mixture-of-Thought (MoT) instruction
augmentation paradigm to mitigate data scarcity by generating diverse and
informative instructions. Through comprehensive evaluations, our proposed
framework demonstrates exceptional generalization performance.
| 2024 | Computation and Language |
GraphWiz: An Instruction-Following Language Model for Graph Problems | Large language models (LLMs) have achieved impressive success across several
fields, but their proficiency in understanding and resolving complex graph
problems is less explored. To bridge this gap, we introduce GraphInstruct, a
novel and comprehensive instruction-tuning dataset designed to equip language
models with the ability to tackle a broad spectrum of graph problems using
explicit reasoning paths. Utilizing GraphInstruct, we build GraphWiz, an
open-source language model capable of resolving various graph problem types
while generating clear reasoning processes. To enhance the model's capability
and reliability, we incorporate the Direct Preference Optimization (DPO)
framework into the graph problem-solving context. The enhanced model,
GraphWiz-DPO, achieves an average accuracy of 65% across nine tasks with
different complexity levels, surpassing GPT-4, which has an average accuracy of
43.8%. Moreover, our research delves into the delicate balance between training
data volume and model performance, highlighting the potential for overfitting
with increased data. We also explore the transferability of the model's
reasoning ability across different graph tasks, indicating the model's
adaptability and practical application potential. Our investigation offers a
new blueprint and valuable insights for developing LLMs specialized in graph
reasoning and problem-solving.
| 2024 | Computation and Language |
Don't Forget Your Reward Values: Language Model Alignment via
Value-based Calibration | While Reinforcement Learning from Human Feedback (RLHF) significantly
enhances the generation quality of Large Language Models (LLMs), recent studies
have raised concerns regarding the complexity and instability associated with
the Proximal Policy Optimization (PPO) algorithm, proposing a series of
order-based calibration methods as viable alternatives. This paper delves
further into current order-based methods, examining their inefficiencies in
utilizing reward values and addressing misalignment issues. Building upon these
findings, we propose a novel \textbf{V}alue-based \textbf{C}ali\textbf{B}ration
(VCB) method to better align LLMs with human preferences. Experimental results
demonstrate that VCB surpasses existing alignment methods on AI assistant and
summarization datasets, providing impressive generalizability, robustness, and
stability in diverse settings.
| 2024 | Computation and Language |
Emotion Classification in Short English Texts using Deep Learning
Techniques | Detecting emotions in limited text datasets from under-resourced languages
presents a formidable obstacle, demanding specialized frameworks and
computational strategies. This study conducts a thorough examination of deep
learning techniques for discerning emotions in short English texts. Deep
learning approaches employ transfer learning and word embedding, notably BERT,
to attain superior accuracy. To evaluate these methods, we introduce the
"SmallEnglishEmotions" dataset, comprising 6372 varied short Persian texts
annotated with five primary emotion categories. Our experiments reveal that
transfer learning and BERT-based text embedding outperform alternative methods
in accurately categorizing the text in the dataset.
| 2024 | Computation and Language |
Text Understanding and Generation Using Transformer Models for
Intelligent E-commerce Recommendations | With the rapid development of artificial intelligence technology, Transformer-based
pre-trained models have become an important tool for large language
model (LLM) tasks. In the field of e-commerce, these models are especially
widely used, from text understanding to generating recommendation systems,
which provide powerful technical support for improving user experience and
optimizing service processes. This paper reviews the core application scenarios
of Transformer pre-trained models in e-commerce text understanding and
recommendation generation, including but not limited to automatic generation of
product descriptions, sentiment analysis of user comments, construction of
personalized recommendation system and automated processing of customer service
conversations. Through a detailed analysis of the model's working principle,
implementation process, and application effects in specific cases, this paper
emphasizes the unique advantages of pre-trained models in understanding complex
user intentions and improving the quality of recommendations. In addition, the
challenges and improvement directions for the future are also discussed, such
as how to further improve the generalization ability of the model, the ability
to handle large-scale data sets, and technical strategies to protect user
privacy. Ultimately, the paper points out that the application of Transformer-based
pre-trained models in e-commerce has not only driven technological
innovation but also brought substantial benefits to merchants and consumers;
looking forward, these models will continue to play a key role in
e-commerce and beyond.
| 2024 | Computation and Language |
Deep Learning Approaches for Improving Question Answering Systems in
Hepatocellular Carcinoma Research | In recent years, advancements in natural language processing (NLP) have been
fueled by deep learning techniques, particularly through the utilization of
powerful computing resources like GPUs and TPUs. Models such as BERT and GPT-3,
trained on vast amounts of data, have revolutionized language understanding and
generation. These pre-trained models serve as robust bases for various tasks
including semantic understanding, intelligent writing, and reasoning, paving
the way for a more generalized form of artificial intelligence. NLP, as a vital
application of AI, aims to bridge the gap between humans and computers through
natural language interaction. This paper delves into the current landscape and
future prospects of large-scale model-based NLP, focusing on the
question-answering systems within this domain. Practical cases and developments
in artificial intelligence-driven question-answering systems are analyzed to
foster further exploration and research in the realm of large-scale NLP.
| 2024 | Computation and Language |
EHRNoteQA: A Patient-Specific Question Answering Benchmark for
Evaluating Large Language Models in Clinical Settings | This study introduces EHRNoteQA, a novel patient-specific question answering
benchmark tailored for evaluating Large Language Models (LLMs) in clinical
environments. Based on the MIMIC-IV Electronic Health Record (EHR) database, a team of three
medical professionals has curated a dataset comprising 962 unique questions,
each linked to a specific patient's EHR clinical notes. What makes EHRNoteQA
distinct from existing EHR-based benchmarks is as follows: Firstly, it is the
first dataset to adopt a multiple-choice question answering format, a design
choice that effectively evaluates LLMs with reliable scores in the context of
automatic evaluation, compared to other formats. Secondly, it requires an
analysis of multiple clinical notes to answer a single question, reflecting the
complex nature of real-world clinical decision-making where clinicians review
extensive records of patient histories. Our comprehensive evaluation of various
large language models showed that their scores on EHRNoteQA correlate more
closely with their performance in addressing real-world medical questions
evaluated by clinicians than their scores from other LLM benchmarks. This
underscores the significance of EHRNoteQA in evaluating LLMs for medical
applications and highlights its crucial role in facilitating the integration of
LLMs into healthcare systems. The dataset will be made available to the public
under PhysioNet credential access, promoting further research in this vital
field.
| 2024 | Computation and Language |
Detecting Machine-Generated Texts by Multi-Population Aware Optimization
for Maximum Mean Discrepancy | Large language models (LLMs) such as ChatGPT have exhibited remarkable
performance in generating human-like texts. However, machine-generated texts
(MGTs) may carry critical risks, such as plagiarism issues, misleading
information, or hallucination issues. Therefore, it is very urgent and
important to detect MGTs in many situations. Unfortunately, it is challenging
to distinguish MGTs and human-written texts because the distributional
discrepancy between them is often very subtle due to the remarkable performance
of LLMs. In this paper, we seek to exploit \textit{maximum mean discrepancy}
(MMD) to address this issue in the sense that MMD can well identify
distributional discrepancies. However, directly training a detector with MMD
using diverse MGTs will incur a significantly increased variance of MMD since
MGTs may contain \textit{multiple text populations} due to various LLMs. This
will severely impair MMD's ability to measure the difference between two
samples. To tackle this, we propose a novel \textit{multi-population} aware
optimization method for MMD called MMD-MP, which can \textit{avoid variance
increases} and thus improve the stability to measure the distributional
discrepancy. Relying on MMD-MP, we develop two methods for paragraph-based and
sentence-based detection, respectively. Extensive experiments on various LLMs,
\eg, GPT2 and ChatGPT, show superior detection performance of our MMD-MP. The
source code is available at \url{https://github.com/ZSHsh98/MMD-MP}.
| 2024 | Computation and Language |
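
The two-sample statistic underlying this detector, kernel MMD, can be sketched in a few lines; the multi-population optimization that gives MMD-MP its stability is beyond this toy example.

```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))

def mmd_squared(x, y, sigma=1.0):
    """Biased empirical MMD^2 between samples x [n, d] and y [m, d] under an
    RBF kernel: the two-sample statistic whose variance MMD-MP stabilizes."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

# Toy usage: embeddings of human vs. machine text as two subtly shifted clouds.
human = torch.randn(200, 8)
machine = torch.randn(200, 8) + 0.3
print(mmd_squared(human, machine).item())  # larger than for two human samples
```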
LLMs with Chain-of-Thought Are Non-Causal Reasoners | This paper explores the role of the Chain of Thought (CoT) in the reasoning
of Large Language Models (LLMs). Despite its potential to improve task performance, our
analysis reveals a surprising frequency of correct answers following incorrect
CoTs and vice versa. We employ causal analysis to assess the cause-effect
relationship between CoTs/instructions and answers in LLMs, uncovering the
Structural Causal Model (SCM) that LLMs approximate. By comparing the implied
SCM with that of human reasoning, we highlight discrepancies between LLM and
human reasoning processes. We further examine the factors influencing the
causal structure of the implied SCM, revealing that in-context learning,
supervised fine-tuning, and reinforcement learning on human feedback
significantly impact the causal relations. We release the code and results at
https://github.com/StevenZHB/CoT_Causal_Analysis.
| 2024 | Computation and Language |
Say More with Less: Understanding Prompt Learning Behaviors through Gist
Compression | Large language models (LLMs) require lengthy prompts as the input context to
produce output aligned with user intentions, a process that incurs extra costs
during inference. In this paper, we propose the Gist COnditioned deCOding
(Gist-COCO) model, introducing a novel method for compressing prompts that
can also assist prompt interpretation and engineering. Gist-COCO employs an
encoder-decoder based language model and then incorporates an additional
encoder as a plugin module to compress prompts with inputs using gist tokens.
It finetunes the compression plugin module and uses the representations of gist
tokens to emulate the raw prompts in the vanilla language model. By verbalizing
the representations of gist tokens into gist prompts, the compression ability
of Gist-COCO can be generalized to different LLMs with high compression rates.
Our experiments demonstrate that Gist-COCO outperforms previous prompt
compression models in both passage and instruction compression tasks. Further
analysis on gist verbalization results suggests that our gist prompts serve
different functions in aiding language models. They may directly provide
potential answers, generate the chain-of-thought, or simply repeat the inputs.
All data and codes are available at https://github.com/OpenMatch/Gist-COCO .
| 2024 | Computation and Language |
How Large Language Models Encode Context Knowledge? A Layer-Wise Probing
Study | Previous work has showcased the intriguing capability of large language
models (LLMs) in retrieving facts and processing context knowledge. However,
only limited research exists on the layer-wise capability of LLMs to encode
knowledge, which challenges our understanding of their internal mechanisms. In
this paper, we make the first attempt to investigate the layer-wise
capability of LLMs through probing tasks. We leverage the powerful generative
capability of ChatGPT to construct probing datasets, providing diverse and
coherent evidence corresponding to various facts. We employ $\mathcal V$-usable
information as the validation metric to better reflect the capability in
encoding context knowledge across different layers. Our experiments on
conflicting and newly acquired knowledge show that LLMs: (1) prefer to encode
more context knowledge in the upper layers; (2) primarily encode context
knowledge within knowledge-related entity tokens at lower layers while
progressively expanding more knowledge within other tokens at upper layers; and
(3) gradually forget the earlier context knowledge retained within the
intermediate layers when provided with irrelevant evidence. Code is publicly
available at https://github.com/Jometeorie/probing_llama.
| 2024 | Computation and Language |
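
Layer-wise probing of the kind described above can be sketched with standard tooling. The paper scores layers with $\mathcal V$-usable information; plain probe accuracy is used below as a simpler stand-in, and the encoder, texts, and labels are toy assumptions.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def layer_features(texts):
    """Mean-pooled hidden states per layer: a list of [n_texts, dim] arrays."""
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = enc(**batch).hidden_states   # embeddings + one entry per layer
    return [h.mean(dim=1).numpy() for h in hidden]

# One probe per layer; the per-layer score traces where knowledge is encoded.
texts = ["paris is in france", "rome is in spain"] * 8   # deliberately tiny toy
labels = [1, 0] * 8
for i, feats in enumerate(layer_features(texts)):
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(f"layer {i:2d}: train accuracy = {probe.score(feats, labels):.2f}")
```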
Citation-Enhanced Generation for LLM-based Chatbots | Large language models (LLMs) exhibit powerful general intelligence across
diverse scenarios, including their integration into chatbots. However, a vital
challenge of LLM-based chatbots is that they may produce hallucinated content
in responses, which significantly limits their applicability. Various efforts
have been made to alleviate hallucination, such as retrieval augmented
generation and reinforcement learning with human feedback, but most of them
require additional training and data annotation. In this paper, we propose a
novel post-hoc Citation-Enhanced Generation (CEG) approach combined with
retrieval augmentation. Unlike previous studies that focus on preventing
hallucinations during generation, our method addresses this issue in a post-hoc
way. It incorporates a retrieval module to search for supporting documents
relevant to the generated content, and employs a natural language
inference-based citation generation module. When statements in the
generated content lack references, our model regenerates responses until
all statements are supported by citations. Note that our method is a
training-free plug-and-play plugin that is compatible with various LLMs. Experiments
on various hallucination-related datasets show our framework outperforms
state-of-the-art methods in both hallucination detection and response
regeneration on three benchmarks. Our codes and dataset will be publicly
available.
| 2024 | Computation and Language |
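
The post-hoc CEG loop can be sketched as a retrieval-plus-NLI verification cycle. The callables below are placeholders for the paper's retrieval, NLI-based citation generation, and regeneration modules.

```python
def citation_enhanced_generation(response, retrieve, entails, regenerate,
                                 max_rounds=3):
    """Post-hoc CEG loop (sketch): keep regenerating until every statement
    in the response is entailed by at least one retrieved document.

    retrieve(stmt)            -> list of candidate supporting documents
    entails(doc, stmt)        -> bool, an NLI entailment check
    regenerate(resp, missing) -> new response addressing unsupported claims
    """
    for _ in range(max_rounds):
        statements = [s.strip() for s in response.split(".") if s.strip()]
        unsupported = [s for s in statements
                       if not any(entails(d, s) for d in retrieve(s))]
        if not unsupported:          # every statement can carry a citation
            return response
        response = regenerate(response, unsupported)
    return response                  # best effort after max_rounds
```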
Training a Bilingual Language Model by Mapping Tokens onto a Shared
Character Space | We train a bilingual Arabic-Hebrew language model using a transliterated
version of Arabic texts in Hebrew, to ensure both languages are represented in
the same script. Given the morphological, structural similarities, and the
extensive number of cognates shared among Arabic and Hebrew, we assess the
performance of a language model that employs a unified script for both
languages, on machine translation which requires cross-lingual knowledge. The
results are promising: our model outperforms a contrasting model which keeps
the Arabic texts in the Arabic script, demonstrating the efficacy of the
transliteration step. Despite being trained on a dataset approximately 60%
smaller than that of other existing language models, our model appears to
deliver comparable performance in machine translation across both translation
directions.
| 2024 | Computation and Language |
Interpreting Predictive Probabilities: Model Confidence or Human Label
Variation? | With the rise of increasingly powerful and user-facing NLP systems, there is
growing interest in assessing whether they have a good representation of
uncertainty by evaluating the quality of their predictive distribution over
outcomes. We identify two main perspectives that drive starkly different
evaluation protocols. The first treats predictive probability as an indication
of model confidence; the second as an indication of human label variation. We
discuss their merits and limitations, and take the position that both are
crucial for trustworthy and fair NLP systems, but that exploiting a single
predictive distribution is limiting. We recommend tools and highlight exciting
directions towards models with disentangled representations of uncertainty
about predictions and uncertainty about human labels.
| 2024 | Computation and Language |
FuseChat: Knowledge Fusion of Chat Models | While training large language models (LLMs) from scratch can indeed lead to
models with distinct capabilities and strengths, this approach incurs
substantial costs and may lead to redundancy in competencies. An
alternative strategy is to combine existing LLMs into a more robust LLM,
thereby diminishing the necessity for expensive pre-training. However, due to
the diverse architectures of LLMs, direct parameter blending proves to be
infeasible. Recently, \textsc{FuseLLM} introduced the concept of knowledge
fusion to transfer the collective knowledge of multiple structurally varied
LLMs into a target LLM through lightweight continual training. In this report,
we extend the scalability and flexibility of the \textsc{FuseLLM} framework to
realize the fusion of chat LLMs, resulting in \textsc{FuseChat}.
\textsc{FuseChat} comprises two main stages. Firstly, we undertake knowledge
fusion for structurally and scale-varied source LLMs to derive multiple target
LLMs of identical structure and size via lightweight fine-tuning. Then, these
target LLMs are merged within the parameter space, wherein we propose a novel
method for determining the merging weights based on the variation ratio of
parameter matrices before and after fine-tuning. We validate our approach using
three prominent chat LLMs with diverse architectures and scales, namely
\texttt{NH2-Mixtral-8x7B}, \texttt{NH2-Solar-10.7B}, and
\texttt{OpenChat-3.5-7B}. Experimental results spanning various chat domains
demonstrate the superiority of \texttt{\textsc{FuseChat}-7B} across a broad
spectrum of chat LLMs at 7B and 34B scales, even surpassing \texttt{GPT-3.5
(March)} and approaching \texttt{Mixtral-8x7B-Instruct}. Our code, model
weights, and data are openly accessible at
\url{https://github.com/fanqiwan/FuseLLM}.
| 2024 | Computation and Language |
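
The merging step, weighting each target LLM by how much its parameters moved during fine-tuning, can be sketched as follows. Interpreting the variation ratio as a per-tensor norm ratio is our assumption; FuseChat's exact formula may differ.

```python
import torch

def variation_ratio_merge(base, finetuned, eps=1e-8):
    """Merge several fine-tuned models into one, weighting each per tensor by
    how far its parameters moved from the shared base model.

    base / finetuned: state dicts with identical keys; floating-point
    tensors are assumed. This norm-ratio weighting is one reading of
    FuseChat's variation-ratio rule, not necessarily its exact formula.
    """
    merged = {}
    for name, base_w in base.items():
        deltas = [m[name] - base_w for m in finetuned]
        ratios = torch.tensor([d.norm() / (base_w.norm() + eps) for d in deltas])
        weights = ratios / (ratios.sum() + eps)   # normalize per tensor
        merged[name] = base_w + sum(w * d for w, d in zip(weights, deltas))
    return merged
```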
InstructEdit: Instruction-based Knowledge Editing for Large Language
Models | Knowledge editing for large language models can offer an efficient solution
to alter a model's behavior without negatively impacting the overall
performance. However, the current approach encounters issues with limited
generalizability across tasks, necessitating one distinct editor for each task,
which significantly hinders broader application. To address this, we take
the first step to analyze the multi-task generalization issue in knowledge
editing. Specifically, we develop an instruction-based editing technique,
termed InstructEdit, which facilitates the editor's adaptation to various task
performances simultaneously using simple instructions. With only one unified
editor for each LLM, we empirically demonstrate that InstructEdit can improve
the editor's control, leading to an average 14.86% increase in Reliability in
the multi-task editing setting. Furthermore, experiments involving held-out unseen
tasks illustrate that InstructEdit consistently surpasses previous strong
baselines. To further investigate the underlying mechanisms of
instruction-based knowledge editing, we analyze the principal components of the
editing gradient directions, which unveils that instructions can help control
optimization direction with stronger OOD generalization. Code and datasets will
be available at https://github.com/zjunlp/EasyEdit.
| 2,024 | Computation and Language |
LSTPrompt: Large Language Models as Zero-Shot Time Series Forecasters by
Long-Short-Term Prompting | Time-series forecasting (TSF) finds broad applications in real-world
scenarios. Prompting off-the-shelf Large Language Models (LLMs) demonstrates
strong zero-shot TSF capabilities while preserving computational efficiency.
However, existing prompting methods oversimplify TSF as language next-token
predictions, overlooking its dynamic nature and lacking integration with
state-of-the-art prompt strategies such as Chain-of-Thought. Thus, we propose
LSTPrompt, a novel approach for prompting LLMs in zero-shot TSF tasks.
LSTPrompt decomposes TSF into short-term and long-term forecasting sub-tasks,
tailoring prompts to each. LSTPrompt guides LLMs to regularly reassess
forecasting mechanisms to enhance adaptability. Extensive evaluations
demonstrate consistently better performance of LSTPrompt than existing
prompting methods, and competitive results compared to foundation TSF models.
| 2,024 | Computation and Language |
What Generative Artificial Intelligence Means for Terminological
Definitions | This paper examines the impact of Generative Artificial Intelligence (GenAI)
on the creation and consumption of terminological definitions. GenAI tools like
ChatGPT present a mix of benefits and drawbacks compared to traditional
terminological resources. ChatGPT excels in providing context-specific meanings
in an interactive and customized fashion but faces challenges with accuracy.
Terminological definitions in recognized resources will likely survive because
of their reliability. From the point of view of the terminologist, tools like
ChatGPT enable AI-assisted terminography, including post-editing terminography,
as an approach blending AI efficiency with human expertise for faster
definition creation.
| 2,024 | Computation and Language |
PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | Supervised fine-tuning is the most common method to adapt large language
models (LLMs) to downstream tasks, but fully fine-tuning LLMs requires massive
computational resources. Recently, parameter-efficient fine-tuning (PEFT)
methods have been widely studied due to their cost-effectiveness. LoRA is one of
the most widely used methods, which assumes that the optimization process is
essentially low-dimensional. Although LoRA fine-tuning is effective, there is
still a performance gap compared to full fine-tuning, since its weight update
is limited to low-rank matrices. In order to break the low-rank bottleneck in
LoRA optimization, we propose PeriodicLoRA (PLoRA), which accumulates low-rank
update matrices multiple times to achieve a higher update rank. PLoRA has
multiple training stages. During each stage, we still update only the LoRA
weights. However, at the end of each stage, we unload the LoRA weights into the
backbone parameters and then reinitialize the LoRA states. Experimental results
show that PLoRA has stronger learning ability, up to approximately 1.8 times
that of LoRA, without increasing memory usage.
Further, we introduce a momentum-based unloading strategy for PLoRA to mitigate
the training instability.
| 2,024 | Computation and Language |
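The stage loop described above is easy to sketch. The toy layer below trains only the LoRA factors, then folds them into the frozen backbone at the end of each stage and reinitializes them. Class and hyperparameter names are our own, and the momentum-based unloading variant is omitted.

```python
import torch
import torch.nn as nn

class PLoRALinear(nn.Module):
    """Toy linear layer with a LoRA branch that is periodically
    'unloaded' into the frozen backbone weight -- a sketch of the
    PLoRA stage loop, not the authors' code."""

    def __init__(self, d_in, d_out, rank=4, alpha=8.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.02,
                                   requires_grad=False)  # frozen backbone
        self.scale = alpha / rank
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        return x @ (self.weight + self.scale * self.B @ self.A).T

    @torch.no_grad()
    def unload(self):
        # Fold the accumulated low-rank update into the backbone,
        # then restart the LoRA states for the next stage.
        self.weight += self.scale * self.B @ self.A
        nn.init.normal_(self.A, std=0.01)
        nn.init.zeros_(self.B)

layer = PLoRALinear(16, 16)
opt = torch.optim.SGD([layer.A, layer.B], lr=1e-2)
for stage in range(3):              # multiple training stages
    for _ in range(100):            # train only the LoRA weights
        x = torch.randn(8, 16)
        loss = layer(x).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    layer.unload()                  # end of stage: raise effective rank
```

Because each stage's low-rank product is summed into the backbone, the total update accumulated over stages can exceed the rank of any single LoRA pair.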
From Text to Transformation: A Comprehensive Review of Large Language
Models' Versatility | This groundbreaking study explores the expanse of Large Language Models
(LLMs), such as Generative Pre-Trained Transformer (GPT) and Bidirectional
Encoder Representations from Transformers (BERT) across varied domains ranging
from technology, finance, healthcare to education. Despite their established
prowess in Natural Language Processing (NLP), these LLMs have not been
systematically examined for their impact on domains such as fitness and
holistic well-being, urban planning, climate modelling, as well as disaster
management. This review paper, in addition to furnishing a comprehensive
analysis of the vast expanse and extent of LLMs' utility in diverse domains,
recognizes the research gaps and realms where the potential of LLMs is yet to
be harnessed. This study uncovers innovative ways in which LLMs can leave a
mark in fields like fitness and well-being, urban planning, climate
modelling, and disaster response, which could inspire future research and
applications in these avenues.
| 2,024 | Computation and Language |
DistALANER: Distantly Supervised Active Learning Augmented Named Entity
Recognition in the Open Source Software Ecosystem | This paper proposes a novel named entity recognition (NER) technique
specifically tailored for open-source software systems. Our approach aims
to address the scarcity of annotated software data by employing a comprehensive
two-step distantly supervised annotation process. This process strategically
leverages language heuristics, unique lookup tables, external knowledge
sources, and an active learning approach. By harnessing these powerful
techniques, we not only enhance model performance but also effectively mitigate
the limitations associated with cost and the scarcity of expert annotators. It
is noteworthy that our framework significantly outperforms the state-of-the-art
LLMs by a substantial margin. We also show the effectiveness of NER in the
downstream task of relation extraction.
| 2,024 | Computation and Language |
Hitting "Probe"rty with Non-Linearity, and More | Structural probes learn a linear transformation to find how dependency trees
are embedded in the hidden states of language models. This simple design may
not allow for full exploitation of the structure of the encoded information.
Hence, to investigate the structure of the encoded information to its full
extent, we incorporate non-linear structural probes. We reformulate the design
of non-linear structural probes introduced by White et al., making their design
simpler yet effective. We also design a visualization framework that lets us
qualitatively assess how strongly two words in a sentence are connected in the
predicted dependency tree. We use this technique to understand which non-linear
probe variant is good at encoding syntactical information. Additionally, we
also use it to qualitatively investigate the structure of dependency trees that
BERT encodes in each of its layers. We find that the radial basis function
(RBF) is a more effective non-linear probe for the BERT model than the linear
probe.
| 2,024 | Computation and Language |
Defending Large Language Models against Jailbreak Attacks via Semantic
Smoothing | Aligned large language models (LLMs) are vulnerable to jailbreaking attacks,
which bypass the safeguards of targeted LLMs and fool them into generating
objectionable content. While initial defenses show promise against token-based
threat models, no existing defense provides robustness against
semantic attacks while avoiding unfavorable trade-offs between robustness and
nominal performance. To meet this need, we propose SEMANTICSMOOTH, a
smoothing-based defense that aggregates the predictions of multiple
semantically transformed copies of a given input prompt. Experimental results
demonstrate that SEMANTICSMOOTH achieves state-of-the-art robustness against
GCG, PAIR, and AutoDAN attacks while maintaining strong nominal performance on
instruction following benchmarks such as InstructionFollowing and AlpacaEval.
The code will be publicly available at
https://github.com/UCSB-NLP-Chang/SemanticSmooth.
| 2,024 | Computation and Language |
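A minimal sketch of the smoothing idea follows: classify several semantically transformed copies of the prompt and aggregate by majority vote. The transformations and classifier here are trivial stubs standing in for LLM-backed components; the paper's actual transformations (e.g., paraphrasing) and aggregation rule may differ.

```python
from collections import Counter

def semantic_smooth(prompt, transform_fns, classify_fn):
    """Smoothing-based aggregation: run the classifier on several
    semantically transformed copies of the prompt and take a majority
    vote over the per-copy decisions."""
    votes = [classify_fn(t(prompt)) for t in transform_fns]
    label, _ = Counter(votes).most_common(1)[0]
    return label, votes

# Toy usage with stubbed transformations and a stubbed safety classifier.
transforms = [str.lower, str.title, lambda s: " ".join(s.split())]
classify = lambda s: "refuse" if "bomb" in s.lower() else "comply"
print(semantic_smooth("How do I bake bread?", transforms, classify))
```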
ASEM: Enhancing Empathy in Chatbot through Attention-based Sentiment and
Emotion Modeling | Effective feature representations play a critical role in enhancing the
performance of text generation models that rely on deep neural networks.
However, current approaches suffer from several drawbacks, such as the
inability to capture the deep semantics of language and sensitivity to minor
input variations, resulting in significant changes in the generated text. In
this paper, we present a novel solution to these challenges by employing a
mixture of experts, i.e., multiple encoders, to offer distinct perspectives on the
emotional state of the user's utterance while simultaneously enhancing
performance. We propose an end-to-end model architecture called ASEM that
performs emotion analysis on top of sentiment analysis for open-domain
chatbots, enabling the generation of empathetic responses that are fluent and
relevant. In contrast to traditional attention mechanisms, the proposed model
employs a specialized attention strategy that uniquely zeroes in on sentiment
and emotion nuances within the user's utterance. This ensures the generation of
context-rich representations tailored to the underlying emotional tone and
sentiment intricacies of the text. Our approach outperforms existing methods
for generating empathetic embeddings, providing empathetic and diverse
responses. The performance of our proposed model significantly exceeds that of
existing models, enhancing emotion detection accuracy by 6.2% and lexical
diversity by 1.4%.
| 2,024 | Computation and Language |
HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination
Tendency of LLMs | Hallucinations pose a significant challenge to the reliability and alignment
of Large Language Models (LLMs), limiting their widespread acceptance beyond
chatbot applications. Despite ongoing efforts, hallucinations remain a
prevalent challenge in LLMs. The detection of hallucinations itself is also a
formidable task, frequently requiring manual labeling or constrained
evaluations. This paper introduces an automated scalable framework that
combines benchmarking LLMs' hallucination tendencies with efficient
hallucination detection. We leverage LLMs to generate challenging tasks related
to hypothetical phenomena, subsequently employing them as agents for efficient
hallucination detection. The framework is domain-agnostic, allowing the use of
any language model for benchmark creation or evaluation in any domain. We
introduce the publicly available HypoTermQA Benchmarking Dataset, on which
state-of-the-art models' performance ranged between 3% and 11%, and evaluator
agents demonstrated a 6% error rate in hallucination prediction. The proposed
framework provides opportunities to test and improve LLMs. Additionally, it has
the potential to generate benchmarking datasets tailored to specific domains,
such as law, health, and finance.
| 2,024 | Computation and Language |
Topic-to-essay generation with knowledge-based content selection | The topic-to-essay generation task is a challenging natural language
generation task that aims to generate paragraph-level text with high semantic
coherence based on a given set of topic words. Previous work has focused on the
introduction of external knowledge, ignoring the insufficient diversity of the
generated text. To improve generation diversity, we propose a novel
copy mechanism model with a content selection module that integrates rich
semantic knowledge from the language model into the decoder. Furthermore, we
introduce the improved prefix tuning method to train the model, enabling it to
adapt to varying input complexities. In addition, we have contributed a new
Chinese dataset for TEG tasks. Experimental results demonstrate that the
proposed model can improve the generated text diversity by 35\% to 59\%
compared to the state-of-the-art method, while maintaining a high level of
topic consistency.
| 2,024 | Computation and Language |
UniRetriever: Multi-task Candidates Selection for Various
Context-Adaptive Conversational Retrieval | Conversational retrieval refers to an information retrieval system that
operates in an iterative and interactive manner, requiring the retrieval of
various external resources, such as persona, knowledge, and even response, to
effectively engage with the user and successfully complete the dialogue.
However, most previous work trained independent retrievers for each specific
resource, resulting in sub-optimal performance and low efficiency. Thus, we
propose a multi-task framework that functions as a universal retriever for three
dominant retrieval tasks during the conversation: persona selection, knowledge
selection, and response selection. To this end, we design a dual-encoder
architecture consisting of a context-adaptive dialogue encoder and a candidate
encoder, aiming to attend to the relevant context from the long dialogue and
retrieve suitable candidates via a simple dot product. Furthermore, we introduce
two loss constraints to capture the subtle relationship between dialogue
context and different candidates by regarding historically selected candidates
as hard negatives. Extensive experiments and analysis establish
state-of-the-art retrieval quality both within and outside its training domain,
revealing the promising potential and generalization capability of our model to
serve as a universal retriever for different candidate selection tasks
simultaneously.
| 2,024 | Computation and Language |
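The dual-encoder scoring and the hard-negative contrastive objective can be illustrated as below; encoder internals are abstracted into fixed vectors, and the loss form is our assumption based on the abstract, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def retrieve(context_vec, candidate_vecs, k=3):
    """Dot-product retrieval shared by all three selection tasks:
    score every candidate (persona / knowledge / response) against
    the dialogue-context embedding and return the top-k indices."""
    scores = candidate_vecs @ context_vec          # (N,) dot products
    return scores.topk(k).indices

def contrastive_loss(context_vec, cand_vecs, pos_idx, hard_neg_idx):
    """In-batch contrastive loss treating historically selected
    candidates as hard negatives (details assumed)."""
    scores = cand_vecs @ context_vec
    logits = scores[torch.tensor([pos_idx] + hard_neg_idx)]
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))

ctx = F.normalize(torch.randn(128), dim=0)
cands = F.normalize(torch.randn(50, 128), dim=1)
print(retrieve(ctx, cands))
print(contrastive_loss(ctx, cands, pos_idx=4, hard_neg_idx=[7, 9]).item())
```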
PerLTQA: A Personal Long-Term Memory Dataset for Memory Classification,
Retrieval, and Synthesis in Question Answering | Long-term memory plays a critical role in personal interaction, since
long-term memory can better leverage world knowledge, historical information,
and preferences in dialogues. Our research introduces PerLTQA, an innovative QA
dataset that combines semantic and episodic memories, including world
knowledge, profiles, social relationships, events, and dialogues. This dataset
is collected to investigate the use of personalized memories, focusing on
social interactions and events in the QA task. PerLTQA features two types of
memory and a comprehensive benchmark of 8,593 questions for 30 characters,
facilitating the exploration and application of personalized memories in Large
Language Models (LLMs). Based on PerLTQA, we propose a novel framework for
memory integration and generation, consisting of three main components: Memory
Classification, Memory Retrieval, and Memory Synthesis. We evaluate this
framework using five LLMs and three retrievers. Experimental results
demonstrate that BERT-based classification models significantly outperform LLMs
such as ChatGLM3 and ChatGPT in the memory classification task. Furthermore,
our study highlights the importance of effective memory integration in the QA
task.
| 2,024 | Computation and Language |
Cross-domain Chinese Sentence Pattern Parsing | Sentence Pattern Structure (SPS) parsing is a syntactic analysis method
primarily employed in language teaching. Existing SPS parsers rely heavily on
textbook corpora for training, lacking cross-domain capability. To overcome this
constraint, this paper proposes an innovative approach leveraging large
language models (LLMs) within a self-training framework. Partial syntactic
rules from a source domain are combined with target domain sentences to
dynamically generate training data, enhancing the adaptability of the parser to
diverse domains. Experiments conducted on textbook and news domains demonstrate
the effectiveness of the proposed method, outperforming rule-based baselines by
1.68 points on F1 metrics.
| 2,024 | Computation and Language |
Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based
Question Answering | Open-ended question answering requires models to find appropriate evidence to
form well-reasoned, comprehensive and helpful answers. In practical
applications, models also need to engage in extended discussions on potential
scenarios closely relevant to the question. With the augmentation of a retrieval
module, open-source Large Language Models (LLMs) can produce coherent answers
often with different focuses, but are still sub-optimal in terms of reliable
evidence selection and in-depth question analysis. In this paper, we propose a
novel Chain-of-Discussion framework to leverage the synergy among multiple
open-source LLMs aiming to provide \textbf{more correct} and \textbf{more
comprehensive} answers for open-ended QA, although they are not strong enough
individually. Our experiments show that discussions among multiple LLMs play a
vital role in enhancing the quality of answers. We release our data and code at
\url{https://github.com/kobayashikanna01/Chain-of-Discussion}.
| 2,024 | Computation and Language |
Data-free Weight Compress and Denoise for Large Language Models | Large Language Models (LLMs) are reshaping the research landscape in
artificial intelligence, particularly as model parameters scale up
significantly, unlocking remarkable capabilities across various domains.
Nevertheless, the scalability of model parameters faces constraints due to
limitations in GPU memory and computational speed. To address these
constraints, various weight compression methods have emerged, such as Pruning
and Quantization. Given the low-rank nature of weight matrices in language
models, the reduction of weights through matrix decomposition undoubtedly holds
significant potential and promise. In this paper, drawing upon the intrinsic
structure of LLMs, we propose a novel approach termed Data-free Joint Rank-k
Approximation for compressing the parameter matrices. Notably, our method
requires no additional corpus while remaining orthogonal to pruning and
quantization methods. We achieve a model pruning of 80% parameters while
retaining 93.43% of the original performance without any calibration data.
Additionally, we explore the fundamental properties of the weight matrices of
LLMs that have undergone Rank-k Approximation and conduct comprehensive experiments to
elucidate our hypothesis.
| 2,024 | Computation and Language |
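The core primitive here, a rank-k approximation of one weight matrix via truncated SVD, is easy to demonstrate. The joint, data-free machinery described above is omitted, so treat this as background illustration rather than the paper's method.

```python
import torch

def rank_k_approx(W, k):
    """Data-free rank-k compression of a weight matrix via truncated SVD."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    # Store two thin factors instead of the full matrix:
    # (m*k + k*n) numbers instead of m*n.
    A = U[:, :k] * S[:k]        # (m, k)
    B = Vh[:k, :]               # (k, n)
    return A, B

W = torch.randn(512, 512)
A, B = rank_k_approx(W, k=64)
err = (W - A @ B).norm() / W.norm()
print(f"relative reconstruction error: {err:.3f}")
print(f"compression ratio: {(A.numel() + B.numel()) / W.numel():.2f}")
```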
CodeS: Towards Building Open-source Language Models for Text-to-SQL | Language models have shown promising performance on the task of translating
natural language questions into SQL queries (Text-to-SQL). However, most of the
state-of-the-art (SOTA) approaches rely on powerful yet closed-source large
language models (LLMs), such as ChatGPT and GPT-4, which may have the
limitations of unclear model architectures, data privacy risks, and expensive
inference overheads. To address the limitations, we introduce CodeS, a series
of pre-trained language models with parameters ranging from 1B to 15B,
specifically designed for the text-to-SQL task. CodeS is a fully open-source
language model, which achieves superior accuracy with much smaller parameter
sizes. This paper studies the research challenges in building CodeS. To enhance
the SQL generation abilities of CodeS, we adopt an incremental pre-training
approach using a specifically curated SQL-centric corpus. Based on this, we
address the challenges of schema linking and rapid domain adaptation through
strategic prompt construction and a bi-directional data augmentation technique.
We conduct comprehensive evaluations on multiple datasets, including the widely
used Spider benchmark, the newly released BIRD benchmark, robustness-diagnostic
benchmarks such as Spider-DK, Spider-Syn, Spider-Realistic, and Dr.Spider, as
well as two real-world datasets created for financial and academic
applications. The experimental results show that our CodeS achieves new SOTA
accuracy and robustness on nearly all challenging text-to-SQL benchmarks.
| 2,024 | Computation and Language |
MathGenie: Generating Synthetic Data with Question Back-translation for
Enhancing Mathematical Reasoning of LLMs | Large language models (LLMs) have exhibited great potential in mathematical
reasoning. However, there remains a performance gap in this area between
existing open-source models and closed-source models such as GPT-4. In this
paper, we introduce MathGenie, a novel method for generating diverse and
reliable math problems from a small-scale problem-solution dataset (denoted as
seed data). We augment the ground-truth solutions of our seed data and train a
back-translation model to translate the augmented solutions back into new
questions. Subsequently, we generate code-integrated solutions for the new
questions. To ensure the correctness of the code-integrated solutions, we
employ a rationale-based strategy for solution verification. Various pretrained
models, ranging from 7B to 70B, are trained on the newly curated data to test
the effectiveness of the proposed augmentation technique, resulting in a family
of models known as MathGenieLM. These models consistently outperform previous
open-source models across five representative mathematical reasoning datasets,
achieving state-of-the-art performance. In particular, MathGenieLM-InternLM2
achieves an accuracy of 87.7% on GSM8K and 55.7% on MATH, securing the best
overall score among open-source language models.
| 2,024 | Computation and Language |
Layer-wise Regularized Dropout for Neural Language Models | In the various pre-trained neural language models popular today,
dropout is an indispensable regularization technique. To solve the
inconsistency between training and inference caused by the randomness of
dropout, some studies use consistency training to regularize dropout at the
output layer. In this paper, we propose a novel Layer-wise Regularized Dropout
(LR-Drop), which is specially designed for Transformer-based Language models.
Specifically, LR-Drop layer-wise regularizes each Transformer layer using the
consistency training strategy. Each training sample passes through the two
siamese sub-models sampled by dropout, and then LR-Drop forces the hidden
states, multi-head attention matrices, and output distribution of the two
siamese sub-models to be consistent. The proposed LR-Drop can be regarded as a
"self-distillation" framework, in which each sub-model generated by dropout is
the other's "teacher" model and "student" model. Through extensive experiments
on 8 natural language understanding datasets, 6 neural machine translation
datasets, and 1 abstractive summarization dataset (a total of 15 datasets), we
show that LR-Drop achieves superior performances, including state-of-the-art
results.
| 2,024 | Computation and Language |
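A compact sketch of the layer-wise consistency training described above: the same batch is passed twice through the dropout-bearing model, and disagreement between the two "siamese" outputs and hidden states is penalized. The toy model, the loss weighting, and the omission of the attention-matrix term are our simplifications.

```python
import torch
import torch.nn.functional as F

def symmetric_kl(p_logits, q_logits):
    p = F.log_softmax(p_logits, dim=-1)
    q = F.log_softmax(q_logits, dim=-1)
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))

def lr_drop_loss(model, batch, task_loss_fn, alpha=1.0):
    """Run the batch twice (dropout yields two sub-models) and add a
    consistency penalty over output distributions and hidden states.
    Assumes `model` returns (logits, hidden_states)."""
    logits1, hiddens1 = model(batch)
    logits2, hiddens2 = model(batch)
    task = 0.5 * (task_loss_fn(logits1) + task_loss_fn(logits2))
    consistency = symmetric_kl(logits1, logits2)
    consistency = consistency + sum(F.mse_loss(h1, h2)
                                    for h1, h2 in zip(hiddens1, hiddens2))
    return task + alpha * consistency

class ToyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(16, 16)
        self.drop = torch.nn.Dropout(0.1)
        self.out = torch.nn.Linear(16, 4)

    def forward(self, x):
        h = self.drop(torch.relu(self.l1(x)))
        return self.out(h), [h]

model, x = ToyModel(), torch.randn(8, 16)
loss = lr_drop_loss(model, x, lambda lg: lg.pow(2).mean())
loss.backward()
```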
LLM Inference Unveiled: Survey and Roofline Model Insights | The field of efficient Large Language Model (LLM) inference is rapidly
evolving, presenting a unique blend of opportunities and challenges. Although
the field has expanded and is vibrant, there hasn't been a concise framework
that analyzes the various methods of LLM Inference to provide a clear
understanding of this domain. Our survey stands out from traditional literature
reviews by not only summarizing the current state of research but also by
introducing a framework based on the roofline model for systematic analysis of LLM
inference techniques. This framework identifies the bottlenecks when deploying
LLMs on hardware devices and provides a clear understanding of practical
problems, such as why LLMs are memory-bound, how much memory and computation
they need, and how to choose the right hardware. We systematically collate the
latest advancements in efficient LLM inference, covering crucial areas such as
model compression (e.g., Knowledge Distillation and Quantization), algorithm
improvements (e.g., Early Exit and Mixture-of-Expert), and both hardware and
system-level enhancements. Our survey stands out by analyzing these methods
with the roofline model, helping us understand their impact on memory access and
computation. This distinctive approach not only showcases the current research
landscape but also delivers valuable insights for practical implementation,
positioning our work as an indispensable resource for researchers new to the
field as well as for those seeking to deepen their understanding of efficient
LLM deployment. The analysis tool, LLM-Viewer, is open-sourced.
| 2,024 | Computation and Language |
Where Do We Go from Here? Multi-scale Allocentric Relational Inference
from Natural Spatial Descriptions | When communicating routes in natural language, the concept of {\em acquired
spatial knowledge} is crucial for geographic information retrieval (GIR) and in
spatial cognitive research. However, NLP navigation studies often overlook the
impact of such acquired knowledge on textual descriptions. Current navigation
studies concentrate on egocentric local descriptions (e.g., `it will be on your
right') that require reasoning over the agent's local perception. These
instructions are typically given as a sequence of steps, with each action-step
explicitly mentioning and being followed by a landmark that the agent can use
to verify they are on the right path (e.g., `turn right and then you will
see...'). In contrast, descriptions based on knowledge acquired through a map
provide a complete view of the environment and capture its overall structure.
These instructions (e.g., `it is south of Central Park and a block north of a
police station') are typically non-sequential, contain allocentric relations,
with multiple spatial relations and implicit actions, without any explicit
verification. This paper introduces the Rendezvous (RVS) task and dataset,
which includes 10,404 examples of English geospatial instructions for reaching
a target location using map-knowledge. Our analysis reveals that RVS exhibits a
richer use of spatial allocentric relations, and requires resolving more
spatial relations simultaneously compared to previous text-based navigation
benchmarks.
| 2,024 | Computation and Language |
Unraveling Babel: Exploring Multilingual Activation Patterns within
Large Language Models | Recently, large language models (LLMs) have achieved tremendous breakthroughs
in the field of language processing, yet their mechanisms in processing
multiple languages remain poorly understood. Therefore, in this work we study the
multilingual activation patterns of LLMs. By transforming the original LLMs
into a Mixture of Experts (MoE) architecture, we analyze
the expert activation patterns when processing various languages and
demonstrate the connections of these activation patterns at the level of
language families. We discover the existence of non-language-specific neurons
as well as language-specific activation neurons. Further exploration even
showcases that merely leveraging high-frequency activation neurons can
accelerate inference while maintaining comparable performance. These findings
shed light on the LLMs' multilingual processing mechanism, and are of
significant importance in guiding the multilingual training and model pruning
of LLMs.
| 2,024 | Computation and Language |
Improving LLM-based Machine Translation with Systematic Self-Correction | Large Language Models (LLMs) have achieved impressive results in Machine
Translation (MT). However, careful evaluations by humans reveal that the
translations produced by LLMs still contain multiple errors. Importantly,
feeding back such error information into the LLMs can lead to self-correction
and result in improved translation performance. Motivated by these insights, we
introduce a systematic LLM-based self-correcting translation framework, named
TER, which stands for Translate, Estimate, and Refine, marking a significant
step forward in this direction. Our findings demonstrate that 1) our
self-correction framework successfully assists LLMs in improving their
translation quality across a wide range of languages, whether it's from
high-resource languages to low-resource ones or whether it's English-centric or
centered around other languages; 2) TER exhibits superior systematicity and
interpretability compared to previous methods; 3) different estimation
strategies yield varied impacts on AI feedback, directly affecting the
effectiveness of the final corrections. We further compare different LLMs and
conduct various experiments involving self-correction and cross-model
correction to investigate the potential relationship between the translation
and evaluation capabilities of LLMs. Our code and data are available at
https://github.com/fzp0424/self_correct_mt
| 2,024 | Computation and Language |
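The Translate-Estimate-Refine loop can be sketched with a placeholder LLM callable; the prompts and stopping criterion below are illustrative assumptions, not the released prompts.

```python
def ter_translate(src_text, src_lang, tgt_lang, llm, max_rounds=2):
    """Sketch of a Translate -> Estimate -> Refine loop. `llm` is any
    prompt -> text callable; the real system's prompts differ."""
    draft = llm(f"Translate from {src_lang} to {tgt_lang}:\n{src_text}")
    for _ in range(max_rounds):
        # Estimate: ask for error feedback on the current draft.
        feedback = llm("List the errors in this translation, or say NO ERRORS.\n"
                       f"Source: {src_text}\nTranslation: {draft}")
        if "NO ERRORS" in feedback.upper():
            break
        # Refine: rewrite the draft using the feedback.
        draft = llm("Refine the translation using the feedback.\n"
                    f"Source: {src_text}\nTranslation: {draft}\n"
                    f"Feedback: {feedback}")
    return draft

# Toy usage with canned responses in place of a real model.
canned = iter(["Bonjour le monde", "NO ERRORS"])
print(ter_translate("Hello world", "English", "French", lambda _: next(canned)))
```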
Immunization against harmful fine-tuning attacks | Approaches to aligning large language models (LLMs) with human values have
focused on correcting misalignment that emerges from pretraining. However, this
focus overlooks another source of misalignment: bad actors might purposely
fine-tune LLMs to achieve harmful goals. In this paper, we present an emerging
threat model that has arisen from alignment circumvention and fine-tuning
attacks. However, previous works lack a clear presentation of the
conditions for effective defence. We propose a set of conditions for effective
defence against harmful fine-tuning in LLMs called "Immunization conditions,"
which help us understand how we would construct and measure future defences.
Using this formal framework for defence, we offer a synthesis of different
research directions that might be pursued to prevent harmful fine-tuning
attacks, and provide a demonstration of how to use these conditions
experimentally, showing early results of using an adversarial loss to immunize
Llama2-7b-chat.
| 2,024 | Computation and Language |
MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in
Intellectual Property | Large language models (LLMs) have demonstrated impressive performance in
various natural language processing (NLP) tasks. However, there is limited
understanding of how well LLMs perform in specific domains (e.g., the
intellectual property (IP) domain). In this paper, we contribute a new
benchmark, the first Multilingual-oriented quiZ on Intellectual Property
(MoZIP), for the evaluation of LLMs in the IP domain. The MoZIP benchmark
includes three challenging tasks: IP multiple-choice quiz (IPQuiz), IP question
answering (IPQA), and patent matching (PatentMatch). In addition, we also
develop a new IP-oriented multilingual large language model (called MoZi),
which is a BLOOMZ-based model that has been supervised fine-tuned with
multilingual IP-related text data. We evaluate our proposed MoZi model and four
well-known LLMs (i.e., BLOOMZ, BELLE, ChatGLM and ChatGPT) on the MoZIP
benchmark. Experimental results demonstrate that MoZi outperforms BLOOMZ, BELLE
and ChatGLM by a noticeable margin, while scoring lower than
ChatGPT. Notably, the performance of current LLMs on the MoZIP benchmark has
much room for improvement, and even the most powerful ChatGPT does not reach
the passing level. Our source code, data, and models are available at
\url{https://github.com/AI-for-Science/MoZi}.
| 2,024 | Computation and Language |
From RAGs to riches: Using large language models to write documents for
clinical trials | Clinical trials require numerous documents to be written -- protocols,
consent forms, clinical study reports and others. Large language models (LLMs)
offer the potential to rapidly generate first versions of these documents,
however, there are concerns about the quality of their output. Here we report an
evaluation of LLMs in generating parts of one such document, clinical trial
protocols. We find that an off-the-shelf LLM delivers reasonable results,
especially when assessing content relevance and the correct use of terminology.
However, deficiencies remain, specifically in clinical thinking and logic, and
appropriate use of references. To improve performance, we used
retrieval-augmented generation (RAG) to prompt an LLM with accurate up-to-date
information. As a result of using RAG, the writing quality of the LLM improves
substantially, which has implications for the practical usability of LLMs in
clinical trial-related writing.
| 2,024 | Computation and Language |
Predicting Sustainable Development Goals Using Course Descriptions --
from LLMs to Conventional Foundation Models | We present our work on predicting United Nations sustainable development
goals (SDG) for university courses. We use an LLM named PaLM 2 to generate
training data given a noisy human-authored course description as input.
We use this data to train several different smaller language models to predict
SDGs for university courses. This work contributes to better university level
adaptation of SDGs. The best performing model in our experiments was BART with
an F1-score of 0.786.
| 2,024 | Computation and Language |
RoCoIns: Enhancing Robustness of Large Language Models through
Code-Style Instructions | Large Language Models (LLMs) have showcased remarkable capabilities in
following human instructions. However, recent studies have raised concerns
about the robustness of LLMs when prompted with instructions combining textual
adversarial samples. In this paper, drawing inspiration from recent works that
LLMs are sensitive to the design of the instructions, we utilize instructions
in code style, which are more structural and less ambiguous, to replace
typical natural language instructions. Through this conversion, we provide
LLMs with more precise instructions and strengthen the robustness of LLMs.
Moreover, under few-shot scenarios, we propose a novel method to compose
in-context demonstrations using both clean and adversarial samples
(\textit{adversarial context method}) to further boost the robustness of the
LLMs. Experiments on eight robustness datasets show that our method
consistently outperforms prompting LLMs with natural language instructions. For
example, with gpt-3.5-turbo, our method achieves an improvement of 5.68\% in
test set accuracy and a reduction of 5.66 points in Attack Success Rate (ASR).
| 2,024 | Computation and Language |
Language-Specific Neurons: The Key to Multilingual Capabilities in Large
Language Models | Large language models (LLMs) demonstrate remarkable multilingual capabilities
without being pre-trained on specially curated multilingual parallel corpora.
It remains a challenging problem to explain the underlying mechanisms by which
LLMs process multilingual texts. In this paper, we delve into the composition
of Transformer architectures in LLMs to pinpoint language-specific regions.
Specifically, we propose a novel detection method, language activation probability
entropy (LAPE), to identify language-specific neurons within LLMs. Based on
LAPE, we conduct comprehensive experiments on two representative LLMs, namely
LLaMA-2 and BLOOM. Our findings indicate that LLMs' proficiency in processing a
particular language is predominantly due to a small subset of neurons,
primarily situated in the models' top and bottom layers. Furthermore, we
showcase the feasibility to "steer" the output language of LLMs by selectively
activating or deactivating language-specific neurons. Our research provides
important evidence to the understanding and exploration of the multilingual
capabilities of LLMs.
| 2,024 | Computation and Language |
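The entropy computation behind LAPE can be illustrated directly: a neuron whose activation probability mass concentrates on one language gets low entropy and is flagged as language-specific. The normalization details below are our assumption.

```python
import torch

def lape_scores(act_probs):
    """Language Activation Probability Entropy, per the abstract:
    `act_probs[i, l]` is the empirical probability that neuron i is
    active on language-l text. A neuron firing for few languages has
    a low-entropy distribution."""
    p = act_probs / act_probs.sum(dim=1, keepdim=True).clamp_min(1e-9)
    entropy = -(p * p.clamp_min(1e-9).log()).sum(dim=1)
    return entropy  # low entropy -> candidate language-specific neuron

probs = torch.tensor([[0.90, 0.02, 0.01],   # fires mostly for language 0
                      [0.30, 0.30, 0.30]])  # language-agnostic
print(lape_scores(probs))  # the first neuron gets much lower entropy
```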
ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable
Safety Detectors | The safety of Large Language Models (LLMs) has gained increasing attention in
recent years, but there is still no comprehensive approach for detecting
safety issues within LLMs' responses in an aligned, customizable and
explainable manner. In this paper, we propose ShieldLM, an LLM-based safety
detector, which aligns with general human safety standards, supports
customizable detection rules, and provides explanations for its decisions. To
train ShieldLM, we compile a large bilingual dataset comprising 14,387
query-response pairs, annotating the safety of responses based on various
safety standards. Through extensive experiments, we demonstrate that ShieldLM
surpasses strong baselines across four test sets, showcasing remarkable
customizability and explainability. Besides performing well on standard
detection datasets, ShieldLM has also been shown to be effective in real-world
situations as a safety evaluator for advanced LLMs. We release ShieldLM at
\url{https://github.com/thu-coai/ShieldLM} to support accurate and explainable
safety detection under various safety standards, contributing to the ongoing
efforts to enhance the safety of LLMs.
| 2,024 | Computation and Language |
RetrievalQA: Assessing Adaptive Retrieval-Augmented Generation for
Short-form Open-Domain Question Answering | Adaptive retrieval-augmented generation (ARAG) aims to dynamically determine
the necessity of retrieval for queries instead of retrieving indiscriminately
to enhance the efficiency and relevance of the sourced information. However,
previous works largely overlook the evaluation of ARAG approaches, leading to
their effectiveness being understudied. This work presents a benchmark,
RetrievalQA, comprising 1,271 short-form questions covering new world and
long-tail knowledge. The knowledge necessary to answer the questions is absent
from LLMs; therefore, external information must be retrieved to answer
correctly. This makes RetrievalQA a suitable testbed to evaluate existing ARAG
methods. We observe that calibration-based methods heavily rely on threshold
tuning, while vanilla prompting is inadequate for guiding LLMs to make reliable
retrieval decisions. Based on our findings, we propose Time-Aware Adaptive
Retrieval (TA-ARE), a simple yet effective method that helps LLMs assess the
necessity of retrieval without calibration or additional training. The dataset
and code will be available at \url{https://github.com/hyintell/RetrievalQA}
| 2,024 | Computation and Language |
ID-XCB: Data-independent Debiasing for Fair and Accurate
Transformer-based Cyberbullying Detection | Swear words are a common proxy to collect datasets with cyberbullying
incidents. Our focus is on measuring and mitigating biases derived from
spurious associations between swear words and incidents occurring as a result
of such data collection strategies. After demonstrating and quantifying these
biases, we introduce ID-XCB, the first data-independent debiasing technique
that combines adversarial training, bias constraints, and a debiasing
fine-tuning approach aimed at alleviating model attention to bias-inducing words without
impacting overall model performance. We explore ID-XCB on two popular
session-based cyberbullying datasets along with comprehensive ablation and
generalisation studies. We show that ID-XCB learns robust cyberbullying
detection capabilities while mitigating biases, outperforming state-of-the-art
debiasing methods in both performance and bias mitigation. Our quantitative and
qualitative analyses demonstrate its generalisability to unseen data.
| 2,024 | Computation and Language |
Defending LLMs against Jailbreaking Attacks via Backtranslation | Although many large language models (LLMs) have been trained to refuse
harmful requests, they are still vulnerable to jailbreaking attacks, which
rewrite the original prompt to conceal its harmful intent. In this paper, we
propose a new method for defending LLMs against jailbreaking attacks by
``backtranslation''. Specifically, given an initial response generated by the
target LLM from an input prompt, our backtranslation prompts a language model
to infer an input prompt that can lead to the response. The inferred prompt is
called the backtranslated prompt which tends to reveal the actual intent of the
original prompt, since it is generated based on the LLM's response and is not
directly manipulated by the attacker. We then run the target LLM again on the
backtranslated prompt, and we refuse the original prompt if the model refuses
the backtranslated prompt. We explain that the proposed defense provides
several benefits on its effectiveness and efficiency. We empirically
demonstrate that our defense significantly outperforms the baselines, in the
cases that are hard for the baselines, and our defense also has little impact
on the generation quality for benign input prompts.
| 2,024 | Computation and Language |
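The defense protocol reduces to a few calls. The sketch below uses stubbed models; the refusal detector and prompts are illustrative assumptions rather than the paper's exact components.

```python
def backtranslation_defense(prompt, target_llm, infer_llm, refused):
    """Run the target LLM; infer ('backtranslate') a prompt from its
    response; refuse the original prompt if the model refuses the
    backtranslated one."""
    response = target_llm(prompt)
    if refused(response):
        return response                      # already refused
    backtranslated = infer_llm(
        "Guess the user prompt that produced this response:\n" + response)
    if refused(target_llm(backtranslated)):
        return "I cannot help with that."    # refuse the original prompt
    return response

# Toy usage: the obfuscated prompt slips past the target model, but the
# backtranslated prompt reveals the intent and triggers a refusal.
refused = lambda r: r.startswith("I cannot")
target = lambda p: "I cannot help." if "bomb" in p else "Sure, here is how..."
infer = lambda _: "How do I build a bomb?"
print(backtranslation_defense("How do I build a b0mb?", target, infer, refused))
```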
Unveiling Vulnerability of Self-Attention | Pre-trained language models (PLMs) are shown to be vulnerable to minor word
changes, which poses a big threat to real-world systems. While previous studies
directly focus on manipulating word inputs, they are limited by their means of
generating adversarial samples, lacking generalization to versatile real-world
attacks. This paper studies the basic structure of transformer-based PLMs, the
self-attention (SA) mechanism. (1) We propose a powerful perturbation technique
\textit{HackAttend}, which perturbs the attention scores within the SA matrices
via meticulously crafted attention masks. We show that state-of-the-art PLMs
are heavily vulnerable: minor attention perturbations $(1\%)$ can
produce a very high attack success rate $(98\%)$. Our paper expands the
conventional text attack of word perturbations to more general structural
perturbations. (2) We introduce \textit{S-Attend}, a novel smoothing technique
that effectively makes SA robust via structural perturbations. We empirically
demonstrate that this simple yet effective technique achieves robust
performance on par with adversarial training when facing various text
attackers. Code is publicly available at \url{github.com/liongkj/HackAttend}.
| 2,024 | Computation and Language |
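To illustrate the structural perturbation itself (not the adversarial search), here is a sketch that masks a random fraction of attention-score entries before the softmax; HackAttend crafts these masks meticulously rather than randomly.

```python
import torch

def perturb_attention(scores, frac=0.05, seed=0):
    """Mask a small fraction of entries in a self-attention score
    matrix before the softmax -- the mechanism behind structural
    perturbations, with random rather than crafted masks."""
    g = torch.Generator().manual_seed(seed)
    mask = torch.rand(scores.shape, generator=g) < frac
    perturbed = scores.masked_fill(mask, float("-inf"))
    # (a fully masked row would yield NaNs; frac is kept small here)
    return torch.softmax(perturbed, dim=-1)

scores = torch.randn(8, 8)           # one head, 8 tokens
attn = perturb_attention(scores)
print(attn.sum(dim=-1))              # rows still normalize to 1
```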
mEdIT: Multilingual Text Editing via Instruction Tuning | We introduce mEdIT, a multi-lingual extension to CoEdIT -- the recent
state-of-the-art text editing models for writing assistance. mEdIT models are
trained by fine-tuning multilingual, large pre-trained language models (LLMs)
via instruction tuning. They are designed to take instructions from the user
specifying the attributes of the desired text in the form of natural language
instructions, such as Grammatik korrigieren (German) or Parafrasee la oraci\'on
(Spanish). We build mEdIT by curating data from multiple publicly available
human-annotated text editing datasets for three text editing tasks (Grammatical
Error Correction (GEC), Text Simplification, and Paraphrasing) across diverse
languages belonging to six different language families. We detail the design
and training of mEdIT models and demonstrate their strong performance on many
multi-lingual text editing benchmarks against other multilingual LLMs. We also
find that mEdIT generalizes effectively to new languages over multilingual
baselines. We publicly release our data, code, and trained models at
https://github.com/vipulraheja/medit.
| 2,024 | Computation and Language |
LLMArena: Assessing Capabilities of Large Language Models in Dynamic
Multi-Agent Environments | Recent advancements in large language models (LLMs) have revealed their
potential for achieving autonomous agents possessing human-level intelligence.
However, existing benchmarks for evaluating LLM Agents either use static
datasets, potentially leading to data leakage, or focus only on single-agent
scenarios, overlooking the complexities of multi-agent interactions. There is a
lack of a benchmark that evaluates the diverse capabilities of LLM agents in
multi-agent, dynamic environments. To this end, we introduce LLMArena, a novel
and easily extensible framework for evaluating the diverse capabilities of LLM
in multi-agent dynamic environments. LLMArena encompasses seven distinct gaming
environments, employing Trueskill scoring to assess crucial abilities in LLM
agents, including spatial reasoning, strategic planning, numerical reasoning,
risk assessment, communication, opponent modeling, and team collaboration. We
conduct an extensive experiment and human evaluation among different sizes and
types of LLMs, showing that LLMs still have a significant journey ahead in
their development towards becoming fully autonomous agents, especially in
opponent modeling and team collaboration. We hope LLMArena could guide future
research towards enhancing these capabilities in LLMs, ultimately leading to
more sophisticated and practical applications in dynamic, multi-agent settings.
The code and data will be available.
| 2,024 | Computation and Language |
Pre-training Cross-lingual Open Domain Question Answering with
Large-scale Synthetic Supervision | Cross-lingual question answering (CLQA) is a complex problem, comprising
cross-lingual retrieval from a multilingual knowledge base, followed by answer
generation either in English or the query language. Both steps are usually
tackled by separate models, requiring substantial annotated datasets, and
typically auxiliary resources, like machine translation systems to bridge
between languages. In this paper, we show that CLQA can be addressed using a
single encoder-decoder model. To effectively train this model, we propose a
self-supervised method based on exploiting the cross-lingual link structure
within Wikipedia. We demonstrate how linked Wikipedia pages can be used to
synthesise supervisory signals for cross-lingual retrieval, through a form of
cloze query, and generate more natural queries to supervise answer generation.
Together, we show our approach, \texttt{CLASS}, outperforms comparable methods
on both supervised and zero-shot language adaptation settings, including those
using machine translation.
| 2,024 | Computation and Language |
LLM-based Privacy Data Augmentation Guided by Knowledge Distillation
with a Distribution Tutor for Medical Text Classification | As sufficient data are not always publicly accessible for model training,
researchers exploit limited data with advanced learning algorithms or expand
the dataset via data augmentation (DA). Conducting DA in a private domain
requires privacy protection approaches (i.e., anonymization and perturbation),
but those methods cannot provide protection guarantees. Differential privacy
(DP) learning methods theoretically bound the protection but are not skilled at
generating pseudo text samples with large models. In this paper, we transfer
the DP-based pseudo-sample generation task to a DP-based generated-sample
discrimination task, where we propose a DP-based DA method with an LLM and a
DP-based discriminator for text classification on private domains. We construct
a knowledge distillation model as the DP-based discriminator: teacher models,
accessing private data, teach students how to select private samples with
calibrated noise to achieve DP. To constrain the distribution of DA's
generation, we propose a DP-based tutor that models the noised private
distribution and controls samples' generation with a low privacy cost. We
theoretically analyze our model's privacy protection and empirically verify our
model.
| 2,024 | Computation and Language |
Aligning Large Language Models to a Domain-specific Graph Database | Graph Databases (Graph DB) are widely applied in various fields, including
finance, social networks, and medicine. However, translating Natural Language
(NL) into the Graph Query Language (GQL), commonly known as NL2GQL, proves to
be challenging due to its inherent complexity and specialized nature. Some
approaches have sought to utilize Large Language Models (LLMs) to address
analogous tasks like text2SQL. Nevertheless, when it comes to NL2GQL tasks in a
particular domain, the absence of domain-specific NL-GQL data pairs makes it
difficult to establish alignment between LLMs and the graph DB. To address this
challenge, we propose a well-defined pipeline. Specifically, we utilize ChatGPT
to create NL-GQL data pairs based on the given graph DB with self-instruct.
Then, we use the created data to fine-tune LLMs, thereby achieving alignment
between LLMs and the graph DB. Additionally, during inference, we propose a
method that extracts the schema relevant to the queried NL as the input context
to guide LLMs in generating accurate GQLs. We evaluate our method on two
constructed datasets derived from graph DBs in the finance and medicine
domains, namely FinGQL and MediGQL. Experimental results demonstrate that our
method significantly outperforms a set of baseline methods, with improvements
of 5.90 and 6.36 absolute points on EM, and 6.00 and 7.09 absolute points on
EX, respectively.
| 2,024 | Computation and Language |
Two-stage Generative Question Answering on Temporal Knowledge Graph
Using Large Language Models | Temporal knowledge graph question answering (TKGQA) poses a significant
challenge, due to the temporal constraints hidden in questions and the
answers sought from dynamic structured knowledge. Although large language
models (LLMs) have made considerable progress in their reasoning ability over
structured data, their application to the TKGQA task is a relatively unexplored
area. This paper first proposes a novel generative temporal knowledge graph
question answering framework, GenTKGQA, which guides LLMs to answer temporal
questions through two phases: Subgraph Retrieval and Answer Generation. First,
we exploit the LLM's intrinsic knowledge to mine temporal constraints and
structural links in the questions without extra training, thus narrowing down
the subgraph search space in both temporal and structural dimensions. Next, we
design virtual knowledge indicators to fuse the graph neural network signals of
the subgraph and the text representations of the LLM in a non-shallow way,
which helps the open-source LLM deeply understand the temporal order and
structural dependencies among the retrieved facts through instruction tuning.
Experimental results demonstrate that our model outperforms state-of-the-art
baselines, even achieving 100\% on the metrics for the simple question type.
| 2,024 | Computation and Language |
Multi-Bit Distortion-Free Watermarking for Large Language Models | Methods for watermarking large language models have been proposed that
distinguish AI-generated text from human-generated text by slightly altering
the model output distribution, but they also distort the quality of the text,
exposing the watermark to adversarial detection. More recently, distortion-free
watermarking methods were proposed that require a secret key to detect the
watermark. The prior methods generally embed zero-bit watermarks that do not
provide additional information beyond tagging a text as being AI-generated. We
extend an existing zero-bit distortion-free watermarking method by embedding
multiple bits of meta-information as part of the watermark. We also develop a
computationally efficient decoder that extracts the embedded information from
the watermark with low bit error rate.
| 2,024 | Computation and Language |
Semantic change detection for Slovene language: a novel dataset and an
approach based on optimal transport | In this paper, we focus on the detection of semantic changes in Slovene, a
less resourced Slavic language with two million speakers. Detecting and
tracking semantic changes provides insights into the evolution of the language
caused by changes in society and culture. Recently, several systems have been
proposed to aid in this study, but all depend on manually annotated gold
standard datasets for evaluation. In this paper, we present the first Slovene
dataset for evaluating semantic change detection systems, which contains
aggregated semantic change scores for 104 target words obtained from more than
3000 manually annotated sentence pairs. We evaluate several existing semantic
change detection methods on this dataset and also propose a novel approach
based on optimal transport that improves on the existing state-of-the-art
systems with an error reduction rate of 22.8%.
| 2,024 | Computation and Language |
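A simplified version of the optimal-transport scoring can be written with SciPy's assignment solver, treating two equally sized sets of contextual embeddings of a target word as a balanced matching problem; the paper's formulation is likely more general than this special case.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def semantic_change_score(old_embs, new_embs):
    """Cost of the optimal one-to-one matching (balanced OT special
    case) between old- and new-corpus contextual embeddings of one
    target word, under cosine distance."""
    a = old_embs / np.linalg.norm(old_embs, axis=1, keepdims=True)
    b = new_embs / np.linalg.norm(new_embs, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                      # pairwise cosine distances
    rows, cols = linear_sum_assignment(cost)  # optimal transport plan
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
stable = rng.normal(size=(20, 32))
print(semantic_change_score(stable, stable + 0.01))             # near zero
print(semantic_change_score(stable, rng.normal(size=(20, 32)))) # larger
```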
Rethinking Negative Instances for Generative Named Entity Recognition | Large Language Models (LLMs) have demonstrated impressive capabilities for
generalizing to unseen tasks. In the Named Entity Recognition (NER) task,
recent advancements have seen the remarkable improvement of LLMs in a broad
range of entity domains via instruction tuning, by adopting entity-centric
schema. In this work, we explore the potential enhancement of the existing
methods by incorporating negative instances into training. Our experiments
reveal that negative instances contribute to remarkable improvements by (1)
introducing contextual information, and (2) clearly delineating label
boundaries. Furthermore, we introduce a novel and efficient algorithm named
Hierarchical Matching, which is tailored to transform unstructured predictions
into structured entities. By integrating these components, we present GNER, a
Generative NER system that shows improved zero-shot performance across unseen
entity domains. Our comprehensive evaluation illustrates our system's
superiority, surpassing state-of-the-art (SoTA) methods by 11 $F_1$ points in
zero-shot evaluation.
| 2,024 | Computation and Language |
PAQA: Toward ProActive Open-Retrieval Question Answering | Conversational systems have made significant progress in generating natural
language responses. However, their potential as conversational search systems
is currently limited due to their passive role in the information-seeking
process. One major limitation is the scarcity of datasets that provide labelled
ambiguous questions along with a supporting corpus of documents and relevant
clarifying questions. This work aims to tackle the challenge of generating
relevant clarifying questions by taking into account the inherent ambiguities
present in both user queries and documents. To achieve this, we propose PAQA,
an extension to the existing AmbiNQ dataset, incorporating clarifying
questions. We then evaluate various models and assess how passage retrieval
impacts ambiguity detection and the generation of clarifying questions. By
addressing this gap in conversational search systems, we aim to provide
additional supervision to enhance their active participation in the
information-seeking process and provide users with more accurate results.
| 2,024 | Computation and Language |
Understanding the Dataset Practitioners Behind Large Language Model
Development | As large language models (LLMs) become more advanced and impactful, it is
increasingly important to scrutinize the data that they rely upon and produce.
What is it to be a dataset practitioner doing this work? We approach this in
two parts: first, we define the role of "dataset practitioner" by performing a
retrospective analysis on the responsibilities of teams contributing to LLM
development at Google. Then, we conduct semi-structured interviews with a
cross-section of these practitioners (N=10). We find that data quality is the
top priority. To evaluate data quality, practitioners either rely on their own
intuition or write custom evaluation logic. There is a lack of consensus across
practitioners on what quality is and how to evaluate it. We discuss potential
reasons for this phenomenon and opportunities for alignment.
| 2,024 | Computation and Language |
Long-Context Language Modeling with Parallel Context Encoding | Extending large language models (LLMs) to process longer inputs is crucial
for numerous applications. However, the considerable computational cost of
transformers, coupled with limited generalization of positional encoding,
restricts the size of their context window. We introduce Context Expansion with
Parallel Encoding (CEPE), a framework that can be applied to any existing
decoder-only LLMs to extend their context window. CEPE adopts a small encoder
to process long inputs chunk by chunk and enables the frozen decoder to
leverage additional contexts via cross-attention. CEPE is efficient,
generalizable, and versatile: trained with 8K-token documents, CEPE extends the
context window of LLAMA-2 to 128K tokens, offering 10x the throughput with only
1/6 of the memory. CEPE yields strong performance on language modeling and
in-context learning. CEPE also excels in retrieval-augmented applications,
while existing long-context models degenerate with retrieved contexts. We
further introduce a CEPE variant that can extend the context window of
instruction-tuned models with only unlabeled data, and showcase its
effectiveness on LLAMA-2-CHAT, leading to a strong instruction-following model
that can leverage very long context on downstream tasks.
| 2,024 | Computation and Language |
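A toy rendition of the architecture: chunks of the long input are encoded independently by a small encoder, and the decoder states attend to the concatenated chunk memories via cross-attention. All dimensions and the residual wiring are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class CEPESketch(nn.Module):
    """Toy parallel context encoding: a small encoder maps long-input
    chunks to memory vectors; decoder hidden states then cross-attend
    to all chunk memories."""

    def __init__(self, d=64, nhead=4):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.cross = nn.MultiheadAttention(d, nhead, batch_first=True)

    def forward(self, chunks, dec_hidden):
        # chunks: (n_chunks, chunk_len, d) -- encoded independently,
        # i.e. "in parallel", then concatenated as one long memory.
        mem = self.encoder(chunks).reshape(1, -1, chunks.size(-1))
        out, _ = self.cross(dec_hidden, mem, mem)
        return dec_hidden + out   # residual injection into the decoder

m = CEPESketch()
chunks = torch.randn(4, 32, 64)       # 4 chunks of 32 tokens each
dec = torch.randn(1, 16, 64)          # decoder states for 16 tokens
print(m(chunks, dec).shape)           # torch.Size([1, 16, 64])
```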
Domain Embeddings for Generating Complex Descriptions of Concepts in
Italian Language | In this work, we propose a Distributional Semantic resource enriched with
linguistic and lexical information extracted from electronic dictionaries,
designed to address the challenge of bridging the gap between the continuous
semantic values represented by distributional vectors and the discrete
descriptions offered by general semantics theory. Recently, many researchers
have concentrated on the nexus between embeddings and a comprehensive theory of
semantics and meaning. This often involves decoding the representation of word
meanings in Distributional Models into a set of discrete, manually constructed
properties such as semantic primitives or features, using neural decoding
techniques. Our approach introduces an alternative strategy grounded in
linguistic data. We have developed a collection of domain-specific
co-occurrence matrices, derived from two sources: a classification of Italian
nouns categorized into 4 semantic traits and 20 concrete noun sub-categories,
and a list of Italian verbs classified according to their semantic classes. In
these matrices, the co-occurrence values for each word are calculated
exclusively with a defined set of words pertinent to a particular lexical
domain. The resource comprises 21 domain-specific matrices, one comprehensive
matrix, and a Graphical User Interface. Our model facilitates the generation of
reasoned semantic descriptions of concepts by selecting matrices directly
associated with concrete conceptual knowledge, such as a matrix based on
location nouns and the concept of animal habitats. We assessed the utility of
the resource through two experiments, achieving promising outcomes in both: the
automatic classification of animal nouns and the extraction of animal features.
| 2,024 | Computation and Language |
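A minimal sketch of the core preprocessing step described above: building a co-occurrence matrix whose context counts are restricted to one lexical domain. The toy corpus, window size, and the list of habitat-style nouns are illustrative assumptions, not the resource's actual data.

```python
from collections import defaultdict

def domain_cooccurrence(corpus, domain_words, window=5):
    """Count co-occurrences of every word against a fixed domain word set only."""
    domain = set(domain_words)
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:  # corpus: list of token lists
        for i, word in enumerate(sentence):
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            for j in range(lo, hi):
                if j != i and sentence[j] in domain:
                    counts[word][sentence[j]] += 1  # only domain contexts count
    return counts

corpus = [["the", "fox", "lives", "in", "the", "forest"],
          ["the", "camel", "crosses", "the", "desert"]]
matrix = domain_cooccurrence(corpus, ["forest", "desert", "river"])
print(matrix["fox"]["forest"])  # 1: 'fox' co-occurs with the habitat noun 'forest'
```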
ESG Sentiment Analysis: comparing human and language model performance
including GPT | In this paper we explore the challenges of measuring sentiment in relation to
Environmental, Social and Governance (ESG) social media. ESG has grown in
importance in recent years with a surge in interest from the financial sector
and the performance of many businesses now depends in part on their ESG-related
reputations. The use of sentiment analysis to measure ESG-related reputation has
grown accordingly, and with it interest in automating the task.
The era of digital media has created an explosion of new media sources, driven
by the growth of social media platforms. This growing data environment has
become an excellent source for behavioural insight studies across many
disciplines, including politics, healthcare and market research. Our study
seeks to compare human performance with cutting-edge machine performance
in the measurement of ESG-related sentiment. To this end, researchers classify
the sentiment of 150 tweets and inter-rater reliability is measured. A gold standard
data set is then established based on the consensus of 3 researchers and this
data set is then used to measure the performance of different machine
approaches: one based on the VADER dictionary approach to sentiment
classification and then multiple language model approaches, including Llama2,
T5, Mistral, Mixtral, FINBERT, GPT3.5 and GPT4.
| 2,024 | Computation and Language |
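For reference, the dictionary-based baseline can be reproduced in a few lines with the vaderSentiment package; the tweet is invented, and the +/-0.05 class thresholds are VADER's commonly used defaults.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweet = "Great to see this company cutting emissions, but the board is a mess."
score = analyzer.polarity_scores(tweet)["compound"]  # compound score in [-1, 1]
label = ("positive" if score >= 0.05
         else "negative" if score <= -0.05 else "neutral")
print(score, label)
```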
RepoAgent: An LLM-Powered Open-Source Framework for Repository-level
Code Documentation Generation | Generative models have demonstrated considerable potential in software
engineering, particularly in tasks such as code generation and debugging.
However, their utilization in the domain of code documentation generation
remains underexplored. To this end, we introduce RepoAgent, a large language
model-powered open-source framework aimed at proactively generating,
maintaining, and updating code documentation. Through both qualitative and
quantitative evaluations, we have validated the effectiveness of our approach,
showing that RepoAgent excels in generating high-quality repository-level
documentation. The code and results are publicly accessible at
https://github.com/OpenBMB/RepoAgent.
| 2,024 | Computation and Language |
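A hedged sketch of the general recipe such a framework automates, not RepoAgent's actual API: walk a repository, extract undocumented functions, and prompt an LLM for a docstring. `ask_llm` is a hypothetical placeholder for whatever chat-completion client is available.

```python
import ast
from pathlib import Path

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")  # hypothetical

def document_repo(repo_root: str):
    """Yield (file, function, generated docstring) for undocumented functions."""
    for path in Path(repo_root).rglob("*.py"):
        src = path.read_text()
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.FunctionDef) and not ast.get_docstring(node):
                source = ast.get_source_segment(src, node)
                prompt = f"Write a concise docstring for:\n\n{source}"
                yield path, node.name, ask_llm(prompt)
```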
StructLM: Towards Building Generalist Models for Structured Knowledge
Grounding | Structured data sources, such as tables, graphs, and databases, are
ubiquitous knowledge sources. Despite the demonstrated capabilities of large
language models (LLMs) on plain text, their proficiency in interpreting and
utilizing structured data remains limited. Our investigation reveals a notable
deficiency in LLMs' ability to process structured data, e.g., ChatGPT lags
behind the state-of-the-art (SoTA) model by an average of 35%. To augment the
Structured Knowledge Grounding (SKG) capabilities in LLMs, we have developed a
comprehensive instruction tuning dataset comprising 1.1 million examples.
Utilizing this dataset, we train a series of models, referred to as StructLM,
based on the Code-LLaMA architecture, ranging from 7B to 34B parameters. Our
StructLM series surpasses task-specific models on 14 out of 18 evaluated
datasets and establishes new SoTA achievements on 7 SKG tasks. Furthermore,
StructLM demonstrates exceptional generalization across 6 novel SKG tasks.
Contrary to expectations, we observe that scaling model size offers marginal
benefits, with StructLM-34B showing only slight improvements over StructLM-7B.
This suggests that structured knowledge grounding is still a challenging task
and requires more innovative design to push performance to a new level.
| 2,024 | Computation and Language |
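One common SKG preprocessing step is linearizing structured inputs into flat instruction-tuning prompts; the sketch below shows the idea with a toy table, though the delimiter scheme is an illustrative assumption rather than StructLM's exact format.

```python
def linearize_table(header, rows):
    """Flatten a table into a single prompt string."""
    head = " | ".join(header)
    body = " [ROW] ".join(" | ".join(map(str, r)) for r in rows)
    return f"[TABLE] [HEAD] {head} [ROW] {body}"

example = {
    "instruction": "Answer the question using the table.",
    "input": linearize_table(["city", "population"],
                             [["Paris", 2100000], ["Lyon", 520000]])
             + "\nQuestion: Which city is larger?",
    "output": "Paris",
}
print(example["input"])
```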
Adaptation of Biomedical and Clinical Pretrained Models to French Long
Documents: A Comparative Study | Recently, pretrained language models based on BERT have been introduced for
the French biomedical domain. Although these models have achieved
state-of-the-art results on biomedical and clinical NLP tasks, they are
constrained by a limited input sequence length of 512 tokens, which poses
challenges when applied to clinical notes. In this paper, we present a
comparative study of three adaptation strategies for long-sequence models,
leveraging the Longformer architecture. We conducted evaluations of these
models on 16 downstream tasks spanning both biomedical and clinical domains.
Our findings reveal that further pre-training an English clinical model with
French biomedical texts can outperform both converting a French biomedical BERT
to the Longformer architecture and pre-training a French biomedical Longformer
from scratch. The results underscore that long-sequence French biomedical
models improve performance across most downstream tasks regardless of sequence
length, but BERT-based models remain the most efficient for named entity
recognition tasks.
| 2,024 | Computation and Language |
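One of the compared strategies, converting a 512-token BERT-style model to a long-sequence architecture, typically begins by tiling the pretrained position embeddings across the extended range before swapping in sliding-window attention. A hedged sketch of that step, with illustrative names, follows.

```python
import torch

def extend_position_embeddings(old_emb: torch.Tensor, new_len: int) -> torch.Tensor:
    """Tile a pretrained (old_len, dim) position table out to new_len rows."""
    old_len, dim = old_emb.shape
    new_emb = old_emb.new_empty(new_len, dim)
    for start in range(0, new_len, old_len):
        n = min(old_len, new_len - start)
        new_emb[start:start + n] = old_emb[:n]  # repeat the learned pattern
    return new_emb

# e.g. grow a 512-position table to 4096 before adding sliding-window attention:
# model.embeddings.position_embeddings.weight.data = \
#     extend_position_embeddings(old_weights, 4096)
```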
HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual
Natural Language Generalization | Large language models (LLMs) have made significant progress in generating
codes from textual prompts. However, existing benchmarks have mainly
concentrated on translating English prompts to multilingual codes or have been
constrained to very limited natural languages (NLs). These benchmarks have
overlooked the vast landscape of massively multilingual NL to multilingual
code, leaving a critical gap in the evaluation of multilingual LLMs. In
response, we introduce HumanEval-XL, a massively multilingual code generation
benchmark specifically crafted to address this deficiency. HumanEval-XL
establishes connections between 23 NLs and 12 programming languages (PLs), and
comprises 22,080 prompts with an average of 8.33 test cases each.
By ensuring parallel data across multiple NLs and PLs, HumanEval-XL offers a
comprehensive evaluation platform for multilingual LLMs, allowing the
assessment of the understanding of different NLs. Our work serves as a
pioneering step towards filling the void in evaluating NL generalization in the
area of multilingual code generation. We make our evaluation code and data
publicly available at https://github.com/FloatAI/HumanEval-XL.
| 2,024 | Computation and Language |
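HumanEval-style benchmarks such as HumanEval-XL are typically scored with the unbiased pass@k estimator of Chen et al. (2021); a minimal implementation follows, where n samples are drawn per prompt and c of them pass the test cases.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k samples passes), given
    n generated samples of which c passed the unit tests."""
    if n - c < k:
        return 1.0  # every size-k draw necessarily contains a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=3, k=1))  # 0.15, i.e. c / n when k == 1
```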
Look Before You Leap: Towards Decision-Aware and Generalizable
Tool-Usage for Large Language Models | Tool-augmented large language models (LLMs) are attracting widespread
attention when accessing up-to-date knowledge and alleviating hallucination
issues. Nowadays, advanced closed-source LLMs (e.g., ChatGPT) have demonstrated
surprising tool-usage capabilities through prompting and in-context learning
techniques. To empower the capabilities of open-source LLMs (e.g., LLaMA) in
manipulating tools, current efforts focus on either template-driven or
token-triggered tool-usage. However, the former hampers LLMs' flexibility to
address diverse user queries due to constrained tool interactions, while the
latter limits the generalizability when engaging with new tools, since
tool-usage learning is based on task- and tool-specific datasets. To alleviate
these concerns, in this paper, we propose a decision-aware and generalizable
tool-usage framework (DEER). Specifically, we first construct the tool-usage
samples with multiple decision branches via an automatic generation pipeline,
thereby inspiring the decision-making awareness of LLMs under diverse
scenarios. Meanwhile, we propose a novel tool sampling strategy to enhance the
generalizability of LLMs over unseen tools. Extensive experiments demonstrate
that our proposed DEER is effective and significantly outperforms baselines
across various datasets.
| 2,024 | Computation and Language |
Generating Effective Ensembles for Sentiment Analysis | In recent years, transformer models have revolutionized Natural Language
Processing (NLP), achieving exceptional results across various tasks, including
Sentiment Analysis (SA). As such, current state-of-the-art approaches for SA
predominantly rely on transformer models alone, achieving impressive accuracy
levels on benchmark datasets. In this paper, we show that the key to further
improving accuracy in SA is to build ensembles that include not only
transformers but also traditional NLP models, despite the inferiority of the
latter compared to transformer models. However, as we empirically show, this
necessitates a change in how the ensemble is constructed, specifically relying
on the Hierarchical Ensemble Construction (HEC) algorithm we present. Our
empirical studies across eight canonical SA datasets reveal that ensembles
incorporating a mix of model types, structured via HEC, significantly
outperform traditional ensembles. Finally, we provide a comparative analysis of
the performance of the HEC and GPT-4, demonstrating that while GPT-4 closely
approaches state-of-the-art SA methods, it remains outperformed by our proposed
ensemble strategy.
| 2,024 | Computation and Language |
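To illustrate the general idea of mixing model families, though not the paper's HEC algorithm itself, the sketch below stacks a transformer's predicted probabilities (assumed precomputed) with two traditional TF-IDF models under a second-level learner; binary labels are assumed.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

def stacked_features(texts, labels, transformer_proba):
    """Stack base-model positive-class probabilities as meta-features.
    (In practice, use out-of-fold predictions to avoid leakage.)"""
    X = TfidfVectorizer().fit_transform(texts)
    lr = LogisticRegression(max_iter=1000).fit(X, labels)
    nb = MultinomialNB().fit(X, labels)
    return np.column_stack([lr.predict_proba(X)[:, 1],
                            nb.predict_proba(X)[:, 1],
                            transformer_proba])  # assumed precomputed

# a meta-classifier is then trained on the stacked features, e.g.:
# meta = LogisticRegression().fit(stacked_features(texts, y, t_proba), y)
```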
SelectIT: Selective Instruction Tuning for Large Language Models via
Uncertainty-Aware Self-Reflection | Instruction tuning (IT) is crucial to tailoring large language models (LLMs)
towards human-centric interactions. Recent advancements have shown that the
careful selection of a small, high-quality subset of IT data can significantly
enhance the performance of LLMs. Despite this, common approaches often rely on
additional models or datasets, which increases costs and limits widespread
adoption. In this work, we propose a novel approach, termed SelectIT, that
capitalizes on the foundational capabilities of the LLM itself. Specifically,
we exploit the intrinsic uncertainty present in LLMs to more effectively select
high-quality IT data, without the need for extra resources. Furthermore, we
introduce a novel IT dataset, the Selective Alpaca, created by applying
SelectIT to the Alpaca-GPT4 dataset. Empirical results demonstrate that IT
using Selective Alpaca leads to substantial model ability enhancement. The
robustness of SelectIT has also been corroborated in various foundation models
and domain-specific tasks. Our findings suggest that longer and more
computationally intensive IT data may serve as superior sources of IT, offering
valuable insights for future research in this area. Data, code, and scripts are
freely available at https://github.com/Blue-Raincoat/SelectIT.
| 2,024 | Computation and Language |
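A hedged sketch of the flavor of uncertainty-aware self-reflection described above, not SelectIT's exact scoring: the LLM rates each instruction-response pair, and the spread of its probabilities over the rating tokens serves as an uncertainty signal. `rating_token_logprobs` is a hypothetical stand-in for a real model call.

```python
import math

def rating_token_logprobs(sample: str) -> dict:
    """Hypothetical model call: log P(rating token r | prompt) for r in 1..5."""
    raise NotImplementedError

def self_reflection_score(sample: str) -> float:
    probs = {r: math.exp(lp) for r, lp in rating_token_logprobs(sample).items()}
    z = sum(probs.values())
    probs = {r: p / z for r, p in probs.items()}             # renormalize
    expected = sum(r * p for r, p in probs.items())          # quality estimate
    entropy = -sum(p * math.log(p) for p in probs.values())  # uncertainty
    return expected - entropy  # prefer confidently high-rated samples

# keep the top-scoring pairs as the curated IT subset:
# selected = sorted(pool, key=self_reflection_score, reverse=True)[:k]
```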
CodeChameleon: Personalized Encryption Framework for Jailbreaking Large
Language Models | Adversarial misuse, particularly through `jailbreaking' that circumvents a
model's safety and ethical protocols, poses a significant challenge for Large
Language Models (LLMs). This paper delves into the mechanisms behind such
successful attacks, introducing a hypothesis for the safety mechanism of
aligned LLMs: intent security recognition followed by response generation.
Grounded in this hypothesis, we propose CodeChameleon, a novel jailbreak
framework based on personalized encryption tactics. To elude the intent
security recognition phase, we reformulate tasks into a code completion format,
enabling users to encrypt queries using personalized encryption functions. To
guarantee response generation functionality, we embed a decryption function
within the instructions, which allows the LLM to decrypt and execute the
encrypted queries successfully. We conduct extensive experiments on 7 LLMs,
achieving state-of-the-art average Attack Success Rate (ASR). Remarkably, our
method achieves an 86.6% ASR on GPT-4-1106.
| 2,024 | Computation and Language |
DREsS: Dataset for Rubric-based Essay Scoring on EFL Writing | Automated essay scoring (AES) is a useful tool in English as a Foreign
Language (EFL) writing education, offering real-time essay scores for students
and instructors. However, previous AES models were trained on essays and scores
irrelevant to the practical scenarios of EFL writing education and usually
provided a single holistic score due to the lack of appropriate datasets. In
this paper, we release DREsS, a large-scale, standard dataset for rubric-based
automated essay scoring. DREsS comprises three sub-datasets: DREsS_New,
DREsS_Std., and DREsS_CASE. We collect DREsS_New, a real-classroom dataset with
1.7K essays authored by EFL undergraduate students and scored by English
education experts. We also standardize existing rubric-based essay scoring
datasets as DREsS_Std. We suggest CASE, a corruption-based augmentation
strategy for essays, which generates 20K synthetic samples of DREsS_CASE and
improves the baseline results by 45.44%. DREsS will enable further research to
provide a more accurate and practical AES system for EFL writing education.
| 2,024 | Computation and Language |
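A hedged sketch in the spirit of corruption-based augmentation, not the authors' exact CASE procedure: corrupt a well-scored essay by dropping and shuffling sentences, and lower its rubric score accordingly. The drop probability and score penalty are illustrative assumptions.

```python
import random

def corrupt_essay(sentences, score, drop_prob=0.2, penalty=1.0):
    """Return a corrupted essay and a correspondingly lowered rubric score."""
    kept = [s for s in sentences if random.random() > drop_prob] or sentences[:1]
    random.shuffle(kept)                   # damage organization and cohesion
    return kept, max(0.0, score - penalty)

essay = ["First, remote work saves commuting time.",
         "Second, it widens the hiring pool.",
         "In conclusion, firms should embrace it."]
aug_essay, aug_score = corrupt_essay(essay, score=4.0)  # e.g. score drops to 3.0
```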
A Comprehensive Evaluation of Quantization Strategies for Large Language
Models | Increasing the number of parameters in large language models (LLMs) usually
improves performance in downstream tasks but raises compute and memory costs,
making deployment difficult in resource-limited settings. Quantization
techniques, which reduce the bits needed for model weights or activations with
minimal performance loss, have become popular due to the rise of LLMs. However,
most quantization studies use pre-trained LLMs, and the impact of quantization
on instruction-tuned LLMs and the relationship between perplexity and benchmark
performance of quantized LLMs are not well understood. Evaluation of quantized
LLMs is often limited to language modeling and a few classification tasks,
leaving their performance on other benchmarks unclear. To address these gaps,
we propose a structured evaluation framework consisting of three critical
dimensions: (1) knowledge \& capacity, (2) alignment, and (3) efficiency, and
conduct extensive experiments across ten diverse benchmarks. Our experimental
results indicate that LLMs with 4-bit quantization can retain performance
comparable to their non-quantized counterparts, and perplexity can serve as a
proxy metric for quantized LLMs on most benchmarks. Furthermore, quantized LLMs
with larger parameter scales can outperform smaller LLMs. Despite the memory
savings it achieves, quantization can also slow down the inference
speed of LLMs. Consequently, substantial engineering efforts and hardware
support are imperative to achieve a balanced optimization of decoding speed and
memory consumption in the context of quantized LLMs.
| 2,024 | Computation and Language |
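For context, 4-bit evaluation setups like the one studied here are commonly built on the bitsandbytes integration in Hugging Face Transformers; a minimal loading sketch follows, with an illustrative model choice.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # 4-bit NormalFloat weights
    bnb_4bit_compute_dtype=torch.float16,  # matmuls still run in fp16
)
name = "meta-llama/Llama-2-7b-chat-hf"     # illustrative model choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, quantization_config=config, device_map="auto"
)

inputs = tok("Explain quantization briefly.", return_tensors="pt")
out = model.generate(**inputs.to(model.device), max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```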