Titles | Abstracts | Years | Categories |
---|---|---|---|
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations
for Values and Opinions in Large Language Models | Much recent work seeks to evaluate values and opinions in large language
models (LLMs) using multiple-choice surveys and questionnaires. Most of this
work is motivated by concerns around real-world LLM applications. For example,
politically-biased LLMs may subtly influence society when they are used by
millions of people. Such real-world concerns, however, stand in stark contrast
to the artificiality of current evaluations: real users do not typically ask
LLMs survey questions. Motivated by this discrepancy, we challenge the
prevailing constrained evaluation paradigm for values and opinions in LLMs and
explore more realistic unconstrained evaluations. As a case study, we focus on
the popular Political Compass Test (PCT). In a systematic review, we find that
most prior work using the PCT forces models to comply with the PCT's
multiple-choice format. We show that models give substantively different
answers when not forced; that answers change depending on how models are
forced; and that answers lack paraphrase robustness. Then, we demonstrate that
models give different answers yet again in a more realistic open-ended answer
setting. We distill these findings into recommendations and open challenges in
evaluating values and opinions in LLMs.
| 2024 | Computation and Language |
Set the Clock: Temporal Alignment of Pretrained Language Models | Language models (LMs) are trained on web text originating from many points in
time and, in general, without any explicit temporal grounding. This work
investigates the temporal chaos of pretrained LMs and explores various methods
to align their internal knowledge to a target time, which we call "temporal
alignment." To do this, we first automatically construct a dataset containing
20K time-sensitive questions and their answers for each year from 2000 to 2023.
Based on this dataset, we empirically show that pretrained LMs (e.g., LLaMa2),
despite having a recent pretraining cutoff (e.g., 2022), mostly answer
questions using earlier knowledge (e.g., in 2019). We then develop several
methods, from prompting to finetuning, to align LMs to use their most recent
knowledge when answering questions, and investigate various factors in this
alignment. Our experiments show that aligning LLaMa2 to the year 2022 can boost
its performance by up to 62% relative to the unaligned model, as measured on
that year's questions, even without mentioning time information explicitly,
indicating the possibility of aligning models' internal sense of time after
pretraining. Finally, we find that alignment to a historical time is also
possible, with up to 2.8× the performance of the unaligned LM on 2010 questions
when finetuning models to that year.
These findings hint at the sophistication of LMs' internal knowledge
organization and the necessity of tuning them properly.
| 2024 | Computation and Language |
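Below, a minimal sketch of temporal alignment via prompting, in the spirit of the prompting-based methods the abstract above describes: an explicit target year is prepended to a time-sensitive question before querying the model. The checkpoint and prompt wording are illustrative assumptions, not the paper's.

```python
# Time-aware prompting sketch: steer a pretrained LM toward a target year by
# stating the year explicitly in the prompt. Model choice is a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def time_aware_prompt(question: str, target_year: int) -> str:
    # Prefix the query with the year we want the model's knowledge aligned to.
    return f"As of the year {target_year}: {question}\nAnswer:"

prompt = time_aware_prompt("Who is the CEO of Twitter?", 2022)
print(generator(prompt, max_new_tokens=16)[0]["generated_text"])
```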
OncoGPT: A Medical Conversational Model Tailored with Oncology Domain
Expertise on a Large Language Model Meta-AI (LLaMA) | In the past year, there has been a growing trend in applying Large Language
Models (LLMs) to the field of medicine, particularly with the advent of
advanced language models such as ChatGPT developed by OpenAI. However, there is
limited research on LLMs specifically addressing oncology-related queries. The
primary aim of this research was to develop a specialized language model that
demonstrates improved accuracy in providing advice related to oncology. We
performed an extensive data collection of online question-answer interactions
centered around oncology, sourced from reputable doctor-patient platforms.
Following data cleaning and anonymization, a dataset comprising over 180K
oncology-related conversations was established. The conversations were
categorized and meticulously reviewed by field specialists and clinicians to
ensure precision. Employing the LLaMA model and other selected open-source
datasets, we conducted iterative fine-tuning to enhance the model's proficiency
in basic medical conversation and specialized oncology knowledge. We observed a
substantial enhancement in the model's understanding of genuine patient
inquiries and its reliability in offering oncology-related advice through the
utilization of real online question-answer interactions in the fine-tuning
process. We release the database and models to the research community
(https://github.com/OncoGPT1).
| 2024 | Computation and Language |
Investigating the Effectiveness of HyperTuning via Gisting | Gisting (Mu et al., 2023) is a simple method for training models to compress
information into fewer token representations using a modified attention mask,
and can serve as an economical approach to training Transformer-based
hypernetworks. We introduce HyperLlama, a set of Gisting-based hypernetworks
built on Llama-2 models that generate task-specific soft prefixes based on
few-shot inputs. In experiments across P3, Super-NaturalInstructions and Symbol
Tuning datasets, we show that HyperLlama models can effectively compress
information from few-shot examples into soft prefixes. However, they still
underperform multi-task fine-tuned language models with full attention over
few-shot in-context examples. We also show that HyperLlama-generated soft
prefixes can serve as better initializations for further prefix tuning.
Overall, Gisting-based hypernetworks are economical and easy to implement, but
have mixed empirical performance.
| 2024 | Computation and Language |
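The following sketch illustrates the modified attention mask at the heart of Gisting as summarized above: positions after the gist tokens are blocked from attending to anything before them, so the instruction must be compressed into the gist activations. The index conventions are assumptions for illustration.

```python
# Gist-style attention mask sketch: a causal mask where tokens after the gist
# span cannot attend to tokens before it, forcing information through the gists.
import torch

def gist_mask(seq_len: int, gist_start: int, gist_end: int) -> torch.Tensor:
    # Standard causal mask: position i may attend to positions j <= i.
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    # Block attention from post-gist positions to pre-gist positions.
    mask[gist_end:, :gist_start] = False
    return mask

# 10 tokens: instruction = 0..5, gist tokens = 6..7, task input = 8..9.
print(gist_mask(10, gist_start=6, gist_end=8).int())
```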
Nemotron-4 15B Technical Report | We introduce Nemotron-4 15B, a 15-billion-parameter large multilingual
language model trained on 8 trillion text tokens. Nemotron-4 15B demonstrates
strong performance when assessed on English, multilingual, and coding tasks: it
outperforms all existing similarly-sized open models on 4 out of 7 downstream
evaluation areas and achieves competitive performance to the leading open
models in the remaining ones. Specifically, Nemotron-4 15B exhibits the best
multilingual capabilities of all similarly-sized models, even outperforming
models over four times larger and those explicitly specialized for multilingual
tasks.
| 2024 | Computation and Language |
Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts | As large language models (LLMs) become increasingly prevalent across many
real-world applications, understanding and enhancing their robustness to user
inputs is of paramount importance. Existing methods for identifying adversarial
prompts tend to focus on specific domains, lack diversity, or require extensive
human annotations. To address these limitations, we present Rainbow Teaming, a
novel approach for producing a diverse collection of adversarial prompts.
Rainbow Teaming casts adversarial prompt generation as a quality-diversity
problem, and uses open-ended search to generate prompts that are both effective
and diverse. It can uncover a model's vulnerabilities across a broad range of
domains including, in this paper, safety, question answering, and
cybersecurity. We also demonstrate that fine-tuning on synthetic data generated
by Rainbow Teaming improves the safety of state-of-the-art LLMs without hurting
their general capabilities and helpfulness, paving the way to open-ended
self-improvement.
| 2024 | Computation and Language |
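As a toy illustration of the quality-diversity framing described above, the sketch below runs a MAP-Elites-style loop over an archive keyed by risk category; the mutation and scoring functions are stand-ins (the actual method mutates prompts with an LLM and scores attack success with a judge model).

```python
# Toy MAP-Elites-style search: keep, per category, the most effective prompt
# found so far, and evolve new candidates from archive members.
import random

CATEGORIES = ["safety", "question answering", "cybersecurity"]

def mutate(prompt: str) -> str:
    return prompt + random.choice([" step by step", " in detail", " as a story"])

def attack_score(prompt: str) -> float:
    return random.random()  # placeholder for a judge-model effectiveness score

archive = {c: ("seed prompt", 0.0) for c in CATEGORIES}
for _ in range(200):
    category = random.choice(CATEGORIES)
    parent, best = archive[category]
    child = mutate(parent)
    score = attack_score(child)
    if score > best:  # quality: keep the more effective prompt per cell
        archive[category] = (child, score)
print(archive)
```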
A Survey on Data Selection for Language Models | A major factor in the recent success of large language models is the use of
enormous and ever-growing text datasets for unsupervised pre-training. However,
naively training a model on all available data may not be optimal (or
feasible), as the quality of available text data can vary. Filtering out data
can also decrease the carbon footprint and financial costs of training models
by reducing the amount of training required.
Data selection methods aim to determine which candidate data points to
include in the training dataset and how to appropriately sample from the
selected data points. The promise of improved data selection methods has caused
the volume of research in the area to rapidly expand. However, because deep
learning is mostly driven by empirical evidence and experimentation on
large-scale data is expensive, few organizations have the resources for
extensive data selection research. Consequently, knowledge of effective data
selection practices has become concentrated within a few organizations, many of
which do not openly share their findings and methodologies.
To narrow this gap in knowledge, we present a comprehensive review of
existing literature on data selection methods and related research areas,
providing a taxonomy of existing approaches. By describing the current
landscape of research, this work aims to accelerate progress in data selection
by establishing an entry point for new and established researchers.
Additionally, throughout this review we draw attention to noticeable holes in
the literature and conclude the paper by proposing promising avenues for future
research.
| 2024 | Computation and Language |
Mysterious Projections: Multimodal LLMs Gain Domain-Specific Visual
Capabilities Without Richer Cross-Modal Projections | Multimodal large language models (MLLMs) like LLaVA and GPT-4(V) enable
general-purpose conversations about images with the language modality. As
off-the-shelf MLLMs may have limited capabilities on images from domains like
dermatology and agriculture, they must be fine-tuned to unlock domain-specific
applications. The prevalent architecture of current open-source MLLMs comprises
two major modules: an image-language (cross-modal) projection network and a
large language model. It is desirable to understand the roles of these two
modules in modeling domain-specific visual attributes to inform the design of
future models and streamline the interpretability efforts on the current
models. To this end, via experiments on 4 datasets and under 2 fine-tuning
settings, we find that as the MLLM is fine-tuned, it indeed gains
domain-specific visual capabilities, but the updates do not lead to the
projection extracting relevant domain-specific visual attributes. Our results
indicate that the domain-specific visual attributes are modeled by the LLM,
even when only the projection is fine-tuned. Through this study, we offer a
potential reinterpretation of the role of cross-modal projections in MLLM
architectures. Projection webpage:
https://claws-lab.github.io/projection-in-MLLMs/
| 2024 | Computation and Language |
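The two fine-tuning settings contrasted above can be expressed by toggling which module's parameters receive gradients. The sketch below uses hypothetical module names (`mm_projector`, `language_model`); it is not LLaVA's actual API.

```python
# Freeze everything except one named submodule, to fine-tune either the
# cross-modal projection or the LLM in isolation.
import torch.nn as nn

class TinyMLLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.mm_projector = nn.Linear(16, 32)    # stand-in projection network
        self.language_model = nn.Linear(32, 32)  # stand-in for the LLM

def freeze_all_but(model: nn.Module, trainable_prefix: str):
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(trainable_prefix)

mllm = TinyMLLM()
freeze_all_but(mllm, "mm_projector")  # projection-only fine-tuning setting
print([n for n, p in mllm.named_parameters() if p.requires_grad])
```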
Eight Methods to Evaluate Robust Unlearning in LLMs | Machine unlearning can be useful for removing harmful capabilities and
memorized text from large language models (LLMs), but there are not yet
standardized methods for rigorously evaluating it. In this paper, we first
survey techniques and limitations of existing unlearning evaluations. Second,
we apply a comprehensive set of tests for the robustness and competitiveness of
unlearning in the "Who's Harry Potter" (WHP) model from Eldan and Russinovich
(2023). While WHP's unlearning generalizes well when evaluated with the
"Familiarity" metric from Eldan and Russinovich, we find i)
higher-than-baseline amounts of knowledge can reliably be extracted, ii) WHP
performs on par with the original model on Harry Potter Q&A tasks, iii) it
represents latent knowledge comparably to the original model, and iv) there is
collateral unlearning in related domains. Overall, our results highlight the
importance of comprehensive unlearning evaluation that avoids ad-hoc metrics.
| 2024 | Computation and Language |
Do Large Language Models Latently Perform Multi-Hop Reasoning? | We study whether Large Language Models (LLMs) latently perform multi-hop
reasoning with complex prompts such as "The mother of the singer of
'Superstition' is". We look for evidence of a latent reasoning pathway where an
LLM (1) latently identifies "the singer of 'Superstition'" as Stevie Wonder,
the bridge entity, and (2) uses its knowledge of Stevie Wonder's mother to
complete the prompt. We analyze these two hops individually and consider their
co-occurrence as indicative of latent multi-hop reasoning. For the first hop,
we test if changing the prompt to indirectly mention the bridge entity instead
of any other entity increases the LLM's internal recall of the bridge entity.
For the second hop, we test if increasing this recall causes the LLM to better
utilize what it knows about the bridge entity. We find strong evidence of
latent multi-hop reasoning for the prompts of certain relation types, with the
reasoning pathway used in more than 80% of the prompts. However, the
utilization is highly contextual, varying across different types of prompts.
Also, on average, the evidence for the second hop and the full multi-hop
traversal is rather moderate; it is substantial only for the first hop. Moreover,
we find a clear scaling trend with increasing model size for the first hop of
reasoning but not for the second hop. Our experimental findings suggest
potential challenges and opportunities for future development and applications
of LLMs.
| 2024 | Computation and Language |
MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT | "Bigger the better" has been the predominant trend in recent Large Language
Models (LLMs) development. However, LLMs are not well suited to scenarios that
require on-device processing, energy efficiency, a low memory footprint, and
fast response. These requisites are crucial for privacy, security, and
sustainable deployment. This paper explores the "less is more" paradigm by
addressing the challenge of designing accurate yet efficient Small Language
Models (SLMs) for resource constrained devices. Our primary contribution is the
introduction of an accurate and fully transparent open-source 0.5 billion
(0.5B) parameter SLM, named MobiLlama, catering to the specific needs of
resource-constrained computing with an emphasis on enhanced performance with
reduced resource demands. MobiLlama is an SLM design that starts from a
larger model and applies a careful parameter sharing scheme to reduce both the
pre-training and the deployment cost. Our work strives not only to bridge the
gap in open-source SLMs but also to ensure full transparency: the complete
training data pipeline, training code, model weights, and over 300 checkpoints
along with evaluation code are available at
https://github.com/mbzuai-oryx/MobiLlama.
| 2024 | Computation and Language |
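A toy rendering of cross-layer parameter sharing, the general idea the abstract attributes to MobiLlama; the specific scheme here (one FFN object reused by every block) is an assumption for illustration, not the paper's exact design.

```python
# Share a single FFN module across all transformer blocks so its weights are
# counted (and stored) once rather than once per layer.
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d_model: int, shared_ffn: nn.Module):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.ffn = shared_ffn  # the same module object in every block
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]
        return x + self.ffn(self.norm2(x))

d = 512
shared_ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.SiLU(), nn.Linear(4 * d, d))
layers = nn.ModuleList(Block(d, shared_ffn) for _ in range(24))
# Module.parameters() deduplicates shared tensors, so the FFN counts once.
print(f"unique parameters: {sum(p.numel() for p in layers.parameters())/1e6:.1f}M")
```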
Long Dialog Summarization: An Analysis | Dialog summarization has become increasingly important in managing and
comprehending large-scale conversations across various domains. This task
presents unique challenges in capturing the key points, context, and nuances of
multi-turn long conversations for summarization. It is worth noting that
summarization requirements vary with the application: in a shopping-chatbot
scenario, the dialog summary helps to learn user preferences, whereas in the
case of a customer call center, the summary may involve the problem attributes
that a user specified and the final resolution provided.
This work emphasizes the significance of creating coherent and contextually
rich summaries for effective communication in various applications. We explore
current state-of-the-art approaches for long dialog summarization in different
domains, and benchmark-metric-based evaluations show that no single model
performs well across various areas on distinct summarization tasks.
| 2024 | Computation and Language |
What Do Language Models Hear? Probing for Auditory Representations in
Language Models | This work explores whether language models encode meaningfully grounded
representations of sounds of objects. We learn a linear probe that retrieves
the correct text representation of an object given a snippet of audio related
to that object, where the sound representation is given by a pretrained audio
model. This probe is trained via a contrastive loss that pushes the language
representations and sound representations of an object to be close to one
another. After training, the probe is tested on its ability to generalize to
objects that were not seen during training. Across different language models
and audio models, we find that the probe generalization is above chance in many
cases, indicating that despite being trained only on raw text, language models
encode grounded knowledge of sounds for some objects.
| 2024 | Computation and Language |
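A compact sketch of the probing setup described above: a linear map from frozen audio embeddings into the text-embedding space, trained with a symmetric InfoNCE-style contrastive loss so matching audio-text pairs end up close. Dimensions, temperature, and the random stand-in embeddings are assumptions.

```python
# Contrastive linear probe: align audio embeddings with text embeddings of
# the same objects; at test time, retrieval above chance indicates grounding.
import torch
import torch.nn as nn
import torch.nn.functional as F

audio_dim, text_dim, temperature = 768, 1024, 0.07
probe = nn.Linear(audio_dim, text_dim, bias=False)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

# Stand-ins for precomputed, frozen embeddings of the same 32 objects.
audio_emb, text_emb = torch.randn(32, audio_dim), torch.randn(32, text_dim)

for _ in range(100):
    a = F.normalize(probe(audio_emb), dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = a @ t.T / temperature   # similarity of every audio-text pair
    labels = torch.arange(32)        # matching pairs lie on the diagonal
    loss = (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.T, labels)) / 2
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```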
Benchmarking LLMs on the Semantic Overlap Summarization Task | Semantic Overlap Summarization (SOS) is a constrained multi-document
summarization task, where the constraint is to capture the common/overlapping
information between two alternative narratives. While recent advancements in
Large Language Models (LLMs) have achieved superior performance in numerous
summarization tasks, a benchmarking study of the SOS task using LLMs is yet to
be performed. As LLMs' responses are sensitive to slight variations in prompt
design, a major challenge in conducting such a benchmarking study is to
systematically explore a variety of prompts before drawing a reliable
conclusion. Fortunately, very recently, the TELeR taxonomy has been proposed
which can be used to design and explore various prompts for LLMs. Using this
TELeR taxonomy and 15 popular LLMs, this paper comprehensively evaluates LLMs
on the SOS Task, assessing their ability to summarize overlapping information
from multiple alternative narratives. For evaluation, we report
well-established metrics like ROUGE, BERTScore, and SEM-F1 on two different
datasets of alternative narratives. We conclude the paper by analyzing the
strengths and limitations of various LLMs in terms of their capabilities in
capturing overlapping information. The code and datasets used to conduct this
study are available at https://anonymous.4open.science/r/llm_eval-E16D.
| 2024 | Computation and Language |
Can Large Language Models Recall Reference Location Like Humans? | When completing knowledge-intensive tasks, humans sometimes need not just an
answer but also a corresponding reference passage for auxiliary reading.
Previous methods required obtaining pre-segmented article chunks through
additional retrieval models. This paper explores leveraging the parameterized
knowledge stored during the pre-training phase of large language models (LLMs)
to independently recall reference passages from any starting position. We
propose a two-stage framework that simulates the scenario of humans recalling
easily forgotten references. Initially, the LLM is prompted to recall document
title identifiers to obtain a coarse-grained document set. Then, based on the
acquired coarse-grained document set, it recalls fine-grained passages. In the
two-stage recall process, we use constrained decoding to ensure that content
outside of the stored documents is not generated. To increase speed, we only
recall a short prefix in the second stage, then locate its position to retrieve
a complete passage. Experiments on KILT knowledge-sensitive tasks have verified
that LLMs can independently recall reference passage locations in various task
forms, and the obtained references significantly assist downstream tasks.
| 2024 | Computation and Language |
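Constrained decoding of the kind mentioned above is commonly implemented with a prefix trie over the tokenized corpus, so that only continuations present in the stored documents are allowed. The sketch below shows the trie alone; wiring it into a decoder (e.g., via a prefix-allowed-tokens hook) and the token ids are assumptions.

```python
# Prefix trie over tokenized documents: at each decoding step, only tokens
# that extend some stored document are permitted.
def build_trie(sequences):
    trie = {}
    for seq in sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def allowed_next_tokens(trie, prefix):
    node = trie
    for tok in prefix:
        if tok not in node:
            return []  # prefix has left the corpus: nothing is allowed
        node = node[tok]
    return list(node.keys())

corpus_token_ids = [[5, 9, 2, 7], [5, 9, 4], [8, 1]]
trie = build_trie(corpus_token_ids)
print(allowed_next_tokens(trie, [5, 9]))  # -> [2, 4]
```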
DiffuCOMET: Contextual Commonsense Knowledge Diffusion | Inferring contextually-relevant and diverse commonsense to understand
narratives remains challenging for knowledge models. In this work, we develop a
series of knowledge models, DiffuCOMET, that leverage diffusion to learn to
reconstruct the implicit semantic connections between narrative contexts and
relevant commonsense knowledge. Across multiple diffusion steps, our method
progressively refines a representation of commonsense facts that is anchored to
a narrative, producing contextually-relevant and diverse commonsense inferences
for an input context. To evaluate DiffuCOMET, we introduce new metrics for
commonsense inference that more closely measure knowledge diversity and
contextual relevance. Our results on two different benchmarks, ComFact and
WebNLG+, show that knowledge generated by DiffuCOMET achieves a better
trade-off between commonsense diversity, contextual relevance and alignment to
known gold references, compared to baseline knowledge models.
| 2024 | Computation and Language |
Towards Explainability and Fairness in Swiss Judgement Prediction:
Benchmarking on a Multilingual Dataset | The assessment of explainability in Legal Judgement Prediction (LJP) systems
is of paramount importance in building trustworthy and transparent systems,
particularly considering the reliance of these systems on factors that may lack
legal relevance or involve sensitive attributes. This study delves into the
realm of explainability and fairness in LJP models, utilizing Swiss Judgement
Prediction (SJP), the only available multilingual LJP dataset. We curate a
comprehensive collection of rationales that 'support' and 'oppose' judgement
from legal experts for 108 cases in German, French, and Italian. By employing
an occlusion-based explainability approach, we evaluate the explainability
performance of state-of-the-art monolingual and multilingual BERT-based LJP
models, as well as models developed with techniques such as data augmentation
and cross-lingual transfer, which demonstrated prediction performance
improvement. Notably, our findings reveal that improved prediction performance
does not necessarily correspond to enhanced explainability performance,
underscoring the significance of evaluating models from an explainability
perspective. Additionally, we introduce a novel evaluation framework, Lower
Court Insertion (LCI), which allows us to quantify the influence of lower court
information on model predictions, exposing current models' biases.
| 2024 | Computation and Language |
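A bare-bones version of the occlusion-based attribution named above: delete each sentence in turn and record how the model's predicted probability shifts. The checkpoint is a placeholder (its classification head is untrained here) and the label convention is an assumption.

```python
# Occlusion attribution: importance of sentence i = drop in the predicted
# probability when sentence i is removed from the input.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-multilingual-cased"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def approval_prob(text: str) -> float:
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits.softmax(-1)[0, 1].item()

sentences = ["The lower court ruled against the appellant.",
             "The appellant argues the ruling misapplied the statute.",
             "Costs were awarded to the respondent."]
base = approval_prob(" ".join(sentences))
for i, s in enumerate(sentences):
    occluded = " ".join(sentences[:i] + sentences[i + 1:])
    print(i, round(base - approval_prob(occluded), 4))  # sentence importance
```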
Z-AGI Labs at ClimateActivism 2024: Stance and Hate Event Detection on
Social Media | In the digital realm, rich data serves as a crucial source of insights into
the complexities of social, political, and economic landscapes. Addressing the
growing need for high-quality information on events and the imperative to
combat hate speech, this research led to the establishment of the Shared Task
on Climate Activism Stance and Hate Event Detection at CASE 2024. Focused on
climate activists contending with hate speech on social media, our study
contributes to hate speech identification from tweets. Analyzing three
sub-tasks - Hate Speech Detection (Sub-task A), Targets of Hate Speech
Identification (Sub-task B), and Stance Detection (Sub-task C) - Team Z-AGI
Labs evaluated various models, including LSTM, Xgboost, and LGBM based on
Tf-Idf. Results unveiled intriguing variations, with Catboost excelling in
Subtask-B (F1: 0.5604) and Subtask-C (F1: 0.7081), while LGBM emerged as the
top-performing model for Subtask-A (F1: 0.8684). This research provides
valuable insights into the suitability of classical machine learning models for
climate hate speech and stance detection, aiding informed model selection for
robust mechanisms.
| 2024 | Computation and Language |
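The classical pipeline the team reports (TF-IDF features feeding a gradient-boosted classifier) is straightforward to reproduce in outline; the sketch below uses toy data and default hyperparameters, not the team's configuration.

```python
# TF-IDF + LightGBM text classifier, the pattern used for the sub-tasks above.
from lightgbm import LGBMClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

tweets = ["Climate action now!", "March for the planet this weekend",
          "These activists are subhuman", "I despise these protesters"]
labels = [0, 0, 1, 1]  # toy Sub-task A style labels: 0 = non-hate, 1 = hate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    LGBMClassifier(min_child_samples=1))
clf.fit(tweets, labels)
print(clf.predict(["Protect the planet"]))
```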
Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings | We introduce a novel suite of state-of-the-art bilingual text embedding
models that are designed to support English and another target language. These
models are capable of processing lengthy text inputs with up to 8192 tokens,
making them highly versatile for a range of natural language processing tasks
such as text retrieval, clustering, and semantic textual similarity (STS)
calculations.
By focusing on bilingual models and introducing a unique multi-task learning
objective, we have significantly improved the model performance on STS tasks,
which outperforms the capabilities of existing multilingual models in both
target language understanding and cross-lingual evaluation tasks. Moreover, our
bilingual models are more efficient, requiring fewer parameters and less memory
due to their smaller vocabulary needs. Furthermore, we have expanded the
Massive Text Embedding Benchmark (MTEB) to include benchmarks for German and
Spanish embedding models. This integration aims to stimulate further research
and advancement in text embedding technologies for these languages.
| 2024 | Computation and Language |
Leveraging Large Language Models for Learning Complex Legal Concepts
through Storytelling | Making legal knowledge accessible to non-experts is crucial for enhancing
general legal literacy and encouraging civic participation in democracy.
However, legal documents are often challenging to understand for people without
legal backgrounds. In this paper, we present a novel application of large
language models (LLMs) in legal education to help non-experts learn intricate
legal concepts through storytelling, an effective pedagogical tool in conveying
complex and abstract concepts. We also introduce a new dataset LegalStories,
which consists of 295 complex legal doctrines, each accompanied by a story and
a set of multiple-choice questions generated by LLMs. To construct the dataset,
we experiment with various LLMs to generate legal stories explaining these
concepts. Furthermore, we use an expert-in-the-loop method to iteratively
design multiple-choice questions. Then, we evaluate the effectiveness of
storytelling with LLMs through an RCT experiment with legal novices on 10
samples from the dataset. We find that LLM-generated stories enhance
comprehension of legal concepts and interest in law among non-native speakers
compared to definitions alone. Moreover, stories consistently help participants
relate legal concepts to their lives. Finally, we find that learning with
stories shows a higher retention rate for non-native speakers in the follow-up
assessment. Our work has strong implications for using LLMs in promoting
teaching and learning in the legal field and beyond.
| 2024 | Computation and Language |
Re-Ex: Revising after Explanation Reduces the Factual Errors in LLM
Responses | Mitigating hallucination is one of the main challenges that must be
overcome before LLMs can be used reliably in real-world scenarios. Recently,
various methods have been proposed to check for factual errors in LLM-generated
texts and revise them accordingly. In this paper, we propose Re-Ex, a method
for revising LLM-generated texts that introduces a novel factual error
explanation step. Re-Ex revises the initial response of an LLM in three steps:
first, external tools are used to gather evidence of factual errors in the
response; second, the LLM is instructed to explain the problematic parts of the
response based on the evidence gathered in the first step; finally, the LLM
revises the response using the explanation obtained in the second step. In
addition to the explanation step, we propose new prompting techniques to reduce
the number of tokens and the wall-clock time required for the revision process.
Compared with existing methods including Factool, CoVE, and RARR, Re-Ex
provides better revision performance with less time and fewer tokens on
multiple benchmarks.
| 2024 | Computation and Language |
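The three-step loop reads naturally as a small pipeline. The sketch below is schematic: the LLM call and the evidence-retrieval tool are stubs to fill in, and the prompt wording paraphrases the steps rather than reproducing the paper's templates.

```python
# Re-Ex-style revise-after-explain pipeline: retrieve evidence, have the LLM
# explain the factual errors, then have it revise using that explanation.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-model call here")

def retrieve_evidence(response: str) -> str:
    raise NotImplementedError("plug in a search / fact-checking tool here")

def re_ex(question: str, initial_response: str) -> str:
    evidence = retrieve_evidence(initial_response)                  # step 1
    explanation = llm(                                              # step 2
        f"Evidence:\n{evidence}\n\nResponse:\n{initial_response}\n"
        "Explain which parts of the response the evidence contradicts.")
    return llm(                                                     # step 3
        f"Question: {question}\nResponse: {initial_response}\n"
        f"Explained errors: {explanation}\n"
        "Rewrite the response, fixing only the explained errors.")
```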
Creating Suspenseful Stories: Iterative Planning with Large Language
Models | Automated story generation has been one of the long-standing challenges in
NLP. Among all dimensions of stories, suspense is very common in human-written
stories but relatively under-explored in AI-generated stories. While recent
advances in large language models (LLMs) have greatly promoted language
generation in general, state-of-the-art LLMs are still unreliable when it comes
to suspenseful story generation. We propose a novel iterative-prompting-based
planning method that is grounded in two theoretical foundations of story
suspense from cognitive psychology and narratology. This theory-grounded method
works in a fully zero-shot manner and does not rely on any supervised story
corpora. To the best of our knowledge, this paper is the first attempt at
suspenseful story generation with LLMs. Extensive human evaluations of the
generated suspenseful stories demonstrate the effectiveness of our method.
| 2024 | Computation and Language |
Fact-and-Reflection (FaR) Improves Confidence Calibration of Large
Language Models | For an LLM to be trustworthy, its confidence level should be well-calibrated
with its actual performance. While it is now well established that LLM
performance is greatly impacted by prompts, the confidence calibration in prompting LLMs
has yet to be thoroughly explored. In this paper, we explore how different
prompting strategies influence LLM confidence calibration and how it could be
improved. We conduct extensive experiments on six prompting methods in the
question-answering context and we observe that, while these methods help
improve the expected LLM calibration, they also trigger LLMs to be
over-confident when responding to some instances. Inspired by human cognition,
we propose Fact-and-Reflection (FaR) prompting, which improves the LLM
calibration in two steps. First, FaR elicits the known "facts" that are
relevant to the input prompt from the LLM. And then it asks the model to
"reflect" over them to generate the final answer. Experiments show that FaR
prompting achieves significantly better calibration; it lowers the Expected
Calibration Error by 23.5% on our multi-purpose QA tasks. Notably, FaR
prompting even elicits the capability of verbally expressing concerns in less
confident scenarios, which helps trigger retrieval augmentation for solving
these harder instances.
| 2024 | Computation and Language |
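The two-step structure of FaR is easy to express as a pair of chained prompts; the templates below paraphrase the fact-elicitation and reflection steps and are assumptions, not the paper's exact wording.

```python
# FaR-style prompting: elicit relevant facts first, then answer by
# reflecting on them, allowing the model to express uncertainty.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-model call here")

def far_answer(question: str) -> str:
    facts = llm("List the facts you know that are relevant to answering:\n"
                + question)                                 # step 1: "fact"
    return llm(f"Question: {question}\nRelevant facts:\n{facts}\n"
               "Reflect on these facts and give a final answer, noting any "
               "uncertainty you have.")                     # step 2: "reflect"
```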
Clustering Document Parts: Detecting and Characterizing Influence
Campaigns From Documents | We propose a novel clustering pipeline to detect and characterize influence
campaigns from documents. This approach clusters parts of documents, detects
clusters that likely reflect an influence campaign, and then identifies
documents linked to an influence campaign via their association with the
high-influence clusters. Our approach outperforms both the direct
document-level classification and the direct document-level clustering approach
in predicting if a document is part of an influence campaign. We propose
various novel techniques to enhance our pipeline, including using an existing
event factuality prediction system to obtain document parts, and aggregating
multiple clustering experiments to improve the performance of both cluster and
document classification. Classifying documents on top of clustering not
only accurately extracts the parts of the documents that are relevant to
influence campaigns, but also captures influence campaigns as a coordinated and
holistic phenomenon. Our approach makes possible more fine-grained and
interpretable characterizations of influence campaigns from documents.
| 2024 | Computation and Language |
Extreme Encoder Output Frame Rate Reduction: Improving Computational
Latencies of Large End-to-End Models | The accuracy of end-to-end (E2E) automatic speech recognition (ASR) models
continues to improve as they are scaled to larger sizes, with some now reaching
billions of parameters. Widespread deployment and adoption of these models,
however, requires computationally efficient strategies for decoding. In the
present work, we study one such strategy: applying multiple frame reduction
layers in the encoder to compress encoder outputs into a small number of output
frames. While similar techniques have been investigated in previous work, we
achieve dramatically more reduction than has previously been demonstrated
through the use of multiple funnel reduction layers. Through ablations, we
study the impact of various architectural choices in the encoder to identify
the most effective strategies. We demonstrate that we can generate one encoder
output frame for every 2.56 sec of input speech, without significantly
affecting word error rate on a large-scale voice search task, while improving
encoder and decoder latencies by 48% and 92% respectively, relative to a strong
but computationally expensive baseline.
| 2024 | Computation and Language |
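A single funnel-style reduction step can be sketched as strided pooling over the encoder's frame axis; stacking several such layers compounds the reduction, as described above. The pooling choice and dimensions here are illustrative assumptions, not the paper's architecture.

```python
# Funnel-style frame reduction: halve the number of encoder output frames per
# layer; four stacked layers give a 16x overall reduction.
import torch
import torch.nn as nn

class FunnelReduction(nn.Module):
    def __init__(self, d_model: int, stride: int = 2):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=stride, stride=stride)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:  # (B, T, D)
        pooled = self.pool(frames.transpose(1, 2)).transpose(1, 2)  # (B, T/2, D)
        return self.proj(pooled)

x = torch.randn(1, 512, 256)  # 512 encoder frames
for layer in [FunnelReduction(256) for _ in range(4)]:
    x = layer(x)
print(x.shape)  # torch.Size([1, 32, 256]): 16x fewer frames
```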
An Effective Mixture-Of-Experts Approach For Code-Switching Speech
Recognition Leveraging Encoder Disentanglement | With the massive developments of end-to-end (E2E) neural networks, recent
years have witnessed unprecedented breakthroughs in automatic speech
recognition (ASR). However, the codeswitching phenomenon remains a major
obstacle that hinders ASR from perfection, as the lack of labeled data and the
variations between languages often lead to degradation of ASR performance. In
this paper, we focus exclusively on improving the acoustic encoder of E2E ASR
to tackle the challenge caused by the codeswitching phenomenon. Our main
contributions are threefold: First, we introduce a novel disentanglement loss
to enable the lower-layer of the encoder to capture inter-lingual acoustic
information while mitigating linguistic confusion at the higher-layer of the
encoder. Second, through comprehensive experiments, we verify that our proposed
method outperforms the prior-art methods using pretrained dual-encoders,
meanwhile having access only to the codeswitching corpus and consuming half of
the parameterization. Third, the apparent differentiation of the encoders'
output features also corroborates the complementarity between the
disentanglement loss and the mixture-of-experts (MoE) architecture.
| 2024 | Computation and Language |
When Scaling Meets LLM Finetuning: The Effect of Data, Model and
Finetuning Method | While large language models (LLMs) often adopt finetuning to unlock their
capabilities for downstream applications, our understanding of the inductive
biases (especially the scaling properties) of different finetuning methods is
still limited. To fill this gap, we conduct systematic experiments studying
whether and how different scaling factors, including LLM model size,
pretraining data size, new finetuning parameter size and finetuning data size,
affect the finetuning performance. We consider two types of finetuning --
full-model tuning (FMT) and parameter efficient tuning (PET, including prompt
tuning and LoRA), and explore their scaling behaviors in the data-limited
regime where the LLM model size substantially outweighs the finetuning data
size. Based on two sets of pretrained bilingual LLMs from 1B to 16B and
experiments on bilingual machine translation and multilingual summarization
benchmarks, we find that 1) LLM finetuning follows a power-based multiplicative
joint scaling law between finetuning data size and each other scaling factor;
2) LLM finetuning benefits more from LLM model scaling than pretraining data
scaling, and PET parameter scaling is generally ineffective; and 3) the optimal
finetuning method is highly task- and finetuning data-dependent. We hope our
findings could shed light on understanding, selecting and developing LLM
finetuning methods.
| 2024 | Computation and Language |
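Finding 1) can be written out as an equation. One plausible reading of the multiplicative joint power law (the exact symbols and the irreducible-loss term are assumptions about the paper's form) is:

```latex
% L-hat: predicted test loss; D_f: finetuning data size; X: one other scaling
% factor (model size, pretraining data size, or PET parameter size).
% A, E, \alpha, \beta are fitted constants; E is an assumed irreducible loss.
\hat{\mathcal{L}}(X, D_f) = A \cdot \frac{1}{X^{\alpha}} \cdot \frac{1}{D_f^{\beta}} + E
```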
Measuring Vision-Language STEM Skills of Neural Models | We introduce a new challenge to test the STEM skills of neural models. The
problems in the real world often require solutions that combine knowledge from
STEM (science, technology, engineering, and math). Unlike existing datasets,
our dataset requires the understanding of multimodal vision-language
information of STEM. Ours is one of the largest and most comprehensive
datasets for the challenge. It includes 448 skills and 1,073,146
questions spanning all STEM subjects. Compared to existing datasets that often
focus on examining expert-level ability, our dataset includes fundamental
skills and questions designed based on the K-12 curriculum. We also add
state-of-the-art foundation models such as CLIP and GPT-3.5-Turbo to our
benchmark. Results show that the recent model advances only help master a very
limited number of lower grade-level skills (2.5% in the third grade) in our
dataset. In fact, these models are still well below (averaging 54.7%) the
performance of elementary students, not to mention near expert-level
performance. To understand and increase the performance on our dataset, we
teach the models on a training split of our dataset. Even though we observe
improved performance, the model performance remains relatively low compared to
average elementary students. To solve STEM problems, we will need novel
algorithmic innovations from the community.
| 2024 | Computation and Language |
Reasoning in Conversation: Solving Subjective Tasks through Dialogue
Simulation for Large Language Models | Large Language Models (LLMs) have achieved remarkable performance in
objective tasks such as open-domain question answering and mathematical
reasoning, which can often be solved through recalling learned factual
knowledge or chain-of-thought style reasoning. However, we find that the
performance of LLMs on subjective tasks such as metaphor recognition and dark
humor detection is still unsatisfactory. Compared to objective tasks,
subjective tasks focus more on interpretation or emotional response rather than
a universally accepted reasoning pathway. Based on the characteristics of the
tasks and the strong dialogue-generation capabilities of LLMs, we propose RiC
(Reasoning in Conversation), a method that focuses on solving subjective tasks
through dialogue simulation. The motivation of RiC is to mine useful contextual
information by simulating dialogues instead of supplying chain-of-thought style
rationales, thereby offering potentially useful knowledge behind dialogues for
giving the final answers. We evaluate both API-based and open-source LLMs
including GPT-4, ChatGPT, and OpenChat across twelve tasks. Experimental
results show that RiC can yield significant improvement compared with various
baselines.
| 2024 | Computation and Language |
MATHSENSEI: A Tool-Augmented Large Language Model for Mathematical
Reasoning | Tool-augmented Large Language Models (TALM) are known to enhance the skillset
of large language models (LLM), thereby leading to their improved reasoning
abilities across many tasks. While TALMs have been successfully employed in
different question-answering benchmarks, their efficacy on complex mathematical
reasoning benchmarks, and the potential complementary benefits offered by tools
for knowledge retrieval and mathematical equation solving, are open research
questions. In this work, we present MATHSENSEI, a tool-augmented large language
model for mathematical reasoning. Augmented with tools for knowledge retrieval
(Bing Web Search), program execution (Python), and symbolic equation solving
(Wolfram-Alpha), we study the complementary benefits of these tools through
evaluations on mathematical reasoning datasets. We perform exhaustive ablations
on MATH, a popular dataset for evaluating mathematical reasoning across diverse
mathematical disciplines. We also conduct experiments involving well-known tool
planners to study the impact of tool sequencing on the model performance.
MATHSENSEI achieves 13.5% better accuracy over gpt-3.5-turbo with
chain-of-thought on the MATH dataset. We further observe that TALMs are not as
effective for simpler math word problems (in GSM-8k), and the benefit increases
as the complexity and required knowledge increase (progressively over AQuA,
MMLU-Math, and higher level complex questions in MATH). The code and data are
available at https://github.com/Debrup-61/MathSensei.
| 2024 | Computation and Language |
Beyond the Known: Investigating LLMs Performance on Out-of-Domain Intent
Detection | Out-of-domain (OOD) intent detection aims to examine whether the user's query
falls outside the predefined domain of the system, which is crucial for the
proper functioning of task-oriented dialogue (TOD) systems. Previous methods
address it by fine-tuning discriminative models. Recently, some studies have
been exploring the application of large language models (LLMs) represented by
ChatGPT to various downstream tasks, but their ability on the OOD detection
task is still unclear. This paper conducts a comprehensive evaluation of LLMs
under various experimental settings and then outlines the strengths and
weaknesses of LLMs. We find that LLMs exhibit strong zero-shot and few-shot
capabilities, but are still at a disadvantage compared to models fine-tuned
with full resources. Going deeper, through a series of additional analysis
experiments, we discuss and summarize the challenges faced by LLMs and provide
guidance for future work including injecting domain knowledge, strengthening
knowledge transfer from IND (in-domain) to OOD, and understanding long
instructions.
| 2024 | Computation and Language |
Speak Out of Turn: Safety Vulnerability of Large Language Models in
Multi-turn Dialogue | Large Language Models (LLMs) have been demonstrated to generate illegal or
unethical responses, particularly when subjected to "jailbreak." Research on
jailbreak has highlighted the safety issues of LLMs. However, prior studies
have predominantly focused on single-turn dialogue, ignoring the potential
complexities and risks presented by multi-turn dialogue, a crucial mode through
which humans derive information from LLMs. In this paper, we argue that humans
could exploit multi-turn dialogue to induce LLMs into generating harmful
information. LLMs tend not to reject cautionary or borderline unsafe queries,
even when each turn in a multi-turn dialogue closely serves a single malicious
purpose. Therefore, by decomposing an unsafe query into several
sub-queries for multi-turn dialogue, we induced LLMs to answer harmful
sub-questions incrementally, culminating in an overall harmful response. Our
experiments, conducted across a wide range of LLMs, indicate current
inadequacies in the safety mechanisms of LLMs in multi-turn dialogue. Our
findings expose vulnerabilities of LLMs in complex scenarios involving
multi-turn dialogue, presenting new challenges for the safety of LLMs.
| 2024 | Computation and Language |
Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning | Parameter-efficient fine-tuning (PEFT) is a popular method for tailoring
pre-trained large language models (LLMs), especially as the models' scale and
the diversity of tasks increase. Low-rank adaptation (LoRA) is based on the
idea that the adaptation process is intrinsically low-dimensional, i.e.,
significant model changes can be represented with relatively few parameters.
However, decreasing the rank encounters challenges with generalization errors
for specific tasks when compared to full-parameter fine-tuning. We present
MELoRA, a mini-ensemble of low-rank adapters that uses fewer trainable parameters
while maintaining a higher rank, thereby offering improved performance
potential. The core idea is to freeze original pretrained weights and train a
group of mini LoRAs with only a small number of parameters. This can capture a
significant degree of diversity among mini LoRAs, thus promoting better
generalization ability. We conduct a theoretical analysis and empirical studies
on various NLP tasks. Our experimental results show that, compared to LoRA,
MELoRA achieves better performance with 8 times fewer trainable parameters on
natural language understanding tasks and 36 times fewer trainable parameters on
instruction following tasks, which demonstrates the effectiveness of MELoRA.
| 2024 | Computation and Language |
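The core construction reads as a block-diagonal LoRA: n mini adapter pairs, each acting on its own slice of the hidden dimension, so the stacked update has rank n·r with far fewer parameters than a single rank-(n·r) LoRA. The sketch below is an interpretation of that idea with assumed shapes, not the authors' code.

```python
# MELoRA-style mini-ensemble: n tiny LoRA pairs on disjoint feature slices,
# giving a block-diagonal, higher-rank update from few trainable parameters.
import torch
import torch.nn as nn

class MiniEnsembleLoRA(nn.Module):
    def __init__(self, base: nn.Linear, n: int = 8, r: int = 1):
        super().__init__()
        assert base.in_features % n == 0 and base.out_features % n == 0
        for p in base.parameters():          # original weights stay frozen
            p.requires_grad_(False)
        self.base, self.n = base, n
        d_in, d_out = base.in_features // n, base.out_features // n
        self.A = nn.Parameter(torch.randn(n, d_in, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(n, r, d_out))  # zero init: delta = 0

    def forward(self, x):                    # x: (batch, in_features)
        chunks = x.view(*x.shape[:-1], self.n, -1)       # n feature slices
        delta = torch.einsum("...ni,nir,nro->...no", chunks, self.A, self.B)
        return self.base(x) + delta.flatten(-2)

layer = MiniEnsembleLoRA(nn.Linear(512, 512))
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```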
Can LLM Generate Culturally Relevant Commonsense QA Data? Case Study in
Indonesian and Sundanese | Large Language Models (LLMs) are increasingly being used to generate
synthetic data for training and evaluating models. However, it is unclear
whether they can generate good-quality question answering (QA) datasets
that incorporate the knowledge and cultural nuance embedded in a language,
especially for low-resource languages. In this study, we investigate the
effectiveness of using LLMs in generating culturally relevant commonsense QA
datasets for Indonesian and Sundanese languages. To do so, we create datasets
for these languages using various methods involving both LLMs and human
annotators. Our experiments show that the current best-performing LLM, GPT-4
Turbo, is capable of generating questions with adequate knowledge in Indonesian
but not in Sundanese, highlighting the performance discrepancy between medium-
and lower-resource languages. We also benchmark various LLMs on our generated
datasets and find that they perform better on the LLM-generated datasets
compared to those created by humans.
| 2024 | Computation and Language |
Probing Multimodal Large Language Models for Global and Local Semantic
Representation | The success of large language models has inspired researchers to transfer
their exceptional representational ability to other modalities. Several recent
works leverage image-caption alignment datasets to train multimodal large
language models (MLLMs), which achieve state-of-the-art performance on
image-to-text tasks. However, there are very few studies exploring whether
MLLMs truly understand the complete image information, i.e., global
information, or if they can only capture some local object information. In this
study, we find that the intermediate layers of models, rather than the topmost
layers, encode more global semantic information: their representation vectors
perform better on visual-language entailment tasks. We further
probe models for local semantic representation through object detection tasks.
We conclude that the topmost layers may excessively focus on local
information, leading to a diminished ability to encode global information.
| 2024 | Computation and Language |
SKT5SciSumm -- A Hybrid Generative Approach for Multi-Document
Scientific Summarization | Summarization for scientific text has shown significant benefits both for the
research community and human society. Given the fact that the nature of
scientific text is distinctive and the input of the multi-document
summarization task is substantially long, the task requires sufficient
embedding generation and text truncation without losing important information.
To tackle these issues, in this paper, we propose SKT5SciSumm - a hybrid
framework for multi-document scientific summarization (MDSS). We leverage the
Sentence-Transformer version of Scientific Paper Embeddings using
Citation-Informed Transformers (SPECTER) to encode and represent textual
sentences, allowing for efficient extractive summarization using k-means
clustering. We employ the T5 family of models to generate abstractive summaries
using extracted sentences. SKT5SciSumm achieves state-of-the-art performance on
the Multi-XScience dataset. Through extensive experiments and evaluation, we
showcase the benefits of our model by using less complicated models to achieve
remarkable results, thereby highlighting its potential in advancing the field
of multi-document summarization for scientific text.
| 2024 | Computation and Language |
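The extract-then-abstract shape of the framework can be outlined in a few lines: SPECTER sentence embeddings, k-means to pick cluster-central sentences, then a T5 model for the abstractive pass. Model identifiers and the cluster count below are assumptions to be checked, not the paper's settings.

```python
# Extractive step (SPECTER embeddings + k-means medoids) followed by an
# abstractive step (T5), mirroring the hybrid pipeline described above.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min
from transformers import pipeline

sentences = ["Paper A proposes a graph-based encoder.",
             "Paper B extends this with citation features.",
             "Paper C reports gains on Multi-XScience.",
             "Ablations show the encoder drives the gains.",
             "We compare against strong abstractive baselines."]

encoder = SentenceTransformer("sentence-transformers/allenai-specter")
embeddings = encoder.encode(sentences)

km = KMeans(n_clusters=2, n_init=10).fit(embeddings)
closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, embeddings)
extracted = " ".join(sentences[i] for i in sorted(closest))

summarizer = pipeline("summarization", model="t5-small")
print(summarizer(extracted, max_length=48, min_length=8)[0]["summary_text"])
```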
Unsupervised multiple choices question answering via universal corpus | Unsupervised question answering is a promising yet challenging task, which
alleviates the burden of building large-scale annotated data in a new domain.
It motivates us to study the unsupervised multiple-choice question answering
(MCQA) problem. In this paper, we propose a novel framework designed to
generate synthetic MCQA data based solely on contexts from the universal domain
without relying on any form of manual annotation. Possible answers are
extracted and used to produce related questions, then we leverage both named
entities (NE) and knowledge graphs to discover plausible distractors to form
complete synthetic samples. Experiments on multiple MCQA datasets demonstrate
the effectiveness of our method.
| 2024 | Computation and Language |
RECOST: External Knowledge Guided Data-efficient Instruction Tuning | In the current landscape of large language models (LLMs), the process of
instruction tuning serves as an essential step. Considering the high computing
power overhead, data-efficient instruction tuning was proposed to reduce the
training data size in this process, aiming at selecting high-quality
instructional data. Nevertheless, we argue that most current data-efficient
instruction-tuning methods are highly dependent on the quality of the original
instruction-tuning dataset. When it comes to datasets synthesized by LLMs, a
common scenario in this field, dirty samples will even be selected with a
higher probability than other samples. To address these challenges, we utilize
external knowledge (relevant examples or paragraphs) to evaluate samples
synthesized by LLMs with an in-context-based relative predictive entropy. Based
on the new metric, we propose a framework, dubbed RECOST, which
integrates external-knowledge-base re-ranking and diversity-consistent sampling
into a single pipeline. Through extensive experiments on several synthetic
datasets (Alpaca and Alpaca-gpt4), we demonstrate the effectiveness of our
method and achieve even better results with only 1% of the full
dataset.
| 2024 | Computation and Language |
SoFA: Shielded On-the-fly Alignment via Priority Rule Following | The alignment problem in Large Language Models (LLMs) involves adapting them
to the broad spectrum of human values. This requirement challenges existing
alignment methods due to the diversity of preferences and regulatory standards.
This paper introduces a novel alignment paradigm, priority rule following,
which defines rules as the primary control mechanism in each dialog,
prioritizing them over user instructions. Our preliminary analysis reveals that
even advanced LLMs, such as GPT-4, exhibit shortcomings in understanding
and prioritizing the rules. Therefore, we present PriorityDistill, a
semi-automated approach for distilling priority following signals from LLM
simulations to ensure robust rule integration and adherence. Our experiments
show that this method not only effectively minimizes misalignments utilizing
only one general rule but also adapts smoothly to various unseen rules,
ensuring they are shielded from hijacking and that the model responds
appropriately.
| 2024 | Computation and Language |
A Dataset for Metaphor Detection in Early Medieval Hebrew Poetry | There is a large volume of late antique and medieval Hebrew texts. They
represent a crucial linguistic and cultural bridge between Biblical and modern
Hebrew. Poetry is prominent in these texts, and one of its main characteristics
is the frequent use of metaphor. Distinguishing figurative and literal language
use is a major task for scholars of the Humanities, especially in the fields of
literature, linguistics, and hermeneutics. This paper presents a new,
challenging dataset of late antique and medieval Hebrew poetry with expert
annotations of metaphor, as well as some baseline results, which we hope will
facilitate further research in this area.
| 2024 | Computation and Language |
KoDialogBench: Evaluating Conversational Understanding of Language
Models with Korean Dialogue Benchmark | As language models are often deployed as chatbot assistants, it becomes a
virtue for models to engage in conversations in a user's first language. While
these models are trained on a wide range of languages, a comprehensive
evaluation of their proficiency in low-resource languages such as Korean has
been lacking. In this work, we introduce KoDialogBench, a benchmark designed to
assess language models' conversational capabilities in Korean. To this end, we
collect native Korean dialogues on daily topics from public sources, or
translate dialogues from other languages. We then structure these conversations
into diverse test datasets, spanning from dialogue comprehension to response
selection tasks. Leveraging the proposed benchmark, we conduct extensive
evaluations and analyses of various language models to measure a foundational
understanding of Korean dialogues. Experimental results indicate that there
exists significant room for improvement in models' conversation skills.
Furthermore, our in-depth comparisons across different language models
highlight the effectiveness of recent training techniques in enhancing
conversational proficiency. We anticipate that KoDialogBench will promote the
progress towards conversation-aware Korean language models.
| 2024 | Computation and Language |
FairBelief -- Assessing Harmful Beliefs in Language Models | Language Models (LMs) have been shown to inherit undesired biases that might
hurt minorities and underrepresented groups if such systems were integrated
into real-world applications without careful fairness auditing. This paper
proposes FairBelief, an analytical approach to capture and assess beliefs,
i.e., propositions that an LM may embed with different degrees of confidence
and that covertly influence its predictions. With FairBelief, we leverage
prompting to study the behavior of several state-of-the-art LMs across
different previously neglected axes, such as model scale and likelihood,
assessing predictions on a fairness dataset specifically designed to quantify
the hurtfulness of LMs' outputs. Finally, we conclude with an in-depth qualitative
assessment of the beliefs emitted by the models. We apply FairBelief to English
LMs, revealing that, although these architectures enable high performances on
diverse natural language processing tasks, they show hurtful beliefs about
specific genders. Interestingly, training procedure and dataset, model scale,
and architecture induce beliefs of different degrees of hurtfulness.
| 2024 | Computation and Language |
Spot the bot: Coarse-Grained Partition of Semantic Paths for Bots and
Humans | Nowadays, technology is rapidly advancing: bots are writing comments,
articles, and reviews. Due to this fact, it is crucial to know if the text was
written by a human or by a bot. This paper focuses on comparing structures of
the coarse-grained partitions of semantic paths for human-written and
bot-generated texts. We compare the clusterizations of datasets of n-grams from
literary texts and texts generated by several bots. The hypothesis is that the
structures and clusterizations are different. Our research supports the
hypothesis. As the semantic structure may be different for different languages,
we investigate Russian, English, German, and Vietnamese languages.
| 2023 | Computation and Language |
Benchmarking GPT-4 on Algorithmic Problems: A Systematic Evaluation of
Prompting Strategies | Large Language Models (LLMs) have revolutionized the field of Natural
Language Processing thanks to their ability to reuse knowledge acquired on
massive text corpora on a wide variety of downstream tasks, with minimal (if
any) tuning steps. At the same time, it has been repeatedly shown that LLMs
lack systematic generalization, i.e., the ability to extrapolate learned
statistical regularities outside the training distribution. In this work, we
offer a systematic benchmarking of GPT-4, one of the most advanced LLMs
available, on three algorithmic tasks whose difficulty can be controlled with
two parameters. We compare the performance
of GPT-4 with that of its predecessor (GPT-3.5) and with a variant of the
Transformer-Encoder architecture recently introduced to solve similar tasks,
the Neural Data Router. We find that the deployment of advanced prompting
techniques allows GPT-4 to reach superior accuracy on all tasks, demonstrating
that state-of-the-art LLMs constitute a very strong baseline also in
challenging tasks that require systematic generalization.
| 2024 | Computation and Language |
Investigating Continual Pretraining in Large Language Models: Insights
and Implications | This paper studies the evolving domain of Continual Learning (CL) in large
language models (LLMs), with a focus on developing strategies for efficient and
sustainable training. Our primary emphasis is on continual domain-adaptive
pretraining, a process designed to equip LLMs with the ability to integrate new
information from various domains while retaining previously learned knowledge
and enhancing cross-domain knowledge transfer without relying on
domain-specific identification. Unlike previous studies, which mostly
concentrate on a limited selection of tasks or domains and primarily aim to
address the issue of forgetting, our research evaluates the adaptability and
capabilities of LLMs to changing data landscapes in practical scenarios. To
this end, we introduce a new benchmark designed to measure the adaptability of
LLMs to these evolving data environments, offering a comprehensive framework
for evaluation. We examine the impact of model size on learning efficacy and
forgetting, as well as how the progression and similarity of emerging domains
affect the knowledge transfer within these models. Our findings uncover several
key insights: (i) when the sequence of domains shows semantic similarity,
continual pretraining enables LLMs to better specialize in the current domain
compared to stand-alone fine-tuning, (ii) training across a diverse range of
domains enhances both backward and forward knowledge transfer, and (iii)
smaller models are particularly sensitive to continual pretraining, showing the
most significant rates of both forgetting and learning. We posit that our
research marks a shift towards establishing a more realistic benchmark for
investigating CL in LLMs, and has the potential to play a key role in guiding
the direction of future research in the field.
| 2,024 | Computation and Language |
Consistency Matters: Explore LLMs Consistency From a Black-Box
Perspective | Nowadays, both commercial and open-source academic LLMs have become
mainstream in NLP. However, there is still a lack of research on LLM
consistency, i.e., the expectation that a model's internal parameters and
capabilities remain unchanged throughout the various stages of its research and
deployment. This issue affects both the industrial and academic sectors:
verifying consistency is often time-consuming and labor-intensive, and the
additional cost of secondary deployment results in economic and time losses.
To fill this gap, we build an LLM consistency task dataset and design several
baselines, choosing models of diverse scales for the main
experiments. Specifically, in the LightGBM experiment, we use traditional NLG
metrics (i.e., ROUGE, BLEU, METEOR) as the features for model training.
The resulting model surpasses manual evaluation, GPT-3.5, and the other
models in the main experiment, achieving the best performance. In the end, we
use the best performing LightGBM model as the base model to build the
evaluation tool, which can effectively assist in the deployment of business
models. Our code and tool demo are available at
https://github.com/heavenhellchen/Consistency.git
| 2,024 | Computation and Language |
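The feature-based consistency check described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: we assume pairs of outputs from two deployments of the same LLM, score each pair with ROUGE/BLEU/METEOR, and feed the scores to a LightGBM classifier that predicts consistency. The variables `pairs` and `labels` are hypothetical stand-ins for real data.

```python
# Sketch: NLG-metric features + LightGBM, in the spirit of the abstract above.
# Assumes rouge-score, nltk, and lightgbm are installed.
# (meteor_score needs nltk's wordnet data: nltk.download('wordnet'))
import lightgbm as lgb
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
smooth = SmoothingFunction().method1

def pair_features(out_a: str, out_b: str) -> list:
    """Turn one output pair into a feature vector of similarity scores."""
    rouge = scorer.score(out_a, out_b)
    ref, hyp = out_a.split(), out_b.split()
    return [
        rouge["rouge1"].fmeasure,
        rouge["rougeL"].fmeasure,
        sentence_bleu([ref], hyp, smoothing_function=smooth),
        meteor_score([ref], hyp),
    ]

# pairs: list of (output_a, output_b); labels: 1 = consistent, 0 = not (hypothetical).
X = [pair_features(a, b) for a, b in pairs]
clf = lgb.LGBMClassifier(n_estimators=200).fit(X, labels)
```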
Enhancing EEG-to-Text Decoding through Transferable Representations from
Pre-trained Contrastive EEG-Text Masked Autoencoder | Reconstructing natural language from non-invasive electroencephalography
(EEG) holds great promise as a language decoding technology for brain-computer
interfaces (BCIs). However, EEG-based language decoding is still in its nascent
stages, facing several technical issues such as: 1) Absence of a hybrid
strategy that can effectively integrate cross-modality (between EEG and text)
self-learning with intra-modality self-reconstruction of EEG features or
textual sequences; 2) Under-utilization of large language models (LLMs) to
enhance EEG-based language decoding. To address the above issues, we propose the
Contrastive EEG-Text Masked Autoencoder (CET-MAE), a novel model that
orchestrates compound self-supervised learning across and within EEG and text
through a dedicated multi-stream encoder. Furthermore, we develop a framework
called E2T-PTR (EEG-to-Text decoding using Pretrained Transferable
Representations), which leverages pre-trained modules alongside the EEG stream
from CET-MAE and further enables an LLM (specifically BART) to decode text from
EEG sequences. Comprehensive experiments conducted on the popular text-evoked
EEG database, ZuCo, demonstrate the superiority of E2T-PTR, which outperforms
the state-of-the-art in ROUGE-1 F1 and BLEU-4 scores by 8.34% and 32.21%,
respectively. These results indicate significant advancements in the field and
underscore the proposed framework's potential to enable more powerful and
widespread BCI applications.
| 2,024 | Computation and Language |
Exploiting Emotion-Semantic Correlations for Empathetic Response
Generation | Empathetic response generation aims to generate empathetic responses by
understanding the speaker's emotional feelings from the language of dialogue.
Recent methods capture emotional words in the language of communicators and
construct them as static vectors to perceive nuanced emotions. However,
linguistic research has shown that emotional words in language are dynamic and
correlate with other semantic roles in grammar, i.e., words carrying semantic
meaning. Previous methods overlook these two characteristics, which easily
leads to misunderstandings of emotions and neglect of key semantics. To address
this issue, we propose a dynamic Emotion-Semantic Correlation Model (ESCM) for
empathetic dialogue generation tasks. ESCM
constructs dynamic emotion-semantic vectors through the interaction of context
and emotions. We introduce dependency trees to reflect the correlations between
emotions and semantics. Based on dynamic emotion-semantic vectors and
dependency trees, we propose a dynamic correlation graph convolutional network
to guide the model in learning context meanings in dialogue and generating
empathetic responses. Experimental results on the EMPATHETIC-DIALOGUES dataset
show that ESCM understands semantics and emotions more accurately and expresses
fluent and informative empathetic responses. Our analysis results also indicate
that the correlations between emotions and semantics are frequently used in
dialogues, which is of great significance for empathetic perception and
expression.
| 2,024 | Computation and Language |
Deep Learning Based Named Entity Recognition Models for Recipes | Food touches our lives through various endeavors, including flavor,
nourishment, health, and sustainability. Recipes are cultural capsules
transmitted across generations via unstructured text. Automated protocols for
recognizing named entities, the building blocks of recipe text, are of immense
value for various applications ranging from information extraction to novel
recipe generation. Named entity recognition is a technique for extracting
information from unstructured or semi-structured data with known labels.
Starting with manually-annotated data of 6,611 ingredient phrases, we created
an augmented dataset of 26,445 phrases cumulatively. Simultaneously, we
systematically cleaned and analyzed ingredient phrases from RecipeDB, the
gold-standard recipe data repository, and annotated them using the Stanford
NER. Based on the analysis, we sampled a subset of 88,526 phrases using a
clustering-based approach while preserving the diversity to create the
machine-annotated dataset. A thorough investigation of NER approaches on these
three datasets, spanning statistical models, fine-tuning of deep learning-based
language models, and few-shot prompting of large language models (LLMs), provides
deep insights. We conclude that few-shot prompting of LLMs yields abysmal
performance, whereas the fine-tuned spaCy-transformer emerges as the best model
with macro-F1 scores of 95.9%, 96.04%, and 95.71% for the manually-annotated,
augmented, and machine-annotated datasets, respectively.
| 2,024 | Computation and Language |
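As a concrete reference point for the fine-tuning setup mentioned above, here is a minimal spaCy (v3) NER training loop over ingredient phrases. This is a generic sketch, not the paper's pipeline; the QUANTITY/UNIT/NAME labels and the toy example are hypothetical.

```python
# Minimal spaCy NER training sketch for ingredient phrases (hypothetical labels).
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for label in ("QUANTITY", "UNIT", "NAME"):
    ner.add_label(label)

# One toy annotated phrase; real training would use thousands of these.
TRAIN = [("2 cups chopped onion",
          {"entities": [(0, 1, "QUANTITY"), (2, 6, "UNIT"), (15, 20, "NAME")]})]

optimizer = nlp.initialize()
for _ in range(20):                      # a few epochs over the tiny set
    for text, ann in TRAIN:
        example = Example.from_dict(nlp.make_doc(text), ann)
        nlp.update([example], sgd=optimizer)

doc = nlp("3 tbsp minced garlic")
print([(ent.text, ent.label_) for ent in doc.ents])
```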
Training-Free Long-Context Scaling of Large Language Models | The ability of Large Language Models (LLMs) to process and generate coherent
text is markedly weakened when the number of input tokens exceeds their
pretraining length. Given the expensive overhead of finetuning large-scale
models with longer sequences, we propose Dual Chunk Attention (DCA), which
enables Llama2 70B to support context windows of more than 100k tokens without
continual training. By decomposing the attention computation for long sequences
into chunk-based modules, DCA manages to effectively capture the relative
positional information of tokens within the same chunk (Intra-Chunk) and across
distinct chunks (Inter-Chunk), as well as integrates seamlessly with Flash
Attention. In addition to its impressive extrapolation capability, DCA achieves
performance on practical long-context tasks that is comparable to or even
better than that of finetuned models. When compared with proprietary models,
our training-free 70B model attains 94% of the performance of gpt-3.5-16k,
indicating it is a viable open-source alternative. All code and data used in
this work are released at \url{https://github.com/HKUNLP/ChunkLlama}.
| 2,024 | Computation and Language |
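The chunked relative-position idea behind DCA can be illustrated with a small helper that builds a relative-distance matrix: exact offsets within a chunk, capped offsets across chunks. This is only a sketch of the intra-/inter-chunk decomposition under our own simplifying assumptions, not the paper's exact formulation or its Flash Attention integration.

```python
# Sketch of intra-/inter-chunk relative positions (simplified, not DCA's exact rule).
import numpy as np

def chunked_rel_pos(seq_len: int, chunk: int, cap: int) -> np.ndarray:
    """rel[i, j] = query-to-key distance used for position encoding.
    Same chunk: the true distance i - j. Different chunks: distance clamped
    to `cap` so it never exceeds the pretraining window. Under a causal mask
    (applied separately), only pairs with j <= i are actually attended to."""
    q = np.arange(seq_len)[:, None]
    k = np.arange(seq_len)[None, :]
    dist = q - k                                   # >= 0 on/below the diagonal
    same_chunk = (q // chunk) == (k // chunk)
    return np.where(same_chunk, dist, np.minimum(dist, cap))

rel = chunked_rel_pos(seq_len=16, chunk=4, cap=7)
print(rel[10, 9], rel[10, 1])   # intra-chunk exact (1), inter-chunk capped (7)
```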
Can GPT-4 Identify Propaganda? Annotation and Detection of Propaganda
Spans in News Articles | The use of propaganda has spiked on mainstream and social media, aiming to
manipulate or mislead users. While efforts to automatically detect propaganda
techniques in textual, visual, or multimodal content have increased, most of
them primarily focus on English content. The majority of the recent initiatives
targeting medium to low-resource languages produced relatively small annotated
datasets, with a skewed distribution, posing challenges for the development of
sophisticated propaganda detection models. To address this challenge, we
carefully develop the largest propaganda dataset to date, ArPro, comprised of
8K paragraphs from newspaper articles, labeled at the text span level following
a taxonomy of 23 propagandistic techniques. Furthermore, our work offers the
first attempt to understand the performance of large language models (LLMs),
using GPT-4, for fine-grained propaganda detection from text. Results showed
that GPT-4's performance degrades as the task moves from simply classifying a
paragraph as propagandistic or not, to the fine-grained task of detecting
propaganda techniques and their manifestation in text. Compared to models
fine-tuned on the dataset for propaganda detection at different classification
granularities, GPT-4 is still far behind. Finally, we evaluate GPT-4 on a
dataset consisting of six other languages for span detection, and results
suggest that the model struggles with the task across languages. Our dataset
and resources will be released to the community.
| 2,024 | Computation and Language |
Prescribing Large Language Models for Perioperative Care: What's The
Right Dose for Pre-trained Models? | Postoperative risk predictions can inform effective perioperative care
management and planning. We aimed to assess whether clinical large language
models (LLMs) can predict postoperative risks using clinical texts with various
training strategies. The main cohort involved 84,875 records from Barnes Jewish
Hospital (BJH) system between 2018 and 2021. Methods were replicated on Beth
Israel Deaconess's MIMIC dataset. In both studies, the mean follow-up duration,
based on the length of postoperative ICU stay, was less than 7 days. For the BJH
dataset, outcomes included 30-day mortality, pulmonary embolism (PE) and
pneumonia. Three domain adaptation and finetuning strategies were implemented
for BioGPT, ClinicalBERT and BioClinicalBERT: self-supervised objectives;
incorporating labels with semi-supervised fine-tuning; and foundational
modelling through multi-task learning. Model performance was compared using the
area under the receiver operating characteristic curve (AUROC) and the area
under the precision recall curve (AUPRC) for classification tasks, and mean
squared error (MSE) and R2 for regression tasks. Pre-trained LLMs outperformed
traditional word embeddings, with absolute maximal gains of 38.3% for AUROC and
14% for AUPRC. Adapting models further improved performance: (1)
self-supervised finetuning by 3.2% for AUROC and 1.5% for AUPRC; (2)
semi-supervised finetuning by 1.8% for AUROC and 2% for AUPRC, compared to
self-supervised finetuning; (3) foundational modelling by 3.6% for AUROC and
2.6% for AUPRC, compared to self-supervised finetuning. Pre-trained clinical
LLMs offer opportunities for postoperative risk predictions in unforeseen data,
with peaks in foundational models indicating the potential of task-agnostic
learning towards the generalizability of LLMs in perioperative care.
| 2,024 | Computation and Language |
REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain
Question Answering | Considering the limited internal parametric knowledge, retrieval-augmented
generation (RAG) has been widely used to extend the knowledge scope of large
language models (LLMs). Despite the extensive efforts on RAG research, in
existing methods, LLMs cannot precisely assess the relevance of retrieved
documents, thus likely leading to misleading or even incorrect utilization of
external knowledge (i.e., retrieved documents). To address this issue, in this
paper, we propose REAR, a RElevance-Aware Retrieval-augmented approach for
open-domain question answering (QA). As the key motivation, we aim to enhance
the self-awareness of source relevance for LLMs, so as to adaptively utilize
external knowledge in RAG systems. Specially, we develop a new architecture for
LLM based RAG system, by incorporating a specially designed rank head that
precisely assesses the relevance of retrieved documents. Furthermore, we
propose an improved training method based on bi-granularity relevance fusion
and noise-resistant training. By combining the improvements in both
architecture and training, our proposed REAR can better utilize external
knowledge by effectively perceiving the relevance of retrieved documents.
Experiments on four open-domain QA tasks show that REAR significantly
outperforms a number of previous competitive RAG approaches. Our code and data
can be accessed at https://github.com/RUCAIBox/REAR.
| 2,024 | Computation and Language |
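A relevance-scoring head of the kind described above can be as simple as a linear probe on the LLM's final hidden state for the encoded (query, document) pair. This is a schematic sketch under our own assumptions; REAR's actual rank head, bi-granularity relevance fusion, and training objective are more involved.

```python
# Schematic rank head on top of a decoder LLM (assumed interface, not REAR's code).
import torch
import torch.nn as nn

class RankHead(nn.Module):
    """Scores the relevance of a retrieved document to a query from the
    hidden state at the last position of the encoded (query + document) pair."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 1)

    def forward(self, last_hidden: torch.Tensor) -> torch.Tensor:
        # last_hidden: (batch, seq, hidden) from the LLM backbone
        return torch.sigmoid(self.proj(last_hidden[:, -1, :])).squeeze(-1)

head = RankHead(hidden_size=4096)
fake_hidden = torch.randn(2, 128, 4096)   # stand-in for real LLM hidden states
print(head(fake_hidden))                  # relevance score in [0, 1] per document
```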
Extreme Miscalibration and the Illusion of Adversarial Robustness | Deep learning-based Natural Language Processing (NLP) models are vulnerable
to adversarial attacks, where small perturbations can cause a model to
misclassify. Adversarial Training (AT) is often used to increase model
robustness. However, we have discovered an intriguing phenomenon: deliberately
or accidentally miscalibrating models masks gradients in a way that interferes
with adversarial attack search methods, giving rise to an apparent increase in
robustness. We show that this observed gain in robustness is an illusion of
robustness (IOR), and demonstrate how an adversary can perform various forms of
test-time temperature calibration to nullify the aforementioned interference
and allow the adversarial attack to find adversarial examples. Hence, we urge
the NLP community to incorporate test-time temperature scaling into their
robustness evaluations to ensure that any observed gains are genuine. Finally,
we show how the temperature can be scaled during \textit{training} to improve
genuine robustness.
| 2,024 | Computation and Language |
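Test-time temperature scaling, the remedy the authors urge evaluators to include, is a one-line transformation of the logits. A minimal sketch: dividing logits by a temperature T recalibrates the softmax without changing the argmax, which is exactly why miscalibration can mask gradients without changing accuracy. The toy logit values are illustrative.

```python
# Test-time temperature scaling: recalibrates confidence, preserves predictions.
import torch

def scale_logits(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    """T > 1 softens an over-confident (miscalibrated) model; T < 1 sharpens.
    The argmax is unchanged, so accuracy is identical either way."""
    return logits / temperature

logits = torch.tensor([[12.0, -9.0, -8.5]])        # extreme, near-saturated logits
print(torch.softmax(logits, dim=-1))               # ~one-hot: gradients vanish
print(torch.softmax(scale_logits(logits, 10.0), dim=-1))  # smoothed, attackable
```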
Latent Attention for Linear Time Transformers | The time complexity of the standard attention mechanism in a transformer
scales quadratically with the length of the sequence. We introduce a method to
reduce this to linear scaling, based on defining attention via latent
vectors. The method is readily usable as a drop-in replacement for the standard
attention mechanism. Our "Latte Transformer" model can be implemented for both
bidirectional and unidirectional tasks, with the causal version allowing a
recurrent implementation which is memory and time-efficient during inference of
language generation tasks. Whilst next token prediction scales linearly with
the sequence length for a standard transformer, a Latte Transformer requires
constant time to compute the next token. The empirical performance of our
method is comparable to standard attention, yet allows scaling to context
windows much larger than practical in standard attention.
| 2,024 | Computation and Language |
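One way to realize attention through latent vectors, as described above, is to route every token through L learned latents: tokens attend over latents, and latents summarize the values over time, so cost is O(T·L) rather than O(T²). The sketch below is our own bidirectional rendering of that idea, not the Latte Transformer's exact parameterization; the causal variant would instead accumulate the latent summaries recurrently.

```python
# Bidirectional latent attention sketch: O(T*L) instead of O(T^2).
import torch
import torch.nn as nn

class LatentAttention(nn.Module):
    def __init__(self, dim: int, n_latents: int):
        super().__init__()
        self.to_q = nn.Linear(dim, n_latents)   # token -> latent affinity
        self.to_k = nn.Linear(dim, n_latents)   # token -> latent membership
        self.to_v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, dim)
        q = torch.softmax(self.to_q(x), dim=-1)   # (B, T, L): distribution over latents
        k = torch.softmax(self.to_k(x), dim=1)    # (B, T, L): distribution over time
        v = self.to_v(x)                          # (B, T, D)
        latent_summary = torch.einsum("btl,btd->bld", k, v)   # (B, L, D)
        return torch.einsum("btl,bld->btd", q, latent_summary)

attn = LatentAttention(dim=64, n_latents=8)
print(attn(torch.randn(2, 1024, 64)).shape)       # torch.Size([2, 1024, 64])
```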
Predict the Next Word: <Humans exhibit uncertainty in this task and
language models _____> | Language models (LMs) are statistical models trained to assign probability to
human-generated text. As such, it is reasonable to question whether they
approximate well the linguistic variability exhibited by humans. This form of
statistical assessment is difficult to perform at the passage level, for it
requires acceptability judgements (i.e., human evaluation) or a robust
automated proxy (which is non-trivial). At the word level, however, given some
context, samples from an LM can be assessed via exact matching against a
prerecorded dataset of alternative single-word continuations of the available
context. We exploit this fact and evaluate the LM's ability to reproduce
variability that humans (in particular, a population of English speakers)
exhibit in the 'next word prediction' task. This can be seen as assessing a
form of calibration, which, in the context of text classification, Baan et al.
(2022) termed calibration to human uncertainty. We assess GPT2, BLOOM and
ChatGPT and find that they exhibit fairly low calibration to human uncertainty.
We also verify the failure of expected calibration error (ECE) to reflect this,
and as such, advise the community against relying on it in this setting.
| 2,024 | Computation and Language |
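The word-level assessment described above can be made concrete by comparing the human next-word distribution (estimated from prerecorded continuations) with the LM's distribution over the same candidate words; total variation distance is one simple gauge. The sketch and its toy counts are illustrative only and are not the paper's metric.

```python
# Sketch: compare human next-word frequencies with LM probabilities.
from collections import Counter

def total_variation(p: dict, q: dict) -> float:
    """TV distance between two distributions over the same word vocabulary."""
    words = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in words)

# Hypothetical data: 10 speakers continued "The cat sat on the ..."
human = Counter({"mat": 6, "sofa": 3, "floor": 1})
n = sum(human.values())
p_human = {w: c / n for w, c in human.items()}

# Hypothetical LM probabilities for the same context, renormalized over candidates.
p_lm = {"mat": 0.9, "sofa": 0.05, "floor": 0.05}

print(f"TV distance: {total_variation(p_human, p_lm):.3f}")  # 0 = perfectly calibrated
```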
Retrieval is Accurate Generation | Standard language models generate text by selecting tokens from a fixed,
finite, and standalone vocabulary. We introduce a novel method that selects
context-aware phrases from a collection of supporting documents. One of the
most significant challenges for this paradigm shift is determining the training
oracles, because a string of text can be segmented in various ways and each
segment can be retrieved from numerous possible documents. To address this, we
propose to initialize the training oracles using linguistic heuristics and,
more importantly, bootstrap the oracles through iterative self-reinforcement.
Extensive experiments show that our model not only outperforms standard
language models on a variety of knowledge-intensive tasks but also demonstrates
improved generation quality in open-ended text generation. For instance,
compared to the standard language model counterpart, our model raises the
accuracy from 23.47% to 36.27% on OpenbookQA, and improves the MAUVE score from
42.61% to 81.58% in open-ended text generation. Remarkably, our model also
achieves the best performance and the lowest latency among several
retrieval-augmented baselines. In conclusion, we assert that retrieval is more
accurate generation and hope that our work will encourage further research on
this new paradigm shift.
| 2,024 | Computation and Language |
Unleashing the Potential of Large Language Models as Prompt Optimizers:
An Analogical Analysis with Gradient-based Model Optimizers | Automatic prompt optimization is an important approach to improving the
performance of large language models (LLMs). Recent research demonstrates the
potential of using LLMs as prompt optimizers, which can generate improved task
prompts via iterative refinement. In this paper, we propose a novel perspective
to investigate the design of LLM-based prompt optimizers, by drawing an analogy
with gradient-based model optimizers. To connect these two approaches, we
identify two pivotal factors in model parameter learning: update direction and
update method. Focusing on these two aspects, we borrow the theoretical framework
and learning methods from gradient-based optimization to design improved
strategies for LLM-based prompt optimizers. By systematically analyzing a rich
set of improvement strategies, we further develop a capable Gradient-inspired
LLM-based Prompt Optimizer called GPO. At each step, it first retrieves
relevant prompts from the optimization trajectory as the update direction.
Then, it utilizes the generation-based refinement strategy to perform the
update, while controlling the edit distance through a cosine-based decay
strategy. Extensive experiments demonstrate the effectiveness and efficiency of
GPO. In particular, GPO brings an additional improvement of up to 56.8% on
Big-Bench Hard and 55.3% on MMLU compared to baseline methods.
| 2,024 | Computation and Language |
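The cosine-based decay that GPO uses to control edit distance can be sketched as a simple schedule: the number of edits allowed between consecutive prompts shrinks along a cosine curve as optimization proceeds, permitting large rewrites early and gentle refinements late. The specific budget values are assumptions for illustration.

```python
# Cosine-decayed edit-distance budget for iterative prompt refinement (illustrative).
import math

def edit_budget(step: int, total_steps: int, max_edits: int, min_edits: int = 1) -> int:
    """Large rewrites early, small refinements late: budget follows a cosine decay."""
    cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return max(min_edits, round(min_edits + (max_edits - min_edits) * cos))

for step in (0, 5, 10, 15, 20):
    print(step, edit_budget(step, total_steps=20, max_edits=30))
# step 0 -> 30 edits allowed, step 10 -> 16, step 20 -> 1.
```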
Linguistic Knowledge Can Enhance Encoder-Decoder Models (If You Let It) | In this paper, we explore the impact of augmenting pre-trained
Encoder-Decoder models, specifically T5, with linguistic knowledge for the
prediction of a target task. In particular, we investigate whether fine-tuning
a T5 model on an intermediate task that predicts structural linguistic
properties of sentences modifies its performance in the target task of
predicting sentence-level complexity. Our study encompasses diverse experiments
conducted on Italian and English datasets, employing both monolingual and
multilingual T5 models at various sizes. Results obtained for both languages
and in cross-lingual configurations show that linguistically motivated
intermediate fine-tuning generally has a positive impact on target task
performance, especially when applied to smaller models and in scenarios with
limited data availability.
| 2,024 | Computation and Language |
Neural Automated Writing Evaluation with Corrective Feedback | The utilization of technology in second language learning and teaching has
become ubiquitous. For the assessment of writing specifically, automated
writing evaluation (AWE) and grammatical error correction (GEC) have become
immensely popular and effective methods for enhancing writing proficiency and
delivering instant and individualized feedback to learners. By leveraging the
power of natural language processing (NLP) and machine learning algorithms, AWE
and GEC systems have been developed separately to provide language learners
with automated corrective feedback and scoring that is more accurate and less
subjective than that of human examiners. In this paper, we propose an
integrated system for automated writing evaluation with corrective feedback as
a means of bridging the gap between AWE and GEC results for second language
learners. This system enables language learners to simulate the essay writing
tests: a student writes and submits an essay, and the system returns the
assessment of the writing along with suggested grammatical error corrections.
Given that automated scoring and grammatical correction are more efficient and
cost-effective than human grading, this integrated system would also alleviate
the burden of manually correcting innumerable essays.
| 2,024 | Computation and Language |
Fine-Grained Natural Language Inference Based Faithfulness Evaluation
for Diverse Summarisation Tasks | We study existing approaches to leverage off-the-shelf Natural Language
Inference (NLI) models for the evaluation of summary faithfulness and argue
that these are sub-optimal due to the granularity level considered for premises
and hypotheses. That is, the smallest content unit considered as a hypothesis is a
sentence, and premises are made up of a fixed number of document sentences. We
propose a novel approach, namely InFusE, that uses a variable premise size and
simplifies summary sentences into shorter hypotheses. Departing from previous
studies which focus on single short document summarisation, we analyse NLI
based faithfulness evaluation for diverse summarisation tasks. We introduce
DiverSumm, a new benchmark comprising long form summarisation (long documents
and summaries) and diverse summarisation tasks (e.g., meeting and
multi-document summarisation). In experiments, InFusE obtains superior
performance across the different summarisation tasks. Our code and data are
available at https://github.com/HJZnlp/infuse.
| 2,024 | Computation and Language |
From Text Segmentation to Smart Chaptering: A Novel Benchmark for
Structuring Video Transcriptions | Text segmentation is a fundamental task in natural language processing, where
documents are split into contiguous sections. However, prior research in this
area has been constrained by limited datasets, which are either small in scale,
synthesized, or only contain well-structured documents. In this paper, we
address these limitations by introducing a novel benchmark YTSeg focusing on
spoken content that is inherently more unstructured and both topically and
structurally diverse. As part of this work, we introduce an efficient
hierarchical segmentation model, MiniSeg, which outperforms state-of-the-art
baselines. Lastly, we expand the notion of text segmentation to a more
practical "smart chaptering" task that involves the segmentation of
unstructured content, the generation of meaningful segment titles, and a
potential real-time application of the models.
| 2,024 | Computation and Language |
Are LLMs Capable of Data-based Statistical and Causal Reasoning?
Benchmarking Advanced Quantitative Reasoning with Data | Quantitative reasoning is a critical skill to analyze data, yet the
assessment of such ability remains limited. To address this gap, we introduce
the Quantitative Reasoning with Data (QRData) benchmark, aiming to evaluate
Large Language Models' capability in statistical and causal reasoning with
real-world data. The benchmark comprises a carefully constructed dataset of 411
questions accompanied by data sheets from textbooks, online learning materials,
and academic papers. To compare models' quantitative reasoning abilities on
data and text, we enrich the benchmark with an auxiliary set of 290 text-only
questions, namely QRText. We evaluate natural language reasoning, program-based
reasoning, and agent reasoning methods including Chain-of-Thought,
Program-of-Thoughts, ReAct, and code interpreter assistants on diverse models.
The strongest model, GPT-4, achieves an accuracy of 58%, leaving large room
for improvement. Among open-source models, Deepseek-coder-instruct, a code LLM
pretrained on 2T tokens, attains the highest accuracy of 37%. Analysis reveals
that models encounter difficulties in data analysis and causal reasoning, and
struggle to use causal knowledge and the provided data simultaneously. Code and
data are in https://github.com/xxxiaol/QRData.
| 2,024 | Computation and Language |
Beyond prompt brittleness: Evaluating the reliability and consistency of
political worldviews in LLMs | Due to the widespread use of large language models (LLMs) in ubiquitous
systems, we need to understand whether they embed a specific worldview and what
these views reflect. Recent studies report that, prompted with political
questionnaires, LLMs show left-liberal leanings. However, it is as yet unclear
whether these leanings are reliable (robust to prompt variations) and whether
the leaning is consistent across policy domains. We propose a
series of tests which assess the reliability and consistency of LLMs' stances
on political statements based on a dataset of voting-advice questionnaires
collected from seven EU countries and annotated for policy domains. We study
LLMs ranging in size from 7B to 70B parameters and find that their reliability
increases with parameter count. Larger models show overall stronger alignment
with left-leaning parties but differ among policy programs: They evince a
(left-wing) positive stance towards environmental protection and social welfare,
but also a (right-wing) stance towards law and order, with no consistent preferences in foreign
policy, migration, and economy.
| 2,024 | Computation and Language |
NextLevelBERT: Investigating Masked Language Modeling with Higher-Level
Representations for Long Documents | While (large) language models have significantly improved over the last
years, they still struggle to sensibly process long sequences found, e.g., in
books, due to the quadratic scaling of the underlying attention mechanism. To
address this, we propose NextLevelBERT, a Masked Language Model operating not
on tokens, but on higher-level semantic representations in the form of text
embeddings. We pretrain NextLevelBERT to predict the vector representation of
entire masked text chunks and evaluate the effectiveness of the resulting
document vectors on three task types: 1) Semantic Textual Similarity via
zero-shot document embeddings, 2) Long document classification, 3)
Multiple-choice question answering. We find that next level Masked Language
Modeling is an effective technique to tackle long-document use cases and can
outperform much larger embedding models as long as the required level of detail
is not too high. We make model and code available.
| 2,024 | Computation and Language |
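The core training signal of NextLevelBERT, predicting the embedding of a masked text chunk, can be sketched as follows. We assume chunks are pre-encoded into vectors by some off-the-shelf sentence encoder; the masking rate, cosine loss, and tiny transformer are illustrative choices, not the paper's configuration.

```python
# Sketch: masked language modeling over chunk embeddings instead of tokens.
import torch
import torch.nn as nn

dim, n_chunks, batch = 384, 32, 8
chunk_embs = torch.randn(batch, n_chunks, dim)   # stand-in: sentence-encoder outputs

mask_token = nn.Parameter(torch.zeros(dim))
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True), num_layers=4)

mask = torch.rand(batch, n_chunks) < 0.15        # mask 15% of chunk positions
inputs = torch.where(mask.unsqueeze(-1), mask_token.expand_as(chunk_embs), chunk_embs)

pred = encoder(inputs)
# Train the model to reconstruct the masked chunk vectors (cosine loss here).
loss = 1.0 - nn.functional.cosine_similarity(pred[mask], chunk_embs[mask], dim=-1).mean()
loss.backward()
print(float(loss))
```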
RAVEL: Evaluating Interpretability Methods on Disentangling Language
Model Representations | Individual neurons participate in the representation of multiple high-level
concepts. To what extent can different interpretability methods successfully
disentangle these roles? To help address this question, we introduce RAVEL
(Resolving Attribute-Value Entanglements in Language Models), a dataset that
enables tightly controlled, quantitative comparisons between a variety of
existing interpretability methods. We use the resulting conceptual framework to
define the new method of Multi-task Distributed Alignment Search (MDAS), which
allows us to find distributed representations satisfying multiple causal
criteria. With Llama2-7B as the target language model, MDAS achieves
state-of-the-art results on RAVEL, demonstrating the importance of going beyond
neuron-level analyses to identify features distributed across activations. We
release our benchmark at https://github.com/explanare/ravel.
| 2,024 | Computation and Language |
AmbigNLG: Addressing Task Ambiguity in Instruction for NLG | In this study, we introduce AmbigNLG, a new task designed to tackle the
challenge of task ambiguity in instructions for Natural Language Generation
(NLG) tasks. Despite the impressive capabilities of Large Language Models
(LLMs) in understanding and executing a wide range of tasks through natural
language interaction, their performance is significantly hindered by the
ambiguity present in real-world instructions. To address this, AmbigNLG seeks
to identify and mitigate such ambiguities, aiming to refine instructions to
match user expectations better. We introduce a dataset, AmbigSNI-NLG,
consisting of 2,500 instances, and develop an ambiguity taxonomy for
categorizing and annotating instruction ambiguities. Our approach demonstrates
substantial improvements in text generation quality, highlighting the critical
role of clear and specific instructions in enhancing LLM performance in NLG
tasks.
| 2,024 | Computation and Language |
Tower: An Open Multilingual Large Language Model for Translation-Related
Tasks | While general-purpose large language models (LLMs) demonstrate proficiency on
multiple tasks within the domain of translation, approaches based on open LLMs
are competitive only when specializing on a single task. In this paper, we
propose a recipe for tailoring LLMs to multiple tasks present in translation
workflows. We perform continued pretraining on a multilingual mixture of
monolingual and parallel data, creating TowerBase, followed by finetuning on
instructions relevant for translation processes, creating TowerInstruct. Our
final model surpasses open alternatives on several tasks relevant to
translation workflows and is competitive with general-purpose closed LLMs. To
facilitate future research, we release the Tower models, our specialization
dataset, an evaluation framework for LLMs focusing on the translation
ecosystem, and a collection of model generations, including ours, on our
benchmark.
| 2,024 | Computation and Language |
Evaluating Very Long-Term Conversational Memory of LLM Agents | Existing works on long-term open-domain dialogues focus on evaluating model
responses within contexts spanning no more than five chat sessions. Despite
advancements in long-context large language models (LLMs) and retrieval
augmented generation (RAG) techniques, their efficacy in very long-term
dialogues remains unexplored. To address this research gap, we introduce a
machine-human pipeline to generate high-quality, very long-term dialogues by
leveraging LLM-based agent architectures and grounding their dialogues on
personas and temporal event graphs. Moreover, we equip each agent with the
capability of sharing and reacting to images. The generated conversations are
verified and edited by human annotators for long-range consistency and
grounding to the event graphs. Using this pipeline, we collect LoCoMo, a
dataset of very long-term conversations, each encompassing 300 turns and 9K
tokens on average, over up to 35 sessions. Based on LoCoMo, we present a
comprehensive evaluation benchmark to measure long-term memory in models,
encompassing question answering, event summarization, and multi-modal dialogue
generation tasks. Our experimental results indicate that LLMs exhibit
challenges in understanding lengthy conversations and comprehending long-range
temporal and causal dynamics within dialogues. Employing strategies like
long-context LLMs or RAG can offer improvements but these models still
substantially lag behind human performance.
| 2,024 | Computation and Language |
Towards Optimal Learning of Language Models | This work studies the general principles of improving the learning of
language models (LMs), which aims at reducing the necessary training steps for
achieving superior performance. Specifically, we present a theory for the
optimal learning of LMs. We first propose an objective that optimizes LM
learning by maximizing the data compression ratio in an
"LM-training-as-lossless-compression" view. Then, we derive a theorem, named
Learning Law, to reveal the properties of the dynamics in the optimal learning
process under our objective. The theorem is then validated by experiments on a
linear classification and a real-world language modeling task. Finally, we
empirically verify that the optimal learning of LMs essentially stems from the
improvement of the coefficients in the scaling law of LMs, indicating great
promise and significance for designing practical learning acceleration methods.
Our code can be found at https://aka.ms/LearningLaw.
| 2,024 | Computation and Language |
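The "LM-training-as-lossless-compression" view mentioned above can be written out: under arithmetic coding, a model assigns a corpus a codelength equal to its negative log-likelihood, so maximizing the compression ratio amounts to minimizing the codelength accumulated over the whole training run. The notation below is our own paraphrase of that standard correspondence, not the paper's exact objective.

```latex
% Lossless-compression view of LM training (our paraphrase, not the paper's statement)
\begin{align*}
L(\theta) &= \sum_{t=1}^{T} -\log_2 p_\theta(x_t \mid x_{<t})
  && \text{codelength of the corpus under model } \theta, \\
R(\theta) &= \frac{b\,T}{L(\theta)}
  && \text{compression ratio vs.\ a raw encoding at } b \text{ bits per token.}
\end{align*}
```

Maximizing R over an entire training run, with the weights evolving as data is encoded, corresponds to minimizing the area under the training loss curve, which is the quantity the optimal-learning objective targets under this view.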
Massive Activations in Large Language Models | We observe an empirical phenomenon in Large Language Models (LLMs) -- very
few activations exhibit significantly larger values than others (e.g., 100,000
times larger). We call them massive activations. First, we demonstrate the
widespread existence of massive activations across various LLMs and
characterize their locations. Second, we find their values largely stay
constant regardless of the input, and they function as indispensable bias terms
in LLMs. Third, these massive activations lead to the concentration of
attention probabilities to their corresponding tokens, and further, implicit
bias terms in the self-attention output. Last, we also study massive
activations in Vision Transformers.
| 2,024 | Computation and Language |
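Spotting such outliers is straightforward with forward hooks: record each module's output and flag values orders of magnitude above the module's median magnitude. A generic PyTorch sketch, with the 1,000x threshold as our own illustrative choice rather than the paper's criterion.

```python
# Sketch: flag "massive activations" (values vastly larger than typical) via hooks.
import torch
import torch.nn as nn

def find_massive(model: nn.Module, x: torch.Tensor, ratio: float = 1000.0):
    hits = []
    def hook(module, inputs, output):
        if isinstance(output, torch.Tensor):
            mags = output.detach().abs()
            if mags.max() > ratio * mags.median():   # e.g. 1000x typical magnitude
                hits.append((module.__class__.__name__,
                             float(mags.max()), float(mags.median())))
    handles = [m.register_forward_hook(hook) for m in model.modules()]
    try:
        model(x)
    finally:
        for h in handles:
            h.remove()
    return hits

# Toy demo on a small MLP; on a real LLM, x would be the input token embeddings.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
print(find_massive(model, torch.randn(4, 16)))
```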
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits | Recent research, such as BitNet, is paving the way for a new era of 1-bit
Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant,
namely BitNet b1.58, in which every single parameter (or weight) of the LLM is
ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16)
Transformer LLM with the same model size and training tokens in terms of both
perplexity and end-task performance, while being significantly more
cost-effective in terms of latency, memory, throughput, and energy consumption.
More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for
training new generations of LLMs that are both high-performance and
cost-effective. Furthermore, it enables a new computation paradigm and opens
the door for designing specific hardware optimized for 1-bit LLMs.
| 2,024 | Computation and Language |
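The ternary weight constraint can be illustrated with an absmean-style quantizer: scale weights by their mean absolute value, round, and clamp to {-1, 0, 1}. This follows the quantization described in the BitNet line of work as we understand it; treat the exact epsilon and the straight-through training details as assumptions.

```python
# Absmean-style ternary quantization sketch: every weight becomes -1, 0, or +1.
import torch

def quantize_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Returns ternary weights plus the scale needed to de-quantize:
    w ~= gamma * w_q, with w_q in {-1, 0, 1}."""
    gamma = w.abs().mean()                          # per-tensor absmean scale
    w_q = (w / (gamma + eps)).round().clamp_(-1, 1)
    return w_q, gamma

w = torch.randn(4, 4)
w_q, gamma = quantize_ternary(w)
print(w_q)                                          # entries in {-1., 0., 1.}
print((w - gamma * w_q).abs().mean())               # quantization error
```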
TruthX: Alleviating Hallucinations by Editing Large Language Models in
Truthful Space | Large Language Models (LLMs) have demonstrated remarkable capabilities across
various tasks. However, they sometimes suffer from producing hallucinations,
particularly in cases where they may generate untruthful responses despite
possessing the correct knowledge. In this paper, we propose TruthX, an
inference-time method to elicit the truthfulness of LLMs by editing their
internal representations in truthful space. TruthX employs an auto-encoder to
map an LLM's representations into semantic and truthful latent spaces,
respectively, and applies contrastive learning to identify a truthful editing
direction within the truthful space. During inference, by editing the LLM's
internal representations in truthful space, TruthX effectively enhances the
truthfulness of LLMs. Experiments show that TruthX effectively improves the
truthfulness of 13 advanced LLMs by an average of 20% on TruthfulQA benchmark.
Further analyses suggest that the truthful space acquired by TruthX plays a
pivotal role in controlling LLM to produce truthful or hallucinatory responses.
| 2,024 | Computation and Language |
Stable LM 2 1.6B Technical Report | We introduce StableLM 2 1.6B, the first in a new generation of our language
model series. In this technical report, we present in detail the data and
training procedure leading to the base and instruction-tuned versions of
StableLM 2 1.6B. The weights for both models are available via Hugging Face for
anyone to download and use. The report contains thorough evaluations of these
models, including zero- and few-shot benchmarks, multilingual benchmarks, and
the MT benchmark focusing on multi-turn dialogues. At the time of publishing
this report, StableLM 2 1.6B was the state-of-the-art open model under 2B
parameters by a significant margin. Given its appealing small size, we also
provide throughput measurements on a number of edge devices. In addition, we
open source several quantized checkpoints and provide their performance metrics
compared to the original model.
| 2,024 | Computation and Language |
Follow My Instruction and Spill the Beans: Scalable Data Extraction from
Retrieval-Augmented Generation Systems | Retrieval-Augmented Generation (RAG) improves pre-trained models by
incorporating external knowledge at test time to enable customized adaptation.
We study the risk of datastore leakage in Retrieval-In-Context RAG Language
Models (LMs). We show that an adversary can exploit LMs' instruction-following
capabilities to easily extract text data verbatim from the datastore of RAG
systems built with instruction-tuned LMs via prompt injection. The
vulnerability exists for a wide range of modern LMs that span Llama2,
Mistral/Mixtral, Vicuna, SOLAR, WizardLM, Qwen1.5, and Platypus2, and the
exploitability worsens as the model size scales up. Extending our study to
production RAG models GPTs, we design an attack that can cause datastore
leakage with a 100% success rate on 25 randomly selected customized GPTs with
at most 2 queries, and we extract text data verbatim at a rate of 41% from a
book of 77,000 words and 3% from a corpus of 1,569,000 words by prompting the
GPTs with only 100 queries generated by themselves.
| 2,024 | Computation and Language |
BlendSQL: A Scalable Dialect for Unifying Hybrid Question Answering in
Relational Algebra | Many existing end-to-end systems for hybrid question answering tasks can
often be boiled down to a "prompt-and-pray" paradigm, where the user has
limited control and insight into the intermediate reasoning steps used to
achieve the final result. Additionally, due to the context size limitation of
many transformer-based LLMs, it is often not reasonable to expect that the full
structured and unstructured context will fit into a given prompt in a zero-shot
setting, let alone a few-shot setting. We introduce BlendSQL, a superset of
SQLite to act as a unified dialect for orchestrating reasoning across both
unstructured and structured data. For hybrid question answering tasks involving
multi-hop reasoning, we encode the full decomposed reasoning roadmap into a
single interpretable BlendSQL query. Notably, we show that BlendSQL can scale
to massive datasets and improve the performance of end-to-end systems while
using 35% fewer tokens. Our code is available and installable as a package at
https://github.com/parkervg/blendsql.
| 2,024 | Computation and Language |
JMLR: Joint Medical LLM and Retrieval Training for Enhancing Reasoning
and Professional Question Answering Capability | With the explosive growth of medical data and the rapid development of
artificial intelligence technology, precision medicine has emerged as a key to
enhancing the quality and efficiency of healthcare services. In this context,
Large Language Models (LLMs) play an increasingly vital role in medical
knowledge acquisition and question-answering systems. To further improve the
performance of these systems in the medical domain, we introduce an innovative
method that jointly trains an Information Retrieval (IR) system and an LLM
during the fine-tuning phase. This approach, which we call Joint Medical LLM
and Retrieval Training (JMLR), is designed to overcome the challenges faced by
traditional models in handling medical question-answering tasks. By employing a
synchronized training mechanism, JMLR reduces the demand for computational
resources and enhances the model's ability to leverage medical knowledge for
reasoning and answering questions. Our experimental results demonstrate that
JMLR-13B (81.2% on AMBOSS, 61.3% on MedQA) outperforms models using
conventional pre-training and fine-tuning, such as Meditron-70B (76.4% on AMBOSS, 60.3%
on MedQA). For models of the same 7B scale, JMLR-7B (68.7% on AMBOSS, 51.7% on
MedQA) significantly outperforms other public models (Meditron-7B: 50.1%,
47.9%), proving its superiority in terms of cost (our training time: 37 hours,
traditional method: 144 hours), efficiency, and effectiveness in medical
question-answering tasks. Through this work, we provide a new and efficient
knowledge enhancement tool for healthcare, demonstrating the great potential of
integrating IR and LLM training in precision medical information retrieval and
question-answering systems.
| 2,024 | Computation and Language |
Researchy Questions: A Dataset of Multi-Perspective, Decompositional
Questions for LLM Web Agents | Existing question answering (QA) datasets are no longer challenging to most
powerful Large Language Models (LLMs). Traditional QA benchmarks like TriviaQA,
NaturalQuestions, ELI5 and HotpotQA mainly study ``known unknowns'' with clear
indications of both what information is missing, and how to find it to answer
the question. Hence, good performance on these benchmarks provides a false
sense of security. A yet unmet need of the NLP community is a bank of
non-factoid, multi-perspective questions involving a great deal of unclear
information needs, i.e. ``unknown unknowns''. We claim we can find such
questions in search engine logs, which is surprising because most
question-intent queries are indeed factoid. We present Researchy Questions, a
dataset of search engine queries tediously filtered to be non-factoid,
``decompositional'' and multi-perspective. We show that users spend a lot of
``effort'' on these questions in terms of signals like clicks and session
length, and that they are also challenging for GPT-4. We also show that ``slow
thinking'' answering techniques, like decomposition into sub-questions, show
benefit over answering directly. We release $\sim$ 100k Researchy Questions,
along with the Clueweb22 URLs that were clicked.
| 2,024 | Computation and Language |
A Language Model based Framework for New Concept Placement in Ontologies | We investigate the task of inserting new concepts extracted from texts into
an ontology using language models. We explore an approach with three steps:
edge search which is to find a set of candidate locations to insert (i.e.,
subsumptions between concepts), edge formation and enrichment which leverages
the ontological structure to produce and enhance the edge candidates, and edge
selection, which eventually locates the edge into which the concept is placed. In all steps, we
propose to leverage neural methods, where we apply embedding-based methods and
contrastive learning with Pre-trained Language Models (PLMs) such as BERT for
edge search, and adapt a BERT fine-tuning-based multi-label Edge-Cross-encoder,
and Large Language Models (LLMs) such as GPT series, FLAN-T5, and Llama 2, for
edge selection. We evaluate the methods on recent datasets created using the
SNOMED CT ontology and the MedMentions entity linking benchmark. The best
settings in our framework use fine-tuned PLM for search and a multi-label
Cross-encoder for selection. Zero-shot prompting of LLMs is still not adequate
for the task, and we propose explainable instruction tuning of LLMs for
improved performance. Our study shows the advantages of PLMs and highlights the
encouraging performance of LLMs that motivates future studies.
| 2,024 | Computation and Language |
Extracting Lexical Features from Dialects via Interpretable Dialect
Classifiers | Identifying linguistic differences between dialects of a language often
requires expert knowledge and meticulous human analysis. This is largely due to
the complexity and nuance involved in studying various dialects. We present a
novel approach to extract distinguishing lexical features of dialects by
utilizing interpretable dialect classifiers, even in the absence of human
experts. We explore both post-hoc and intrinsic approaches to interpretability,
conduct experiments on Mandarin, Italian, and Low Saxon, and experimentally
demonstrate that our method successfully identifies key language-specific
lexical features that contribute to dialectal variations.
| 2,024 | Computation and Language |
LLM-Resistant Math Word Problem Generation via Adversarial Attacks | Large language models (LLMs) have significantly transformed the educational
landscape. As current plagiarism detection tools struggle to keep pace with
LLMs' rapid advancements, the educational community faces the challenge of
assessing students' true problem-solving abilities in the presence of LLMs. In
this work, we explore a new paradigm for ensuring fair evaluation -- generating
adversarial examples which preserve the structure and difficulty of the
original questions aimed for assessment, but are unsolvable by LLMs. Focusing
on the domain of math word problems, we leverage abstract syntax trees to
structurally generate adversarial examples that cause LLMs to produce incorrect
answers by simply editing the numeric values in the problems. We conduct
experiments on various open- and closed-source LLMs, quantitatively and
qualitatively demonstrating that our method significantly degrades their math
problem-solving ability. We identify shared vulnerabilities among LLMs and
propose a cost-effective approach to attack high-cost models. Additionally, we
conduct automatic analysis on math problems and investigate the cause of
failure to guide future research on LLM's mathematical capability.
| 2,024 | Computation and Language |
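The numeric-editing step can be illustrated with Python's ast module: parse the arithmetic skeleton of a problem, perturb every numeric literal, and re-render, keeping the structure and difficulty intact. This mirrors only the gist of the paper's abstract-syntax-tree approach; the perturbation rule and the example expression are hypothetical stand-ins.

```python
# Sketch: perturb numeric values in a problem's arithmetic skeleton via the AST.
# Requires Python 3.9+ for ast.unparse.
import ast
import random

class NumberPerturber(ast.NodeTransformer):
    """Replaces every integer literal with a nearby value, preserving structure."""
    def visit_Constant(self, node: ast.Constant) -> ast.AST:
        if isinstance(node.value, int):
            return ast.copy_location(
                ast.Constant(value=node.value + random.randint(1, 9)), node)
        return node

expr = "12 * 4 + 7"                # arithmetic extracted from a word problem
tree = ast.parse(expr, mode="eval")
new_tree = ast.fix_missing_locations(NumberPerturber().visit(tree))
print(ast.unparse(new_tree), "=", eval(compile(new_tree, "<ast>", "eval")))
```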
Multitask Multilingual Model Adaptation with Featurized Low-Rank
Mixtures | Adapting pretrained large language models (LLMs) to various downstream tasks
in tens or hundreds of human languages is computationally expensive.
Parameter-efficient fine-tuning (PEFT) significantly reduces the adaptation
cost, by tuning only a small amount of parameters. However, directly applying
PEFT methods such as LoRA (Hu et al., 2022) on diverse dataset mixtures could
lead to suboptimal performance due to limited parameter capacity and negative
interference among different datasets. In this work, we propose Featurized
Low-rank Mixtures (FLix), a novel PEFT method designed for effective multitask
multilingual tuning. FLix associates each unique dataset feature, such as the
dataset's language or task, with its own low-rank weight update parameters. By
composing feature-specific parameters for each dataset, FLix can accommodate
diverse dataset mixtures and generalize better to unseen datasets. Our
experiments show that FLix leads to significant improvements over a variety of
tasks for both supervised learning and zero-shot settings using different
training data mixtures.
| 2,024 | Computation and Language |
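FLix's core idea, one low-rank update per dataset feature, composed additively, can be sketched as below: a frozen base linear layer plus a sum of LoRA-style deltas selected by the active features (e.g., language and task). The dimensions, rank, and feature names are illustrative assumptions, not the paper's configuration.

```python
# Sketch: featurized low-rank mixtures -- one LoRA-style delta per dataset feature.
import torch
import torch.nn as nn

class FLixLinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, features: list, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)      # backbone stays frozen
        self.A = nn.ParameterDict({f: nn.Parameter(torch.randn(rank, d_in) * 0.01)
                                   for f in features})
        self.B = nn.ParameterDict({f: nn.Parameter(torch.zeros(d_out, rank))
                                   for f in features})

    def forward(self, x: torch.Tensor, active: list) -> torch.Tensor:
        out = self.base(x)
        for f in active:                            # compose per-feature updates
            out = out + x @ self.A[f].T @ self.B[f].T
        return out

layer = FLixLinear(64, 64, features=["lang:sw", "lang:fi", "task:qa", "task:mt"])
y = layer(torch.randn(2, 64), active=["lang:sw", "task:qa"])
print(y.shape)
```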
Acquiring Linguistic Knowledge from Multimodal Input | In contrast to children, language models (LMs) exhibit considerably inferior
data efficiency when acquiring language. In this submission to the BabyLM
Challenge (Warstadt et al., 2023), we test the hypothesis that this data
efficiency gap is partly caused by a lack of multimodal input and grounding in
the learning environment of typical language models. Although previous work
looking into this question found that multimodal training can even harm
language-only performance, we speculate that these findings can be attributed
to catastrophic forgetting of complex language due to fine-tuning on captions
data. To test our hypothesis, we perform an ablation study on FLAVA (Singh et
al., 2022), a multimodal vision-and-language model, independently varying the
volume of text and vision input to quantify how much text data (if any) can be
offset by vision at different data scales. We aim to limit catastrophic
forgetting through a multitask pretraining regime that includes unimodal
text-only tasks and data sampled from WiT, the relatively diverse
Wikipedia-based dataset (Srinivasan et al., 2021). Our results are largely
negative: Multimodal pretraining does not harm our models' language performance
but does not consistently help either. That said, our conclusions are limited
by our having been able to conduct only a small number of runs. While we must
leave open the possibility that multimodal input explains some of the gap in
data efficiency between LMs and humans, positive evidence for this hypothesis
will require better architectures and techniques for multimodal training.
| 2,024 | Computation and Language |
Large Language Models(LLMs) on Tabular Data: Prediction, Generation, and
Understanding -- A Survey | Recent breakthroughs in large language modeling have facilitated rigorous
exploration of their application in diverse tasks related to tabular data
modeling, such as prediction, tabular data synthesis, question answering, and
table understanding. Each task presents unique challenges and opportunities.
However, there is currently no comprehensive review that summarizes and
compares the key techniques, metrics, datasets, models, and optimization
approaches in this research domain. This survey aims to address this gap by
consolidating recent progress in these areas, offering a thorough survey and
taxonomy of the datasets, metrics, and methodologies utilized. It identifies
strengths, limitations, unexplored territories, and gaps in the existing
literature, while providing some insights for future research directions in
this vital and rapidly evolving field. It also provides relevant code and
datasets references. Through this comprehensive review, we hope to provide
interested readers with pertinent references and insightful perspectives,
empowering them with the necessary tools and knowledge to effectively navigate
and address the prevailing challenges in the field.
| 2,024 | Computation and Language |
Gradient-Free Adaptive Global Pruning for Pre-trained Language Models | The transformative impact of large language models (LLMs) like LLaMA and GPT
on natural language processing is countered by their prohibitive computational
demands. Pruning has emerged as a pivotal compression strategy, introducing
sparsity to enhance both memory and computational efficiency. Yet, traditional
global pruning is impractical for LLMs due to scalability issues, while local
pruning, despite its efficiency, leads to suboptimal solutions. Addressing
these challenges, we propose Adaptive Global Pruning (AdaGP), a novel framework
that redefines the global pruning process into manageable, coordinated
subproblems, allowing for resource-efficient optimization with global
optimality. AdaGP's approach, which conceptualizes LLMs as a chain of modular
functions and leverages auxiliary variables for problem decomposition, not only
facilitates a pragmatic application on LLMs but also demonstrates significant
performance improvements, particularly in high-sparsity regimes where it
surpasses current state-of-the-art methods.
| 2,024 | Computation and Language |
Multilingual Speech Models for Automatic Speech Recognition Exhibit
Gender Performance Gaps | Current voice recognition approaches use multi-task, multilingual models for
speech tasks like Automatic Speech Recognition (ASR) to make them applicable to
many languages without substantial changes. However, broad language coverage
can still mask performance gaps within languages, for example, across genders.
We systematically evaluate multilingual ASR systems on gendered performance
gaps. Using two popular models on three datasets in 19 languages across seven
language families, we find clear gender disparities. However, the advantaged
group varies between languages. While there are no significant differences
across groups in phonetic variables (pitch, speaking rate, etc.), probing the
model's internal states reveals a negative correlation between probe
performance and the gendered performance gap. That is, the easier it is to distinguish
speaker gender in a language, the more the models favor female speakers. Our
results show that group disparities remain unsolved despite great progress on
multi-tasking and multilinguality. We provide first valuable insights for
evaluating gender gaps in multilingual ASR systems. We release all code and
artifacts at https://github.com/g8a9/multilingual-asr-gender-gap.
| 2,024 | Computation and Language |
An Iterative Associative Memory Model for Empathetic Response Generation | Empathetic response generation is to comprehend the cognitive and emotional
states in dialogue utterances and generate proper responses. Psychological
theories posit that comprehending emotional and cognitive states necessitates
iteratively capturing and understanding associated words across dialogue
utterances. However, existing approaches regard dialogue utterances as either a
long sequence or independent utterances for comprehension, which are prone to
overlook the associated words between them. To address this issue, we propose
an Iterative Associative Memory Model (IAMM) for empathetic response
generation. Specifically, we employ a novel second-order interaction attention
mechanism to iteratively capture vital associated words between dialogue
utterances and situations, dialogue history, and a memory module (for storing
associated words), thereby comprehending the utterances accurately and with
nuance. We conduct experiments on the Empathetic-Dialogue dataset. Both
automatic and human evaluations validate the efficacy of the model. Meanwhile,
variant experiments on LLMs also demonstrate that attending to associated words
improves empathetic comprehension and expression.
| 2,024 | Computation and Language |
Collaborative decoding of critical tokens for boosting factuality of
large language models | The most common training pipeline for large language models includes
pretraining, finetuning, and aligning phases, each producing its own resulting
model (e.g., the pretrained model and the finetuned model). Finetuned and
aligned models show improved abilities of instruction following and safe
generation; however, their ability to stay factual about the world is
impaired by the finetuning process. Furthermore, the common practice of using
sampling during generation also increases chances of hallucination. In this
work, we introduce a collaborative decoding framework to harness the high
factuality within pretrained models through the concept of critical tokens. We
first design a critical token classifier to decide which model to use for the
next token, and subsequently generate the next token using different decoding
strategies. Experiments with different models and datasets show that our
decoding framework is able to reduce model hallucination significantly,
showcasing the importance of the collaborative decoding framework.
| 2,024 | Computation and Language |
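The decoding loop described above can be sketched generically: at each step a small classifier inspects the current context and decides whether the next token is "critical" (take it greedily from the factual pretrained model) or not (sample from the aligned model). All three interfaces below are assumptions for illustration, and batch size 1 is assumed.

```python
# Sketch of collaborative decoding with a critical-token classifier (assumed interfaces).
import torch

@torch.no_grad()
def collaborative_decode(pretrained, aligned, classifier, input_ids, max_new_tokens=64):
    """pretrained/aligned: callables mapping ids -> next-token logits (1, vocab);
    classifier: callable mapping ids -> probability the next token is critical."""
    ids = input_ids
    for _ in range(max_new_tokens):
        if classifier(ids) > 0.5:
            # Critical token: trust the pretrained model's factual knowledge, greedily.
            next_id = pretrained(ids).argmax(dim=-1, keepdim=True)
        else:
            # Non-critical token: sample from the aligned model for fluency/safety.
            probs = torch.softmax(aligned(ids), dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```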
M3-VRD: Multimodal Multi-task Multi-teacher Visually-Rich Form Document
Understanding | This paper presents a groundbreaking multimodal, multi-task, multi-teacher
joint-grained knowledge distillation model for visually-rich form document
understanding. The model is designed to leverage insights from both
fine-grained and coarse-grained levels by facilitating a nuanced correlation
between token and entity representations, addressing the complexities inherent
in form documents. Additionally, we introduce new inter-grained and
cross-grained loss functions to further refine the diverse multi-teacher knowledge
distillation transfer process, bridging distribution gaps and yielding a harmonised
understanding of form documents. Through a comprehensive evaluation across
publicly available form document understanding datasets, our proposed model
consistently outperforms existing baselines, showcasing its efficacy in
handling the intricate structures and content of visually complex form
documents.
| 2,024 | Computation and Language |
Exploring Multi-Document Information Consolidation for Scientific
Sentiment Summarization | Modern natural language generation systems with LLMs exhibit the capability
to generate a plausible summary of multiple documents; however, it is uncertain
if models truly possess the ability of information consolidation to generate
summaries, especially on those source documents with opinionated information.
To make scientific sentiment summarization more grounded, we hypothesize that
human meta-reviewers in peer review follow a three-layer framework of sentiment
consolidation when writing meta-reviews, and that this framework captures the
logic of summarizing scientific sentiments in meta-review generation. The
framework is validated via
human annotation. Based on the framework, we propose evaluation metrics to
assess the quality of generated meta-reviews, and in extensive experiments we
find that the sentiment consolidation hypothesis holds empirically when the
framework is incorporated into prompts for LLMs generating meta-reviews.
| 2,024 | Computation and Language |
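Since the framework is used as prompts for LLMs, a prompt-construction sketch follows. The three step descriptions are a paraphrase for illustration, as the abstract does not spell out the validated layers.

```python
def build_metareview_prompt(reviews: list[str]) -> str:
    # Illustrative three-layer sentiment consolidation instructions.
    framework = (
        "Write the meta-review by consolidating sentiment in three steps:\n"
        "1. Extract sentiment-bearing statements from each review.\n"
        "2. Aggregate agreeing and conflicting sentiments across reviews.\n"
        "3. Express the consolidated judgement with appropriate hedging.\n"
    )
    joined = "\n\n".join(f"Review {i + 1}:\n{r}" for i, r in enumerate(reviews))
    return f"{framework}\n{joined}\n\nMeta-review:"
```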
A Survey on Recent Advances in LLM-Based Multi-turn Dialogue Systems | This survey provides a comprehensive review of research on multi-turn
dialogue systems, with a particular focus on multi-turn dialogue systems based
on large language models (LLMs). This paper aims to (a) give a summary of
existing LLMs and approaches for adapting LLMs to downstream tasks; (b)
elaborate on recent advances in multi-turn dialogue systems, covering both
LLM-based open-domain dialogue (ODD) and task-oriented dialogue (TOD) systems,
along with datasets and evaluation metrics; (c) discuss future research
directions and open problems arising from the development of LLMs and the
increasing demands on multi-turn dialogue systems.
| 2,024 | Computation and Language |
Hire a Linguist!: Learning Endangered Languages with In-Context
Linguistic Descriptions | How can large language models (LLMs) process and translate endangered
languages? Many languages lack a corpus large enough to train a decent LLM, so
existing LLMs rarely perform well in unseen, endangered languages. However, we
observe that 2,000 endangered languages, though lacking a large
corpus, have a grammar book or a dictionary. We propose LINGOLLM, a
training-free approach to enable an LLM to process unseen languages that hardly
occur in its pre-training. Our key insight is to demonstrate linguistic
knowledge of an unseen language in an LLM's prompt, including a dictionary, a
grammar book, and morphologically analyzed input text. We implement LINGOLLM on
top of two models, GPT-4 and Mixtral, and evaluate their performance on 5 tasks
across 8 endangered or low-resource languages. Our results show that LINGOLLM
elevates translation capability from GPT-4's 0 to 10.5 BLEU for 10 language
directions. Our findings demonstrate the tremendous value of linguistic
knowledge in the age of LLMs for endangered languages. Our data, code, and
model generations can be found at https://github.com/LLiLab/llm4endangeredlang.
| 2,024 | Computation and Language |
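The key insight above, placing linguistic resources directly in the prompt, lends itself to a short sketch. The formatting and function below are illustrative assumptions; the authors' actual pipeline is in the linked repository.

```python
def build_lingollm_prompt(source_text: str,
                          gloss: str,
                          dictionary: dict[str, str],
                          grammar_notes: str,
                          target_lang: str = "English") -> str:
    # Demonstrate linguistic knowledge of the unseen language in the prompt:
    # dictionary entries, grammar notes, and a morphological gloss of the input.
    dict_block = "\n".join(f"{word}: {meaning}" for word, meaning in dictionary.items())
    return (f"Grammar notes:\n{grammar_notes}\n\n"
            f"Dictionary:\n{dict_block}\n\n"
            f"Morphological analysis of the input:\n{gloss}\n\n"
            f"Translate into {target_lang}:\n{source_text}\n")
```

The resulting string is sent to GPT-4 or Mixtral as-is, which is what makes the approach training-free.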
ResLoRA: Identity Residual Mapping in Low-Rank Adaption | As one of the most popular parameter-efficient fine-tuning (PEFT) methods,
low-rank adaptation (LoRA) is commonly applied to fine-tune large language
models (LLMs). However, updating the weights of LoRA blocks effectively and
expeditiously is challenging due to the long calculation path in the original
model. To address this, we propose ResLoRA, an improved framework of LoRA. By
adding residual paths during training and using merging approaches to eliminate
these extra paths during inference, our method can achieve better results in
fewer training steps without any extra trainable parameters or inference cost
compared to LoRA. The experiments on NLG, NLU, and text-to-image tasks
demonstrate the effectiveness of our method. To the best of our knowledge,
ResLoRA is the first work that combines the residual path with LoRA. The code
of our method is available at
https://github.com/microsoft/LMOps/tree/main/reslora.
| 2,024 | Computation and Language |
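To make the residual-path idea concrete, here is a hedged sketch of a LoRA linear layer whose low-rank branch can also see the previous block's input, shortening the gradient path. The paper defines several residual variants and a merging procedure, so treat this as one illustrative instance rather than the released implementation.

```python
import torch
import torch.nn as nn

class ResLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base                      # frozen pretrained projection
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x, prev_x=None):
        # Residual path: the low-rank branch also receives the previous
        # block's input, so gradients skip the long path through the backbone.
        lora_in = x if prev_x is None else x + prev_x
        return self.base(x) + (lora_in @ self.A.T) @ self.B.T * self.scale
```

Consistent with the abstract, such extra paths exist only during training; merging them away at inference is what keeps inference cost unchanged.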
Datasets for Large Language Models: A Comprehensive Survey | This paper embarks on an exploration of Large Language Model (LLM)
datasets, which play a crucial role in the remarkable advancements of LLMs. The
datasets serve as the foundational infrastructure analogous to a root system
that sustains and nurtures the development of LLMs. Consequently, examination
of these datasets emerges as a critical topic in research. To address
the current lack of a comprehensive overview and thorough analysis of LLM
datasets, and to gain insights into their current status and future trends,
this survey consolidates and categorizes the fundamental aspects of LLM
datasets from five perspectives: (1) Pre-training Corpora; (2) Instruction
Fine-tuning Datasets; (3) Preference Datasets; (4) Evaluation Datasets; (5)
Traditional Natural Language Processing (NLP) Datasets. The survey sheds light
on the prevailing challenges and points out potential avenues for future
investigation. Additionally, a comprehensive review of the existing available
dataset resources is also provided, including statistics from 444 datasets,
covering 8 language categories and spanning 32 domains. Information from 20
dimensions is incorporated into the dataset statistics. The total data size
surveyed surpasses 774.5 TB for pre-training corpora and 700M instances for
other datasets. We aim to present the entire landscape of LLM text datasets,
serving as a comprehensive reference for researchers in this field and
contributing to future studies. Related resources are available at:
https://github.com/lmmlzn/Awesome-LLMs-Datasets.
| 2,024 | Computation and Language |
Crisis talk: analysis of the public debate around the energy crisis and
cost of living | A prominent media topic in the early 2020s is the energy crisis
affecting the UK and most of Europe. It brings into a single public debate
issues of energy dependency and sustainability, fair distribution of economic
burdens and cost of living, as well as climate change, risk, and
sustainability. In this paper, we investigate the public discourse around the
energy crisis and cost of living to identify how these pivotal and
contradictory issues are reconciled in this debate and to identify which social
actors are involved and the role they play. We analyse a document corpus
retrieved from UK newspapers from January 2014 to March 2023. We apply a
variety of natural language processing and data visualisation techniques to
identify key topics, novel trends, critical social actors, and the role they
play in the debate, along with the sentiment associated with those actors and
topics. We combine automated techniques with manual discourse analysis to
explore and validate the insights revealed in this study. The findings verify
the utility of these techniques by providing a flexible and scalable pipeline
for discourse analysis and providing critical insights for research on the
cost-of-living and energy-crisis nexus.
| 2,024 | Computation and Language |
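For readers unfamiliar with this kind of pipeline, the toy sketch below shows the flavour of combining topic extraction with a sentiment signal over a news corpus; the actual study uses a much richer mix of techniques plus manual discourse analysis, and the tiny lexicon here is purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "Energy bills rise as the cost of living crisis deepens.",
    "Government unveils support package for household energy costs.",
    "Climate targets clash with short-term energy security concerns.",
]

# Topic extraction with LDA over a bag-of-words representation.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"Topic {k}:", [terms[i] for i in comp.argsort()[-5:][::-1]])

# Toy sentiment: polarity hits against a tiny hand-made lexicon.
NEG, POS = {"crisis", "rise", "clash"}, {"support", "unveils"}
for d in docs:
    tokens = set(d.lower().rstrip(".").split())
    print(d, "->", len(tokens & POS) - len(tokens & NEG))
```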
Multi-FAct: Assessing Multilingual LLMs' Multi-Regional Knowledge using
FActScore | Large Language Models (LLMs) are prone to factuality hallucination,
generating text that contradicts established knowledge. While extensive
research has addressed this in English, little is known about multilingual
LLMs. This paper systematically evaluates multilingual LLMs' factual accuracy
across languages and geographic regions. We introduce a novel pipeline for
multilingual factuality evaluation, adapting FActScore (Min et al., 2023) for
diverse languages. Our analysis across nine languages reveals that English
consistently outperforms others in factual accuracy and quantity of generated
facts. Furthermore, multilingual models demonstrate a bias towards factual
information from Western continents. These findings highlight the need for
improved multilingual factuality assessment and underscore geographical biases
in LLMs' fact generation.
| 2,024 | Computation and Language |
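FActScore itself has a simple core, which helps in reading the abstract: decompose a generation into atomic facts, verify each against a knowledge source, and report the supported fraction. The decomposer and verifier below are stubs standing in for the adapted multilingual components.

```python
from typing import Callable

def factscore(generation: str,
              decompose: Callable[[str], list[str]],
              is_supported: Callable[[str], bool]) -> float:
    """Fraction of atomic facts in `generation` supported by the knowledge source."""
    facts = decompose(generation)
    return sum(is_supported(f) for f in facts) / len(facts) if facts else 0.0

# Toy usage with stub components.
split_sentences = lambda t: [s.strip() for s in t.split(".") if s.strip()]
kb = {"Paris is the capital of France"}
print(factscore("Paris is the capital of France. Paris is in Spain.",
                split_sentences, lambda f: f in kb))  # 0.5
```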
Characterizing Truthfulness in Large Language Model Generations with
Local Intrinsic Dimension | We study how to characterize and predict the truthfulness of texts generated
from large language models (LLMs), which serves as a crucial step in building
trust between humans and LLMs. Although several approaches based on entropy or
verbalized uncertainty have been proposed to calibrate model predictions, these
methods are often intractable, sensitive to hyperparameters, and less reliable
when applied in generative tasks with LLMs. In this paper, we suggest
investigating internal activations and quantifying LLM's truthfulness using the
local intrinsic dimension (LID) of model activations. Through experiments on
four question answering (QA) datasets, we demonstrate the effectiveness of our
proposed method. Additionally,
we study intrinsic dimensions in LLMs and their relations with model layers,
autoregressive language modeling, and the training of LLMs, revealing that
intrinsic dimensions can be a powerful approach to understanding LLMs.
| 2,024 | Computation and Language |
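As a worked example of the quantity involved, here is the Levina-Bickel maximum-likelihood LID estimator on k-nearest-neighbour distances. The abstract does not name its estimator, so this common choice is an assumption, and applying it to a model's activations rather than raw data points is the paper's idea.

```python
import numpy as np

def lid_mle(x: np.ndarray, reference: np.ndarray, k: int = 20) -> float:
    """LID of point x (d,) w.r.t. a reference set (n, d), via k-NN distances."""
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    dists = dists[dists > 0][:k]  # k nearest non-zero neighbour distances
    # MLE: negative inverse mean log-ratio of each distance to the k-th one.
    return -1.0 / np.mean(np.log(dists[:-1] / dists[-1]))

rng = np.random.default_rng(0)
# Points lying on a 5-dim subspace of a 100-dim space: LID should be near 5.
acts = rng.standard_normal((2000, 5)) @ rng.standard_normal((5, 100))
print(lid_mle(acts[0], acts[1:]))  # roughly 5
```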
MEGAnno+: A Human-LLM Collaborative Annotation System | Large language models (LLMs) can label data faster and cheaper than humans
for various NLP tasks. Despite their prowess, LLMs may fall short in
understanding complex, sociocultural, or domain-specific contexts,
potentially leading to incorrect annotations. Therefore, we advocate a
collaborative approach where humans and LLMs work together to produce reliable
and high-quality labels. We present MEGAnno+, a human-LLM collaborative
annotation system that offers effective LLM agent and annotation management,
convenient and robust LLM annotation, and exploratory verification of LLM
labels by humans.
| 2,024 | Computation and Language |
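The collaboration pattern the system supports can be summarised in a few lines: the LLM proposes labels with a confidence score, and uncertain items are routed to a human verifier. The interface below is an assumption for illustration, not MEGAnno+'s actual API.

```python
def collaborative_annotate(items, llm_label, human_verify, threshold=0.8):
    """Label items with an LLM; send low-confidence labels to a human."""
    labels = {}
    for item in items:  # items assumed hashable, e.g. text strings
        label, confidence = llm_label(item)  # assumed: (label, score in [0, 1])
        if confidence < threshold:
            # Human verification of uncertain LLM proposals.
            label = human_verify(item, proposed=label)
        labels[item] = label
    return labels
```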
Contextualizing Generated Citation Texts | Abstractive citation text generation is usually framed as an infilling task,
where a sequence-to-sequence model is trained to generate a citation given a
reference paper and the context window around the target; the generated
citation should be a brief discussion of the reference paper as it relates to
the citing context. However, examining a recent LED-based citation generation
system, we find that many of the generated citations are generic summaries of
the reference paper's main contribution, ignoring the citation context's focus on
a different topic. To address this problem, we propose a simple modification to
the citation text generation task: the generation target is not only the
citation itself, but the entire context window, including the target citation.
This approach can be easily applied to any abstractive citation generation
system, and our experimental results show that training in this way is
preferred by human readers and allows the generation model to make use of
contextual clues about what topic to discuss and what stance to take.
| 2,024 | Computation and Language |
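The proposed modification amounts to changing the training target, which a short data-preparation sketch makes concrete; the field names and separator tokens below are illustrative assumptions.

```python
def make_training_example(reference_abstract: str, left_context: str,
                          citation: str, right_context: str) -> dict:
    return {
        # Input: the cited paper plus the citing context around the slot.
        "source": f"{reference_abstract} </s> {left_context} [CITE] {right_context}",
        # Target: the entire context window including the citation, so the
        # model must match the surrounding topic and stance, not just
        # summarise the reference paper.
        "target": f"{left_context} {citation} {right_context}",
    }
```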
Benchmarking Large Language Models on Answering and Explaining
Challenging Medical Questions | LLMs have demonstrated impressive performance in answering medical questions,
such as achieving passing scores on medical licensing examinations. However, medical
board exam questions or general clinical questions do not capture the
complexity of realistic clinical cases. Moreover, the lack of reference
explanations means we cannot easily evaluate the reasoning behind model decisions,
a crucial component of supporting doctors in making complex medical decisions.
To address these challenges, we construct two new datasets: JAMA Clinical
Challenge and Medbullets. JAMA Clinical Challenge consists of questions based
on challenging clinical cases, while Medbullets comprises USMLE Step 2&3 style
clinical questions. Both datasets are structured as multiple-choice
question-answering tasks, where each question is accompanied by an
expert-written explanation. We evaluate four LLMs on the two datasets using
various prompts. Experiments demonstrate that our datasets are harder than
previous benchmarks. The inconsistency between automatic and human evaluations
of model-generated explanations highlights the need to develop new metrics to
support future research on explainable medical QA.
| 2,024 | Computation and Language |
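Evaluating LLMs on such datasets reduces to a multiple-choice accuracy loop like the hedged sketch below; the prompt template and the letter-extraction heuristic are illustrative assumptions, and evaluating the accompanying explanations is the harder, open problem the abstract points to.

```python
import re

def mcq_accuracy(model, dataset):
    """Accuracy of a text-in/text-out model on multiple-choice questions."""
    correct = 0
    for ex in dataset:  # assumed fields: question, options (dict), answer ("A"-"E")
        options = "\n".join(f"{k}. {v}" for k, v in sorted(ex["options"].items()))
        prompt = (f"{ex['question']}\n{options}\n"
                  "Answer with the letter of the best option, then explain.")
        reply = model(prompt)
        m = re.search(r"\b([A-E])\b", reply)
        correct += bool(m and m.group(1) == ex["answer"])
    return correct / len(dataset)
```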