{"id": "10.1145/3583780.3614849", "title": "Diving into a Sea of Opinions: Multi-Modal Abstractive Summarization with Comment Sensitivity", "abstract": "In the modern era, the rapid expansion of social media and the proliferation of the internet community has led to a multi-fold increase in the richness and range of views and outlooks expressed by readers and viewers. To obtain valuable insights from this vast sea of opinions, we present an inventive and holistic procedure for multi-modal abstractive summarization with comment sensitivity. Our proposed model utilizes both textual and visual modalities and examines the remarks provided by the readers to produce summaries that apprehend the significant points and opinions made by them. Our model features a transformer-based encoder that seamlessly processes both news articles and comments, merging them before transmitting the amalgamated information to the decoder. Additionally, the core segment of our architecture consists of an attention-based merging technique which is trained adversarially by means of a generator and discriminator to bridge the semantic gap between comments and articles. We have used a Bi-LSTM-based branch for image pointer generation. We assess our model on the reader-aware multi-document summarization (RA-MDS) dataset which contains news articles, their summaries, and related comments. We have extended the dataset by adding images pertaining to news articles in the corpus to increase the richness and diversity of the dataset. 
Our comprehensive experiments reveal that our model outperforms similar pre-trained models and baselines across two of the four evaluated metrics, showcasing its superior performance.", "keywords": "adversarial learning;multimodal;abstractive summarization;comment sensitivity"} {"id": "10.1145/3583780.3614896", "title": "GranCATs: Cross-Lingual Enhancement through Granularity-Specific Contrastive Adapters", "abstract": "Multilingual language models (MLLMs) have demonstrated remarkable success in various cross-lingual downstream tasks, facilitating the transfer of knowledge across numerous languages, although this transfer is not universally effective. Our study reveals that while existing MLLMs like mBERT can capture phrase-level alignments across language families, they struggle to effectively capture sentence-level and paragraph-level alignments. To address this limitation, we propose GranCATs, Granularity-specific Contrastive AdapTers. We collect a new dataset that observes each sample at three distinct levels of granularity and employ contrastive learning as a pre-training task to train GranCATs on this dataset. Our objective is to enhance MLLMs' adaptation to a broader range of cross-lingual tasks by equipping them with improved capabilities to capture global information at different levels of granularity. Extensive experiments show that MLLMs with GranCATs yield significant performance advancements across various language tasks with different text granularities, including entity alignment, relation extraction, sentence classification and retrieval, and question-answering. 
These results validate the effectiveness of our proposed GranCATs in enhancing cross-lingual alignments across various text granularities and effectively transferring this knowledge to downstream tasks.", "keywords": "multilingual language models;adapters;universal patterns;cross-lingual alignments"} {"id": "10.1145/3583780.3614934", "title": "Inducing Causal Structure for Abstractive Text Summarization", "abstract": "The mainstream of data-driven abstractive summarization models tends to explore the correlations rather than the causal relationships. Among such correlations, there can be spurious ones which suffer from the language prior learned from the training corpus and therefore undermine the overall effectiveness of the learned model. To tackle this issue, we introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data. We assume several latent causal factors and non-causal factors, representing the content and style of the document and summary. Theoretically, we prove that the latent factors in our SCM can be identified by fitting the observed training data under certain conditions. On the basis of this, we propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors, guiding us to pursue causal information for summary generation. The key idea is to reformulate the Variational Auto-encoder (VAE) to fit the joint distribution of the document and summary variables from the training corpus. Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.", "keywords": "abstractive text summarization;vae;structural causal model"} {"id": "10.1145/3583780.3614940", "title": "James Ate 5 Oranges = Steve Bought 5 Pencils: Structure-Aware Denoising for Paraphrasing Word Problems", "abstract": "We propose SCANING, an unsupervised framework for paraphrasing via controlled noise injection. 
We focus on the novel task of paraphrasing algebraic word problems, which has practical applications in online pedagogy as a means to reduce plagiarism and to evoke reasoning capabilities on the part of the student instead of rote memorization. This task is more complex than paraphrasing general-domain corpora due to the difficulty in preserving critical information for solution consistency of the paraphrased word problem, managing the increased length of the text, and ensuring diversity in the generated paraphrase. Existing approaches fail to demonstrate adequate performance on at least one, if not all, of these facets, necessitating a more comprehensive solution. To this end, we model the noising search space as a composition of contextual and syntactic aspects to sample noising functions. This allows for learning a denoising function that operates over both aspects and produces semantically equivalent and syntactically diverse outputs through grounded noise injection. The denoising function serves as a foundation for training a paraphrasing function, which operates solely in the input-paraphrase space without carrying any direct dependency on noise. We demonstrate that SCANING improves performance in terms of producing semantically equivalent and syntactically diverse paraphrases by 35% through extensive automated and human evaluation across 4 datasets.", "keywords": "mwp;education;self-supervised;paraphrasing;denoising;neural networks"} {"id": "10.1145/3583780.3614993", "title": "NOVO: Learnable and Interpretable Document Identifiers for Model-Based IR", "abstract": "Model-based Information Retrieval (Model-based IR) has gained attention due to advancements in generative language models. Unlike traditional dense retrieval methods relying on dense vector representations of documents, model-based IR leverages language models to retrieve documents by generating their unique discrete identifiers (docids). 
This approach effectively reduces the requirement to store separate document representations in an index. Most existing model-based IR approaches utilize pre-defined static docids, i.e., these docids are fixed and are not learnable by training on the retrieval tasks. However, these docids are not specifically optimized for retrieval tasks, which makes it difficult to learn semantics and relationships between documents and achieve satisfactory retrieval performance. To address the above limitations, we propose Neural Optimized VOcabularial (NOVO) docids. NOVO docids are unique n-gram sets identifying each document. They can be generated in any order to retrieve the corresponding document and can be optimized through training to better learn semantics and relationships between documents. We propose to optimize NOVO docids through query denoising modeling and retrieval tasks, allowing for optimizing both semantic and token representations for such docids. Experiments on two datasets under the normal and zero-shot settings show that NOVO exhibits strong performance, enabling more effective and interpretable model-based IR.", "keywords": "large language models;model-based ir;information retrieval"} {"id": "10.1145/3583780.3615034", "title": "Rethinking Sentiment Analysis under Uncertainty", "abstract": "Sentiment Analysis (SA) is a fundamental task in natural language processing, which is widely used in public decision-making. Recently, deep learning has demonstrated great potential to deal with this task. However, prior works have mostly treated SA as a deterministic classification problem without quantifying the predictive uncertainty. This presents a serious problem in SA: different annotators, due to differences in beliefs, values, and experiences, may have different perspectives on how to label the sentiment of a text. Such situations lead to inevitable data uncertainty and leave deterministic classification models puzzled when making decisions. 
To address this issue, we propose a new SA paradigm with the consideration of uncertainty and conduct an extensive empirical study. Specifically, we treat SA as a regression task and introduce uncertainty quantification to obtain confidence intervals for predictions, which enables the risk-assessment ability of the model and can improve the credibility of SA-aided decision-making. Experiments on five datasets show that our proposed new paradigm effectively quantifies uncertainty in SA while remaining competitive with point estimation in performance, in addition to being capable of Out-Of-Distribution (OOD) detection.", "keywords": "sentiment analysis;out-of-distribution detection;quantifying uncertainty"} {"id": "10.1145/3583780.3615056", "title": "Sentiment-Aware Review Summarization with Personalized Multi-Task Fine-Tuning", "abstract": "Personalized review summarization is a challenging task in recommender systems, which aims to generate condensed and readable summaries for product reviews. Recently, some methods propose to adopt the sentiment signals of reviews to enhance the review summarization. However, most previous works only share the semantic features of reviews via preliminary multi-task learning, while ignoring the rich personalized information of users and products, which is crucial to both sentiment identification and comprehensive review summarization. In this paper, we propose a sentiment-aware review summarization method with an elaborately designed multi-task fine-tuning framework to make full use of the personalized information of users and products based on Pretrained Language Models (PLMs). We first introduce two types of personalized information, namely IDs and historical summaries, to indicate identification and semantic information respectively. 
Subsequently, we propose to incorporate the IDs of the user/product into the PLM-based encoder to learn the personalized representations of input reviews and their historical summaries in a fine-tuning way. Based on this, an auxiliary context-aware review sentiment classification task and a further sentiment-guided personalized review summarization task are jointly learned. Specifically, the sentiment representation of the input review is used to identify relevant historical summaries, which are then treated as additional semantic context features to enhance the summary generation process. Extensive experimental results show that our approach generates sentiment-consistent summaries and outperforms many competitive baselines on both review summarization and sentiment classification tasks.", "keywords": "pre-trained language models;personalized review summarization;recommender system;sentiment classification"} {"id": "10.1145/3583780.3614792", "title": "Bias Invariant Approaches for Improving Word Embedding Fairness", "abstract": "Many public pre-trained word embeddings have been shown to encode different types of biases. Embeddings are often obtained from training on large pre-existing corpora, and therefore resulting biases can be a reflection of unfair representations in the original data. Bias, in this scenario, is a challenging problem since current mitigation techniques require knowing and understanding existing biases in the embedding, which is not always possible. In this work, we propose to improve word embedding fairness by borrowing methods from the field of data privacy. The idea behind this approach is to treat bias as if it were a special type of training data leakage. This has the unique advantage of not requiring prior knowledge of potential biases in word embeddings. We investigated two types of privacy algorithms, and measured their effect on bias using four different metrics. 
To investigate techniques from differential privacy, we applied Gaussian perturbation to public pre-trained word embeddings. To investigate noiseless privacy, we applied vector quantization during training. Experiments show that both approaches improve fairness for commonly used embeddings, and additionally, noiseless privacy techniques reduce the size of the resulting embedding representation.", "keywords": "privacy;word embedding;fairness"} {"id": "10.1145/3583780.3614907", "title": "HCL4QC: Incorporating Hierarchical Category Structures Into Contrastive Learning for E-Commerce Query Classification", "abstract": "Query classification plays a crucial role in e-commerce, where the goal is to assign user queries to appropriate categories within a hierarchical product category taxonomy. However, existing methods rely on a limited number of words from the category description and often neglect the hierarchical structure of the category tree, resulting in suboptimal category representations. To overcome these limitations, we propose a novel approach named hierarchical contrastive learning framework for query classification (HCL4QC), which leverages the hierarchical category tree structure to improve the performance of query classification. Specifically, HCL4QC is designed as a plugin module that consists of two innovative losses, namely local hierarchical contrastive loss (LHCL) and global hierarchical contrastive loss (GHCL). LHCL adjusts representations of categories according to their positional relationship in the hierarchical tree, while GHCL ensures the semantic consistency between the parent category and its child categories. Our proposed method can be adapted to any query classification tasks that involve a hierarchical category structure. We conduct experiments on two real-world datasets to demonstrate the superiority of our hierarchical contrastive learning. 
The results demonstrate significant improvements in the query classification task, particularly for long-tail categories with sparse supervised information.", "keywords": "query classification;hierarchical category tree;contrastive learning"} {"id": "10.1145/3583780.3614984", "title": "MultiPLe: Multilingual Prompt Learning for Relieving Semantic Confusions in Few-Shot Event Detection", "abstract": "Event detection (ED) is a challenging task in the field of information extraction. Due to the monolingual text and rampant confusing triggers, traditional ED models suffer from semantic confusions in terms of polysemy and synonymy, leading to severe detection mistakes. Such semantic confusions can be further exacerbated in a practical situation where scarce labeled data cannot provide sufficient semantic clues. To mitigate this bottleneck, we propose a multilingual prompt learning (MultiPLe) framework for few-shot event detection (FSED), including three components, i.e., a multilingual prompt, a hierarchical prototype and a quadruplet contrastive learning module. In detail, to ease the polysemy confusion, the multilingual prompt module develops the in-context semantics of triggers via the multilingual disambiguation and prior knowledge in pretrained language models. Then, the hierarchical prototype module is adopted to diminish the synonym confusion by connecting the captured inmost semantics of fuzzy triggers with labels at a fine granularity. Finally, we employ the quadruplet contrastive learning module to tackle the insufficient label representation and potential noise. 
Experiments on two public datasets show that MultiPLe outperforms the state-of-the-art baselines in weighted F1-score, presenting a maximum improvement of 13.63% for FSED.", "keywords": "semantic confusions;prompt learning;few-shot event detection"} {"id": "10.1145/3583780.3614996", "title": "On the Thresholding Strategy for Infrequent Labels in Multi-Label Classification", "abstract": "In multi-label classification, the imbalance between labels is often a concern. For a label that seldom occurs, the default threshold used to generate binarized predictions of that label is usually sub-optimal. However, directly tuning the threshold to optimize F-measure has been observed to overfit easily. In this work, we explain why this overfitting occurs. Then, we analyze the FBR heuristic, a previous technique proposed to address the overfitting issue. We explain its success but also point out some previously unobserved problems. We first propose a variant of the FBR heuristic that not only fixes these problems but is also more justifiable. Second, we propose a new technique based on smoothing the F-measure when tuning the threshold. We theoretically prove that, with proper parameters, smoothing results in desirable properties of the tuned threshold. Based on the idea of smoothing, we then propose jointly optimizing micro-F and macro-F as a lightweight alternative free from extra hyperparameters. Our methods are empirically evaluated on text and node classification datasets. The results show that our methods consistently outperform the FBR heuristic.", "keywords": "multi-label classification;infrequent labels;threshold adjustment;f-measure"} {"id": "10.1145/3583780.3615029", "title": "Relevance-Based Infilling for Natural Language Counterfactuals", "abstract": "Counterfactual explanations are a natural way for humans to gain understanding and trust in the outcomes of complex machine learning algorithms. 
In the context of natural language processing, generating counterfactuals is particularly challenging as it requires the generated text to be fluent, grammatically correct, and meaningful. In this study, we improve the current state of the art for the generation of such counterfactual explanations for text classifiers. Our approach, named RELITC (Relevance-based Infilling for Textual Counterfactuals), builds on the idea of masking a fraction of text tokens based on their importance in a given prediction task and employs a novel strategy, based on the entropy of their associated probability distributions, to determine the infilling order of these tokens. Our method uses less time than competing methods to generate counterfactuals that require fewer changes, are closer to the original text and preserve its content better, while being competitive in terms of fluency. We demonstrate the effectiveness of the method on four different datasets and show the quality of its outcomes in a comparison with human-generated counterfactuals.", "keywords": "explainability;masked language model;counterfactuals;nlp"} {"id": "10.1145/3583780.3614809", "title": "Class-Specific Word Sense Aware Topic Modeling via Soft Orthogonalized Topics", "abstract": "We propose a word sense aware topic model for document classification based on soft orthogonalized topics. An essential problem for this task is to capture word senses related to classes, i.e., class-specific word senses. Traditional models mainly introduce semantic information of knowledge libraries for word sense discovery. However, this information may not align with the classification targets, because these targets are often subjective and task-related. We aim to model the class-specific word senses in topic space. The challenge is to optimize the class separability of the senses, i.e., obtaining sense vectors with (a) high intra-class and (b) low inter-class similarities. 
Most existing models predefine specific topics for each class to specify the class-specific sense vectors. We call them hard orthogonalization-based methods. These methods can hardly achieve both (a) and (b) since they assume the conditional independence of topics with respect to classes and inevitably lose topic information. To address this problem, we propose soft orthogonalization for topics. Specifically, we reserve all the topics and introduce a group of class-specific weights for each word to handle the importance of topic dimensions to class separability. Besides, we detect and use highly class-specific words in each document to guide sense estimation. Our experiments on two standard datasets show that our proposal outperforms other state-of-the-art models in terms of accuracy of sense estimation, document classification, and topic modeling. In addition, our joint learning experiments with the pre-trained language model BERT showed that our model complements BERT better than other topic models in most cases.", "keywords": "document classification;document representation;word sense disambiguation;graphical model;topic model"} {"id": "10.1145/3583780.3614860", "title": "EAGLE: Enhance Target-Oriented Dialogs by Global Planning and Topic Flow Integration", "abstract": "In this study, we propose a novel model EAGLE for target-oriented dialogue generation. Without relying on any knowledge graphs, our method integrates the global planning strategy in both topic path generation and response generation given the initial and target topics. EAGLE comprises three components: a topic path sampling strategy, a topic flow generator, and a global planner. Our approach confers a number of advantages: EAGLE is robust to targets that have never appeared in the training data set and able to plan the topic flow globally. The topic path sampling strategy samples topic paths based on two predefined rules and uses the sampled paths to train the topic flow generator. 
The topic flow generator then applies a non-autoregressive method to generate intermediate topics that link the initial and target topics smoothly. In addition, the global planner is a response generator that generates a response based on the future topic sequence and conversation history, enabling it to plan how to transition to future topics smoothly. Our experimental results demonstrate that EAGLE produces more coherent responses and smoother transitions than state-of-the-art baselines, with an overall success rate improvement of approximately 25% and an average smoothness score improvement of 10% in both offline and human evaluations.", "keywords": "topic transition;target-oriented;conversation generation;global planning"} {"id": "10.1145/3583780.3614913", "title": "Hierarchical Prompt Tuning for Few-Shot Multi-Task Learning", "abstract": "Prompt tuning has enhanced the performance of Pre-trained Language Models for multi-task learning in few-shot scenarios. However, existing studies fail to consider that the prompts among different layers in Transformer are different due to the diverse information learned at each layer. In general, the bottom layers in the model tend to capture low-level semantic or structural information, while the upper layers primarily acquire task-specific knowledge. Hence, we propose a novel hierarchical prompt tuning model for few-shot multi-task learning to capture this regularity. The designed model mainly consists of three types of prompts: shared prompts, auto-adaptive prompts, and task-specific prompts. Shared prompts facilitate the sharing of general information across all tasks. Auto-adaptive prompts dynamically select and integrate relevant prompt information from all tasks into the current task. Task-specific prompts concentrate on learning task-specific knowledge. To enhance the model's adaptability to diverse inputs, we introduce deep instance-aware language prompts as the foundation for constructing the above prompts. 
To evaluate the effectiveness of our proposed method, we conduct extensive experiments on multiple widely-used datasets. The experimental results demonstrate that the proposed method achieves state-of-the-art performance for multi-task learning in few-shot settings and outperforms ChatGPT in the full-data setting.", "keywords": "prompt tuning;few-shot learning;multi-task learning"} {"id": "10.1145/3583780.3615017", "title": "Prompt Distillation for Efficient LLM-Based Recommendation", "abstract": "Large language models (LLMs) have manifested unparalleled modeling capability on various tasks, e.g., multi-step reasoning, but the input to these models is mostly limited to plain text, which could be very long and contain noisy information. Long text can take a long time to process and thus may not be efficient enough for recommender systems that require immediate responses. In LLM-based recommendation models, user and item IDs are usually filled in a template (i.e., discrete prompt) to allow the models to understand a given task, but the models usually need extensive fine-tuning to bridge the user/item IDs and the template words and to unleash the power of LLMs for recommendation. To address the problems, we propose to distill the discrete prompt for a specific task to a set of continuous prompt vectors so as to bridge IDs and words and to reduce the inference time. We also design a training strategy with an attempt to improve the efficiency of training these models. Experimental results on three real-world datasets demonstrate the effectiveness of our PrOmpt Distillation (POD) approach on both sequential recommendation and top-N recommendation tasks. Although the training efficiency can be significantly improved, the improvement of inference efficiency is limited. 
This finding may inspire researchers in the community to further improve the inference efficiency of LLM-based recommendation models.", "keywords": "recommender systems;prompt distillation;explainable recommendation;top-n recommendation;sequential recommendation;large language models;generative recommendation"} {"id": "10.1145/3583780.3615015", "title": "Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection", "abstract": "Despite considerable advances in automated fake news detection, due to the timely nature of news, it remains a critical open question how to effectively predict the veracity of news articles based on limited fact-checks. Existing approaches typically follow a \"Train-from-Scratch\" paradigm, which is fundamentally bounded by the availability of large-scale annotated data. While expressive pre-trained language models (PLMs) have been adapted in a \"Pre-Train-and-Fine-Tune\" manner, the inconsistency between pre-training and downstream objectives also requires costly task-specific supervision. In this paper, we propose \"Prompt-and-Align\" (P&A), a novel prompt-based paradigm for few-shot fake news detection that jointly leverages the pre-trained knowledge in PLMs and the social context topology. Our approach mitigates label scarcity by wrapping the news article in a task-related textual prompt, which is then processed by the PLM to directly elicit task-specific knowledge. To supplement the PLM with social context without inducing additional training overheads, motivated by the empirical observation of user veracity consistency (i.e., social users tend to consume news of the same veracity type), we further construct a news proximity graph among news articles to capture the veracity-consistent signals in shared readerships, and align the prompting predictions along the graph edges in a confidence-informed manner. 
Extensive experiments on three real-world benchmarks demonstrate that P&A sets a new state of the art for few-shot fake news detection by significant margins.", "keywords": "fake news;social networks;prompt;few-shot learning"} {"id": "10.1145/3583780.3615018", "title": "Prompting Strategies for Citation Classification", "abstract": "Citation classification aims to identify the purpose of the cited article in the citing article. Previous citation classification methods rely largely on supervised approaches. The models are trained on datasets with citing sentences or citation contexts annotated for a citation's purpose or function or intent. Recent advancements in Large Language Models (LLMs) have dramatically improved the ability of NLP systems to achieve state-of-the-art performances under zero or few-shot settings. This makes LLMs particularly suitable for tasks where sufficiently large labelled datasets are not yet available, which remains the case for citation classification. This paper systematically investigates the effectiveness of different prompting strategies for citation classification and compares them to promptless strategies as a baseline. Specifically, we evaluate the following four strategies, two of which we introduce for the first time, which involve updating Language Model (LM) parameters while training the model: (1) Promptless fine-tuning, (2) Fixed-prompt LM tuning, (3) Dynamic Context-prompt LM tuning (proposed), (4) Prompt + LM fine-tuning (proposed). Additionally, we test the zero-shot performance of an LLM, GPT-3.5, using a (5) tuning-free prompting strategy that involves no parameter updating. Our results show that prompting methods based on LM parameter updating significantly improve citation classification performance on both domain-specific and multi-disciplinary citation classification. 
Moreover, our Dynamic Context-prompting method achieves top scores on both the ACL-ARC and ACT2 citation classification datasets, surpassing the highest-performing system in the 3C shared task benchmark. Interestingly, we observe zero-shot GPT-3.5 to perform well on ACT2 but poorly on the ACL-ARC dataset.", "keywords": "citation classification;prompt training;large language models;research evaluation"} {"id": "10.1145/3583780.3615109", "title": "WOT-Class: Weakly Supervised Open-World Text Classification", "abstract": "State-of-the-art weakly supervised text classification methods, while significantly reducing the required human supervision, still require supervision to cover all the classes of interest. This requirement is rarely met in practice when humans explore new, large corpora without a complete picture. In this paper, we work on a novel yet important problem of weakly supervised open-world text classification, where supervision is only needed for a few examples from a few known classes and the machine should handle both known and unknown classes at test time. General open-world classification has been studied mostly using image classification; however, existing methods typically assume the availability of sufficient known-class supervision and strong unknown-class prior knowledge (e.g., the number and/or data distribution). We propose a novel framework WOT-Class that lifts those strong assumptions. Specifically, it follows an iterative process of (a) clustering text to new classes, (b) mining and ranking indicative words for each class, and (c) merging redundant classes by using the overlapped indicative words as a bridge. Extensive experiments on 7 popular text classification datasets demonstrate that WOT-Class outperforms strong baselines consistently by a large margin, attaining 23.33% greater average absolute macro-F1 over existing approaches across all datasets. 
Such strong accuracy illuminates the practical potential of further reducing human effort for text classification.", "keywords": "text classification;open-world learning;weak supervision"} {"id": "10.1145/3583780.3614901", "title": "GripRank: Bridging the Gap between Retrieval and Generation via the Generative Knowledge Improved Passage Ranking", "abstract": "Retrieval-enhanced text generation has shown remarkable progress on knowledge-intensive language tasks, such as open-domain question answering and knowledge-enhanced dialogue generation, by leveraging passages retrieved from a large passage corpus for delivering a proper answer given the input query. However, the retrieved passages are not ideal for guiding answer generation because of the discrepancy between retrieval and generation, i.e., the candidate passages are all treated equally during the retrieval procedure without considering their potential to generate a proper answer. This discrepancy makes a passage retriever deliver a sub-optimal collection of candidate passages to generate the answer. In this paper, we propose the GeneRative Knowledge Improved Passage Ranking (GripRank) approach, addressing the above challenge by distilling knowledge from a generative passage estimator (GPE) to a passage ranker, where the GPE is a generative language model used to measure how likely the candidate passages can generate the proper answer. We realize the distillation procedure by teaching the passage ranker to rank the passages as ordered by the GPE. Furthermore, we improve the distillation quality by devising a curriculum knowledge distillation mechanism, which allows the knowledge provided by the GPE to be progressively distilled to the ranker through an easy-to-hard curriculum, enabling the passage ranker to correctly recognize the provenance of the answer from many plausible candidates. We conduct extensive experiments on four datasets across three knowledge-intensive language tasks. 
Experimental results show advantages over the state-of-the-art methods for both passage ranking and answer generation on the KILT benchmark.", "keywords": "passage ranking;knowledge distillation;retrieval-enhanced text generation;knowledge-intensive language tasks"} {"id": "10.1145/3583780.3615001", "title": "Optimizing Upstream Representations for Out-of-Domain Detection with Supervised Contrastive Learning", "abstract": "Out-of-Domain (OOD) text detection has attracted significant research interest. However, conventional approaches primarily employ Cross-Entropy loss during upstream encoder training and seldom focus on optimizing discriminative In-Domain (IND) and OOD representations. To fill this gap, we introduce a novel method that applies supervised contrastive learning (SCL) to IND data for upstream representation optimization. This effectively brings the embeddings of semantically similar texts together while pushing dissimilar ones further apart, leading to more compact and distinct IND representations. This optimization subsequently improves the differentiation between IND and OOD representations, thereby enhancing the detection effect in downstream tasks. To further strengthen the ability of SCL to consolidate IND embedding clusters, and to improve the generalizability of the encoder, we propose a method that generates two different variations of the same text as \"views\". This is achieved by applying \"dropout\" twice to the embeddings before performing SCL. 
Extensive experiments indicate that our method not only outperforms state-of-the-art approaches, but also reduces the requirement for training a large 354M-parameter model down to a more efficient 110M-parameter model, highlighting its superiority in both effectiveness and computational economy.", "keywords": "out-of-domain detection;supervised contrastive learning"} {"id": "10.1145/3583780.3615064", "title": "Spans, Not Tokens: A Span-Centric Model for Multi-Span Reading Comprehension", "abstract": "Many questions should be answered not with a single answer but with a set of multiple answers. This emerging Multi-Span Reading Comprehension (MSRC) task requires extracting multiple non-contiguous spans from a given context to answer a question. Existing methods extend conventional single-span models to predict the positions of the start and end tokens of answer spans, or predict the beginning-inside-outside tag of each token. Such token-centric paradigms can hardly capture dependencies among span-level answers which are critical to MSRC. In this paper, we propose SpanQualifier, a span-centric scheme where spans, as opposed to tokens, are directly represented and scored to qualify as answers. Explicit span representations enable their interaction which exploits their dependencies to enhance representations. Experiments on three MSRC datasets demonstrate the effectiveness of our span-centric scheme and show that SpanQualifier achieves state-of-the-art results.", "keywords": "multi-span reading comprehension;question answering;msrc"} {"id": "10.1145/3583780.3615085", "title": "Topic-Aware Contrastive Learning and K-Nearest Neighbor Mechanism for Stance Detection", "abstract": "The goal of stance detection is to automatically recognize the author's expressed attitude in text towards a given target. 
However, social media users often express themselves briefly and implicitly, which leads to a significant number of comments lacking explicit reference information to the target, posing a challenge for stance detection. To address the missing relationship between text and target, existing studies primarily focus on incorporating external knowledge, which inevitably introduces noise information. In contrast to their work, we are dedicated to mining implicit relational information within data. Typically, users tend to emphasize their attitudes towards a relevant topic or aspect of the target while concealing others when expressing opinions. Motivated by this phenomenon, we suggest that the potential correlation between text and target can be learned from instances with similar topics. Therefore, we design a pretext task to mine the topic associations between samples and model this topic association as a dynamic weight introduced into contrastive learning. In this way, we can selectively cluster samples that have similar topics and consistent stances, while enlarging the gap between samples with different stances in the feature space. Additionally, we propose a nearest-neighbor prediction mechanism for stance classification to better utilize the features we constructed. Our experiments on two datasets demonstrate the effectiveness and generalization ability of our method, yielding state-of-the-art results.", "keywords": "topic association;nearest neighbor mechanism;contrastive learning;stance detection;pretext task"} {"id": "10.1145/3583780.3615093", "title": "Towards Spoken Language Understanding via Multi-Level Multi-Grained Contrastive Learning", "abstract": "Spoken language understanding (SLU) is a core task in task-oriented dialogue systems, which aims at understanding the user's current goal through constructing semantic frames. SLU usually consists of two subtasks, including intent detection and slot filling. 
Although some SLU frameworks jointly model the two subtasks and achieve high performance, most of them still overlook the inherent relationships between intents and slots, and fail to achieve mutual guidance between the two subtasks. To solve the problem, we propose a multi-level multi-grained SLU framework, MMCL, which applies contrastive learning at three levels, including the utterance level, slot level, and word level, to enable intents and slots to guide each other. For the utterance level, our framework implements coarse-grained contrastive learning and fine-grained contrastive learning simultaneously. Besides, we also apply the self-distillation method to improve the robustness of the model. Experimental results and further analysis demonstrate that our proposed model achieves new state-of-the-art results on two public multi-intent SLU datasets, obtaining a 2.6 overall accuracy improvement on the MixATIS dataset compared to previous best models.", "keywords": "self-distillation;multi-grained;spoken language understanding;multi-level;contrastive learning"} {"id": "10.1145/3477495.3531926", "title": "A Non-Factoid Question-Answering Taxonomy", "abstract": "Non-factoid question answering (NFQA) is a challenging and under-researched task that requires constructing long-form answers, such as explanations or opinions, to open-ended non-factoid questions - NFQs. There is still little understanding of the categories of NFQs that people tend to ask, what form of answers they expect to see in return, and what the key research challenges of each category are. This work presents the first comprehensive taxonomy of NFQ categories and the expected structure of answers. The taxonomy was constructed with a transparent methodology and extensively evaluated via crowdsourcing. The most challenging categories were identified through an editorial user study. We also release a dataset of categorised NFQs and a question category classifier. 
Finally, we conduct a quantitative analysis of the distribution of question categories using major NFQA datasets, showing that the NFQ categories that are the most challenging for current NFQA systems are poorly represented in these datasets. This imbalance may lead to insufficient system performance for challenging categories. The new taxonomy, along with the category classifier, will aid research in the area, helping to create more balanced benchmarks and to focus models on addressing specific categories.", "keywords": "dataset analysis;question taxonomy;editorial study;non-factoid question-answering"} {"id": "10.1145/3477495.3532049", "title": "QUASER: Question Answering with Scalable Extractive Rationalization", "abstract": "Designing natural language processing (NLP) models that produce predictions by first extracting a set of relevant input sentences, i.e., rationales, is gaining importance for improving model interpretability and producing supporting evidence for users. Current unsupervised approaches are designed to extract rationales that maximize prediction accuracy, which is invariably obtained by exploiting spurious correlations in datasets, and leads to unconvincing rationales. In this paper, we introduce unsupervised generative models to extract dual-purpose rationales, which must not only be able to support a subsequent answer prediction, but also support a reproduction of the input query. We show that such models can produce more meaningful rationales that are less influenced by dataset artifacts and, as a result, also achieve state-of-the-art results on rationale extraction metrics on four datasets from the ERASER benchmark, significantly improving upon previous unsupervised methods. 
Our multi-task model is scalable and enables the use of state-of-the-art pretrained language models to design explainable question answering systems.", "keywords": "explainability;robustness;question answering"} {"id": "10.1145/3477495.3532048", "title": "PTAU: Prompt Tuning for Attributing Unanswerable Questions", "abstract": "Current question answering systems are insufficient when confronting real-life scenarios, as they can hardly be aware of whether a question is answerable given its context. Hence, there is a recent pursuit of the unanswerability of a question and its attribution. Attribution of unanswerability requires the system to choose an appropriate cause for an unanswerable question. As the task is sophisticated for even human beings, it is expensive to acquire labeled data, which makes it a low-data regime problem. Moreover, the causes themselves are semantically abstract and complex, and the process of attribution is heavily question- and context-dependent. Thus, a capable model has to carefully appreciate the causes, and then, judiciously contrast the question with its context, in order to cast it into the right cause. In response to the challenges, we present PTAU, which refers to and implements a high-level human reading strategy such that one reads with anticipation. Specifically, PTAU leverages the recent prompt-tuning paradigm, and is further enhanced with two innovatively conceived modules: 1) a cause-oriented template module that constructs continuous templates toward certain attribution classes in high-dimensional vector space; and 2) a semantics-aware label module that exploits label semantics through contrastive learning to render the classes distinguishable. 
Extensive experiments demonstrate that the proposed design better enlightens not only the attribution model, but also current question answering models, leading to superior performance.", "keywords": "question answering;prompt tuning;attribution of unanswerability"} {"id": "10.1145/3477495.3532084", "title": "DGQAN: Dual Graph Question-Answer Attention Networks for Answer Selection", "abstract": "Community question answering (CQA) has become increasingly prevalent in recent years, providing platforms for users with various backgrounds to obtain information and share knowledge. However, the redundancy and lengthiness issues of crowd-sourced answers limit the performance of answer selection, thus leading to difficulties in reading or even misunderstandings for community users. To solve these problems, we propose the dual graph question-answer attention networks (DGQAN) for the answer selection task. Aiming to fully understand the internal structure of the question and the corresponding answer, we first construct a dual-CQA concept graph with graph convolution networks using the original question and answer text. Specifically, our CQA concept graph exploits the correlation information between question-answer pairs to construct two sub-graphs (QSubject-Answer and QBody-Answer), respectively. Further, a novel dual attention mechanism is incorporated to model both the internal and external semantic relations among questions and answers. More importantly, we conduct experiments to investigate the impact of each layer in the BERT model. 
The experimental results show that the DGQAN model achieves state-of-the-art performance on three datasets (SemEval-2015, 2016, and 2017), outperforming all the baseline models.", "keywords": "answer selection;community question answering;dual graph attention"} {"id": "10.1145/3477495.3532064", "title": "Tag-Assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities", "abstract": "Multimodal sentiment analysis has been studied under the assumption that all modalities are available. However, such a strong assumption does not always hold in practice, and most multimodal fusion models may fail when partial modalities are missing. Several works have addressed the missing modality problem, but most of them only considered the single modality missing case and ignored the practically more general cases of multiple modalities missing. To this end, in this paper, we propose a Tag-Assisted Transformer Encoder (TATE) network to handle the problem of missing uncertain modalities. Specifically, we design a tag encoding module to cover both the single modality and multiple modalities missing cases, so as to guide the network's attention to those missing modalities. Besides, we adopt a new space projection pattern to align common vectors. Then, a Transformer encoder-decoder network is utilized to learn the missing modality features. Finally, the outputs of the Transformer encoder are used for the final sentiment classification. 
Extensive experiments are conducted on the CMU-MOSI and IEMOCAP datasets, showing that our method can achieve significant improvements compared with several baselines.", "keywords": "multimodal sentiment analysis;missing modality;joint representation"} {"id": "10.1145/3477495.3532029", "title": "Mutual Disentanglement Learning for Joint Fine-Grained Sentiment Classification and Controllable Text Generation", "abstract": "The fine-grained sentiment classification (FGSC) task and the fine-grained controllable text generation (FGSG) task are two representative applications of sentiment analysis, which together form an inverse task pair, i.e., the former aims to infer the fine-grained sentiment polarities given a text piece, while the latter generates text content that describes the input fine-grained opinions. Most of the existing work solves the FGSC and the FGSG tasks in isolation, while ignoring the complementary benefits in between. This paper combines FGSC and FGSG as a joint dual learning system, encouraging them to learn the advantages from each other. Based on the dual learning framework, we further propose decoupling the feature representations in the two tasks into fine-grained aspect-oriented opinion variables and content variables respectively, by performing mutual disentanglement learning upon them. We also propose to transform the difficult \"data-to-text\" generation fashion widely used in FGSG into an easier text-to-text generation fashion by creating surrogate natural language text as the model inputs. Experimental results on 7 sentiment analysis benchmarks including both document-level and sentence-level datasets show that our method significantly outperforms the current strong-performing baselines on both the FGSC and FGSG tasks. 
Automatic and human evaluations demonstrate that our FGSG model successfully generates fluent, diverse and rich content conditioned on fine-grained sentiments.", "keywords": "representation disentanglement;controllable text generation;sentiment analysis;dual learning"} {"id": "10.1145/3477495.3531984", "title": "Graph Adaptive Semantic Transfer for Cross-Domain Sentiment Classification", "abstract": "Cross-domain sentiment classification (CDSC) aims to use the transferable semantics learned from the source domain to predict the sentiment of reviews in the unlabeled target domain. Existing studies in this task pay more attention to the sequence modeling of sentences while largely ignoring the rich domain-invariant semantics embedded in graph structures (i.e., the part-of-speech tags and dependency relations). As an important aspect of exploring characteristics of language comprehension, adaptive graph representations have played an essential role in recent years. To this end, in this paper, we aim to explore the possibility of learning invariant semantic features from graph-like structures in CDSC. Specifically, we present the Graph Adaptive Semantic Transfer (GAST) model, an adaptive syntactic graph embedding method that is able to learn domain-invariant semantics from both word sequences and syntactic graphs. More specifically, we first propose a POS-Transformer module to extract sequential semantic features from the word sequences as well as the part-of-speech tags. Then, we design a Hybrid Graph Attention (HGAT) module to generate syntax-based semantic features by considering the transferable dependency relations. Finally, we devise an Integrated aDaptive Strategy (IDS) to guide the joint learning process of both modules. 
Extensive experiments on four public datasets indicate that GAST achieves comparable effectiveness to a range of state-of-the-art models.", "keywords": "domain adaptation;sentiment analysis;graph embedding;web content analysis;text mining"} {"id": "10.1145/3477495.3531938", "title": "Aspect Feature Distillation and Enhancement Network for Aspect-Based Sentiment Analysis", "abstract": "Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task designed to identify the polarity of a target aspect. Some works introduce various attention mechanisms to fully mine the relevant context words of different aspects, and use the traditional cross-entropy loss to fine-tune the models for the ABSA task. However, the attention mechanism paying partial attention to aspect-unrelated words inevitably introduces irrelevant noise. Moreover, the cross-entropy loss lacks discriminative learning of features, which makes it difficult to exploit the implicit information of intra-class compactness and inter-class separability. To overcome these challenges, we propose an Aspect Feature Distillation and Enhancement Network (AFDEN) for the ABSA task. We first propose a dual-feature extraction module to extract aspect-related and aspect-unrelated features through the attention mechanisms and graph convolutional networks. Then, to eliminate the interference of aspect-unrelated words, we design a novel aspect-feature distillation module containing a gradient reverse layer that learns aspect-unrelated contextual features through adversarial training, and an aspect-specific orthogonal projection layer to further project aspect-related features into the orthogonal space of aspect-unrelated features. Finally, we propose an aspect-feature enhancement module that leverages supervised contrastive learning to capture the implicit information between the same sentiment labels and between different sentiment labels. 
Experimental results on three public datasets demonstrate that our AFDEN model achieves state-of-the-art performance and verify the effectiveness and robustness of our model.", "keywords": "supervised contrastive learning;orthogonal projection;aspect-based sentiment analysis;adversarial training"} {"id": "10.1145/3477495.3532085", "title": "IAOTP: An Interactive End-to-End Solution for Aspect-Opinion Term Pairs Extraction", "abstract": "Recently, the aspect-opinion term pairs (AOTP) extraction task has gained substantial importance in the domain of aspect-based sentiment analysis. It intends to extract the potential pair of each aspect term with its corresponding opinion term present in a user review. Some existing studies heavily relied on the annotated aspect terms and/or opinion terms, or adopted external knowledge/resources to figure out the task. Therefore, in this study, we propose a novel end-to-end solution, called an Interactive AOTP (IAOTP) model, for exploring AOTP. The IAOTP model first tracks the boundary of each token in given aspect-specific and opinion-specific representations through a span-based operation. Next, it generates the candidate AOTP by formulating the dyadic relations between tokens through the Biaffine transformation. Then, it computes the positioning information to capture the significant distance relationship that each candidate pair holds. And finally, it jointly models collaborative interactions and prediction of AOTP through a 2D self-attention. Besides the IAOTP model, this study also proposes an independent aspect/opinion encoding model (a RS model) that formulates relational semantics to obtain aspect-specific and opinion-specific representations that can effectively perform the extraction of aspect and opinion terms. 
Detailed experiments conducted on the publicly available benchmark datasets for AOTP, aspect terms, and opinion terms extraction tasks clearly demonstrate the significantly improved performance of our models relative to other competitive state-of-the-art baselines.", "keywords": "natural language processing;aspect-opinion term pairs extraction;aspect-based sentiment analysis"} {"id": "SHAIK2023100003", "title": "Sentiment analysis and opinion mining on educational data: A survey", "abstract": "Sentiment analysis, also known as opinion mining, is one of the most widely used NLP applications to identify human intentions from their reviews. In the education sector, opinion mining is used to listen to student opinions and enhance their learning\u2013teaching practices pedagogically. With advancements in sentiment annotation techniques and AI methodologies, student comments can be labelled with their sentiment orientation without much human intervention. In this review article, (1) we consider the role of sentiment analysis in education from four levels: document level, sentence level, entity level, and aspect level, (2) sentiment annotation techniques including lexicon-based and corpus-based approaches for unsupervised annotations are explored, (3) the role of AI in sentiment analysis with methodologies like machine learning, deep learning, and transformers is discussed, (4) the impact of sentiment analysis on educational procedures to enhance pedagogy, decision-making, and evaluation is presented. Educational institutions have invested widely in building sentiment analysis tools that process student feedback to draw out opinions and insights. Applications built on sentiment analysis of student feedback are reviewed in this study. Challenges in sentiment analysis like multi-polarity, polysemy, negation words, and opinion spam detection are explored and their trends in the research space are discussed. 
The future directions of sentiment analysis in education are discussed.", "keywords": "sentiment analysis;opinion mining;student feedback;ai;bert;deep learning"} {"id": "KABIR2023100011", "title": "ASPER: Attention-based approach to extract syntactic patterns denoting semantic relations in sentential context", "abstract": "Semantic relationships, such as hyponym\u2013hypernym, cause\u2013effect, meronym\u2013holonym etc., between a pair of entities in a sentence are usually reflected through syntactic patterns. Automatic extraction of such patterns benefits several downstream tasks, including, entity extraction, ontology building, and question answering. Unfortunately, automatic extraction of such patterns has not yet received much attention from NLP and information retrieval researchers. In this work, we propose an attention-based supervised deep learning model, ASPER, which extracts syntactic patterns between entities exhibiting a given semantic relation in the sentential context. We validate the performance of ASPER on three distinct semantic relations\u2014hyponym\u2013hypernym, cause\u2013effect, and meronym\u2013holonym on six datasets. Experimental results show that for all these semantic relations, ASPER can automatically identify a collection of syntactic patterns reflecting the existence of such a relation between a pair of entities in a sentence. In comparison to the existing methodologies of syntactic pattern extraction, ASPER\u2019s performance is substantially superior.", "keywords": "syntactic pattern;syntactic pattern extraction"} {"id": "STRUBBE2023100012", "title": "Development and verification of a user-friendly software for German text simplification focused on patients with cerebral palsy", "abstract": "People with disabilities, especially those who have been diagnosed with cerebral palsy or other neurological diseases, are often not able to understand the text they are confronted with in their daily life. 
Important information, for example from the government, is manually translated by professionals into easy language suitable for these people; however, this method cannot be scaled up to a large number of documents. We have developed software that translates German text into a simpler version. The software, based on the easy language rules, shortens the text and replaces difficult words with synonyms. To maintain correct grammar during the synonym replacement, the software recognizes the phrase containing the replaced word and conjugates the other words in this phrase, if needed. To verify the software, the grammatical correctness of the replacement was quantified by German native speakers and the text shortening was compared against the results of 12 healthy volunteer readers. The software provides an easy-to-use interface, which takes into account both the mental and physical difficulties of the impaired people. Therefore, one can assume that the software will be a helpful tool for the acquisition of text content for these people.", "keywords": "natural language processing;lexical simplification;cerebral palsy;easy language"} {"id": "SHAHZAD2023100010", "title": "Predicting Facebook sentiments towards research", "abstract": "Social media platforms provide users with various ways of interacting with each other, such as commenting, reacting to posts, sharing content, and uploading pictures. Facebook is one of the most popular platforms, and its users frequently share and reshare posts, including research articles. Moreover, the reactions feature on Facebook allows users to express their feelings towards the content they view, providing valuable data for analysis. This study aims to predict the emotional impact of Facebook posts relating to research articles. We collected data on Facebook posts related to various scientific research domains, including Health Sciences, Social Sciences, Dentistry, Arts, and Humanities. 
We observed Facebook users\u2019 reactions towards research articles and posts and found that \u2018Like\u2019 reactions were the most common. We also noticed that research articles from the Dentistry research domain received a lot of \u2018Haha\u2019 reactions. We used machine learning models to predict the sentiment of Facebook posts related to research articles. We used features such as the research article\u2019s title sentiment, abstract sentiment, abstract length, author count, and research domain to build the models. We used five classifiers: Random Forest, Decision Tree, K-Nearest Neighbors, Logistic Regression, and Na\u00efve Bayes. The models were evaluated using accuracy, precision, recall, and F-1 score metrics. The Random Forest classifier was the best model for two- and three-class labels, achieving accuracy measures of 86", "keywords": "research sentiments;applied machine learning;sentiment analysis;facebook reactions"} {"id": "MEETEI2023100016", "title": "Do cues in a video help in handling rare words in a machine translation system under a low-resource setting?", "abstract": "Video modality is an emerging research area among the numerous modalities utilized for multimodal machine translation. Multimodal machine translation uses multiple modalities to improve the machine-translated target language from the source language. However, the currently available multimodal dataset is focused on a few well-studied languages. In this paper, we propose a video-guided multimodal machine translation (VMMT) model under a low-resource setting by building a synthetic multimodal dataset of the English-Hindi language pair, the first one of its kind for this language pair. The VMMT system employs spatio-temporal video context as an additional input modality along with the source text. The spatio-temporal video context is extracted using a pre-trained 3D convolutional neural network. 
We report how well the VMMT systems outperform the text-only neural machine translation (NMT) system using automatic evaluation metrics and human evaluation on two test datasets: one in-domain and another out-domain. Our results indicate that the use of video context as an additional input modality enhances the performance of the MT system in resolving the various MT challenges, such as handling rare words, ambiguity, etc., in both English\u2192Hindi and Hindi\u2192English translations. Our experimental results show a significant improvement of up to +4.2 BLEU and +0.07 chrF scores in English\u2192Hindi and +5.4 BLEU and +0.07 chrF scores in Hindi\u2192English with our VMMT system over unimodal NMT system. Our findings highlight the potential of visual cues as an additional modality for improving machine translation systems especially in low-resource settings and emphasize the importance of synthetic multimodal datasets in addressing the scarcity of diverse data for less-studied language pairs.", "keywords": "multimodal machine translation;neural machine translation;video guided;visual attention;low resource;hindi"} {"id": "JEHANGIR2023100017", "title": "A survey on Named Entity Recognition \u2014 datasets, tools, and methodologies", "abstract": "Natural language processing (NLP) is crucial in the current processing of data because it takes into account many sources, formats, and purposes of data as well as information from various sectors of our economy, government, and private and public lives. We perform a variety of NLP operations on the text in order to complete certain tasks. One of them is NER (Named Entity Recognition). An act of recognizing and categorizing named entities that are presented in a text document is known as named entity recognition. The purpose of NER is to find references of rigid designators in the text which belong to established semantic kinds like a person, place, organization, etc. 
It acts as a cornerstone for many Information Extraction-related activities. In this work, we present a thorough analysis of several methodologies for NER ranging from unsupervised learning, rule-based, supervised learning, and various Deep Learning based approaches. We examine the most relevant datasets, tools, and deep learning approaches like Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Bidirectional Long Short Term Memory, Transfer learning approaches, and numerous other approaches currently being used in present-day NER problem environments and their applications. Finally, we outline the difficulties NER systems encounter and future directions.", "keywords": "natural language processing;named entity recognition;deep learning;convolutional neural network;bidirectional long short term memory;recurrent neural networks"} {"id": "CHAKRAVARTHI2023100006", "title": "Detecting abusive comments at a fine-grained level in a low-resource language", "abstract": "YouTube is a video-sharing and social media platform where users create profiles and share videos for their followers to view, like, and comment on. Abusive comments on videos or replies to other comments may be offensive and detrimental for the mental health of users on the platform. It is observed that often the language used in these comments is informal and does not necessarily adhere to the formal syntactic and lexical structure of the language. Therefore, creating a rule-based system for filtering out abusive comments is challenging. This article introduces four datasets of abusive comments in Tamil and code-mixed Tamil\u2013English extracted from YouTube. Comment-level annotation has been carried out for each dataset by assigning polarities to the comments. We hope these datasets can be used to train effective machine learning-based comment filters for these languages by mitigating the challenges associated with rule-based systems. 
In order to establish baselines on these datasets, we have carried out experiments with various machine learning classifiers and reported the results using F1-score, precision, and recall. Furthermore, we have employed a t-test to analyze the statistical significance of the results generated by the machine learning classifiers, and SHAP values to analyze and explain those results. The primary contribution of this paper is the construction of a publicly accessible dataset of social media messages annotated with fine-grained abusive speech labels in the low-resource Tamil language. Overall, we discovered that MURIL performed well on the binary abusive comment detection task, showing the applicability of multilingual transformers for this work. Nonetheless, fine-grained annotation for fine-grained abusive comment detection (FGACD) resulted in a significantly lower number of samples per class, and classical machine learning models outperformed deep learning models, which require extensive training datasets, on this challenge. To our knowledge, this is the first Tamil-language study on FGACD focused on diverse ethnicities. The methodology for detecting abusive messages described in this work may aid in the creation of comment filters for other under-resourced languages on social media.", "keywords": "abuse detection;low-resourced languages;machine learning;deep learning models;tamil"} {"id": "OLIAEE2023100007", "title": "Using Bidirectional Encoder Representations from Transformers (BERT) to classify traffic crash severity types", "abstract": "Traffic crashes are a critical safety concern. Many studies have attempted to improve traffic safety by conducting a wide range of studies on safety topics with the application of diverse statistical and machine learning models.
The data elements contained in police-reported crash narratives are not routinely analyzed alongside coded and structured crash data. In recent years, unstructured textual content in traffic crash narratives has been investigated by many researchers. However, most of these studies are basic text mining applications, and the datasets are often limited in size. This study applied an advanced language model, Bidirectional Encoder Representations from Transformers (BERT), to classify traffic injury types by using a dataset of over 750,000 unique crash narrative reports. The models have an 84.2", "keywords": "crashes;safety;severity;text mining;bert;language model"} {"id": "EZALDEEN2023100008", "title": "Semantics aware intelligent framework for content-based e-learning recommendation", "abstract": "E-learning accounts for the emergence of re-skilling, up-skilling, and augmenting the traditional education system by providing knowledge delivery. The meaningful learning approach is based on a constructivist process for conceptually modeling an individual\u2019s current and past knowledge or experience towards personalization. This research proposes a novel framework in which semantic analysis of e-content is combined with deep machine learning techniques into an e-learning Intelligent Content-Based Recommendation System (ICRS), with the goal of assisting learners in selecting appropriate e-learning materials. The system processes textual e-content to extract representative terms and their mutual semantic relationships, from which a context-based graph structure is developed. Thus, the e-content is semantically represented according to the learner\u2019s terms, which are expanded using the ConceptNet semantic network to represent the textual knowledge meaningfully.
Furthermore, a new approach is proposed wherein additional contextual and semantic information among concepts and graphs is combined to infer the relative semantic relations between terms and e-learning resources in order to build the semantic matrix. This approach is utilized to generate learners\u2019 semantic datasets used for classifying the available resources to enrich the recommendation. Four machine learning (ML) models and an augmented deep learning (DL) recommender model named LSTMM are developed for the ICRS framework. The models have been evaluated and compared using a user sequential semantic dataset. Our results show that LSTMM performs better than the others, with an Accuracy of 0.8453 and an F1 Score of 0.7731.", "keywords": "personalized e-learning recommendation;semantic analysis;term expansion;deep machine learning;knowledge graph"} {"id": "MOROZOVSKII2023100014", "title": "Rare words in text summarization", "abstract": "Automatic text summarization is a difficult task that requires a good understanding of the input text to produce a fluent, brief, and comprehensive summary. The usage of text summarization models ranges from legal document summarization to news summarization. The model should be able to understand where important information is located to produce a good summary. However, infrequently used or rare words might limit the model\u2019s understanding of the input text, as the model might ignore such words or pay less attention to them, especially when the trained model is tested on a dataset where the frequency distribution of words is different. Another issue is that the model accepts only a limited number of tokens (words) of the input text, which might contain redundant information or omit important information located further in the text.
To address the problem of rare words, we propose a modification to the attention mechanism of the transformer model with a pointer-generator layer, in which the attention mechanism receives frequency information for each word, helping to boost rare words. Additionally, our proposed supervised learning model uses a hybrid approach incorporating both extractive and abstractive elements to include more important information for the abstractive model in a news summarization task. We designed experiments involving a combination of six different hybrid models with varying input text sizes (measured in tokens) to test our proposed model. Four well-known news article datasets were used in this work: CNN/DM, XSum, Gigaword and DUC 2004 Task 1. Our results were compared using the well-known ROUGE metric. Our best model achieved an R-1 score of 38.22, an R-2 score of 15.07, and an R-L score of 35.79, outperforming three existing models by several ROUGE points.", "keywords": "abstractive summarization;extractive summarization;pointer-generator;transformer;rare words"} {"id": "XU2023100009", "title": "Improving aspect-based sentiment analysis with contrastive learning", "abstract": "As a fine-grained sentiment analysis task focusing on detecting the sentiment polarity of aspect(s) in a sentence, aspect-based sentiment analysis (ABSA) plays a significant role in opinion analysis and review analysis. Recently, a number of methods have emerged that leverage contrastive learning techniques to enhance the performance of ABSA by learning fine-grained sentiment representations. In this paper, we present and compare two commonly used contrastive learning approaches for enhancing ABSA performance: sentiment-based supervised contrastive learning and augmentation-based unsupervised contrastive learning. Sentiment-based supervised contrastive learning employs sentiment labels to distinguish between positive and negative samples.
Augmentation-based unsupervised contrastive learning aims to utilize various data augmentation strategies to generate positive samples. Experimental results on three public ABSA datasets demonstrate that both contrastive learning methods significantly improve the performance of ABSA. Sentiment-based supervised contrastive learning outperforms augmentation-based unsupervised contrastive learning in terms of overall performance improvements. Furthermore, we conduct additional experiments to illustrate the effectiveness and generalizability of these two contrastive learning approaches. The experimental code and data are publicly available at the link: https://github.com/Linda230/ABSA-CL.", "keywords": "aspect-based sentiment analysis;contrastive learning;data augmentation;sentiment label"} {"id": "LOPEZESPEJEL2023100013", "title": "A comprehensive review of State-of-The-Art methods for Java code generation from Natural Language Text", "abstract": "Java code generation consists of automatically generating Java code from natural language text. This NLP task helps increase programmers\u2019 productivity by providing them with immediate solutions to the simplest and most repetitive tasks. Code generation is a challenging task because of the strict syntactic rules and the need for a deep understanding of the semantics of the programming language. Many works have tried to tackle this task using either RNN-based or Transformer-based models. The latter have achieved remarkable advancements in the domain and can be divided into three groups: (1) encoder-only models, (2) decoder-only models, and (3) encoder\u2013decoder models. In this paper, we provide a comprehensive review of the evolution and progress of deep learning models in the Java code generation task. We focus on the most important methods and present their merits and limitations, as well as the objective functions used by the community.
In addition, we provide a detailed description of the datasets and evaluation metrics used in the literature. Finally, we discuss the results of different models on the CONCODE dataset and propose some future directions.", "keywords": "java code generation;language models;natural language processing;recurrent neural networks;transformer neural networks"} {"id": "DAHOU2023100005", "title": "DzNER: A large Algerian Named Entity Recognition dataset", "abstract": "Named Entity Recognition (NER) is a natural language processing (NLP) task that involves assigning labels like Person, Location, and Organization to words in text. While there is a good amount of annotated data available for NER in English and other European languages, this is not the case for Arabic and its dialects. The goal of this paper is to introduce DzNER, an Algerian dataset for NER that consists of more than 21,000 manually annotated sentences (over 220,000 tokens) from Algerian Facebook pages and YouTube channels, with a focus on three prominent classes. In this study, we provide a detailed analysis of the NER tag-set used in the dataset and show that it has a good balance of quantity, diversity, and coverage of different domains. We also demonstrate the effectiveness of the dataset by using various language models for the sequence labeling task of NER and comparing the results to existing datasets. To the best of our knowledge, no currently available dataset matches DzNER in both variability and volume.
We hope that this dataset and the accompanying code and models will be useful for further research on NLP for the Algerian dialect and help close the gap in low-resource languages.", "keywords": "algerian dialect;named entity recognition;arabic;dataset;low-resource language"} {"id": "10.1145/3539597.3570364", "title": "Friendly Conditional Text Generator", "abstract": "Our goal is to control text generation with more fine-grained conditions at lower computational cost than is possible with current alternatives; these conditions are attributes (i.e., multiple codes and free-text). As large-scale pre-trained language models (PLMs) offer excellent performance in free-form text generation, we explore efficient architectures and training schemes that can best leverage PLMs. Our framework, Friendly Conditional Text Generator (FCTG), introduces a multi-view attention (MVA) mechanism and two training tasks, Masked Attribute Modeling (MAM) and Attribute Linguistic Matching (ALM), to direct various PLMs via modalities between the text and its attributes. The motivation of FCTG is to map texts and attributes into a shared space and bridge their modality gaps, as the texts and attributes reside in different regions of semantic space. To avoid catastrophic forgetting, modality-free embedded representations are learned and used to direct PLMs in this space: FCTG applies MAM to learn attribute representations, maps them into the same space as text through MVA, and optimizes their alignment in this space via ALM.
Experiments on publicly available datasets show that FCTG outperforms baselines on higher-level conditions at lower computational cost.", "keywords": "transformer;conditional text generation;neural language model;neural language generation"} {"id": "10.1145/3539597.3570475", "title": "Effective Seed-Guided Topic Discovery by Integrating Multiple Types of Contexts", "abstract": "Instead of mining coherent topics from a given text corpus in a completely unsupervised manner, seed-guided topic discovery methods leverage user-provided seed words to extract distinctive and coherent topics so that the mined topics can better cater to the user's interest. To model the semantic correlation between words and seeds for discovering topic-indicative terms, existing seed-guided approaches utilize different types of context signals, such as document-level word co-occurrences, sliding window-based local contexts, and generic linguistic knowledge brought by pre-trained language models. In this work, we analyze and show empirically that each type of context information has its value and limitation in modeling word semantics under seed guidance, but combining three types of contexts (i.e., word embeddings learned from local contexts, pre-trained language model representations obtained from general-domain training, and topic-indicative sentences retrieved based on seed information) allows them to complement each other for discovering quality topics. We propose an iterative framework, SeedTopicMine, which jointly learns from the three types of contexts and gradually fuses their context signals via an ensemble ranking process.
Under various sets of seeds and on multiple datasets, SeedTopicMine consistently yields more coherent and accurate topics than existing seed-guided topic discovery approaches.", "keywords": "text embedding;topic discovery"} {"id": "10.1145/3539597.3570398", "title": "Making Pre-Trained Language Models End-to-End Few-Shot Learners with Contrastive Prompt Tuning", "abstract": "Pre-trained Language Models (PLMs) have achieved remarkable performance for various language understanding tasks in IR systems, which require the fine-tuning process based on labeled training data. For low-resource scenarios, prompt-based learning for PLMs exploits prompts as task guidance and turns downstream tasks into masked language problems for effective few-shot fine-tuning. In most existing approaches, the high performance of prompt-based learning heavily relies on handcrafted prompts and verbalizers, which may limit the application of such approaches in real-world scenarios. To solve this issue, we present CP-Tuning, an end-to-end Contrastive Prompt Tuning framework for fine-tuning PLMs without any manual engineering of task-specific prompts and verbalizers. It is integrated with the task-invariant continuous prompt encoding technique with fully trainable prompt parameters. We further propose the pair-wise cost-sensitive contrastive learning procedure to optimize the model in order to achieve verbalizer-free class mapping and enhance the task-invariance of prompts. It explicitly learns to distinguish different classes and makes the decision boundary smoother by assigning different costs to easy and hard cases. 
Experiments over a variety of language understanding tasks and different PLMs show that CP-Tuning outperforms state-of-the-art methods.", "keywords": "deep contrastive learning;few-shot learning;pre-trained language models;prompt-based fine-tuning"} {"id": "10.1145/3539597.3570428", "title": "Visual Matching is Enough for Scene Text Retrieval", "abstract": "Given a text query, the task of scene text retrieval aims at searching and localizing all the text instances that are contained in an image gallery. The state-of-the-art method learns a cross-modal similarity between the query text and the detected text regions in natural images to facilitate retrieval. However, this cross-modal approach still cannot well bridge the heterogeneity gap between the text and image modalities. In this paper, we propose a new paradigm that converts the task into a single-modality retrieval problem. Unlike previous works that rely on character recognition or embedding, we directly leverage pictorial information by rendering query text into images to learn the glyph feature of each character, which can be utilized to capture the similarity between query and scene text images. With the extracted visual features, we devise a synthetic label image guided feature alignment mechanism that is robust to different scene text styles and layouts. The modules of glyph feature learning, text instance detection, and visual matching are jointly trained in an end-to-end framework. Experimental results show that our proposed paradigm achieves the best performance in multiple benchmark datasets. 
As a side product, our method can also be easily generalized to support text queries with unseen characters or languages in a zero-shot manner.", "keywords": "scene text retrieval;visual matching;multimodal retrieval"} {"id": "10.1145/3539597.3570481", "title": "AGREE: Aligning Cross-Modal Entities for Image-Text Retrieval Upon Vision-Language Pre-Trained Models", "abstract": "Image-text retrieval is a challenging cross-modal task that attracts much attention. While traditional methods cannot break down the barriers between different modalities, Vision-Language Pre-trained (VLP) models greatly improve image-text retrieval performance based on massive image-text pairs. Nonetheless, VLP-based methods are still prone to produce retrieval results that are not cross-modally aligned with entities. Recent efforts try to fix this problem at the pre-training stage, which is not only expensive but also impractical due to the unavailability of full datasets. In this paper, we propose a novel, lightweight, and practical approach to align cross-modal entities for image-text retrieval upon VLP models only at the fine-tuning and re-ranking stages. We employ external knowledge and tools to construct extra fine-grained image-text pairs, and then emphasize cross-modal entity alignment through contrastive learning and entity-level mask modeling in fine-tuning. Besides, two re-ranking strategies are proposed, including one specially designed for zero-shot scenarios. Extensive experiments with several VLP models on multiple Chinese and English datasets show that our approach achieves state-of-the-art results in nearly all settings.", "keywords": "image-text retrieval;vlp;vision-language pre-training"} {"id": "10.1145/3539597.3570431", "title": "Can Pre-Trained Language Models Understand Chinese Humor?", "abstract": "Humor understanding is an important and challenging research area in natural language processing.
With the growing popularity of pre-trained language models (PLMs), some recent works have made preliminary attempts to adopt PLMs for humor recognition and generation. However, these simple attempts do not substantially answer the question of whether PLMs are capable of humor understanding. This paper is the first work to systematically investigate the humor understanding ability of PLMs. For this purpose, a comprehensive framework with three evaluation steps and four evaluation tasks is designed. We also construct a comprehensive Chinese humor dataset, which can fully meet all the data requirements of the proposed evaluation framework. Our empirical study on the Chinese humor dataset yields some valuable observations, which are of great guiding value for the future optimization of PLMs in humor understanding and generation.", "keywords": "pre-trained language models;chinese humor dataset;humor understanding;humor evaluation framework"} {"id": "10.1145/3539618.3591637", "title": "An Effective Framework for Enhancing Query Answering in a Heterogeneous Data Lake", "abstract": "There has been a growing interest in cross-source searching to gain rich knowledge in recent years. A data lake collects massive raw and heterogeneous data with different data schemas and query interfaces. Many real-life applications require query answering over the heterogeneous data lake, such as e-commerce, bioinformatics and healthcare. In this paper, we propose LakeAns, which semantically integrates heterogeneous data schemas of the lake to enhance the semantics of query answers. To this end, we propose a novel framework to efficiently and effectively perform the cross-source searching. The framework exploits a reinforcement learning method to semantically integrate the data schemas and further create a global relational schema for the heterogeneous data. It then performs a query answering algorithm based on the global schema to find answers across multiple data sources.
We conduct extensive experimental evaluations using real-life data to verify that our approach outperforms existing solutions in terms of effectiveness and efficiency.", "keywords": "relational schema;heterogeneous data lake;query answering"} {"id": "10.1145/3539618.3591698", "title": "BeamQA: Multi-Hop Knowledge Graph Question Answering with Sequence-to-Sequence Prediction and Beam Search", "abstract": "Knowledge Graph Question Answering (KGQA) is a task that aims to answer natural language queries by extracting facts from a knowledge graph. Current state-of-the-art techniques for KGQA rely on text-based information from graph entity and relation labels, as well as external textual corpora. By reasoning over multiple edges in the graph, these methods can accurately rank and return the most relevant entities. However, one of the limitations of these methods is that they cannot handle the inherent incompleteness of real-world knowledge graphs and may lead to inaccurate answers due to missing edges. To address this issue, recent advances in graph representation learning have led to the development of systems that can use link prediction techniques to handle missing edges probabilistically, allowing the system to reason with incomplete information. However, existing KGQA frameworks that use such techniques often depend on learning a transformation from the query representation to the graph embedding space, which requires access to a large training dataset. We present BeamQA, an approach that overcomes these limitations by combining a sequence-to-sequence prediction model with beam search execution in the embedding space. Our model uses a pre-trained large language model and synthetic question generation.
Our experiments demonstrate the effectiveness of BeamQA when compared to other KGQA methods on two knowledge graph question-answering datasets.", "keywords": "question answering;knowledge graphs"} {"id": "10.1145/3539618.3591710", "title": "Leader-Generator Net: Dividing Skill and Implicitness for Conquering FairytaleQA", "abstract": "Machine reading comprehension requires systems to understand the given passage and answer questions. Previous methods mainly focus on the interaction between the question and passage. However, they ignore the deep exploration of cognitive elements behind questions, such as fine-grained reading skills (this paper focuses on narrative comprehension skills) and implicitness or explicitness of the question (whether the answer can be found in the passage). Grounded in prior literature on reading comprehension, the understanding of a question is a complex process where human beings need to understand the semantics of the question, use different reading skills for different questions, and then judge the implicitness of the question. To this end, a simple but effective Leader-Generator Network is proposed to explicitly separate and extract fine-grained reading skills and the implicitness or explicitness of the question. Specifically, the proposed skill leader accurately captures the semantic representation of fine-grained reading skills with contrastive learning. And the implicitness-aware pointer-generator adaptively extracts or generates the answer based on the implicitness or explicitness of the question. Furthermore, to validate the generalizability of the methodology, we annotate a new dataset named NarrativeQA 1.1. Experiments on the FairytaleQA and NarrativeQA 1.1 show that the proposed model achieves the state-of-the-art performance (about 5% gain on Rouge-L) on the question answering task. 
Our annotated data and code are available at https://github.com/pengwei-iie/Leader-Generator-Net.", "keywords": "machine reading comprehension;implicit or explicit question;reading skills;question answering"} {"id": "10.1145/3539618.3591936", "title": "A Lightweight Constrained Generation Alternative for Query-Focused Summarization", "abstract": "Query-focused summarization (QFS) aims to provide a summary of a document that satisfies information need of a given query and is useful in various IR applications, such as abstractive snippet generation. Current QFS approaches typically involve injecting additional information, e.g. query-answer relevance or fine-grained token-level interaction between a query and document, into a finetuned large language model. However, these approaches often require extra parameters & training, and generalize poorly to new dataset distributions. To mitigate this, we propose leveraging a recently developed constrained generation model Neurological Decoding (NLD) as an alternative to current QFS regimes which rely on additional sub-architectures and training. We first construct lexical constraints by identifying important tokens from the document using a lightweight gradient attribution model, then subsequently force the generated summary to satisfy these constraints by directly manipulating the final vocabulary likelihood. This lightweight approach requires no additional parameters or finetuning as it utilizes both an off-the-shelf neural retrieval model to construct the constraints and a standard generative language model to produce the QFS. 
We demonstrate the efficacy of this approach on two public QFS collections, achieving near parity with the state-of-the-art model at substantially reduced complexity.", "keywords": "constrained generation;query-focused summarization"} {"id": "10.1145/3539618.3591938", "title": "Mixup-Based Unified Framework to Overcome Gender Bias Resurgence", "abstract": "Unwanted social biases are usually encoded in pretrained language models (PLMs). Recent efforts are devoted to mitigating intrinsic bias encoded in PLMs. However, the separate fine-tuning on applications is detrimental to intrinsic debiasing. A bias resurgence issue arises when fine-tuning the debiased PLMs on downstream tasks. To eliminate undesired stereotyped associations in PLMs during fine-tuning, we present a mixup-based framework, Mix-Debias, from a new unified perspective, which directly combines debiasing PLMs with fine-tuning applications. The key to Mix-Debias is applying mixup-based linear interpolation on counterfactually augmented downstream datasets, with expanded pairs from external corpora. Besides, we devised an alignment regularizer to ensure original augmented pairs and gender-balanced counterparts are spatially closer. Experimental results show that Mix-Debias can reduce biases in PLMs while maintaining a promising performance in applications.", "keywords": "social bias;gender debiasing;fairness;natural language processing"} {"id": "10.1145/3539618.3591940", "title": "A Simple yet Effective Framework for Few-Shot Aspect-Based Sentiment Analysis", "abstract": "The pre-training and fine-tuning paradigm has become the mainstream framework in the field of Aspect-Based Sentiment Analysis (ABSA). Although it has achieved sound performance in domains containing enough fine-grained aspect-sentiment annotations, it is still challenging to conduct few-shot ABSA in domains where manual annotations are scarce.
In this work, we argue that two kinds of gaps, i.e., the domain gap and the objective gap, hinder the transfer of knowledge from pre-trained language models (PLMs) to ABSA tasks. To address this issue, we introduce a simple yet effective framework called FS-ABSA, which involves domain-adaptive pre-training and text-infilling fine-tuning. We approach the End-to-End ABSA task as a text-infilling problem and perform domain-adaptive pre-training with the text-infilling objective, narrowing the two gaps and consequently facilitating the knowledge transfer. Experiments show that the resulting model achieves more compelling performance than baselines under the few-shot setting while driving the state-of-the-art performance to a new level across datasets under the fully-supervised setting. Moreover, we apply our framework to two non-English low-resource languages to demonstrate its generality and effectiveness.", "keywords": "sentiment analysis;opinion mining;few-shot learning"} {"id": "10.1145/3539618.3591945", "title": "Adversarial Meta Prompt Tuning for Open Compound Domain Adaptive Intent Detection", "abstract": "Intent detection plays an essential role in dialogue systems. This paper takes the lead to study open compound domain adaptation (OCDA) for intent detection, which brings the advantage of improved generalization to unseen domains. OCDA for intent detection is indeed a more realistic domain adaptation setting, which learns an intent classifier from labeled source domains and adapts it to unlabeled compound target domains containing different intent classes with the source domains. At inference time, we test the intent classifier in open domains that contain previously unseen intent classes. To this end, we propose an Adversarial Meta Prompt Tuning method (called AMPT) for open compound domain adaptive intent detection.
Concretely, we propose a meta prompt tuning method, which utilizes language prompts to elicit rich knowledge from large-scale pre-trained language models (PLMs) and automatically finds better prompt initialization that facilitates fast adaptation via meta learning. Furthermore, we leverage a domain adversarial training technique to acquire domain-invariant representations of diverse domains. By taking advantage of the collaborative effect of meta learning, prompt tuning, and adversarial training, we can learn an intent classifier that can effectively generalize to unseen open domains. Experimental results on two benchmark datasets (i.e., HWU64 and CLINC) show that our model can learn substantially better-generalized representations for unseen domains compared with strong competitors.", "keywords": "meta learning;unsupervised intent classification;adversarial training;prompt tuning"} {"id": "10.1145/3539618.3591957", "title": "BioAug: Conditional Generation Based Data Augmentation for Low-Resource Biomedical NER", "abstract": "Biomedical Named Entity Recognition (BioNER) is the fundamental task of identifying named entities from biomedical text. However, BioNER suffers from severe data scarcity and lacks high-quality labeled data due to the highly specialized and expert knowledge required for annotation. Though data augmentation has shown to be highly effective for low-resource NER in general, existing data augmentation techniques fail to produce factual and diverse augmentations for BioNER. In this paper, we present BioAug, a novel data augmentation framework for low-resource BioNER. BioAug, built on BART, is trained to solve a novel text reconstruction task based on selective masking and knowledge augmentation. Post training, we perform conditional generation and generate diverse augmentations conditioning BioAug on selectively corrupted text similar to the training stage. 
We demonstrate the effectiveness of BioAug on 5 benchmark BioNER datasets and show that BioAug outperforms all our baselines by a significant margin (1.5%-21.5% absolute improvement) and is able to generate augmentations that are both more factual and more diverse. Code: https://github.com/Sreyan88/BioAug.", "keywords": "named entity recognition;information extraction;biomedical"} {"id": "10.1145/3539618.3591975", "title": "DocGraphLM: Documental Graph Language Model for Information Extraction", "abstract": "Advances in Visually Rich Document Understanding (VrDU) have enabled information extraction and question answering over documents with complex layouts. Two types of architectures have emerged: transformer-based models inspired by LLMs, and Graph Neural Networks. In this paper, we introduce DocGraphLM, a novel framework that combines pre-trained language models with graph semantics. To achieve this, we propose 1) a joint encoder architecture to represent documents, and 2) a novel link prediction approach to reconstruct document graphs. DocGraphLM predicts both directions and distances between nodes using a convergent joint loss function that prioritizes neighborhood restoration and downweighs distant node detection. Our experiments on three SotA datasets show consistent improvement on IE and QA tasks with the adoption of graph features. Moreover, we report that adopting the graph features accelerates convergence during training, despite the graph being constructed solely through link prediction.", "keywords": "language model;visual document understanding;graph neural network;information extraction"} {"id": "10.1145/3539618.3592011", "title": "Limitations of Open-Domain Question Answering Benchmarks for Document-Level Reasoning", "abstract": "Many recent QA models retrieve answers from passages, rather than whole documents, due to the limitations of deep learning models with limited context size. 
However, this approach ignores important document-level cues that can be crucial in answering questions. This paper reviews three open-domain QA benchmarks from a document-level perspective and finds that they are biased towards passage-level information. Out of 17,000 assessed questions, 82 were identified as requiring document-level reasoning and could not be answered by passage-based models. Document-level retrieval (BM25) outperformed both dense and sparse passage-level retrieval on these questions, highlighting the need for more evaluation of models' ability to understand documents, an often-overlooked challenge in open-domain QA.", "keywords": "document-level reasoning;long context question answering;open-domain question answering"} {"id": "10.1145/3539618.3592026", "title": "Multiple Topics Community Detection in Attributed Networks", "abstract": "Since existing methods are often not effective to detect communities with multiple topics in attributed networks, we propose a method named SSAGCN via Autoencoder-style self-supervised learning. SSAGCN firstly designs an adaptive graph convolutional network (AGCN), which is treated as the encoder for fusing topology information and attribute information automatically, and then utilizes a dual decoder to simultaneously reconstruct network topology and attributes. By further introducing the modularity maximization and the joint optimization strategies, SSAGCN can detect communities with multiple topics in an end-to-end manner. Experimental results show that SSAGCN outperforms state-of-the-art approaches, and also can be used to conduct topic analysis well.", "keywords": "attributed networks;community detection;graph convolutional network;self-supervised learning;topic analysis"} {"id": "10.1145/3539618.3592027", "title": "NC2T: Novel Curriculum Learning Approaches for Cross-Prompt Trait Scoring", "abstract": "Automated essay scoring (AES) is a crucial research area with potential applications in education and beyond. 
However, recent studies have primarily focused on AES models that evaluate essays within a specific domain or using a holistic score, leaving a gap in research and resources for more generalized models capable of assessing essays with detailed items from multiple perspectives. As evaluating and scoring essays based on complex traits is costly and time-consuming, datasets for such AES evaluations are limited. To address these issues, we developed a cross-prompt trait scoring AES model and proposed a suitable curriculum learning (CL) design. By devising difficulty scores and introducing the key curriculum method, we demonstrated its effectiveness compared to existing CL strategies in natural language understanding tasks.", "keywords": "curriculum learning;cross-prompt;natural language processing;automated essay scoring;trait scoring"} {"id": "10.1145/3539618.3592029", "title": "On Answer Position Bias in Transformers for Question Answering", "abstract": "Extractive Transformer-based models for question answering (QA) are trained to predict the start and end position of the answer in a candidate paragraph. However, the true answer position can bias these models when its distribution in the training data is highly skewed. That is, models trained only with the answer at the beginning of the paragraph will perform poorly on test instances with the answer at the end. Many studies have focused on countering answer position bias but have yet to deepen our understanding of how such bias manifests in the main components of the Transformer. In this paper, we analyze the self-attention and embedding generation components of five Transformer-based models with different architectures and position embedding strategies. 
Our analysis shows that models tend to map position bias in their attention matrices, generating embeddings that correlate the answer and its biased position, ultimately compromising model generalization.", "keywords": "model understanding;reading comprehension;deep learning;natural language processing;question answering"} {"id": "10.1145/3539618.3592042", "title": "Private Meeting Summarization Without Performance Loss", "abstract": "Meeting summarization has an enormous business potential, but in addition to being a hard problem, roll-out is challenged by privacy concerns. We explore the problem of meeting summarization under differential privacy constraints and find, to our surprise, that while differential privacy leads to slightly lower performance on in-sample data, differential privacy improves performance when evaluated on unseen meeting types. Since meeting summarization systems will encounter a great variety of meeting types in practical employment scenarios, this observation makes safe meeting summarization seem much more feasible. We perform extensive error analysis and identify potential risks in meeting summarization under differential privacy, including a faithfulness analysis.", "keywords": "meeting summarization;differential privacy;text summarization"} {"id": "10.1145/3539618.3592043", "title": "Prompt Learning to Mitigate Catastrophic Forgetting in Cross-Lingual Transfer for Open-Domain Dialogue Generation", "abstract": "Dialogue systems for non-English languages have long been under-explored. In this paper, we take the first step to investigate few-shot cross-lingual transfer learning (FS-XLT) and multitask learning (MTL) in the context of open-domain dialogue generation for non-English languages with limited data. We observed catastrophic forgetting in both FS-XLT and MTL for all 6 languages in our preliminary experiments. 
To mitigate the issue, we propose a simple yet effective prompt learning approach that can preserve the multilinguality of the multilingual pre-trained language model (mPLM) in FS-XLT and MTL by bridging the gap between pre-training and fine-tuning with Fixed-prompt LM Tuning and our hand-crafted prompts. Experimental results on all 6 languages in terms of both automatic and human evaluations demonstrate the effectiveness of our approach. Our code is available at https://github.com/JeremyLeiLiu/XLinguDial.", "keywords": "catastrophic forgetting;prompt learning;dialogue generation;multitask learning;few-shot cross-lingual transfer"} {"id": "10.1145/3539618.3592048", "title": "Rating Prediction in Conversational Task Assistants with Behavioral and Conversational-Flow Features", "abstract": "Predicting the success of Conversational Task Assistants (CTA) can be critical to understanding user behavior and acting accordingly. In this paper, we propose TB-Rater, a Transformer model which combines conversational-flow features with user behavior features for predicting user ratings in a CTA scenario. In particular, we use real human-agent conversations and ratings collected in the Alexa TaskBot challenge, a novel multimodal and multi-turn conversational context. Our results show the advantages of modeling both the conversational-flow and behavioral aspects of the conversation in a single model for offline rating prediction. Additionally, an analysis of the CTA-specific behavioral features brings insights into this setting and can be used to bootstrap future systems.", "keywords": "conversational task assistants;nlp;rating prediction"} {"id": "10.1145/3539618.3592050", "title": "Reducing Spurious Correlations for Relation Extraction by Feature Decomposition and Semantic Augmentation", "abstract": "Deep neural models have become mainstream in relation extraction (RE), yielding state-of-the-art performance. 
However, most existing neural models are prone to spurious correlations between input features and prediction labels, leaving them with poor robustness and generalization. In this paper, we propose a spurious correlation reduction method for RE via feature decomposition and semantic augmentation (denoted as FDSA). First, we decompose the original sentence representation into class-related features and context-related features. To obtain better context-related features, we devise a contrastive learning method to pull together the context-related features of the anchor sentence and its augmented sentences, and push away the context-related features of different anchor sentences. In addition, we propose gradient-based semantic augmentation on context-related features in order to improve the robustness of the RE model. Experiments on four datasets show that our model outperforms strong competitors.", "keywords": "spurious correlation;semantic augmentation;relation extraction;feature decomposition"} {"id": "10.1145/3539618.3592063", "title": "SimTDE: Simple Transformer Distillation for Sentence Embeddings", "abstract": "In this paper we introduce SimTDE, a simple knowledge distillation framework to compress sentence embeddings transformer models with minimal performance loss and significant size and latency reduction. SimTDE effectively distills large and small transformers via a compact token embedding block and a shallow encoding block, connected with a projection layer, relaxing the dimension match requirement. SimTDE simplifies the distillation loss to focus only on token embedding and sentence embedding. We evaluate on standard semantic textual similarity (STS) tasks and entity resolution (ER) tasks. It achieves 99.94% of the state-of-the-art (SOTA) SimCSE-Bert-Base performance with 3 times size reduction and 96.99% SOTA performance with 12 times size reduction on STS tasks. 
It also achieves 99.57% of the teacher's performance on multi-lingual ER data with a tiny transformer student model of 1.4M parameters and 5.7MB size. Moreover, compared to other distilled transformers, SimTDE is 2 times faster at inference given similar size and still 1.17 times faster than a model 33% smaller (e.g., MiniLM). The easy-to-adopt framework, strong accuracy, and low latency of SimTDE can widely enable runtime deployment of SOTA sentence embeddings.", "keywords": "semantic text similarity;entity resolution;knowledge distillation;sentence embeddings"} {"id": "10.1145/3539618.3592081", "title": "Unsupervised Dialogue Topic Segmentation with Topic-Aware Contrastive Learning", "abstract": "Dialogue Topic Segmentation (DTS) plays an essential role in a variety of dialogue modeling tasks. Previous DTS methods focus on either semantic similarity or dialogue coherence to assess topic similarity for unsupervised dialogue segmentation. However, topic similarity cannot be fully identified via semantic similarity or dialogue coherence alone. In addition, the unlabeled dialogue data, which contains useful clues about utterance relationships, remains underexploited. In this paper, we propose a novel unsupervised DTS framework, which learns topic-aware utterance representations from unlabeled dialogue data through neighboring utterance matching and pseudo-segmentation. Extensive experiments on two benchmark datasets (i.e., DialSeg711 and Doc2Dial) demonstrate that our method significantly outperforms the strong baseline methods. 
For reproducibility, we provide our code and data at: https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/dial-start.", "keywords": "dialogue topic segmentation;text segmentation;dialogue understanding;self-supervised learning"} {"id": "JANSEN2023100020", "title": "Employing large language models in survey research", "abstract": "This article discusses the promising potential of employing large language models (LLMs) for survey research, including generating responses to survey items. LLMs can address some of the challenges associated with survey research regarding question-wording and response bias. They can address issues relating to a lack of clarity and understanding but cannot yet correct for sampling or nonresponse bias challenges. While LLMs can assist with some of the challenges with survey research, at present, LLMs need to be used in conjunction with other methods and approaches. With thoughtful and nuanced approaches to development, LLMs can be used responsibly and beneficially while minimizing the associated risks.", "keywords": "survey research;large language models;survey data;surveys;llm survey respondents"} {"id": "CB2023100021", "title": "Ontology-based data interestingness: A state-of-the-art review", "abstract": "In recent years, there has been significant growth in the use of ontology-based methods to enhance data interestingness. These methods play a crucial role in knowledge management by providing explicit, shareable, and reusable knowledge descriptions. However, limited research has been conducted to explore the impact and practical benefits of employing ontologies to increase data interestingness. A comprehensive analysis of the current state-of-the-art approaches in ontology-based data interestingness is essential to identify future advancements and directions in this field. 
The main objective of this study is to present a clear roadmap showcasing how ontologies contribute to data interestingness and to evaluate their effectiveness in the existing literature. The findings of this research offer an integrated and comprehensive understanding of the following aspects: (1) interestingness metrics in data mining, (2) association rule mining to identify interesting patterns, (3) ontology-based methods to improve data interestingness, and (4) techniques that use ontologies to derive interestingness in data. The paper concludes with a summary of ontology-based methods for data interestingness and outlines potential avenues for future research.", "keywords": "ontology;data interestingness;knowledge management;semantics;association rule mining;data mining;interestingness metrics;ontology-based methods"} {"id": "BHUYAN2023100028", "title": "Textual entailment as an evaluation metric for abstractive text summarization", "abstract": "Automated text summarization systems require to be heedful of the reader and the communication goals since it may be the determining component of whether the original textual content is actually worth reading in full. The summary can also assist enhance document indexing for information retrieval, and it is generally much less biased than a human-written summary. A crucial part while building intelligent systems is evaluating them. Consequently, the choice of evaluation metric(s) is of utmost importance. Current standard evaluation metrics like BLEU and ROUGE, although fairly effective for evaluation of extractive text summarization systems, become futile when it comes to comparing semantic information between two texts, i.e in abstractive summarization. We propose textual entailment as a potential metric to evaluate abstractive summaries. The results show the contribution of text entailment as a strong automated evaluation model for such summaries. 
The textual entailment scores between the text and generated summaries, and between the reference and predicted summaries, were calculated, and an overall summarizer score was generated to give a fair idea of how effective the generated summaries are. We put forward some novel methods that use the entailment scores and the final summarizer scores for a reasonable evaluation across various scenarios. A Final Entailment Metric Score (FEMS) was computed to provide an insightful basis for comparing the generated summaries.", "keywords": "automatic evaluation;abstractive text summarization;text entailment;natural language processing;deep learning"} {"id": "KVTKN2023100019", "title": "Semi-supervised approach for tweet-level stress detection", "abstract": "Psychological stress detection through social media has caught the interest of researchers due to the emergence of social media platforms like Twitter as major data sources to solicit human emotions and interactions. The problem of tweet-level stress detection is solved using various supervised learning mechanisms described in the existing literature. Given the real-world scenario in which there is a scarcity of labeled data and an abundance of unlabeled data, semi-supervised solutions form an ideal choice. In this work, a semi-supervised approach called \u201cSelf-training Method for Tweet-level Stress Detection (SMTSD)\u201d is proposed to extend the self-training approach for utilizing the information from sarcasm in predicting the pseudo-labels. In SMTSD, logistic regression is used as the base classifier. The predicted pseudo-labels are then used to train and build supervised baseline models on all the datasets collected using Twitter\u2019s Tweepy API. The proposed solution of SMTSD, with the usage of sarcasm in predicting the pseudo-label, surpasses the performance of the basic self-training approach. 
Moreover, the proposed model outperforms the supervised models considered and also performs better than state-of-the-art techniques like Bi-LSTM. It is demonstrated that as the sample size of the combined data of tweets with predicted pseudo-labels increases, so does the performance of the supervised models trained on them.", "keywords": "self-training;tweet-level stress detection;semi-supervised;sarcasm;logistic regression"} {"id": "DE2023100023", "title": "Towards Improvement of Grounded Cross-lingual Natural Language Inference with VisioTextual Attention", "abstract": "Natural Language Inference (NLI) has been one of the fundamental tasks in Natural Language Processing (NLP). Recognizing Textual Entailment (RTE) between two pieces of text is a crucial problem. The task becomes further challenging when it involves two languages, i.e., in the cross-lingual scenario. In this paper, we propose VisioTextual-Attention (VTA) \u2014 an effective visual\u2013textual coattention mechanism for multi-modal cross-lingual NLI. Through our research, we show that instead of using only linguistic input features, introducing visual features supporting the textual inputs improves the performance of NLI models if an effective cross-modal attention mechanism is carefully constructed. We perform several experiments on a standard cross-lingual textual entailment dataset in Hindi\u2013English language pairs and show that the addition of visual information to the dataset along with our proposed VisioTextual Attention (VTA) enhances performance and surpasses the current state-of-the-art by 4.5", "keywords": "attention mechanism;textual entailment;natural language inference;grounded textual entailment;cross-lingual textual entailment"} {"id": "ONIKOYI2023100018", "title": "Gender prediction with descriptive textual data using a Machine Learning approach", "abstract": "Social media are well-established means of online communication, generating vast amounts of data. 
In this paper, we focus on Twitter and investigate behavioural differences between male and female users on social media. Using Natural Language Processing and Machine Learning approaches, we propose a user gender identification method that considers both the tweets and the Twitter profile description of a user. For experimentation and evaluation, we enriched and used an existing Twitter User Gender Classification dataset, which is freely available on Kaggle. We considered a variety of methods and components, such as the Bag of Words model, pre-trained word embeddings (GloVe, BERT, GPT2 and Word2Vec) and machine learners, e.g., Na\u00efve Bayes, Support Vector Machines and Random Forests. Evaluation results have shown that including the Twitter profile description of a user significantly improves gender classification accuracy, by 10", "keywords": "gender prediction;gender classification;machine learning;twitter;natural language processing;pre-trained word embeddings"} {"id": "DEWYNTER2023100024", "title": "An evaluation on large language model outputs: Discourse and memorization", "abstract": "We present an empirical evaluation of various outputs generated by nine of the most widely-available large language models (LLMs). Our analysis is done with off-the-shelf, readily-available tools. We find a correlation between percentage of memorized text, percentage of unique text, and overall output quality, when measured with respect to output pathologies such as counterfactual and logically-flawed statements, and general failures like not staying on topic. 
Overall, 80.0", "keywords": "large language models;memorization;natural language generation;gpt-4;hallucination"} {"id": "SHUKLA2023100025", "title": "An evaluation of Google Translate for Sanskrit to English translation via sentiment and semantic analysis", "abstract": "Google Translate has been prominent for language translation; however, limited work has been done in evaluating the quality of translation when compared to expert translators. Sanskrit, one of the oldest written languages in the world was added to the Google Translate engine in 2022. Sanskrit is known as the mother of North Indian languages such as Hindi, Punjabi, and Bengali. Sanskrit has been used for composing sacred Hindu texts such as the Bhagavad Gita and the Upanishads. In this study, we present a framework that evaluates Google Translate for Sanskrit using the Bhagavad Gita. We first published a translation of the Bhagavad Gita in Sanskrit using Google Translate and then compared it with selected prominent translations by experts. Our framework features a BERT-based language model that implements sentiment and semantic analysis. The results indicate that there is a low level of similarity between the Bhagavad Gita by Google Translate and expert translators in terms of sentiment and semantic analyses. We found major inconsistencies in the translation of philosophical terms and metaphors. We further implemented a qualitative evaluation and found that Google Translate was unsuitable for the translation of certain Sanskrit words and phrases due to the poetic nature, contextual significance, metaphor and imagery. The mistranslations are not surprising since the Bhagavad Gita is known as a difficult text not only to translate but also to interpret since it relies on contextual, philosophical and historical information. It is difficult to distinguish different names and philosophical concepts such as karma and dharma without knowing the context and having a background of Hindu philosophy. 
Our framework lays the foundation for the automatic evaluation of other languages that can be translated by Google Translate.", "keywords": "natural language processing;language translator models;sanskrit translations;google translate;semantic analysis;sentiment analysis;hindu texts"} {"id": "MAZHAR2023100029", "title": "Similarity learning of product descriptions and images using multimodal neural networks", "abstract": "Multimodal deep learning is an emerging research topic in machine learning and involves the parallel processing of different modalities of data such as texts, images and audiovisual data. Well-known application areas are multimodal image and video processing as well as speech recognition. In this paper, we propose a multimodal neural network that measures the similarity of text-written product descriptions and images and has applications in inventory reconciliation and search engine optimization. We develop two models. The first takes image and text data, each processed by convolutional neural networks, and combines the two modalities. The second is based on a bidirectional triplet loss function. We conduct experiments using the ABO dataset and an industry-related dataset used for the inventory reconciliation of a mechanical engineering company. Our first model achieves an accuracy of 92.37
In a step toward addressing this challenge, we create a novel metric that involves a two-step process: corpus-level evaluation based on model classification and sentence-level evaluation based on (sensitive) term frequency (TF). After creating new models to classify bias using SotA architectures, we evaluate two popular NLP datasets (COPA and SQuADv2) and the WinoBias dataset. As an additional contribution, we created a large English dataset (with almost 2 million labeled samples) for training models in bias classification and make it publicly available. We also make our code public.", "keywords": "bipol;mab dataset;nlp;bias"} {"id": "ZHOU2023100031", "title": "Sentimental Contrastive Learning for event representation", "abstract": "Event representation learning is crucial for numerous event-driven tasks, as the quality of event representations greatly influences the performance of these tasks. However, many existing event representation methods exhibit a heavy reliance on semantic features, often neglecting the wealth of information available in other dimensions of events. Consequently, these methods struggle to capture subtle distinctions between events. Incorporating sentimental information can be particularly useful when modeling event data, as leveraging such information can yield superior event representations. To effectively integrate sentimental information, we propose a novel event representation learning framework, namely Sentimental Contrastive Learning (SCL). Specifically, we first utilize BERT as the backbone network for pre-training and obtain the initial event representations. Subsequently, we employ instance-level and cluster-level contrastive learning to fine-tune the original event representations. We introduce two distinct contrastive losses for instance-level and cluster-level contrastive learning, respectively, each aiming to incorporate sentimental information from different perspectives. 
To evaluate the effectiveness of our proposed model, we select the event similarity evaluation task and conduct experiments on three representative datasets. Extensive experimental results demonstrate clear performance improvements of our approach over many other models.", "keywords": "event representation learning;contrastive learning;sentiment analysis;pre-trained model"} {"id": "LE2023100015", "title": "Active learning with feature matching for clinical named entity recognition", "abstract": "Active learning (AL) methods for Named Entity Recognition (NER) perform best when train and test data are drawn from the same distribution. However, only limited research in active learning has considered how to leverage the similarity between train and test data distributions. This is especially critical for clinical NER due to the rare-concept (e.g., Symptoms) issue. Therefore, in this paper we present a novel AL method for clinical NER that selects the most beneficial instances for training by comparing train and test data distributions via low-computational-cost similarity metrics. When using GloVe embeddings, our method outperforms baseline AL methods by up to 11", "keywords": "active learning;named entity recognition;self-supervised;biobert;glove;umls;external specific-domain knowledge;external clinical knowledge"} {"id": "KHAN2023100026", "title": "Exploring the frontiers of deep learning and natural language processing: A comprehensive overview of key challenges and emerging trends", "abstract": "Over roughly the past five years, Deep Learning (DL), and especially large language models (LLMs), has generated extensive research in what was previously a fairly quiet field of knowledge studied by a traditional community of researchers. As a result, DL is now so pervasive that its use is widespread across the body of research related to machine learning computing. 
The rapid emergence and apparent dominance of DL architectures over traditional machine learning techniques on a variety of tasks have been truly astonishing to witness. DL models have outperformed earlier approaches in a variety of areas, including natural language processing (NLP), image analysis, language understanding, machine translation, computer vision, speech processing, audio recognition, style imitation, and computational biology. In this study, the aim is to explain the rudiments of DL, such as neural networks, convolutional neural networks, deep belief networks, and various variants of DL. The study will explore how these models have been applied to NLP and delve into the mathematics underlying them. Additionally, the study will investigate the latest advancements in DL and NLP, while acknowledging the key challenges and emerging trends in the field. Furthermore, it will discuss a core component of DL, namely embeddings, from a taxonomic perspective. Moreover, a literature review will be provided focusing on the application of DL models to six popular pattern recognition tasks: speech recognition, question answering, part-of-speech tagging, named entity recognition, text classification, and machine translation. Finally, the study will demystify state-of-the-art DL libraries/frameworks and available resources. The outcome and implications of this study reveal that LLMs face challenges in dealing with pragmatic aspects of language due to their reliance on statistical learning techniques and lack of genuine understanding of context, presupposition, implicature, and social norms. Furthermore, this study provides a comprehensive analysis of the current state-of-the-art advancements and highlights significant obstacles and emerging developments. 
The article has the potential to enhance readers\u2019 understanding of the subject matter.", "keywords": "natural language processing (nlp);machine learning (ml);large language model (llm);word embedding;deep learning (dl)"} {"id": "AKHTER2023100027", "title": "A robust hybrid machine learning model for Bengali cyber bullying detection in social media", "abstract": "Social networking platforms give users countless opportunities to share information, collaborate, and communicate positively. The same platforms can be extended into a fabricated and poisonous atmosphere that provides an impersonal, harmful venue for online misuse and assault. Cyberstalking occurs when someone uses an internet system to ridicule, torment, insult, criticize, slander, and discredit a victim without ever seeing them. With the growth of social networks, Facebook has become an online arena for bullying. Since the effects could result in a widespread contagion, it is vital to have models and mechanisms in place for the automatic identification and removal of internet cyberbullying data. This paper presents a robust hybrid ML model for cyberbullying detection in the Bengali language on social media. The proposed Bengali bullying-detection approach involves effective text preprocessing to convert the Bengali text data into a useful format, feature extraction using the TfidfVectorizer (TFID) to capture the beneficial information in the text data, and resampling via the Instance Hardness Threshold (IHT) procedure to balance the dataset and avoid overfitting or underfitting problems. In our experiment, we used the publicly available Bangla text dataset (44,001 comments) and achieved the highest performance among published works on it. 
The model achieved the highest accuracy of 98.57", "keywords": "cyberbully;bengali;bullying;machine learning;harassment;language;social media"} {"id": "AZIZ2023100039", "title": "Leveraging contextual representations with BiLSTM-based regressor for lexical complexity prediction", "abstract": "Lexical complexity prediction (LCP) determines the complexity level of words or phrases in a sentence. LCP has a significant impact on the enhancement of language translations, readability assessment, and text generation. However, domain-specific technical words, complex grammatical structures, polysemy, and inter-word relationships and dependencies make it challenging to determine the complexity of words or phrases. In this paper, we propose an integrated transformer regressor model named ITRM-LCP to estimate the lexical complexity of words and phrases, where diverse contextual features are extracted from various transformer models. The transformer models are fine-tuned using the text-pair data. Then, a bidirectional LSTM-based regressor module is plugged on top of each transformer to learn the long-term dependencies and estimate the complexity scores. The predicted scores of each module are then aggregated to determine the final complexity score. We assess our proposed model using two benchmark datasets from shared tasks. Experimental findings demonstrate that our ITRM-LCP model obtains 10.2", "keywords": "lexical complexity prediction;lexical simplification;sentence-pair regression;transformer models"} {"id": "KUMARESAN2023100041", "title": "Homophobia and transphobia detection for low-resourced languages in social media comments", "abstract": "People are increasingly sharing and expressing their emotions using online social media platforms such as Twitter, Facebook, and YouTube. 
An abusive, hateful, threatening, or discriminatory act that causes discomfort to or targets gay, lesbian, transgender, or bisexual individuals is called Homophobia and Transphobia. Detecting these types of acts on social media is called Homophobia and Transphobia Detection. This task has recently gained interest among researchers. Identifying homophobic and transphobic content in under-resourced languages is a challenging task. To date, there are no such resources for Malayalam and Hindi to categorize these types of content. This paper presents a new high-quality dataset for detecting homophobia and transphobia in the Malayalam and Hindi languages. Our dataset consists of 5,193 comments in Malayalam and 3,203 comments in Hindi. We also report the experiments performed with traditional machine learning and transformer-based deep learning models on the Malayalam, Hindi, English, Tamil, and Tamil-English datasets.", "keywords": "dataset creation;under-resourced language;machine learning;transformer models;cross-lingual transfer learning;homophobia and transphobia;performance analysis"} {"id": "BACCO2023100037", "title": "On the instability of further pre-training: Does a single sentence matter to BERT?", "abstract": "We observe a remarkable instability in BERT-like models: minimal changes in the internal representations of BERT, as induced by one-step further pre-training with even a single sentence, can noticeably change the behaviour of subsequently fine-tuned models. While the pre-trained models seem to be essentially the same, also by means of established similarity assessment techniques, the measurable tiny changes appear to substantially impact the models\u2019 tuning path, leading to significantly different fine-tuned systems and affecting downstream performance. 
After testing a very large number of combinations, which we briefly summarize, the experiments reported in this short paper focus on an intermediate phase consisting of a single-step and single-sentence masked language modeling stage and its impact on a sentiment analysis task. We discuss a series of unexpected findings which leave some open questions over the nature and stability of further pre-training.", "keywords": "natural language processing;large language models;further pre-training;transformers;instability;bert"} {"id": "LOPEZESPEJEL2023100032", "title": "GPT-3.5, GPT-4, or BARD? Evaluating LLMs reasoning ability in zero-shot setting and performance boosting through prompts", "abstract": "Large Language Models (LLMs) have exhibited remarkable performance on various Natural Language Processing (NLP) tasks. However, there is a current hot debate regarding their reasoning capacity. In this paper, we examine the performance of GPT-3.5, GPT-4, and BARD models, by performing a thorough technical evaluation on different reasoning tasks across eleven distinct datasets. Our paper provides empirical evidence showcasing the superior performance of ChatGPT-4 in comparison to both ChatGPT-3.5 and BARD in zero-shot setting throughout almost all evaluated tasks. While the superiority of GPT-4 compared to GPT-3.5 might be explained by its larger size and NLP efficiency, this was not evident for BARD. We also demonstrate that the three models show limited proficiency in Inductive, Mathematical, and Multi-hop Reasoning Tasks. To bolster our findings, we present a detailed and comprehensive analysis of the results from these three models. 
Furthermore, we propose a set of engineered prompts that enhances the zero-shot setting performance of all three models.", "keywords": "chatgpt;bard;language models;natural language processing;reasoning;zero-shot setting"} {"id": "ZHUANG2023100038", "title": "Out-of-vocabulary word embedding learning based on reading comprehension mechanism", "abstract": "Currently, most natural language processing tasks use word embeddings as the representation of words. However, when encountering out-of-vocabulary (OOV) words, the performance of downstream models that use word embeddings as input is often quite limited. To solve this problem, the latest methods mainly infer the meaning of OOV words based on two types of information sources: the morphological structure of OOV words and the contexts in which they appear. However, the low frequency of OOV words themselves usually makes them difficult to learn in pre-training tasks by general word embedding models. In addition, this characteristic of OOV word embedding learning also brings the problem of context scarcity. Therefore, we introduce the concept of \u201csimilar contexts\u201d based on the classical \u201cdistributed hypothesis\u201d in linguistics, by borrowing from the human reading comprehension mechanisms to make up for the deficiency of insufficient contexts in previous OOV word embedding learning work. 
The experimental results show that our model achieved the highest relative scores in both intrinsic and extrinsic evaluation tasks, which demonstrates the positive effect of the \u201csimilar contexts\u201d introduced in our model on OOV word embedding learning.", "keywords": "out-of-vocabulary words;word embedding;reading comprehension mechanism"} {"id": "N2023100033", "title": "A Bidirectional LSTM approach for written script auto evaluation using keywords-based pattern matching", "abstract": "The evaluation process necessitates significant work in order to effectively and impartially assess the growing number of new subjects and interests in courses. This paper aims to auto-evaluate written scripts and assign scores comparable to those given by human evaluators, using deep learning models. The system is built purely to decipher English characters and numbers from images, convert them into text format, and match the existing written scripts against custom keywords provided by the invigilators to check the answers. The Handwritten Text Recognition (HTR) model implements an algorithm capable of evaluating written scripts based on handwriting and comparing them with the custom keywords provided, whereas existing models using Convolutional Neural Networks (CNN) or Recurrent Neural Networks (RNN) suffer from the vanishing gradient problem. The core objective of this model is to reduce manual paper checking using Bidirectional Long Short Term Memory (BiLSTM) and Convolutional Recurrent Neural Networks (CRNN). It outperforms models built on conventional approaches in terms of performance, efficiency, and text recognition. The inputs given to the model are in the form of custom keywords; the system processes them through HTR and image processing techniques of segmentation; and the output reports the percentage obtained by the student, the word error rate, the number of words misspelt, the synonyms produced, and the effective outcome. 
The system has the capability to identify and highlight errors made by students. This feature is advantageous for both students and teachers, as it saves a significant amount of time. Even if the keywords used by students do not align perfectly, the system's processing models are intelligent enough to award a reasonable number of marks.", "keywords": "convolutional recurrent neural network (crnn);handwritten text recognition (htr);bidirectional long short term memory (bilstm)"} {"id": "MOCANU2023100036", "title": "Knowledge representation and acquisition in the era of large language models: Reflections on learning to reason via PAC-Semantics", "abstract": "Human beings are known for their remarkable ability to comprehend, analyse, and interpret common sense knowledge. This ability is critical for exhibiting intelligent behaviour, often defined as a mapping from beliefs to actions, which has led to attempts to formalize and capture explicit representations in the form of databases, knowledge bases, and ontologies in AI agents. But in the era of large language models (LLMs), this emphasis might seem unnecessary. After all, these models already capture the extent of human knowledge and can infer appropriate things from it (presumably) as per some innate logical rules. The question then is whether they can also be trained to perform mathematical computations. Although the consensus on the reliability of such models is still being studied, early results do seem to suggest they do not offer logically and mathematically consistent results. 
In this short summary article, we articulate the motivations for still caring about logical/symbolic artefacts and representations, and report on recent progress in learning to reason via the so-called probably approximately correct (PAC)-semantics.", "keywords": "pac-semantics;logical knowledge bases;knowledge acquisition"} {"id": "ZENG2023100035", "title": "Multi-aspect attentive text representations for simple question answering over knowledge base", "abstract": "With the deepening of knowledge base research and application, question answering over knowledge base, also called KBQA, has recently received more and more attention from researchers. Most previous KBQA models focus on mapping the input query and the fact in KBs into an embedding format. Then the similarity between the query vector and the fact vector is computed eventually. Based on the similarity, each query can obtain an answer representing a tuple (subject, predicate, object) from the KBs. However, the information about each word in the input question will lose inevitably during the process. To retain as much original information as possible, we introduce an attention-based recurrent neural network model with interactive similarity matrixes. It can extract more comprehensive information from the hierarchical structure of words among queries and tuples stored in the knowledge base. This work makes three main contributions: (1) A neural network-based question-answering model for the knowledge base is proposed to handle single relation questions. (2) An attentive module is designed to obtain information from multiple aspects to represent queries and data, which contributes to avoiding losing potentially valuable information. (3) Similarity matrixes are introduced to obtain the interaction information between queries and data from the knowledge base. 
Experimental results show that our proposed model performs better on simple questions than state-of-the-art models on several effectiveness measures.", "keywords": "question answering;knowledge base;deep learning"} {"id": "10.1145/3477495.3531990", "title": "HTKG: Deep Keyphrase Generation with Neural Hierarchical Topic Guidance", "abstract": "Keyphrases can concisely describe the high-level topics discussed in a document that usually possesses hierarchical topic structures. Thus, it is crucial to understand the hierarchical topic structures and employ them to guide the keyphrase identification. However, integrating the hierarchical topic information into a deep keyphrase generation model is unexplored. In this paper, we focus on how to effectively exploit the hierarchical topic to improve the keyphrase generation performance (HTKG). Specifically, we propose a novel hierarchical topic-guided variational neural sequence generation method for keyphrase generation, which consists of two major modules: a neural hierarchical topic model that learns the latent topic tree across the whole corpus of documents, and a variational neural keyphrase generation model to generate keyphrases under hierarchical topic guidance. Finally, these two modules are jointly trained to help them learn complementary information from each other. To the best of our knowledge, this is the first attempt to leverage the neural hierarchical topic to guide keyphrase generation. The experimental results demonstrate that our method significantly outperforms the existing state-of-the-art methods across five benchmark datasets.", "keywords": "neural hierarchical topic model;variational neural generation model;deep keyphrase generation"} {"id": "10.1145/3477495.3532016", "title": "Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning", "abstract": "Machine reading comprehension has attracted wide attention, since it explores the potential of models for text understanding. 
To further equip the machine with reasoning capability, the challenging task of logical reasoning has been proposed. Previous works on logical reasoning have proposed some strategies to extract the logical units from different aspects. However, it remains challenging to model the long-distance dependencies among the logical units. It is also difficult to uncover the logical structures of the text and fuse the discrete logic into the continuous text embedding. To tackle the above issues, we propose an end-to-end model, Logiformer, which utilizes a two-branch graph transformer network for logical reasoning over text. Firstly, we introduce different extraction strategies to split the text into two sets of logical units, and construct the logical graph and the syntax graph respectively. The logical graph models the causal relations for the logical branch while the syntax graph captures the co-occurrence relations for the syntax branch. Secondly, to model the long-distance dependencies, the node sequence from each graph is fed into the fully connected graph transformer structures. The two adjacency matrices are viewed as the attention biases for the graph transformer layers, which map the discrete logical structures to the continuous text embedding space. Thirdly, a dynamic gate mechanism and a question-aware self-attention module are introduced before the answer prediction to update the features. The reasoning process provides interpretability by employing the logical units, which are consistent with human cognition. The experimental results show the superiority of our model, which outperforms the state-of-the-art single model on two logical reasoning benchmarks.", "keywords": "graph transformer;machine reading comprehension;logical reasoning"} {"id": "10.1145/3477495.3532037", "title": "Personalized Abstractive Opinion Tagging", "abstract": "An opinion tag is a sequence of words on a specific aspect of a product or service. 
Opinion tags reflect key characteristics of product reviews and help users quickly understand their content in e-commerce portals. The task of abstractive opinion tagging has previously been proposed to automatically generate a ranked list of opinion tags for a given review. However, current models for opinion tagging are not personalized, even though personalization is an essential ingredient of engaging user interactions, especially in e-commerce. In this paper, we focus on the task of personalized abstractive opinion tagging. There are two main challenges when developing models for the end-to-end generation of personalized opinion tags: the sparseness of reviews and the difficulty of integrating multi-type signals, i.e., explicit review signals and implicit behavioral signals. To address these challenges, we propose an end-to-end model, named POT, that consists of three main components: (1) a review-based explicit preference tracker component based on a hierarchical heterogeneous review graph to track user preferences from reviews; (2) a behavior-based implicit preference tracker component using a heterogeneous behavior graph to track the user preferences from implicit behaviors; and (3) a personalized rank-aware tagging component to generate a ranked sequence of personalized opinion tags. In our experiments, we evaluate POT on a real-world dataset collected from e-commerce platforms and the results demonstrate that it significantly outperforms strong baselines.", "keywords": "abstractive summarization;personalization;e-commerce;review analysis"} {"id": "10.1145/3477495.3531954", "title": "Contrastive Learning with Hard Negative Entities for Entity Set Expansion", "abstract": "Entity Set Expansion (ESE) is a promising task which aims to expand entities of the target semantic class described by a small seed entity set. Various NLP and IR applications will benefit from ESE due to its ability to discover knowledge. 
Although previous ESE methods have achieved great progress, most of them still lack the ability to handle hard negative entities (i.e., entities that are difficult to distinguish from the target entities), since two entities may or may not belong to the same semantic class depending on the granularity level at which they are analyzed. To address this challenge, we devise an entity-level masked language model with contrastive learning to refine the representation of entities. In addition, we propose ProbExpan, a novel probabilistic ESE framework utilizing the entity representation obtained by the aforementioned language model to expand entities. Extensive experiments and detailed analyses on three datasets show that our method outperforms previous state-of-the-art methods.", "keywords": "knowledge discovery;entity set expansion;contrastive learning"} {"id": "10.1145/3477495.3532071", "title": "Unifying Cross-Lingual Summarization and Machine Translation with Compression Rate", "abstract": "Cross-Lingual Summarization (CLS) is a task that extracts important information from a source document and summarizes it into a summary in another language. It is a challenging task that requires a system to understand, summarize, and translate at the same time, making it highly related to Monolingual Summarization (MS) and Machine Translation (MT). In practice, the training resources for Machine Translation far exceed those for cross-lingual and monolingual summarization. Thus incorporating the Machine Translation corpus into CLS would be beneficial for its performance. However, the present work only leverages a simple multi-task framework to bring Machine Translation in, lacking deeper exploration. In this paper, we propose a novel task, Cross-lingual Summarization with Compression rate (CSC), to benefit Cross-Lingual Summarization from a large-scale Machine Translation corpus. 
Through introducing compression rate, the information ratio between the source and the target text, we regard the MT task as a special CLS task with a compression rate of 100%. Hence they can be trained as a unified task, sharing knowledge more effectively. However, a huge gap exists between the MT task and the CLS task, where samples with compression rates between 30% and 90% are extremely rare. Hence, to bridge these two tasks smoothly, we propose an effective data augmentation method to produce document-summary pairs with different compression rates. The proposed method not only improves the performance of the CLS task, but also provides controllability to generate summaries in desired lengths. Experiments demonstrate that our method outperforms various strong baselines in three cross-lingual summarization datasets. We released our code and data at https://github.com/ybai-nlp/CLS_CR.", "keywords": "compression rate;cross-lingual summarization;machine translation"} {"id": "10.1145/3477495.3532080", "title": "What Makes the Story Forward? Inferring Commonsense Explanations as Prompts for Future Event Generation", "abstract": "Prediction over event sequences is critical for many real-world applications in Information Retrieval and Natural Language Processing. Future Event Generation (FEG) is a challenging task in event sequence prediction because it requires not only fluent text generation but also commonsense reasoning to maintain the logical coherence of the entire event story. In this paper, we propose a novel explainable FEG framework, Coep. It highlights and integrates two types of event knowledge, sequential knowledge of direct event-event relations and inferential knowledge that reflects the intermediate character psychology between events, such as intents, causes, reactions, which intrinsically pushes the story forward. 
To alleviate the knowledge forgetting issue, we design two modules, IM and GM, for each type of knowledge, which are combined via prompt tuning. First, IM focuses on understanding inferential knowledge to generate commonsense explanations and provide a soft prompt vector for GM. We also design a contrastive discriminator for better generalization ability. Second, GM generates future events by modeling direct sequential knowledge with the guidance of IM. Automatic and human evaluation demonstrate that our approach can generate more coherent, specific, and logical future events.", "keywords": "textual event generation;commonsense reasoning;contrastive training"} {"id": "10.1145/3477495.3531923", "title": "A Dual-Expert Framework for Event Argument Extraction", "abstract": "Event argument extraction (EAE) is an important information extraction task, which aims to identify the arguments of an event described in a given text and classify the roles played by them. A key characteristic in realistic EAE data is that the instance numbers of different roles follow an obvious long-tail distribution. However, the training and evaluation paradigms of existing EAE models either prone to neglect the performance on \"tail roles\u201d, or change the role instance distribution for model training to an unrealistic uniform distribution. Though some generic methods can alleviate the class imbalance in long-tail datasets, they usually sacrifice the performance of \"head classes\u201d as a trade-off. To address the above issues, we propose to train our model on realistic long-tail EAE datasets, and evaluate the average performance over all roles. Inspired by the Mixture of Experts (MOE), we propose a Routing-Balanced Dual Expert Framework (RBDEF), which divides all roles into \"head\" and \"tail\" two scopes and assigns the classifications of head and tail roles to two separate experts. 
During inference, each encoded instance is allocated to one of the two experts by a routing mechanism. To reduce routing errors caused by the imbalance of role instances, we design a Balanced Routing Mechanism (BRM), which transfers several head roles to the tail expert to balance the load of routing, and employs a tri-filter routing strategy to reduce the misallocation of the tail expert's instances. To enable effective learning of tail roles with scarce instances, we devise Target-Specialized Meta Learning (TSML) to train the tail expert. Different from other meta learning algorithms that only search for a generic parameter initialization applied equally to all tasks, TSML can adaptively adjust its search path to obtain a specialized initialization for the tail expert, thereby extending the benefits to the learning of tail roles. In experiments, RBDEF significantly outperforms the state-of-the-art EAE models and advanced methods for long-tail data.", "keywords": "event argument extraction;meta learning;information extraction;mixture of experts"} {"id": "10.1145/3477495.3531956", "title": "CorED: Incorporating Type-Level and Instance-Level Correlations for Fine-Grained Event Detection", "abstract": "Event detection (ED) is a pivotal task for information retrieval, which aims at identifying event triggers and classifying them into pre-defined event types. In real-world applications, events are usually annotated with numerous fine-grained types, which often gives rise to a long-tail type distribution and co-occurring events. Existing studies explore the event correlations without fully utilizing them, which may limit the capability of event detection. This paper simultaneously incorporates both the type-level and instance-level event correlations, and proposes a novel framework, termed as CorED. 
Specifically, we devise an adaptive graph-based type encoder to capture type-level correlations, learning type representations not only from their training data but also from their relevant types, thus leading to more informative type representations, especially for the low-resource types. Besides, we devise an instance interactive decoder to capture instance-level correlations, which predicts event instance types conditioned on the contextual typed event instances, leveraging co-occurrence events as remarkable evidence in prediction. We conduct experiments on two public benchmarks, the MAVEN and ACE-2005 datasets. Empirical results demonstrate the utility of modeling both type-level and instance-level correlations, and the model achieves effective performance on both benchmarks.", "keywords": "event detection;event correlation;information extraction"} {"id": "10.1145/3477495.3532034", "title": "On the Role of Relevance in Natural Language Processing Tasks", "abstract": "Many recent Natural Language Processing (NLP) task formulations, such as question answering and fact verification, are implemented as a two-stage cascading architecture. In the first stage an IR system retrieves "relevant" documents containing the knowledge, and in the second stage an NLP system performs reasoning to solve the task. Optimizing the IR system for retrieving relevant documents ensures that the NLP system has sufficient information to operate over. These recent NLP task formulations raise interesting and exciting challenges for IR, where the end-user of an IR system is not a human with an information need, but another system exploiting the documents retrieved by the IR system to perform reasoning and address the user information need. Among these challenges, as we will show, is that noise from the IR system, such as retrieving spurious or irrelevant documents, can negatively impact the accuracy of the downstream reasoning module. 
Hence, there is the need to balance maximizing relevance while minimizing noise in the IR system. This paper presents experimental results on two NLP tasks implemented as a two-stage cascading architecture. We show how spurious or irrelevant retrieved results from the first stage can induce errors in the second stage. We use these results to ground our discussion of the research challenges that the IR community should address in the context of these knowledge-intensive NLP tasks.", "keywords": "nlp;ir;relevance;neural databases;effectiveness"} {"id": "10.1145/3477495.3531765", "title": "Constrained Sequence-to-Tree Generation for Hierarchical Text Classification", "abstract": "Hierarchical Text Classification (HTC) is a challenging task where a document can be assigned to multiple hierarchically structured categories within a taxonomy. The majority of prior studies consider HTC as a flat multi-label classification problem, which inevitably leads to \u201dlabel inconsistency\u201d problem. In this paper, we formulate HTC as a sequence generation task and introduce a sequence-to-tree framework (Seq2Tree) for modeling the hierarchical label structure. Moreover, we design a constrained decoding strategy with dynamic vocabulary to secure the label consistency of the results. Compared with previous works, the proposed approach achieves significant and consistent improvements on three benchmark datasets.", "keywords": "text classification;sequence-to-sequence;text representation"} {"id": "10.1145/3477495.3531802", "title": "What Makes a Good Podcast Summary?", "abstract": "Abstractive summarization of podcasts is motivated by the growing popularity of podcasts and the needs of their listeners. Podcasting is a markedly different domain from news and other media that are commonly studied in the context of automatic summarization. As such, the qualities of a good podcast summary are yet unknown. 
Using a collection of podcast summaries produced by different algorithms alongside human judgments of summary quality obtained from the TREC 2020 Podcasts Track, we study the correlations between various automatic evaluation metrics and human judgments, as well as the linguistic aspects of summaries that result in strong evaluations.", "keywords": "rouge;podcast summarization;abstractive text summarization;evaluation"} {"id": "10.1145/3477495.3531823", "title": "Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives", "abstract": "Following SimCSE, contrastive learning based methods have achieved the state-of-the-art (SOTA) performance in learning sentence embeddings. However, the unsupervised contrastive learning methods still lag far behind the supervised counterparts. We attribute this to the quality of positive and negative samples, and aim to improve both. Specifically, for positive samples, we propose switch-case augmentation to flip the case of the first letter of randomly selected words in a sentence. This is to counteract the intrinsic bias of pre-trained token embeddings to frequency, word cases and subwords. For negative samples, we sample hard negatives from the whole dataset based on a pre-trained language model. Combining the above two methods with SimCSE, our proposed Contrastive learning with Augmented and Retrieved Data for Sentence embedding (CARDS) method significantly surpasses the current SOTA on STS benchmarks in the unsupervised setting.", "keywords": "data augmentation;natural language understanding;roberta;intrinsic bias;hard negatives;text retrieval"} {"id": "10.1145/3477495.3531825", "title": "Masking and Generation: An Unsupervised Method for Sarcasm Detection", "abstract": "Existing approaches for sarcasm detection are mainly based on supervised learning, in which the promising performance largely depends on a considerable amount of labeled data or extra information. 
In real-world scenarios, however, obtaining abundant labeled data or extra information incurs high labor costs, not to mention that sufficient annotated data is unavailable in many low-resource conditions. To alleviate this dilemma, we investigate sarcasm detection from an unsupervised perspective, in which we explore a masking and generation paradigm in the context to extract the context incongruities for learning sarcastic expression. Further, to improve the feature representations of the sentences, we use unsupervised contrastive learning to improve the sentence representation based on the standard dropout. Experimental results on six perceived sarcasm detection benchmark datasets show that our approach outperforms baselines. Simultaneously, our unsupervised method achieves comparable performance to supervised methods on the intended sarcasm dataset.", "keywords": "pre-trained language models;natural language generation;sentiment analysis;unsupervised sarcasm detection;words mask"} {"id": "10.1145/3477495.3531831", "title": "Relation-Guided Few-Shot Relational Triple Extraction", "abstract": "In few-shot relational triple extraction (FS-RTE), one seeks to extract relational triples from plain texts by utilizing only a few annotated samples. Recent work first extracts all entities and then classifies their relations. Such an entity-then-relation paradigm ignores the entity discrepancy between relations. To address it, we propose a novel task decomposition strategy, Relation-then-Entity, for FS-RTE. It first detects relations occurring in a sentence and then extracts the corresponding head/tail entities of the detected relations. To instantiate this strategy, we further propose a model, RelATE, which builds a dual-level attention to aggregate relation-relevant information to detect the relation occurrence and utilizes the annotated samples of the detected relations to extract the corresponding head/tail entities. 
Experimental results show that our model outperforms previous work by absolute gains of 18.98% and 28.85% in F1 under two few-shot settings.", "keywords": "relational triple extraction;few-shot learning;information extraction"} {"id": "10.1145/3477495.3531834", "title": "Tensor-Based Graph Modularity for Text Data Clustering", "abstract": "Graphs are used in several applications to represent similarities between instances. For text data, we can represent texts by different features such as bag-of-words, static embeddings (Word2vec, GloVe, etc.), and contextual embeddings (BERT, RoBERTa, etc.), leading to multiple similarities (or graphs) based on each representation. The proposal posits that incorporating the local invariance within every graph and the consistency across different graphs leads to a consensus clustering that improves the document clustering. This problem is complex and is challenged by the sparsity and noise in each graph. To this end, we rely on the modularity metric, which effectively evaluates graph clustering in such circumstances. Therefore, we present a novel approach for text clustering based on both a sparse tensor representation and graph modularity. This allows clustering texts (nodes) while capturing information arising from the different graphs. We iteratively maximize a Tensor-based Graph Modularity criterion. Extensive experiments on benchmark text clustering datasets are performed, showing that the proposed algorithm, referred to as Tensor Graph Modularity (TGM), outperforms other baseline methods on the clustering task. The source code is available at https://github.com/TGMclustering/TGMclustering.", "keywords": "nlp;graphs;word embedding;tensor;text clustering"} {"id": "10.1145/3477495.3531877", "title": "DeSCoVeR: Debiased Semantic Context Prior for Venue Recommendation", "abstract": "We present a novel semantic context prior-based venue recommendation system that uses only the title and the abstract of a paper.
Based on the intuition that the text in the title and abstract has both semantic and syntactic components, we demonstrate that jointly training a semantic feature extractor and a syntactic feature extractor collaboratively leverages meaningful information that helps recommend venues for papers. The proposed methodology, which we call DeSCoVeR, first elicits these semantic and syntactic features using a Neural Topic Model and a text classifier, respectively. The model then executes a transfer learning optimization procedure to perform a contextual transfer between the feature distributions of the Neural Topic Model and the text classifier during the training phase. DeSCoVeR also mitigates the document-level label bias using a causal back-door path criterion and a sentence-level keyword bias removal technique. Experiments on the DBLP dataset show that DeSCoVeR outperforms the state-of-the-art methods.", "keywords": "document classification;joint learning;mutual transfer;topic modeling;causal debiasing"} {"id": "10.1145/3477495.3531887", "title": "Dual Pseudo Supervision for Semi-Supervised Text Classification with a Reliable Teacher", "abstract": "In this paper, we study semi-supervised text classification (SSTC) by exploring both labeled and extra unlabeled data. One of the most popular SSTC techniques is pseudo-labeling, which assigns pseudo labels to unlabeled data via a teacher classifier trained on labeled data. These pseudo-labeled data are then used to train a student classifier. However, when the pseudo labels are inaccurate, the student classifier will learn from inaccurate data and perform even worse than the teacher. To mitigate this issue, we propose a simple yet efficient pseudo-labeling framework called Dual Pseudo Supervision (DPS), which exploits the feedback signal from the student to guide the teacher to generate better pseudo labels.
In particular, we alternately update the student based on the pseudo-labeled data annotated by the teacher and optimize the teacher based on the student's performance via meta learning. In addition, we design a consistency regularization term to further improve the stability of the teacher. With the above two strategies, the learned reliable teacher can provide more accurate pseudo-labels to the student and thus improve the overall performance of text classification. We conduct extensive experiments on three benchmark datasets (i.e., AG News, Yelp and Yahoo) to verify the effectiveness of our DPS method. Experimental results show that our approach achieves substantially better performance than strong competitors. For reproducibility, we will release our code and data publicly at https://github.com/GRIT621/DPS.", "keywords": "semi-supervised text classification;meta learning;pseudo labeling;consistency regularization"} {"id": "10.1145/3477495.3531900", "title": "An Efficient Fusion Mechanism for Multimodal Low-Resource Setting", "abstract": "The effective fusion of multiple modalities (i.e., text, acoustic, and visual) is a non-trivial task, as these modalities often carry specific and diverse information and do not contribute equally. The fusion of different modalities can be even more challenging under the low-resource setting, where we have fewer samples for training. This paper proposes a multi-representative fusion mechanism that generates diverse fusions with multiple modalities and then chooses the best fusion among them. To achieve this, we first apply convolution filters on multimodal inputs to generate different and diverse representations of modalities. We then fuse pairwise modalities with multiple representations to get the multiple fusions. Finally, we propose an attention mechanism that selects only the most appropriate fusion, which eventually helps resolve the noise problem by ignoring the noisy fusions.
We evaluate our proposed approach on three low-resource multimodal sentiment analysis datasets, i.e., YouTube, MOUD, and ICT-MMMO. Experimental results show the effectiveness of our proposed approach, with accuracies of 59.3%, 83.0%, and 84.1% for the YouTube, MOUD, and ICT-MMMO datasets, respectively.", "keywords": "sentiment analysis;low-resource dataset;multi-representative fusion;deep learning"} {"id": "10.1145/3477495.3531908", "title": "Lightweight Meta-Learning for Low-Resource Abstractive Summarization", "abstract": "Recently, supervised abstractive summarization using high-resource datasets, such as CNN/DailyMail and Xsum, has achieved significant performance improvements. However, most existing high-resource datasets are biased towards a specific domain like news, and annotating document-summary pairs for low-resource datasets is too expensive. Furthermore, the need for low-resource abstractive summarization is emerging, but existing methods for the task, such as transfer learning, still suffer from domain shift and overfitting. To address these problems, we propose a new framework for low-resource abstractive summarization using a meta-learning algorithm that can quickly adapt to a new domain using small data. For adaptive meta-learning, we introduce a lightweight module inserted into the attention mechanism of a pre-trained language model; the module is first meta-learned with high-resource task-related datasets and then is fine-tuned with the low-resource target dataset. We evaluate our model on 11 different datasets.
Experimental results show that the proposed method achieves state-of-the-art results on 9 datasets in low-resource abstractive summarization.", "keywords": "abstractive summarization;low-resource summarization;meta-learning"} {"id": "10.1145/3477495.3531914", "title": "Multi-Label Masked Language Modeling on Zero-Shot Code-Switched Sentiment Analysis", "abstract": "In multilingual communities, code-switching is a common phenomenon and code-switched tasks have become a crucial area of research in natural language processing (NLP) applications. Existing approaches mainly focus on supervised learning. However, it is expensive to annotate a sufficient amount of code-switched data. In this paper, we consider the zero-shot setting and improve model performance on code-switched tasks via monolingual language datasets, unlabeled code-switched datasets, and semantic dictionaries. Inspired by the mechanism of code-switching itself, we propose multi-label masked language modeling and predict both the masked word and its synonyms in other languages. Experimental results show that compared with baselines, our method can further improve the pretrained multilingual model's performance on code-switched sentiment analysis datasets.", "keywords": "sentiment analysis;multi-label;masked language modeling;code-switched;zero-shot learning"} {"id": "10.1145/3477495.3531916", "title": "Extractive Elementary Discourse Units for Improving Abstractive Summarization", "abstract": "Abstractive summarization focuses on generating concise and fluent text from an original document while maintaining the original intent and containing new words that do not appear in the original document. Recent studies point out that rewriting extractive summaries helps improve performance, producing a more concise and comprehensible output summary; these methods use the sentence as the textual unit. However, a single document sentence normally cannot supply sufficient information.
In this paper, we adopt the elementary discourse unit (EDU) as the textual unit of content selection. To utilize EDUs to generate a high-quality summary, we propose a novel summarization model that first uses an EDU selector to choose salient content. Then, the generator model rewrites the selected EDUs as the final summary. To determine the relevancy of each EDU to the entire document, we apply group tag embedding, which establishes the connection between summary sentences and relevant EDUs, so that our generator not only focuses on the selected EDUs but also ingests the entire original document. Extensive experiments on the CNN/Daily Mail dataset have demonstrated the effectiveness of our model.", "keywords": "abstractive summarization;two-stage summarization;text generation"} {"id": "10.1145/3477495.3531920", "title": "Task-Oriented Dialogue System as Natural Language Generation", "abstract": "In this paper, we propose to formulate the task-oriented dialogue system as a purely natural language generation task, so as to fully leverage large-scale pre-trained models like GPT-2 and simplify complicated delexicalization preprocessing. However, directly applying this method suffers heavily from the dialogue entity inconsistency caused by the removal of delexicalized tokens, as well as the catastrophic forgetting problem of the pre-trained model during fine-tuning, leading to unsatisfactory performance. To alleviate these problems, we design a novel GPT-Adapter-CopyNet network, which incorporates the lightweight adapter and CopyNet modules into GPT-2 to achieve better performance on transfer learning and dialogue entity generation.
Experimental results on the DSTC8 Track 1 benchmark and the MultiWOZ dataset demonstrate that our proposed approach significantly outperforms baseline models in both automatic and human evaluations.", "keywords": "gpt;natural language generation;task-oriented dialogue system"} {"id": "10.1145/3583780.3614769", "title": "A Retrieve-and-Read Framework for Knowledge Graph Link Prediction", "abstract": "Knowledge graph (KG) link prediction aims to infer new facts based on existing facts in the KG. Recent studies have shown that using the graph neighborhood of a node via graph neural networks (GNNs) provides more useful information compared to just using the query information. Conventional GNNs for KG link prediction follow the standard message-passing paradigm on the entire KG, which leads to superfluous computation, over-smoothing of node representations, and also limits their expressive power. On a large scale, it becomes computationally expensive to aggregate useful information from the entire KG for inference. To address the limitations of existing KG link prediction frameworks, we propose a novel retrieve-and-read framework, which first retrieves a relevant subgraph context for the query and then jointly reasons over the context and the query with a high-capacity reader. As part of our exemplar instantiation for the new framework, we propose a novel Transformer-based GNN as the reader, which incorporates graph-based attention structure and cross-attention between query and context for deep fusion. This simple yet effective design enables the model to focus on salient context information relevant to the query. Empirical results on two standard KG link prediction datasets demonstrate the competitive performance of the proposed method.
Furthermore, our analysis yields valuable insights for designing improved retrievers within the framework.", "keywords": "transformers;knowledge graph link prediction;knowledge graph completion;graph neural networks"} {"id": "10.1145/3539618.3591630", "title": "A Topic-Aware Summarization Framework with Different Modal Side Information", "abstract": "Automatic summarization plays an important role in the exponential document growth on the Web. On content websites such as CNN.com and WikiHow.com, there often exist various kinds of side information along with the main document for attention attraction and easier understanding, such as videos, images, and queries. Such information can be used for better summarization, as they often explicitly or implicitly mention the essence of the article. However, most of the existing side-aware summarization methods are designed to incorporate either single-modal or multi-modal side information, and cannot effectively adapt to each other. In this paper, we propose a general summarization framework, which can flexibly incorporate various modalities of side information. The main challenges in designing a flexible summarization model with side information include: (1) the side information can be in textual or visual format, and the model needs to align and unify it with the document into the same semantic space, (2) the side inputs can contain information from various aspects, and the model should recognize the aspects useful for summarization. To address these two challenges, we first propose a unified topic encoder, which jointly discovers latent topics from the document and various kinds of side information. The learned topics flexibly bridge and guide the information flow between multiple inputs in a graph encoder through a topic-aware interaction.
Second, we propose a triplet contrastive learning mechanism to align the single-modal or multi-modal information into a unified semantic space, where the summary quality is enhanced by better understanding the document and side information. Results show that our model significantly surpasses strong baselines on three public single-modal or multi-modal benchmark summarization datasets.", "keywords": "abstractive summarization;multimodal summarization"} {"id": "10.1145/3539618.3591703", "title": "Can ChatGPT Write a Good Boolean Query for Systematic Review Literature Search?", "abstract": "Systematic reviews are comprehensive literature reviews for a highly focused research question. These reviews are considered the highest form of evidence in medicine. Complex Boolean queries are developed as part of the systematic review creation process to retrieve literature, as they permit reproducibility and understandability. However, it is difficult and time-consuming to develop high-quality Boolean queries, often requiring the expertise of expert searchers like librarians. Recent advances in transformer-based generative models have shown their ability to effectively follow user instructions and generate answers based on these instructions. In this paper, we investigate ChatGPT as a means for automatically formulating and refining complex Boolean queries for systematic review literature search. Overall, our research finds that ChatGPT has the potential to generate effective Boolean queries. The ability of ChatGPT to follow complex instructions and generate highly precise queries makes it a tool of potential value for researchers conducting systematic reviews, particularly for rapid reviews where time is a constraint and where one can trade off higher precision for lower recall.
We also identify several caveats in using ChatGPT for this task, highlighting that this technology needs further validation before it is suitable for widespread uptake.", "keywords": "prompt engineering;boolean query;systematic review;chatgpt"} {"id": "10.1145/3539618.3591687", "title": "FiD-Light: Efficient and Effective Retrieval-Augmented Text Generation", "abstract": "Retrieval-augmented generation models offer many benefits over standalone language models: besides a textual answer to a given query they provide provenance items retrieved from an updateable knowledge base. However, they are also more complex systems and need to handle long inputs. In this work, we introduce FiD-Light to strongly increase the efficiency of the state-of-the-art retrieval-augmented FiD model, while maintaining the same level of effectiveness. Our FiD-Light model constrains the information flow from the encoder (which encodes passages separately) to the decoder (using concatenated encoded representations). Furthermore, we adapt FiD-Light with re-ranking capabilities through textual source pointers, to improve the top-ranked provenance precision. Our experiments on a diverse set of seven knowledge intensive tasks (KILT) show FiD-Light consistently improves the Pareto frontier between query latency and effectiveness. FiD-Light with source pointing sets substantial new state-of-the-art results on six KILT tasks for combined text generation and provenance retrieval evaluation, while maintaining high efficiency.", "keywords": "retrieval augmented generation;kilt;fusion-in-decoder"} {"id": "10.1145/3539618.3591631", "title": "A Unified Generative Retriever for Knowledge-Intensive Language Tasks via Prompt Learning", "abstract": "Knowledge-intensive language tasks (KILTs) benefit from retrieving high-quality relevant contexts from large external knowledge corpora. 
Learning task-specific retrievers that return relevant contexts at an appropriate level of semantic granularity, such as a document retriever, passage retriever, sentence retriever, and entity retriever, may help to achieve better performance on the end-to-end task. But a task-specific retriever usually has poor generalization ability to new domains and tasks, and it may be costly to deploy a variety of specialised retrievers in practice. We propose a unified generative retriever (UGR) that combines task-specific effectiveness with robust performance over different retrieval tasks in KILTs. To achieve this goal, we make two major contributions: (i) To unify different retrieval tasks into a single generative form, we introduce an n-gram-based identifier for relevant contexts at different levels of granularity in KILTs. And (ii) to address different retrieval tasks with a single model, we employ a prompt learning strategy and investigate three methods to design prompt tokens for each task. In this way, the proposed UGR model can not only share common knowledge across tasks for better generalization, but also perform different retrieval tasks effectively by distinguishing task-specific characteristics. We train UGR on a heterogeneous set of retrieval corpora with well-designed prompts in a supervised and multi-task fashion. Experimental results on the KILT benchmark demonstrate the effectiveness of UGR on in-domain datasets, out-of-domain datasets, and unseen tasks.", "keywords": "unified retriever;generative retrieval;knowledge-intensive language tasks"} {"id": "10.1145/3539618.3591759", "title": "RHB-Net: A Relation-Aware Historical Bridging Network for Text2SQL Auto-Completion", "abstract": "Text2SQL, a natural language interface to database querying, has seen considerable improvement, in part due to advances in deep learning. However, despite recent improvements, existing Text2SQL proposals allow only input in the form of complete questions.
This leaves behind users who struggle to formulate complete questions, e.g., because they lack database expertise or are unfamiliar with the underlying database schema. To address this shortcoming, we study the novel problem of Text2SQL Auto-Completion (TSAC) that extends Text2SQL to also take partial or incomplete questions as input. Specifically, the TSAC problem is to predict the complete, executable SQL query. To solve the problem, we propose a novel Relation-aware Historical Bridging Network (RHB-Net) that consists of a relation-aware union encoder and an extraction-generation sensitive decoder. RHB-Net models relations between questions and database schemas and predicts the ambiguous intents expressed in partial queries. We also propose two optimization strategies: historical query bridging that fuses historical database queries, and a dynamic context construction that prevents repeated generation of the same SQL elements. Extensive experiments with real-world data offer evidence that RHB-Net is capable of outperforming baseline algorithms.", "keywords": "query language;auto-completion;text2sql;database"} {"id": "10.1145/3539618.3591708", "title": "Large Language Models Are Versatile Decomposers: Decomposing Evidence and Questions for Table-Based Reasoning", "abstract": "Table-based reasoning has shown remarkable progress in a wide range of table-based tasks. It is a challenging task, which requires reasoning over both free-form natural language (NL) questions and (semi-)structured tabular data. However, previous table-based reasoning solutions usually suffer from significant performance degradation on \u201chuge\u201d evidence (tables). In addition, most existing methods struggle to reason over complex questions since the essential information is scattered in different places.
To alleviate the above challenges, we exploit large language models (LLMs) as decomposers for effective table-based reasoning, which (i) decompose huge evidence (a huge table) into sub-evidence (a small table) to mitigate the interference of useless information for table reasoning, and (ii) decompose a complex question into simpler sub-questions for text reasoning. First, we use a powerful LLM to decompose the evidence involved in the current question into the sub-evidence that retains the relevant information and excludes the remaining irrelevant information from the \u201chuge\u201d evidence. Second, we propose a novel \u201cparsing-execution-filling\u201d strategy to decompose a complex question into simpler step-by-step sub-questions by generating intermediate SQL queries as a bridge to produce numerical and logical sub-questions with a powerful LLM. Finally, we leverage the decomposed sub-evidence and sub-questions to get the final answer with a few in-context prompting examples. Extensive experiments on three benchmark datasets (TabFact, WikiTableQuestion, and FetaQA) demonstrate that our method achieves significantly better results than competitive baselines for table-based reasoning. Notably, our method outperforms human performance for the first time on the TabFact dataset. In addition to impressive overall performance, our method also has the advantage of interpretability, where the returned results are to some extent tractable with the generated sub-evidence and sub-questions. For reproducibility, we release our source code and data at: https://github.com/AlibabaResearch/DAMO-ConvAI.", "keywords": "pre-trained language models;large language models;table-based reasoning"} {"id": "10.1145/3539618.3591728", "title": "MGeo: Multi-Modal Geographic Language Model Pre-Training", "abstract": "Query and point of interest (POI) matching is a core task in location-based services (LBS), e.g., navigation maps.
It connects users' intent with real-world geographic information. Lately, pre-trained language models (PLMs) have made notable advancements in many natural language processing (NLP) tasks. To overcome the limitation that generic PLMs lack geographic knowledge for query-POI matching, related literature attempts to employ continued pre-training based on domain-specific corpus. However, a query generally describes the geographic context (GC) about its destination and contains mentions of multiple geographic objects like nearby roads and regions of interest (ROIs). These diverse geographic objects and their correlations are pivotal to retrieving the most relevant POI. Text-based single-modal PLMs can barely make use of the important GC and are therefore limited. In this work, we propose a novel method for query-POI matching, namely Multi-modal Geographic language model (MGeo), which comprises a geographic encoder and a multi-modal interaction module. Representing GC as a new modality, MGeo is able to fully extract multi-modal correlations to perform accurate query-POI matching. Moreover, there exists no publicly available query-POI matching benchmark. Intending to facilitate further research, we build a new open-source large-scale benchmark for this topic, i.e., Geographic TExtual Similarity (GeoTES). The POIs come from an open-source geographic information system (GIS) and the queries are manually generated by annotators to prevent privacy issues. Compared with several strong baselines, the extensive experiment results and detailed ablation analyses demonstrate that our proposed multi-modal geographic pre-training method can significantly improve the query-POI matching capability of PLMs with or without users' locations. 
Our code and benchmark are publicly available at https://github.com/PhantomGrapes/MGeo.", "keywords": "multi-modal;language model;benchmark;geographic context;query-poi matching"} {"id": "10.1145/3539618.3591633", "title": "Adapting Generative Pretrained Language Model for Open-Domain Multimodal Sentence Summarization", "abstract": "Multimodal sentence summarization, aiming to generate a brief summary of the source sentence and image, is a new yet challenging task. Although existing methods have achieved compelling success, they still suffer from two key limitations: 1) lacking the adaptation of generative pre-trained language models for open-domain MMSS, and 2) lacking the explicit critical information modeling. To address these limitations, we propose a BART-MMSS framework, where BART is adopted as the backbone. To be specific, we propose a prompt-guided image encoding module to extract the source image feature. It leverages several soft to-be-learned prompts for image patch embedding, which facilitates the visual content injection to BART for open-domain MMSS tasks. Thereafter, we devise an explicit source critical token learning module to directly capture the critical tokens of the source sentence with the reference of the source image, where we incorporate explicit supervision to improve performance. Extensive experiments on a public dataset fully validate the superiority of our proposed method. In addition, the predicted tokens by the vision-guided key-token highlighting module can be easily understood by humans and hence improve the interpretability of our model.", "keywords": "multimodal summarization;pre-trained language model;prompt learning"} {"id": "10.1145/3539618.3591764", "title": "SciMine: An Efficient Systematic Prioritization Model Based on Richer Semantic Information", "abstract": "Systematic review is a crucial method that has been widely used by scholars from different research domains.
However, screening for relevant scientific literature from paper candidates remains an extremely time-consuming process, so the task of screening prioritization has been established to reduce the human workload. Various human-in-the-loop methods have been proposed to solve this task using lexical features. These methods, even though achieving better performance than more sophisticated feature-based models such as BERT, omit rich and essential semantic information and therefore suffer from feature bias. In this study, we propose a novel framework, SciMine, to accelerate this screening process by capturing semantic feature representations from both the background and the corpus. In particular, based on contextual representations learned from pre-trained language models, our approach utilizes an autoencoder-based classifier and a feature-dependent classification module to extract general document-level and phrase-level information. Then a ranking ensemble strategy is used to combine these two complementary pieces of information. Experiments on five real-world datasets demonstrate that SciMine achieves state-of-the-art performance, and comprehensive analysis further shows the efficacy of SciMine in mitigating feature bias.", "keywords": "meta analysis;systematic review;active learning;text classification;human-in-the-loop;screening prioritization"} {"id": "10.1145/3539618.3591667", "title": "Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models", "abstract": "Learning vectors that capture the meaning of concepts remains a fundamental challenge. Somewhat surprisingly, perhaps, pre-trained language models have thus far only enabled modest improvements to the quality of such concept embeddings. Current strategies for using language models typically represent a concept by averaging the contextualised representations of its mentions in some corpus. This is potentially sub-optimal for at least two reasons.
First, contextualised word vectors have an unusual geometry, which hampers downstream tasks. Second, concept embeddings should capture the semantic properties of concepts, whereas contextualised word vectors are also affected by other factors. To address these issues, we propose two contrastive learning strategies, based on the view that whenever two sentences reveal similar properties, the corresponding contextualised vectors should also be similar. One strategy is fully unsupervised, estimating the properties which are expressed in a sentence from the neighbourhood structure of the contextualised word embeddings. The second strategy instead relies on a distant supervision signal from ConceptNet. Our experimental results show that the resulting vectors substantially outperform existing concept embeddings in predicting the semantic properties of concepts, with the ConceptNet-based strategy achieving the best results. These findings are furthermore confirmed in a clustering task and in the downstream task of ontology completion.", "keywords": "contrastive learning;word embedding;commonsense knowledge;language models"} {"id": "10.1145/3539618.3591752", "title": "Prompt Learning for News Recommendation", "abstract": "Some recent news recommendation (NR) methods introduce a Pre-trained Language Model (PLM) to encode news representation by following the vanilla pre-train and fine-tune paradigm with carefully-designed recommendation-specific neural networks and objective functions. Due to the inconsistent task objective with that of PLM, we argue that their modeling paradigm has not well exploited the abundant semantic information and linguistic knowledge embedded in the pre-training process. Recently, the pre-train, prompt, and predict paradigm, called prompt learning, has achieved many successes in natural language processing domain. 
In this paper, we make the first attempt to apply this new paradigm, developing a Prompt Learning for News Recommendation (Prompt4NR) framework, which transforms the task of predicting whether a user will click a candidate news article into a cloze-style mask-prediction task. Specifically, we design a series of prompt templates, including discrete, continuous, and hybrid templates, and construct their corresponding answer spaces to examine the proposed Prompt4NR framework. Furthermore, we use prompt ensembling to integrate predictions from multiple prompt templates. Extensive experiments on the MIND dataset validate the effectiveness of our Prompt4NR with a set of new benchmark results.", "keywords": "pre-trained language model;prompt learning;news recommendation"} {"id": "10.1145/3491102.3501825", "title": "Design Guidelines for Prompt Engineering Text-to-Image Generative Models", "abstract": "Text-to-image generative models are a new and powerful way to generate visual artwork. However, the open-ended nature of text as interaction is double-edged; while users can input anything and have access to an infinite range of generations, they also must engage in brute-force trial and error with the text prompt when the result quality is poor. We conduct a study exploring what prompt keywords and model hyperparameters can help produce coherent outputs. In particular, we study prompts structured to include subject and style keywords and investigate success and failure modes of these prompts. Our evaluation of 5493 generations over the course of five experiments spans 51 abstract and concrete subjects as well as 51 abstract and figurative styles.
From this evaluation, we present design guidelines that can help people produce better outcomes from text-to-image generative models.", "keywords": "text-to-image;computational creativity;multimodal generative models;prompt engineering.;ai co-creation;design guidelines"} {"id": "10.1145/3491102.3517582", "title": "AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts", "abstract": "Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by \u201cunit-testing\u201d sub-components of a Chain. 
In two case studies, we further explore how LLM Chains may be used in future applications.", "keywords": "human-ai interaction;natural language processing;large language models"} {"id": "10.1145/3491102.3501870", "title": "Discovering the Syntax and Strategies of Natural Language Programming with Generative Language Models", "abstract": "In this paper, we present a natural language code synthesis tool, GenLine, backed by 1) a large generative language model and 2) a set of task-specific prompts that create or change code. To understand the user experience of natural language code synthesis with these new types of models, we conducted a user study in which participants applied GenLine to two programming tasks. Our results indicate that while natural language code synthesis can sometimes provide a magical experience, participants still faced challenges. In particular, participants felt that they needed to learn the model\u2019s \u201csyntax,\u201d despite their input being natural language. Participants also struggled to form an accurate mental model of the types of requests the model can reliably translate and developed a set of strategies to debug model input. From these findings, we discuss design implications for future natural language code synthesis tools built using large generative language models.", "keywords": "prompt programming;generative language models;code synthesis"} {"id": "10.1145/3491102.3502030", "title": "CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities", "abstract": "Large language models (LMs) offer unprecedented language generation capabilities and exciting opportunities for interaction design. However, their highly context-dependent capabilities are difficult to grasp and are often subjectively interpreted. In this paper, we argue that by curating and analyzing large interaction datasets, the HCI community can foster more incisive examinations of LMs\u2019 generative capabilities. 
Exemplifying this approach, we present CoAuthor, a dataset designed for revealing GPT-3\u2019s capabilities in assisting creative and argumentative writing. CoAuthor captures rich interactions between 63 writers and four instances of GPT-3 across 1445 writing sessions. We demonstrate that CoAuthor can address questions about GPT-3\u2019s language, ideation, and collaboration capabilities, and reveal its contribution as a writing \u201ccollaborator\u201d under various definitions of good collaboration. Finally, we discuss how this work may facilitate a more principled discussion around LMs\u2019 promises and pitfalls in relation to interaction design. The dataset and an interface for replaying the writing sessions are publicly available at https://coauthor.stanford.edu.", "keywords": "human-ai collaborative writing;gpt-3;crowdsourcing;dataset;language models;natural language generation;writing assistants."} {"id": "costa-etal-2022-domain", "title": "Domain Adaptation in Neural Machine Translation using a Qualia-Enriched FrameNet", "abstract": "In this paper we present Scylla, a methodology for domain adaptation of Neural Machine Translation (NMT) systems that make use of a multilingual FrameNet enriched with qualia relations as an external knowledge base. Domain adaptation techniques used in NMT usually require fine-tuning and in-domain training data, which may pose difficulties for those working with lesser-resourced languages and may also lead to performance decay of the NMT system for out-of-domain sentences. Scylla does not require fine-tuning of the NMT model, avoiding the risk of model over-fitting and consequent decrease in performance for out-of-domain translations. Two versions of Scylla are presented: one using the source sentence as input, and another one using the target sentence. 
We evaluate Scylla in comparison to a state-of-the-art commercial NMT system in an experiment in which 50 sentences from the Sports domain are translated from Brazilian Portuguese to English. The two versions of Scylla significantly outperform the baseline commercial system in HTER.", "keywords": "framenet;qualia relations;sports;machine translation;domain adaptation", "url": "https://aclanthology.org/2022.lrec-1.1"} {"id": "gladkoff-han-2022-hope", "title": "HOPE: A Task-Oriented and Human-Centric Evaluation Framework Using Professional Post-Editing Towards More Effective MT Evaluation", "abstract": "Traditional automatic evaluation metrics for machine translation have been widely criticized by linguists due to their low accuracy, lack of transparency, focus on language mechanics rather than semantics, and low agreement with human quality evaluation. Human evaluations in the form of MQM-like scorecards have always been carried out in real industry setting by both clients and translation service providers (TSPs). However, traditional human translation quality evaluations are costly to perform and go into great linguistic detail, raise issues as to inter-rater reliability (IRR) and are not designed to measure quality of worse than premium quality translations. In this work, we introduce HOPE, a task-oriented and human-centric evaluation framework for machine translation output based on professional post-editing annotations. It contains only a limited number of commonly occurring error types, and uses a scoring model with geometric progression of error penalty points (EPPs) reflecting error severity level to each translation unit. 
The initial experimental work carried out on English-Russian language pair MT outputs on marketing content type of text from highly technical domain reveals that our evaluation framework is quite effective in reflecting the MT output quality regarding both overall system-level performance and segment-level transparency, and it increases the IRR for error type interpretation. The approach has several key advantages, such as ability to measure and compare less than perfect MT output from different systems, ability to indicate human perception of quality, immediate estimation of the labor effort required to bring MT output to premium quality, low-cost and faster application, as well as higher IRR. Our experimental data is available at .", "keywords": "machine translation evaluation;professional post-editing;human evaluation;error classifications", "url": "https://aclanthology.org/2022.lrec-1.2"} {"id": "park-etal-2022-priming", "title": "Priming Ancient Korean Neural Machine Translation", "abstract": "In recent years, there has been an increasing need for the restoration and translation of historical languages. In this study, we attempt to translate historical records in ancient Korean language based on neural machine translation (NMT). Inspired by priming, a cognitive science theory that two different stimuli influence each other, we propose novel priming ancient-Korean NMT (AKNMT) using bilingual subword embedding initialization with structural property awareness in the ancient documents. Finally, we obtain state-of-the-art results in the AKNMT task. 
To the best of our knowledge, this is the first study to develop a human-centric model that incorporates concepts from cognitive science and to analyze the results from the perspective of interference and cognitive dissonance theory.", "keywords": "ancient-korean neural machine translation;priming;neural machine translation", "url": "https://aclanthology.org/2022.lrec-1.3"} {"id": "bond-choo-2022-sense", "title": "Sense and Sentiment", "abstract": "In this paper we examine existing sentiment lexicons and sense-based sentiment-tagged corpora to find out how sense and concept-based semantic relations affect sentiment scores (for polarity and valence). We show that some relations are good predictors of sentiment of related words: antonyms have similar valence and opposite polarity, synonyms similar valence and polarity, as do many derivational relations. We use this knowledge and existing resources to build a sentiment annotated wordnet of English, and show how it can be used to produce sentiment lexicons for other languages using the Open Multilingual Wordnet.", "keywords": "sentiment;wordnet;derivation;inflection", "url": "https://aclanthology.org/2022.lrec-1.7"} {"id": "brabant-etal-2022-coqar", "title": "CoQAR: Question Rewriting on CoQA", "abstract": "Questions asked by humans during a conversation often contain contextual dependencies, i.e., explicit or implicit references to previous dialogue turns. These dependencies take the form of coreferences (e.g., via pronoun use) or ellipses, and can make understanding difficult for automated systems. One way to facilitate the understanding and subsequent treatment of a question is to rewrite it into an out-of-context form, i.e., a form that can be understood without the conversational context. We propose CoQAR, a corpus containing 4.5K conversations from the Conversational Question-Answering dataset CoQA, for a total of 53K follow-up question-answer pairs.
Each original question was manually annotated with at least 2 and at most 3 out-of-context rewritings. CoQA originally contains 8k conversations, which sum up to 127k question-answer pairs. CoQAR can be used in the supervised learning of three tasks: question paraphrasing, question rewriting and conversational question answering. In order to assess the quality of CoQAR's rewritings, we conduct several experiments consisting of training and evaluating models for these three tasks. Our results support the idea that question rewriting can be used as a preprocessing step for (conversational and non-conversational) question answering models, thereby improving their performance.", "keywords": "question rewriting;conversational question answering;question paraphrasing", "url": "https://aclanthology.org/2022.lrec-1.13"} {"id": "aicher-etal-2022-user", "title": "User Interest Modelling in Argumentative Dialogue Systems", "abstract": "Most systems helping to provide structured information and support opinion building discuss with users without considering their individual interests. The scarce existing research on user interest in dialogue systems depends on explicit user feedback. Such systems require user responses that are not content-related and thus tend to disturb the dialogue flow. In this paper, we present a novel model for implicitly estimating user interest during argumentative dialogues based on semantically clustered data. To this end, an online user study was conducted to acquire training data which was used to train a binary neural network classifier in order to predict whether or not users are still interested in the content of the ongoing dialogue.
We achieved a classification accuracy of 74.9% and furthermore used different Artificial Neural Networks (ANNs) to investigate which new argument would best fit the user's interest.", "keywords": "interest model;user model;argumentative dialogue systems;conversational systems;user usability;user satisfaction;hci;preference modelling", "url": "https://aclanthology.org/2022.lrec-1.14"} {"id": "xompero-etal-2022-every", "title": "Every time I fire a conversational designer, the performance of the dialogue system goes down", "abstract": "Incorporating handwritten domain scripts into neural-based task-oriented dialogue systems may be an effective way to reduce the need for large sets of annotated dialogues. In this paper, we investigate how the use of domain scripts written by conversational designers affects the performance of neural-based dialogue systems. To support this investigation, we propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN) where domain scripts are coded in semi-logical rules. By using CLINN, we evaluated semi-logical rules produced by a team of differently-skilled conversational designers. We experimented with the Restaurant domain of the MultiWOZ dataset. Results show that external knowledge is extremely important for reducing the need for annotated examples for conversational systems. In fact, rules from conversational designers used in CLINN significantly outperform a state-of-the-art neural-based dialogue system when trained with smaller sets of annotated dialogues.", "keywords": "neural-based dialogue systems;task-oriented dialogue;handwritten rules;hybrid dialogue systems", "url": "https://aclanthology.org/2022.lrec-1.15"} {"id": "wen-etal-2022-empirical", "title": "An Empirical Study on the Overlapping Problem of Open-Domain Dialogue Datasets", "abstract": "Open-domain dialogue systems aim to converse with humans through text, and dialogue research has heavily relied on benchmark datasets.
In this work, we observe the overlapping problem in DailyDialog and OpenSubtitles, two popular open-domain dialogue benchmark datasets. Our systematic analysis then shows that such overlapping can be exploited to obtain fake state-of-the-art performance. Finally, we address this issue by cleaning these datasets and setting up a proper data processing procedure for future research.", "keywords": "open-domain dialogue;data cleaning", "url": "https://aclanthology.org/2022.lrec-1.16"} {"id": "mapelli-etal-2022-language", "title": "Language Resources to Support Language Diversity \u2013 the ELRA Achievements", "abstract": "This article highlights ELRA\u2019s latest achievements in the field of Language Resources (LRs) identification, sharing and production. It also reports on ELRA's involvement in several national and international projects, as well as in the organization of events for the support of LRs and related Language Technologies, including for under-resourced languages. Over the past few years, ELRA, together with its operational agency ELDA, has continued to increase its catalogue offer of LRs, establishing worldwide partnerships for the production of various types of LRs (SMS, tweets, crawled data, MT aligned data, speech LRs, sentiment-based data, etc.). Through their consistent involvement in EU-funded projects, ELRA and ELDA have contributed to improve the access to multilingual information in the context of the pandemic, develop tools for the de-identification of texts in the legal and medical domains, support the EU eTranslation Machine Translation system, and set up a European platform providing access to both resources and services. In December 2019, ELRA co-organized the LT4All conference, whose main topics were Language Technologies for enabling linguistic diversity and multilingualism worldwide. 
Moreover, although LREC was cancelled in 2020, ELRA published the LREC 2020 proceedings for the Main conference and Workshops papers, and carried on its dissemination activities while targeting the new LREC edition for 2022.", "keywords": "language resources;language diversity;under-resourced languages;language technology;language resources infrastructures;anonymization", "url": "https://aclanthology.org/2022.lrec-1.58"} {"id": "kamocki-witt-2022-ethical", "title": "Ethical Issues in Language Resources and Language Technology \u2013 Tentative Categorisation", "abstract": "Ethical issues in Language Resources and Language Technology are often invoked, but rarely discussed. This is at least partly because little work has been done to systematize ethical issues and principles applicable in the fields of Language Resources and Language Technology. This paper provides an overview of ethical issues that arise at different stages of Language Resources and Language Technology development, from the conception phase through the construction phase to the use phase. Based on this overview, the authors propose a tentative taxonomy of ethical issues in Language Resources and Language Technology, built around five principles: Privacy, Property, Equality, Transparency and Freedom. The authors hope that this tentative taxonomy will facilitate ethical assessment of projects in the field of Language Resources and Language Technology, and structure the discussion on ethical issues in this domain, which may eventually lead to the adoption of a universally accepted Code of Ethics of the Language Resources and Language Technology community.", "keywords": "ethics;privacy;language resources;language technology", "url": "https://aclanthology.org/2022.lrec-1.59"} {"id": "ducel-etal-2022-name", "title": "Do we Name the Languages we Study? 
The #BenderRule in LREC and ACL articles", "abstract": "This article studies the application of the #BenderRule in Natural Language Processing (NLP) articles according to two dimensions. Firstly, in a contrastive manner, by considering two major international conferences, LREC and ACL, and secondly, in a diachronic manner, by inspecting nearly 14,000 articles over a period of time ranging from 2000 to 2020 for LREC and from 1979 to 2020 for ACL. For this purpose, we created a corpus from LREC and ACL articles from the above-mentioned periods, from which we manually annotated nearly 1,000. We then developed two classifiers to automatically annotate the rest of the corpus. Our results show that LREC articles tend to respect the #BenderRule (80 to 90% of them respect it), whereas 30 to 40% of ACL articles do not. Interestingly, over the considered periods, the results appear to be stable for the two conferences, even though a rebound in ACL 2020 could be a sign of the influence of the blog post about the #BenderRule.", "keywords": "#benderrule;language diversity;ethics", "url": "https://aclanthology.org/2022.lrec-1.60"} {"id": "de-bruyne-etal-2022-aspect", "title": "Aspect-Based Emotion Analysis and Multimodal Coreference: A Case Study of Customer Comments on Adidas Instagram Posts", "abstract": "While aspect-based sentiment analysis of user-generated content has received a lot of attention in the past years, emotion detection at the aspect level has been relatively unexplored. Moreover, given the rise of more visual content on social media platforms, we want to meet the ever-growing share of multimodal content. In this paper, we present a multimodal dataset for Aspect-Based Emotion Analysis (ABEA). Additionally, we take the first steps in investigating the utility of multimodal coreference resolution in an ABEA framework. 
The presented dataset consists of 4,900 comments on 175 images and is annotated with aspect and emotion categories and the emotional dimensions of valence and arousal. Our preliminary experiments suggest that ABEA does not benefit from multimodal coreference resolution, and that aspect and emotion classification only requires textual information. However, when more specific information about the aspects is desired, image recognition could be essential.", "keywords": "absa;abea;sentiment analysis;emotion detection;multimodal coreference", "url": "https://aclanthology.org/2022.lrec-1.61"} {"id": "roccabruna-etal-2022-multi", "title": "Multi-source Multi-domain Sentiment Analysis with BERT-based Models", "abstract": "Sentiment analysis is one of the most widely studied tasks in natural language processing. While BERT-based models have achieved state-of-the-art results in this task, little attention has been given to its performance variability across class labels, multi-source and multi-domain corpora. In this paper, we present an improved state-of-the-art and comparatively evaluate BERT-based models for sentiment analysis on Italian corpora. The proposed model is evaluated over eight sentiment analysis corpora from different domains (social media, finance, e-commerce, health, travel) and sources (Twitter, YouTube, Facebook, Amazon, Tripadvisor, Opera and Personal Healthcare Agent) on the prediction of positive, negative and neutral classes. Our findings suggest that BERT-based models are confident in predicting positive and negative examples but not as much with neutral examples. 
We release the sentiment analysis model as well as a new financial-domain sentiment corpus.", "keywords": "sentiment analysis;multi-domain;multi-source", "url": "https://aclanthology.org/2022.lrec-1.62"} {"id": "prost-2022-integrating", "title": "Integrating a Phrase Structure Corpus Grammar and a Lexical-Semantic Network: the HOLINET Knowledge Graph", "abstract": "In this paper we address the question of how to integrate grammar and lexical-semantic knowledge within a single and homogeneous knowledge graph. We introduce a graph modelling of grammar knowledge which enables its merging with a lexical-semantic network. Such an integrated representation is expected, for instance, to provide new material for language-related graph embeddings in order to model interactions between Syntax and Semantics. Our base model relies on a phrase structure grammar. The phrase structure is accounted for by both a Proof-Theoretical representation, through a Context-Free Grammar, and a Model-Theoretical one, through a constraint-based grammar. The constraint types colour the grammar layer with syntactic relationships such as Immediate Dominance, Linear Precedence, and more. We detail a creation process which infers the grammar layer from a corpus annotated in constituency and integrates it with a lexical-semantic network through a shared POS tagset. We implement the process, and experiment with the French Treebank and the JeuxDeMots lexical-semantic network. The outcome is the HOLINET knowledge graph.", "keywords": "knowledge graph;lexical-semantic network;grammar;phrase structure;constituency;jeuxdemots", "url": "https://aclanthology.org/2022.lrec-1.65"} {"id": "ottolina-etal-2022-impact", "title": "On the Impact of Temporal Representations on Metaphor Detection", "abstract": "State-of-the-art approaches for metaphor detection compare a word's literal - or core - meaning and its contextual meaning using metaphor classifiers based on neural networks.
However, metaphorical expressions evolve over time due to various reasons, such as cultural and societal impact. Metaphorical expressions are known to co-evolve with language and literal word meanings, and even drive, to some extent, this evolution. This poses the question of whether different, possibly time-specific, representations of literal meanings may impact the metaphor detection task. To the best of our knowledge, this is the first study that examines the metaphor detection task with a detailed exploratory analysis where different temporal and static word embeddings are used to account for different representations of literal meanings. Our experimental analysis is based on three popular benchmarks used for metaphor detection and word embeddings extracted from different corpora and temporally aligned using different state-of-the-art approaches. The results suggest that the usage of different static word embedding methods does impact the metaphor detection task and some temporal word embeddings slightly outperform static methods. However, the results also suggest that temporal word embeddings may provide representations of the core meaning of the metaphor even too close to their contextual meaning, thus confusing the classifier. Overall, the interaction between temporal language evolution and metaphor detection appears tiny in the benchmark datasets used in our experiments. This suggests that future work for the computational analysis of this important linguistic phenomenon should first start by creating a new dataset where this interaction is better represented.", "keywords": "metaphor detection;temporal word embeddings;static word embeddings", "url": "https://aclanthology.org/2022.lrec-1.66"} {"id": "sileo-moens-2022-analysis", "title": "Analysis and Prediction of NLP Models via Task Embeddings", "abstract": "Task embeddings are low-dimensional representations that are trained to capture task properties. 
In this paper, we propose MetaEval, a collection of 101 NLP tasks. We fit a single transformer to all MetaEval tasks jointly while conditioning it on learned embeddings. The resulting task embeddings enable a novel analysis of the space of tasks. We then show that task aspects can be mapped to task embeddings for new tasks without using any annotated examples. Predicted embeddings can modulate the encoder for zero-shot inference and outperform a zero-shot baseline on GLUE tasks. The provided multitask setup can function as a benchmark for future transfer learning research.", "keywords": "task embeddings;metalearning;natural language processing;evaluation;extreme multi-task learning", "url": "https://aclanthology.org/2022.lrec-1.67"} {"id": "hazem-etal-2022-cross", "title": "Cross-lingual and Cross-domain Transfer Learning for Automatic Term Extraction from Low Resource Data", "abstract": "Automatic Term Extraction (ATE) is a key component for domain knowledge understanding and an important basis for further natural language processing applications. Even with persistent improvements, ATE still exhibits weak results exacerbated by small training data inherent to specialized domain corpora. Recently, transformers-based deep neural models, such as BERT, have proven to be efficient in many downstream NLP tasks. However, no systematic evaluation of ATE has been conducted so far. In this paper, we run an extensive study on fine-tuning pre-trained BERT models for ATE. We propose strategies that empirically show BERT's effectiveness using cross-lingual and cross-domain transfer learning to extract single and multi-word terms. Experiments have been conducted on four specialized domains in three languages. 
The obtained results suggest that BERT can capture cross-domain and cross-lingual terminologically-marked contexts shared by terms, opening a new design-pattern for ATE.", "keywords": "ate;automatic term extraction;terminology;low resource;multilingual", "url": "https://aclanthology.org/2022.lrec-1.68"} {"id": "jurkschat-etal-2022-shot", "title": "Few-Shot Learning for Argument Aspects of the Nuclear Energy Debate", "abstract": "We approach aspect-based argument mining as a supervised machine learning task to classify arguments into semantically coherent groups referring to the same defined aspect categories. As an exemplary use case, we introduce the Argument Aspect Corpus - Nuclear Energy that separates arguments about the topic of nuclear energy into nine major aspects. Since the collection of training data for further aspects and topics is costly, we investigate the potential for current transformer-based few-shot learning approaches to accurately classify argument aspects. The best approach is applied to a British newspaper corpus covering the debate on nuclear energy over the past 21 years. Our evaluation shows that a stable prediction of shares of argument aspects in this debate is feasible with 50 to 100 training samples per aspect. Moreover, we see signals for a clear shift in the public discourse in favor of nuclear energy in recent years. 
This revelation of changing patterns of pro and contra arguments related to certain aspects over time demonstrates the potential of supervised argument aspect detection for tracking issue-specific media discourses.", "keywords": "argument mining;aspect-based argument mining;argument frames;argument aspects;text classification;few-shot learning;nuclear energy discourse", "url": "https://aclanthology.org/2022.lrec-1.69"} {"id": "bardhan-etal-2022-drugehrqa", "title": "DrugEHRQA: A Question Answering Dataset on Structured and Unstructured Electronic Health Records For Medicine Related Queries", "abstract": "This paper develops the first question answering dataset (DrugEHRQA) containing question-answer pairs from both structured tables and unstructured notes from a publicly available Electronic Health Record (EHR). EHRs contain patient records, stored in structured tables and unstructured clinical notes. The information in structured and unstructured EHRs is not strictly disjoint: information may be duplicated, contradictory, or provide additional context between these sources. Our dataset has medication-related queries, containing over 70,000 question-answer pairs. To provide a baseline model and help analyze the dataset, we have used a simple model (MultimodalEHRQA) which uses the predictions of a modality selection network to choose between EHR tables and clinical notes to answer the questions. This is used to direct the questions to the table-based or text-based state-of-the-art QA model. In order to address the problem arising from complex, nested queries, this is the first time Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers (RAT-SQL) has been used to test the structure of query templates in EHR data. 
Our goal is to provide a benchmark dataset for multi-modal QA systems, and to open up new avenues of research in improving question answering over EHR structured data by using context from unstructured clinical data.", "keywords": "multimodal question answering;electronic health records;ehrs;databases;information retrieval", "url": "https://aclanthology.org/2022.lrec-1.117"} {"id": "verkijk-vossen-2022-efficiently", "title": "Efficiently and Thoroughly Anonymizing a Transformer Language Model for Dutch Electronic Health Records: a Two-Step Method", "abstract": "Neural Network (NN) architectures are used more and more to model large amounts of data, such as text data available online. Transformer-based NN architectures have shown to be very useful for language modelling. Although many researchers study how such Language Models (LMs) work, not much attention has been paid to the privacy risks of training LMs on large amounts of data and publishing them online. This paper presents a new method for anonymizing a language model by presenting the way in which MedRoBERTa.nl, a Dutch language model for hospital notes, was anonymized. The two-step method involves i) automatic anonymization of the training data and ii) semi-automatic anonymization of the LM\u2019s vocabulary. Adopting the fill-mask task where the model predicts what tokens are most probable in a certain context, it was tested how often the model will predict a name in a context where a name should be. It was shown that it predicts a name-like token 0.2% of the time. Any name-like token that was predicted was never the name originally present in the training data. 
By explaining how an LM trained on highly private real-world medical data can be published, we hope that more language resources will be published openly and responsibly so the scientific community can profit from them.", "keywords": "anonymization;language model;medical text data", "url": "https://aclanthology.org/2022.lrec-1.118"} {"id": "grobol-etal-2022-bertrade", "title": "BERTrade: Using Contextual Embeddings to Parse Old French", "abstract": "The successes of contextual word embeddings learned by training large-scale language models, while remarkable, have mostly occurred for languages where significant amounts of raw texts are available and where annotated data in downstream tasks have a relatively regular spelling. Conversely, it is not yet completely clear if these models are also well suited for lesser-resourced and more irregular languages. We study the case of Old French, which is in the interesting position of having a relatively limited amount of available raw text, but enough annotated resources to assess the relevance of contextual word embedding models for downstream NLP tasks. In particular, we use POS-tagging and dependency parsing to evaluate the quality of such models in a large array of configurations, including models trained from scratch from small amounts of raw text and models pre-trained on other languages but fine-tuned on Medieval French data.", "keywords": "old french;contextual word embeddings;dependency parsing;part of speech tagging", "url": "https://aclanthology.org/2022.lrec-1.119"} {"id": "zabokrtsky-etal-2022-towards", "title": "Towards Universal Segmentations: UniSegments 1.0", "abstract": "Our work aims at developing a multilingual data resource for morphological segmentation. We present a survey of 17 existing data resources relevant for segmentation in 32 languages, and analyze the diversity of how individual linguistic phenomena are captured across them.
Inspired by the success of Universal Dependencies, we propose a harmonized scheme for segmentation representation, and convert the data from the studied resources into this common scheme. Harmonized versions of resources available under free licenses are published as a collection called UniSegments 1.0.", "keywords": "morphology;morpheme;morph;segmentation;multilingual language resources", "url": "https://aclanthology.org/2022.lrec-1.122"} {"id": "bear-cook-2022-leveraging", "title": "Leveraging a Bilingual Dictionary to Learn Wolastoqey Word Representations", "abstract": "Word embeddings (Mikolov et al., 2013; Pennington et al., 2014) have been used to bolster the performance of natural language processing systems in a wide variety of tasks, including information retrieval (Roy et al., 2018) and machine translation (Qi et al., 2018). However, approaches to learning word embeddings typically require large corpora of running text to learn high quality representations. For many languages, such resources are unavailable. This is the case for Wolastoqey, also known as Passamaquoddy-Maliseet, an endangered low-resource Indigenous language. As there exist no large corpora of running text for Wolastoqey, in this paper, we leverage a bilingual dictionary to learn Wolastoqey word embeddings by encoding their corresponding English definitions into vector representations using pretrained English word and sequence representation models. Specifically, we consider representations based on pretrained word2vec (Mikolov et al., 2013), RoBERTa (Liu et al., 2019) and sentence-BERT (Reimers and Gurevych, 2019) models. We evaluate these embeddings in word prediction tasks focused on part-of-speech, animacy, and transitivity; semantic clustering; and reverse dictionary search. 
In all evaluations we demonstrate that approaches using these embeddings outperform task-specific baselines, without requiring any language-specific training or fine-tuning.", "keywords": "word embeddings;less-resourced languages;endangered languages;malecite-passamaquoddy", "url": "https://aclanthology.org/2022.lrec-1.124"} {"id": "urbizu-etal-2022-basqueglue", "title": "BasqueGLUE: A Natural Language Understanding Benchmark for Basque", "abstract": "Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.", "keywords": "neural language models;natural language understanding;less-resourced languages", "url": "https://aclanthology.org/2022.lrec-1.172"} {"id": "stefanovitch-etal-2022-resources", "title": "Resources and Experiments on Sentiment Classification for Georgian", "abstract": "This paper presents, to the best of our knowledge, the first ever publicly available annotated dataset for sentiment classification and semantic polarity dictionary for Georgian. The characteristics of these resources and the process of their creation are described in detail. 
The results of various experiments on the performance of both lexicon- and machine learning-based models for Georgian sentiment classification are also reported. Both 3-label (positive, neutral, negative) and 4-label settings (same labels + mixed) are considered. The machine learning models explored include, i.a., logistic regression, SVMs, and transformer-based models. We also explore transfer learning- and translation-based (to a well-supported language) approaches. The obtained results for Georgian are on par with the state-of-the-art results in sentiment classification for well-studied languages when using training data of comparable size.", "keywords": "sentiment analysis;low-resourced language;linguistic resources;georgian language;machine learning", "url": "https://aclanthology.org/2022.lrec-1.173"} {"id": "tamburini-2022-combining", "title": "Combining ELECTRA and Adaptive Graph Encoding for Frame Identification", "abstract": "This paper presents contributions in two directions: first, we propose a new system for Frame Identification (FI), based on discriminatively trained pre-trained text encoders and graph embeddings, producing state-of-the-art performance; and second, we take into consideration all the extremely different procedures used to evaluate systems for this task, performing a complete evaluation over two benchmarks and all possible splits and cleaning procedures used in the FI literature.", "keywords": "frame semantics;frame identification;evaluation", "url": "https://aclanthology.org/2022.lrec-1.178"} {"id": "gari-soler-etal-2022-polysemy", "title": "Polysemy in Spoken Conversations and Written Texts", "abstract": "Our discourses are full of potential lexical ambiguities, due in part to the pervasive use of words having multiple senses. Sometimes, one word may even be used in more than one sense throughout a text. But, to what extent is this true for different kinds of texts? 
Does the use of polysemous words change when a discourse involves two people, or when speakers have time to plan what to say? We investigate these questions by comparing the polysemy level of texts of different nature, with a focus on spontaneous spoken dialogs; unlike previous work which examines solely scripted, written, monolog-like data. We compare multiple metrics that presuppose different conceptualizations of text polysemy, i.e., they consider the observed or the potential number of senses of words, or their sense distribution in a discourse. We show that the polysemy level of texts varies greatly depending on the kind of text considered, with dialog and spoken discourses having generally a higher polysemy level than written monologs. Additionally, our results emphasize the need for relaxing the popular \"one sense per discourse\" hypothesis.", "keywords": "semantics;word sense disambiguation;document classification;text categorisation", "url": "https://aclanthology.org/2022.lrec-1.179"} {"id": "viksna-etal-2022-assessing", "title": "Assessing Multilinguality of Publicly Accessible Websites", "abstract": "Although information on the Internet can be shared in many languages, the language presence on the World Wide Web is very disproportionate. The problem of multilingualism on the Web, in particular access, availability and quality of information in the world\u2019s languages, has been the subject of UNESCO focus for several decades. Making European websites more multilingual is also one of the focal targets of the Connecting Europe Facility Automated Translation (CEF AT) digital service infrastructure. In order to monitor this goal, alongside other possible solutions, CEF AT needs a methodology and easy to use tool to assess the degree of multilingualism of a given website. 
In this paper we investigate methods and tools that automatically analyse the language diversity of the Web and propose indicators and a methodology for measuring the multilingualism of European websites. We also introduce a prototype tool based on open-source software that helps to assess the multilingualism of the Web and can be independently run at set intervals. We also present initial results obtained with our tool, which allow us to conclude that multilingualism on the Web is still a problem not only at the world level, but also at the European and regional levels.", "keywords": "multilingualism;internet;scoring;language equality", "url": "https://aclanthology.org/2022.lrec-1.227"} {"id": "frohberg-binder-2022-crass", "title": "CRASS: A Novel Data Set and Benchmark to Test Counterfactual Reasoning of Large Language Models", "abstract": "We introduce the CRASS (counterfactual reasoning assessment) data set and benchmark utilizing questionized counterfactual conditionals as a novel and powerful tool to evaluate large language models. We present the data set design and benchmark. We test six state-of-the-art models against our benchmark. Our results show that it poses a valid challenge for these models and opens up considerable room for their improvement.", "keywords": "common-sense reasoning;counterfactual conditionals;nlp;large language models", "url": "https://aclanthology.org/2022.lrec-1.229"} {"id": "costa-jussa-etal-2022-evaluating", "title": "Evaluating Gender Bias in Speech Translation", "abstract": "The scientific community is increasingly aware of the necessity to embrace pluralism and consistently represent major and minor social groups. Currently, there are no standard evaluation techniques for different types of biases. Accordingly, there is an urgent need to provide evaluation sets and protocols to measure existing biases in our automatic systems. Evaluating the biases should be an essential step towards mitigating them in the systems. 
This paper introduces WinoST, a new freely available challenge set for evaluating gender bias in speech translation. WinoST is the speech version of WinoMT, an MT challenge set, and both follow an evaluation protocol to measure gender accuracy. Using an S-Transformer end-to-end speech translation system, we report the gender bias evaluation on four language pairs, and we reveal the inaccuracies in translations generating gender-stereotyped translations.", "keywords": "challenge set;multilinguality;direct speech-to-text translation", "url": "https://aclanthology.org/2022.lrec-1.230"} {"id": "bai-stede-2022-argument", "title": "Argument Similarity Assessment in German for Intelligent Tutoring: Crowdsourced Dataset and First Experiments", "abstract": "NLP technologies such as text similarity assessment, question answering and text classification are increasingly being used to develop intelligent educational applications. The long-term goal of our work is an intelligent tutoring system for German secondary schools, which will support students in a school exercise that requires them to identify arguments in an argumentative source text. The present paper presents our work on a central subtask, viz. the automatic assessment of similarity between a pair of argumentative text snippets in German. In the designated use case, students write out key arguments from a given source text; the tutoring system then evaluates them against a target reference, assessing the similarity level between student work and the reference. We collect a dataset for our similarity assessment task through crowdsourcing as authentic German student data are scarce; we label the collected text pairs with similarity scores on a 5-point scale and run first experiments on the task. 
We see that a model based on BERT shows promising results, while we also discuss some challenges that we observe.", "keywords": "semantic textual similarity;argument similarity;intelligent tutoring systems", "url": "https://aclanthology.org/2022.lrec-1.234"} {"id": "jain-etal-2022-leveraging", "title": "Leveraging Pre-trained Language Models for Gender Debiasing", "abstract": "Studying and mitigating gender and other biases in natural language have become important areas of research from both algorithmic and data perspectives. This paper explores the idea of reducing gender bias in a language generation context by generating gender variants of sentences. Previous work in this field has either been rule-based or required large amounts of gender balanced training data. These approaches are however not scalable across multiple languages, as creating data or rules for each language is costly and time-consuming. This work explores a light-weight method to generate gender variants for a given text using pre-trained language models as the resource, without any task-specific labelled data. The approach is designed to work on multiple languages with minimal changes in the form of heuristics. To showcase that, we have tested it on a high-resourced language, namely Spanish, and a low-resourced language from a different family, namely Serbian. 
The approach proved to work very well on Spanish, and while the results were less positive for Serbian, it showed potential even for languages where pre-trained models are less effective.", "keywords": "gender debiasing;language generation;pre-trained language models", "url": "https://aclanthology.org/2022.lrec-1.235"} {"id": "de-la-pena-sarracen-rosso-2022-unsupervised", "title": "Unsupervised Embeddings with Graph Auto-Encoders for Multi-domain and Multilingual Hate Speech Detection", "abstract": "Hate speech detection is a prominent and challenging task, since hate messages are often expressed in subtle ways and with characteristics that may vary depending on the author. Hence, many models suffer from the generalization problem. However, retrieving and monitoring hateful content on social media is a current necessity. In this paper, we propose an unsupervised approach using Graph Auto-Encoders (GAE), which allows us to avoid using labeled data when training the representation of the texts. Specifically, we represent texts as nodes of a graph, and use a transformer layer together with a convolutional layer to encode these nodes in a low-dimensional space. As a result, we obtain embeddings that can be decoded into a reconstruction of the original network. Our main idea is to learn a model with a set of texts without supervision, in order to generate embeddings for the nodes: nodes with the same label should be close in the embedding space, which, in turn, should allow us to distinguish among classes. 
We employ this strategy to detect hate speech in multi-domain and multilingual sets of texts, where our method shows competitive results on small datasets.", "keywords": "hate speech detection;unsupervised embeddings;graph auto-encoders", "url": "https://aclanthology.org/2022.lrec-1.236"} {"id": "heinrich-etal-2022-fquad2", "title": "FQuAD2.0: French Question Answering and Learning When You Don't Know", "abstract": "Question Answering, including Reading Comprehension, is one of the NLP research areas that has seen significant scientific breakthroughs over the past few years, thanks to the concomitant advances in Language Modeling. Most of these breakthroughs, however, are centered on the English language. In 2020, as a first strong initiative to bridge the gap to the French language, Illuin Technology introduced FQuAD1.1, a French Native Reading Comprehension dataset composed of 60,000+ questions and answers samples extracted from Wikipedia articles. Nonetheless, Question Answering models trained on this dataset have a major drawback: they are not able to predict when a given question has no answer in the paragraph of interest, therefore making unreliable predictions in various industrial use-cases. We introduce FQuAD2.0, which extends FQuAD with 17,000+ unanswerable questions, annotated adversarially, in order to be similar to answerable ones. This new dataset, comprising a total of almost 80,000 questions, makes it possible to train French Question Answering models with the ability of distinguishing unanswerable questions from answerable ones. 
We benchmark several models with this dataset: our best model, a fine-tuned CamemBERT-large, achieves an F1 score of 82.3% on this classification task, and an F1 score of 83% on the Reading Comprehension task.", "keywords": "question answering;adversarial;french;multilingual", "url": "https://aclanthology.org/2022.lrec-1.237"} {"id": "toraman-etal-2022-large", "title": "Large-Scale Hate Speech Detection with Cross-Domain Transfer", "abstract": "The performance of hate speech detection models relies on the datasets on which the models are trained. Existing datasets are mostly prepared with a limited number of instances or hate domains that define hate topics. This hinders large-scale analysis and transfer learning with respect to hate domains. In this study, we construct large-scale tweet datasets for hate speech detection in English and a low-resource language, Turkish, consisting of 100k human-labeled tweets each. Our datasets are designed to have an equal number of tweets distributed over five domains. The experimental results supported by statistical tests show that Transformer-based language models outperform conventional bag-of-words and neural models by at least 5% in English and 10% in Turkish for large-scale hate speech detection. The performance is also scalable to different training sizes, such that 98% of performance in English, and 97% in Turkish, are recovered when 20% of training instances are used. We further examine the generalization ability of cross-domain transfer among hate domains. We show that 96% of the performance of a target domain on average is recovered by other domains for English, and 92% for Turkish. 
Gender and religion generalize to other domains most successfully, while sports generalizes least well.", "keywords": "cross-domain transfer;hate speech detection;low-resource language;offensive language;scalability", "url": "https://aclanthology.org/2022.lrec-1.238"} {"id": "meyer-elsweiler-2022-glohbcd", "title": "GLoHBCD: A Naturalistic German Dataset for Language of Health Behaviour Change on Online Support Forums", "abstract": "Health behaviour change is a difficult and prolonged process that requires sustained motivation and determination. Conversational agents have shown promise in supporting the change process in the past. One therapy approach that facilitates change and has been used as a framework for conversational agents is motivational interviewing. However, existing implementations of this therapy approach lack the deep understanding of user utterances that is essential to the spirit of motivational interviewing. To address this lack of understanding, we introduce the GLoHBCD, a German dataset of naturalistic language around health behaviour change. Data was sourced from a popular German weight loss forum and annotated using theoretically grounded motivational interviewing categories. We describe the process of dataset construction and present evaluation results. Initial experiments suggest a potential for broad applicability of the data and the resulting classifiers across different behaviour change domains. We make code to replicate the dataset and experiments available on GitHub.", "keywords": "conversational agents;motivational interviewing;behaviour change;language resources", "url": "https://aclanthology.org/2022.lrec-1.239"} {"id": "cui-etal-2022-openel", "title": "OpenEL: An Annotated Corpus for Entity Linking and Discourse in Open Domain Dialogue", "abstract": "Entity linking in dialogue is the task of mapping entity mentions in utterances to a target knowledge base. 
Prior work on entity linking has mainly focused on well-written articles such as Wikipedia, annotated newswire, or domain-specific datasets. We extend the study of entity linking to open domain dialogue by presenting the OpenEL corpus: an annotated multi-domain corpus for linking entities in natural conversation to Wikidata. Each dialogic utterance in 179 dialogues over 12 topics from the EDINA dataset has been annotated for entities realized by definite referring expressions as well as anaphoric forms such as he, she, it and they. This dataset supports training and evaluation of entity linking in open-domain dialogue, as well as analysis of the effect of using dialogue context and anaphora resolution in model training. It could also be used for fine-tuning a coreference resolution algorithm. To the best of our knowledge, this is the first substantial entity linking corpus publicly available for open-domain dialogue. We also establish baselines for this task using several existing entity linking systems. We found that the Transformer-based system Flair + BLINK has the best performance with a 0.65 F1 score. Our results show that dialogue context is extremely beneficial for entity linking in conversations, with Flair + BLINK achieving an F1 of 0.61 without discourse context. 
These results also demonstrate the remaining performance gap between the baselines and human performance, highlighting the challenges of entity linking in open-domain dialogue, and suggesting many avenues for future research using OpenEL.", "keywords": "entity linking;coreference;discourse modeling;wikidata;open-domain dialogue", "url": "https://aclanthology.org/2022.lrec-1.241"} {"id": "willemsen-etal-2022-collecting", "title": "Collecting Visually-Grounded Dialogue with A Game Of Sorts", "abstract": "An idealized, though simplistic, view of the referring expression production and grounding process in (situated) dialogue assumes that a speaker must merely appropriately specify their expression so that the target referent may be successfully identified by the addressee. However, referring in conversation is a collaborative process that cannot be aptly characterized as an exchange of minimally-specified referring expressions. Concerns have been raised regarding assumptions made by prior work on visually-grounded dialogue that reveal an oversimplified view of conversation and the referential process. We address these concerns by introducing a collaborative image ranking task, a grounded agreement game we call \"A Game Of Sorts\". In our game, players are tasked with reaching agreement on how to rank a set of images given some sorting criterion through a largely unrestricted, role-symmetric dialogue. By putting emphasis on the argumentation in this mixed-initiative interaction, we collect discussions that involve the collaborative referential process. We describe results of a small-scale data collection experiment with the proposed task. 
All discussed materials, which include the collected data, the codebase, and a containerized version of the application, are publicly available.", "keywords": "visually-grounded dialogue;data collection;referring expressions", "url": "https://aclanthology.org/2022.lrec-1.242"} {"id": "wahle-etal-2022-d3", "title": "D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of Computer Science Research", "abstract": "DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15% annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers' abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. 
The D3 dataset, our findings, and source code are publicly available for research purposes.", "keywords": "computer science;scientometrics;research trends;nlp;dblp;ai", "url": "https://aclanthology.org/2022.lrec-1.283"} {"id": "gavidia-etal-2022-cats", "title": "CATs are Fuzzy PETs: A Corpus and Analysis of Potentially Euphemistic Terms", "abstract": "Euphemisms have not received much attention in natural language processing, despite being an important element of polite and figurative language. Euphemisms prove to be a difficult topic, not only because they are subject to language change, but also because humans may not agree on what is a euphemism and what is not. Nonetheless, the first step to tackling the issue is to collect and analyze examples of euphemisms. We present a corpus of potentially euphemistic terms (PETs) along with example texts from the GloWbE corpus. Additionally, we present a subcorpus of texts where these PETs are not being used euphemistically, which may be useful for future applications. We also discuss the results of multiple analyses run on the corpus. Firstly, we find that sentiment analysis on the euphemistic texts supports that PETs generally decrease negative and offensive sentiment. Secondly, we observe cases of disagreement in an annotation task, where humans are asked to label PETs as euphemistic or not in a subset of our corpus text examples. We attribute the disagreement to a variety of potential reasons, including if the PET was a commonly accepted term (CAT).", "keywords": "euphemisms;politeness;nlp", "url": "https://aclanthology.org/2022.lrec-1.285"} {"id": "aumiller-gertz-2022-klexikon", "title": "Klexikon: A German Dataset for Joint Summarization and Simplification", "abstract": "Traditionally, Text Simplification is treated as a monolingual translation task where sentences between source texts and their simplified counterparts are aligned for training. 
However, especially for longer input documents, summarizing the text (or dropping less relevant content altogether) plays an important role in the simplification process, which is currently not reflected in existing datasets. Simultaneously, resources for non-English languages are scarce in general and prohibitive for training new solutions. To tackle this problem, we pose core requirements for a system that can jointly summarize and simplify long source documents. We further describe the creation of a new dataset for joint Text Simplification and Summarization based on German Wikipedia and the German children's encyclopedia \"Klexikon\", consisting of almost 2,900 documents. We release a document-aligned version that particularly highlights the summarization aspect, and provide statistical evidence that this resource is well suited to simplification as well. Code and data are available on Github: https://github.com/dennlinger/klexikon", "keywords": "summarization;text simplification;german", "url": "https://aclanthology.org/2022.lrec-1.288"} {"id": "hartl-kruschwitz-2022-applying", "title": "Applying Automatic Text Summarization for Fake News Detection", "abstract": "The distribution of fake news is not a new but a rapidly growing problem. The shift to news consumption via social media has been one of the drivers for the spread of misleading and deliberately wrong information, as in addition to its ease of use there is rarely any veracity monitoring. Due to the harmful effects of such fake news on society, the detection of these has become increasingly important. We present an approach to the problem that combines the power of transformer-based language models while simultaneously addressing one of their inherent problems. Our framework, CMTR-BERT, combines multiple text representations, with the goal of circumventing sequential limits and related loss of information the underlying transformer architecture typically suffers from. 
Additionally, it enables the incorporation of contextual information. Extensive experiments on two very different, publicly available datasets demonstrate that our approach is able to set new state-of-the-art performance benchmarks. Apart from the benefit of using automatic text summarization techniques, we also find that the incorporation of contextual information contributes to performance gains.", "keywords": "fake news detection;text summarization;bert;ensemble", "url": "https://aclanthology.org/2022.lrec-1.289"} {"id": "meisinger-etal-2022-increasing", "title": "Increasing CMDI's Semantic Interoperability with schema.org", "abstract": "The CLARIN Concept Registry (CCR) is the common semantic ground for most CMDI-based profiles to describe language-related resources in the CLARIN universe. While the CCR supports semantic interoperability within this universe, it does not extend beyond it. The flexibility of CMDI, however, allows users to use other term or concept registries when defining their metadata components. In this paper, we describe our use of schema.org, a light ontology used by many parties across disciplines.", "keywords": "metadata management;semantic interoperability;cmdi;schema.org", "url": "https://aclanthology.org/2022.lrec-1.290"} {"id": "gimeno-gomez-martinez-hinarejos-2022-lip", "title": "LIP-RTVE: An Audiovisual Database for Continuous Spanish in the Wild", "abstract": "Speech is considered a multi-modal process where hearing and vision are two fundamental pillars. In fact, several studies have demonstrated that the robustness of Automatic Speech Recognition systems can be improved when audio and visual cues are combined to represent the nature of speech. In addition, Visual Speech Recognition, an open research problem whose purpose is to interpret speech by reading the lips of the speaker, has been a focus of interest in recent decades. 
Nevertheless, in order to evaluate these systems in the current Deep Learning era, large-scale databases are required. On the other hand, while most of these databases are dedicated to English, other languages lack sufficient resources. Thus, this paper presents a semi-automatically annotated audiovisual database to deal with unconstrained natural Spanish, providing 13 hours of data extracted from Spanish television. Furthermore, baseline results for both speaker-dependent and speaker-independent scenarios are reported using Hidden Markov Models, a traditional paradigm that has been widely used in the field of Speech Technologies.", "keywords": "audiovisual database;speech recognition;lipreading;computer vision", "url": "https://aclanthology.org/2022.lrec-1.294"} {"id": "yun-etal-2022-modality", "title": "Modality Alignment between Deep Representations for Effective Video-and-Language Learning", "abstract": "Video-and-Language learning, such as video question answering or video captioning, is the next challenge for the deep learning community, as it pursues the way human intelligence perceives everyday life. These tasks require the ability to perform multi-modal reasoning, i.e., to handle both visual information and text information simultaneously across time. From this point of view, a cross-modality attention module that fuses video representation and text representation plays a critical role in most recent approaches. However, existing Video-and-Language models merely compute the attention weights without considering the different characteristics of video modality and text modality. Such a na\u00efve attention module hinders current models from fully exploiting the strength of cross-modality. In this paper, we propose a novel Modality Alignment method that benefits the cross-modality attention module by guiding it to easily amalgamate multiple modalities. 
Specifically, we exploit Centered Kernel Alignment (CKA), which was originally proposed to measure the similarity between two deep representations. Our method directly optimizes CKA to make an alignment between video and text embedding representations, hence it aids the cross-modality attention module in combining information across different modalities. Experiments on real-world Video QA tasks demonstrate that our method outperforms conventional multi-modal methods significantly, with a +3.57% accuracy increment compared to the baseline on a popular benchmark dataset. Additionally, in a synthetic data environment, we show that learning the alignment with our method boosts the performance of the cross-modality attention.", "keywords": "multi-modality;video-and-language learning;cross-modality attention", "url": "https://aclanthology.org/2022.lrec-1.295"} {"id": "parisse-etal-2022-multidimensional", "title": "Multidimensional Coding of Multimodal Languaging in Multi-Party Settings", "abstract": "In natural language settings, many interactions include more than two speakers, and real-life interpretation is based on all types of information available in all modalities. This constitutes a challenge for corpus-based analyses because the information in the audio and visual channels must be included in the coding. The goal of the DINLANG project is to tackle that challenge and analyze spontaneous interactions in family dinner settings (two adults and two to three children). The families use either French or LSF (French sign language). Our aim is to compare how participants share language across the range of modalities found in vocal and visual languaging in coordination with dining. In order to pinpoint similarities and differences, we had to find a common coding tool for all situations (variations from one family to another) and modalities. Our coding procedure incorporates the use of the ELAN software. 
We created a template organized around participants, situations, and modalities, rather than around language forms. Spoken language transcription can be integrated, when it exists, but it is not mandatory. Data created with other software can be imported into ELAN files if it is linked using time stamps. Analyses performed with the coded files rely on ELAN's structured search functionalities, which allow fine-grained temporal analyses and which can be complemented with spreadsheets or the R language.", "keywords": "multimodality;interaction;spoken language;sign language;elan;template;coding", "url": "https://aclanthology.org/2022.lrec-1.297"} {"id": "fraser-etal-2022-extracting", "title": "Extracting Age-Related Stereotypes from Social Media Texts", "abstract": "Age-related stereotypes are pervasive in our society, and yet have been under-studied in the NLP community. Here, we present a method for extracting age-related stereotypes from Twitter data, generating a corpus of 300,000 over-generalizations about four contemporary generations (baby boomers, generation X, millennials, and generation Z), as well as \"old\" and \"young\" people more generally. By employing word-association metrics, semi-supervised topic modelling, and density-based clustering, we uncover many common stereotypes as reported in the media and in the psychological literature, as well as some more novel findings. We also observe trends consistent with the existing literature, namely that definitions of \"young\" and \"old\" age appear to be context-dependent, stereotypes for different generations vary across different topics (e.g., work versus family life), and some age-based stereotypes are distinct from generational stereotypes. The method easily extends to other social group labels, and therefore can be used in future work to study stereotypes of different social categories.
By better understanding how stereotypes are formed and spread, and by tracking emerging stereotypes, we hope to eventually develop mitigating measures against such biased statements.", "keywords": "stereotypes;bias;ageism;social media;computational social science", "url": "https://aclanthology.org/2022.lrec-1.341"} {"id": "uban-etal-2022-multi", "title": "Multi-Aspect Transfer Learning for Detecting Low Resource Mental Disorders on Social Media", "abstract": "Mental disorders are a serious and increasingly relevant public health issue. NLP methods have the potential to assist with automatic mental health disorder detection, but building annotated datasets for this task can be challenging; moreover, annotated data is very scarce for disorders other than depression. Understanding the commonalities between certain disorders is also important for clinicians who face the problem of shifting standards of diagnosis. We propose that transfer learning with linguistic features can be useful for approaching both the technical problem of improving mental disorder detection in the context of data scarcity, and the clinical problem of understanding the overlapping symptoms between certain disorders. In this paper, we target four disorders: depression, PTSD, anorexia and self-harm. We explore multi-aspect transfer learning for detecting mental disorders from social media texts, using deep learning models with multi-aspect representations of language (including multiple types of interpretable linguistic features). We explore different transfer learning strategies for cross-disorder and cross-platform transfer, and show that transfer learning can be effective for improving prediction performance for disorders where little annotated data is available. 
We offer insights, through ablation experiments and error analysis, into which linguistic features are the most useful vehicles for transferring knowledge.", "keywords": "mental disorders;depression;transfer learning;explainability;social media;low resource", "url": "https://aclanthology.org/2022.lrec-1.343"} {"id": "mubarak-etal-2022-arcovidvac", "title": "ArCovidVac: Analyzing Arabic Tweets About COVID-19 Vaccination", "abstract": "The emergence of the COVID-19 pandemic and the first global infodemic have changed our lives in many different ways. We relied on social media to get the latest information about the COVID-19 pandemic and at the same time to disseminate information. The content on social media consisted not only of health-related advice, plans, and informative news from policymakers, but also of conspiracies and rumors. It became important to identify such information as soon as it is posted in order to make actionable decisions (e.g., debunking rumors, or taking certain measures for traveling). To address this challenge, we develop and publicly release the first large manually annotated Arabic tweet dataset, ArCovidVac, for the COVID-19 vaccination campaign, covering many countries in the Arab region. The dataset is enriched with different layers of annotation, including (i) informativeness (more vs. less important tweets); (ii) fine-grained tweet content types (e.g., advice, rumors, restrictions, authentic news/information); and (iii) stance towards vaccination (pro-vaccination, neutral, anti-vaccination). Further, we performed an in-depth analysis of the data, exploring the popularity of different vaccines, trending hashtags, topics, and the presence of offensiveness in the tweets. We studied the data for individual types of tweets and temporal changes in stance towards vaccines.
We benchmarked the ArCovidVac dataset using transformer architectures for informativeness, content types, and stance detection.", "keywords": "covid-19;vaccination;stance detection;arabic tweets;tweet classification", "url": "https://aclanthology.org/2022.lrec-1.344"} {"id": "pritzen-etal-2022-multitask", "title": "Multitask Learning for Grapheme-to-Phoneme Conversion of Anglicisms in German Speech Recognition", "abstract": "Anglicisms are a challenge in German speech recognition. Due to their irregular pronunciation compared to native German words, automatically generated pronunciation dictionaries often contain incorrect phoneme sequences for Anglicisms. In this work, we propose a multitask sequence-to-sequence approach for grapheme-to-phoneme conversion to improve the phonetization of Anglicisms. We extended a grapheme-to-phoneme model with a classification task to distinguish Anglicisms from native German words. With this approach, the model learns to generate different pronunciations depending on the classification result. We used our model to create supplementary Anglicism pronunciation dictionaries to be added to an existing German speech recognition model. Tested on a special Anglicism evaluation set, we improved the recognition of Anglicisms compared to a baseline model, reducing the word error rate by a relative 1 % and the Anglicism error rate by a relative 3 %. 
With our experiment, we show that multitask learning can help solve the challenge of Anglicisms in German speech recognition.", "keywords": "automatic speech recognition;grapheme-to-phoneme models;sequence-to-sequence models;multitask learning;anglicisms", "url": "https://aclanthology.org/2022.lrec-1.346"} {"id": "hautli-janisz-etal-2022-qt30", "title": "QT30: A Corpus of Argument and Conflict in Broadcast Debate", "abstract": "Broadcast political debate is a core pillar of democracy: it is the public's easiest access to opinions that shape policies and enables the general public to make informed choices. With QT30, we present the largest corpus of analysed dialogical argumentation ever created (19,842 utterances, 280,000 words) and also the largest corpus of analysed broadcast political debate to date, using 30 episodes of BBC's 'Question Time' from 2020 and 2021. Question Time is the prime institution in UK broadcast political debate and features questions from the public on current political issues, which are responded to by a weekly panel of five figures of UK politics and society. QT30 is highly argumentative and combines language of well-versed political rhetoric with direct, often combative, justification-seeking of the general public. QT30 is annotated with Inference Anchoring Theory, a framework well-known in argument mining, which encodes the way arguments and conflicts are created and reacted to in dialogical settings. The resource is freely available at http://corpora.aifdb.org/qt30.", "keywords": "broadcast political debate;argumentation and conflict;question time;inference anchoring theory", "url": "https://aclanthology.org/2022.lrec-1.352"} {"id": "buzanov-etal-2022-multilingual", "title": "Multilingual Pragmaticon: Database of Discourse Formulae", "abstract": "The paper presents a multilingual database designed to be used as a tool for typological analysis of response constructions called discourse formulae (DF), cf.
English 'No way!' or French 'Ça va!' (≈ 'all right'). The two primary qualities that make DF of theoretical interest for linguists are their idiomaticity and the special nature of their meanings (cf. consent, refusal, negation), determined by their dialogical function. The formal and semantic structures of these items are language-specific. Compiling a database with DF from various languages would help estimate the diversity of DF in both of these aspects, and, at the same time, establish some frequently occurring patterns. The DF in the database are accompanied by glosses and assigned multiple tags, such as pragmatic function, additional semantics, the illocutionary type of the context, etc. As a starting point, Russian, Serbian and Slovene DF are included in the database. This data already shows substantial grammatical and lexical variability.", "keywords": "linguistic database;pragmaticalization;discourse formulae;pragmatic typology;construction grammar", "url": "https://aclanthology.org/2022.lrec-1.355"} {"id": "reiter-etal-2022-exploring", "title": "Exploring Text Recombination for Automatic Narrative Level Detection", "abstract": "Automating the process of understanding the global narrative structure of long texts and stories is still a major challenge for state-of-the-art natural language understanding systems, particularly because annotated data is scarce and existing annotation workflows do not scale well to the annotation of complex narrative phenomena. In this work, we focus on the identification of narrative levels in texts corresponding to stories that are embedded in stories. Lacking sufficient pre-annotated training data, we explore a solution to deal with data scarcity that is common in machine learning: the automatic augmentation of an existing small data set of annotated samples with the help of data synthesis.
We present a workflow for narrative level detection that includes the operationalization of the task, a model, and a data augmentation protocol for automatically generating narrative texts annotated with breaks between narrative levels. Our experiments suggest that narrative levels in long texts constitute a challenging phenomenon for state-of-the-art NLP models, but generating training data synthetically does improve the prediction results considerably.", "keywords": "computational literary studies;narrative levels;training data induction", "url": "https://aclanthology.org/2022.lrec-1.357"} {"id": "bawden-etal-2022-automatic", "title": "Automatic Normalisation of Early Modern French", "abstract": "Spelling normalisation is a useful step in the study and analysis of historical language texts, whether it is manual analysis by experts or automatic analysis using downstream natural language processing (NLP) tools. Not only does it help to homogenise the variable spelling that often exists in historical texts, but it also facilitates the use of off-the-shelf contemporary NLP tools, if contemporary spelling conventions are used for normalisation. We present FREEMnorm, a new benchmark for the normalisation of Early Modern French (from the 17th century) into contemporary French and provide a thorough comparison of three different normalisation methods: ABA, an alignment-based approach, and MT approaches (both statistical and neural), including extensive parameter searching, which is often missing in the normalisation literature.", "keywords": "digital humanities;normalisation;spelling;modern french;machine translation;historical", "url": "https://aclanthology.org/2022.lrec-1.358"} {"id": "heyns-van-zaanen-2022-detecting", "title": "Detecting Multiple Transitions in Literary Texts", "abstract": "Identifying the high level structure of texts provides important information when performing distant reading analysis.
The structure of texts is not necessarily linear, as transitions, such as changes in the scenery or flashbacks, can be present. As a first step in identifying this structure, we aim to identify transitions in texts. Previous work (Heyns and van Zaanen, 2021) proposed a system that can successfully identify one transition in literary texts. The text is split into snippets and LDA is applied, resulting in a sequence of topics. A transition is introduced at the point that separates the topics (before and after the point) best. In this article, we extend the existing system such that it can detect multiple transitions. Additionally, we introduce a new system that inherently handles multiple transitions in texts. The new system also relies on LDA information, but is more robust than the previous system. We apply these systems to texts with known transitions (as they are constructed by concatenating text snippets stemming from different source texts) and evaluate both systems on texts with one transition and texts with two transitions. As both systems rely on LDA to identify transitions between snippets, we also show the impact of varying the number of LDA topics on the results. The new system consistently outperforms the previous system, not only on texts with multiple transitions, but also on single boundary texts.", "keywords": "topic modelling;lda;transition identification", "url": "https://aclanthology.org/2022.lrec-1.360"} {"id": "machado-pardo-2022-evaluating", "title": "Evaluating Methods for Extraction of Aspect Terms in Opinion Texts in Portuguese - the Challenges of Implicit Aspects", "abstract": "One of the challenges of aspect-based sentiment analysis is the implicit mention of aspects. These are more difficult to identify and may require world knowledge to do so.
In this work, we evaluate frequency-based, hybrid, and machine learning methods, including the use of the pre-trained BERT language model, in the task of extracting aspect terms in opinionated texts in Portuguese, emphasizing the analysis of implicit aspects. Beyond the comparative evaluation of methods, the distinguishing contribution of this work is a novel analysis based on a typology of implicit aspects, which shows the knowledge needed to identify each implicit aspect term and thus allows a mapping of the strengths and weaknesses of each method.", "keywords": "sentiment analysis;aspect extraction;implicit aspects", "url": "https://aclanthology.org/2022.lrec-1.407"} {"id": "cambria-etal-2022-senticnet", "title": "SenticNet 7: A Commonsense-based Neurosymbolic AI Framework for Explainable Sentiment Analysis", "abstract": "In recent years, AI research has demonstrated enormous potential for the benefit of humanity and society. While often better than its human counterparts in classification and pattern recognition tasks, AI still struggles with complex tasks that require commonsense reasoning, such as natural language understanding. In this context, the key limitations of current AI models are: dependency, reproducibility, trustworthiness, interpretability, and explainability. In this work, we propose a commonsense-based neurosymbolic framework that aims to overcome these issues in the context of sentiment analysis.
In particular, we employ unsupervised and reproducible subsymbolic techniques such as auto-regressive language models and kernel methods to build trustworthy symbolic representations that convert natural language to a sort of protolanguage and, hence, extract polarity from text in a completely interpretable and explainable manner.", "keywords": "neurosymbolic ai;sentiment analysis;natural language processing", "url": "https://aclanthology.org/2022.lrec-1.408"} {"id": "lugli-etal-2022-embeddings", "title": "Embeddings models for Buddhist Sanskrit", "abstract": "The paper presents novel resources and experiments for Buddhist Sanskrit, broadly defined here including all the varieties of Sanskrit in which Buddhist texts have been transmitted. We release a novel corpus of Buddhist texts, a novel corpus of general Sanskrit and word similarity and word analogy datasets for intrinsic evaluation of Buddhist Sanskrit embeddings models. We compare the performance of word2vec and fastText static embeddings models, with default and optimized parameter settings, as well as contextual models BERT and GPT-2, with different training regimes (including a transfer learning approach using the general Sanskrit corpus) and different embeddings construction regimes (given the encoder layers). The results show that for semantic similarity the fastText embeddings yield the best results, while for word analogy tasks BERT embeddings work the best. 
We also show that for contextual models the optimal layer combination for embedding construction is task-dependent, and that pretraining the contextual embedding models on a reference corpus of general Sanskrit is beneficial, which is a promising finding for future development of embeddings for less-resourced languages and domains.", "keywords": "buddhist sanskrit;embeddings;intrinsic evaluation", "url": "https://aclanthology.org/2022.lrec-1.411"} {"id": "coto-solano-etal-2022-development", "title": "Development of Automatic Speech Recognition for the Documentation of Cook Islands Māori", "abstract": "This paper describes the process of data processing and training of an automatic speech recognition (ASR) system for Cook Islands Māori (CIM), an Indigenous language spoken by approximately 22,000 people in the South Pacific. We transcribed four hours of speech from adult and elderly speakers of the language and prepared two experiments. First, we trained three ASR systems: one statistical, Kaldi; and two based on Deep Learning, DeepSpeech and XLSR-Wav2Vec2. Wav2Vec2 tied with Kaldi for the lowest character error rate (CER=6±1) and was slightly behind in word error rate (WER=23±2 versus WER=18±2 for Kaldi). This provides evidence that Deep Learning ASR systems are reaching the performance of statistical methods on small datasets, and that they can work effectively with extremely low-resource Indigenous languages like CIM. In the second experiment we used Wav2Vec2 to train models with held-out speakers. While the performance decreased (CER=15±7, WER=46±16), the system still showed considerable learning. We intend to use ASR to accelerate the documentation of CIM, using newly transcribed texts to improve the ASR and also to generate teaching and language revitalization materials.
The trained model is available under a license based on the Kaitiakitanga License, which provides for non-commercial use while retaining control of the model by the Indigenous community.", "keywords": "automatic speech recognition;cook islands maori;language documentation;low-resource languages", "url": "https://aclanthology.org/2022.lrec-1.412"} {"id": "wiedemann-etal-2022-generalized", "title": "A Generalized Approach to Protest Event Detection in German Local News", "abstract": "Protest events provide information about social and political conflicts, the state of social cohesion and democratic conflict management, as well as the state of civil society in general. Social scientists are therefore interested in the systematic observation of protest events. With this paper, we release the first German language resource of protest event related article excerpts published in local news outlets. We use this dataset to train and evaluate transformer-based text classifiers to automatically detect relevant newspaper articles. Our best approach reaches a binary F1-score of 93.3 %, which is a promising result for our goal to support political science research. However, in a second experiment, we show that our model does not generalize equally well when applied to data from time periods and localities other than our training sample. To make protest event detection more robust, we test two ways of alternative preprocessing. First, we find that letting the classifier concentrate on sentences around protest keywords improves the F1-score for out-of-sample data up to +4 percentage points. Second, against our initial intuition, masking of named entities during preprocessing does not improve the generalization in terms of F1-scores. 
However, it leads to a significantly improved recall of the models.", "keywords": "protest event detection;protest event analysis;text classification;computational social science", "url": "https://aclanthology.org/2022.lrec-1.413"} {"id": "gnehm-etal-2022-evaluation", "title": "Evaluation of Transfer Learning and Domain Adaptation for Analyzing German-Speaking Job Advertisements", "abstract": "This paper presents text mining approaches on German-speaking job advertisements to enable social science research on the development of the labour market over the last 30 years. In order to build text mining applications providing information about profession and main task of a job, as well as experience and ICT skills needed, we experiment with transfer learning and domain adaptation. Our main contribution consists in building language models which are adapted to the domain of job advertisements, and their assessment on a broad range of machine learning problems. Our findings show the large value of domain adaptation in several respects. First, it boosts the performance of fine-tuned task-specific models consistently over all evaluation experiments. Second, it helps to mitigate rapid data shift over time in our special domain, and enhances the ability to learn from small updates with new, labeled task data. Third, domain-adaptation of language models is efficient: With continued in-domain pre-training we are able to outperform general-domain language models pre-trained on ten times more data. 
We share our domain-adapted language models and data with the research community.", "keywords": "text mining;transfer learning;domain adaptation;bert;computational social science;job advertisements", "url": "https://aclanthology.org/2022.lrec-1.414"} {"id": "perez-almendros-etal-2022-pre", "title": "Pre-Training Language Models for Identifying Patronizing and Condescending Language: An Analysis", "abstract": "Patronizing and Condescending Language (PCL) is a subtle but harmful type of discourse, yet the task of recognizing PCL remains under-studied by the NLP community. Recognizing PCL is challenging because of its subtle nature, because available datasets are limited in size, and because this task often relies on some form of commonsense knowledge. In this paper, we study to what extent PCL detection models can be improved by pre-training them on other, more established NLP tasks. We find that performance gains are indeed possible in this way, in particular when pre-training on tasks focusing on sentiment, harmful language and commonsense morality. In contrast, for tasks focusing on political speech and social justice, no or only very small improvements were witnessed. These findings improve our understanding of the nature of PCL.", "keywords": "patronizing and condescending language;pre-training strategies;natural language processing", "url": "https://aclanthology.org/2022.lrec-1.415"} {"id": "jauhiainen-etal-2022-heli", "title": "HeLI-OTS, Off-the-shelf Language Identifier for Text", "abstract": "This paper introduces HeLI-OTS, an off-the-shelf text language identification tool using the HeLI language identification method. The HeLI-OTS language identifier is equipped with language models for 200 languages and licensed for academic as well as commercial use. We present the HeLI method and its use in our previous research. 
Then we compare the performance of the HeLI-OTS language identifier with that of fastText on two different data sets, showing that fastText favors the recall of common languages, whereas HeLI-OTS reaches both high recall and high precision for all languages. While introducing existing off-the-shelf language identification tools, we also give a picture of digital humanities-related research that uses such tools. The validity of the results of such research depends on the results given by the language identifier used, and especially for research focusing on the less common languages, the tendency to favor widely used languages might be very detrimental, which HeLI-OTS is now able to remedy.", "keywords": "language identification;text classification", "url": "https://aclanthology.org/2022.lrec-1.416"} {"id": "severini-etal-2022-towards", "title": "Towards a Broad Coverage Named Entity Resource: A Data-Efficient Approach for Many Diverse Languages", "abstract": "Parallel corpora are ideal for extracting a multilingual named entity (MNE) resource, i.e., a dataset of names translated into multiple languages. Prior work on extracting MNE datasets from parallel corpora required resources such as large monolingual corpora or word aligners that are unavailable or perform poorly for underresourced languages. We present CLC-BN, a new method for creating an MNE resource, and apply it to the Parallel Bible Corpus, a corpus of more than 1000 languages. CLC-BN learns a neural transliteration model from parallel-corpus statistics, without requiring any other bilingual resources, word aligners, or seed data. Experimental results show that CLC-BN clearly outperforms prior work.
We release an MNE resource for 1340 languages and demonstrate its effectiveness in two downstream tasks: knowledge graph augmentation and bilingual lexicon induction.", "keywords": "low-resource;multilinguality;named entities;transliteration", "url": "https://aclanthology.org/2022.lrec-1.417"} {"id": "khan-etal-2022-towards", "title": "Towards the Construction of a WordNet for Old English", "abstract": "In this paper we will discuss our preliminary work towards the construction of a WordNet for Old English, taking our inspiration from other similar WN construction projects for ancient languages such as Ancient Greek, Latin and Sanskrit. The Old English WordNet (OldEWN) will build upon this innovative work in a number of different ways, which we articulate in the article, most importantly by treating figurative meaning as a 'first-class citizen' in the structuring of the semantic system. From a more practical perspective, we will describe our plan to utilize a pre-existing lexicographic resource and the naisc system to automatically compile a provisional version of the WordNet, which will then be checked and enriched by Old English experts.", "keywords": "wordnet;ancient;figurative;metaphor;metonymy;conceptual metaphor;embodiment", "url": "https://aclanthology.org/2022.lrec-1.418"} {"id": "vahtola-etal-2022-modeling", "title": "Modeling Noise in Paraphrase Detection", "abstract": "Noisy labels in training data present a challenging issue in classification tasks, misleading a model towards incorrect decisions during training. In this paper, we propose the use of a linear noise model to augment pre-trained language models to account for label noise in fine-tuning. We test our approach in a paraphrase detection task with various levels of noise and five different languages. Our experiments demonstrate the effectiveness of the additional noise model in making the training procedures more robust and stable.
Furthermore, we show that this model can be applied without further knowledge about the annotation confidence and reliability of individual training examples, and we analyse our results in light of data selection and sampling strategies.", "keywords": "paraphrase detection;label noise", "url": "https://aclanthology.org/2022.lrec-1.461"} {"id": "laurenti-etal-2022-give", "title": "Give me your Intentions, I'll Predict our Actions: A Two-level Classification of Speech Acts for Crisis Management in Social Media", "abstract": "Discovered by Austin (1962) and extensively promoted by Searle (1975), speech acts (SA) have been the object of extensive discussion in the philosophical and the linguistic literature, as well as in computational linguistics, where the detection of SA has been shown to be an important step in many downstream NLP applications. In this paper, we attempt to measure for the first time the role of SA in urgency detection in tweets, focusing on natural disasters. Indeed, SA are particularly relevant to identify intentions, desires, plans and preferences towards action, therefore providing actionable information that will help to set priorities for the human teams and decide appropriate rescue actions. To this end, we make four main contributions: (1) A two-layer annotation scheme of SA both at the tweet and subtweet levels, (2) A new French dataset of 6,669 tweets annotated for both urgency and SA, (3) An in-depth analysis of the annotation campaign, highlighting the correlation between SA and urgency categories, and (4) A set of deep learning experiments to detect SA in a crisis corpus.
Our results show that SA are correlated with urgency, which is a first important step towards SA-aware NLP-based crisis management on social media.", "keywords": "speech acts;crisis events;social media", "url": "https://aclanthology.org/2022.lrec-1.462"} {"id": "turan-etal-2022-adapting", "title": "Adapting Language Models When Training on Privacy-Transformed Data", "abstract": "In recent years, voice-controlled personal assistants have revolutionized the interaction with smart devices and mobile applications. The collected data are then used by system providers to train language models (LMs). Each spoken message reveals personal information, hence removing private information from the input sentences is necessary. Our data sanitization process relies on recognizing and replacing named entities with other words from the same class. However, this may harm LM training because privacy-transformed data is unlikely to match the test distribution. This paper aims to fill the gap by focusing on the adaptation of LMs initially trained on privacy-transformed sentences using a small amount of original untransformed data. To do so, we combine class-based LMs, which provide an effective approach to overcome data sparsity in the context of n-gram LMs, and neural LMs, which handle longer contexts and can yield better predictions.
Our experiments show that training an LM on privacy-transformed data results in a relative 11% word error rate (WER) increase compared to training on the original untransformed data, and that adapting that model on a limited amount of original untransformed data leads to a relative 8% WER improvement over the model trained solely on privacy-transformed data.", "keywords": "class-based language modeling;privacy-preserving learning;language model adaptation;speech-to-text", "url": "https://aclanthology.org/2022.lrec-1.465"} {"id": "strobel-etal-2022-evaluation", "title": "Evaluation of HTR models without Ground Truth Material", "abstract": "The evaluation of Handwritten Text Recognition (HTR) models during their development is straightforward: because HTR is a supervised problem, the usual data split into training, validation, and test data sets allows the evaluation of models in terms of accuracy or error rates. However, the evaluation process becomes tricky as soon as we switch from development to application. A compilation of a new (and forcibly smaller) ground truth (GT) from a sample of the data that we want to apply the model on and the subsequent evaluation of models thereon only provides hints about the quality of the recognised text, as do the confidence scores (if available) that the models return. Moreover, if we have several models at hand, we face a model selection problem, since we want to obtain the best possible result during the application phase. This calls for GT-free metrics to select the best model, which is why we (re-)introduce and compare different metrics, from simple lexicon-based ones to more elaborate ones using standard language models and masked language models (MLM).
We show that MLM-based evaluation can compete with lexicon-based methods, with the advantage that large and multilingual transformers are readily available, making the compilation of lexical resources for other metrics superfluous.", "keywords": "handwritten text recognition;digital humanities;evaluation;ground truth data;resources;model selection", "url": "https://aclanthology.org/2022.lrec-1.467"} {"id": "korybski-etal-2022-semi", "title": "A Semi-Automated Live Interlingual Communication Workflow Featuring Intralingual Respeaking: Evaluation and Benchmarking", "abstract": "In this paper, we present a semi-automated workflow for live interlingual speech-to-text communication which seeks to reduce the shortcomings of existing ASR systems: a human respeaker works with speaker-dependent speech recognition software (e.g., Dragon Naturally Speaking) to deliver punctuated same-language output of higher quality than that obtained using out-of-the-box automatic speech recognition of the original speech. This is fed into a machine translation engine (the EU\u2019s eTranslation) to produce live-caption-ready text. We benchmark the quality of the output against the output of best-in-class (human) simultaneous interpreters working with the same source speeches from plenary sessions of the European Parliament. To evaluate the accuracy and facilitate the comparison between the two types of output, we use a tailored annotation approach based on the NTR model (Romero-Fresco and P\u00f6chhacker, 2017). 
We find that the semi-automated workflow combining intralingual respeaking and machine translation is capable of generating outputs that are similar in terms of accuracy and completeness to the outputs produced in the benchmarking workflow, although the small scale of our experiment requires caution in interpreting this result.", "keywords": "interpreting;respeaking;automatic speech recognition;machine translation", "url": "https://aclanthology.org/2022.lrec-1.468"} {"id": "prouteau-etal-2022-embedding", "title": "Are Embedding Spaces Interpretable? Results of an Intrusion Detection Evaluation on a Large French Corpus", "abstract": "Word embedding methods allow us to represent words as vectors in a space that is structured using word co-occurrences, so that words with close meanings are close in this space. These vectors are then provided as input to automatic systems to solve natural language processing problems. Because interpretability is a necessary condition for trusting such systems, the interpretability of embedding spaces, the first link in the chain, is an important issue. In this paper, we thus evaluate the interpretability of vectors extracted with two approaches: SPINE, a k-sparse auto-encoder, and SINr, a graph-based method. This evaluation is based on a Word Intrusion Task with human annotators. It is conducted on a large French corpus and is thus, as far as we know, the first large-scale experiment regarding word embedding interpretability for this language. Furthermore, contrary to the approaches adopted in the literature, where the evaluation is done on a small sample of frequent words, we consider a more realistic use-case where most of the vocabulary is kept for the evaluation. This shows how difficult the task is, even though SPINE and SINr yield some promising results. 
In particular, SINr results are obtained with much less computation than SPINE, while being similarly interpretable.", "keywords": "word embeddings;interpretability;word intrusion task;human evaluation;french", "url": "https://aclanthology.org/2022.lrec-1.469"} {"id": "yu-etal-2022-universal", "title": "The Universal Anaphora Scorer", "abstract": "The aim of the Universal Anaphora initiative is to push forward the state of the art in anaphora and anaphora resolution by expanding the aspects of anaphoric interpretation which are or can be reliably annotated in anaphoric corpora, producing unified standards to annotate and encode these annotations, delivering datasets encoded according to these standards, and developing methods for evaluating models carrying out this type of interpretation. Such an expansion of the scope of anaphora resolution requires a comparable expansion of the scope of the scorers used to evaluate this work. In this paper, we introduce an extended version of the Reference Coreference Scorer (Pradhan et al., 2014) that can be used to evaluate the extended range of anaphoric interpretation included in the current Universal Anaphora proposal. The UA scorer supports the evaluation of identity anaphora resolution and of bridging reference resolution, for which scorers already existed but were not integrated in a single package. It also supports the evaluation of split antecedent anaphora and discourse deixis, for which no tools existed. The proposed approach to the evaluation of split antecedent anaphora is entirely novel; the proposed approach to the evaluation of discourse deixis leverages the encoding of discourse deixis proposed in Universal Anaphora, enabling the metrics already used for identity anaphora to be applied to discourse deixis as well. 
The scorer was tested in the recent CODI-CRAC 2021 Shared Task on Anaphora Resolution in Dialogues.", "keywords": "anaphora resolution;evaluation;universal anaphora", "url": "https://aclanthology.org/2022.lrec-1.521"} {"id": "zhukova-etal-2022-towards", "title": "Towards Evaluation of Cross-document Coreference Resolution Models Using Datasets with Diverse Annotation Schemes", "abstract": "Established cross-document coreference resolution (CDCR) datasets contain event-centric coreference chains of events and entities with identity relations. These datasets establish strict definitions of the coreference relations across related texts but typically ignore anaphora with vaguer, context-dependent loose coreference relations. In this paper, we qualitatively and quantitatively compare the annotation schemes of ECB+, a CDCR dataset with identity coreference relations, and NewsWCL50, a CDCR dataset with a mix of loose context-dependent and strict coreference relations. We propose a phrasing diversity metric (PD) that, unlike previously proposed metrics, accounts for the diversity of full phrases and thus allows the lexical diversity of CDCR datasets to be evaluated with higher precision. The analysis shows that the coreference chains of NewsWCL50 are more lexically diverse than those of ECB+, but annotating NewsWCL50 leads to lower inter-coder reliability. We discuss the different tasks that the two CDCR datasets pose for CDCR models, i.e., lexical disambiguation and lexical diversity. 
Finally, to ensure the generalizability of CDCR models, we propose a direction for CDCR evaluation that combines CDCR datasets with multiple annotation schemes that focus on various properties of the coreference chains.", "keywords": "cross-document coreference resolution;lexical diversity;lexical disambiguation", "url": "https://aclanthology.org/2022.lrec-1.522"} {"id": "bhattarai-etal-2022-explainable", "title": "Explainable Tsetlin Machine Framework for Fake News Detection with Credibility Score Assessment", "abstract": "The proliferation of fake news, i.e., news intentionally spread for misinformation, poses a threat to individuals and society. Despite various fact-checking websites such as PolitiFact, robust detection techniques are required to deal with the increase in fake news. Several deep learning models show promising results for fake news classification; however, their black-box nature makes it difficult to explain their classification decisions and quality-assure the models. We address this problem by proposing a novel interpretable fake news detection framework based on the recently introduced Tsetlin Machine (TM). In brief, we utilize the conjunctive clauses of the TM to capture lexical and semantic properties of both true and fake news text. Further, we use clause ensembles to calculate the credibility of fake news. For evaluation, we conduct experiments on two publicly available datasets, PolitiFact and GossipCop, and demonstrate that the TM framework significantly outperforms previously published baselines by at least 5% in terms of accuracy, with the added benefit of an interpretable logic-based representation. In addition, our approach provides a higher F1-score than BERT and XLNet; however, we obtain slightly lower accuracy. 
We finally present a case study on our model's explainability, demonstrating how it decomposes into meaningful words and their negations.", "keywords": "fake news detection;tsetlin machine;human-interpretable;language models;text classification;explainable", "url": "https://aclanthology.org/2022.lrec-1.523"} {"id": "hatab-etal-2022-enhancing", "title": "Enhancing Deep Learning with Embedded Features for Arabic Named Entity Recognition", "abstract": "The introduction of word embedding models has remarkably changed many Natural Language Processing tasks. Word embeddings can automatically capture the semantics of words and other hidden features. Nonetheless, the Arabic language is highly complex, which results in the loss of important information. This paper uses Madamira, an external knowledge source, to generate additional word features. We evaluate the utility of adding these features to conventional word and character embeddings to perform the Named Entity Recognition (NER) task on Modern Standard Arabic (MSA). Our NER model is implemented using Bidirectional Long Short-Term Memory and Conditional Random Fields (BiLSTM-CRF). We add morphological and syntactic features to different word embeddings to train the model. The added features improve performance by different margins depending on the embedding model used. The best performance is achieved with BERT embeddings. Moreover, to the best of our knowledge, our best model outperforms the previous systems.", "keywords": "arabic natural language processing;named entity recognition;deep learning", "url": "https://aclanthology.org/2022.lrec-1.524"} {"id": "vakulenko-etal-2022-scai", "title": "SCAI-QReCC Shared Task on Conversational Question Answering", "abstract": "Search-Oriented Conversational AI (SCAI) is an established venue that regularly puts a spotlight upon recent work advancing the field of conversational search. 
SCAI\u201921 was organised as an independent online event and featured a shared task on conversational question answering, on which this paper reports. The shared task featured three subtasks that correspond to three steps in conversational question answering: question rewriting, passage retrieval, and answer generation. This report discusses each subtask, but emphasizes the answer generation subtask, as it attracted the most attention from the participants and we identified the evaluation of answer correctness in conversational settings as a major challenge and a current research gap. Alongside the automatic evaluation, we conducted two crowdsourcing experiments to collect annotations for answer plausibility and faithfulness. As a result of this shared task, the original conversational QA dataset used for evaluation was further extended with alternative correct answers produced by the participant systems.", "keywords": "conversational systems;question answering", "url": "https://aclanthology.org/2022.lrec-1.525"} {"id": "raring-etal-2022-semantic", "title": "Semantic Relations between Text Segments for Semantic Storytelling: Annotation Tool - Dataset - Evaluation", "abstract": "Semantic Storytelling describes the goal of automatically and semi-automatically generating stories based on extracted, processed, classified and annotated information from large content resources. Essential to this goal is the automated processing of text segments extracted from different content resources, identifying the relevance of a text segment to a topic and its semantic relation to other text segments. In this paper we present an approach to creating an automatic classifier for semantic relations between text segments extracted from different news articles. We devise custom annotation guidelines based on various discourse structure theories and annotate a dataset of 2,501 sentence pairs extracted from 2,638 Wikinews articles. For the annotation, we developed a dedicated annotation tool. 
Based on the constructed dataset, we perform initial experiments with Transformer language models trained for the automatic classification of semantic relations. Our results, with promisingly high accuracy scores, suggest the validity and applicability of our approach for future Semantic Storytelling solutions.", "keywords": "semantic storytelling;text classification;discourse parsing;wikinews", "url": "https://aclanthology.org/2022.lrec-1.526"} {"id": "dhar-etal-2022-evaluating", "title": "Evaluating Pre-training Objectives for Low-Resource Translation into Morphologically Rich Languages", "abstract": "The scarcity of parallel data is a major limitation for Neural Machine Translation (NMT) systems, in particular for translation into morphologically rich languages (MRLs). An important way to overcome the lack of parallel data is to leverage target monolingual data, which is typically more abundant and easier to collect. We evaluate a number of techniques to achieve this, ranging from back-translation to random token masking, on the challenging task of translating English into four typologically diverse MRLs, under low-resource settings. Additionally, we introduce Inflection Pre-Training (or PT-Inflect), a novel pre-training objective whereby the NMT system is pre-trained on the task of re-inflecting lemmatized target sentences before being trained on standard source-to-target language translation. We conduct our evaluation on four typologically diverse target MRLs, and find that PT-Inflect surpasses NMT systems trained only on parallel data. 
While PT-Inflect is outperformed by back-translation overall, combining the two techniques leads to gains in some of the evaluated language pairs.", "keywords": "low resource nmt;morphology;inflection", "url": "https://aclanthology.org/2022.lrec-1.527"} {"id": "bhattacharyya-etal-2022-aligning", "title": "Aligning Images and Text with Semantic Role Labels for Fine-Grained Cross-Modal Understanding", "abstract": "As vision processing and natural language processing continue to advance, there is increasing interest in multimodal applications, such as image retrieval, caption generation, and human-robot interaction. These tasks require close alignment between the information in the images and text. In this paper, we present a new multimodal dataset that combines state-of-the-art semantic annotation for language with the bounding boxes of corresponding images. This richer multimodal labeling supports cross-modal inference for applications in which such alignment is useful. Our semantic representations, developed in the natural language processing community, abstract away from the surface structure of the sentence, focusing on specific actions and the roles of their participants, a level that is equally relevant to images. We then utilize these representations in the form of semantic role labels in the captions and the images and demonstrate improvements in standard tasks such as image retrieval. The potential contributions of these additional labels are evaluated using a role-aware retrieval system based on graph convolutional and recurrent neural networks. 
The addition of semantic roles to this system provides a significant increase in capability and greater flexibility for these tasks, and could be extended to state-of-the-art techniques relying on transformers with larger amounts of annotated data.", "keywords": "cross-modal retrieval;semantic role labeling", "url": "https://aclanthology.org/2022.lrec-1.528"} {"id": "bertin-lemee-etal-2022-rosetta", "title": "Rosetta-LSF: an Aligned Corpus of French Sign Language and French for Text-to-Sign Translation", "abstract": "This article presents a new French Sign Language (LSF) corpus called \"Rosetta-LSF\". It was created to support future studies on the automatic translation of written French into LSF, rendered through the animation of a virtual signer. An overview of the field highlights the importance of a quality representation of LSF. In order to obtain quality animations understandable by signers, this representation must go beyond a simple \"gloss transcription\" of the LSF lexical units used in the discourse. To achieve this, we designed a corpus composed of four types of aligned data, and evaluated its usability. These are: news headlines in French, translations of these headlines into LSF in the form of videos showing animations of a virtual signer, gloss annotations of the \"traditional\" type\u2014although including additional information on the context in which each gestural unit is performed as well as their potential for adaptation to another context\u2014and AZee representations of the videos, i.e. formal expressions capturing the necessary and sufficient linguistic information. This article describes this data, exhibiting an example from the corpus. 
It is available online for public research.", "keywords": "sign language;machine translation;example-based translation;synthesis", "url": "https://aclanthology.org/2022.lrec-1.529"} {"id": "fomicheva-etal-2022-mlqe", "title": "MLQE-PE: A Multilingual Quality Estimation and Post-Editing Dataset", "abstract": "We present MLQE-PE, a new dataset for Machine Translation (MT) Quality Estimation (QE) and Automatic Post-Editing (APE). The dataset contains annotations for eleven language pairs, including both high- and low-resource languages. Specifically, it is annotated for translation quality with human labels for up to 10,000 translations per language pair in the following formats: sentence-level direct assessments and post-editing effort, and word-level binary good/bad labels. Apart from the quality-related scores, each source-translation sentence pair is accompanied by the corresponding post-edited sentence, as well as titles of the articles where the sentences were extracted from, and information on the neural MT models used to translate the text. We provide a thorough description of the data collection and annotation process as well as an analysis of the annotation distribution for each language pair. We also report the performance of baseline systems trained on the MLQE-PE dataset. The dataset is freely available and has already been used for several WMT shared tasks.", "keywords": "machine translation;quality estimation;evaluation;direct assessments;post-edits", "url": "https://aclanthology.org/2022.lrec-1.530"}