Muennighoff
committed on
Scheduled Commit
data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl
CHANGED
@@ -21,3 +21,13 @@
21 |
{"tstamp": 1722224768.3473, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "72ed621be6a54d4a8eb632a62857fa63", "0_model_name": "embed-english-v3.0", "0_prompt": "Good benchmark for multitask performance of llms", "0_output": [["Good benchmark for multitask performance of llms", "Title: Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?\n\nAbstract: Large language models (LLMs) are typically prompted to follow a single instruction per inference call. In this work, we analyze whether LLMs also hold the capability to handle multiple instructions simultaneously, denoted as Multi-Task Inference. For this purpose, we introduce the MTI Bench(Multi-Task Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000 instances across 25 tasks. Each task in the MTI Bench involves 2 to 3 sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces the total inference time by 1.46 times in average since it does not require multiple inference calls. Interestingly, contrary to the expectation that LLMs would perform better when tasks are divided, we find that state-of-the-art LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3% and 12.4% improved performance with Multi-Task Inference compared to Single-Task Inference on the MTI Bench. We release the MTI Bench dataset and our code at this link https://github.com/guijinSON/MTI-Bench."]], "0_corpus": "arxiv", "1_conv_id": "492d5f636fc94d049f457c71188d11c0", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Good benchmark for multitask performance of llms", "1_output": [["Good benchmark for multitask performance of llms", "Title: Limit theorems and ergodicity for general bootstrap random walks\n\nAbstract: Given the increments of a simple symmetric random walk $(X_n)_{n\\ge0}$, we characterize all possible ways of recycling these increments into a simple symmetric random walk $(Y_n)_{n\\ge0}$ adapted to the filtration of $(X_n)_{n\\ge0}$. We study the long term behavior of a suitably normalized two-dimensional process $((X_n,Y_n))_{n\\ge0}$. In particular, we provide necessary and sufficient conditions for the process to converge to a two-dimensional Brownian motion (possibly degenerate). We also discuss cases in which the limit is not Gaussian. Finally, we provide a simple necessary and sufficient condition for the ergodicity of the recycling transformation, thus generalizing results from Dubins and Smorodinsky (1992) and Fujita (2008), and solving the discrete version of the open problem of the ergodicity of the general L\\'evy transformation (see Mansuy and Yor, 2006)."]], "1_corpus": "arxiv"}
22 |
{"tstamp": 1722224795.2981, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "00300e8adfc2436a82273eaac0616ce2", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "A paper evaluating whether updating one fact in a language model affects related facts.", "0_output": [["A paper evaluating whether updating one fact in a language model affects related facts.", "Title: Evaluating the Ripple Effects of Knowledge Editing in Language Models\n\nAbstract: Modern language models capture a large body of factual knowledge. However, some facts can be incorrectly induced or become obsolete over time, resulting in factually incorrect generations. This has led to the development of various editing methods that allow updating facts encoded by the model. Evaluation of these methods has primarily focused on testing whether an individual fact has been successfully injected, and if similar predictions for other subjects have not changed. Here we argue that such evaluation is limited, since injecting one fact (e.g. ``Jack Depp is the son of Johnny Depp'') introduces a ``ripple effect'' in the form of additional facts that the model needs to update (e.g.``Jack Depp is the sibling of Lily-Rose Depp''). To address this issue, we propose a novel set of evaluation criteria that consider the implications of an edit on related facts. Using these criteria, we then construct RippleEdits, a diagnostic benchmark of 5K factual edits, capturing a variety of types of ripple effects. We evaluate prominent editing methods on RippleEdits, showing that current methods fail to introduce consistent changes in the model's knowledge. In addition, we find that a simple in-context editing baseline obtains the best scores on our benchmark, suggesting a promising research direction for model editing."]], "0_corpus": "arxiv", "1_conv_id": "44bd3abae6e44faba2d1cf54ce8bec32", "1_model_name": "text-embedding-004", "1_prompt": "A paper evaluating whether updating one fact in a language model affects related facts.", "1_output": [["A paper evaluating whether updating one fact in a language model affects related facts.", "Title: WikiFactDiff: A Large, Realistic, and Temporally Adaptable Dataset for Atomic Factual Knowledge Update in Causal Language Models\n\nAbstract: The factuality of large language model (LLMs) tends to decay over time since events posterior to their training are \"unknown\" to them. One way to keep models up-to-date could be factual update: the task of inserting, replacing, or removing certain simple (atomic) facts within the model. To study this task, we present WikiFactDiff, a dataset that describes the evolution of factual knowledge between two dates as a collection of simple facts divided into three categories: new, obsolete, and static. We describe several update scenarios arising from various combinations of these three types of basic update. The facts are represented by subject-relation-object triples; indeed, WikiFactDiff was constructed by comparing the state of the Wikidata knowledge base at 4 January 2021 and 27 February 2023. Those fact are accompanied by verbalization templates and cloze tests that enable running update algorithms and their evaluation metrics. Contrary to other datasets, such as zsRE and CounterFact, WikiFactDiff constitutes a realistic update setting that involves various update scenarios, including replacements, archival, and new entity insertions. 
We also present an evaluation of existing update algorithms on WikiFactDiff."]], "1_corpus": "arxiv"}
23 |
{"tstamp": 1722224897.7564, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "66d97eb1cd7d4e31b48bcf9e823bba43", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "0_output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks\n\nAbstract: Large language models have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. While being actively researched, multimodal models that can additionally handle images as input have yet to catch up in size and generality with language-only models. In this work, we ask whether language-only models can be utilised for tasks that require visual input -- but also, as we argue, often require a strong reasoning component. Similar to some recent related work, we make visual information accessible to the language model using separate verbalisation models. Specifically, we investigate the performance of open-source, open-access language models against GPT-3 on five vision-language tasks when given textually-encoded visual information. Our results suggest that language models are effective for solving vision-language tasks even with limited samples. This approach also enhances the interpretability of a model's output by providing a means of tracing the output back through the verbalised image content."]], "0_corpus": "arxiv", "1_conv_id": "e97d1a7a80bf4651a54001a5a1acb3b0", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "1_output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Visual Classification via Description from Large Language Models\n\nAbstract: Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages past interpretability. 
We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline."]], "1_corpus": "arxiv"}
24 | +
{"tstamp": 1722224923.4451, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "04682ef77bb14d3d987fa49503bd1c7b", "0_model_name": "voyage-multilingual-2", "0_prompt": "Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "0_output": [["Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "Title: Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade\n\nAbstract: Fully non-autoregressive neural machine translation (NAT) is proposed to simultaneously predict tokens with single forward of neural networks, which significantly reduces the inference latency at the expense of quality drop compared to the Transformer baseline. In this work, we target on closing the performance gap while maintaining the latency advantage. We first inspect the fundamental issues of fully NAT models, and adopt dependency reduction in the learning space of output tokens as the basic guidance. Then, we revisit methods in four different aspects that have been proven effective for improving NAT models, and carefully combine these techniques with necessary modifications. Our extensive experiments on three translation benchmarks show that the proposed system achieves the new state-of-the-art results for fully NAT models, and obtains comparable performance with the autoregressive and iterative NAT systems. For instance, one of the proposed models achieves 27.49 BLEU points on WMT14 En-De with approximately 16.5X speed up at inference time."]], "0_corpus": "arxiv", "1_conv_id": "e6e5e63a277c497b87192e49b67e8463", "1_model_name": "text-embedding-004", "1_prompt": "Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "1_output": [["Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "Title: Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade\n\nAbstract: Fully non-autoregressive neural machine translation (NAT) is proposed to simultaneously predict tokens with single forward of neural networks, which significantly reduces the inference latency at the expense of quality drop compared to the Transformer baseline. In this work, we target on closing the performance gap while maintaining the latency advantage. We first inspect the fundamental issues of fully NAT models, and adopt dependency reduction in the learning space of output tokens as the basic guidance. Then, we revisit methods in four different aspects that have been proven effective for improving NAT models, and carefully combine these techniques with necessary modifications. Our extensive experiments on three translation benchmarks show that the proposed system achieves the new state-of-the-art results for fully NAT models, and obtains comparable performance with the autoregressive and iterative NAT systems. For instance, one of the proposed models achieves 27.49 BLEU points on WMT14 En-De with approximately 16.5X speed up at inference time."]], "1_corpus": "arxiv"}
25 | +
{"tstamp": 1722224949.6115, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "f93384bd5692403b9b937e5912ee5056", "0_model_name": "voyage-multilingual-2", "0_prompt": "A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "0_output": [["A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "Title: Baichuan 2: Open Large-scale Language Models\n\nAbstract: Large language models (LLMs) have demonstrated remarkable performance on a variety of natural language tasks based on just a few examples of natural language instructions, reducing the need for extensive feature engineering. However, most powerful LLMs are closed-source or limited in their capability for languages other than English. In this technical report, we present Baichuan 2, a series of large-scale multilingual language models containing 7 billion and 13 billion parameters, trained from scratch, on 2.6 trillion tokens. Baichuan 2 matches or outperforms other open-source models of similar size on public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan 2 excels in vertical domains such as medicine and law. We will release all pre-training model checkpoints to benefit the research community in better understanding the training dynamics of Baichuan 2."]], "0_corpus": "arxiv", "1_conv_id": "582d39f1436e46c086fb73ddaf7c9a24", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "1_output": [["A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "Title: Seq2Mol: Automatic design of de novo molecules conditioned by the target protein sequences through deep neural networks\n\nAbstract: De novo design of molecules has recently enjoyed the power of generative deep neural networks. Current approaches aim to generate molecules either resembling the properties of the molecules of the training set or molecules that are optimized with respect to specific physicochemical properties. None of the methods generates molecules specific to a target protein. In the approach presented here, we introduce a method which is conditioned on the protein target sequence to generate de novo molecules that are relevant to the target. We use an implementation adapted from Google's \"Show and Tell\" image caption generation method, to generate SMILES strings of molecules from protein sequence embeddings generated by a deep bi-directional language model ELMo. ELMo is used to generate contextualized embedding vectors of the protein sequence. Using reinforcement learning, the trained model is further optimized through augmented episodic likelihood to increase the diversity of the generated compounds compared to the training set. We used the model to generate compounds for two major drug target families, i.e. for GPCRs and Tyrosine Kinase targets. The model generated compounds which are structurally different form the training set, while also being more similar to compounds known to bind to the two families of drug targets compared to a random set of molecules. The compounds further display reasonable synthesizability and drug-likeness scores."]], "1_corpus": "arxiv"}
26 | +
{"tstamp": 1722224964.6848, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "d3b69c82e28b436c969357870e392c7a", "0_model_name": "embed-english-v3.0", "0_prompt": "Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "0_output": [["Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "Title: Dynamic Multimodal Instance Segmentation guided by natural language queries\n\nAbstract: We address the problem of segmenting an object given a natural language expression that describes it. Current techniques tackle this task by either (\\textit{i}) directly or recursively merging linguistic and visual information in the channel dimension and then performing convolutions; or by (\\textit{ii}) mapping the expression to a space in which it can be thought of as a filter, whose response is directly related to the presence of the object at a given spatial coordinate in the image, so that a convolution can be applied to look for the object. We propose a novel method that integrates these two insights in order to fully exploit the recursive nature of language. Additionally, during the upsampling process, we take advantage of the intermediate information generated when downsampling the image, so that detailed segmentations can be obtained. We compare our method against the state-of-the-art approaches in four standard datasets, in which it surpasses all previous methods in six of eight of the splits for this task."]], "0_corpus": "arxiv", "1_conv_id": "b353792ca18b4bc9967223f30e5f0dc0", "1_model_name": "voyage-multilingual-2", "1_prompt": "Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "1_output": [["Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "Title: Comprehensive Multi-Modal Interactions for Referring Image Segmentation\n\nAbstract: We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intramodal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods."]], "1_corpus": "arxiv"}
27 | +
{"tstamp": 1722225012.2867, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "184da0dfea8b4f9d9a55e8015a197572", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "age is what type of scale?", "0_output": [["age is what type of scale?", "Stats Stackexchange\n\nQ: Should I treat age as a continuous variable or factor? Age is a predictor variable among a few others (gender, weight and height) and my response is the log-odds of a certain disease (binomial glm). \nMy age data runs from 21 until 40. I am not sure whether to treat age as a continuous variable or as a factor with age groups: 21-25, 26-30, 31-35, 36-40. \nAre there any plots which I can produce that can help determine which would be the better approach? \n\nA: It depends on the context.\nFor example if you are looking for the effect of age on children's height, it makes sense to look at it as a continuous ( integer) value.\nIf you're looking for e.g. the effect of age on oncogenesis then it makes sense if you look at age groups. Young vs old, above 55 and below 55, ...\nFor your example, unless age is a confounder of a hidden factor such as for example being college grad or still a student ( risk factor for young adults STD infection), I'd bin my data into reasonable bin sizes.\n"]], "0_corpus": "stackexchange", "1_conv_id": "deb4d7aa81cf4de1a7c78febb6fc3a3a", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "age is what type of scale?", "1_output": [["age is what type of scale?", "Stats Stackexchange\n\nQ: Are the variable types here considered correct? If we want to determine the variable types, will it be as follows for the below variables?\nAge ---> quantitative, discrete (we can count)\nFitness ---> If the values that we will enter here are 0 and 1 only, will the type of this variable be qualitative, nominal?\nThanks a lot.\n\nA: Questions like this expose the problems with Stevens' typology of variables. Some variables just aren't any of his categories; or are hard to categorize. Age is ratio level data in the sense that someone who is (say) 40 years old is twice as old as someone who is 20 years old. You just haven't got it measured precisely. But so? What difference does that make? \nDichotomies - here it really doesn't matter HOW you treat them for most purposes. E.g. in regression, you can consider them to be nominal, ordinal or interval and you will get the same results. \nI wrote about other problems with this typology on my blog\n"]], "1_corpus": "stackexchange"}
28 | +
{"tstamp": 1722225026.5662, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "65d610c94b434dcda9d6b7a2824209b1", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "0_output": [["A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "Title: On the three state Potts model with competing interactions on the Bethe lattice\n\nAbstract: In the present paper the three state Potts model with competing binary interactions (with couplings $J$ and $J_p$) on the second order Bethe lattice is considered. The recurrent equations for the partition functions are derived. When $J_p=0$, by means of a construction of a special class of limiting Gibbs measures, it is shown how these equations are related with the surface energy of the Hamiltonian. This relation reduces the problem of describing the limit Gibbs measures to find of solutions of a nonlinear functional equation. Moreover, the set of ground states of the one-level model is completely described. Using this fact, one finds Gibbs measures (pure phases) associated with the translation-invariant ground states. The critical temperature is exactly found and the phase diagram is presented. The free energies corresponding to translations-invariant Gibbs measures are found. Certain physical quantities are calculated as well."]], "0_corpus": "arxiv", "1_conv_id": "c751d7d91b034a0e835787d7d077b783", "1_model_name": "voyage-multilingual-2", "1_prompt": "A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "1_output": [["A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "Title: Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning\n\nAbstract: Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement module, which utilizes the symbolic solver's error messages to revise symbolic formalizations. We demonstrate Logic-LM's effectiveness on five logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, LogicalDeduction, and AR-LSAT. On average, Logic-LM achieves a significant performance boost of 39.2% over using LLM alone with standard prompting and 18.4% over LLM with chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning. Code and data are publicly available at https://github.com/teacherpeterpan/Logic-LLM."]], "1_corpus": "arxiv"}
29 | +
{"tstamp": 1722225041.8999, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "e4e6ff5dc3a149a8a9cd5bc76feefd62", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "0_output": [["Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "Title: Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media\n\nAbstract: We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is completed with Task 5 on check-worthiness estimation in political debates and speeches. A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the tasks setup, the evaluation results, and a summary of the approaches used by the participants, and we discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification."]], "0_corpus": "arxiv", "1_conv_id": "069ed0d8c0fe49369e9c70a31c707261", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "1_output": [["Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "Title: Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media\n\nAbstract: We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is completed with Task 5 on check-worthiness estimation in political debates and speeches. A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the tasks setup, the evaluation results, and a summary of the approaches used by the participants, and we discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification."]], "1_corpus": "arxiv"}
30 | +
{"tstamp": 1722225064.1705, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "ad3321dd65264d9ea93d600f7971ff6d", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "can r squared adjusted be negative?", "0_output": [["can r squared adjusted be negative?", "Stats Stackexchange\n\nQ: Is it a problem to get a negative adjusted r-squared? Background:\nI have the cross-sectional model:\n$Y_{i} = a + b X_{1,i} + c X_{2,i} + d X_{3,i} + e X_{4,i} + \\nu_i$.\nThe application is corporate finance. So each $Y_i$ is something like the change in return on assets over a 1 year period for firm $i$, and the regressors are typical corporate finance variables.\nIn corporate finance, very small values of $R^2$ are common, even sometimes $1\\%$. My $R^2$ is around $1\\%$ but my Adjusted $R^2$ is $-0.2\\%$. \nI have never seen papers report negative Adjusted $R^2$ but this could just be because they omit the publication of their adjusted $R^2$ when they see that it is negative. \nQuestion\nIs there some problem when the adjusted $R^2$ is negative?\n\nA: The formula for adjusted R square allows it to be negative. It is intended to approximate the actual percentage variance explained. So if the actual R square is close to zero the adjusted R square can be slightly negative. Just think of it as an estimate of zero.\n"]], "0_corpus": "stackexchange", "1_conv_id": "6766409719cd420188c069799f6c48d1", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "can r squared adjusted be negative?", "1_output": [["can r squared adjusted be negative?", "Stats Stackexchange\n\nQ: Is it a problem to get a negative adjusted r-squared? Background:\nI have the cross-sectional model:\n$Y_{i} = a + b X_{1,i} + c X_{2,i} + d X_{3,i} + e X_{4,i} + \\nu_i$.\nThe application is corporate finance. So each $Y_i$ is something like the change in return on assets over a 1 year period for firm $i$, and the regressors are typical corporate finance variables.\nIn corporate finance, very small values of $R^2$ are common, even sometimes $1\\%$. My $R^2$ is around $1\\%$ but my Adjusted $R^2$ is $-0.2\\%$. \nI have never seen papers report negative Adjusted $R^2$ but this could just be because they omit the publication of their adjusted $R^2$ when they see that it is negative. \nQuestion\nIs there some problem when the adjusted $R^2$ is negative?\n\nA: The formula for adjusted R square allows it to be negative. It is intended to approximate the actual percentage variance explained. So if the actual R square is close to zero the adjusted R square can be slightly negative. Just think of it as an estimate of zero.\n"]], "1_corpus": "stackexchange"}
31 | +
{"tstamp": 1722225082.4068, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "fe65d33c55ca4dbf856b34d9fd465f40", "0_model_name": "text-embedding-3-large", "0_prompt": "Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "0_output": [["Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "Title: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback\n\nAbstract: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work."]], "0_corpus": "arxiv", "1_conv_id": "19f1b179abd4428983f12ecbb76ba96e", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "1_output": [["Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "Title: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback\n\nAbstract: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work."]], "1_corpus": "arxiv"}
32 | +
{"tstamp": 1722225101.4447, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "94d522a8ae794130b5549c97be9efcc8", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "0_output": [["Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "Title: Graph Neural Network Approach to Semantic Type Detection in Tables\n\nAbstract: This study addresses the challenge of detecting semantic column types in relational tables, a key task in many real-world applications. While language models like BERT have improved prediction accuracy, their token input constraints limit the simultaneous processing of intra-table and inter-table information. We propose a novel approach using Graph Neural Networks (GNNs) to model intra-table dependencies, allowing language models to focus on inter-table information. Our proposed method not only outperforms existing state-of-the-art algorithms but also offers novel insights into the utility and functionality of various GNN types for semantic type detection. The code is available at https://github.com/hoseinzadeehsan/GAIT"]], "0_corpus": "arxiv", "1_conv_id": "5bc98453ebd94dd8adc5a3a7477560d1", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "1_output": [["Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "Title: Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss\n\nAbstract: The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of cross- entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones. Also, previous work solve FETC a multi-label classification followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state-of-the-art on established benchmarks for the task."]], "1_corpus": "arxiv"}
33 | +
{"tstamp": 1722225126.1987, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "61042e3396c64d919604031cbf9612dc", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "A paper introducing a new large language model", "0_output": [["A paper introducing a new large language model", "Title: Larger-Context Language Modelling\n\nAbstract: In this work, we propose a novel method to incorporate corpus-level discourse information into language modelling. We call this larger-context language model. We introduce a late fusion approach to a recurrent language model based on long short-term memory units (LSTM), which helps the LSTM unit keep intra-sentence dependencies and inter-sentence dependencies separate from each other. Through the evaluation on three corpora (IMDB, BBC, and PennTree Bank), we demon- strate that the proposed model improves perplexity significantly. In the experi- ments, we evaluate the proposed approach while varying the number of context sentences and observe that the proposed late fusion is superior to the usual way of incorporating additional inputs to the LSTM. By analyzing the trained larger- context language model, we discover that content words, including nouns, adjec- tives and verbs, benefit most from an increasing number of context sentences. This analysis suggests that larger-context language model improves the unconditional language model by capturing the theme of a document better and more easily."]], "0_corpus": "arxiv", "1_conv_id": "d2e0f34f14254329b828836aa5ea813e", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "A paper introducing a new large language model", "1_output": [["A paper introducing a new large language model", "Title: Large Language Models: A Survey\n\nAbstract: Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks, since the release of ChatGPT in November 2022. LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model's parameters on massive amounts of text data, as predicted by scaling laws \\cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions and limitations. We also give an overview of techniques developed to build, and augment LLMs. We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks. Finally, we conclude the paper by discussing open challenges and future research directions."]], "1_corpus": "arxiv"}
data/retrieval_individual-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl
CHANGED
@@ -49,3 +49,23 @@
49 |
{"tstamp": 1722224802.5928, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722224802.3639, "finish": 1722224802.5928, "ip": "", "conv_id": "303ffd802de74cccb01c62b371bfd341", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "I'm looking for a QA model that retrieves both textual and visual information. ", "output": [["I'm looking for a QA model that retrieves both textual and visual information. ", "Title: Question Aware Vision Transformer for Multimodal Reasoning\n\nAbstract: Vision-Language (VL) models have gained significant research focus, enabling remarkable advances in multimodal reasoning. These architectures typically comprise a vision encoder, a Large Language Model (LLM), and a projection module that aligns visual features with the LLM's representation space. Despite their success, a critical limitation persists: the vision encoding process remains decoupled from user queries, often in the form of image-related questions. Consequently, the resulting visual features may not be optimally attuned to the query-specific elements of the image. To address this, we introduce QA-ViT, a Question Aware Vision Transformer approach for multimodal reasoning, which embeds question awareness directly within the vision encoder. This integration results in dynamic visual features focusing on relevant image aspects to the posed question. QA-ViT is model-agnostic and can be incorporated efficiently into any VL architecture. Extensive experiments demonstrate the effectiveness of applying our method to various multimodal architectures, leading to consistent improvement across diverse tasks and showcasing its potential for enhancing visual and scene-text understanding."]], "corpus": "arxiv"}
50 |
{"tstamp": 1722224861.2135, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722224831.7718, "finish": 1722224861.2135, "ip": "", "conv_id": "66d97eb1cd7d4e31b48bcf9e823bba43", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks\n\nAbstract: Large language models have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. While being actively researched, multimodal models that can additionally handle images as input have yet to catch up in size and generality with language-only models. In this work, we ask whether language-only models can be utilised for tasks that require visual input -- but also, as we argue, often require a strong reasoning component. Similar to some recent related work, we make visual information accessible to the language model using separate verbalisation models. Specifically, we investigate the performance of open-source, open-access language models against GPT-3 on five vision-language tasks when given textually-encoded visual information. Our results suggest that language models are effective for solving vision-language tasks even with limited samples. This approach also enhances the interpretability of a model's output by providing a means of tracing the output back through the verbalised image content."]], "corpus": "arxiv"}
51 |
{"tstamp": 1722224861.2135, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722224831.7718, "finish": 1722224861.2135, "ip": "", "conv_id": "e97d1a7a80bf4651a54001a5a1acb3b0", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Visual Classification via Description from Large Language Models\n\nAbstract: Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages past interpretability. We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline."]], "corpus": "arxiv"}
|
52 |
+
{"tstamp": 1722224916.3804, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722224906.02, "finish": 1722224916.3804, "ip": "", "conv_id": "04682ef77bb14d3d987fa49503bd1c7b", "model_name": "voyage-multilingual-2", "prompt": "Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "output": [["Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "Title: Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade\n\nAbstract: Fully non-autoregressive neural machine translation (NAT) is proposed to simultaneously predict tokens with single forward of neural networks, which significantly reduces the inference latency at the expense of quality drop compared to the Transformer baseline. In this work, we target on closing the performance gap while maintaining the latency advantage. We first inspect the fundamental issues of fully NAT models, and adopt dependency reduction in the learning space of output tokens as the basic guidance. Then, we revisit methods in four different aspects that have been proven effective for improving NAT models, and carefully combine these techniques with necessary modifications. Our extensive experiments on three translation benchmarks show that the proposed system achieves the new state-of-the-art results for fully NAT models, and obtains comparable performance with the autoregressive and iterative NAT systems. For instance, one of the proposed models achieves 27.49 BLEU points on WMT14 En-De with approximately 16.5X speed up at inference time."]], "corpus": "arxiv"}
|
53 |
+
{"tstamp": 1722224916.3804, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722224906.02, "finish": 1722224916.3804, "ip": "", "conv_id": "e6e5e63a277c497b87192e49b67e8463", "model_name": "text-embedding-004", "prompt": "Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "output": [["Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "Title: Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade\n\nAbstract: Fully non-autoregressive neural machine translation (NAT) is proposed to simultaneously predict tokens with single forward of neural networks, which significantly reduces the inference latency at the expense of quality drop compared to the Transformer baseline. In this work, we target on closing the performance gap while maintaining the latency advantage. We first inspect the fundamental issues of fully NAT models, and adopt dependency reduction in the learning space of output tokens as the basic guidance. Then, we revisit methods in four different aspects that have been proven effective for improving NAT models, and carefully combine these techniques with necessary modifications. Our extensive experiments on three translation benchmarks show that the proposed system achieves the new state-of-the-art results for fully NAT models, and obtains comparable performance with the autoregressive and iterative NAT systems. For instance, one of the proposed models achieves 27.49 BLEU points on WMT14 En-De with approximately 16.5X speed up at inference time."]], "corpus": "arxiv"}
|
54 |
+
{"tstamp": 1722224937.6879, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722224937.2805, "finish": 1722224937.6879, "ip": "", "conv_id": "f93384bd5692403b9b937e5912ee5056", "model_name": "voyage-multilingual-2", "prompt": "A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "output": [["A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "Title: Baichuan 2: Open Large-scale Language Models\n\nAbstract: Large language models (LLMs) have demonstrated remarkable performance on a variety of natural language tasks based on just a few examples of natural language instructions, reducing the need for extensive feature engineering. However, most powerful LLMs are closed-source or limited in their capability for languages other than English. In this technical report, we present Baichuan 2, a series of large-scale multilingual language models containing 7 billion and 13 billion parameters, trained from scratch, on 2.6 trillion tokens. Baichuan 2 matches or outperforms other open-source models of similar size on public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan 2 excels in vertical domains such as medicine and law. We will release all pre-training model checkpoints to benefit the research community in better understanding the training dynamics of Baichuan 2."]], "corpus": "arxiv"}
|
55 |
+
{"tstamp": 1722224937.6879, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722224937.2805, "finish": 1722224937.6879, "ip": "", "conv_id": "582d39f1436e46c086fb73ddaf7c9a24", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "output": [["A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "Title: Seq2Mol: Automatic design of de novo molecules conditioned by the target protein sequences through deep neural networks\n\nAbstract: De novo design of molecules has recently enjoyed the power of generative deep neural networks. Current approaches aim to generate molecules either resembling the properties of the molecules of the training set or molecules that are optimized with respect to specific physicochemical properties. None of the methods generates molecules specific to a target protein. In the approach presented here, we introduce a method which is conditioned on the protein target sequence to generate de novo molecules that are relevant to the target. We use an implementation adapted from Google's \"Show and Tell\" image caption generation method, to generate SMILES strings of molecules from protein sequence embeddings generated by a deep bi-directional language model ELMo. ELMo is used to generate contextualized embedding vectors of the protein sequence. Using reinforcement learning, the trained model is further optimized through augmented episodic likelihood to increase the diversity of the generated compounds compared to the training set. We used the model to generate compounds for two major drug target families, i.e. for GPCRs and Tyrosine Kinase targets. The model generated compounds which are structurally different form the training set, while also being more similar to compounds known to bind to the two families of drug targets compared to a random set of molecules. The compounds further display reasonable synthesizability and drug-likeness scores."]], "corpus": "arxiv"}
|
56 |
+
{"tstamp": 1722224956.6922, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722224956.3758, "finish": 1722224956.6922, "ip": "", "conv_id": "d3b69c82e28b436c969357870e392c7a", "model_name": "embed-english-v3.0", "prompt": "Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "output": [["Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "Title: Dynamic Multimodal Instance Segmentation guided by natural language queries\n\nAbstract: We address the problem of segmenting an object given a natural language expression that describes it. Current techniques tackle this task by either (\\textit{i}) directly or recursively merging linguistic and visual information in the channel dimension and then performing convolutions; or by (\\textit{ii}) mapping the expression to a space in which it can be thought of as a filter, whose response is directly related to the presence of the object at a given spatial coordinate in the image, so that a convolution can be applied to look for the object. We propose a novel method that integrates these two insights in order to fully exploit the recursive nature of language. Additionally, during the upsampling process, we take advantage of the intermediate information generated when downsampling the image, so that detailed segmentations can be obtained. We compare our method against the state-of-the-art approaches in four standard datasets, in which it surpasses all previous methods in six of eight of the splits for this task."]], "corpus": "arxiv"}
|
57 |
+
{"tstamp": 1722224956.6922, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722224956.3758, "finish": 1722224956.6922, "ip": "", "conv_id": "b353792ca18b4bc9967223f30e5f0dc0", "model_name": "voyage-multilingual-2", "prompt": "Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "output": [["Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "Title: Comprehensive Multi-Modal Interactions for Referring Image Segmentation\n\nAbstract: We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intramodal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods."]], "corpus": "arxiv"}
|
58 |
+
{"tstamp": 1722224980.8321, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722224972.0431, "finish": 1722224980.8321, "ip": "", "conv_id": "184da0dfea8b4f9d9a55e8015a197572", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "age is what type of scale?", "output": [["age is what type of scale?", "Stats Stackexchange\n\nQ: Should I treat age as a continuous variable or factor? Age is a predictor variable among a few others (gender, weight and height) and my response is the log-odds of a certain disease (binomial glm). \nMy age data runs from 21 until 40. I am not sure whether to treat age as a continuous variable or as a factor with age groups: 21-25, 26-30, 31-35, 36-40. \nAre there any plots which I can produce that can help determine which would be the better approach? \n\nA: It depends on the context.\nFor example if you are looking for the effect of age on children's height, it makes sense to look at it as a continuous ( integer) value.\nIf you're looking for e.g. the effect of age on oncogenesis then it makes sense if you look at age groups. Young vs old, above 55 and below 55, ...\nFor your example, unless age is a confounder of a hidden factor such as for example being college grad or still a student ( risk factor for young adults STD infection), I'd bin my data into reasonable bin sizes.\n"]], "corpus": "stackexchange"}
|
59 |
+
{"tstamp": 1722224980.8321, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224972.0431, "finish": 1722224980.8321, "ip": "", "conv_id": "deb4d7aa81cf4de1a7c78febb6fc3a3a", "model_name": "GritLM/GritLM-7B", "prompt": "age is what type of scale?", "output": [["age is what type of scale?", "Stats Stackexchange\n\nQ: Are the variable types here considered correct? If we want to determine the variable types, will it be as follows for the below variables?\nAge ---> quantitative, discrete (we can count)\nFitness ---> If the values that we will enter here are 0 and 1 only, will the type of this variable be qualitative, nominal?\nThanks a lot.\n\nA: Questions like this expose the problems with Stevens' typology of variables. Some variables just aren't any of his categories; or are hard to categorize. Age is ratio level data in the sense that someone who is (say) 40 years old is twice as old as someone who is 20 years old. You just haven't got it measured precisely. But so? What difference does that make? \nDichotomies - here it really doesn't matter HOW you treat them for most purposes. E.g. in regression, you can consider them to be nominal, ordinal or interval and you will get the same results. \nI wrote about other problems with this typology on my blog\n"]], "corpus": "stackexchange"}
|
60 |
+
{"tstamp": 1722225019.8644, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722225019.5016, "finish": 1722225019.8644, "ip": "", "conv_id": "65d610c94b434dcda9d6b7a2824209b1", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "output": [["A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "Title: On the three state Potts model with competing interactions on the Bethe lattice\n\nAbstract: In the present paper the three state Potts model with competing binary interactions (with couplings $J$ and $J_p$) on the second order Bethe lattice is considered. The recurrent equations for the partition functions are derived. When $J_p=0$, by means of a construction of a special class of limiting Gibbs measures, it is shown how these equations are related with the surface energy of the Hamiltonian. This relation reduces the problem of describing the limit Gibbs measures to find of solutions of a nonlinear functional equation. Moreover, the set of ground states of the one-level model is completely described. Using this fact, one finds Gibbs measures (pure phases) associated with the translation-invariant ground states. The critical temperature is exactly found and the phase diagram is presented. The free energies corresponding to translations-invariant Gibbs measures are found. Certain physical quantities are calculated as well."]], "corpus": "arxiv"}
|
61 |
+
{"tstamp": 1722225019.8644, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722225019.5016, "finish": 1722225019.8644, "ip": "", "conv_id": "c751d7d91b034a0e835787d7d077b783", "model_name": "voyage-multilingual-2", "prompt": "A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "output": [["A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "Title: Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning\n\nAbstract: Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement module, which utilizes the symbolic solver's error messages to revise symbolic formalizations. We demonstrate Logic-LM's effectiveness on five logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, LogicalDeduction, and AR-LSAT. On average, Logic-LM achieves a significant performance boost of 39.2% over using LLM alone with standard prompting and 18.4% over LLM with chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning. Code and data are publicly available at https://github.com/teacherpeterpan/Logic-LLM."]], "corpus": "arxiv"}
|
62 |
+
{"tstamp": 1722225033.4419, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722225033.1286, "finish": 1722225033.4419, "ip": "", "conv_id": "e4e6ff5dc3a149a8a9cd5bc76feefd62", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "output": [["Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "Title: Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media\n\nAbstract: We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is completed with Task 5 on check-worthiness estimation in political debates and speeches. A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the tasks setup, the evaluation results, and a summary of the approaches used by the participants, and we discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification."]], "corpus": "arxiv"}
|
63 |
+
{"tstamp": 1722225033.4419, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722225033.1286, "finish": 1722225033.4419, "ip": "", "conv_id": "069ed0d8c0fe49369e9c70a31c707261", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "output": [["Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "Title: Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media\n\nAbstract: We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is completed with Task 5 on check-worthiness estimation in political debates and speeches. A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the tasks setup, the evaluation results, and a summary of the approaches used by the participants, and we discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification."]], "corpus": "arxiv"}
|
64 |
+
{"tstamp": 1722225056.0879, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722225047.1129, "finish": 1722225056.0879, "ip": "", "conv_id": "ad3321dd65264d9ea93d600f7971ff6d", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "can r squared adjusted be negative?", "output": [["can r squared adjusted be negative?", "Stats Stackexchange\n\nQ: Is it a problem to get a negative adjusted r-squared? Background:\nI have the cross-sectional model:\n$Y_{i} = a + b X_{1,i} + c X_{2,i} + d X_{3,i} + e X_{4,i} + \\nu_i$.\nThe application is corporate finance. So each $Y_i$ is something like the change in return on assets over a 1 year period for firm $i$, and the regressors are typical corporate finance variables.\nIn corporate finance, very small values of $R^2$ are common, even sometimes $1\\%$. My $R^2$ is around $1\\%$ but my Adjusted $R^2$ is $-0.2\\%$. \nI have never seen papers report negative Adjusted $R^2$ but this could just be because they omit the publication of their adjusted $R^2$ when they see that it is negative. \nQuestion\nIs there some problem when the adjusted $R^2$ is negative?\n\nA: The formula for adjusted R square allows it to be negative. It is intended to approximate the actual percentage variance explained. So if the actual R square is close to zero the adjusted R square can be slightly negative. Just think of it as an estimate of zero.\n"]], "corpus": "stackexchange"}
|
65 |
+
{"tstamp": 1722225056.0879, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722225047.1129, "finish": 1722225056.0879, "ip": "", "conv_id": "6766409719cd420188c069799f6c48d1", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "can r squared adjusted be negative?", "output": [["can r squared adjusted be negative?", "Stats Stackexchange\n\nQ: Is it a problem to get a negative adjusted r-squared? Background:\nI have the cross-sectional model:\n$Y_{i} = a + b X_{1,i} + c X_{2,i} + d X_{3,i} + e X_{4,i} + \\nu_i$.\nThe application is corporate finance. So each $Y_i$ is something like the change in return on assets over a 1 year period for firm $i$, and the regressors are typical corporate finance variables.\nIn corporate finance, very small values of $R^2$ are common, even sometimes $1\\%$. My $R^2$ is around $1\\%$ but my Adjusted $R^2$ is $-0.2\\%$. \nI have never seen papers report negative Adjusted $R^2$ but this could just be because they omit the publication of their adjusted $R^2$ when they see that it is negative. \nQuestion\nIs there some problem when the adjusted $R^2$ is negative?\n\nA: The formula for adjusted R square allows it to be negative. It is intended to approximate the actual percentage variance explained. So if the actual R square is close to zero the adjusted R square can be slightly negative. Just think of it as an estimate of zero.\n"]], "corpus": "stackexchange"}
|
66 |
+
{"tstamp": 1722225074.1698, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722225073.7441, "finish": 1722225074.1698, "ip": "", "conv_id": "fe65d33c55ca4dbf856b34d9fd465f40", "model_name": "text-embedding-3-large", "prompt": "Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "output": [["Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "Title: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback\n\nAbstract: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work."]], "corpus": "arxiv"}
|
67 |
+
{"tstamp": 1722225074.1698, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722225073.7441, "finish": 1722225074.1698, "ip": "", "conv_id": "19f1b179abd4428983f12ecbb76ba96e", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "output": [["Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "Title: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback\n\nAbstract: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work."]], "corpus": "arxiv"}
|
68 |
+
{"tstamp": 1722225091.4142, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722225089.0182, "finish": 1722225091.4142, "ip": "", "conv_id": "94d522a8ae794130b5549c97be9efcc8", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "output": [["Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "Title: Graph Neural Network Approach to Semantic Type Detection in Tables\n\nAbstract: This study addresses the challenge of detecting semantic column types in relational tables, a key task in many real-world applications. While language models like BERT have improved prediction accuracy, their token input constraints limit the simultaneous processing of intra-table and inter-table information. We propose a novel approach using Graph Neural Networks (GNNs) to model intra-table dependencies, allowing language models to focus on inter-table information. Our proposed method not only outperforms existing state-of-the-art algorithms but also offers novel insights into the utility and functionality of various GNN types for semantic type detection. The code is available at https://github.com/hoseinzadeehsan/GAIT"]], "corpus": "arxiv"}
|
69 |
+
{"tstamp": 1722225091.4142, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722225089.0182, "finish": 1722225091.4142, "ip": "", "conv_id": "5bc98453ebd94dd8adc5a3a7477560d1", "model_name": "GritLM/GritLM-7B", "prompt": "Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "output": [["Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "Title: Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss\n\nAbstract: The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of cross- entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones. Also, previous work solve FETC a multi-label classification followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state-of-the-art on established benchmarks for the task."]], "corpus": "arxiv"}
|
70 |
+
{"tstamp": 1722225117.8002, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722225117.6315, "finish": 1722225117.8002, "ip": "", "conv_id": "61042e3396c64d919604031cbf9612dc", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "A paper introducing a new large language model", "output": [["A paper introducing a new large language model", "Title: Larger-Context Language Modelling\n\nAbstract: In this work, we propose a novel method to incorporate corpus-level discourse information into language modelling. We call this larger-context language model. We introduce a late fusion approach to a recurrent language model based on long short-term memory units (LSTM), which helps the LSTM unit keep intra-sentence dependencies and inter-sentence dependencies separate from each other. Through the evaluation on three corpora (IMDB, BBC, and PennTree Bank), we demon- strate that the proposed model improves perplexity significantly. In the experi- ments, we evaluate the proposed approach while varying the number of context sentences and observe that the proposed late fusion is superior to the usual way of incorporating additional inputs to the LSTM. By analyzing the trained larger- context language model, we discover that content words, including nouns, adjec- tives and verbs, benefit most from an increasing number of context sentences. This analysis suggests that larger-context language model improves the unconditional language model by capturing the theme of a document better and more easily."]], "corpus": "arxiv"}
|
71 |
+
{"tstamp": 1722225117.8002, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722225117.6315, "finish": 1722225117.8002, "ip": "", "conv_id": "d2e0f34f14254329b828836aa5ea813e", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "A paper introducing a new large language model", "output": [["A paper introducing a new large language model", "Title: Large Language Models: A Survey\n\nAbstract: Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks, since the release of ChatGPT in November 2022. LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model's parameters on massive amounts of text data, as predicted by scaling laws \\cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions and limitations. We also give an overview of techniques developed to build, and augment LLMs. We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks. Finally, we conclude the paper by discussing open challenges and future research directions."]], "corpus": "arxiv"}
|