Muennighoff
committed on
Scheduled Commit
data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl
CHANGED
@@ -41,3 +41,16 @@
{"tstamp": 1722226587.236, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "472b115b6c084a85b1fd064506fb60c0", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "is bloodborne in the same universe as dark souls?", "0_output": [["is bloodborne in the same universe as dark souls?", "Math Stackexchange\n\nQ: Solving $\\arcsin(1-x)-2\\arcsin(x)=\\pi/2$ \\begin{eqnarray*}\n\\arcsin(1-x)-2\\arcsin(x) & = & \\frac{\\pi}{2}\\\\\n1-x & = & \\sin\\left(\\frac{\\pi}{2}+2\\arcsin(x)\\right)\\\\\n & = & \\cos\\left(2\\arcsin(x)\\right)\\\\\n & = & 1-2\\left(\\sin\\left(\\arcsin(x)\\right)\\right)^{2}\\\\\n & = & 1-2x^{2}\\\\\nx & = & 2x^{2}\\\\\nx\\left(x-\\frac{1}{2}\\right) & = & 0\n\\end{eqnarray*}\nSo $x=0$ or $x=\\frac{1}{2}$\nBut puttig $x=\\frac{1}{2}$ in the original expression gives $-\\frac {\\pi} 4 \\ne \\frac \\pi 2$\nSo, why do we get $x=-1/2$ as an answer?\n\nA: In your first step you added an extra solution. \nSince $\\arcsin x$ must be smaller than $\\pi/2$, the first line reads:\n$$\\arcsin(1-x)= \\frac{\\pi}{2}+2\\arcsin(x) \\le \\frac{\\pi}{2}$$\nThus, $x\\le 0$ as well.\nNow, by taking the $\\sin$ of both sides, you took a function that was only defined up to $x=1$ (e.g. $\\arcsin(x-1)$ ) and extended it to all reals (e.g $x-1$). Here is where you added the extra solution.\n"]], "0_corpus": "stackexchange", "1_conv_id": "f0dfad42986f425e91134329193506e8", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "is bloodborne in the same universe as dark souls?", "1_output": [["is bloodborne in the same universe as dark souls?", "Stackoverflow Stackexchange\n\nQ: Stop UIAlertView from dismissing Can you stop UIAlertView from dismissing? \nI want based on what button is pressed, to either animate adding a UITextField, or dismiss it. But I can't add a text field after the alert is dismissed.\n\nA: You should implement the UIAlertViewDelegate method :\n- (void)alertView:(UIAlertView *)alertView willDismissWithButtonIndex:(NSInteger)buttonIndex\n\nand based on the button index you can perform your custom action.\nOtherwise you should either create your own alertview class or subclass UIAlertView.\nIf you choose to subclass you can override the method : \n-(void)dismissWithClickedButtonIndex:(NSInteger)buttonIndex animated:(BOOL)animated \n\nto avoid dismissing the alertView if a specific button is pressed\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722226604.3253, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "febd759447c24fbbab98fe669530ae5b", "0_model_name": "voyage-multilingual-2", "0_prompt": "How well do ChatGPT and GPT-4 perform on logical reasoning datasets?", "0_output": [["How well do ChatGPT and GPT-4 perform on logical reasoning datasets?", "Title: Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4\n\nAbstract: Harnessing logical reasoning ability is a comprehensive natural language understanding endeavor. With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as \"advanced\" at reasoning tasks, we are eager to learn the GPT-4 performance on various logical reasoning tasks. This report analyses multiple logical reasoning datasets, with popular benchmarks like LogiQA and ReClor, and newly-released datasets like AR-LSAT. We test the multi-choice reading comprehension and natural language inference tasks with benchmarks requiring logical reasoning. We further construct a logical reasoning out-of-distribution dataset to investigate the robustness of ChatGPT and GPT-4. We also make a performance comparison between ChatGPT and GPT-4. Experiment results show that ChatGPT performs significantly better than the RoBERTa fine-tuning method on most logical reasoning benchmarks. With early access to the GPT-4 API we are able to conduct intense experiments on the GPT-4 model. The results show GPT-4 yields even higher performance on most logical reasoning datasets. Among benchmarks, ChatGPT and GPT-4 do relatively well on well-known datasets like LogiQA and ReClor. However, the performance drops significantly when handling newly released and out-of-distribution datasets. Logical reasoning remains challenging for ChatGPT and GPT-4, especially on out-of-distribution and natural language inference datasets. We release the prompt-style logical reasoning datasets as a benchmark suite and name it LogiEval."]], "0_corpus": "arxiv", "1_conv_id": "eac6e53b37a34cda9371c8b5695b6794", "1_model_name": "embed-english-v3.0", "1_prompt": "How well do ChatGPT and GPT-4 perform on logical reasoning datasets?", "1_output": [["How well do ChatGPT and GPT-4 perform on logical reasoning datasets?", "Title: Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4\n\nAbstract: Harnessing logical reasoning ability is a comprehensive natural language understanding endeavor. With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as \"advanced\" at reasoning tasks, we are eager to learn the GPT-4 performance on various logical reasoning tasks. This report analyses multiple logical reasoning datasets, with popular benchmarks like LogiQA and ReClor, and newly-released datasets like AR-LSAT. We test the multi-choice reading comprehension and natural language inference tasks with benchmarks requiring logical reasoning. We further construct a logical reasoning out-of-distribution dataset to investigate the robustness of ChatGPT and GPT-4. We also make a performance comparison between ChatGPT and GPT-4. Experiment results show that ChatGPT performs significantly better than the RoBERTa fine-tuning method on most logical reasoning benchmarks. With early access to the GPT-4 API we are able to conduct intense experiments on the GPT-4 model. The results show GPT-4 yields even higher performance on most logical reasoning datasets. Among benchmarks, ChatGPT and GPT-4 do relatively well on well-known datasets like LogiQA and ReClor. 
However, the performance drops significantly when handling newly released and out-of-distribution datasets. Logical reasoning remains challenging for ChatGPT and GPT-4, especially on out-of-distribution and natural language inference datasets. We release the prompt-style logical reasoning datasets as a benchmark suite and name it LogiEval."]], "1_corpus": "arxiv"}
{"tstamp": 1722226669.9586, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "bf820c0aa0c64ff9847080afc0f4d484", "0_model_name": "text-embedding-004", "0_prompt": "Research on improving the performance and energy efficiency of neural networks by utilizing 8-bit precision.", "0_output": [["Research on improving the performance and energy efficiency of neural networks by utilizing 8-bit precision.", "Title: Understanding the Impact of Precision Quantization on the Accuracy and Energy of Neural Networks\n\nAbstract: Deep neural networks are gaining in popularity as they are used to generate state-of-the-art results for a variety of computer vision and machine learning applications. At the same time, these networks have grown in depth and complexity in order to solve harder problems. Given the limitations in power budgets dedicated to these networks, the importance of low-power, low-memory solutions has been stressed in recent years. While a large number of dedicated hardware using different precisions has recently been proposed, there exists no comprehensive study of different bit precisions and arithmetic in both inputs and network parameters. In this work, we address this issue and perform a study of different bit-precisions in neural networks (from floating-point to fixed-point, powers of two, and binary). In our evaluation, we consider and analyze the effect of precision scaling on both network accuracy and hardware metrics including memory footprint, power and energy consumption, and design area. We also investigate training-time methodologies to compensate for the reduction in accuracy due to limited bit precision and demonstrate that in most cases, precision scaling can deliver significant benefits in design metrics at the cost of very modest decreases in network accuracy. In addition, we propose that a small portion of the benefits achieved when using lower precisions can be forfeited to increase the network size and therefore the accuracy. We evaluate our experiments, using three well-recognized networks and datasets to show its generality. We investigate the trade-offs and highlight the benefits of using lower precisions in terms of energy and memory footprint."]], "0_corpus": "arxiv", "1_conv_id": "8c0c14dc0aa642e382ed81777e5c8042", "1_model_name": "BM25", "1_prompt": "Research on improving the performance and energy efficiency of neural networks by utilizing 8-bit precision.", "1_output": [["Research on improving the performance and energy efficiency of neural networks by utilizing 8-bit precision.", "Title: Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability\n\nAbstract: Tartan (TRT), a hardware accelerator for inference with Deep Neural Networks (DNNs), is presented and evaluated on Convolutional Neural Networks. TRT exploits the variable per layer precision requirements of DNNs to deliver execution time that is proportional to the precision p in bits used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layers. Experiments on image classification CNNs show that on average across all networks studied, TRT outperforms a state-of-the-art bit-parallel accelerator by 1:90x without any loss in accuracy while it is 1:17x more energy efficient. 
TRT requires no network retraining while it enables trading off accuracy for additional improvements in execution performance and energy efficiency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on average 2:04x faster and 1:25x more energy efficient than a conventional bit-parallel accelerator. A Tartan configuration that processes 2-bits at time, requires less area than the 1-bit configuration, improves efficiency to 1:24x over the bit-parallel baseline while being 73% faster for convolutional layers and 60% faster for fully-connected layers is also presented."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226736.6929, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "5057b648e94c4ffe85732362ed4ad1e4", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "27129086b9024fe7a62de294a6649ab2", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226755.2368, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "942de81de947444185a57999c86fe41c", "0_model_name": "embed-english-v3.0", "0_prompt": "Efficient transformer models for multi-page document classification", "0_output": [["Efficient transformer models for multi-page document classification", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "0_corpus": "arxiv", "1_conv_id": "dc52388a815249e1bdb208cb75e7563c", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Efficient transformer models for multi-page document classification", "1_output": [["Efficient transformer models for multi-page document classification", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226768.1617, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "e488f8737d2e4fd7ab943e48d4a3cd52", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "How to combine GPT-2 and BERT for better question generation and answering?", "0_output": [["How to combine GPT-2 and BERT for better question generation and answering?", "Title: Learning to Answer by Learning to Ask: Getting the Best of GPT-2 and BERT Worlds\n\nAbstract: Automatic question generation aims at the generation of questions from a context, with the corresponding answers being sub-spans of the given passage. Whereas, most of the methods mostly rely on heuristic rules to generate questions, more recently also neural network approaches have been proposed. In this work, we propose a variant of the self-attention Transformer network architectures model to generate meaningful and diverse questions. To this end, we propose an easy to use model consisting of the conjunction of the Transformer decoder GPT-2 model with Transformer encoder BERT for the downstream task for question answering. The model is trained in an end-to-end fashion, where the language model is trained to produce a question-answer-aware input representation that facilitates to generate an answer focused question. Our result of neural question generation from text on the SQuAD 1.1 dataset suggests that our method can produce semantically correct and diverse questions. Additionally, we assessed the performance of our proposed method for the downstream task of question answering. The analysis shows that our proposed generation & answering collaboration framework relatively improves both tasks and is particularly powerful in the semi-supervised setup. The results further suggest a robust and comparably lean pipeline facilitating question generation in the small-data regime."]], "0_corpus": "arxiv", "1_conv_id": "4e6f1429af8d488dbc21f92b03708925", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "How to combine GPT-2 and BERT for better question generation and answering?", "1_output": [["How to combine GPT-2 and BERT for better question generation and answering?", "Title: Radiation flux and spectrum in the Vaidya collapse model\n\nAbstract: We consider the quantization of a massless scalar field, using the geometric optics approximation, in the background spacetime of a collapsing spherical self-similar Vaidya star, which forms a black hole or a naked singularity. We show that the outgoing radiation flux of the quantized scalar field diverges on the Cauchy horizon. The spectrum of the produced scalar partcles is non-thermal when the background develops a naked singularity. These results are analogous to those obtained for the scalar quantization on a self-similar dust cloud."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226790.9489, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "d8ccfaccb72b40429980442180c503b9", "0_model_name": "text-embedding-3-large", "0_prompt": "Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "0_output": [["Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "Title: Omnivore: A Single Model for Many Visual Modalities\n\nAbstract: Prior work has studied different visual modalities in isolation and developed separate architectures for recognition of images, videos, and 3D data. Instead, in this paper, we propose a single model which excels at classifying images, videos, and single-view 3D data using exactly the same model parameters. Our 'Omnivore' model leverages the flexibility of transformer-based architectures and is trained jointly on classification tasks from different modalities. Omnivore is simple to train, uses off-the-shelf standard datasets, and performs at-par or better than modality-specific models of the same size. A single Omnivore model obtains 86.0% on ImageNet, 84.1% on Kinetics, and 67.1% on SUN RGB-D. After finetuning, our models outperform prior work on a variety of vision tasks and generalize across modalities. Omnivore's shared visual representation naturally enables cross-modal recognition without access to correspondences between modalities. We hope our results motivate researchers to model visual modalities together."]], "0_corpus": "arxiv", "1_conv_id": "ba7ddb3950104cf488c87a1656f8f414", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "1_output": [["Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "Title: Ab initio Molecular Dynamics Study of D_2 Desorption from Si(100)\n\nAbstract: Ab initio molecular dynamics calculations of deuterium desorbing from Si(100) have been performed in order to monitor the energy redistribution among the hydrogen and silicon degrees of freedom during the desorption process. The calculations show that part of the potential energy at the transition state to desorption is transferred to the silicon lattice. The deuterium molecules leave the surface vibrationally hot and rotationally cold, in agreement with experiments; the mean kinetic energy, however, is larger than found in experiments."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226844.4942, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "0ecf1137eb8441729be550b264620830", "0_model_name": "text-embedding-004", "0_prompt": "A novel network combining convolution and transformer operations for better medical image segmentation.", "0_output": [["A novel network combining convolution and transformer operations for better medical image segmentation.", "Title: ConvFormer: Combining CNN and Transformer for Medical Image Segmentation\n\nAbstract: Convolutional neural network (CNN) based methods have achieved great successes in medical image segmentation, but their capability to learn global representations is still limited due to using small effective receptive fields of convolution operations. Transformer based methods are capable of modelling long-range dependencies of information for capturing global representations, yet their ability to model local context is lacking. Integrating CNN and Transformer to learn both local and global representations while exploring multi-scale features is instrumental in further improving medical image segmentation. In this paper, we propose a hierarchical CNN and Transformer hybrid architecture, called ConvFormer, for medical image segmentation. ConvFormer is based on several simple yet effective designs. (1) A feed forward module of Deformable Transformer (DeTrans) is re-designed to introduce local information, called Enhanced DeTrans. (2) A residual-shaped hybrid stem based on a combination of convolutions and Enhanced DeTrans is developed to capture both local and global representations to enhance representation ability. (3) Our encoder utilizes the residual-shaped hybrid stem in a hierarchical manner to generate feature maps in different scales, and an additional Enhanced DeTrans encoder with residual connections is built to exploit multi-scale features with feature maps of different scales as input. Experiments on several datasets show that our ConvFormer, trained from scratch, outperforms various CNN- or Transformer-based architectures, achieving state-of-the-art performance."]], "0_corpus": "arxiv", "1_conv_id": "d02461de08aa4ce7b1a4cee2f252e2eb", "1_model_name": "embed-english-v3.0", "1_prompt": "A novel network combining convolution and transformer operations for better medical image segmentation.", "1_output": [["A novel network combining convolution and transformer operations for better medical image segmentation.", "Title: Rethinking Boundary Detection in Deep Learning Models for Medical Image Segmentation\n\nAbstract: Medical image segmentation is a fundamental task in the community of medical image analysis. In this paper, a novel network architecture, referred to as Convolution, Transformer, and Operator (CTO), is proposed. CTO employs a combination of Convolutional Neural Networks (CNNs), Vision Transformer (ViT), and an explicit boundary detection operator to achieve high recognition accuracy while maintaining an optimal balance between accuracy and efficiency. The proposed CTO follows the standard encoder-decoder segmentation paradigm, where the encoder network incorporates a popular CNN backbone for capturing local semantic information, and a lightweight ViT assistant for integrating long-range dependencies. To enhance the learning capacity on boundary, a boundary-guided decoder network is proposed that uses a boundary mask obtained from a dedicated boundary detection operator as explicit supervision to guide the decoding learning process. 
The performance of the proposed method is evaluated on six challenging medical image segmentation datasets, demonstrating that CTO achieves state-of-the-art accuracy with a competitive model complexity."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226863.8341, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "3b181c53b714491a82ac48e1a1950309", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "How do different constraints in storytelling tasks impact the author's linguistic style?", "0_output": [["How do different constraints in storytelling tasks impact the author's linguistic style?", "Title: The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the ROC Story Cloze Task\n\nAbstract: A writer's style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state of the art performance on the story cloze challenge. Our results demonstrate that different task framings can dramatically affect the way people write."]], "0_corpus": "arxiv", "1_conv_id": "de6bb332c59b4774b8c38bdad9af80a0", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "How do different constraints in storytelling tasks impact the author's linguistic style?", "1_output": [["How do different constraints in storytelling tasks impact the author's linguistic style?", "Title: Limits on dynamically generated spin-orbit coupling: Absence of $l=1$ Pomeranchuk instabilities in metals\n\nAbstract: An ordered state in the spin sector that breaks parity without breaking time-reversal symmetry, i.e., that can be considered as dynamically generated spin-orbit coupling, was proposed to explain puzzling observations in a range of different systems. Here we derive severe restrictions for such a state that follow from a Ward identity related to spin conservation. It is shown that $l=1$ spin-Pomeranchuk instabilities are not possible in non-relativistic systems since the response of spin-current fluctuations is entirely incoherent and non-singular. This rules out relativistic spin-orbit coupling as an emergent low-energy phenomenon. We illustrate the exotic physical properties of the remaining higher angular momentum analogues of spin-orbit coupling and derive a geometric constraint for spin-orbit vectors in lattice systems."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226877.0152, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "85881b7aeaa44439a7c415dcfd68c525", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "0_output": [["Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "Title: Chaos or Noise - Difficulties of a Distinction\n\nAbstract: In experiments, the dynamical behavior of systems is reflected in time series. Due to the finiteness of the observational data set it is not possible to reconstruct the invariant measure up to arbitrary fine resolution and arbitrary high embedding dimension. These restrictions limit our ability to distinguish between signals generated by different systems, such as regular, chaotic or stochastic ones, when analyzed from a time series point of view. We propose to classify the signal behavior, without referring to any specific model, as stochastic or deterministic on a certain scale of the resolution $\\epsilon$, according to the dependence of the $(\\epsilon,\\tau)$-entropy, $h(\\epsilon, \\tau)$, and of the finite size Lyapunov exponent, $\\lambda(\\epsilon)$, on $\\epsilon$."]], "0_corpus": "arxiv", "1_conv_id": "a7b2c40c9e5c43e3a3e400d04100c725", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "1_output": [["Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "Title: Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function\n\nAbstract: Gender bias exists in natural language datasets which neural language models tend to learn, resulting in biased text generation. In this research, we propose a debiasing approach based on the loss function modification. We introduce a new term to the loss function which attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach successfully mitigates gender bias in language models without increasing perplexity. In comparison to existing debiasing strategies, data augmentation, and word embedding debiasing, our method performs better in several aspects, especially in reducing gender bias in occupation words. Finally, we introduce a combination of data augmentation and our approach, and show that it outperforms existing strategies in all bias evaluation metrics."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226892.8444, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "4987ca9238374025ae9f6d61145d0142", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "0_output": [["Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "Title: Vibrational Spectra of Defects in Silicon: An Orbital Radii Approach\n\nAbstract: A phenomenological approach to the stretching mode vibrational frequencies of defects in semiconductors is proposed. A novel quantum scale is defined in terms of the first principles pseudopotential based orbital radius and the principal quantum number of the element concerned. A universal linear relationship between the Sanderson electronegativity and this quantum scale is established. Next, we show that the stretching mode vibrational frequencies of hydrogen and chlorine in the silicon network scale linearly with this quantum scale. Predictions and identifications of defect environments around the Si-H and Si-Cl are possible. The assignments of vibrational modes in porous silicon are critically examined. We discuss our proposed scale in the context of Mendeleveyan scales in general, and suggest justifications for it. We believe that our approach can be gainfully extended to the vibrational spectra of other semiconductors."]], "0_corpus": "arxiv", "1_conv_id": "bf81fa11eb3f4d3cb9c7294f31d17a63", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "1_output": [["Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "Title: Measuring Massive Multitask Language Understanding\n\nAbstract: We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226904.182, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "35ba5141430e439182b4cb93495f60b2", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "Comparison of sparse attention and hierarchical encoding in long document transformers", "0_output": [["Comparison of sparse attention and hierarchical encoding in long document transformers", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "0_corpus": "arxiv", "1_conv_id": "7c53ae4004484956a4ab85f088046698", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Comparison of sparse attention and hierarchical encoding in long document transformers", "1_output": [["Comparison of sparse attention and hierarchical encoding in long document transformers", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226982.36, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c81179613507466f9501f236a8beb4a9", "0_model_name": "BM25", "0_prompt": "Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "0_output": [["Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "Title: Universal Adversarial Triggers for Attacking and Analyzing NLP\n\nAbstract: Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of \"why\" questions in SQuAD to be answered \"to kill american people\", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models."]], "0_corpus": "arxiv", "1_conv_id": "95105e1e8df945a7b819bd50bcd0a76a", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "1_output": [["Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "Title: Universal Adversarial Triggers for Attacking and Analyzing NLP\n\nAbstract: Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of \"why\" questions in SQuAD to be answered \"to kill american people\", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722226993.9858, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "c966ef1f66124006834ce9fae7ec6c57", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "Adversarial example generation for text classification using BERT masked language model ", "0_output": [["Adversarial example generation for text classification using BERT masked language model ", "Title: Measuring Adversarial Datasets\n\nAbstract: In the era of widespread public use of AI systems across various domains, ensuring adversarial robustness has become increasingly vital to maintain safety and prevent undesirable errors. Researchers have curated various adversarial datasets (through perturbations) for capturing model deficiencies that cannot be revealed in standard benchmark datasets. However, little is known about how these adversarial examples differ from the original data points, and there is still no methodology to measure the intended and unintended consequences of those adversarial transformations. In this research, we conducted a systematic survey of existing quantifiable metrics that describe text instances in NLP tasks, among dimensions of difficulty, diversity, and disagreement. We selected several current adversarial effect datasets and compared the distributions between the original and their adversarial counterparts. The results provide valuable insights into what makes these datasets more challenging from a metrics perspective and whether they align with underlying assumptions."]], "0_corpus": "arxiv", "1_conv_id": "275ee03e6e634f92968096b192b9ae4a", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Adversarial example generation for text classification using BERT masked language model ", "1_output": [["Adversarial example generation for text classification using BERT masked language model ", "Title: Similar glassy features in the NMR response of pure and disordered La1.88Sr0.12CuO4\n\nAbstract: High Tc superconductivity in La2-xSrxCuO4 coexists with (striped and glassy) magnetic order. Here, we report NMR measurements of the 139La spin-lattice relaxation, which displays a stretched-exponential time dependence, in both pure and disordered x=0.12 single crystals. An analysis in terms of a distribution of relaxation rates T1^-1 indicates that i) the spin-freezing temperature is spatially inhomogeneous with an onset at Tg(onset)=20 K for the pristine samples, and ii) the width of the T1^-1 distribution in the vicinity of Tg(onset) is insensitive to an ~1% level of atomic disorder in CuO2 planes. This suggests that the stretched-exponential 139La relaxation, considered as a manifestation of the systems glassiness, may not arise from quenched disorder."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722227007.968, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "73fda287fa15475a92a663d6a3dba7cb", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Achieving high accuracy in English-French plagiarism detection using word embeddings", "0_output": [["Achieving high accuracy in English-French plagiarism detection using word embeddings", "Title: UsingWord Embedding for Cross-Language Plagiarism Detection\n\nAbstract: This paper proposes to use distributed representation of words (word embeddings) in cross-language textual similarity detection. The main contributions of this paper are the following: (a) we introduce new cross-language similarity detection methods based on distributed representation of words; (b) we combine the different methods proposed to verify their complementarity and finally obtain an overall F1 score of 89.15% for English-French similarity detection at chunk level (88.5% at sentence level) on a very challenging corpus."]], "0_corpus": "arxiv", "1_conv_id": "339520347d484e1c8068e44e4e4e7452", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Achieving high accuracy in English-French plagiarism detection using word embeddings", "1_output": [["Achieving high accuracy in English-French plagiarism detection using word embeddings", "Title: Studies of Plasma Detachment Using a One Dimensional Model for Divertor Operation\n\nAbstract: To characterize the conditions required to reach advanced divertor regimes, a one-dimensional computational model has been developed based on a coordinate transformation to incorporate two-dimensional effects. This model includes transport of ions, two species each of atoms and molecules, momentum, and ion and electron energy both within and across the flux surfaces. Impurity radiation is calculated using a coronal equilibrium model which includes the effects of charge-exchange recombination. Numerical results indicate that impurity radiation acts to facilitate plasma detachment and enhances the power lost from the divertor channel in escaping neutral atoms by cooling the electrons and suppressing ionization. As divertor particle densities increase, cold and thermal molecules become increasingly important in cooling the plasma, with molecular densities dominating electron and atomic densities under some conditions."]], "1_corpus": "arxiv"}
+
{"tstamp": 1722227015.1404, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "de8d07d9dc434154b214bde3478b3319", "0_model_name": "BM25", "0_prompt": "A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "0_output": [["A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "Title: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision\n\nAbstract: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt."]], "0_corpus": "arxiv", "1_conv_id": "21bf134438c34376b884b388bf7b8c19", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "1_output": [["A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "Title: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision\n\nAbstract: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt."]], "1_corpus": "arxiv"}
data/retrieval_individual-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl
CHANGED
@@ -95,3 +95,33 @@
{"tstamp": 1722226677.743, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722226677.0736, "finish": 1722226677.743, "ip": "", "conv_id": "07f2741db5f24f0baa9b00a4959cf2d9", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Research showing improved results in NLP tasks by integrating domain knowledge into BERT.", "output": [["Research showing improved results in NLP tasks by integrating domain knowledge into BERT.", "Title: K-BERT: Enabling Language Representation with Knowledge Graph\n\nAbstract: Pre-trained language representation models, such as BERT, capture a general language representation from large-scale corpora, but lack domain-specific knowledge. When reading a domain text, experts make inferences with relevant knowledge. For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge. However, too much knowledge incorporation may divert the sentence from its correct meaning, which is called knowledge noise (KN) issue. To overcome KN, K-BERT introduces soft-position and visible matrix to limit the impact of knowledge. K-BERT can easily inject domain knowledge into the models by equipped with a KG without pre-training by-self because it is capable of loading model parameters from the pre-trained BERT. Our investigation reveals promising results in twelve NLP tasks. Especially in domain-specific tasks (including finance, law, and medicine), K-BERT significantly outperforms BERT, which demonstrates that K-BERT is an excellent choice for solving the knowledge-driven problems that require experts."]], "corpus": "arxiv"}
{"tstamp": 1722226706.0355, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722226705.6836, "finish": 1722226706.0355, "ip": "", "conv_id": "ef7d7743ae024735925e8bc2e56cba6e", "model_name": "voyage-multilingual-2", "prompt": "Looking for a paper about a multimodal machine translation model that uses visual and textual information.", "output": [["Looking for a paper about a multimodal machine translation model that uses visual and textual information.", "Title: A Visual Attention Grounding Neural Model for Multimodal Machine Translation\n\nAbstract: We introduce a novel multimodal machine translation model that utilizes parallel visual and textual information. Our model jointly optimizes the learning of a shared visual-language embedding and a translator. The model leverages a visual attention grounding mechanism that links the visual semantics with the corresponding textual semantics. Our approach achieves competitive state-of-the-art results on the Multi30K and the Ambiguous COCO datasets. We also collected a new multilingual multimodal product description dataset to simulate a real-world international online shopping scenario. On this dataset, our visual attention grounding model outperforms other methods by a large margin."]], "corpus": "arxiv"}
{"tstamp": 1722226706.0355, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722226705.6836, "finish": 1722226706.0355, "ip": "", "conv_id": "60b9db869d3545cb81d2de810693a35c", "model_name": "embed-english-v3.0", "prompt": "Looking for a paper about a multimodal machine translation model that uses visual and textual information.", "output": [["Looking for a paper about a multimodal machine translation model that uses visual and textual information.", "Title: Exploring the Necessity of Visual Modality in Multimodal Machine Translation using Authentic Datasets\n\nAbstract: Recent research in the field of multimodal machine translation (MMT) has indicated that the visual modality is either dispensable or offers only marginal advantages. However, most of these conclusions are drawn from the analysis of experimental results based on a limited set of bilingual sentence-image pairs, such as Multi30k. In these kinds of datasets, the content of one bilingual parallel sentence pair must be well represented by a manually annotated image, which is different from the real-world translation scenario. In this work, we adhere to the universal multimodal machine translation framework proposed by Tang et al. (2022). This approach allows us to delve into the impact of the visual modality on translation efficacy by leveraging real-world translation datasets. Through a comprehensive exploration via probing tasks, we find that the visual modality proves advantageous for the majority of authentic translation datasets. Notably, the translation performance primarily hinges on the alignment and coherence between textual and visual contents. Furthermore, our results suggest that visual information serves a supplementary role in multimodal translation and can be substituted."]], "corpus": "arxiv"}
{"tstamp": 1722226725.7285, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722226725.6237, "finish": 1722226725.7285, "ip": "", "conv_id": "5057b648e94c4ffe85732362ed4ad1e4", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722226725.7285, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722226725.6237, "finish": 1722226725.7285, "ip": "", "conv_id": "27129086b9024fe7a62de294a6649ab2", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722226746.9841, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722226744.6099, "finish": 1722226746.9841, "ip": "", "conv_id": "942de81de947444185a57999c86fe41c", "model_name": "embed-english-v3.0", "prompt": "Efficient transformer models for multi-page document classification", "output": [["Efficient transformer models for multi-page document classification", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "corpus": "arxiv"}
{"tstamp": 1722226746.9841, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722226744.6099, "finish": 1722226746.9841, "ip": "", "conv_id": "dc52388a815249e1bdb208cb75e7563c", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Efficient transformer models for multi-page document classification", "output": [["Efficient transformer models for multi-page document classification", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "corpus": "arxiv"}
{"tstamp": 1722226762.1172, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722226762.0032, "finish": 1722226762.1172, "ip": "", "conv_id": "e488f8737d2e4fd7ab943e48d4a3cd52", "model_name": "GritLM/GritLM-7B", "prompt": "How to combine GPT-2 and BERT for better question generation and answering?", "output": [["How to combine GPT-2 and BERT for better question generation and answering?", "Title: Learning to Answer by Learning to Ask: Getting the Best of GPT-2 and BERT Worlds\n\nAbstract: Automatic question generation aims at the generation of questions from a context, with the corresponding answers being sub-spans of the given passage. Whereas, most of the methods mostly rely on heuristic rules to generate questions, more recently also neural network approaches have been proposed. In this work, we propose a variant of the self-attention Transformer network architectures model to generate meaningful and diverse questions. To this end, we propose an easy to use model consisting of the conjunction of the Transformer decoder GPT-2 model with Transformer encoder BERT for the downstream task for question answering. The model is trained in an end-to-end fashion, where the language model is trained to produce a question-answer-aware input representation that facilitates to generate an answer focused question. Our result of neural question generation from text on the SQuAD 1.1 dataset suggests that our method can produce semantically correct and diverse questions. Additionally, we assessed the performance of our proposed method for the downstream task of question answering. The analysis shows that our proposed generation & answering collaboration framework relatively improves both tasks and is particularly powerful in the semi-supervised setup. The results further suggest a robust and comparably lean pipeline facilitating question generation in the small-data regime."]], "corpus": "arxiv"}
{"tstamp": 1722226762.1172, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722226762.0032, "finish": 1722226762.1172, "ip": "", "conv_id": "4e6f1429af8d488dbc21f92b03708925", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "How to combine GPT-2 and BERT for better question generation and answering?", "output": [["How to combine GPT-2 and BERT for better question generation and answering?", "Title: Radiation flux and spectrum in the Vaidya collapse model\n\nAbstract: We consider the quantization of a massless scalar field, using the geometric optics approximation, in the background spacetime of a collapsing spherical self-similar Vaidya star, which forms a black hole or a naked singularity. We show that the outgoing radiation flux of the quantized scalar field diverges on the Cauchy horizon. The spectrum of the produced scalar partcles is non-thermal when the background develops a naked singularity. These results are analogous to those obtained for the scalar quantization on a self-similar dust cloud."]], "corpus": "arxiv"}
{"tstamp": 1722226773.3042, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722226772.7706, "finish": 1722226773.3042, "ip": "", "conv_id": "d8ccfaccb72b40429980442180c503b9", "model_name": "text-embedding-3-large", "prompt": "Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "output": [["Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "Title: Omnivore: A Single Model for Many Visual Modalities\n\nAbstract: Prior work has studied different visual modalities in isolation and developed separate architectures for recognition of images, videos, and 3D data. Instead, in this paper, we propose a single model which excels at classifying images, videos, and single-view 3D data using exactly the same model parameters. Our 'Omnivore' model leverages the flexibility of transformer-based architectures and is trained jointly on classification tasks from different modalities. Omnivore is simple to train, uses off-the-shelf standard datasets, and performs at-par or better than modality-specific models of the same size. A single Omnivore model obtains 86.0% on ImageNet, 84.1% on Kinetics, and 67.1% on SUN RGB-D. After finetuning, our models outperform prior work on a variety of vision tasks and generalize across modalities. Omnivore's shared visual representation naturally enables cross-modal recognition without access to correspondences between modalities. We hope our results motivate researchers to model visual modalities together."]], "corpus": "arxiv"}
{"tstamp": 1722226773.3042, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722226772.7706, "finish": 1722226773.3042, "ip": "", "conv_id": "ba7ddb3950104cf488c87a1656f8f414", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "output": [["Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "Title: Ab initio Molecular Dynamics Study of D_2 Desorption from Si(100)\n\nAbstract: Ab initio molecular dynamics calculations of deuterium desorbing from Si(100) have been performed in order to monitor the energy redistribution among the hydrogen and silicon degrees of freedom during the desorption process. The calculations show that part of the potential energy at the transition state to desorption is transferred to the silicon lattice. The deuterium molecules leave the surface vibrationally hot and rotationally cold, in agreement with experiments; the mean kinetic energy, however, is larger than found in experiments."]], "corpus": "arxiv"}
{"tstamp": 1722226804.2739, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722226803.6191, "finish": 1722226804.2739, "ip": "", "conv_id": "0ecf1137eb8441729be550b264620830", "model_name": "text-embedding-004", "prompt": "A novel network combining convolution and transformer operations for better medical image segmentation.", "output": [["A novel network combining convolution and transformer operations for better medical image segmentation.", "Title: ConvFormer: Combining CNN and Transformer for Medical Image Segmentation\n\nAbstract: Convolutional neural network (CNN) based methods have achieved great successes in medical image segmentation, but their capability to learn global representations is still limited due to using small effective receptive fields of convolution operations. Transformer based methods are capable of modelling long-range dependencies of information for capturing global representations, yet their ability to model local context is lacking. Integrating CNN and Transformer to learn both local and global representations while exploring multi-scale features is instrumental in further improving medical image segmentation. In this paper, we propose a hierarchical CNN and Transformer hybrid architecture, called ConvFormer, for medical image segmentation. ConvFormer is based on several simple yet effective designs. (1) A feed forward module of Deformable Transformer (DeTrans) is re-designed to introduce local information, called Enhanced DeTrans. (2) A residual-shaped hybrid stem based on a combination of convolutions and Enhanced DeTrans is developed to capture both local and global representations to enhance representation ability. (3) Our encoder utilizes the residual-shaped hybrid stem in a hierarchical manner to generate feature maps in different scales, and an additional Enhanced DeTrans encoder with residual connections is built to exploit multi-scale features with feature maps of different scales as input. Experiments on several datasets show that our ConvFormer, trained from scratch, outperforms various CNN- or Transformer-based architectures, achieving state-of-the-art performance."]], "corpus": "arxiv"}
{"tstamp": 1722226804.2739, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722226803.6191, "finish": 1722226804.2739, "ip": "", "conv_id": "d02461de08aa4ce7b1a4cee2f252e2eb", "model_name": "embed-english-v3.0", "prompt": "A novel network combining convolution and transformer operations for better medical image segmentation.", "output": [["A novel network combining convolution and transformer operations for better medical image segmentation.", "Title: Rethinking Boundary Detection in Deep Learning Models for Medical Image Segmentation\n\nAbstract: Medical image segmentation is a fundamental task in the community of medical image analysis. In this paper, a novel network architecture, referred to as Convolution, Transformer, and Operator (CTO), is proposed. CTO employs a combination of Convolutional Neural Networks (CNNs), Vision Transformer (ViT), and an explicit boundary detection operator to achieve high recognition accuracy while maintaining an optimal balance between accuracy and efficiency. The proposed CTO follows the standard encoder-decoder segmentation paradigm, where the encoder network incorporates a popular CNN backbone for capturing local semantic information, and a lightweight ViT assistant for integrating long-range dependencies. To enhance the learning capacity on boundary, a boundary-guided decoder network is proposed that uses a boundary mask obtained from a dedicated boundary detection operator as explicit supervision to guide the decoding learning process. The performance of the proposed method is evaluated on six challenging medical image segmentation datasets, demonstrating that CTO achieves state-of-the-art accuracy with a competitive model complexity."]], "corpus": "arxiv"}
{"tstamp": 1722226850.0107, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722226849.9077, "finish": 1722226850.0107, "ip": "", "conv_id": "3b181c53b714491a82ac48e1a1950309", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "How do different constraints in storytelling tasks impact the author's linguistic style?", "output": [["How do different constraints in storytelling tasks impact the author's linguistic style?", "Title: The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the ROC Story Cloze Task\n\nAbstract: A writer's style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state of the art performance on the story cloze challenge. Our results demonstrate that different task framings can dramatically affect the way people write."]], "corpus": "arxiv"}
{"tstamp": 1722226850.0107, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722226849.9077, "finish": 1722226850.0107, "ip": "", "conv_id": "de6bb332c59b4774b8c38bdad9af80a0", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "How do different constraints in storytelling tasks impact the author's linguistic style?", "output": [["How do different constraints in storytelling tasks impact the author's linguistic style?", "Title: Limits on dynamically generated spin-orbit coupling: Absence of $l=1$ Pomeranchuk instabilities in metals\n\nAbstract: An ordered state in the spin sector that breaks parity without breaking time-reversal symmetry, i.e., that can be considered as dynamically generated spin-orbit coupling, was proposed to explain puzzling observations in a range of different systems. Here we derive severe restrictions for such a state that follow from a Ward identity related to spin conservation. It is shown that $l=1$ spin-Pomeranchuk instabilities are not possible in non-relativistic systems since the response of spin-current fluctuations is entirely incoherent and non-singular. This rules out relativistic spin-orbit coupling as an emergent low-energy phenomenon. We illustrate the exotic physical properties of the remaining higher angular momentum analogues of spin-orbit coupling and derive a geometric constraint for spin-orbit vectors in lattice systems."]], "corpus": "arxiv"}
{"tstamp": 1722226871.6124, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722226871.4532, "finish": 1722226871.6124, "ip": "", "conv_id": "85881b7aeaa44439a7c415dcfd68c525", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "output": [["Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "Title: Chaos or Noise - Difficulties of a Distinction\n\nAbstract: In experiments, the dynamical behavior of systems is reflected in time series. Due to the finiteness of the observational data set it is not possible to reconstruct the invariant measure up to arbitrary fine resolution and arbitrary high embedding dimension. These restrictions limit our ability to distinguish between signals generated by different systems, such as regular, chaotic or stochastic ones, when analyzed from a time series point of view. We propose to classify the signal behavior, without referring to any specific model, as stochastic or deterministic on a certain scale of the resolution $\\epsilon$, according to the dependence of the $(\\epsilon,\\tau)$-entropy, $h(\\epsilon, \\tau)$, and of the finite size Lyapunov exponent, $\\lambda(\\epsilon)$, on $\\epsilon$."]], "corpus": "arxiv"}
{"tstamp": 1722226871.6124, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722226871.4532, "finish": 1722226871.6124, "ip": "", "conv_id": "a7b2c40c9e5c43e3a3e400d04100c725", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "output": [["Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "Title: Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function\n\nAbstract: Gender bias exists in natural language datasets which neural language models tend to learn, resulting in biased text generation. In this research, we propose a debiasing approach based on the loss function modification. We introduce a new term to the loss function which attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach successfully mitigates gender bias in language models without increasing perplexity. In comparison to existing debiasing strategies, data augmentation, and word embedding debiasing, our method performs better in several aspects, especially in reducing gender bias in occupation words. Finally, we introduce a combination of data augmentation and our approach, and show that it outperforms existing strategies in all bias evaluation metrics."]], "corpus": "arxiv"}
{"tstamp": 1722226883.5164, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722226883.3755, "finish": 1722226883.5164, "ip": "", "conv_id": "4987ca9238374025ae9f6d61145d0142", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "output": [["Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "Title: Vibrational Spectra of Defects in Silicon: An Orbital Radii Approach\n\nAbstract: A phenomenological approach to the stretching mode vibrational frequencies of defects in semiconductors is proposed. A novel quantum scale is defined in terms of the first principles pseudopotential based orbital radius and the principal quantum number of the element concerned. A universal linear relationship between the Sanderson electronegativity and this quantum scale is established. Next, we show that the stretching mode vibrational frequencies of hydrogen and chlorine in the silicon network scale linearly with this quantum scale. Predictions and identifications of defect environments around the Si-H and Si-Cl are possible. The assignments of vibrational modes in porous silicon are critically examined. We discuss our proposed scale in the context of Mendeleveyan scales in general, and suggest justifications for it. We believe that our approach can be gainfully extended to the vibrational spectra of other semiconductors."]], "corpus": "arxiv"}
{"tstamp": 1722226883.5164, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722226883.3755, "finish": 1722226883.5164, "ip": "", "conv_id": "bf81fa11eb3f4d3cb9c7294f31d17a63", "model_name": "GritLM/GritLM-7B", "prompt": "Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "output": [["Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "Title: Measuring Massive Multitask Language Understanding\n\nAbstract: We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings."]], "corpus": "arxiv"}
{"tstamp": 1722226897.3303, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722226897.1552, "finish": 1722226897.3303, "ip": "", "conv_id": "35ba5141430e439182b4cb93495f60b2", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Comparison of sparse attention and hierarchical encoding in long document transformers", "output": [["Comparison of sparse attention and hierarchical encoding in long document transformers", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "corpus": "arxiv"}
{"tstamp": 1722226897.3303, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722226897.1552, "finish": 1722226897.3303, "ip": "", "conv_id": "7c53ae4004484956a4ab85f088046698", "model_name": "GritLM/GritLM-7B", "prompt": "Comparison of sparse attention and hierarchical encoding in long document transformers", "output": [["Comparison of sparse attention and hierarchical encoding in long document transformers", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "corpus": "arxiv"}
{"tstamp": 1722226909.7935, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722226909.6981, "finish": 1722226909.7935, "ip": "", "conv_id": "565cd683cdf7445296d0b3a2377c66c4", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Unsupervised Data Generation technique for creating training data to train NLP models without annotations.", "output": [["Unsupervised Data Generation technique for creating training data to train NLP models without annotations.", "Title: Towards Zero-Label Language Learning\n\nAbstract: This paper explores zero-label learning in Natural Language Processing (NLP), whereby no human-annotated data is used anywhere during training and models are trained purely on synthetic data. At the core of our framework is a novel approach for better leveraging the powerful pretrained language models. Specifically, inspired by the recent success of few-shot inference on GPT-3, we present a training data creation procedure named Unsupervised Data Generation (UDG), which leverages few-shot prompts to synthesize high-quality training data without real human annotations. Our method enables zero-label learning as we train task-specific models solely on the synthetic data, yet we achieve better or comparable results from strong baseline models trained on human-labeled data. Furthermore, when mixed with labeled data, our approach serves as a highly effective data augmentation procedure, achieving new state-of-the-art results on the SuperGLUE benchmark."]], "corpus": "arxiv"}
{"tstamp": 1722226909.7935, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722226909.6981, "finish": 1722226909.7935, "ip": "", "conv_id": "8bf31665ddaf4843a18cb2798d0df7e0", "model_name": "BM25", "prompt": "Unsupervised Data Generation technique for creating training data to train NLP models without annotations.", "output": [["Unsupervised Data Generation technique for creating training data to train NLP models without annotations.", "Title: Healthcare NER Models Using Language Model Pretraining\n\nAbstract: In this paper, we present our approach to extracting structured information from unstructured Electronic Health Records (EHR) [2] which can be used to, for example, study adverse drug reactions in patients due to chemicals in their products. Our solution uses a combination of Natural Language Processing (NLP) techniques and a web-based annotation tool to optimize the performance of a custom Named Entity Recognition (NER) [1] model trained on a limited amount of EHR training data. This work was presented at the first Health Search and Data Mining Workshop (HSDM 2020) [26]. We showcase a combination of tools and techniques leveraging the recent advancements in NLP aimed at targeting domain shifts by applying transfer learning and language model pre-training techniques [3]. We present a comparison of our technique to the current popular approaches and show the effective increase in performance of the NER model and the reduction in time to annotate data.A key observation of the results presented is that the F1 score of model (0.734) trained with our approach with just 50% of available training data outperforms the F1 score of the blank spaCy model without language model component (0.704) trained with 100% of the available training data. We also demonstrate an annotation tool to minimize domain expert time and the manual effort required to generate such a training dataset. Further, we plan to release the annotated dataset as well as the pre-trained model to the community to further research in medical health records."]], "corpus": "arxiv"}
{"tstamp": 1722226963.5592, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722226963.4173, "finish": 1722226963.5592, "ip": "", "conv_id": "0d903abd2da141e085a90ac55dddcac7", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Research about speech enhancement with speaker-aware features and no auxiliary test-phase signals", "output": [["Research about speech enhancement with speaker-aware features and no auxiliary test-phase signals", "Title: Auditory Model based Phase-Aware Bayesian Spectral Amplitude Estimator for Single-Channel Speech Enhancement\n\nAbstract: Bayesian estimation of short-time spectral amplitude is one of the most predominant approaches for the enhancement of the noise corrupted speech. The performance of these estimators are usually significantly improved when any perceptually relevant cost function is considered. On the other hand, the recent progress in the phase-based speech signal processing have shown that the phase-only enhancement based on spectral phase estimation methods can also provide joint improvement in the perceived speech quality and intelligibility, even in low SNR conditions. In this paper, to take advantage of both the perceptually motivated cost function involving STSAs of estimated and true clean speech and utilizing the prior spectral phase information, we have derived a phase-aware Bayesian STSA estimator. The parameters of the cost function are chosen based on the characteristics of the human auditory system, namely, the dynamic compressive nonlinearity of the cochlea, the perceived loudness theory and the simultaneous masking properties of the ear. This type of parameter selection scheme results in more noise reduction while limiting the speech distortion. The derived STSA estimator is optimal in the MMSE sense if the prior phase information is available. In practice, however, typically only an estimate of the clean speech phase can be obtained via employing different types of spectral phase estimation techniques which have been developed throughout the last few years. In a blind setup, we have evaluated the proposed Bayesian STSA estimator with different types of standard phase estimation methods available in the literature. Experimental results have shown that the proposed estimator can achieve substantial improvement in performance than the traditional phase-blind approaches."]], "corpus": "arxiv"}
{"tstamp": 1722226963.5592, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722226963.4173, "finish": 1722226963.5592, "ip": "", "conv_id": "6ac5d103c1db4b2ca3c16a5eaf7fb9dd", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Research about speech enhancement with speaker-aware features and no auxiliary test-phase signals", "output": [["Research about speech enhancement with speaker-aware features and no auxiliary test-phase signals", "Title: Speech Enhancement using Self-Adaptation and Multi-Head Self-Attention\n\nAbstract: This paper investigates a self-adaptation method for speech enhancement using auxiliary speaker-aware features; we extract a speaker representation used for adaptation directly from the test utterance. Conventional studies of deep neural network (DNN)--based speech enhancement mainly focus on building a speaker independent model. Meanwhile, in speech applications including speech recognition and synthesis, it is known that model adaptation to the target speaker improves the accuracy. Our research question is whether a DNN for speech enhancement can be adopted to unknown speakers without any auxiliary guidance signal in test-phase. To achieve this, we adopt multi-task learning of speech enhancement and speaker identification, and use the output of the final hidden layer of speaker identification branch as an auxiliary feature. In addition, we use multi-head self-attention for capturing long-term dependencies in the speech and noise. Experimental results on a public dataset show that our strategy achieves the state-of-the-art performance and also outperform conventional methods in terms of subjective quality."]], "corpus": "arxiv"}
{"tstamp": 1722226979.3012, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722226979.1956, "finish": 1722226979.3012, "ip": "", "conv_id": "c81179613507466f9501f236a8beb4a9", "model_name": "BM25", "prompt": "Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "output": [["Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "Title: Universal Adversarial Triggers for Attacking and Analyzing NLP\n\nAbstract: Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of \"why\" questions in SQuAD to be answered \"to kill american people\", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models."]], "corpus": "arxiv"}
{"tstamp": 1722226979.3012, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722226979.1956, "finish": 1722226979.3012, "ip": "", "conv_id": "95105e1e8df945a7b819bd50bcd0a76a", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "output": [["Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "Title: Universal Adversarial Triggers for Attacking and Analyzing NLP\n\nAbstract: Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of \"why\" questions in SQuAD to be answered \"to kill american people\", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models."]], "corpus": "arxiv"}
{"tstamp": 1722226991.5645, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722226991.3804, "finish": 1722226991.5645, "ip": "", "conv_id": "c966ef1f66124006834ce9fae7ec6c57", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Adversarial example generation for text classification using BERT masked language model ", "output": [["Adversarial example generation for text classification using BERT masked language model ", "Title: Measuring Adversarial Datasets\n\nAbstract: In the era of widespread public use of AI systems across various domains, ensuring adversarial robustness has become increasingly vital to maintain safety and prevent undesirable errors. Researchers have curated various adversarial datasets (through perturbations) for capturing model deficiencies that cannot be revealed in standard benchmark datasets. However, little is known about how these adversarial examples differ from the original data points, and there is still no methodology to measure the intended and unintended consequences of those adversarial transformations. In this research, we conducted a systematic survey of existing quantifiable metrics that describe text instances in NLP tasks, among dimensions of difficulty, diversity, and disagreement. We selected several current adversarial effect datasets and compared the distributions between the original and their adversarial counterparts. The results provide valuable insights into what makes these datasets more challenging from a metrics perspective and whether they align with underlying assumptions."]], "corpus": "arxiv"}
{"tstamp": 1722226991.5645, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722226991.3804, "finish": 1722226991.5645, "ip": "", "conv_id": "275ee03e6e634f92968096b192b9ae4a", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Adversarial example generation for text classification using BERT masked language model ", "output": [["Adversarial example generation for text classification using BERT masked language model ", "Title: Similar glassy features in the NMR response of pure and disordered La1.88Sr0.12CuO4\n\nAbstract: High Tc superconductivity in La2-xSrxCuO4 coexists with (striped and glassy) magnetic order. Here, we report NMR measurements of the 139La spin-lattice relaxation, which displays a stretched-exponential time dependence, in both pure and disordered x=0.12 single crystals. An analysis in terms of a distribution of relaxation rates T1^-1 indicates that i) the spin-freezing temperature is spatially inhomogeneous with an onset at Tg(onset)=20 K for the pristine samples, and ii) the width of the T1^-1 distribution in the vicinity of Tg(onset) is insensitive to an ~1% level of atomic disorder in CuO2 planes. This suggests that the stretched-exponential 139La relaxation, considered as a manifestation of the systems glassiness, may not arise from quenched disorder."]], "corpus": "arxiv"}
{"tstamp": 1722227001.4038, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722227001.3034, "finish": 1722227001.4038, "ip": "", "conv_id": "73fda287fa15475a92a663d6a3dba7cb", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Achieving high accuracy in English-French plagiarism detection using word embeddings", "output": [["Achieving high accuracy in English-French plagiarism detection using word embeddings", "Title: UsingWord Embedding for Cross-Language Plagiarism Detection\n\nAbstract: This paper proposes to use distributed representation of words (word embeddings) in cross-language textual similarity detection. The main contributions of this paper are the following: (a) we introduce new cross-language similarity detection methods based on distributed representation of words; (b) we combine the different methods proposed to verify their complementarity and finally obtain an overall F1 score of 89.15% for English-French similarity detection at chunk level (88.5% at sentence level) on a very challenging corpus."]], "corpus": "arxiv"}
{"tstamp": 1722227001.4038, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722227001.3034, "finish": 1722227001.4038, "ip": "", "conv_id": "339520347d484e1c8068e44e4e4e7452", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Achieving high accuracy in English-French plagiarism detection using word embeddings", "output": [["Achieving high accuracy in English-French plagiarism detection using word embeddings", "Title: Studies of Plasma Detachment Using a One Dimensional Model for Divertor Operation\n\nAbstract: To characterize the conditions required to reach advanced divertor regimes, a one-dimensional computational model has been developed based on a coordinate transformation to incorporate two-dimensional effects. This model includes transport of ions, two species each of atoms and molecules, momentum, and ion and electron energy both within and across the flux surfaces. Impurity radiation is calculated using a coronal equilibrium model which includes the effects of charge-exchange recombination. Numerical results indicate that impurity radiation acts to facilitate plasma detachment and enhances the power lost from the divertor channel in escaping neutral atoms by cooling the electrons and suppressing ionization. As divertor particle densities increase, cold and thermal molecules become increasingly important in cooling the plasma, with molecular densities dominating electron and atomic densities under some conditions."]], "corpus": "arxiv"}
{"tstamp": 1722227013.155, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722227012.9814, "finish": 1722227013.155, "ip": "", "conv_id": "de8d07d9dc434154b214bde3478b3319", "model_name": "BM25", "prompt": "A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "output": [["A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "Title: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision\n\nAbstract: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt."]], "corpus": "arxiv"}
{"tstamp": 1722227013.155, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722227012.9814, "finish": 1722227013.155, "ip": "", "conv_id": "21bf134438c34376b884b388bf7b8c19", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "output": [["A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "Title: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision\n\nAbstract: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt."]], "corpus": "arxiv"}
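Each line of these files is a self-contained JSON object (JSONL). For readers who want to inspect the committed data, here is a minimal sketch, assuming only the fields visible in the per-query records above ("model_name", "prompt", "output") and the file path this commit touches; it is not part of the dataset tooling.

import json
from collections import Counter

path = "data/retrieval_individual-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl"

counts = Counter()
with open(path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue  # skip blank lines defensively
        rec = json.loads(line)  # one JSON object per line (JSONL)
        counts[rec["model_name"]] += 1

# Print how often each embedding model was queried in this log.
for model, n in counts.most_common():
    print(f"{model}: {n} queries")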