{"tstamp": 1722223836.4697, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "1e069e026372469d813240ff569580d8", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "bf0b981876b14c12ad953a3a042f49d4", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. 
This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "1_corpus": "arxiv"} {"tstamp": 1722223862.4429, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "236e330aa5dd46eeb2c678b95a41260c", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Search for gravitational waves from binary black hole inspirals in LIGO data\n\nAbstract: We report on a search for gravitational waves from binary black hole inspirals in the data from the second science run of the LIGO interferometers. The search focused on binary systems with component masses between 3 and 20 solar masses. Optimally oriented binaries with distances up to 1 Mpc could be detected with efficiency of at least 90%. We found no events that could be identified as gravitational waves in the 385.6 hours of data that we searched."]], "0_corpus": "arxiv", "1_conv_id": "0ec343c3523d47eaae6a6a59d682f5f2", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Is the Commercial Mass Media Necessary, or Even Desirable, for Liberal Democracy?\n\nAbstract: Is a commercial mass media, dependent on the market for its sustenance, necessary, or even desirable, for liberal democracy? Yochai Benkler has argued that a decentralized, peer-to-peer system of communications and information is both possible with digital technology and preferable to a system based on commercial mass media. He has contended in fact that the presence of politically powerful, copyright-rich mass media imposes significant barriers to the development of peer-to-peer information-sharing networks. In contrast, I have argued that the commercial mass media play an important, and perhaps even vital, role in liberal democracy by galvanizing public opinion, serving as a watchdog against government and corporate wrongdoing, agenda-setting (which enables public discourse), and serving as a relatively trustworthy source of information. This paper seeks to push the ball forward on this issue. It first isolates and enumerates the contributions that the commercial mass media are said to make towards liberal democracy. It then briefly assesses the extent to which the commercial mass media actually fulfills these constitutive functions. It then asks whether alternative institutions might serve some or all of these functions just as well or better. In so doing, it looks both to the past and the future. First, it examines the political party-supported press that thrived in the United States through much of the 19th century. Second, it examines government-funded mass media. Third, it looks, skeptically, at possibilities for peer-to-peer sharing of information and opinion in the digital network environment. 
I conclude that, despite the weaknesses of commercial mass media, an information policy suitable to liberal democracy should include a plurality of types of voices, including commercial mass media."]], "1_corpus": "arxiv"} {"tstamp": 1722223898.7505, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "65a408d68bae4872a39c85580503e046", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Search for gravitational waves from binary black hole inspirals in LIGO data\n\nAbstract: We report on a search for gravitational waves from binary black hole inspirals in the data from the second science run of the LIGO interferometers. The search focused on binary systems with component masses between 3 and 20 solar masses. Optimally oriented binaries with distances up to 1 Mpc could be detected with efficiency of at least 90%. We found no events that could be identified as gravitational waves in the 385.6 hours of data that we searched."]], "0_corpus": "arxiv", "1_conv_id": "db30f73e43a742018303a5aa939ef5b7", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: InstructCoder: Instruction Tuning Large Language Models for Code Editing\n\nAbstract: Code editing encompasses a variety of pragmatic tasks that developers deal with daily. Despite its relevance and practical usefulness, automatic code editing remains an underexplored area in the evolution of deep learning models, partly due to data scarcity. In this work, we explore the use of Large Language Models (LLMs) to edit code based on user instructions. Evaluated on a novel human-written execution-based benchmark dubbed EditEval, we found current models often struggle to fulfill the instructions. In light of this, we contribute InstructCoder, the first instruction-tuning dataset designed to adapt LLMs for general-purpose code editing, containing high-diversity code-editing tasks such as comment insertion, code optimization, and code refactoring. It consists of over 114,000 instruction-input-output triplets and covers multiple distinct code editing scenarios. The collection process starts with filtered commit data sourced from GitHub Python repositories as seeds. Subsequently, the dataset is systematically expanded through an iterative process, where both seed and generated tasks are used to prompt ChatGPT for more data. Our findings reveal that open-source LLMs fine-tuned on InstructCoder can significantly enhance the accuracy of code edits, exhibiting superior code-editing performance matching advanced proprietary LLMs. 
The datasets and the source code are publicly available at https://github.com/qishenghu/CodeInstruct."]], "1_corpus": "arxiv"} {"tstamp": 1722223956.8024, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7fc1ca90e4f048ddb8f1a07606fe8c74", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "do you need a visa for a connecting flight in vietnam?", "0_output": [["do you need a visa for a connecting flight in vietnam?", "Meta Stackexchange\n\nQ: Stack Exchange site to ask Visa and passport related questions? Where can I ask questions about Passports and Visas? \nIs there a Stack Exchange site for this?\n\nA: travel's help centre says you can ask about visas.\n\ncustoms & border crossing, visas and other travel documents such as permits\n\nThey also have a passports tag.\nThe Expatriates Stack Exchange is also available for people living or intend to live abroad on a long-term basis.\nDo check out the help centre text of these sites before you ask to make sure your question is on topic. \n"]], "0_corpus": "stackexchange", "1_conv_id": "6798b0663e6740f8acf5d43bb61573f5", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "do you need a visa for a connecting flight in vietnam?", "1_output": [["do you need a visa for a connecting flight in vietnam?", "Stackoverflow Stackexchange\n\nQ: How to load data from splitted gzip files into redshift? Can I load data from splitted gzip files into an amazon redshift table?\nI can load data from single gzip file or from splitted text files.\nBut can I load data from splitted gzip files?\n\nA: I'm assuming here that you mean that you have multiple CSV files that are each gzipped.\nFirst, upload each file to an S3 bucket under the same prefix and delimiter.\ns3://S3_BUCKET/S3_PREFIX/file0.gz\ns3://S3_BUCKET/S3_PREFIX/file1.gz\nThen, execute the Redshift copy command:\ncopy TABLE_NAME from 's3://S3_BUCKET/S3_PREFIX' credentials 'aws_access_key_id=ACCESS_KEY;aws_secret_access_key=SECRET_KEY' csv gzip\n\n\n*\n\n*specify the S3 bucket and prefix\n\n*include credentials that have permission to read the S3 objects\n\n*ensure that the destination table already exists and has columns compatible with the CSV\n\n*specify the csv and gzip options\n\n\nAlso, it's a good idea to have your file count proportional to the number of nodes in your cluster.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722224073.6957, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "66e5be6065b649e7b34405ae69767708", "0_model_name": "text-embedding-004", "0_prompt": "How to reduce overoptimization in reinforcement learning from human feedback?", "0_output": [["How to reduce overoptimization in reinforcement learning from human feedback?", "Title: Reward Model Ensembles Help Mitigate Overoptimization\n\nAbstract: Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the \"true\" reward, these learned reward models are susceptible to overoptimization. Gao et al. (2023) studied this phenomenon in a synthetic human feedback setup with a significantly larger \"gold\" reward model acting as the true reward (instead of humans) and showed that overoptimization remains a persistent problem regardless of the size of the proxy reward model and training data used. 
Using a similar setup, we conduct a systematic study to evaluate the efficacy of using ensemble-based conservative optimization objectives, specifically worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for mitigating reward model overoptimization when using two optimization methods: (a) best-of-n sampling (BoN) (b) proximal policy optimization (PPO). We additionally extend the setup of Gao et al. (2023) to include 25% label noise to better mirror real-world conditions. Both with and without label noise, we find that conservative optimization practically eliminates overoptimization and improves performance by up to 70% for BoN sampling. For PPO, ensemble-based conservative optimization always reduces overoptimization and outperforms single reward model optimization. Moreover, combining it with a small KL penalty successfully prevents overoptimization at no performance cost. Overall, our results demonstrate that ensemble-based conservative optimization can effectively counter overoptimization."]], "0_corpus": "arxiv", "1_conv_id": "4e634cb1df8740f69f2a07e9fae99789", "1_model_name": "embed-english-v3.0", "1_prompt": "How to reduce overoptimization in reinforcement learning from human feedback?", "1_output": [["How to reduce overoptimization in reinforcement learning from human feedback?", "Title: Scalable Ensembling For Mitigating Reward Overoptimisation\n\nAbstract: Reinforcement Learning from Human Feedback (RLHF) has enabled significant advancements within language modeling for powerful, instruction-following models. However, the alignment of these models remains a pressing challenge as the policy tends to overfit the learned ``proxy\" reward model past an inflection point of utility as measured by a ``gold\" reward model that is more performant -- a phenomenon known as overoptimisation. Prior work has mitigated this issue by computing a pessimistic statistic over an ensemble of reward models, which is common in Offline Reinforcement Learning but incredibly costly for language models with high memory requirements, making such approaches infeasible for sufficiently large models. To this end, we propose using a shared encoder but separate linear heads. We find this leads to similar performance as the full ensemble while allowing tremendous savings in memory and time required for training for models of similar size."]], "1_corpus": "arxiv"} {"tstamp": 1722224112.4536, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "63ac396150d24f6c8262e8040d6c76d1", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "what is the difference between domain users and authenticated users?", "0_output": [["what is the difference between domain users and authenticated users?", "Stackoverflow Stackexchange\n\nQ: Erlang: robustness against port owner's death What happens when the process which owns a spawned port dies and is restarted by the supervisor? \n\n\n*\n\n*Is there a way for the old port not to die with its owner and for the new owner to \"take over\"?\n\n*Failing that, is it possible to ensure that the spawned process terminates when its port dies?\n\nA: First, notice that you don't want the port owner to die. So move any \"dangerous\" code out to another process and make the port owner as dumb as possible. This is error-mitigation the Erlang way. Now, if that process dies, something is really bad, so in that case it may be clever to recycle the port as well. 
But since we moved everything out, we are counting on that not happening.\nRegarding 2, the port will send a specific message when it terminates, so you can arrange for your spawned process to gracefully detect this and terminate with it. See\nhttp://www.erlang.org/doc/reference_manual/ports.html\n"]], "0_corpus": "stackexchange", "1_conv_id": "f92109e0200e4f72b4f46ba3abd430c4", "1_model_name": "BM25", "1_prompt": "what is the difference between domain users and authenticated users?", "1_output": [["what is the difference between domain users and authenticated users?", "Stackoverflow Stackexchange\n\nQ: What are the differences between LDAP and Active Directory authentication? I have tried to search without success. \n(I do know difference between LDAP and Active Directory. Also read this What are the differences between LDAP and Active Directory?)\nAccording to my understanding, LDAP authenticates (binds) with DN (distinguish name) and password.\nE.g. CN=bob, OU=Users,DC=test,DC=com \nActive Directory authenticates with a short user name or full domain user name and password.\nE.g. bob or bob@test.com \nTherefore, LDAP does not allow to users with the same CN in the same tree (OU) but it allows the same CN in a different OU.\nActive Directory does not allow the same user name in the same domain, even the user object located in different tree of Active Directory.\nActive Directory allows the same user name in different domains.\nOther differences? \n\nA: Active Directory is a product and a directory service. LDAP is an application layer protocol and an IETF standard. Active Directory supports and implements LDAP, therefore allowing RFC-compliant LDAP clients to talk to it and perform LDAP operations such as bind, search, etc.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722224122.2784, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "a178424589e04da081763aca087bef8c", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. 
This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "0_corpus": "arxiv", "1_conv_id": "30174116a3ee429cb683135e80854d8d", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"} {"tstamp": 1722224143.0986, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "6e0b0281dbdd47e5987c4f7945b122f3", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. 
MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "7645ad3134ce4cf88ac3eace789bf588", "1_model_name": "BM25", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Text and Code Embeddings by Contrastive Pre-Training\n\nAbstract: Text embeddings are useful features in many applications such as semantic search and computing text similarity. Previous work typically trains models customized for different use cases, varying in dataset choice, training objective and model architecture. In this work, we show that contrastive pre-training on unsupervised data at scale leads to high quality vector representations of text and code. The same unsupervised text embeddings that achieve new state-of-the-art results in linear-probe classification also display impressive semantic search capabilities and sometimes even perform competitively with fine-tuned models. On linear-probe classification accuracy averaging over 7 tasks, our best unsupervised model achieves a relative improvement of 4% and 1.8% over previous best unsupervised and supervised text embedding models respectively. The same text embeddings when evaluated on large-scale semantic search attains a relative improvement of 23.4%, 14.7%, and 10.6% over previous best unsupervised methods on MSMARCO, Natural Questions and TriviaQA benchmarks, respectively. Similarly to text embeddings, we train code embedding models on (text, code) pairs, obtaining a 20.8% relative improvement over prior best work on code search."]], "1_corpus": "arxiv"} {"tstamp": 1722224179.2706, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "83d0fa49a2b744c093ee6c2034861706", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. 
This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "0_corpus": "arxiv", "1_conv_id": "d68199ab940144d0864864688663a7aa", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"} {"tstamp": 1722224206.9709, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f9e33c80a7f84346879ff3a110995c39", "0_model_name": "text-embedding-3-large", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. 
MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "c38a6a7929194e63a6fae84fb9f4817d", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"} {"tstamp": 1722224292.4751, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "9c20a131b0b0460caf8e9cc90271ad58", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. 
MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "8fbd7a439898404aa1b56dd7fd31b164", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"} {"tstamp": 1722224357.4907, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "daa307131487485b94995384043a8300", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "do crops respawn in skyrim?", "0_output": [["do crops respawn in skyrim?", "Meta Stackexchange\n\nQ: Google time and answer mistakes Take a look a this : \n\nOn the google search page I can see \n\n\n*\n\n*13 hours ago\n\n*1 answer\n\n\nWhen I actually posted the question :\n\n\n*\n\n*8 mins ago\n\n*0 answers\n\n\nThis should be fixed.\n"]], "0_corpus": "stackexchange", "1_conv_id": "789b1e47056746a7a25daa4239fc0c83", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "do crops respawn in skyrim?", "1_output": [["do crops respawn in skyrim?", "Gaming Stackexchange\n\nQ: Do Nirnroots regrow in Skyrim? In Oblivion, Nirnroots were a collectible - there was a finite number of them*, and they did not respawn.\nI've been noticing, however, that when I pick Nirnroots in Skyrim, the roots of the plant remain behind. This got me wondering...\nDo Nirnroots regrow in Skyrim?\nWhat about the Crimson Nirnroots of Blackreach?\n*discounting the respawning food-nirnroot in Frostcrag Spire\n\nA: I can confirm they grow back after 15 days also. I'm using the PS3 version and picked about 20 of them before doing the related quest. It does give you a negative number if you try dropping them and picking them back up. So, I was forced to wait 15 days to pick a total of 50 of them. \n"]], "1_corpus": "stackexchange"} {"tstamp": 1722224394.178, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5f396b9d69c1400891c654d8ca529105", "0_model_name": "BM25", "0_prompt": "why star trek tng ended?", "0_output": [["why star trek tng ended?", "Scifi Stackexchange\n\nQ: Why did Star Trek: The Next Generation season 2 only have 22 episodes? 
In the Star Trek franchise shows Star Trek: The Next Generation, Star Trek: Deep Space Nine and Star Trek: Voyager, each show almost always had 26 episodes per season. Besides Voyager's first season, the only exception in this is the second season of Star Trek: The Next Generation, which instead had 22. Why is this?\n\nA: There was a writers' strike in 1988.\nThe Writers\u00b4 Guild of America went out on strike in the spring and summer of 1988. They demanded more creative control and residuals for the shows they wrote.\nThe writers of ST:TNG at Paramount joined in despite their Sacred Duty.\nThe strike ended on August 7, but the production pipeline was adversely affected, and ST:TNG wasn't able to start airing until November 21. (This episode, \"The Child\", contained the first appearance of Whoopi Goldberg playing Guinan.)\n"]], "0_corpus": "stackexchange", "1_conv_id": "f73a0f20a5ff46999df925bea5719cea", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "why star trek tng ended?", "1_output": [["why star trek tng ended?", "Mathematica Stackexchange\n\nQ: how to download Finite ELements tutorial as PDF For long tutorials, I prefer hard copy.\nIs it possible to download from somewhere this document\nhttps://reference.wolfram.com/language/FEMDocumentation/tutorial/FiniteElementOverview.html \nas PDF file? Few years ago WRI used to have PDF files for all its documentations, but I am not able to find one for the above now.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722224426.1887, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "ae3aeff4b1ee4843b902224782c54d18", "0_model_name": "BM25", "0_prompt": "Paper on SEACrowd", "0_output": [["Paper on SEACrowd", "Title: SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages\n\nAbstract: Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, we introduce SEACrowd, a collaborative initiative that consolidates a comprehensive resource hub that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in SEA."]], "0_corpus": "arxiv", "1_conv_id": "92c627814732418a8538ae4e026e72c6", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Paper on SEACrowd", "1_output": [["Paper on SEACrowd", "Title: Carriers of \\emph{Sargassum} and mechanism for coastal inundation in the Caribbean Sea\n\nAbstract: We identify effective carriers of \\emph{Sargassum} in the Caribbean Sea and describe a mechanism for coastal choking. 
Revealed from satellite altimetry, the carriers of \\emph{Sargassum} are mesoscale eddies (vortices of 50-km radius or larger) with coherent material (i.e., fluid) boundaries. These are observer-independent -- \\emph{unlike} eddy boundaries identified with instantaneously closed streamlines of the altimetric sea-surface height field -- and furthermore harbor finite-time attractors for networks of elastically connected finite-size buoyant or \"inertial\" particles dragged by ocean currents and winds, a mathematical abstraction of \\emph{Sargassum} rafts. The mechanism of coastal inundation, identified using a minimal model of surface-intensified Caribbean Sea eddies, is thermal instability in the presence of bottom topography."]], "1_corpus": "arxiv"} {"tstamp": 1722224478.384, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "f0607dbb82af41dab851d3746f6f5f71", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Paper that shows multitask finetuned language models can generalize to unseen languages", "0_output": [["Paper that shows multitask finetuned language models can generalize to unseen languages", "Title: Constructive Complexity and Artificial Reality: An Introduction\n\nAbstract: Basic problems of complex systems are outlined with an emphasis on irreducibility and dynamic many-to-many correspondences. We discuss the importance of a constructive approach to artificial reality and the significance of an internal observer."]], "0_corpus": "arxiv", "1_conv_id": "1f4529b8817f4f0b8077850d8dccbda8", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Paper that shows multitask finetuned language models can generalize to unseen languages", "1_output": [["Paper that shows multitask finetuned language models can generalize to unseen languages", "Title: The Absence of Positive Energy Bound States for a Class of Nonlocal Potentials\n\nAbstract: We generalize in this paper a theorem of Titchmarsh for the positivity of Fourier sine integrals. We apply then the theorem to derive simple conditions for the absence of positive energy bound states (bound states embedded in the continuum) for the radial Schr\\\"odinger equation with nonlocal potentials which are superposition of a local potential and separable potentials."]], "1_corpus": "arxiv"} {"tstamp": 1722224501.6296, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "b0eecc5d9fd54f6594ba143e3606f4d2", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Paper that introduced GPT-3", "0_output": [["Paper that introduced GPT-3", "Title: a survey on GPT-3\n\nAbstract: This paper provides an introductory survey to GPT-3. We cover some of the historical development behind this technology, some of the key features of GPT-3, and discuss the machine learning model and the datasets used. We survey both academic and commercial efforts applying GPT-3 in diverse domains such as developing conversational AI chatbots, software development, creative work, domain knowledge, and business productivity. We discuss some of the challenges that GPT-3 faces such as the problems of training complexity, bias, and hallucination/incorrect answers. 
We also discuss the future research opportunities in this area."]], "0_corpus": "arxiv", "1_conv_id": "a278bcf9cee34726ac3b8100de0e1d2a", "1_model_name": "embed-english-v3.0", "1_prompt": "Paper that introduced GPT-3", "1_output": [["Paper that introduced GPT-3", "Title: Improving Short Text Classification With Augmented Data Using GPT-3\n\nAbstract: GPT-3 is a large-scale natural language model developed by OpenAI that can perform many different tasks, including topic classification. Although researchers claim that it requires only a small number of in-context examples to learn a task, in practice GPT-3 requires these training examples to be either of exceptional quality or a higher quantity than easily created by hand. To address this issue, this study teaches GPT-3 to classify whether a question is related to data science by augmenting a small training set with additional examples generated by GPT-3 itself. This study compares two classifiers: the GPT-3 Classification Endpoint with augmented examples, and the GPT-3 Completion Endpoint with an optimal training set chosen using a genetic algorithm. We find that while the augmented Completion Endpoint achieves upwards of 80 percent validation accuracy, using the augmented Classification Endpoint yields more consistent accuracy on unseen examples. In this way, giving large-scale machine learning models like GPT-3 the ability to propose their own additional training examples can result in improved classification performance."]], "1_corpus": "arxiv"} {"tstamp": 1722224531.6939, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "c7374c4ffde543f99eb8379b8225a12b", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "paper showing crosslingual generalization is possible", "0_output": [["paper showing crosslingual generalization is possible", "Title: Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization\n\nAbstract: Large language models (LLMs) have exhibited considerable cross-lingual generalization abilities, whereby they implicitly transfer knowledge across languages. However, the transfer is not equally successful for all languages, especially for low-resource ones, which poses an ongoing challenge. It is unclear whether we have reached the limits of implicit cross-lingual generalization and if explicit knowledge transfer is viable. In this paper, we investigate the potential for explicitly aligning conceptual correspondence between languages to enhance cross-lingual generalization. Using the syntactic aspect of language as a testbed, our analyses of 43 languages reveal a high degree of alignability among the spaces of structural concepts within each language for both encoder-only and decoder-only LLMs. We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon. 
Experiments on syntactic analysis tasks show that our approach achieves competitive results with state-of-the-art methods and narrows the performance gap between languages, particularly benefiting those with limited resources."]], "0_corpus": "arxiv", "1_conv_id": "3283162f3da548e08c51faf0101b6c31", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "paper showing crosslingual generalization is possible", "1_output": [["paper showing crosslingual generalization is possible", "Title: Asymptotic Exit Location Distributions in the Stochastic Exit Problem\n\nAbstract: Consider a two-dimensional continuous-time dynamical system, with an attracting fixed point $S$. If the deterministic dynamics are perturbed by white noise (random perturbations) of strength $\\epsilon$, the system state will eventually leave the domain of attraction $\\Omega$ of $S$. We analyse the case when, as $\\epsilon\\to0$, the exit location on the boundary $\\partial\\Omega$ is increasingly concentrated near a saddle point $H$ of the deterministic dynamics. We show that the asymptotic form of the exit location distribution on $\\partial\\Omega$ is generically non-Gaussian and asymmetric, and classify the possible limiting distributions. A key role is played by a parameter $\\mu$, equal to the ratio $|\\lambda_s(H)|/\\lambda_u(H)$ of the stable and unstable eigenvalues of the linearized deterministic flow at $H$. If $\\mu<1$ then the exit location distribution is generically asymptotic as $\\epsilon\\to0$ to a Weibull distribution with shape parameter $2/\\mu$, on the $O(\\epsilon^{\\mu/2})$ length scale near $H$. If $\\mu>1$ it is generically asymptotic to a distribution on the $O(\\epsilon^{1/2})$ length scale, whose moments we compute. The asymmetry of the asymptotic exit location distribution is attributable to the generic presence of a `classically forbidden' region: a wedge-shaped subset of $\\Omega$ with $H$ as vertex, which is reached from $S$, in the $\\epsilon\\to0$ limit, only via `bent' (non-smooth) fluctuational paths that first pass through the vicinity of $H$. We deduce from the presence of this forbidden region that the classical Eyring formula for the small-$\\epsilon$ exponential asymptotics of the mean first exit time is generically inapplicable."]], "1_corpus": "arxiv"} {"tstamp": 1722224559.6652, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "4a5116c2fe724927a11e34daeb171b0b", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "are all naruto shippuden episodes dubbed yet?", "0_output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. 
He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "0_corpus": "stackexchange", "1_conv_id": "cb439cf0253b48a4bec18921c3e8c6db", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "are all naruto shippuden episodes dubbed yet?", "1_output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722224725.1029, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "16484c76198f456e85b2160419ef24e9", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "what is the difference between phd and pdf?", "0_output": [["what is the difference between phd and pdf?", "Stackoverflow Stackexchange\n\nQ: Importing Drupal 5 content into Drupal 6 I need to import several thousand nodes from a Drupal 5 sql dump into a Drupal 6 website. I tried using Node Import and Import/Export API but very little success... any ideas as to how this can be done?\nMany thanks!\nJane\n\nA: I used Views Bonus Pack on a Drupal 5 site to export the nodes into a csv file.\nThen I used the feeds module to import the csv into a Drupal 6 site.\n"]], "0_corpus": "stackexchange", "1_conv_id": "85af0818d827487d854641defc5a3e76", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "what is the difference between phd and pdf?", "1_output": [["what is the difference between phd and pdf?", "Math Stackexchange\n\nQ: Difference between a PDF and PMF What is the difference between a probability density function and a probability mass function?\n\nA: One is for discrete variables, the other for continuous.\nThey are also interpreted differently. The PDF is a probability density. If f(x) is a PDF, f(x) doesn't tell you the probability of getting x (In fact, the probability of getting precisely x is 0).\nThe way you use a PDF is through integration. 
So, the probability of getting a value that is in a dx area around x is approximately f(x)*dx.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722224747.2038, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "27b74b6e3c64477498043a197913dbe6", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Unified-IO model", "0_output": [["Unified-IO model", "Title: Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action\n\nAbstract: We present Unified-IO 2, the first autoregressive multimodal model that is capable of understanding and generating image, text, audio, and action. To unify different modalities, we tokenize inputs and outputs -- images, text, audio, action, bounding boxes, etc., into a shared semantic space and then process them with a single encoder-decoder transformer model. Since training with such diverse modalities is challenging, we propose various architectural improvements to stabilize model training. We train our model from scratch on a large multimodal pre-training corpus from diverse sources with a multimodal mixture of denoisers objective. To learn an expansive set of skills, such as following multimodal instructions, we construct and finetune on an ensemble of 120 datasets with prompts and augmentations. With a single unified model, Unified-IO 2 achieves state-of-the-art performance on the GRIT benchmark and strong results in more than 35 benchmarks, including image generation and understanding, natural language understanding, video and audio understanding, and robotic manipulation. We release all our models to the research community."]], "0_corpus": "arxiv", "1_conv_id": "b4bc088789164404bebf053d0828474d", "1_model_name": "embed-english-v3.0", "1_prompt": "Unified-IO model", "1_output": [["Unified-IO model", "Title: A Unified Programming Model for Heterogeneous Computing with CPU and Accelerator Technologies\n\nAbstract: This paper consists of three parts. The first part provides a unified programming model for heterogeneous computing with CPU and accelerator (like GPU, FPGA, Google TPU, Atos QPU, and more) technologies. To some extent, this new programming model makes programming across CPUs and accelerators turn into usual programming tasks with common programming languages, and relieves complexity of programming across CPUs and accelerators. It can be achieved by extending file managements in common programming languages, such as C/C++, Fortran, Python, MPI, etc., to cover accelerators as I/O devices. In the second part, we show that all types of computer systems can be reduced to the simplest type of computer system, a single-core CPU computer system with I/O devices, by the unified programming model. Thereby, the unified programming model can truly build the programming of various computer systems on one API (i.e. file managements of common programming languages), and can make programming for various computer systems easier. In third part, we present a new approach to coupled applications computing (like multidisciplinary simulations) by the unified programming model. 
The unified programming model makes coupled applications computing more natural and easier since it only relies on its own power to couple multiple applications through MPI."]], "1_corpus": "arxiv"} {"tstamp": 1722224768.3473, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "72ed621be6a54d4a8eb632a62857fa63", "0_model_name": "embed-english-v3.0", "0_prompt": "Good benchmark for multitask performance of llms", "0_output": [["Good benchmark for multitask performance of llms", "Title: Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?\n\nAbstract: Large language models (LLMs) are typically prompted to follow a single instruction per inference call. In this work, we analyze whether LLMs also hold the capability to handle multiple instructions simultaneously, denoted as Multi-Task Inference. For this purpose, we introduce the MTI Bench(Multi-Task Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000 instances across 25 tasks. Each task in the MTI Bench involves 2 to 3 sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces the total inference time by 1.46 times in average since it does not require multiple inference calls. Interestingly, contrary to the expectation that LLMs would perform better when tasks are divided, we find that state-of-the-art LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3% and 12.4% improved performance with Multi-Task Inference compared to Single-Task Inference on the MTI Bench. We release the MTI Bench dataset and our code at this link https://github.com/guijinSON/MTI-Bench."]], "0_corpus": "arxiv", "1_conv_id": "492d5f636fc94d049f457c71188d11c0", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Good benchmark for multitask performance of llms", "1_output": [["Good benchmark for multitask performance of llms", "Title: Limit theorems and ergodicity for general bootstrap random walks\n\nAbstract: Given the increments of a simple symmetric random walk $(X_n)_{n\\ge0}$, we characterize all possible ways of recycling these increments into a simple symmetric random walk $(Y_n)_{n\\ge0}$ adapted to the filtration of $(X_n)_{n\\ge0}$. We study the long term behavior of a suitably normalized two-dimensional process $((X_n,Y_n))_{n\\ge0}$. In particular, we provide necessary and sufficient conditions for the process to converge to a two-dimensional Brownian motion (possibly degenerate). We also discuss cases in which the limit is not Gaussian. Finally, we provide a simple necessary and sufficient condition for the ergodicity of the recycling transformation, thus generalizing results from Dubins and Smorodinsky (1992) and Fujita (2008), and solving the discrete version of the open problem of the ergodicity of the general L\\'evy transformation (see Mansuy and Yor, 2006)."]], "1_corpus": "arxiv"} {"tstamp": 1722224795.2981, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "00300e8adfc2436a82273eaac0616ce2", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "A paper evaluating whether updating one fact in a language model affects related facts.", "0_output": [["A paper evaluating whether updating one fact in a language model affects related facts.", "Title: Evaluating the Ripple Effects of Knowledge Editing in Language Models\n\nAbstract: Modern language models capture a large body of factual knowledge. 
However, some facts can be incorrectly induced or become obsolete over time, resulting in factually incorrect generations. This has led to the development of various editing methods that allow updating facts encoded by the model. Evaluation of these methods has primarily focused on testing whether an individual fact has been successfully injected, and if similar predictions for other subjects have not changed. Here we argue that such evaluation is limited, since injecting one fact (e.g. ``Jack Depp is the son of Johnny Depp'') introduces a ``ripple effect'' in the form of additional facts that the model needs to update (e.g.``Jack Depp is the sibling of Lily-Rose Depp''). To address this issue, we propose a novel set of evaluation criteria that consider the implications of an edit on related facts. Using these criteria, we then construct RippleEdits, a diagnostic benchmark of 5K factual edits, capturing a variety of types of ripple effects. We evaluate prominent editing methods on RippleEdits, showing that current methods fail to introduce consistent changes in the model's knowledge. In addition, we find that a simple in-context editing baseline obtains the best scores on our benchmark, suggesting a promising research direction for model editing."]], "0_corpus": "arxiv", "1_conv_id": "44bd3abae6e44faba2d1cf54ce8bec32", "1_model_name": "text-embedding-004", "1_prompt": "A paper evaluating whether updating one fact in a language model affects related facts.", "1_output": [["A paper evaluating whether updating one fact in a language model affects related facts.", "Title: WikiFactDiff: A Large, Realistic, and Temporally Adaptable Dataset for Atomic Factual Knowledge Update in Causal Language Models\n\nAbstract: The factuality of large language model (LLMs) tends to decay over time since events posterior to their training are \"unknown\" to them. One way to keep models up-to-date could be factual update: the task of inserting, replacing, or removing certain simple (atomic) facts within the model. To study this task, we present WikiFactDiff, a dataset that describes the evolution of factual knowledge between two dates as a collection of simple facts divided into three categories: new, obsolete, and static. We describe several update scenarios arising from various combinations of these three types of basic update. The facts are represented by subject-relation-object triples; indeed, WikiFactDiff was constructed by comparing the state of the Wikidata knowledge base at 4 January 2021 and 27 February 2023. Those fact are accompanied by verbalization templates and cloze tests that enable running update algorithms and their evaluation metrics. Contrary to other datasets, such as zsRE and CounterFact, WikiFactDiff constitutes a realistic update setting that involves various update scenarios, including replacements, archival, and new entity insertions. 
We also present an evaluation of existing update algorithms on WikiFactDiff."]], "1_corpus": "arxiv"} {"tstamp": 1722224897.7564, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "66d97eb1cd7d4e31b48bcf9e823bba43", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "0_output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks\n\nAbstract: Large language models have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. While being actively researched, multimodal models that can additionally handle images as input have yet to catch up in size and generality with language-only models. In this work, we ask whether language-only models can be utilised for tasks that require visual input -- but also, as we argue, often require a strong reasoning component. Similar to some recent related work, we make visual information accessible to the language model using separate verbalisation models. Specifically, we investigate the performance of open-source, open-access language models against GPT-3 on five vision-language tasks when given textually-encoded visual information. Our results suggest that language models are effective for solving vision-language tasks even with limited samples. This approach also enhances the interpretability of a model's output by providing a means of tracing the output back through the verbalised image content."]], "0_corpus": "arxiv", "1_conv_id": "e97d1a7a80bf4651a54001a5a1acb3b0", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "1_output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Visual Classification via Description from Large Language Models\n\nAbstract: Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages past interpretability. 
We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline."]], "1_corpus": "arxiv"} {"tstamp": 1722224923.4451, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "04682ef77bb14d3d987fa49503bd1c7b", "0_model_name": "voyage-multilingual-2", "0_prompt": "Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "0_output": [["Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "Title: Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade\n\nAbstract: Fully non-autoregressive neural machine translation (NAT) is proposed to simultaneously predict tokens with single forward of neural networks, which significantly reduces the inference latency at the expense of quality drop compared to the Transformer baseline. In this work, we target on closing the performance gap while maintaining the latency advantage. We first inspect the fundamental issues of fully NAT models, and adopt dependency reduction in the learning space of output tokens as the basic guidance. Then, we revisit methods in four different aspects that have been proven effective for improving NAT models, and carefully combine these techniques with necessary modifications. Our extensive experiments on three translation benchmarks show that the proposed system achieves the new state-of-the-art results for fully NAT models, and obtains comparable performance with the autoregressive and iterative NAT systems. For instance, one of the proposed models achieves 27.49 BLEU points on WMT14 En-De with approximately 16.5X speed up at inference time."]], "0_corpus": "arxiv", "1_conv_id": "e6e5e63a277c497b87192e49b67e8463", "1_model_name": "text-embedding-004", "1_prompt": "Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "1_output": [["Highest BLEU scores for fully NAT in WMT14 En-De with faster inference.", "Title: Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade\n\nAbstract: Fully non-autoregressive neural machine translation (NAT) is proposed to simultaneously predict tokens with single forward of neural networks, which significantly reduces the inference latency at the expense of quality drop compared to the Transformer baseline. In this work, we target on closing the performance gap while maintaining the latency advantage. We first inspect the fundamental issues of fully NAT models, and adopt dependency reduction in the learning space of output tokens as the basic guidance. Then, we revisit methods in four different aspects that have been proven effective for improving NAT models, and carefully combine these techniques with necessary modifications. Our extensive experiments on three translation benchmarks show that the proposed system achieves the new state-of-the-art results for fully NAT models, and obtains comparable performance with the autoregressive and iterative NAT systems. 
For instance, one of the proposed models achieves 27.49 BLEU points on WMT14 En-De with approximately 16.5X speed up at inference time."]], "1_corpus": "arxiv"} {"tstamp": 1722224949.6115, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "f93384bd5692403b9b937e5912ee5056", "0_model_name": "voyage-multilingual-2", "0_prompt": "A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "0_output": [["A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "Title: Baichuan 2: Open Large-scale Language Models\n\nAbstract: Large language models (LLMs) have demonstrated remarkable performance on a variety of natural language tasks based on just a few examples of natural language instructions, reducing the need for extensive feature engineering. However, most powerful LLMs are closed-source or limited in their capability for languages other than English. In this technical report, we present Baichuan 2, a series of large-scale multilingual language models containing 7 billion and 13 billion parameters, trained from scratch, on 2.6 trillion tokens. Baichuan 2 matches or outperforms other open-source models of similar size on public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan 2 excels in vertical domains such as medicine and law. We will release all pre-training model checkpoints to benefit the research community in better understanding the training dynamics of Baichuan 2."]], "0_corpus": "arxiv", "1_conv_id": "582d39f1436e46c086fb73ddaf7c9a24", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "1_output": [["A comprehensive analysis of a 280 billion parameter language model and its performance across 152 diverse tasks ", "Title: Seq2Mol: Automatic design of de novo molecules conditioned by the target protein sequences through deep neural networks\n\nAbstract: De novo design of molecules has recently enjoyed the power of generative deep neural networks. Current approaches aim to generate molecules either resembling the properties of the molecules of the training set or molecules that are optimized with respect to specific physicochemical properties. None of the methods generates molecules specific to a target protein. In the approach presented here, we introduce a method which is conditioned on the protein target sequence to generate de novo molecules that are relevant to the target. We use an implementation adapted from Google's \"Show and Tell\" image caption generation method, to generate SMILES strings of molecules from protein sequence embeddings generated by a deep bi-directional language model ELMo. ELMo is used to generate contextualized embedding vectors of the protein sequence. Using reinforcement learning, the trained model is further optimized through augmented episodic likelihood to increase the diversity of the generated compounds compared to the training set. We used the model to generate compounds for two major drug target families, i.e. for GPCRs and Tyrosine Kinase targets. The model generated compounds which are structurally different form the training set, while also being more similar to compounds known to bind to the two families of drug targets compared to a random set of molecules. 
The compounds further display reasonable synthesizability and drug-likeness scores."]], "1_corpus": "arxiv"} {"tstamp": 1722224964.6848, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "d3b69c82e28b436c969357870e392c7a", "0_model_name": "embed-english-v3.0", "0_prompt": "Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "0_output": [["Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "Title: Dynamic Multimodal Instance Segmentation guided by natural language queries\n\nAbstract: We address the problem of segmenting an object given a natural language expression that describes it. Current techniques tackle this task by either (\\textit{i}) directly or recursively merging linguistic and visual information in the channel dimension and then performing convolutions; or by (\\textit{ii}) mapping the expression to a space in which it can be thought of as a filter, whose response is directly related to the presence of the object at a given spatial coordinate in the image, so that a convolution can be applied to look for the object. We propose a novel method that integrates these two insights in order to fully exploit the recursive nature of language. Additionally, during the upsampling process, we take advantage of the intermediate information generated when downsampling the image, so that detailed segmentations can be obtained. We compare our method against the state-of-the-art approaches in four standard datasets, in which it surpasses all previous methods in six of eight of the splits for this task."]], "0_corpus": "arxiv", "1_conv_id": "b353792ca18b4bc9967223f30e5f0dc0", "1_model_name": "voyage-multilingual-2", "1_prompt": "Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "1_output": [["Paper introducing a model connecting object segmentation masks with natural language responses in multimodal settings.", "Title: Comprehensive Multi-Modal Interactions for Referring Image Segmentation\n\nAbstract: We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intramodal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. 
We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods."]], "1_corpus": "arxiv"} {"tstamp": 1722225012.2867, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "184da0dfea8b4f9d9a55e8015a197572", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "age is what type of scale?", "0_output": [["age is what type of scale?", "Stats Stackexchange\n\nQ: Should I treat age as a continuous variable or factor? Age is a predictor variable among a few others (gender, weight and height) and my response is the log-odds of a certain disease (binomial glm). \nMy age data runs from 21 until 40. I am not sure whether to treat age as a continuous variable or as a factor with age groups: 21-25, 26-30, 31-35, 36-40. \nAre there any plots which I can produce that can help determine which would be the better approach? \n\nA: It depends on the context.\nFor example if you are looking for the effect of age on children's height, it makes sense to look at it as a continuous ( integer) value.\nIf you're looking for e.g. the effect of age on oncogenesis then it makes sense if you look at age groups. Young vs old, above 55 and below 55, ...\nFor your example, unless age is a confounder of a hidden factor such as for example being college grad or still a student ( risk factor for young adults STD infection), I'd bin my data into reasonable bin sizes.\n"]], "0_corpus": "stackexchange", "1_conv_id": "deb4d7aa81cf4de1a7c78febb6fc3a3a", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "age is what type of scale?", "1_output": [["age is what type of scale?", "Stats Stackexchange\n\nQ: Are the variable types here considered correct? If we want to determine the variable types, will it be as follows for the below variables?\nAge ---> quantitative, discrete (we can count)\nFitness ---> If the values that we will enter here are 0 and 1 only, will the type of this variable be qualitative, nominal?\nThanks a lot.\n\nA: Questions like this expose the problems with Stevens' typology of variables. Some variables just aren't any of his categories; or are hard to categorize. Age is ratio level data in the sense that someone who is (say) 40 years old is twice as old as someone who is 20 years old. You just haven't got it measured precisely. But so? What difference does that make? \nDichotomies - here it really doesn't matter HOW you treat them for most purposes. E.g. in regression, you can consider them to be nominal, ordinal or interval and you will get the same results. \nI wrote about other problems with this typology on my blog\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722225026.5662, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "65d610c94b434dcda9d6b7a2824209b1", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "0_output": [["A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "Title: On the three state Potts model with competing interactions on the Bethe lattice\n\nAbstract: In the present paper the three state Potts model with competing binary interactions (with couplings $J$ and $J_p$) on the second order Bethe lattice is considered. 
The recurrent equations for the partition functions are derived. When $J_p=0$, by means of a construction of a special class of limiting Gibbs measures, it is shown how these equations are related with the surface energy of the Hamiltonian. This relation reduces the problem of describing the limit Gibbs measures to find of solutions of a nonlinear functional equation. Moreover, the set of ground states of the one-level model is completely described. Using this fact, one finds Gibbs measures (pure phases) associated with the translation-invariant ground states. The critical temperature is exactly found and the phase diagram is presented. The free energies corresponding to translations-invariant Gibbs measures are found. Certain physical quantities are calculated as well."]], "0_corpus": "arxiv", "1_conv_id": "c751d7d91b034a0e835787d7d077b783", "1_model_name": "voyage-multilingual-2", "1_prompt": "A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "1_output": [["A framework to improve logical problem-solving in language models integrating symbolic formulation and self-refinement.", "Title: Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning\n\nAbstract: Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement module, which utilizes the symbolic solver's error messages to revise symbolic formalizations. We demonstrate Logic-LM's effectiveness on five logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, LogicalDeduction, and AR-LSAT. On average, Logic-LM achieves a significant performance boost of 39.2% over using LLM alone with standard prompting and 18.4% over LLM with chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning. Code and data are publicly available at https://github.com/teacherpeterpan/Logic-LLM."]], "1_corpus": "arxiv"} {"tstamp": 1722225041.8999, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "e4e6ff5dc3a149a8a9cd5bc76feefd62", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "0_output": [["Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "Title: Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media\n\nAbstract: We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is completed with Task 5 on check-worthiness estimation in political debates and speeches. 
A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the tasks setup, the evaluation results, and a summary of the approaches used by the participants, and we discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification."]], "0_corpus": "arxiv", "1_conv_id": "069ed0d8c0fe49369e9c70a31c707261", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "1_output": [["Looking for the paper on the 2020 CLEF edition covering automatic claim verification in social media.", "Title: Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media\n\nAbstract: We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is completed with Task 5 on check-worthiness estimation in political debates and speeches. A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the tasks setup, the evaluation results, and a summary of the approaches used by the participants, and we discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification."]], "1_corpus": "arxiv"} {"tstamp": 1722225064.1705, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "ad3321dd65264d9ea93d600f7971ff6d", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "can r squared adjusted be negative?", "0_output": [["can r squared adjusted be negative?", "Stats Stackexchange\n\nQ: Is it a problem to get a negative adjusted r-squared? Background:\nI have the cross-sectional model:\n$Y_{i} = a + b X_{1,i} + c X_{2,i} + d X_{3,i} + e X_{4,i} + \\nu_i$.\nThe application is corporate finance. So each $Y_i$ is something like the change in return on assets over a 1 year period for firm $i$, and the regressors are typical corporate finance variables.\nIn corporate finance, very small values of $R^2$ are common, even sometimes $1\\%$. My $R^2$ is around $1\\%$ but my Adjusted $R^2$ is $-0.2\\%$. \nI have never seen papers report negative Adjusted $R^2$ but this could just be because they omit the publication of their adjusted $R^2$ when they see that it is negative. \nQuestion\nIs there some problem when the adjusted $R^2$ is negative?\n\nA: The formula for adjusted R square allows it to be negative. 
It is intended to approximate the actual percentage variance explained. So if the actual R square is close to zero the adjusted R square can be slightly negative. Just think of it as an estimate of zero.\n"]], "0_corpus": "stackexchange", "1_conv_id": "6766409719cd420188c069799f6c48d1", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "can r squared adjusted be negative?", "1_output": [["can r squared adjusted be negative?", "Stats Stackexchange\n\nQ: Is it a problem to get a negative adjusted r-squared? Background:\nI have the cross-sectional model:\n$Y_{i} = a + b X_{1,i} + c X_{2,i} + d X_{3,i} + e X_{4,i} + \\nu_i$.\nThe application is corporate finance. So each $Y_i$ is something like the change in return on assets over a 1 year period for firm $i$, and the regressors are typical corporate finance variables.\nIn corporate finance, very small values of $R^2$ are common, even sometimes $1\\%$. My $R^2$ is around $1\\%$ but my Adjusted $R^2$ is $-0.2\\%$. \nI have never seen papers report negative Adjusted $R^2$ but this could just be because they omit the publication of their adjusted $R^2$ when they see that it is negative. \nQuestion\nIs there some problem when the adjusted $R^2$ is negative?\n\nA: The formula for adjusted R square allows it to be negative. It is intended to approximate the actual percentage variance explained. So if the actual R square is close to zero the adjusted R square can be slightly negative. Just think of it as an estimate of zero.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722225082.4068, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "fe65d33c55ca4dbf856b34d9fd465f40", "0_model_name": "text-embedding-3-large", "0_prompt": "Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "0_output": [["Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "Title: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback\n\nAbstract: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. 
Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work."]], "0_corpus": "arxiv", "1_conv_id": "19f1b179abd4428983f12ecbb76ba96e", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "1_output": [["Paper showcasing reinforcement learning from human feedback to train beneficial and safe AI assistants.", "Title: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback\n\nAbstract: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work."]], "1_corpus": "arxiv"} {"tstamp": 1722225101.4447, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "94d522a8ae794130b5549c97be9efcc8", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "0_output": [["Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "Title: Graph Neural Network Approach to Semantic Type Detection in Tables\n\nAbstract: This study addresses the challenge of detecting semantic column types in relational tables, a key task in many real-world applications. While language models like BERT have improved prediction accuracy, their token input constraints limit the simultaneous processing of intra-table and inter-table information. We propose a novel approach using Graph Neural Networks (GNNs) to model intra-table dependencies, allowing language models to focus on inter-table information. Our proposed method not only outperforms existing state-of-the-art algorithms but also offers novel insights into the utility and functionality of various GNN types for semantic type detection. 
The code is available at https://github.com/hoseinzadeehsan/GAIT"]], "0_corpus": "arxiv", "1_conv_id": "5bc98453ebd94dd8adc5a3a7477560d1", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "1_output": [["Looking for a paper on Fine-grained Entity Type Classification using a neural network with hierarchy-aware loss.", "Title: Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss\n\nAbstract: The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of cross- entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones. Also, previous work solve FETC a multi-label classification followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state-of-the-art on established benchmarks for the task."]], "1_corpus": "arxiv"} {"tstamp": 1722225126.1987, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "61042e3396c64d919604031cbf9612dc", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "A paper introducing a new large language model", "0_output": [["A paper introducing a new large language model", "Title: Larger-Context Language Modelling\n\nAbstract: In this work, we propose a novel method to incorporate corpus-level discourse information into language modelling. We call this larger-context language model. We introduce a late fusion approach to a recurrent language model based on long short-term memory units (LSTM), which helps the LSTM unit keep intra-sentence dependencies and inter-sentence dependencies separate from each other. Through the evaluation on three corpora (IMDB, BBC, and PennTree Bank), we demon- strate that the proposed model improves perplexity significantly. In the experi- ments, we evaluate the proposed approach while varying the number of context sentences and observe that the proposed late fusion is superior to the usual way of incorporating additional inputs to the LSTM. By analyzing the trained larger- context language model, we discover that content words, including nouns, adjec- tives and verbs, benefit most from an increasing number of context sentences. 
This analysis suggests that larger-context language model improves the unconditional language model by capturing the theme of a document better and more easily."]], "0_corpus": "arxiv", "1_conv_id": "d2e0f34f14254329b828836aa5ea813e", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "A paper introducing a new large language model", "1_output": [["A paper introducing a new large language model", "Title: Large Language Models: A Survey\n\nAbstract: Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks, since the release of ChatGPT in November 2022. LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model's parameters on massive amounts of text data, as predicted by scaling laws \\cite{kaplan2020scaling,hoffmann2022training}. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions and limitations. We also give an overview of techniques developed to build, and augment LLMs. We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks. Finally, we conclude the paper by discussing open challenges and future research directions."]], "1_corpus": "arxiv"} {"tstamp": 1722226463.5296, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "18807e629e6e490ba7000da4fea301d7", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "How can BERT models be adapted to perform better in biomedical natural language processing?", "0_output": [["How can BERT models be adapted to perform better in biomedical natural language processing?", "Title: Spatiotemporal complexity of a ratio-dependent predator-prey system\n\nAbstract: In this paper, we investigate the emergence of a ratio-dependent predator-prey system with Michaelis-Menten-type functional response and reaction-diffusion. We derive the conditions for Hopf, Turing and Wave bifurcation on a spatial domain. Furthermore, we present a theoretical analysis of evolutionary processes that involves organisms distribution and their interaction of spatially distributed population with local diffusion. The results of numerical simulations reveal that the typical dynamics of population density variation is the formation of isolated groups, i.e., stripelike or spotted or coexistence of both. Our study shows that the spatially extended model has not only more complex dynamic patterns in the space, but also chaos and spiral waves. 
It may help us better understand the dynamics of an aquatic community in a real marine environment."]], "0_corpus": "arxiv", "1_conv_id": "2ec1892a56e54d399d17ca78e43d3497", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "How can BERT models be adapted to perform better in biomedical natural language processing?", "1_output": [["How can BERT models be adapted to perform better in biomedical natural language processing?", "Title: Absorbing processes in Richardson diffusion: analytical results\n\nAbstract: We consider the recently addressed problem of a passive particle (a predator), being the center of a ``sphere of interception'' of radius $R$ and able to absorb other passive particles (the preys) entering into the sphere. Assuming that all the particles are advected by a turbulent flow and that, in particular, the Richardson equation properly describes the relative dispersion, we calculate an analytical expression for the flux into the sphere as a function of time, assuming an initial constant density of preys outside the sphere. In the same framework, we show that the distribution of times of first passage into the sphere has a $t^{-5/2}$ power law tail, seen in contrast to the $t^{-3/2}$ appearing in standard 3D diffusion. We also discuss the correction due to the integral length scale on the results in the stationary case."]], "1_corpus": "arxiv"} {"tstamp": 1722226479.3615, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "fbd8892a82924f81812aa7a1bf175969", "0_model_name": "embed-english-v3.0", "0_prompt": "Best practices and taxonomy of data selection for efficient language model pre-training.", "0_output": [["Best practices and taxonomy of data selection for efficient language model pre-training.", "Title: How to Train Data-Efficient LLMs\n\nAbstract: The training of large language models (LLMs) is expensive. In this paper, we study data-efficient approaches for pre-training LLMs, i.e., techniques that aim to optimize the Pareto frontier of model quality and training resource/data consumption. We seek to understand the tradeoffs associated with data selection routines based on (i) expensive-to-compute data-quality estimates, and (ii) maximization of coverage and diversity-based measures in the feature space. Our first technique, Ask-LLM, leverages the zero-shot reasoning capabilities of instruction-tuned LLMs to directly assess the quality of a training example. To target coverage, we propose Density sampling, which models the data distribution to select a diverse sample. In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density are the best methods in their respective categories. Coverage sampling can recover the performance of the full data, while models trained on Ask-LLM data consistently outperform full-data training -- even when we reject 90% of the original dataset, while converging up to 70% faster."]], "0_corpus": "arxiv", "1_conv_id": "a40e5e6bd02d47a6bdc8261d2645d034", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "Best practices and taxonomy of data selection for efficient language model pre-training.", "1_output": [["Best practices and taxonomy of data selection for efficient language model pre-training.", "Title: A Survey on Data Selection for Language Models\n\nAbstract: A major factor in the recent success of large language models is the use of enormous and ever-growing text datasets for unsupervised pre-training. 
However, naively training a model on all available data may not be optimal (or feasible), as the quality of available text data can vary. Filtering out data can also decrease the carbon footprint and financial costs of training models by reducing the amount of training required. Data selection methods aim to determine which candidate data points to include in the training dataset and how to appropriately sample from the selected data points. The promise of improved data selection methods has caused the volume of research in the area to rapidly expand. However, because deep learning is mostly driven by empirical evidence and experimentation on large-scale data is expensive, few organizations have the resources for extensive data selection research. Consequently, knowledge of effective data selection practices has become concentrated within a few organizations, many of which do not openly share their findings and methodologies. To narrow this gap in knowledge, we present a comprehensive review of existing literature on data selection methods and related research areas, providing a taxonomy of existing approaches. By describing the current landscape of research, this work aims to accelerate progress in data selection by establishing an entry point for new and established researchers. Additionally, throughout this review we draw attention to noticeable holes in the literature and conclude the paper by proposing promising avenues for future research."]], "1_corpus": "arxiv"} {"tstamp": 1722226493.0887, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "64fcb06267ce4417bdbb737a15e252f1", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Paper detailing advancements in self-supervised learning with ResNets that outperforms supervised benchmarks using ReLICv2.", "0_output": [["Paper detailing advancements in self-supervised learning with ResNets that outperforms supervised benchmarks using ReLICv2.", "Title: Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?\n\nAbstract: Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on prior theoretical insights from ReLIC [Mitrovic et al., 2021], we include additional inductive biases into self-supervised learning. We propose a new self-supervised representation learning method, ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views to avoid learning spurious correlations and obtain more informative representations. ReLICv2 achieves $77.1\\%$ top-$1$ accuracy on ImageNet under linear evaluation on a ResNet50, thus improving the previous state-of-the-art by absolute $+1.5\\%$; on larger ResNet models, ReLICv2 achieves up to $80.6\\%$ outperforming previous self-supervised approaches with margins up to $+2.3\\%$. Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform the supervised baseline in a like-for-like comparison over a range of ResNet architectures. Using ReLICv2, we also learn more robust and transferable representations that generalize better out-of-distribution than previous work, both on image classification and semantic segmentation. 
Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers."]], "0_corpus": "arxiv", "1_conv_id": "febd99e4bd594450acbc82b751f287f9", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Paper detailing advancements in self-supervised learning with ResNets that outperforms supervised benchmarks using ReLICv2.", "1_output": [["Paper detailing advancements in self-supervised learning with ResNets that outperforms supervised benchmarks using ReLICv2.", "Title: A Recharge Oscillator Model for Interannual Variability in Venus' Clouds\n\nAbstract: Sulfur dioxide is a radiatively and chemically important trace gas in the atmosphere of Venus and its abundance at the cloud-tops has been observed to vary on interannual to decadal timescales. This variability is thought to come from changes in the strength of convection which transports sulfur dioxide to the cloud-tops, {although} the dynamics behind such convective variability are unknown. Here we propose a new conceptual model for convective variability that links the radiative effects of water abundance at the cloud-base to convective strength within the clouds, which in turn affects water transport within the cloud. The model consists of two coupled equations which are identified as a recharge-discharge oscillator. The solutions of the coupled equations are finite amplitude sustained oscillations in convective strength and cloud-base water abundance on 3-9 year timescales. The characteristic oscillation timescale is given by the geometric mean of the radiative cooling time and the eddy mixing time near the base of the convective clouds."]], "1_corpus": "arxiv"} {"tstamp": 1722226507.5227, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "f2c53550507d40ec8adcb6609f96b488", "0_model_name": "BM25", "0_prompt": "Open-source toolkit for training custom models for NLP tasks like translation and summarization using PyTorch", "0_output": [["Open-source toolkit for training custom models for NLP tasks like translation and summarization using PyTorch", "Title: fairseq: A Fast, Extensible Toolkit for Sequence Modeling\n\nAbstract: fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs. A demo video can be found at https://www.youtube.com/watch?v=OtgDdWtHvto"]], "0_corpus": "arxiv", "1_conv_id": "3669678b55c1435d9fb56dc66ea88dac", "1_model_name": "embed-english-v3.0", "1_prompt": "Open-source toolkit for training custom models for NLP tasks like translation and summarization using PyTorch", "1_output": [["Open-source toolkit for training custom models for NLP tasks like translation and summarization using PyTorch", "Title: OpenNMT: Open-Source Toolkit for Neural Machine Translation\n\nAbstract: We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. 
The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques."]], "1_corpus": "arxiv"} {"tstamp": 1722226517.2407, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "95fa3e667b574863b03d1f92d831c160", "0_model_name": "voyage-multilingual-2", "0_prompt": "Research on leveraging knowledge graphs in protein pretraining models to enhance performance in protein-protein interaction and function prediction.", "0_output": [["Research on leveraging knowledge graphs in protein pretraining models to enhance performance in protein-protein interaction and function prediction.", "Title: OntoProtein: Protein Pretraining With Gene Ontology Embedding\n\nAbstract: Self-supervised protein language models have proved their effectiveness in learning the proteins representations. With the increasing computational power, current protein language models pre-trained with millions of diverse sequences can advance the parameter scale from million-level to billion-level and achieve remarkable improvement. However, those prevailing approaches rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better protein representations. We argue that informative biology knowledge in KGs can enhance protein representation with external knowledge. In this work, we propose OntoProtein, the first general framework that makes use of structure in GO (Gene Ontology) into protein pre-training models. We construct a novel large-scale knowledge graph that consists of GO and its related proteins, and gene annotation texts or protein sequences describe all nodes in the graph. We propose novel contrastive learning with knowledge-aware negative sampling to jointly optimize the knowledge graph and protein embedding during pre-training. Experimental results show that OntoProtein can surpass state-of-the-art methods with pre-trained protein language models in TAPE benchmark and yield better performance compared with baselines in protein-protein interaction and protein function prediction. Code and datasets are available in https://github.com/zjunlp/OntoProtein."]], "0_corpus": "arxiv", "1_conv_id": "d93ed29725cd4d3582a5f31b38bcf131", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Research on leveraging knowledge graphs in protein pretraining models to enhance performance in protein-protein interaction and function prediction.", "1_output": [["Research on leveraging knowledge graphs in protein pretraining models to enhance performance in protein-protein interaction and function prediction.", "Title: OntoProtein: Protein Pretraining With Gene Ontology Embedding\n\nAbstract: Self-supervised protein language models have proved their effectiveness in learning the proteins representations. With the increasing computational power, current protein language models pre-trained with millions of diverse sequences can advance the parameter scale from million-level to billion-level and achieve remarkable improvement. However, those prevailing approaches rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better protein representations. We argue that informative biology knowledge in KGs can enhance protein representation with external knowledge. In this work, we propose OntoProtein, the first general framework that makes use of structure in GO (Gene Ontology) into protein pre-training models. 
We construct a novel large-scale knowledge graph that consists of GO and its related proteins, and gene annotation texts or protein sequences describe all nodes in the graph. We propose novel contrastive learning with knowledge-aware negative sampling to jointly optimize the knowledge graph and protein embedding during pre-training. Experimental results show that OntoProtein can surpass state-of-the-art methods with pre-trained protein language models in TAPE benchmark and yield better performance compared with baselines in protein-protein interaction and protein function prediction. Code and datasets are available in https://github.com/zjunlp/OntoProtein."]], "1_corpus": "arxiv"} {"tstamp": 1722226550.2992, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "b2968b5bb6a44c7d9b2a02ad034e9c01", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Looking for a research paper that introduces a new method using text-guided attention to improve polyp segmentation in colonoscopy images.", "0_output": [["Looking for a research paper that introduces a new method using text-guided attention to improve polyp segmentation in colonoscopy images.", "Title: TGANet: Text-guided attention for improved polyp segmentation\n\nAbstract: Colonoscopy is a gold standard procedure but is highly operator-dependent. Automated polyp segmentation, a precancerous precursor, can minimize missed rates and timely treatment of colon cancer at an early stage. Even though there are deep learning methods developed for this task, variability in polyp size can impact model training, thereby limiting it to the size attribute of the majority of samples in the training dataset that may provide sub-optimal results to differently sized polyps. In this work, we exploit size-related and polyp number-related features in the form of text attention during training. We introduce an auxiliary classification task to weight the text-based embedding that allows network to learn additional feature representations that can distinctly adapt to differently sized polyps and can adapt to cases with multiple polyps. Our experimental results demonstrate that these added text embeddings improve the overall performance of the model compared to state-of-the-art segmentation methods. We explore four different datasets and provide insights for size-specific improvements. Our proposed text-guided attention network (TGANet) can generalize well to variable-sized polyps in different datasets."]], "0_corpus": "arxiv", "1_conv_id": "dbc578b25b534b27ad0b1a5e3d074536", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Looking for a research paper that introduces a new method using text-guided attention to improve polyp segmentation in colonoscopy images.", "1_output": [["Looking for a research paper that introduces a new method using text-guided attention to improve polyp segmentation in colonoscopy images.", "Title: The W_t Transcendental Function and Quantum Mechanical Applications\n\nAbstract: We discuss the function wt(x) defined via the implicit equation wt(x)*tan[wt(x)]=x which appears in certain quantum mechanical and field theoretic applications. 
We investigate its analytic structure, develop series expansions for both small and large x, and provide various techniques for its numerical evaluation in the complex plane."]], "1_corpus": "arxiv"} {"tstamp": 1722226566.4041, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "ccb901303d3d47d5a7291c5fe9d4023b", "0_model_name": "BM25", "0_prompt": "How can we improve the interpretability of deep learning models using simpler models?", "0_output": [["How can we improve the interpretability of deep learning models using simpler models?", "Title: A Detailed Study of Interpretability of Deep Neural Network based Top Taggers\n\nAbstract: Recent developments in the methods of explainable AI (XAI) allow researchers to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input-output relationships and realizing how data connects with machine learning models. In this paper we explore interpretability of DNN models designed to identify jets coming from top quark decay in high energy proton-proton collisions at the Large Hadron Collider (LHC). We review a subset of existing top tagger models and explore different quantitative methods to identify which features play the most important roles in identifying the top jets. We also investigate how and why feature importance varies across different XAI metrics, how correlations among features impact their explainability, and how latent space representations encode information as well as correlate with physically meaningful quantities. Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretation of these models. We additionally illustrate the activity of hidden layers as Neural Activation Pattern (NAP) diagrams and demonstrate how they can be used to understand how DNNs relay information across the layers and how this understanding can help to make such models significantly simpler by allowing effective model reoptimization and hyperparameter tuning. These studies not only facilitate a methodological approach to interpreting models but also unveil new insights about what these models learn. Incorporating these observations into augmented model design, we propose the Particle Flow Interaction Network (PFIN) model and demonstrate how interpretability-inspired model augmentation can improve top tagging performance."]], "0_corpus": "arxiv", "1_conv_id": "20451f45d5c04fcab80fde7e869693a4", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "How can we improve the interpretability of deep learning models using simpler models?", "1_output": [["How can we improve the interpretability of deep learning models using simpler models?", "Title: Interpretability Illusions in the Generalization of Simplified Models\n\nAbstract: A common method to study deep learning systems is to use simplified model representations--for example, using singular value decomposition to visualize the model's hidden states in a lower dimensional space. This approach assumes that the results of these simplifications are faithful to the original model. Here, we illustrate an important caveat to this assumption: even if the simplified representations can accurately approximate the full model on the training set, they may fail to accurately capture the model's behavior out of distribution. 
We illustrate this by training Transformer models on controlled datasets with systematic generalization splits, including the Dyck balanced-parenthesis languages and a code completion task. We simplify these models using tools like dimensionality reduction and clustering, and then explicitly test how these simplified proxies match the behavior of the original model. We find consistent generalization gaps: cases in which the simplified proxies are more faithful to the original model on the in-distribution evaluations and less faithful on various tests of systematic generalization. This includes cases where the original model generalizes systematically but the simplified proxies fail, and cases where the simplified proxies generalize better. Together, our results raise questions about the extent to which mechanistic interpretations derived using tools like SVD can reliably predict what a model will do in novel situations."]], "1_corpus": "arxiv"} {"tstamp": 1722226587.236, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "472b115b6c084a85b1fd064506fb60c0", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "is bloodborne in the same universe as dark souls?", "0_output": [["is bloodborne in the same universe as dark souls?", "Math Stackexchange\n\nQ: Solving $\\arcsin(1-x)-2\\arcsin(x)=\\pi/2$ \\begin{eqnarray*}\n\\arcsin(1-x)-2\\arcsin(x) & = & \\frac{\\pi}{2}\\\\\n1-x & = & \\sin\\left(\\frac{\\pi}{2}+2\\arcsin(x)\\right)\\\\\n & = & \\cos\\left(2\\arcsin(x)\\right)\\\\\n & = & 1-2\\left(\\sin\\left(\\arcsin(x)\\right)\\right)^{2}\\\\\n & = & 1-2x^{2}\\\\\nx & = & 2x^{2}\\\\\nx\\left(x-\\frac{1}{2}\\right) & = & 0\n\\end{eqnarray*}\nSo $x=0$ or $x=\\frac{1}{2}$\nBut puttig $x=\\frac{1}{2}$ in the original expression gives $-\\frac {\\pi} 4 \\ne \\frac \\pi 2$\nSo, why do we get $x=-1/2$ as an answer?\n\nA: In your first step you added an extra solution. \nSince $\\arcsin x$ must be smaller than $\\pi/2$, the first line reads:\n$$\\arcsin(1-x)= \\frac{\\pi}{2}+2\\arcsin(x) \\le \\frac{\\pi}{2}$$\nThus, $x\\le 0$ as well.\nNow, by taking the $\\sin$ of both sides, you took a function that was only defined up to $x=1$ (e.g. $\\arcsin(x-1)$ ) and extended it to all reals (e.g $x-1$). Here is where you added the extra solution.\n"]], "0_corpus": "stackexchange", "1_conv_id": "f0dfad42986f425e91134329193506e8", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "is bloodborne in the same universe as dark souls?", "1_output": [["is bloodborne in the same universe as dark souls?", "Stackoverflow Stackexchange\n\nQ: Stop UIAlertView from dismissing Can you stop UIAlertView from dismissing? \nI want based on what button is pressed, to either animate adding a UITextField, or dismiss it. 
But I can't add a text field after the alert is dismissed.\n\nA: You should implement the UIAlertViewDelegate method :\n- (void)alertView:(UIAlertView *)alertView willDismissWithButtonIndex:(NSInteger)buttonIndex\n\nand based on the button index you can perform your custom action.\nOtherwise you should either create your own alertview class or subclass UIAlertView.\nIf you choose to subclass you can override the method : \n-(void)dismissWithClickedButtonIndex:(NSInteger)buttonIndex animated:(BOOL)animated \n\nto avoid dismissing the alertView if a specific button is pressed\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722226604.3253, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "febd759447c24fbbab98fe669530ae5b", "0_model_name": "voyage-multilingual-2", "0_prompt": "How well do ChatGPT and GPT-4 perform on logical reasoning datasets?", "0_output": [["How well do ChatGPT and GPT-4 perform on logical reasoning datasets?", "Title: Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4\n\nAbstract: Harnessing logical reasoning ability is a comprehensive natural language understanding endeavor. With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as \"advanced\" at reasoning tasks, we are eager to learn the GPT-4 performance on various logical reasoning tasks. This report analyses multiple logical reasoning datasets, with popular benchmarks like LogiQA and ReClor, and newly-released datasets like AR-LSAT. We test the multi-choice reading comprehension and natural language inference tasks with benchmarks requiring logical reasoning. We further construct a logical reasoning out-of-distribution dataset to investigate the robustness of ChatGPT and GPT-4. We also make a performance comparison between ChatGPT and GPT-4. Experiment results show that ChatGPT performs significantly better than the RoBERTa fine-tuning method on most logical reasoning benchmarks. With early access to the GPT-4 API we are able to conduct intense experiments on the GPT-4 model. The results show GPT-4 yields even higher performance on most logical reasoning datasets. Among benchmarks, ChatGPT and GPT-4 do relatively well on well-known datasets like LogiQA and ReClor. However, the performance drops significantly when handling newly released and out-of-distribution datasets. Logical reasoning remains challenging for ChatGPT and GPT-4, especially on out-of-distribution and natural language inference datasets. We release the prompt-style logical reasoning datasets as a benchmark suite and name it LogiEval."]], "0_corpus": "arxiv", "1_conv_id": "eac6e53b37a34cda9371c8b5695b6794", "1_model_name": "embed-english-v3.0", "1_prompt": "How well do ChatGPT and GPT-4 perform on logical reasoning datasets?", "1_output": [["How well do ChatGPT and GPT-4 perform on logical reasoning datasets?", "Title: Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4\n\nAbstract: Harnessing logical reasoning ability is a comprehensive natural language understanding endeavor. With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as \"advanced\" at reasoning tasks, we are eager to learn the GPT-4 performance on various logical reasoning tasks. This report analyses multiple logical reasoning datasets, with popular benchmarks like LogiQA and ReClor, and newly-released datasets like AR-LSAT. We test the multi-choice reading comprehension and natural language inference tasks with benchmarks requiring logical reasoning. 
We further construct a logical reasoning out-of-distribution dataset to investigate the robustness of ChatGPT and GPT-4. We also make a performance comparison between ChatGPT and GPT-4. Experiment results show that ChatGPT performs significantly better than the RoBERTa fine-tuning method on most logical reasoning benchmarks. With early access to the GPT-4 API we are able to conduct intense experiments on the GPT-4 model. The results show GPT-4 yields even higher performance on most logical reasoning datasets. Among benchmarks, ChatGPT and GPT-4 do relatively well on well-known datasets like LogiQA and ReClor. However, the performance drops significantly when handling newly released and out-of-distribution datasets. Logical reasoning remains challenging for ChatGPT and GPT-4, especially on out-of-distribution and natural language inference datasets. We release the prompt-style logical reasoning datasets as a benchmark suite and name it LogiEval."]], "1_corpus": "arxiv"} {"tstamp": 1722226669.9586, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "bf820c0aa0c64ff9847080afc0f4d484", "0_model_name": "text-embedding-004", "0_prompt": "Research on improving the performance and energy efficiency of neural networks by utilizing 8-bit precision.", "0_output": [["Research on improving the performance and energy efficiency of neural networks by utilizing 8-bit precision.", "Title: Understanding the Impact of Precision Quantization on the Accuracy and Energy of Neural Networks\n\nAbstract: Deep neural networks are gaining in popularity as they are used to generate state-of-the-art results for a variety of computer vision and machine learning applications. At the same time, these networks have grown in depth and complexity in order to solve harder problems. Given the limitations in power budgets dedicated to these networks, the importance of low-power, low-memory solutions has been stressed in recent years. While a large number of dedicated hardware using different precisions has recently been proposed, there exists no comprehensive study of different bit precisions and arithmetic in both inputs and network parameters. In this work, we address this issue and perform a study of different bit-precisions in neural networks (from floating-point to fixed-point, powers of two, and binary). In our evaluation, we consider and analyze the effect of precision scaling on both network accuracy and hardware metrics including memory footprint, power and energy consumption, and design area. We also investigate training-time methodologies to compensate for the reduction in accuracy due to limited bit precision and demonstrate that in most cases, precision scaling can deliver significant benefits in design metrics at the cost of very modest decreases in network accuracy. In addition, we propose that a small portion of the benefits achieved when using lower precisions can be forfeited to increase the network size and therefore the accuracy. We evaluate our experiments, using three well-recognized networks and datasets to show its generality. 
We investigate the trade-offs and highlight the benefits of using lower precisions in terms of energy and memory footprint."]], "0_corpus": "arxiv", "1_conv_id": "8c0c14dc0aa642e382ed81777e5c8042", "1_model_name": "BM25", "1_prompt": "Research on improving the performance and energy efficiency of neural networks by utilizing 8-bit precision.", "1_output": [["Research on improving the performance and energy efficiency of neural networks by utilizing 8-bit precision.", "Title: Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability\n\nAbstract: Tartan (TRT), a hardware accelerator for inference with Deep Neural Networks (DNNs), is presented and evaluated on Convolutional Neural Networks. TRT exploits the variable per layer precision requirements of DNNs to deliver execution time that is proportional to the precision p in bits used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layers. Experiments on image classification CNNs show that on average across all networks studied, TRT outperforms a state-of-the-art bit-parallel accelerator by 1:90x without any loss in accuracy while it is 1:17x more energy efficient. TRT requires no network retraining while it enables trading off accuracy for additional improvements in execution performance and energy efficiency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on average 2:04x faster and 1:25x more energy efficient than a conventional bit-parallel accelerator. A Tartan configuration that processes 2-bits at time, requires less area than the 1-bit configuration, improves efficiency to 1:24x over the bit-parallel baseline while being 73% faster for convolutional layers and 60% faster for fully-connected layers is also presented."]], "1_corpus": "arxiv"} {"tstamp": 1722226736.6929, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "5057b648e94c4ffe85732362ed4ad1e4", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. 
MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "27129086b9024fe7a62de294a6649ab2", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"} {"tstamp": 1722226755.2368, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "942de81de947444185a57999c86fe41c", "0_model_name": "embed-english-v3.0", "0_prompt": "Efficient transformer models for multi-page document classification", "0_output": [["Efficient transformer models for multi-page document classification", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "0_corpus": "arxiv", "1_conv_id": "dc52388a815249e1bdb208cb75e7563c", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Efficient transformer models for multi-page document classification", "1_output": [["Efficient transformer models for multi-page document classification", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). 
In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "1_corpus": "arxiv"} {"tstamp": 1722226768.1617, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "e488f8737d2e4fd7ab943e48d4a3cd52", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "How to combine GPT-2 and BERT for better question generation and answering?", "0_output": [["How to combine GPT-2 and BERT for better question generation and answering?", "Title: Learning to Answer by Learning to Ask: Getting the Best of GPT-2 and BERT Worlds\n\nAbstract: Automatic question generation aims at the generation of questions from a context, with the corresponding answers being sub-spans of the given passage. Whereas, most of the methods mostly rely on heuristic rules to generate questions, more recently also neural network approaches have been proposed. In this work, we propose a variant of the self-attention Transformer network architectures model to generate meaningful and diverse questions. To this end, we propose an easy to use model consisting of the conjunction of the Transformer decoder GPT-2 model with Transformer encoder BERT for the downstream task for question answering. The model is trained in an end-to-end fashion, where the language model is trained to produce a question-answer-aware input representation that facilitates to generate an answer focused question. Our result of neural question generation from text on the SQuAD 1.1 dataset suggests that our method can produce semantically correct and diverse questions. Additionally, we assessed the performance of our proposed method for the downstream task of question answering. The analysis shows that our proposed generation & answering collaboration framework relatively improves both tasks and is particularly powerful in the semi-supervised setup. The results further suggest a robust and comparably lean pipeline facilitating question generation in the small-data regime."]], "0_corpus": "arxiv", "1_conv_id": "4e6f1429af8d488dbc21f92b03708925", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "How to combine GPT-2 and BERT for better question generation and answering?", "1_output": [["How to combine GPT-2 and BERT for better question generation and answering?", "Title: Radiation flux and spectrum in the Vaidya collapse model\n\nAbstract: We consider the quantization of a massless scalar field, using the geometric optics approximation, in the background spacetime of a collapsing spherical self-similar Vaidya star, which forms a black hole or a naked singularity. We show that the outgoing radiation flux of the quantized scalar field diverges on the Cauchy horizon. 
The spectrum of the produced scalar particles is non-thermal when the background develops a naked singularity. These results are analogous to those obtained for the scalar quantization on a self-similar dust cloud."]], "1_corpus": "arxiv"} {"tstamp": 1722226790.9489, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "d8ccfaccb72b40429980442180c503b9", "0_model_name": "text-embedding-3-large", "0_prompt": "Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "0_output": [["Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "Title: Omnivore: A Single Model for Many Visual Modalities\n\nAbstract: Prior work has studied different visual modalities in isolation and developed separate architectures for recognition of images, videos, and 3D data. Instead, in this paper, we propose a single model which excels at classifying images, videos, and single-view 3D data using exactly the same model parameters. Our 'Omnivore' model leverages the flexibility of transformer-based architectures and is trained jointly on classification tasks from different modalities. Omnivore is simple to train, uses off-the-shelf standard datasets, and performs at-par or better than modality-specific models of the same size. A single Omnivore model obtains 86.0% on ImageNet, 84.1% on Kinetics, and 67.1% on SUN RGB-D. After finetuning, our models outperform prior work on a variety of vision tasks and generalize across modalities. Omnivore's shared visual representation naturally enables cross-modal recognition without access to correspondences between modalities. We hope our results motivate researchers to model visual modalities together."]], "0_corpus": "arxiv", "1_conv_id": "ba7ddb3950104cf488c87a1656f8f414", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "1_output": [["Paper introducing a transformer-based model trained on multiple visual modalities achieving high accuracy.", "Title: Ab initio Molecular Dynamics Study of D_2 Desorption from Si(100)\n\nAbstract: Ab initio molecular dynamics calculations of deuterium desorbing from Si(100) have been performed in order to monitor the energy redistribution among the hydrogen and silicon degrees of freedom during the desorption process. The calculations show that part of the potential energy at the transition state to desorption is transferred to the silicon lattice. 
The deuterium molecules leave the surface vibrationally hot and rotationally cold, in agreement with experiments; the mean kinetic energy, however, is larger than found in experiments."]], "1_corpus": "arxiv"} {"tstamp": 1722226844.4942, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "0ecf1137eb8441729be550b264620830", "0_model_name": "text-embedding-004", "0_prompt": "A novel network combining convolution and transformer operations for better medical image segmentation.", "0_output": [["A novel network combining convolution and transformer operations for better medical image segmentation.", "Title: ConvFormer: Combining CNN and Transformer for Medical Image Segmentation\n\nAbstract: Convolutional neural network (CNN) based methods have achieved great successes in medical image segmentation, but their capability to learn global representations is still limited due to using small effective receptive fields of convolution operations. Transformer based methods are capable of modelling long-range dependencies of information for capturing global representations, yet their ability to model local context is lacking. Integrating CNN and Transformer to learn both local and global representations while exploring multi-scale features is instrumental in further improving medical image segmentation. In this paper, we propose a hierarchical CNN and Transformer hybrid architecture, called ConvFormer, for medical image segmentation. ConvFormer is based on several simple yet effective designs. (1) A feed forward module of Deformable Transformer (DeTrans) is re-designed to introduce local information, called Enhanced DeTrans. (2) A residual-shaped hybrid stem based on a combination of convolutions and Enhanced DeTrans is developed to capture both local and global representations to enhance representation ability. (3) Our encoder utilizes the residual-shaped hybrid stem in a hierarchical manner to generate feature maps in different scales, and an additional Enhanced DeTrans encoder with residual connections is built to exploit multi-scale features with feature maps of different scales as input. Experiments on several datasets show that our ConvFormer, trained from scratch, outperforms various CNN- or Transformer-based architectures, achieving state-of-the-art performance."]], "0_corpus": "arxiv", "1_conv_id": "d02461de08aa4ce7b1a4cee2f252e2eb", "1_model_name": "embed-english-v3.0", "1_prompt": "A novel network combining convolution and transformer operations for better medical image segmentation.", "1_output": [["A novel network combining convolution and transformer operations for better medical image segmentation.", "Title: Rethinking Boundary Detection in Deep Learning Models for Medical Image Segmentation\n\nAbstract: Medical image segmentation is a fundamental task in the community of medical image analysis. In this paper, a novel network architecture, referred to as Convolution, Transformer, and Operator (CTO), is proposed. CTO employs a combination of Convolutional Neural Networks (CNNs), Vision Transformer (ViT), and an explicit boundary detection operator to achieve high recognition accuracy while maintaining an optimal balance between accuracy and efficiency. The proposed CTO follows the standard encoder-decoder segmentation paradigm, where the encoder network incorporates a popular CNN backbone for capturing local semantic information, and a lightweight ViT assistant for integrating long-range dependencies. 
To enhance the learning capacity on boundary, a boundary-guided decoder network is proposed that uses a boundary mask obtained from a dedicated boundary detection operator as explicit supervision to guide the decoding learning process. The performance of the proposed method is evaluated on six challenging medical image segmentation datasets, demonstrating that CTO achieves state-of-the-art accuracy with a competitive model complexity."]], "1_corpus": "arxiv"} {"tstamp": 1722226863.8341, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "3b181c53b714491a82ac48e1a1950309", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "How do different constraints in storytelling tasks impact the author's linguistic style?", "0_output": [["How do different constraints in storytelling tasks impact the author's linguistic style?", "Title: The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the ROC Story Cloze Task\n\nAbstract: A writer's style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state of the art performance on the story cloze challenge. Our results demonstrate that different task framings can dramatically affect the way people write."]], "0_corpus": "arxiv", "1_conv_id": "de6bb332c59b4774b8c38bdad9af80a0", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "How do different constraints in storytelling tasks impact the author's linguistic style?", "1_output": [["How do different constraints in storytelling tasks impact the author's linguistic style?", "Title: Limits on dynamically generated spin-orbit coupling: Absence of $l=1$ Pomeranchuk instabilities in metals\n\nAbstract: An ordered state in the spin sector that breaks parity without breaking time-reversal symmetry, i.e., that can be considered as dynamically generated spin-orbit coupling, was proposed to explain puzzling observations in a range of different systems. Here we derive severe restrictions for such a state that follow from a Ward identity related to spin conservation. It is shown that $l=1$ spin-Pomeranchuk instabilities are not possible in non-relativistic systems since the response of spin-current fluctuations is entirely incoherent and non-singular. This rules out relativistic spin-orbit coupling as an emergent low-energy phenomenon. 
We illustrate the exotic physical properties of the remaining higher angular momentum analogues of spin-orbit coupling and derive a geometric constraint for spin-orbit vectors in lattice systems."]], "1_corpus": "arxiv"} {"tstamp": 1722226877.0152, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "85881b7aeaa44439a7c415dcfd68c525", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "0_output": [["Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "Title: Chaos or Noise - Difficulties of a Distinction\n\nAbstract: In experiments, the dynamical behavior of systems is reflected in time series. Due to the finiteness of the observational data set it is not possible to reconstruct the invariant measure up to arbitrary fine resolution and arbitrary high embedding dimension. These restrictions limit our ability to distinguish between signals generated by different systems, such as regular, chaotic or stochastic ones, when analyzed from a time series point of view. We propose to classify the signal behavior, without referring to any specific model, as stochastic or deterministic on a certain scale of the resolution $\\epsilon$, according to the dependence of the $(\\epsilon,\\tau)$-entropy, $h(\\epsilon, \\tau)$, and of the finite size Lyapunov exponent, $\\lambda(\\epsilon)$, on $\\epsilon$."]], "0_corpus": "arxiv", "1_conv_id": "a7b2c40c9e5c43e3a3e400d04100c725", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "1_output": [["Study on reducing gender bias in word-level models, showing improved results in bias evaluation metrics.", "Title: Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function\n\nAbstract: Gender bias exists in natural language datasets which neural language models tend to learn, resulting in biased text generation. In this research, we propose a debiasing approach based on the loss function modification. We introduce a new term to the loss function which attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach successfully mitigates gender bias in language models without increasing perplexity. In comparison to existing debiasing strategies, data augmentation, and word embedding debiasing, our method performs better in several aspects, especially in reducing gender bias in occupation words. 
Finally, we introduce a combination of data augmentation and our approach, and show that it outperforms existing strategies in all bias evaluation metrics."]], "1_corpus": "arxiv"} {"tstamp": 1722226892.8444, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "4987ca9238374025ae9f6d61145d0142", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "0_output": [["Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "Title: Vibrational Spectra of Defects in Silicon: An Orbital Radii Approach\n\nAbstract: A phenomenological approach to the stretching mode vibrational frequencies of defects in semiconductors is proposed. A novel quantum scale is defined in terms of the first principles pseudopotential based orbital radius and the principal quantum number of the element concerned. A universal linear relationship between the Sanderson electronegativity and this quantum scale is established. Next, we show that the stretching mode vibrational frequencies of hydrogen and chlorine in the silicon network scale linearly with this quantum scale. Predictions and identifications of defect environments around the Si-H and Si-Cl are possible. The assignments of vibrational modes in porous silicon are critically examined. We discuss our proposed scale in the context of Mendeleveyan scales in general, and suggest justifications for it. We believe that our approach can be gainfully extended to the vibrational spectra of other semiconductors."]], "0_corpus": "arxiv", "1_conv_id": "bf81fa11eb3f4d3cb9c7294f31d17a63", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "1_output": [["Study analyzing the multitask accuracy of text models including GPT-3 on academic and professional knowledge areas.", "Title: Measuring Massive Multitask Language Understanding\n\nAbstract: We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. 
By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings."]], "1_corpus": "arxiv"} {"tstamp": 1722226904.182, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "35ba5141430e439182b4cb93495f60b2", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "Comparison of sparse attention and hierarchical encoding in long document transformers", "0_output": [["Comparison of sparse attention and hierarchical encoding in long document transformers", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "0_corpus": "arxiv", "1_conv_id": "7c53ae4004484956a4ab85f088046698", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Comparison of sparse attention and hierarchical encoding in long document transformers", "1_output": [["Comparison of sparse attention and hierarchical encoding in long document transformers", "Title: Revisiting Transformer-based Models for Long Document Classification\n\nAbstract: The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. 
We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice of applying Transformer-based models on long document classification tasks."]], "1_corpus": "arxiv"} {"tstamp": 1722226982.36, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c81179613507466f9501f236a8beb4a9", "0_model_name": "BM25", "0_prompt": "Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "0_output": [["Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "Title: Universal Adversarial Triggers for Attacking and Analyzing NLP\n\nAbstract: Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of \"why\" questions in SQuAD to be answered \"to kill american people\", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models."]], "0_corpus": "arxiv", "1_conv_id": "95105e1e8df945a7b819bd50bcd0a76a", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "1_output": [["Paper introducing universal adversarial input triggers to analyze vulnerabilities in NLP models and their transferability across different models.", "Title: Universal Adversarial Triggers for Attacking and Analyzing NLP\n\nAbstract: Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of \"why\" questions in SQuAD to be answered \"to kill american people\", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. 
For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models."]], "1_corpus": "arxiv"} {"tstamp": 1722226993.9858, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "c966ef1f66124006834ce9fae7ec6c57", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "Adversarial example generation for text classification using BERT masked language model ", "0_output": [["Adversarial example generation for text classification using BERT masked language model ", "Title: Measuring Adversarial Datasets\n\nAbstract: In the era of widespread public use of AI systems across various domains, ensuring adversarial robustness has become increasingly vital to maintain safety and prevent undesirable errors. Researchers have curated various adversarial datasets (through perturbations) for capturing model deficiencies that cannot be revealed in standard benchmark datasets. However, little is known about how these adversarial examples differ from the original data points, and there is still no methodology to measure the intended and unintended consequences of those adversarial transformations. In this research, we conducted a systematic survey of existing quantifiable metrics that describe text instances in NLP tasks, among dimensions of difficulty, diversity, and disagreement. We selected several current adversarial effect datasets and compared the distributions between the original and their adversarial counterparts. The results provide valuable insights into what makes these datasets more challenging from a metrics perspective and whether they align with underlying assumptions."]], "0_corpus": "arxiv", "1_conv_id": "275ee03e6e634f92968096b192b9ae4a", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Adversarial example generation for text classification using BERT masked language model ", "1_output": [["Adversarial example generation for text classification using BERT masked language model ", "Title: Similar glassy features in the NMR response of pure and disordered La1.88Sr0.12CuO4\n\nAbstract: High Tc superconductivity in La2-xSrxCuO4 coexists with (striped and glassy) magnetic order. Here, we report NMR measurements of the 139La spin-lattice relaxation, which displays a stretched-exponential time dependence, in both pure and disordered x=0.12 single crystals. An analysis in terms of a distribution of relaxation rates T1^-1 indicates that i) the spin-freezing temperature is spatially inhomogeneous with an onset at Tg(onset)=20 K for the pristine samples, and ii) the width of the T1^-1 distribution in the vicinity of Tg(onset) is insensitive to an ~1% level of atomic disorder in CuO2 planes. 
This suggests that the stretched-exponential 139La relaxation, considered as a manifestation of the system's glassiness, may not arise from quenched disorder."]], "1_corpus": "arxiv"} {"tstamp": 1722227007.968, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "73fda287fa15475a92a663d6a3dba7cb", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Achieving high accuracy in English-French plagiarism detection using word embeddings", "0_output": [["Achieving high accuracy in English-French plagiarism detection using word embeddings", "Title: Using Word Embedding for Cross-Language Plagiarism Detection\n\nAbstract: This paper proposes to use distributed representation of words (word embeddings) in cross-language textual similarity detection. The main contributions of this paper are the following: (a) we introduce new cross-language similarity detection methods based on distributed representation of words; (b) we combine the different methods proposed to verify their complementarity and finally obtain an overall F1 score of 89.15% for English-French similarity detection at chunk level (88.5% at sentence level) on a very challenging corpus."]], "0_corpus": "arxiv", "1_conv_id": "339520347d484e1c8068e44e4e4e7452", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Achieving high accuracy in English-French plagiarism detection using word embeddings", "1_output": [["Achieving high accuracy in English-French plagiarism detection using word embeddings", "Title: Studies of Plasma Detachment Using a One Dimensional Model for Divertor Operation\n\nAbstract: To characterize the conditions required to reach advanced divertor regimes, a one-dimensional computational model has been developed based on a coordinate transformation to incorporate two-dimensional effects. This model includes transport of ions, two species each of atoms and molecules, momentum, and ion and electron energy both within and across the flux surfaces. Impurity radiation is calculated using a coronal equilibrium model which includes the effects of charge-exchange recombination. Numerical results indicate that impurity radiation acts to facilitate plasma detachment and enhances the power lost from the divertor channel in escaping neutral atoms by cooling the electrons and suppressing ionization. As divertor particle densities increase, cold and thermal molecules become increasingly important in cooling the plasma, with molecular densities dominating electron and atomic densities under some conditions."]], "1_corpus": "arxiv"} {"tstamp": 1722227015.1404, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "de8d07d9dc434154b214bde3478b3319", "0_model_name": "BM25", "0_prompt": "A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "0_output": [["A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "Title: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision\n\nAbstract: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). 
Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt."]], "0_corpus": "arxiv", "1_conv_id": "21bf134438c34376b884b388bf7b8c19", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "1_output": [["A paper presenting a Transformer model for vision-and-language tasks that does not rely on object detection or ResNet.", "Title: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision\n\nAbstract: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt."]], "1_corpus": "arxiv"} {"tstamp": 1722227022.9101, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "13b46854dae44ccfa5b050966e192eba", "0_model_name": "text-embedding-004", "0_prompt": "How do explanations in context influence the performance of language models?", "0_output": [["How do explanations in context influence the performance of language models?", "Title: Can language models learn from explanations in context?\n\nAbstract: Language Models (LMs) can perform new tasks by adapting to a few in-context examples. For humans, explanations that connect examples to task principles can improve learning. We therefore investigate whether explanations of few-shot examples can help LMs. We annotate questions from 40 challenging tasks with answer explanations, and various matched control explanations. We evaluate how different types of explanations, instructions, and controls affect zero- and few-shot performance. We analyze these results using statistical multilevel modeling techniques that account for the nested dependencies among conditions, tasks, prompts, and models. 
We find that explanations can improve performance -- even without tuning. Furthermore, explanations hand-tuned for performance on a small validation set offer substantially larger benefits, and building a prompt by selecting examples and explanations together substantially improves performance over selecting examples alone. Finally, even untuned explanations outperform carefully matched controls, suggesting that the benefits are due to the link between an example and its explanation, rather than lower-level features. However, only large models benefit. In summary, explanations can support the in-context learning of large LMs on challenging tasks."]], "0_corpus": "arxiv", "1_conv_id": "f0f4c01bfc3f47babe5c73dfa7135da2", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "How do explanations in context influence the performance of language models?", "1_output": [["How do explanations in context influence the performance of language models?", "Title: Can language models learn from explanations in context?\n\nAbstract: Language Models (LMs) can perform new tasks by adapting to a few in-context examples. For humans, explanations that connect examples to task principles can improve learning. We therefore investigate whether explanations of few-shot examples can help LMs. We annotate questions from 40 challenging tasks with answer explanations, and various matched control explanations. We evaluate how different types of explanations, instructions, and controls affect zero- and few-shot performance. We analyze these results using statistical multilevel modeling techniques that account for the nested dependencies among conditions, tasks, prompts, and models. We find that explanations can improve performance -- even without tuning. Furthermore, explanations hand-tuned for performance on a small validation set offer substantially larger benefits, and building a prompt by selecting examples and explanations together substantially improves performance over selecting examples alone. Finally, even untuned explanations outperform carefully matched controls, suggesting that the benefits are due to the link between an example and its explanation, rather than lower-level features. However, only large models benefit. In summary, explanations can support the in-context learning of large LMs on challenging tasks."]], "1_corpus": "arxiv"} {"tstamp": 1722227028.8236, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "ce9a309e5ad24ba38deee7ff08db73eb", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Study introducing a BERT-based framework achieving superior performance on scalable DST benchmarks like Sim-M and Sim-R.", "0_output": [["Study introducing a BERT-based framework achieving superior performance on scalable DST benchmarks like Sim-M and Sim-R.", "Title: BERT-DST: Scalable End-to-End Dialogue State Tracking with Bidirectional Encoder Representations from Transformer\n\nAbstract: An important yet rarely tackled problem in dialogue state tracking (DST) is scalability for dynamic ontology (e.g., movie, restaurant) and unseen slot values. We focus on a specific condition, where the ontology is unknown to the state tracker, but the target slot value (except for none and dontcare), possibly unseen during training, can be found as word segment in the dialogue context. Prior approaches often rely on candidate generation from n-gram enumeration or slot tagger outputs, which can be inefficient or suffer from error propagation. 
We propose BERT-DST, an end-to-end dialogue state tracker which directly extracts slot values from the dialogue context. We use BERT as dialogue context encoder whose contextualized language representations are suitable for scalable DST to identify slot values from their semantic context. Furthermore, we employ encoder parameter sharing across all slots with two advantages: (1) Number of parameters does not grow linearly with the ontology. (2) Language representation knowledge can be transferred among slots. Empirical evaluation shows BERT-DST with cross-slot parameter sharing outperforms prior work on the benchmark scalable DST datasets Sim-M and Sim-R, and achieves competitive performance on the standard DSTC2 and WOZ 2.0 datasets."]], "0_corpus": "arxiv", "1_conv_id": "fa33616b16934385bfd92cd0a0c413cf", "1_model_name": "text-embedding-3-large", "1_prompt": "Study introducing a BERT-based framework achieving superior performance on scalable DST benchmarks like Sim-M and Sim-R.", "1_output": [["Study introducing a BERT-based framework achieving superior performance on scalable DST benchmarks like Sim-M and Sim-R.", "Title: BERT-DST: Scalable End-to-End Dialogue State Tracking with Bidirectional Encoder Representations from Transformer\n\nAbstract: An important yet rarely tackled problem in dialogue state tracking (DST) is scalability for dynamic ontology (e.g., movie, restaurant) and unseen slot values. We focus on a specific condition, where the ontology is unknown to the state tracker, but the target slot value (except for none and dontcare), possibly unseen during training, can be found as word segment in the dialogue context. Prior approaches often rely on candidate generation from n-gram enumeration or slot tagger outputs, which can be inefficient or suffer from error propagation. We propose BERT-DST, an end-to-end dialogue state tracker which directly extracts slot values from the dialogue context. We use BERT as dialogue context encoder whose contextualized language representations are suitable for scalable DST to identify slot values from their semantic context. Furthermore, we employ encoder parameter sharing across all slots with two advantages: (1) Number of parameters does not grow linearly with the ontology. (2) Language representation knowledge can be transferred among slots. Empirical evaluation shows BERT-DST with cross-slot parameter sharing outperforms prior work on the benchmark scalable DST datasets Sim-M and Sim-R, and achieves competitive performance on the standard DSTC2 and WOZ 2.0 datasets."]], "1_corpus": "arxiv"} {"tstamp": 1722227065.5016, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "d4f40214f39349929660960ef995c744", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "A study showing the drawbacks of using standard MLE training and offering a new training procedure for better test performance in language models.", "0_output": [["A study showing the drawbacks of using standard MLE training and offering a new training procedure for better test performance in language models.", "Title: Recurrent Neural Language Models as Probabilistic Finite-state Automata\n\nAbstract: Studying language models (LMs) in terms of well-understood formalisms allows us to precisely characterize their abilities and limitations. Previous work has investigated the representational capacity of recurrent neural network (RNN) LMs in terms of their capacity to recognize unweighted formal languages. 
However, LMs do not describe unweighted formal languages -- rather, they define \\emph{probability distributions} over strings. In this work, we study what classes of such probability distributions RNN LMs can represent, which allows us to make more direct statements about their capabilities. We show that simple RNNs are equivalent to a subclass of probabilistic finite-state automata, and can thus model a strict subset of probability distributions expressible by finite-state models. Furthermore, we study the space complexity of representing finite-state LMs with RNNs. We show that, to represent an arbitrary deterministic finite-state LM with $N$ states over an alphabet $\\alphabet$, an RNN requires $\\Omega\\left(N |\\Sigma|\\right)$ neurons. These results present a first step towards characterizing the classes of distributions RNN LMs can represent and thus help us understand their capabilities and limitations."]], "0_corpus": "arxiv", "1_conv_id": "d39dbe79ef5d443683896e332508c895", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "A study showing the drawbacks of using standard MLE training and offering a new training procedure for better test performance in language models.", "1_output": [["A study showing the drawbacks of using standard MLE training and offering a new training procedure for better test performance in language models.", "Title: Neural Architecture Search as Sparse Supernet\n\nAbstract: This paper aims at enlarging the problem of Neural Architecture Search (NAS) from Single-Path and Multi-Path Search to automated Mixed-Path Search. In particular, we model the NAS problem as a sparse supernet using a new continuous architecture representation with a mixture of sparsity constraints. The sparse supernet enables us to automatically achieve sparsely-mixed paths upon a compact set of nodes. To optimize the proposed sparse supernet, we exploit a hierarchical accelerated proximal gradient algorithm within a bi-level optimization framework. Extensive experiments on Convolutional Neural Network and Recurrent Neural Network search demonstrate that the proposed method is capable of searching for compact, general and powerful neural architectures."]], "1_corpus": "arxiv"} {"tstamp": 1722227074.3205, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "ec5eb017dc4d4d9fa6d04d114fcc2e00", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Propaganda-as-a-service via training-time attacks on language models for biased text generation ", "0_output": [["Propaganda-as-a-service via training-time attacks on language models for biased text generation ", "Title: Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures\n\nAbstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to \"spin\" their outputs so as to support an adversary-chosen sentiment or point of view -- but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization. Model spinning introduces a \"meta-backdoor\" into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary. 
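The record above on RNN LMs and probabilistic finite-state automata is about distributions over strings rather than unweighted languages; a tiny worked example of such a distribution may help. The two-state automaton, alphabet, and probabilities below are invented purely for illustration.

```python
# Illustrative probabilistic finite-state LM: hand-written transition probabilities,
# used to score whole strings. The numbers are made up for the example.
EOS = "</s>"
# transitions[state][symbol] = (next_state, probability); the probabilities leaving
# each state (including emitting EOS to stop) sum to 1, so this is a distribution
# over all finite strings.
transitions = {
    0: {"a": (1, 0.5), "b": (0, 0.3), EOS: (None, 0.2)},
    1: {"a": (1, 0.4), "b": (0, 0.4), EOS: (None, 0.2)},
}

def string_probability(symbols, start_state=0):
    """Probability that the automaton generates `symbols` and then stops."""
    state, prob = start_state, 1.0
    for sym in list(symbols) + [EOS]:
        if sym not in transitions.get(state, {}):
            return 0.0
        state, p = transitions[state][sym]
        prob *= p
    return prob

print(string_probability("ab"))   # 0.5 * 0.4 * 0.2 = 0.04
print(string_probability("ba"))   # 0.3 * 0.5 * 0.2 = 0.03
```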
Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims. To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call \"pseudo-words,\" and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary's meta-task. We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models."]], "0_corpus": "arxiv", "1_conv_id": "009892afcd5f438aa105fea295c61e62", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Propaganda-as-a-service via training-time attacks on language models for biased text generation ", "1_output": [["Propaganda-as-a-service via training-time attacks on language models for biased text generation ", "Title: Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures\n\nAbstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to \"spin\" their outputs so as to support an adversary-chosen sentiment or point of view -- but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization. Model spinning introduces a \"meta-backdoor\" into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary. Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims. To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call \"pseudo-words,\" and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary's meta-task. 
We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models."]], "1_corpus": "arxiv"} {"tstamp": 1722227089.1997, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "a76136d3818e49c29e4baa8391ebbab2", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Research on the effectiveness of chain-of-thought prompting to solve difficult language model tasks", "0_output": [["Research on the effectiveness of chain-of-thought prompting to solve difficult language model tasks", "Title: System-Level Genetic Codes: An Explanation for Biological Complexity\n\nAbstract: Complex systems with tightly coadapted parts frequently appear in living systems and are difficult to account for through Darwinian evolution, that is random variation and natural selection, if the constituent parts are independently coded in the genetic code. If the parts are independently coded, multiple simultaneous mutations appear necessary to create or modify these systems. It is generally believed that most proteins are independently coded. The textbook rule is one gene for one enzyme. Thus, biochemical systems with tightly coadapted parts such as the blood clotting cascade pose a difficulty for Darwinian evolution. This problem can be overcome if the current understanding of the genetic code is incomplete and a system-level genetic code in which seemingly independent proteins are encoded in an interdependent, highly correlated manner exists. The methods by which human beings design and fabricate complex systems of tightly coadapted parts are explored for insights into the requirements for a system-level genetic code. Detailed examples of system-level codes for networks of matching parts are presented. The implications of identifying and deciphering the system-level genetic code if it exists for the prevention, treatment, and cure of heart disease, cancer, immune disorders, and for rational drug design are discussed."]], "0_corpus": "arxiv", "1_conv_id": "891e5ee1250b4d1fac7f62baa9315f5e", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Research on the effectiveness of chain-of-thought prompting to solve difficult language model tasks", "1_output": [["Research on the effectiveness of chain-of-thought prompting to solve difficult language model tasks", "Title: ChainLM: Empowering Large Language Models with Improved Chain-of-Thought Prompting\n\nAbstract: Chain-of-Thought (CoT) prompting can enhance the reasoning capabilities of large language models (LLMs), establishing itself as a primary approach to solving complex reasoning tasks. Existing CoT synthesis approaches usually focus on simpler reasoning tasks and thus result in low-quality and inconsistent CoT prompts. In response to this challenge, we present an empirical investigation of CoT prompting and introduce CoTGenius, a novel framework designed for the automatic generation of superior CoT prompts. CoTGenius is developed based on three major evolution strategies, i.e., complicate, diversify, and specify-alongside two filtering mechanisms: evolutionary success judgement and correctness verification. We further employ CoTGenius to create an extensive CoT dataset, and subsequently fine-tune the Llama 2-Chat 7B and 13B models on this dataset. We call the resulting model ChainLM. 
To deal with the cumulative error issue in reasoning steps, we propose a step-level debating method, wherein multiple debaters discuss each reasoning step to arrive at the correct answer. Extensive experiments demonstrate that our ChainLM models exhibit enhanced proficiency in addressing a spectrum of complex reasoning problems compared to existing models. In addition, we conduct an in-depth analysis of the impact of data categories within CoTGenius on the model performance. We release our dataset and code at https://github.com/RUCAIBox/ChainLM."]], "1_corpus": "arxiv"} {"tstamp": 1722227103.829, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "0b5c946bf2ff4358b48d8d4a1a8e9854", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Study analyzing how well large language models can assist students in debugging code.", "0_output": [["Study analyzing how well large language models can assist students in debugging code.", "Title: A characterization of quasipositive Seifert surfaces (Constructions of quasipositive knots and links, III)\n\nAbstract: This article was originally published in Topology 31 (1992). The present hyperTeXed redaction corrects a few typographical errors and updates the references."]], "0_corpus": "arxiv", "1_conv_id": "721c802e8f3b4f46846d3f26b194aee4", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Study analyzing how well large language models can assist students in debugging code.", "1_output": [["Study analyzing how well large language models can assist students in debugging code.", "Title: Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests\n\nAbstract: Background and Context: Over the past year, large language models (LLMs) have taken the world by storm. In computing education, like in other walks of life, many opportunities and threats have emerged as a consequence. Objectives: In this article, we explore such opportunities and threats in a specific area: responding to student programmers' help requests. More specifically, we assess how good LLMs are at identifying issues in problematic code that students request help on. Method: We collected a sample of help requests and code from an online programming course. We then prompted two different LLMs (OpenAI Codex and GPT-3.5) to identify and explain the issues in the students' code and assessed the LLM-generated answers both quantitatively and qualitatively. Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently find at least one actual issue in each student program (GPT-3.5 in 90% of the cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57% of the time). False positives are common (40% chance for GPT-3.5). The advice that the LLMs provide on the issues is often sensible. The LLMs perform better on issues involving program logic rather than on output formatting. Model solutions are frequently provided even when the LLM is prompted not to. LLM responses to prompts in a non-English language are only slightly worse than responses to English prompts. Implications: Our results continue to highlight the utility of LLMs in programming education. At the same time, the results highlight the unreliability of LLMs: LLMs make some of the same mistakes that students do, perhaps especially when formatting output as required by automated assessment systems. 
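Relating to the ChainLM / chain-of-thought record above, the sketch below shows how a chain-of-thought few-shot prompt is typically assembled before it is sent to a model. The demonstration, the wording, and the commented-out `call_llm` hook are placeholders; this is not the CoTGenius pipeline itself.

```python
# Minimal chain-of-thought prompt construction (illustrative; not CoTGenius).
# `call_llm` is a stand-in for whatever completion API or local model is used.
COT_DEMOS = [
    {
        "question": "Tom has 3 boxes with 4 apples each. How many apples?",
        "reasoning": "Each box has 4 apples and there are 3 boxes, so 3 * 4 = 12.",
        "answer": "12",
    },
]

def build_cot_prompt(question, demos=COT_DEMOS):
    parts = []
    for d in demos:
        parts.append(
            f"Q: {d['question']}\nA: Let's think step by step. "
            f"{d['reasoning']} The answer is {d['answer']}."
        )
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

prompt = build_cot_prompt("A train travels 60 km/h for 2 hours. How far does it go?")
print(prompt)
# response = call_llm(prompt)  # placeholder: send the prompt to the model of choice
```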
Our study informs teachers interested in using LLMs as well as future efforts to customize LLMs for the needs of programming education."]], "1_corpus": "arxiv"} {"tstamp": 1722227110.4423, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "afa919e9ef4546cea9fd3213076acaca", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Metrics for evaluating step-by-step reasoning in language models", "0_output": [["Metrics for evaluating step-by-step reasoning in language models", "Title: ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning\n\nAbstract: Large language models show improved downstream task performance when prompted to generate step-by-step reasoning to justify their final answers. These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness (independent of the final answer) is difficult without reliable methods for automatic evaluation. We simply do not know how often the stated reasoning steps actually support the final end task predictions. In this work, we present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics. To evaluate ROSCOE against baseline metrics, we design a typology of reasoning errors and collect synthetic and human evaluation scores on commonly used reasoning datasets. In contrast with existing metrics, ROSCOE can measure semantic consistency, logicality, informativeness, fluency, and factuality - among other traits - by leveraging properties of step-by-step rationales. We empirically verify the strength of our metrics on five human annotated and six programmatically perturbed diagnostics datasets - covering a diverse set of tasks that require reasoning skills and show that ROSCOE can consistently outperform baseline metrics."]], "0_corpus": "arxiv", "1_conv_id": "1a87d93b61534b849c360a881d6e4c96", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Metrics for evaluating step-by-step reasoning in language models", "1_output": [["Metrics for evaluating step-by-step reasoning in language models", "Title: ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning\n\nAbstract: Large language models show improved downstream task performance when prompted to generate step-by-step reasoning to justify their final answers. These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness (independent of the final answer) is difficult without reliable methods for automatic evaluation. We simply do not know how often the stated reasoning steps actually support the final end task predictions. In this work, we present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics. To evaluate ROSCOE against baseline metrics, we design a typology of reasoning errors and collect synthetic and human evaluation scores on commonly used reasoning datasets. In contrast with existing metrics, ROSCOE can measure semantic consistency, logicality, informativeness, fluency, and factuality - among other traits - by leveraging properties of step-by-step rationales. 
We empirically verify the strength of our metrics on five human annotated and six programmatically perturbed diagnostics datasets - covering a diverse set of tasks that require reasoning skills and show that ROSCOE can consistently outperform baseline metrics."]], "1_corpus": "arxiv"} {"tstamp": 1722227122.621, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "118d687cc49c4f9d9169612aa7a8957a", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Paper focusing on evaluating the effectiveness of large language models in terms of instruction adherence ", "0_output": [["Paper focusing on evaluating the effectiveness of large language models in terms of instruction adherence ", "Title: Evaluating Large Language Models at Evaluating Instruction Following\n\nAbstract: As research in large language models (LLMs) continues to accelerate, LLM-based evaluation has emerged as a scalable and cost-effective alternative to human evaluations for comparing the ever increasing list of models. This paper investigates the efficacy of these ``LLM evaluators'', particularly in using them to assess instruction following, a metric that gauges how closely generated text adheres to the given instruction. We introduce a challenging meta-evaluation benchmark, LLMBar, designed to test the ability of an LLM evaluator in discerning instruction-following outputs. The authors manually curated 419 pairs of outputs, one adhering to instructions while the other diverging, yet may possess deceptive qualities that mislead an LLM evaluator, e.g., a more engaging tone. Contrary to existing meta-evaluation, we discover that different evaluators (i.e., combinations of LLMs and prompts) exhibit distinct performance on LLMBar and even the highest-scoring ones have substantial room for improvement. We also present a novel suite of prompting strategies that further close the gap between LLM and human evaluators. With LLMBar, we hope to offer more insight into LLM evaluators and foster future research in developing better instruction-following models."]], "0_corpus": "arxiv", "1_conv_id": "0d726137660541c283b405566e8e9a21", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Paper focusing on evaluating the effectiveness of large language models in terms of instruction adherence ", "1_output": [["Paper focusing on evaluating the effectiveness of large language models in terms of instruction adherence ", "Title: Evaluating Large Language Models at Evaluating Instruction Following\n\nAbstract: As research in large language models (LLMs) continues to accelerate, LLM-based evaluation has emerged as a scalable and cost-effective alternative to human evaluations for comparing the ever increasing list of models. This paper investigates the efficacy of these ``LLM evaluators'', particularly in using them to assess instruction following, a metric that gauges how closely generated text adheres to the given instruction. We introduce a challenging meta-evaluation benchmark, LLMBar, designed to test the ability of an LLM evaluator in discerning instruction-following outputs. The authors manually curated 419 pairs of outputs, one adhering to instructions while the other diverging, yet may possess deceptive qualities that mislead an LLM evaluator, e.g., a more engaging tone. 
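As a drastically simplified stand-in for the step-level scoring the ROSCOE record above describes, the snippet below scores each reasoning step by its similarity to the source problem. Bag-of-words cosine similarity replaces the learned sentence embeddings a real metric would use; it is only meant to show the shape of such a score.

```python
# Toy, unsupervised step-level score in the spirit of the ROSCOE record above:
# each reasoning step is scored by cosine similarity to the source text.
import numpy as np
from collections import Counter

def embed(text, vocab):
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def step_alignment_scores(source, steps):
    vocab = sorted(set((source + " " + " ".join(steps)).lower().split()))
    src = embed(source, vocab)
    scores = []
    for step in steps:
        vec = embed(step, vocab)
        denom = np.linalg.norm(src) * np.linalg.norm(vec)
        scores.append(float(src @ vec / denom) if denom else 0.0)
    return scores

source = "Tom has 3 boxes with 4 apples each"
steps = ["There are 3 boxes with 4 apples each", "So 3 times 4 equals 12 apples"]
print(step_alignment_scores(source, steps))  # higher = step better grounded in the source
```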
Contrary to existing meta-evaluation, we discover that different evaluators (i.e., combinations of LLMs and prompts) exhibit distinct performance on LLMBar and even the highest-scoring ones have substantial room for improvement. We also present a novel suite of prompting strategies that further close the gap between LLM and human evaluators. With LLMBar, we hope to offer more insight into LLM evaluators and foster future research in developing better instruction-following models."]], "1_corpus": "arxiv"} {"tstamp": 1722227128.3311, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "457524b04a99465e8c61334e8a23df34", "0_model_name": "voyage-multilingual-2", "0_prompt": "Tree-based Transformer model for code generation and long dependency handling", "0_output": [["Tree-based Transformer model for code generation and long dependency handling", "Title: TreeGen: A Tree-Based Transformer Architecture for Code Generation\n\nAbstract: A code generation system generates programming language code based on an input natural language description. State-of-the-art approaches rely on neural networks for code generation. However, these code generators suffer from two problems. One is the long dependency problem, where a code element often depends on another far-away code element. A variable reference, for example, depends on its definition, which may appear quite a few lines before. The other problem is structure modeling, as programs contain rich structural information. In this paper, we propose a novel tree-based neural architecture, TreeGen, for code generation. TreeGen uses the attention mechanism of Transformers to alleviate the long-dependency problem, and introduces a novel AST reader (encoder) to incorporate grammar rules and AST structures into the network. We evaluated TreeGen on a Python benchmark, HearthStone, and two semantic parsing benchmarks, ATIS and GEO. TreeGen outperformed the previous state-of-the-art approach by 4.5 percentage points on HearthStone, and achieved the best accuracy among neural network-based approaches on ATIS (89.1%) and GEO (89.6%). We also conducted an ablation test to better understand each component of our model."]], "0_corpus": "arxiv", "1_conv_id": "beab0578168746e2b70b8a85f759bc3a", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Tree-based Transformer model for code generation and long dependency handling", "1_output": [["Tree-based Transformer model for code generation and long dependency handling", "Title: TreeGen: A Tree-Based Transformer Architecture for Code Generation\n\nAbstract: A code generation system generates programming language code based on an input natural language description. State-of-the-art approaches rely on neural networks for code generation. However, these code generators suffer from two problems. One is the long dependency problem, where a code element often depends on another far-away code element. A variable reference, for example, depends on its definition, which may appear quite a few lines before. The other problem is structure modeling, as programs contain rich structural information. In this paper, we propose a novel tree-based neural architecture, TreeGen, for code generation. TreeGen uses the attention mechanism of Transformers to alleviate the long-dependency problem, and introduces a novel AST reader (encoder) to incorporate grammar rules and AST structures into the network. 
We evaluated TreeGen on a Python benchmark, HearthStone, and two semantic parsing benchmarks, ATIS and GEO. TreeGen outperformed the previous state-of-the-art approach by 4.5 percentage points on HearthStone, and achieved the best accuracy among neural network-based approaches on ATIS (89.1%) and GEO (89.6%). We also conducted an ablation test to better understand each component of our model."]], "1_corpus": "arxiv"} {"tstamp": 1722227147.2195, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "cfdfbee509bf444ebe1455fb9cd8068a", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Study showing improvements in spatio-temporal video grounding with transformer-based methods", "0_output": [["Study showing improvements in spatio-temporal video grounding with transformer-based methods", "Title: Human-centric Spatio-Temporal Video Grounding With Visual Transformers\n\nAbstract: In this work, we introduce a novel task - Humancentric Spatio-Temporal Video Grounding (HC-STVG). Unlike the existing referring expression tasks in images or videos, by focusing on humans, HC-STVG aims to localize a spatiotemporal tube of the target person from an untrimmed video based on a given textural description. This task is useful, especially for healthcare and security-related applications, where the surveillance videos can be extremely long but only a specific person during a specific period of time is concerned. HC-STVG is a video grounding task that requires both spatial (where) and temporal (when) localization. Unfortunately, the existing grounding methods cannot handle this task well. We tackle this task by proposing an effective baseline method named Spatio-Temporal Grounding with Visual Transformers (STGVT), which utilizes Visual Transformers to extract cross-modal representations for video-sentence matching and temporal localization. To facilitate this task, we also contribute an HC-STVG dataset consisting of 5,660 video-sentence pairs on complex multi-person scenes. Specifically, each video lasts for 20 seconds, pairing with a natural query sentence with an average of 17.25 words. Extensive experiments are conducted on this dataset, demonstrating the newly-proposed method outperforms the existing baseline methods."]], "0_corpus": "arxiv", "1_conv_id": "dcfe4354507d43b6a4ccdfce0c60a4a5", "1_model_name": "text-embedding-3-large", "1_prompt": "Study showing improvements in spatio-temporal video grounding with transformer-based methods", "1_output": [["Study showing improvements in spatio-temporal video grounding with transformer-based methods", "Title: Grounding Spatio-Temporal Language with Transformers\n\nAbstract: Language is an interface to the outside world. In order for embodied agents to use it, language must be grounded in other, sensorimotor modalities. While there is an extended literature studying how machines can learn grounded language, the topic of how to learn spatio-temporal linguistic concepts is still largely uncharted. To make progress in this direction, we here introduce a novel spatio-temporal language grounding task where the goal is to learn the meaning of spatio-temporal descriptions of behavioral traces of an embodied agent. This is achieved by training a truth function that predicts if a description matches a given history of observations. The descriptions involve time-extended predicates in past and present tense as well as spatio-temporal references to objects in the scene. 
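The TreeGen record above hinges on feeding a program's AST and grammar rules to the model. As a simple illustration of where that structural information comes from (not TreeGen's actual AST reader), the snippet below uses Python's standard `ast` module to linearize a program into (parent, child) node-type pairs that a tree-aware encoder could consume.

```python
# Illustrative AST linearization with the standard-library `ast` module.
# This only shows how structural information can be extracted from code; it is not
# TreeGen's grammar-rule encoding.
import ast

def ast_edges(source_code):
    """Return (parent_node_type, child_node_type) pairs from the program's AST."""
    tree = ast.parse(source_code)
    edges = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return edges

program = "def add(a, b):\n    return a + b\n"
for parent, child in ast_edges(program):
    print(f"{parent} -> {child}")
# e.g. Module -> FunctionDef, FunctionDef -> arguments, Return -> BinOp, ...
```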
To study the role of architectural biases in this task, we train several models including multimodal Transformer architectures; the latter implement different attention computations between words and objects across space and time. We test models on two classes of generalization: 1) generalization to randomly held-out sentences; 2) generalization to grammar primitives. We observe that maintaining object identity in the attention computation of our Transformers is instrumental to achieving good performance on generalization overall, and that summarizing object traces in a single token has little influence on performance. We then discuss how this opens new perspectives for language-guided autonomous embodied agents. We also release our code under open-source license as well as pretrained models and datasets to encourage the wider community to build upon and extend our work in the future."]], "1_corpus": "arxiv"} {"tstamp": 1722227169.0685, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "2f3d197a65dc467ca5bfd7baa9795464", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "A model improving question generation and answering with Transformers in semi-supervised setup.", "0_output": [["A model improving question generation and answering with Transformers in semi-supervised setup.", "Title: Machine Comprehension by Text-to-Text Neural Question Generation\n\nAbstract: We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning. After teacher forcing for standard maximum likelihood training, we fine-tune the model using policy gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means to improve the performance of question answering systems. Our model is trained and evaluated on the recent question-answering dataset SQuAD."]], "0_corpus": "arxiv", "1_conv_id": "6fe550931a3f4d108db6ae848b48f728", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "A model improving question generation and answering with Transformers in semi-supervised setup.", "1_output": [["A model improving question generation and answering with Transformers in semi-supervised setup.", "Title: Hexagons and Interfaces in a Vibrated Granular Layer\n\nAbstract: The order parameter model based on parametric Ginzburg-Landau equation is used to describe high acceleration patterns in vibrated layer of granular material. At large amplitude of driving both hexagons and interfaces emerge. Transverse instability leading to formation of ``decorated'' interfaces and labyrinthine patterns, is found. 
Additional sub-harmonic forcing leads to controlled interface motion."]], "1_corpus": "arxiv"} {"tstamp": 1722227184.2501, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "df087591632840268d7cda35b4eee3d6", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "A model replacing RNN/CNN with self-attention for better HAR results ", "0_output": [["A model replacing RNN/CNN with self-attention for better HAR results ", "Title: Partial Representations and Amenable Fell Bundles over Free Groups\n\nAbstract: We show that a Fell bundle B = {B_t}_{t \\in F}, over an arbitrary free group F, is amenable, whenever it is orthogonal (in the sense that B_x^* B_y = 0, if x and y are distinct generators of F) and semi-saturated (in the sense that B_{ts} coincides with the closed linear span of B_t B_s, when the multiplication ``ts'' involves no cancelation)."]], "0_corpus": "arxiv", "1_conv_id": "39bbefcfdcb54d1fbfae7a726a6a9260", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "A model replacing RNN/CNN with self-attention for better HAR results ", "1_output": [["A model replacing RNN/CNN with self-attention for better HAR results ", "Title: Electromagnetic transitions with effective operators\n\nAbstract: In the no-core shell model formalism we compute effective one- and two-body operators, using the Lee-Suzuki procedure within the two-body cluster approximation. We evaluate the validity of the latter through calculations in reduced model spaces. In particular, we test the results for the two-body system and find that indeed the effective operators in the reduced space reproduce the expectation values or transition strengths computed in the full space. On the other hand, the renormalization for operators in the case of 6Li is very weak, suggesting the need for higher-body clusters in computing the effective interaction."]], "1_corpus": "arxiv"} {"tstamp": 1722227262.9897, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "06e0a8ed452a4140b64f3fc3ef65afc7", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "can you travel to kenya with an expired passport?", "0_output": [["can you travel to kenya with an expired passport?", "Gaming Stackexchange\n\nQ: Can I get a document back after I have returned it? After approving a migrant worker's passport and returning it to them, I realized that it was expired. I still have their work ticket, which appears valid. Is there any way to deny them entry to the country at this point?\n\nA: Nope.\nWell, you just made a big mistake: You approved before checking everything. The game is very harsh in this regard and does not allow you to roll back your decision, once you have given back the document.\nThis is part of the message - Think before you act, even if it takes valuable time ... Or just push your luck.\nAlso: You don't get fined immediately for a simple error - you get two chances every day to make mistakes (intentional or otherwise), and then the fines start kicking in.\n"]], "0_corpus": "stackexchange", "1_conv_id": "18595514e0f1447ea4ec341445ee2f8d", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "can you travel to kenya with an expired passport?", "1_output": [["can you travel to kenya with an expired passport?", "Gaming Stackexchange\n\nQ: Can I get a document back after I have returned it? After approving a migrant worker's passport and returning it to them, I realized that it was expired. I still have their work ticket, which appears valid. 
Is there any way to deny them entry to the country at this point?\n\nA: Nope.\nWell, you just made a big mistake: You approved before checking everything. The game is very harsh in this regard and does not allow you to roll back your decision, once you have given back the document.\nThis is part of the message - Think before you act, even if it takes valuable time ... Or just push your luck.\nAlso: You don't get fined immediately for a simple error - you get two chances every day to make mistakes (intentional or otherwise), and then the fines start kicking in.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722227306.3763, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "5dea31ed6e6640a48d84eccc072395b9", "0_model_name": "text-embedding-3-large", "0_prompt": "Paper on GPt models for embedding", "0_output": [["Paper on GPt models for embedding", "Title: Gaussian Process Topic Models\n\nAbstract: We introduce Gaussian Process Topic Models (GPTMs), a new family of topic models which can leverage a kernel among documents while extracting correlated topics. GPTMs can be considered a systematic generalization of the Correlated Topic Models (CTMs) using ideas from Gaussian Process (GP) based embedding. Since GPTMs work with both a topic covariance matrix and a document kernel matrix, learning GPTMs involves a novel component-solving a suitable Sylvester equation capturing both topic and document dependencies. The efficacy of GPTMs is demonstrated with experiments evaluating the quality of both topic modeling and embedding."]], "0_corpus": "arxiv", "1_conv_id": "64b61d37f88c4e7f90958d2817eca881", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Paper on GPt models for embedding", "1_output": [["Paper on GPt models for embedding", "Title: Nimble GNN Embedding with Tensor-Train Decomposition\n\nAbstract: This paper describes a new method for representing embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition. We consider the scenario where (a) the graph data that lack node features, thereby requiring the learning of embeddings during training; and (b) we wish to exploit GPU platforms, where smaller tables are needed to reduce host-to-GPU communication even for large-memory GPUs. The use of TT enables a compact parameterization of the embedding, rendering it small enough to fit entirely on modern GPUs even for massive graphs. When combined with judicious schemes for initialization and hierarchical graph partitioning, this approach can reduce the size of node embedding vectors by 1,659 times to 81,362 times on large publicly available benchmark datasets, achieving comparable or better accuracy and significant speedups on multi-GPU systems. In some cases, our model without explicit node features on input can even match the accuracy of models that use node features."]], "1_corpus": "arxiv"} {"tstamp": 1722227318.8507, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "b47a04ca853a4603ab5765eabae923fb", "0_model_name": "voyage-multilingual-2", "0_prompt": "Work using GPT models for text embeddings", "0_output": [["Work using GPT models for text embeddings", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. 
Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "0_corpus": "arxiv", "1_conv_id": "c086e98e80db4adfb1a1ffe9e6346a15", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Work using GPT models for text embeddings", "1_output": [["Work using GPT models for text embeddings", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "1_corpus": "arxiv"} {"tstamp": 1722227337.5868, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "4972233d3fbe45d39f4c2244b64a0539", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: Subgroups of inertia groups arising from abelian varieties\n\nAbstract: Given an abelian variety over a field with a discrete valuation, Grothendieck defined a certain open normal subgroup of the absolute inertia group. This subgroup encodes information on the extensions over which the abelian variety acquires semistable reduction. We study this subgroup, and use it to obtain information on the extensions over which the abelian variety acquires semistable reduction."]], "0_corpus": "arxiv", "1_conv_id": "1ecdceaadad9423e960a4be24b139ff5", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: Polarizability Extraction for Waveguide-Fed Metasurfaces\n\nAbstract: We consider the design and modeling of metasurfaces that couple energy from guided waves to propagating wavefronts. This is a first step towards a comprehensive, multiscale modeling platform for metasurface antennas-large arrays of metamaterial elements embedded in a waveguide structure that radiates into free-space--in which the detailed electromagnetic responses of metamaterial elements are replaced by polarizable dipoles. We present two methods to extract the effective polarizability of a metamaterial element embedded in a one- or two-dimensional waveguide. 
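To connect with the SGPT record above, here is one common way to pull a sentence embedding out of a decoder-only Transformer with the Hugging Face `transformers` library: run the model and mean-pool the last hidden states under the attention mask. The model name is only an example, and plain masked mean pooling is a simplification of SGPT's position-weighted pooling.

```python
# Sketch: mean-pooled hidden states as a sentence embedding (simplified relative to
# SGPT's position-weighted pooling). Assumes the `transformers` and `torch` packages
# and that the example checkpoint can be downloaded.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "distilgpt2"  # example decoder-only model, not an SGPT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers have no pad token
model = AutoModel.from_pretrained(model_name)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()    # (batch, seq, 1)
    return (hidden * mask).sum(1) / mask.sum(1)             # masked mean pooling

vectors = embed(["semantic search with decoder-only models"])
print(vectors.shape)
```

The resulting vectors can then be compared with cosine similarity for retrieval, which is the "semantic search via prompting or fine-tuning" use case the record describes.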
The first method invokes surface equivalence principles, averaging over the effective surface currents and charges within an element to obtain the effective dipole moments; the second method is based on computing the coefficients of the scattered waves within the waveguide, from which the effective polarizability can be inferred. We demonstrate these methods on several variants of waveguide-fed metasurface elements, finding excellent agreement between the two, as well as with analytical expressions derived for irises with simpler geometries. Extending the polarizability extraction technique to higher order multipoles, we confirm the validity of the dipole approximation for common metamaterial elements. With the effective polarizabilities of the metamaterial elements accurately determined, the radiated fields generated by a metasurface antenna (inside and outside the antenna) can be found self-consistently by including the interactions between polarizable dipoles. The dipole description provides an alternative language and computational framework for engineering metasurface antennas, holograms, lenses, beam-forming arrays, and other electrically large, waveguide-fed metasurface structures."]], "1_corpus": "arxiv"} {"tstamp": 1722227347.7109, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "e2c63cb9c4d74f3a86a2d5ce975990a6", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. 
This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "0_corpus": "arxiv", "1_conv_id": "2e8d1e46b8694af8bcd2c4bb95e142c3", "1_model_name": "text-embedding-004", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: Improving Text Embeddings with Large Language Models\n\nAbstract: In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks."]], "1_corpus": "arxiv"} {"tstamp": 1722227356.5243, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5bebb04b32774842adaca330c780388e", "0_model_name": "BM25", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. 
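The text-clustering-with-LLM-embeddings record above boils down to a two-step pipeline: embed the texts, then run a standard clustering algorithm over the vectors. A minimal scikit-learn sketch follows, with random vectors standing in for real LLM or BERT embeddings purely for illustration.

```python
# Minimal embed-then-cluster pipeline in the spirit of the record above.
# Random vectors stand in for real embeddings; cluster count is arbitrary.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 384))   # pretend these came from an embedding model

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(embeddings)
labels = kmeans.labels_

# Silhouette score is one quick internal check of cluster quality.
print("silhouette:", silhouette_score(embeddings, labels))
```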
This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "0_corpus": "arxiv", "1_conv_id": "17f56269b9cf4be3ab5bf8d23d304eb2", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: Polarizability Extraction for Waveguide-Fed Metasurfaces\n\nAbstract: We consider the design and modeling of metasurfaces that couple energy from guided waves to propagating wavefronts. This is a first step towards a comprehensive, multiscale modeling platform for metasurface antennas-large arrays of metamaterial elements embedded in a waveguide structure that radiates into free-space--in which the detailed electromagnetic responses of metamaterial elements are replaced by polarizable dipoles. We present two methods to extract the effective polarizability of a metamaterial element embedded in a one- or two-dimensional waveguide. The first method invokes surface equivalence principles, averaging over the effective surface currents and charges within an element to obtain the effective dipole moments; the second method is based on computing the coefficients of the scattered waves within the waveguide, from which the effective polarizability can be inferred. We demonstrate these methods on several variants of waveguide-fed metasurface elements, finding excellent agreement between the two, as well as with analytical expressions derived for irises with simpler geometries. Extending the polarizability extraction technique to higher order multipoles, we confirm the validity of the dipole approximation for common metamaterial elements. With the effective polarizabilities of the metamaterial elements accurately determined, the radiated fields generated by a metasurface antenna (inside and outside the antenna) can be found self-consistently by including the interactions between polarizable dipoles. The dipole description provides an alternative language and computational framework for engineering metasurface antennas, holograms, lenses, beam-forming arrays, and other electrically large, waveguide-fed metasurface structures."]], "1_corpus": "arxiv"} {"tstamp": 1722227365.1903, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "17c7b6e61796401ea7d401cf890f2283", "0_model_name": "BM25", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. 
Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "0_corpus": "arxiv", "1_conv_id": "54529cd6bc5349eebebfd19f27aa985b", "1_model_name": "text-embedding-3-large", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: How to use LLMs for Text Analysis\n\nAbstract: This guide introduces Large Language Models (LLM) as a highly versatile text analysis method within the social sciences. As LLMs are easy-to-use, cheap, fast, and applicable on a broad range of text analysis tasks, ranging from text annotation and classification to sentiment analysis and critical discourse analysis, many scholars believe that LLMs will transform how we do text analysis. This how-to guide is aimed at students and researchers with limited programming experience, and offers a simple introduction to how LLMs can be used for text analysis in your own research project, as well as advice on best practices. We will go through each of the steps of analyzing textual data with LLMs using Python: installing the software, setting up the API, loading the data, developing an analysis prompt, analyzing the text, and validating the results. As an illustrative example, we will use the challenging task of identifying populism in political texts, and show how LLMs move beyond the existing state-of-the-art."]], "1_corpus": "arxiv"} {"tstamp": 1722227372.1059, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "7c4f641b6b16434ba11ddcb50efe7dd3", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: A Robust Parser Based on Syntactic Information\n\nAbstract: In this paper, we propose a robust parser which can parse extragrammatical sentences. This parser can recover them using only syntactic information. It can be easily modified and extended because it utilize only syntactic information."]], "0_corpus": "arxiv", "1_conv_id": "4b44b05947b841cb9601128f35a74d27", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders\n\nAbstract: Large decoder-only language models (LLMs) are the state-of-the-art models on most of today's NLP tasks and benchmarks. Yet, the community is only slowly adopting these models for text embedding tasks, which require rich contextualized representations. In this work, we introduce LLM2Vec, a simple unsupervised approach that can transform any decoder-only LLM into a strong text encoder. 
LLM2Vec consists of three simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. We demonstrate the effectiveness of LLM2Vec by applying it to 3 popular LLMs ranging from 1.3B to 7B parameters and evaluate the transformed models on English word- and sequence-level tasks. We outperform encoder-only models by a large margin on word-level tasks and reach a new unsupervised state-of-the-art performance on the Massive Text Embeddings Benchmark (MTEB). Moreover, when combining LLM2Vec with supervised contrastive learning, we achieve state-of-the-art performance on MTEB among models that train only on publicly available data. Our strong empirical results and extensive analysis demonstrate that LLMs can be effectively transformed into universal text encoders in a parameter-efficient manner without the need for expensive adaptation or synthetic GPT-4 generated data."]], "1_corpus": "arxiv"} {"tstamp": 1722227388.5972, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "3e693ba9880b4b2a96103949236f54eb", "0_model_name": "BM25", "0_prompt": "How to use GPT-3 for text embedding?", "0_output": [["How to use GPT-3 for text embedding?", "Title: Prior Art Search and Reranking for Generated Patent Text\n\nAbstract: Generative models, such as GPT-2, have demonstrated impressive results recently. A fundamental question we'd like to address is: where did the generated text come from? This work is our initial effort toward answering the question by using prior art search. The purpose of the prior art search is to find the most similar prior text in the training data of GPT-2. We take a reranking approach and apply it to the patent domain. Specifically, we pre-train GPT-2 models from scratch by using the patent data from the USPTO. The input for the prior art search is the patent text generated by the GPT-2 model. We also pre-trained BERT models from scratch for converting patent text to embeddings. The steps of reranking are: (1) search the most similar text in the training data of GPT-2 by taking a bag-of-word ranking approach (BM25), (2) convert the search results in text format to BERT embeddings, and (3) provide the final result by ranking the BERT embeddings based on their similarities with the patent text generated by GPT-2. The experiments in this work show that such reranking is better than ranking with embeddings alone. However, our mixed results also indicate that calculating the semantic similarities among long text spans is still challenging. To our knowledge, this work is the first to implement a reranking system to identify retrospectively the most similar inputs to a GPT model based on its output."]], "0_corpus": "arxiv", "1_conv_id": "0d5001addb9d4a18b601c3d872fd0520", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "How to use GPT-3 for text embedding?", "1_output": [["How to use GPT-3 for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. 
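Both the LLM2Vec and GTE records above rest on contrastive learning over text pairs. The sketch below shows the standard in-batch InfoNCE objective on which such training is typically built; it is a generic formulation with made-up tensors, not either paper's exact loss, pooling, or data pipeline.

```python
# Generic in-batch contrastive (InfoNCE) loss over paired text embeddings,
# illustrating the training signal behind the LLM2Vec / GTE records above.
import torch
import torch.nn.functional as F

def info_nce(query_emb, pos_emb, temperature=0.05):
    """query_emb[i] and pos_emb[i] embed a positive pair; the other rows in the
    batch act as in-batch negatives."""
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    logits = q @ p.T / temperature                  # (batch, batch) scaled cosine sims
    targets = torch.arange(q.size(0))               # the matching row is the positive
    return F.cross_entropy(logits, targets)

queries = torch.randn(8, 256, requires_grad=True)   # stand-ins for encoder outputs
positives = torch.randn(8, 256)
loss = info_nce(queries, positives)
loss.backward()
print(float(loss))
```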
By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "1_corpus": "arxiv"} {"tstamp": 1722227396.7613, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "a588532639944ec6bf1492cbd64ad35d", "0_model_name": "voyage-multilingual-2", "0_prompt": "How to use GPT for text embedding?", "0_output": [["How to use GPT for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "0_corpus": "arxiv", "1_conv_id": "3b7155d235974f69a8757e75f4f9bb52", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "How to use GPT for text embedding?", "1_output": [["How to use GPT for text embedding?", "Title: Phase Space Derivation of a Variational Principle for One Dimensional Hamiltonian Systems\n\nAbstract: We consider the bifurcation problem u'' + \\lambda u = N(u) with two point boundary conditions where N(u) is a general nonlinear term which may also depend on the eigenvalue \\lambda. A new derivation of a variational principle for the lowest eigenvalue \\lambda is given. 
This derivation makes use only of simple algebraic inequalities and leads directly to a more explicit expression for the eigenvalue than what had been given previously."]], "1_corpus": "arxiv"} {"tstamp": 1722227401.7211, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "2f09b604c1644cccabd56e80c3e71093", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "How to use GPT for text embedding?", "0_output": [["How to use GPT for text embedding?", "Title: Numerical Criteria for very Ampleness of Divisors on Projective Bundles over an elliptic curve\n\nAbstract: In Butler, J. Differential Geom. 39 (1):1--34, 1994, the author gives a sufficient condition for a line bundle associated with a divisor D to be normally generated on $X=P(E)$ where E is a vector bundle over a smooth curve C. A line bundle which is ample and normally generated is automatically very ample. Therefore the condition found in Butler's work, together with Miyaoka's well known ampleness criterion, give a sufficient condition for the very ampleness of D on X. This work is devoted to the study of numerical criteria for very ampleness of divisors D which do not satisfy the above criterion, in the case of C elliptic. Numerical conditions for the very ampleness of D are proved, improving existing results. In some cases a complete numerical characterization is found."]], "0_corpus": "arxiv", "1_conv_id": "66ad5195a32443cea82ad127a698d152", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "How to use GPT for text embedding?", "1_output": [["How to use GPT for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "1_corpus": "arxiv"} {"tstamp": 1722227416.1574, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "e7d1f7ce30e2419d8edc849d80b6ca18", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "How to use GPT for embedding & search?", "0_output": [["How to use GPT for embedding & search?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. 
In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "0_corpus": "arxiv", "1_conv_id": "a75a2008513a4c0e83f403f59cdd4efe", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "How to use GPT for embedding & search?", "1_output": [["How to use GPT for embedding & search?", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "1_corpus": "arxiv"}
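
Several of the votes above pair queries such as "How to use GPT for text embedding?" with the GTE abstract. As a minimal sketch of how a general-purpose embedding model of that kind is typically applied to this query-to-abstract matching task, the snippet below encodes one query and a toy corpus with sentence-transformers and ranks the corpus by cosine similarity. The checkpoint name thenlper/gte-base and the toy corpus are assumptions for illustration; any MTEB-style embedding model exposed through sentence-transformers could be substituted.

    # Minimal dense-retrieval sketch: embed a query and a few candidate texts,
    # then rank the candidates by cosine similarity.
    # "thenlper/gte-base" is an assumed model id; any sentence-transformers-
    # compatible embedding model can be substituted.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("thenlper/gte-base")

    query = "How to use GPT for text embedding?"
    corpus = [
        "SGPT uses decoder-only transformers for sentence embeddings and semantic search.",
        "LLM2Vec turns decoder-only LLMs into text encoders via bidirectional attention.",
        "A variational principle for one-dimensional Hamiltonian systems.",
    ]

    # normalize_embeddings=True makes the dot product equal to cosine similarity.
    query_emb = model.encode(query, normalize_embeddings=True)
    corpus_emb = model.encode(corpus, normalize_embeddings=True)

    scores = util.cos_sim(query_emb, corpus_emb)[0]
    for text, score in sorted(zip(corpus, scores.tolist()), key=lambda x: -x[1]):
        print(f"{score:.3f}  {text}")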
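The prior-art-search abstract retrieved by BM25 for the "How to use GPT-3 for text embedding?" query describes a retrieve-then-rerank pipeline: collect candidates with BM25, embed them, and reorder by embedding similarity. A small sketch of that pattern follows, assuming the rank_bm25 package and an off-the-shelf sentence-transformers checkpoint; the whitespace tokenization and three-document corpus are placeholders rather than the paper's actual setup.

    # Retrieve-then-rerank sketch: BM25 for candidate generation,
    # embedding cosine similarity for the final ordering.
    # rank_bm25 and the "all-MiniLM-L6-v2" checkpoint are illustrative choices.
    from rank_bm25 import BM25Okapi
    from sentence_transformers import SentenceTransformer, util

    corpus = [
        "A method for training text embedding models with contrastive learning.",
        "Gravitational waves from binary black hole inspirals in LIGO data.",
        "Sentence embeddings from decoder-only language models for semantic search.",
    ]
    query = "text embeddings for semantic search"

    # Stage 1: lexical retrieval with BM25 (naive whitespace tokenization).
    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
    candidates = bm25.get_top_n(query.lower().split(), corpus, n=2)

    # Stage 2: rerank the BM25 candidates by embedding similarity.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    query_emb = model.encode(query, normalize_embeddings=True)
    cand_emb = model.encode(candidates, normalize_embeddings=True)
    scores = util.cos_sim(query_emb, cand_emb)[0]

    for doc, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
        print(f"{score:.3f}  {doc}")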
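The final record retrieves the SGPT abstract, which argues that decoder-only GPT models can serve directly as sentence encoders for semantic search. One ingredient of SGPT's bi-encoder variant is position-weighted mean pooling over the decoder's hidden states, so that later tokens, which have attended to more context, contribute more to the sentence embedding. The sketch below illustrates only that pooling step: gpt2 stands in for the much larger checkpoints used in the paper, and none of its training or prompting recipe is reproduced here.

    # Rough sketch of SGPT-style position-weighted mean pooling over a
    # decoder-only model's hidden states. gpt2 is a stand-in checkpoint.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = AutoModel.from_pretrained("gpt2")

    sentences = [
        "How to use GPT for embedding & search?",
        "SGPT uses decoders for sentence embeddings.",
    ]
    batch = tokenizer(sentences, padding=True, return_tensors="pt")

    with torch.no_grad():
        hidden = model(**batch).last_hidden_state       # (batch, seq, dim)

    mask = batch["attention_mask"].unsqueeze(-1).float()  # (batch, seq, 1)
    # Position weights 1..S, normalized over the non-padding tokens of each sentence.
    positions = torch.arange(1, hidden.size(1) + 1, dtype=torch.float)
    weights = positions.view(1, -1, 1) * mask
    weights = weights / weights.sum(dim=1, keepdim=True)

    embeddings = (hidden * weights).sum(dim=1)            # (batch, dim)
    print(embeddings.shape)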