Dataset: mteb/arena-results
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
File: data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl (50.7 kB)
Commit: 44b7adb "Scheduled Commit" by Muennighoff (verified)
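Each line below is one JSON record describing a pairwise retrieval battle: a vote type (leftvote, rightvote, tievote, or bothbadvote) plus the prompt, the two anonymized models' names, their retrieved passages, and the corpus each side searched. A minimal sketch for fetching and loading this shard with the huggingface_hub and datasets libraries follows; the repo id and file path are taken from the header above, while the loading recipe itself is an assumption, not an official one.

from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Fetch this specific JSONL shard from the dataset repo
# (repo id and file path as shown in the header above).
path = hf_hub_download(
    repo_id="mteb/arena-results",
    filename="data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl",
    repo_type="dataset",
)

# Parse the JSON-lines file; each row is one battle record.
ds = load_dataset("json", data_files=path, split="train")
print(ds[0]["type"], ds[0]["0_model_name"], "vs", ds[0]["1_model_name"])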
{"tstamp": 1722223836.4697, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "1e069e026372469d813240ff569580d8", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "bf0b981876b14c12ad953a3a042f49d4", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "1_corpus": "arxiv"}
{"tstamp": 1722223862.4429, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "236e330aa5dd46eeb2c678b95a41260c", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Search for gravitational waves from binary black hole inspirals in LIGO data\n\nAbstract: We report on a search for gravitational waves from binary black hole inspirals in the data from the second science run of the LIGO interferometers. The search focused on binary systems with component masses between 3 and 20 solar masses. Optimally oriented binaries with distances up to 1 Mpc could be detected with efficiency of at least 90%. We found no events that could be identified as gravitational waves in the 385.6 hours of data that we searched."]], "0_corpus": "arxiv", "1_conv_id": "0ec343c3523d47eaae6a6a59d682f5f2", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Is the Commercial Mass Media Necessary, or Even Desirable, for Liberal Democracy?\n\nAbstract: Is a commercial mass media, dependent on the market for its sustenance, necessary, or even desirable, for liberal democracy? Yochai Benkler has argued that a decentralized, peer-to-peer system of communications and information is both possible with digital technology and preferable to a system based on commercial mass media. He has contended in fact that the presence of politically powerful, copyright-rich mass media imposes significant barriers to the development of peer-to-peer information-sharing networks. In contrast, I have argued that the commercial mass media play an important, and perhaps even vital, role in liberal democracy by galvanizing public opinion, serving as a watchdog against government and corporate wrongdoing, agenda-setting (which enables public discourse), and serving as a relatively trustworthy source of information. This paper seeks to push the ball forward on this issue. It first isolates and enumerates the contributions that the commercial mass media are said to make towards liberal democracy. It then briefly assesses the extent to which the commercial mass media actually fulfills these constitutive functions. It then asks whether alternative institutions might serve some or all of these functions just as well or better. In so doing, it looks both to the past and the future. First, it examines the political party-supported press that thrived in the United States through much of the 19th century. Second, it examines government-funded mass media. Third, it looks, skeptically, at possibilities for peer-to-peer sharing of information and opinion in the digital network environment. I conclude that, despite the weaknesses of commercial mass media, an information policy suitable to liberal democracy should include a plurality of types of voices, including commercial mass media."]], "1_corpus": "arxiv"}
{"tstamp": 1722223898.7505, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "65a408d68bae4872a39c85580503e046", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Search for gravitational waves from binary black hole inspirals in LIGO data\n\nAbstract: We report on a search for gravitational waves from binary black hole inspirals in the data from the second science run of the LIGO interferometers. The search focused on binary systems with component masses between 3 and 20 solar masses. Optimally oriented binaries with distances up to 1 Mpc could be detected with efficiency of at least 90%. We found no events that could be identified as gravitational waves in the 385.6 hours of data that we searched."]], "0_corpus": "arxiv", "1_conv_id": "db30f73e43a742018303a5aa939ef5b7", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: InstructCoder: Instruction Tuning Large Language Models for Code Editing\n\nAbstract: Code editing encompasses a variety of pragmatic tasks that developers deal with daily. Despite its relevance and practical usefulness, automatic code editing remains an underexplored area in the evolution of deep learning models, partly due to data scarcity. In this work, we explore the use of Large Language Models (LLMs) to edit code based on user instructions. Evaluated on a novel human-written execution-based benchmark dubbed EditEval, we found current models often struggle to fulfill the instructions. In light of this, we contribute InstructCoder, the first instruction-tuning dataset designed to adapt LLMs for general-purpose code editing, containing high-diversity code-editing tasks such as comment insertion, code optimization, and code refactoring. It consists of over 114,000 instruction-input-output triplets and covers multiple distinct code editing scenarios. The collection process starts with filtered commit data sourced from GitHub Python repositories as seeds. Subsequently, the dataset is systematically expanded through an iterative process, where both seed and generated tasks are used to prompt ChatGPT for more data. Our findings reveal that open-source LLMs fine-tuned on InstructCoder can significantly enhance the accuracy of code edits, exhibiting superior code-editing performance matching advanced proprietary LLMs. The datasets and the source code are publicly available at https://github.com/qishenghu/CodeInstruct."]], "1_corpus": "arxiv"}
{"tstamp": 1722223956.8024, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7fc1ca90e4f048ddb8f1a07606fe8c74", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "do you need a visa for a connecting flight in vietnam?", "0_output": [["do you need a visa for a connecting flight in vietnam?", "Meta Stackexchange\n\nQ: Stack Exchange site to ask Visa and passport related questions? Where can I ask questions about Passports and Visas? \nIs there a Stack Exchange site for this?\n\nA: travel's help centre says you can ask about visas.\n\ncustoms & border crossing, visas and other travel documents such as permits\n\nThey also have a passports tag.\nThe Expatriates Stack Exchange is also available for people living or intend to live abroad on a long-term basis.\nDo check out the help centre text of these sites before you ask to make sure your question is on topic. \n"]], "0_corpus": "stackexchange", "1_conv_id": "6798b0663e6740f8acf5d43bb61573f5", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "do you need a visa for a connecting flight in vietnam?", "1_output": [["do you need a visa for a connecting flight in vietnam?", "Stackoverflow Stackexchange\n\nQ: How to load data from splitted gzip files into redshift? Can I load data from splitted gzip files into an amazon redshift table?\nI can load data from single gzip file or from splitted text files.\nBut can I load data from splitted gzip files?\n\nA: I'm assuming here that you mean that you have multiple CSV files that are each gzipped.\nFirst, upload each file to an S3 bucket under the same prefix and delimiter.\ns3://S3_BUCKET/S3_PREFIX/file0.gz\ns3://S3_BUCKET/S3_PREFIX/file1.gz\nThen, execute the Redshift copy command:\ncopy TABLE_NAME from 's3://S3_BUCKET/S3_PREFIX' credentials 'aws_access_key_id=ACCESS_KEY;aws_secret_access_key=SECRET_KEY' csv gzip\n\n\n*\n\n*specify the S3 bucket and prefix\n\n*include credentials that have permission to read the S3 objects\n\n*ensure that the destination table already exists and has columns compatible with the CSV\n\n*specify the csv and gzip options\n\n\nAlso, it's a good idea to have your file count proportional to the number of nodes in your cluster.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722224073.6957, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "66e5be6065b649e7b34405ae69767708", "0_model_name": "text-embedding-004", "0_prompt": "How to reduce overoptimization in reinforcement learning from human feedback?", "0_output": [["How to reduce overoptimization in reinforcement learning from human feedback?", "Title: Reward Model Ensembles Help Mitigate Overoptimization\n\nAbstract: Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the \"true\" reward, these learned reward models are susceptible to overoptimization. Gao et al. (2023) studied this phenomenon in a synthetic human feedback setup with a significantly larger \"gold\" reward model acting as the true reward (instead of humans) and showed that overoptimization remains a persistent problem regardless of the size of the proxy reward model and training data used. Using a similar setup, we conduct a systematic study to evaluate the efficacy of using ensemble-based conservative optimization objectives, specifically worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for mitigating reward model overoptimization when using two optimization methods: (a) best-of-n sampling (BoN) (b) proximal policy optimization (PPO). We additionally extend the setup of Gao et al. (2023) to include 25% label noise to better mirror real-world conditions. Both with and without label noise, we find that conservative optimization practically eliminates overoptimization and improves performance by up to 70% for BoN sampling. For PPO, ensemble-based conservative optimization always reduces overoptimization and outperforms single reward model optimization. Moreover, combining it with a small KL penalty successfully prevents overoptimization at no performance cost. Overall, our results demonstrate that ensemble-based conservative optimization can effectively counter overoptimization."]], "0_corpus": "arxiv", "1_conv_id": "4e634cb1df8740f69f2a07e9fae99789", "1_model_name": "embed-english-v3.0", "1_prompt": "How to reduce overoptimization in reinforcement learning from human feedback?", "1_output": [["How to reduce overoptimization in reinforcement learning from human feedback?", "Title: Scalable Ensembling For Mitigating Reward Overoptimisation\n\nAbstract: Reinforcement Learning from Human Feedback (RLHF) has enabled significant advancements within language modeling for powerful, instruction-following models. However, the alignment of these models remains a pressing challenge as the policy tends to overfit the learned ``proxy\" reward model past an inflection point of utility as measured by a ``gold\" reward model that is more performant -- a phenomenon known as overoptimisation. Prior work has mitigated this issue by computing a pessimistic statistic over an ensemble of reward models, which is common in Offline Reinforcement Learning but incredibly costly for language models with high memory requirements, making such approaches infeasible for sufficiently large models. To this end, we propose using a shared encoder but separate linear heads. We find this leads to similar performance as the full ensemble while allowing tremendous savings in memory and time required for training for models of similar size."]], "1_corpus": "arxiv"}
{"tstamp": 1722224112.4536, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "63ac396150d24f6c8262e8040d6c76d1", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "what is the difference between domain users and authenticated users?", "0_output": [["what is the difference between domain users and authenticated users?", "Stackoverflow Stackexchange\n\nQ: Erlang: robustness against port owner's death What happens when the process which owns a spawned port dies and is restarted by the supervisor? \n\n\n*\n\n*Is there a way for the old port not to die with its owner and for the new owner to \"take over\"?\n\n*Failing that, is it possible to ensure that the spawned process terminates when its port dies?\n\nA: First, notice that you don't want the port owner to die. So move any \"dangerous\" code out to another process and make the port owner as dumb as possible. This is error-mitigation the Erlang way. Now, if that process dies, something is really bad, so in that case it may be clever to recycle the port as well. But since we moved everything out, we are counting on that not happening.\nRegarding 2, the port will send a specific message when it terminates, so you can arrange for your spawned process to gracefully detect this and terminate with it. See\nhttp://www.erlang.org/doc/reference_manual/ports.html\n"]], "0_corpus": "stackexchange", "1_conv_id": "f92109e0200e4f72b4f46ba3abd430c4", "1_model_name": "BM25", "1_prompt": "what is the difference between domain users and authenticated users?", "1_output": [["what is the difference between domain users and authenticated users?", "Stackoverflow Stackexchange\n\nQ: What are the differences between LDAP and Active Directory authentication? I have tried to search without success. \n(I do know difference between LDAP and Active Directory. Also read this What are the differences between LDAP and Active Directory?)\nAccording to my understanding, LDAP authenticates (binds) with DN (distinguish name) and password.\nE.g. CN=bob, OU=Users,DC=test,DC=com \nActive Directory authenticates with a short user name or full domain user name and password.\nE.g. bob or bob@test.com \nTherefore, LDAP does not allow to users with the same CN in the same tree (OU) but it allows the same CN in a different OU.\nActive Directory does not allow the same user name in the same domain, even the user object located in different tree of Active Directory.\nActive Directory allows the same user name in different domains.\nOther differences? \n\nA: Active Directory is a product and a directory service. LDAP is an application layer protocol and an IETF standard. Active Directory supports and implements LDAP, therefore allowing RFC-compliant LDAP clients to talk to it and perform LDAP operations such as bind, search, etc.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722224122.2784, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "a178424589e04da081763aca087bef8c", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "0_corpus": "arxiv", "1_conv_id": "30174116a3ee429cb683135e80854d8d", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
{"tstamp": 1722224143.0986, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "6e0b0281dbdd47e5987c4f7945b122f3", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "7645ad3134ce4cf88ac3eace789bf588", "1_model_name": "BM25", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Text and Code Embeddings by Contrastive Pre-Training\n\nAbstract: Text embeddings are useful features in many applications such as semantic search and computing text similarity. Previous work typically trains models customized for different use cases, varying in dataset choice, training objective and model architecture. In this work, we show that contrastive pre-training on unsupervised data at scale leads to high quality vector representations of text and code. The same unsupervised text embeddings that achieve new state-of-the-art results in linear-probe classification also display impressive semantic search capabilities and sometimes even perform competitively with fine-tuned models. On linear-probe classification accuracy averaging over 7 tasks, our best unsupervised model achieves a relative improvement of 4% and 1.8% over previous best unsupervised and supervised text embedding models respectively. The same text embeddings when evaluated on large-scale semantic search attains a relative improvement of 23.4%, 14.7%, and 10.6% over previous best unsupervised methods on MSMARCO, Natural Questions and TriviaQA benchmarks, respectively. Similarly to text embeddings, we train code embedding models on (text, code) pairs, obtaining a 20.8% relative improvement over prior best work on code search."]], "1_corpus": "arxiv"}
{"tstamp": 1722224179.2706, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "83d0fa49a2b744c093ee6c2034861706", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "0_corpus": "arxiv", "1_conv_id": "d68199ab940144d0864864688663a7aa", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
{"tstamp": 1722224206.9709, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f9e33c80a7f84346879ff3a110995c39", "0_model_name": "text-embedding-3-large", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "c38a6a7929194e63a6fae84fb9f4817d", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
{"tstamp": 1722224292.4751, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "9c20a131b0b0460caf8e9cc90271ad58", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "8fbd7a439898404aa1b56dd7fd31b164", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
{"tstamp": 1722224357.4907, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "daa307131487485b94995384043a8300", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "do crops respawn in skyrim?", "0_output": [["do crops respawn in skyrim?", "Meta Stackexchange\n\nQ: Google time and answer mistakes Take a look a this : \n\nOn the google search page I can see \n\n\n*\n\n*13 hours ago\n\n*1 answer\n\n\nWhen I actually posted the question :\n\n\n*\n\n*8 mins ago\n\n*0 answers\n\n\nThis should be fixed.\n"]], "0_corpus": "stackexchange", "1_conv_id": "789b1e47056746a7a25daa4239fc0c83", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "do crops respawn in skyrim?", "1_output": [["do crops respawn in skyrim?", "Gaming Stackexchange\n\nQ: Do Nirnroots regrow in Skyrim? In Oblivion, Nirnroots were a collectible - there was a finite number of them*, and they did not respawn.\nI've been noticing, however, that when I pick Nirnroots in Skyrim, the roots of the plant remain behind. This got me wondering...\nDo Nirnroots regrow in Skyrim?\nWhat about the Crimson Nirnroots of Blackreach?\n*discounting the respawning food-nirnroot in Frostcrag Spire\n\nA: I can confirm they grow back after 15 days also. I'm using the PS3 version and picked about 20 of them before doing the related quest. It does give you a negative number if you try dropping them and picking them back up. So, I was forced to wait 15 days to pick a total of 50 of them. \n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722224394.178, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5f396b9d69c1400891c654d8ca529105", "0_model_name": "BM25", "0_prompt": "why star trek tng ended?", "0_output": [["why star trek tng ended?", "Scifi Stackexchange\n\nQ: Why did Star Trek: The Next Generation season 2 only have 22 episodes? In the Star Trek franchise shows Star Trek: The Next Generation, Star Trek: Deep Space Nine and Star Trek: Voyager, each show almost always had 26 episodes per season. Besides Voyager's first season, the only exception in this is the second season of Star Trek: The Next Generation, which instead had 22. Why is this?\n\nA: There was a writers' strike in 1988.\nThe Writers\u00b4 Guild of America went out on strike in the spring and summer of 1988. They demanded more creative control and residuals for the shows they wrote.\nThe writers of ST:TNG at Paramount joined in despite their Sacred Duty.\nThe strike ended on August 7, but the production pipeline was adversely affected, and ST:TNG wasn't able to start airing until November 21. (This episode, \"The Child\", contained the first appearance of Whoopi Goldberg playing Guinan.)\n"]], "0_corpus": "stackexchange", "1_conv_id": "f73a0f20a5ff46999df925bea5719cea", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "why star trek tng ended?", "1_output": [["why star trek tng ended?", "Mathematica Stackexchange\n\nQ: how to download Finite ELements tutorial as PDF For long tutorials, I prefer hard copy.\nIs it possible to download from somewhere this document\nhttps://reference.wolfram.com/language/FEMDocumentation/tutorial/FiniteElementOverview.html \nas PDF file? Few years ago WRI used to have PDF files for all its documentations, but I am not able to find one for the above now.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722224426.1887, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "ae3aeff4b1ee4843b902224782c54d18", "0_model_name": "BM25", "0_prompt": "Paper on SEACrowd", "0_output": [["Paper on SEACrowd", "Title: SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages\n\nAbstract: Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, we introduce SEACrowd, a collaborative initiative that consolidates a comprehensive resource hub that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in SEA."]], "0_corpus": "arxiv", "1_conv_id": "92c627814732418a8538ae4e026e72c6", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Paper on SEACrowd", "1_output": [["Paper on SEACrowd", "Title: Carriers of \\emph{Sargassum} and mechanism for coastal inundation in the Caribbean Sea\n\nAbstract: We identify effective carriers of \\emph{Sargassum} in the Caribbean Sea and describe a mechanism for coastal choking. Revealed from satellite altimetry, the carriers of \\emph{Sargassum} are mesoscale eddies (vortices of 50-km radius or larger) with coherent material (i.e., fluid) boundaries. These are observer-independent -- \\emph{unlike} eddy boundaries identified with instantaneously closed streamlines of the altimetric sea-surface height field -- and furthermore harbor finite-time attractors for networks of elastically connected finite-size buoyant or \"inertial\" particles dragged by ocean currents and winds, a mathematical abstraction of \\emph{Sargassum} rafts. The mechanism of coastal inundation, identified using a minimal model of surface-intensified Caribbean Sea eddies, is thermal instability in the presence of bottom topography."]], "1_corpus": "arxiv"}
{"tstamp": 1722224478.384, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "f0607dbb82af41dab851d3746f6f5f71", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Paper that shows multitask finetuned language models can generalize to unseen languages", "0_output": [["Paper that shows multitask finetuned language models can generalize to unseen languages", "Title: Constructive Complexity and Artificial Reality: An Introduction\n\nAbstract: Basic problems of complex systems are outlined with an emphasis on irreducibility and dynamic many-to-many correspondences. We discuss the importance of a constructive approach to artificial reality and the significance of an internal observer."]], "0_corpus": "arxiv", "1_conv_id": "1f4529b8817f4f0b8077850d8dccbda8", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Paper that shows multitask finetuned language models can generalize to unseen languages", "1_output": [["Paper that shows multitask finetuned language models can generalize to unseen languages", "Title: The Absence of Positive Energy Bound States for a Class of Nonlocal Potentials\n\nAbstract: We generalize in this paper a theorem of Titchmarsh for the positivity of Fourier sine integrals. We apply then the theorem to derive simple conditions for the absence of positive energy bound states (bound states embedded in the continuum) for the radial Schr\\\"odinger equation with nonlocal potentials which are superposition of a local potential and separable potentials."]], "1_corpus": "arxiv"}
{"tstamp": 1722224501.6296, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "b0eecc5d9fd54f6594ba143e3606f4d2", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Paper that introduced GPT-3", "0_output": [["Paper that introduced GPT-3", "Title: a survey on GPT-3\n\nAbstract: This paper provides an introductory survey to GPT-3. We cover some of the historical development behind this technology, some of the key features of GPT-3, and discuss the machine learning model and the datasets used. We survey both academic and commercial efforts applying GPT-3 in diverse domains such as developing conversational AI chatbots, software development, creative work, domain knowledge, and business productivity. We discuss some of the challenges that GPT-3 faces such as the problems of training complexity, bias, and hallucination/incorrect answers. We also discuss the future research opportunities in this area."]], "0_corpus": "arxiv", "1_conv_id": "a278bcf9cee34726ac3b8100de0e1d2a", "1_model_name": "embed-english-v3.0", "1_prompt": "Paper that introduced GPT-3", "1_output": [["Paper that introduced GPT-3", "Title: Improving Short Text Classification With Augmented Data Using GPT-3\n\nAbstract: GPT-3 is a large-scale natural language model developed by OpenAI that can perform many different tasks, including topic classification. Although researchers claim that it requires only a small number of in-context examples to learn a task, in practice GPT-3 requires these training examples to be either of exceptional quality or a higher quantity than easily created by hand. To address this issue, this study teaches GPT-3 to classify whether a question is related to data science by augmenting a small training set with additional examples generated by GPT-3 itself. This study compares two classifiers: the GPT-3 Classification Endpoint with augmented examples, and the GPT-3 Completion Endpoint with an optimal training set chosen using a genetic algorithm. We find that while the augmented Completion Endpoint achieves upwards of 80 percent validation accuracy, using the augmented Classification Endpoint yields more consistent accuracy on unseen examples. In this way, giving large-scale machine learning models like GPT-3 the ability to propose their own additional training examples can result in improved classification performance."]], "1_corpus": "arxiv"}
{"tstamp": 1722224531.6939, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "c7374c4ffde543f99eb8379b8225a12b", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "paper showing crosslingual generalization is possible", "0_output": [["paper showing crosslingual generalization is possible", "Title: Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization\n\nAbstract: Large language models (LLMs) have exhibited considerable cross-lingual generalization abilities, whereby they implicitly transfer knowledge across languages. However, the transfer is not equally successful for all languages, especially for low-resource ones, which poses an ongoing challenge. It is unclear whether we have reached the limits of implicit cross-lingual generalization and if explicit knowledge transfer is viable. In this paper, we investigate the potential for explicitly aligning conceptual correspondence between languages to enhance cross-lingual generalization. Using the syntactic aspect of language as a testbed, our analyses of 43 languages reveal a high degree of alignability among the spaces of structural concepts within each language for both encoder-only and decoder-only LLMs. We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon. Experiments on syntactic analysis tasks show that our approach achieves competitive results with state-of-the-art methods and narrows the performance gap between languages, particularly benefiting those with limited resources."]], "0_corpus": "arxiv", "1_conv_id": "3283162f3da548e08c51faf0101b6c31", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "paper showing crosslingual generalization is possible", "1_output": [["paper showing crosslingual generalization is possible", "Title: Asymptotic Exit Location Distributions in the Stochastic Exit Problem\n\nAbstract: Consider a two-dimensional continuous-time dynamical system, with an attracting fixed point $S$. If the deterministic dynamics are perturbed by white noise (random perturbations) of strength $\\epsilon$, the system state will eventually leave the domain of attraction $\\Omega$ of $S$. We analyse the case when, as $\\epsilon\\to0$, the exit location on the boundary $\\partial\\Omega$ is increasingly concentrated near a saddle point $H$ of the deterministic dynamics. We show that the asymptotic form of the exit location distribution on $\\partial\\Omega$ is generically non-Gaussian and asymmetric, and classify the possible limiting distributions. A key role is played by a parameter $\\mu$, equal to the ratio $|\\lambda_s(H)|/\\lambda_u(H)$ of the stable and unstable eigenvalues of the linearized deterministic flow at $H$. If $\\mu<1$ then the exit location distribution is generically asymptotic as $\\epsilon\\to0$ to a Weibull distribution with shape parameter $2/\\mu$, on the $O(\\epsilon^{\\mu/2})$ length scale near $H$. If $\\mu>1$ it is generically asymptotic to a distribution on the $O(\\epsilon^{1/2})$ length scale, whose moments we compute. 
The asymmetry of the asymptotic exit location distribution is attributable to the generic presence of a `classically forbidden' region: a wedge-shaped subset of $\\Omega$ with $H$ as vertex, which is reached from $S$, in the $\\epsilon\\to0$ limit, only via `bent' (non-smooth) fluctuational paths that first pass through the vicinity of $H$. We deduce from the presence of this forbidden region that the classical Eyring formula for the small-$\\epsilon$ exponential asymptotics of the mean first exit time is generically inapplicable."]], "1_corpus": "arxiv"}
{"tstamp": 1722224559.6652, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "4a5116c2fe724927a11e34daeb171b0b", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "are all naruto shippuden episodes dubbed yet?", "0_output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "0_corpus": "stackexchange", "1_conv_id": "cb439cf0253b48a4bec18921c3e8c6db", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "are all naruto shippuden episodes dubbed yet?", "1_output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "1_corpus": "stackexchange"}