Muennighoff
committed on
Scheduled Commit
data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl
CHANGED
@@ -2,3 +2,10 @@
2 |
{"tstamp": 1722223862.4429, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "236e330aa5dd46eeb2c678b95a41260c", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Search for gravitational waves from binary black hole inspirals in LIGO data\n\nAbstract: We report on a search for gravitational waves from binary black hole inspirals in the data from the second science run of the LIGO interferometers. The search focused on binary systems with component masses between 3 and 20 solar masses. Optimally oriented binaries with distances up to 1 Mpc could be detected with efficiency of at least 90%. We found no events that could be identified as gravitational waves in the 385.6 hours of data that we searched."]], "0_corpus": "arxiv", "1_conv_id": "0ec343c3523d47eaae6a6a59d682f5f2", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Is the Commercial Mass Media Necessary, or Even Desirable, for Liberal Democracy?\n\nAbstract: Is a commercial mass media, dependent on the market for its sustenance, necessary, or even desirable, for liberal democracy? Yochai Benkler has argued that a decentralized, peer-to-peer system of communications and information is both possible with digital technology and preferable to a system based on commercial mass media. He has contended in fact that the presence of politically powerful, copyright-rich mass media imposes significant barriers to the development of peer-to-peer information-sharing networks. In contrast, I have argued that the commercial mass media play an important, and perhaps even vital, role in liberal democracy by galvanizing public opinion, serving as a watchdog against government and corporate wrongdoing, agenda-setting (which enables public discourse), and serving as a relatively trustworthy source of information. This paper seeks to push the ball forward on this issue. It first isolates and enumerates the contributions that the commercial mass media are said to make towards liberal democracy. It then briefly assesses the extent to which the commercial mass media actually fulfills these constitutive functions. It then asks whether alternative institutions might serve some or all of these functions just as well or better. In so doing, it looks both to the past and the future. First, it examines the political party-supported press that thrived in the United States through much of the 19th century. Second, it examines government-funded mass media. Third, it looks, skeptically, at possibilities for peer-to-peer sharing of information and opinion in the digital network environment. I conclude that, despite the weaknesses of commercial mass media, an information policy suitable to liberal democracy should include a plurality of types of voices, including commercial mass media."]], "1_corpus": "arxiv"}
3 |
{"tstamp": 1722223898.7505, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "65a408d68bae4872a39c85580503e046", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Search for gravitational waves from binary black hole inspirals in LIGO data\n\nAbstract: We report on a search for gravitational waves from binary black hole inspirals in the data from the second science run of the LIGO interferometers. The search focused on binary systems with component masses between 3 and 20 solar masses. Optimally oriented binaries with distances up to 1 Mpc could be detected with efficiency of at least 90%. We found no events that could be identified as gravitational waves in the 385.6 hours of data that we searched."]], "0_corpus": "arxiv", "1_conv_id": "db30f73e43a742018303a5aa939ef5b7", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: InstructCoder: Instruction Tuning Large Language Models for Code Editing\n\nAbstract: Code editing encompasses a variety of pragmatic tasks that developers deal with daily. Despite its relevance and practical usefulness, automatic code editing remains an underexplored area in the evolution of deep learning models, partly due to data scarcity. In this work, we explore the use of Large Language Models (LLMs) to edit code based on user instructions. Evaluated on a novel human-written execution-based benchmark dubbed EditEval, we found current models often struggle to fulfill the instructions. In light of this, we contribute InstructCoder, the first instruction-tuning dataset designed to adapt LLMs for general-purpose code editing, containing high-diversity code-editing tasks such as comment insertion, code optimization, and code refactoring. It consists of over 114,000 instruction-input-output triplets and covers multiple distinct code editing scenarios. The collection process starts with filtered commit data sourced from GitHub Python repositories as seeds. Subsequently, the dataset is systematically expanded through an iterative process, where both seed and generated tasks are used to prompt ChatGPT for more data. Our findings reveal that open-source LLMs fine-tuned on InstructCoder can significantly enhance the accuracy of code edits, exhibiting superior code-editing performance matching advanced proprietary LLMs. The datasets and the source code are publicly available at https://github.com/qishenghu/CodeInstruct."]], "1_corpus": "arxiv"}
4 |
{"tstamp": 1722223956.8024, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7fc1ca90e4f048ddb8f1a07606fe8c74", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "do you need a visa for a connecting flight in vietnam?", "0_output": [["do you need a visa for a connecting flight in vietnam?", "Meta Stackexchange\n\nQ: Stack Exchange site to ask Visa and passport related questions? Where can I ask questions about Passports and Visas? \nIs there a Stack Exchange site for this?\n\nA: travel's help centre says you can ask about visas.\n\ncustoms & border crossing, visas and other travel documents such as permits\n\nThey also have a passports tag.\nThe Expatriates Stack Exchange is also available for people living or intend to live abroad on a long-term basis.\nDo check out the help centre text of these sites before you ask to make sure your question is on topic. \n"]], "0_corpus": "stackexchange", "1_conv_id": "6798b0663e6740f8acf5d43bb61573f5", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "do you need a visa for a connecting flight in vietnam?", "1_output": [["do you need a visa for a connecting flight in vietnam?", "Stackoverflow Stackexchange\n\nQ: How to load data from splitted gzip files into redshift? Can I load data from splitted gzip files into an amazon redshift table?\nI can load data from single gzip file or from splitted text files.\nBut can I load data from splitted gzip files?\n\nA: I'm assuming here that you mean that you have multiple CSV files that are each gzipped.\nFirst, upload each file to an S3 bucket under the same prefix and delimiter.\ns3://S3_BUCKET/S3_PREFIX/file0.gz\ns3://S3_BUCKET/S3_PREFIX/file1.gz\nThen, execute the Redshift copy command:\ncopy TABLE_NAME from 's3://S3_BUCKET/S3_PREFIX' credentials 'aws_access_key_id=ACCESS_KEY;aws_secret_access_key=SECRET_KEY' csv gzip\n\n\n*\n\n*specify the S3 bucket and prefix\n\n*include credentials that have permission to read the S3 objects\n\n*ensure that the destination table already exists and has columns compatible with the CSV\n\n*specify the csv and gzip options\n\n\nAlso, it's a good idea to have your file count proportional to the number of nodes in your cluster.\n"]], "1_corpus": "stackexchange"}
5 | +
{"tstamp": 1722224073.6957, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "66e5be6065b649e7b34405ae69767708", "0_model_name": "text-embedding-004", "0_prompt": "How to reduce overoptimization in reinforcement learning from human feedback?", "0_output": [["How to reduce overoptimization in reinforcement learning from human feedback?", "Title: Reward Model Ensembles Help Mitigate Overoptimization\n\nAbstract: Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the \"true\" reward, these learned reward models are susceptible to overoptimization. Gao et al. (2023) studied this phenomenon in a synthetic human feedback setup with a significantly larger \"gold\" reward model acting as the true reward (instead of humans) and showed that overoptimization remains a persistent problem regardless of the size of the proxy reward model and training data used. Using a similar setup, we conduct a systematic study to evaluate the efficacy of using ensemble-based conservative optimization objectives, specifically worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for mitigating reward model overoptimization when using two optimization methods: (a) best-of-n sampling (BoN) (b) proximal policy optimization (PPO). We additionally extend the setup of Gao et al. (2023) to include 25% label noise to better mirror real-world conditions. Both with and without label noise, we find that conservative optimization practically eliminates overoptimization and improves performance by up to 70% for BoN sampling. For PPO, ensemble-based conservative optimization always reduces overoptimization and outperforms single reward model optimization. Moreover, combining it with a small KL penalty successfully prevents overoptimization at no performance cost. Overall, our results demonstrate that ensemble-based conservative optimization can effectively counter overoptimization."]], "0_corpus": "arxiv", "1_conv_id": "4e634cb1df8740f69f2a07e9fae99789", "1_model_name": "embed-english-v3.0", "1_prompt": "How to reduce overoptimization in reinforcement learning from human feedback?", "1_output": [["How to reduce overoptimization in reinforcement learning from human feedback?", "Title: Scalable Ensembling For Mitigating Reward Overoptimisation\n\nAbstract: Reinforcement Learning from Human Feedback (RLHF) has enabled significant advancements within language modeling for powerful, instruction-following models. However, the alignment of these models remains a pressing challenge as the policy tends to overfit the learned ``proxy\" reward model past an inflection point of utility as measured by a ``gold\" reward model that is more performant -- a phenomenon known as overoptimisation. Prior work has mitigated this issue by computing a pessimistic statistic over an ensemble of reward models, which is common in Offline Reinforcement Learning but incredibly costly for language models with high memory requirements, making such approaches infeasible for sufficiently large models. To this end, we propose using a shared encoder but separate linear heads. We find this leads to similar performance as the full ensemble while allowing tremendous savings in memory and time required for training for models of similar size."]], "1_corpus": "arxiv"}
6 | +
{"tstamp": 1722224112.4536, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "63ac396150d24f6c8262e8040d6c76d1", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "what is the difference between domain users and authenticated users?", "0_output": [["what is the difference between domain users and authenticated users?", "Stackoverflow Stackexchange\n\nQ: Erlang: robustness against port owner's death What happens when the process which owns a spawned port dies and is restarted by the supervisor? \n\n\n*\n\n*Is there a way for the old port not to die with its owner and for the new owner to \"take over\"?\n\n*Failing that, is it possible to ensure that the spawned process terminates when its port dies?\n\nA: First, notice that you don't want the port owner to die. So move any \"dangerous\" code out to another process and make the port owner as dumb as possible. This is error-mitigation the Erlang way. Now, if that process dies, something is really bad, so in that case it may be clever to recycle the port as well. But since we moved everything out, we are counting on that not happening.\nRegarding 2, the port will send a specific message when it terminates, so you can arrange for your spawned process to gracefully detect this and terminate with it. See\nhttp://www.erlang.org/doc/reference_manual/ports.html\n"]], "0_corpus": "stackexchange", "1_conv_id": "f92109e0200e4f72b4f46ba3abd430c4", "1_model_name": "BM25", "1_prompt": "what is the difference between domain users and authenticated users?", "1_output": [["what is the difference between domain users and authenticated users?", "Stackoverflow Stackexchange\n\nQ: What are the differences between LDAP and Active Directory authentication? I have tried to search without success. \n(I do know difference between LDAP and Active Directory. Also read this What are the differences between LDAP and Active Directory?)\nAccording to my understanding, LDAP authenticates (binds) with DN (distinguish name) and password.\nE.g. CN=bob, OU=Users,DC=test,DC=com \nActive Directory authenticates with a short user name or full domain user name and password.\nE.g. bob or bob@test.com \nTherefore, LDAP does not allow to users with the same CN in the same tree (OU) but it allows the same CN in a different OU.\nActive Directory does not allow the same user name in the same domain, even the user object located in different tree of Active Directory.\nActive Directory allows the same user name in different domains.\nOther differences? \n\nA: Active Directory is a product and a directory service. LDAP is an application layer protocol and an IETF standard. Active Directory supports and implements LDAP, therefore allowing RFC-compliant LDAP clients to talk to it and perform LDAP operations such as bind, search, etc.\n"]], "1_corpus": "stackexchange"}
7 | +
{"tstamp": 1722224122.2784, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "a178424589e04da081763aca087bef8c", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "0_corpus": "arxiv", "1_conv_id": "30174116a3ee429cb683135e80854d8d", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
8 | +
{"tstamp": 1722224143.0986, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "6e0b0281dbdd47e5987c4f7945b122f3", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "7645ad3134ce4cf88ac3eace789bf588", "1_model_name": "BM25", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Text and Code Embeddings by Contrastive Pre-Training\n\nAbstract: Text embeddings are useful features in many applications such as semantic search and computing text similarity. Previous work typically trains models customized for different use cases, varying in dataset choice, training objective and model architecture. In this work, we show that contrastive pre-training on unsupervised data at scale leads to high quality vector representations of text and code. The same unsupervised text embeddings that achieve new state-of-the-art results in linear-probe classification also display impressive semantic search capabilities and sometimes even perform competitively with fine-tuned models. On linear-probe classification accuracy averaging over 7 tasks, our best unsupervised model achieves a relative improvement of 4% and 1.8% over previous best unsupervised and supervised text embedding models respectively. The same text embeddings when evaluated on large-scale semantic search attains a relative improvement of 23.4%, 14.7%, and 10.6% over previous best unsupervised methods on MSMARCO, Natural Questions and TriviaQA benchmarks, respectively. Similarly to text embeddings, we train code embedding models on (text, code) pairs, obtaining a 20.8% relative improvement over prior best work on code search."]], "1_corpus": "arxiv"}
9 | +
{"tstamp": 1722224179.2706, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "83d0fa49a2b744c093ee6c2034861706", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "0_corpus": "arxiv", "1_conv_id": "d68199ab940144d0864864688663a7aa", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
10 | +
{"tstamp": 1722224206.9709, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f9e33c80a7f84346879ff3a110995c39", "0_model_name": "text-embedding-3-large", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "c38a6a7929194e63a6fae84fb9f4817d", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
11 | +
{"tstamp": 1722224292.4751, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "9c20a131b0b0460caf8e9cc90271ad58", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "8fbd7a439898404aa1b56dd7fd31b164", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
data/retrieval_individual-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl
CHANGED
@@ -8,3 +8,16 @@
8 |
{"tstamp": 1722223940.126, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722223916.0484, "finish": 1722223940.126, "ip": "", "conv_id": "6798b0663e6740f8acf5d43bb61573f5", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "do you need a visa for a connecting flight in vietnam?", "output": [["do you need a visa for a connecting flight in vietnam?", "Stackoverflow Stackexchange\n\nQ: How to load data from splitted gzip files into redshift? Can I load data from splitted gzip files into an amazon redshift table?\nI can load data from single gzip file or from splitted text files.\nBut can I load data from splitted gzip files?\n\nA: I'm assuming here that you mean that you have multiple CSV files that are each gzipped.\nFirst, upload each file to an S3 bucket under the same prefix and delimiter.\ns3://S3_BUCKET/S3_PREFIX/file0.gz\ns3://S3_BUCKET/S3_PREFIX/file1.gz\nThen, execute the Redshift copy command:\ncopy TABLE_NAME from 's3://S3_BUCKET/S3_PREFIX' credentials 'aws_access_key_id=ACCESS_KEY;aws_secret_access_key=SECRET_KEY' csv gzip\n\n\n*\n\n*specify the S3 bucket and prefix\n\n*include credentials that have permission to read the S3 objects\n\n*ensure that the destination table already exists and has columns compatible with the CSV\n\n*specify the csv and gzip options\n\n\nAlso, it's a good idea to have your file count proportional to the number of nodes in your cluster.\n"]], "corpus": "stackexchange"}
9 |
{"tstamp": 1722223986.4119, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722223969.2294, "finish": 1722223986.4119, "ip": "", "conv_id": "66e5be6065b649e7b34405ae69767708", "model_name": "text-embedding-004", "prompt": "How to reduce overoptimization in reinforcement learning from human feedback?", "output": [["How to reduce overoptimization in reinforcement learning from human feedback?", "Title: Reward Model Ensembles Help Mitigate Overoptimization\n\nAbstract: Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the \"true\" reward, these learned reward models are susceptible to overoptimization. Gao et al. (2023) studied this phenomenon in a synthetic human feedback setup with a significantly larger \"gold\" reward model acting as the true reward (instead of humans) and showed that overoptimization remains a persistent problem regardless of the size of the proxy reward model and training data used. Using a similar setup, we conduct a systematic study to evaluate the efficacy of using ensemble-based conservative optimization objectives, specifically worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for mitigating reward model overoptimization when using two optimization methods: (a) best-of-n sampling (BoN) (b) proximal policy optimization (PPO). We additionally extend the setup of Gao et al. (2023) to include 25% label noise to better mirror real-world conditions. Both with and without label noise, we find that conservative optimization practically eliminates overoptimization and improves performance by up to 70% for BoN sampling. For PPO, ensemble-based conservative optimization always reduces overoptimization and outperforms single reward model optimization. Moreover, combining it with a small KL penalty successfully prevents overoptimization at no performance cost. Overall, our results demonstrate that ensemble-based conservative optimization can effectively counter overoptimization."]], "corpus": "arxiv"}
10 |
{"tstamp": 1722223986.4119, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722223969.2294, "finish": 1722223986.4119, "ip": "", "conv_id": "4e634cb1df8740f69f2a07e9fae99789", "model_name": "embed-english-v3.0", "prompt": "How to reduce overoptimization in reinforcement learning from human feedback?", "output": [["How to reduce overoptimization in reinforcement learning from human feedback?", "Title: Scalable Ensembling For Mitigating Reward Overoptimisation\n\nAbstract: Reinforcement Learning from Human Feedback (RLHF) has enabled significant advancements within language modeling for powerful, instruction-following models. However, the alignment of these models remains a pressing challenge as the policy tends to overfit the learned ``proxy\" reward model past an inflection point of utility as measured by a ``gold\" reward model that is more performant -- a phenomenon known as overoptimisation. Prior work has mitigated this issue by computing a pessimistic statistic over an ensemble of reward models, which is common in Offline Reinforcement Learning but incredibly costly for language models with high memory requirements, making such approaches infeasible for sufficiently large models. To this end, we propose using a shared encoder but separate linear heads. We find this leads to similar performance as the full ensemble while allowing tremendous savings in memory and time required for training for models of similar size."]], "corpus": "arxiv"}
11 | +
{"tstamp": 1722224082.8819, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722224081.5576, "finish": 1722224082.8819, "ip": "", "conv_id": "63ac396150d24f6c8262e8040d6c76d1", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "what is the difference between domain users and authenticated users?", "output": [["what is the difference between domain users and authenticated users?", "Stackoverflow Stackexchange\n\nQ: Erlang: robustness against port owner's death What happens when the process which owns a spawned port dies and is restarted by the supervisor? \n\n\n*\n\n*Is there a way for the old port not to die with its owner and for the new owner to \"take over\"?\n\n*Failing that, is it possible to ensure that the spawned process terminates when its port dies?\n\nA: First, notice that you don't want the port owner to die. So move any \"dangerous\" code out to another process and make the port owner as dumb as possible. This is error-mitigation the Erlang way. Now, if that process dies, something is really bad, so in that case it may be clever to recycle the port as well. But since we moved everything out, we are counting on that not happening.\nRegarding 2, the port will send a specific message when it terminates, so you can arrange for your spawned process to gracefully detect this and terminate with it. See\nhttp://www.erlang.org/doc/reference_manual/ports.html\n"]], "corpus": "stackexchange"}
12 | +
{"tstamp": 1722224082.8819, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722224081.5576, "finish": 1722224082.8819, "ip": "", "conv_id": "f92109e0200e4f72b4f46ba3abd430c4", "model_name": "BM25", "prompt": "what is the difference between domain users and authenticated users?", "output": [["what is the difference between domain users and authenticated users?", "Stackoverflow Stackexchange\n\nQ: What are the differences between LDAP and Active Directory authentication? I have tried to search without success. \n(I do know difference between LDAP and Active Directory. Also read this What are the differences between LDAP and Active Directory?)\nAccording to my understanding, LDAP authenticates (binds) with DN (distinguish name) and password.\nE.g. CN=bob, OU=Users,DC=test,DC=com \nActive Directory authenticates with a short user name or full domain user name and password.\nE.g. bob or bob@test.com \nTherefore, LDAP does not allow to users with the same CN in the same tree (OU) but it allows the same CN in a different OU.\nActive Directory does not allow the same user name in the same domain, even the user object located in different tree of Active Directory.\nActive Directory allows the same user name in different domains.\nOther differences? \n\nA: Active Directory is a product and a directory service. LDAP is an application layer protocol and an IETF standard. Active Directory supports and implements LDAP, therefore allowing RFC-compliant LDAP clients to talk to it and perform LDAP operations such as bind, search, etc.\n"]], "corpus": "stackexchange"}
13 | +
{"tstamp": 1722224119.7729, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722224119.6733, "finish": 1722224119.7729, "ip": "", "conv_id": "a178424589e04da081763aca087bef8c", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "corpus": "arxiv"}
14 | +
{"tstamp": 1722224119.7729, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722224119.6733, "finish": 1722224119.7729, "ip": "", "conv_id": "30174116a3ee429cb683135e80854d8d", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722224139.4932, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722224128.1403, "finish": 1722224139.4932, "ip": "", "conv_id": "6e0b0281dbdd47e5987c4f7945b122f3", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722224139.4932, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722224128.1403, "finish": 1722224139.4932, "ip": "", "conv_id": "7645ad3134ce4cf88ac3eace789bf588", "model_name": "BM25", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Text and Code Embeddings by Contrastive Pre-Training\n\nAbstract: Text embeddings are useful features in many applications such as semantic search and computing text similarity. Previous work typically trains models customized for different use cases, varying in dataset choice, training objective and model architecture. In this work, we show that contrastive pre-training on unsupervised data at scale leads to high quality vector representations of text and code. The same unsupervised text embeddings that achieve new state-of-the-art results in linear-probe classification also display impressive semantic search capabilities and sometimes even perform competitively with fine-tuned models. On linear-probe classification accuracy averaging over 7 tasks, our best unsupervised model achieves a relative improvement of 4% and 1.8% over previous best unsupervised and supervised text embedding models respectively. The same text embeddings when evaluated on large-scale semantic search attains a relative improvement of 23.4%, 14.7%, and 10.6% over previous best unsupervised methods on MSMARCO, Natural Questions and TriviaQA benchmarks, respectively. Similarly to text embeddings, we train code embedding models on (text, code) pairs, obtaining a 20.8% relative improvement over prior best work on code search."]], "corpus": "arxiv"}
{"tstamp": 1722224166.1773, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722224149.8574, "finish": 1722224166.1773, "ip": "", "conv_id": "83d0fa49a2b744c093ee6c2034861706", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "corpus": "arxiv"}
{"tstamp": 1722224166.1773, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722224149.8574, "finish": 1722224166.1773, "ip": "", "conv_id": "d68199ab940144d0864864688663a7aa", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722224198.0558, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722224186.7354, "finish": 1722224198.0558, "ip": "", "conv_id": "f9e33c80a7f84346879ff3a110995c39", "model_name": "text-embedding-3-large", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722224198.0558, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722224186.7354, "finish": 1722224198.0558, "ip": "", "conv_id": "c38a6a7929194e63a6fae84fb9f4817d", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722224239.3833, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722224211.8761, "finish": 1722224239.3833, "ip": "", "conv_id": "9c20a131b0b0460caf8e9cc90271ad58", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722224239.3833, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722224211.8761, "finish": 1722224239.3833, "ip": "", "conv_id": "8fbd7a439898404aa1b56dd7fd31b164", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1722224314.5447, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224297.936, "finish": 1722224314.5447, "ip": "", "conv_id": "0f3a6c21731d453084b96cd37936a511", "model_name": "GritLM/GritLM-7B", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
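The records above all follow the same JSONL schema: one object per line with `task_type`, `type`, `model_name`, `prompt`, `output` (a list of `[query, document]` pairs), and `corpus` fields. As a minimal sketch of how such a log could be inspected, the snippet below tallies which paper title each embedding model retrieved. The file name `retrieval_logs.jsonl` is a placeholder rather than the actual path in this dataset, and the code assumes every non-empty line parses as a record with the fields shown above (Python 3.9+ for `str.removeprefix`).

```python
import json
from collections import Counter

# Placeholder path: point this at the JSONL log file you want to inspect.
LOG_PATH = "retrieval_logs.jsonl"


def iter_records(path):
    """Yield one parsed record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)


def title_counts_per_model(path):
    """Count, per embedding model, the titles of the documents it retrieved."""
    counts = {}
    for rec in iter_records(path):
        if rec.get("task_type") != "retrieval":
            continue
        model = rec.get("model_name", "unknown")
        # Each output entry is a [query, document] pair; the document text
        # begins with "Title: ..." followed by the abstract.
        for _query, doc in rec.get("output", []):
            title = doc.split("\n", 1)[0].removeprefix("Title: ")
            counts.setdefault(model, Counter())[title] += 1
    return counts


if __name__ == "__main__":
    for model, titles in title_counts_per_model(LOG_PATH).items():
        print(model, titles.most_common(1))
```

On records like those above, this kind of tally would show that most models retrieved the MTEB paper for the benchmark prompt, while sentence-transformers/all-MiniLM-L6-v2 returned an unrelated physics paper.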