Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask

Commit 44b7adb (verified) · 1 Parent(s): 7fe3912
Muennighoff committed: Scheduled Commit
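This commit appends records to two JSONL shards, shown as diffs below. As a minimal sketch of consuming the data (assuming the shard has been downloaded to the relative path that appears in the diff header, which is the only path given here), the battle shard can be read with the datasets library listed above:

# Minimal sketch, not part of the commit: load the battle shard with the
# Hugging Face `datasets` JSON loader. The file path is taken from the
# diff header below; download the file locally first.
from datasets import load_dataset

battles = load_dataset(
    "json",
    data_files="data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl",
    split="train",
)
print(battles.column_names)  # e.g. tstamp, task_type, type, 0_model_name, ...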
data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl CHANGED
@@ -9,3 +9,10 @@
  {"tstamp": 1722224179.2706, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "83d0fa49a2b744c093ee6c2034861706", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: Gauge fixing, BRS invariance and Ward identities for randomly stirred flows\n\nAbstract: The Galilean invariance of the Navier-Stokes equation is shown to be akin to a global gauge symmetry familiar from quantum field theory. This symmetry leads to a multiple counting of infinitely many inertial reference frames in the path integral approach to randomly stirred fluids. This problem is solved by fixing the gauge, i.e., singling out one reference frame. The gauge fixed theory has an underlying Becchi-Rouet-Stora (BRS) symmetry which leads to the Ward identity relating the exact inverse response and vertex functions. This identification of Galilean invariance as a gauge symmetry is explored in detail, for different gauge choices and by performing a rigorous examination of a discretized version of the theory. The Navier-Stokes equation is also invariant under arbitrary rectilinear frame accelerations, known as extended Galilean invariance (EGI). We gauge fix this extended symmetry and derive the generalized Ward identity that follows from the BRS invariance of the gauge-fixed theory. This new Ward identity reduces to the standard one in the limit of zero acceleration. This gauge-fixing approach unambiguously shows that Galilean invariance and EGI constrain only the zero mode of the vertex but none of the higher wavenumber modes."]], "0_corpus": "arxiv", "1_conv_id": "d68199ab940144d0864864688663a7aa", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
  {"tstamp": 1722224206.9709, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f9e33c80a7f84346879ff3a110995c39", "0_model_name": "text-embedding-3-large", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "c38a6a7929194e63a6fae84fb9f4817d", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
  {"tstamp": 1722224292.4751, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "9c20a131b0b0460caf8e9cc90271ad58", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "8fbd7a439898404aa1b56dd7fd31b164", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722224357.4907, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "daa307131487485b94995384043a8300", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "do crops respawn in skyrim?", "0_output": [["do crops respawn in skyrim?", "Meta Stackexchange\n\nQ: Google time and answer mistakes Take a look a this : \n\nOn the google search page I can see \n\n\n*\n\n*13 hours ago\n\n*1 answer\n\n\nWhen I actually posted the question :\n\n\n*\n\n*8 mins ago\n\n*0 answers\n\n\nThis should be fixed.\n"]], "0_corpus": "stackexchange", "1_conv_id": "789b1e47056746a7a25daa4239fc0c83", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "do crops respawn in skyrim?", "1_output": [["do crops respawn in skyrim?", "Gaming Stackexchange\n\nQ: Do Nirnroots regrow in Skyrim? In Oblivion, Nirnroots were a collectible - there was a finite number of them*, and they did not respawn.\nI've been noticing, however, that when I pick Nirnroots in Skyrim, the roots of the plant remain behind. This got me wondering...\nDo Nirnroots regrow in Skyrim?\nWhat about the Crimson Nirnroots of Blackreach?\n*discounting the respawning food-nirnroot in Frostcrag Spire\n\nA: I can confirm they grow back after 15 days also. I'm using the PS3 version and picked about 20 of them before doing the related quest. It does give you a negative number if you try dropping them and picking them back up. So, I was forced to wait 15 days to pick a total of 50 of them. \n"]], "1_corpus": "stackexchange"}
+ {"tstamp": 1722224394.178, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5f396b9d69c1400891c654d8ca529105", "0_model_name": "BM25", "0_prompt": "why star trek tng ended?", "0_output": [["why star trek tng ended?", "Scifi Stackexchange\n\nQ: Why did Star Trek: The Next Generation season 2 only have 22 episodes? In the Star Trek franchise shows Star Trek: The Next Generation, Star Trek: Deep Space Nine and Star Trek: Voyager, each show almost always had 26 episodes per season. Besides Voyager's first season, the only exception in this is the second season of Star Trek: The Next Generation, which instead had 22. Why is this?\n\nA: There was a writers' strike in 1988.\nThe Writers\u00b4 Guild of America went out on strike in the spring and summer of 1988. They demanded more creative control and residuals for the shows they wrote.\nThe writers of ST:TNG at Paramount joined in despite their Sacred Duty.\nThe strike ended on August 7, but the production pipeline was adversely affected, and ST:TNG wasn't able to start airing until November 21. (This episode, \"The Child\", contained the first appearance of Whoopi Goldberg playing Guinan.)\n"]], "0_corpus": "stackexchange", "1_conv_id": "f73a0f20a5ff46999df925bea5719cea", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "why star trek tng ended?", "1_output": [["why star trek tng ended?", "Mathematica Stackexchange\n\nQ: how to download Finite ELements tutorial as PDF For long tutorials, I prefer hard copy.\nIs it possible to download from somewhere this document\nhttps://reference.wolfram.com/language/FEMDocumentation/tutorial/FiniteElementOverview.html \nas PDF file? Few years ago WRI used to have PDF files for all its documentations, but I am not able to find one for the above now.\n"]], "1_corpus": "stackexchange"}
+ {"tstamp": 1722224426.1887, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "ae3aeff4b1ee4843b902224782c54d18", "0_model_name": "BM25", "0_prompt": "Paper on SEACrowd", "0_output": [["Paper on SEACrowd", "Title: SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages\n\nAbstract: Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, we introduce SEACrowd, a collaborative initiative that consolidates a comprehensive resource hub that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in SEA."]], "0_corpus": "arxiv", "1_conv_id": "92c627814732418a8538ae4e026e72c6", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Paper on SEACrowd", "1_output": [["Paper on SEACrowd", "Title: Carriers of \\emph{Sargassum} and mechanism for coastal inundation in the Caribbean Sea\n\nAbstract: We identify effective carriers of \\emph{Sargassum} in the Caribbean Sea and describe a mechanism for coastal choking. Revealed from satellite altimetry, the carriers of \\emph{Sargassum} are mesoscale eddies (vortices of 50-km radius or larger) with coherent material (i.e., fluid) boundaries. These are observer-independent -- \\emph{unlike} eddy boundaries identified with instantaneously closed streamlines of the altimetric sea-surface height field -- and furthermore harbor finite-time attractors for networks of elastically connected finite-size buoyant or \"inertial\" particles dragged by ocean currents and winds, a mathematical abstraction of \\emph{Sargassum} rafts. The mechanism of coastal inundation, identified using a minimal model of surface-intensified Caribbean Sea eddies, is thermal instability in the presence of bottom topography."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722224478.384, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "f0607dbb82af41dab851d3746f6f5f71", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Paper that shows multitask finetuned language models can generalize to unseen languages", "0_output": [["Paper that shows multitask finetuned language models can generalize to unseen languages", "Title: Constructive Complexity and Artificial Reality: An Introduction\n\nAbstract: Basic problems of complex systems are outlined with an emphasis on irreducibility and dynamic many-to-many correspondences. We discuss the importance of a constructive approach to artificial reality and the significance of an internal observer."]], "0_corpus": "arxiv", "1_conv_id": "1f4529b8817f4f0b8077850d8dccbda8", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Paper that shows multitask finetuned language models can generalize to unseen languages", "1_output": [["Paper that shows multitask finetuned language models can generalize to unseen languages", "Title: The Absence of Positive Energy Bound States for a Class of Nonlocal Potentials\n\nAbstract: We generalize in this paper a theorem of Titchmarsh for the positivity of Fourier sine integrals. We apply then the theorem to derive simple conditions for the absence of positive energy bound states (bound states embedded in the continuum) for the radial Schr\\\"odinger equation with nonlocal potentials which are superposition of a local potential and separable potentials."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722224501.6296, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "b0eecc5d9fd54f6594ba143e3606f4d2", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Paper that introduced GPT-3", "0_output": [["Paper that introduced GPT-3", "Title: a survey on GPT-3\n\nAbstract: This paper provides an introductory survey to GPT-3. We cover some of the historical development behind this technology, some of the key features of GPT-3, and discuss the machine learning model and the datasets used. We survey both academic and commercial efforts applying GPT-3 in diverse domains such as developing conversational AI chatbots, software development, creative work, domain knowledge, and business productivity. We discuss some of the challenges that GPT-3 faces such as the problems of training complexity, bias, and hallucination/incorrect answers. We also discuss the future research opportunities in this area."]], "0_corpus": "arxiv", "1_conv_id": "a278bcf9cee34726ac3b8100de0e1d2a", "1_model_name": "embed-english-v3.0", "1_prompt": "Paper that introduced GPT-3", "1_output": [["Paper that introduced GPT-3", "Title: Improving Short Text Classification With Augmented Data Using GPT-3\n\nAbstract: GPT-3 is a large-scale natural language model developed by OpenAI that can perform many different tasks, including topic classification. Although researchers claim that it requires only a small number of in-context examples to learn a task, in practice GPT-3 requires these training examples to be either of exceptional quality or a higher quantity than easily created by hand. To address this issue, this study teaches GPT-3 to classify whether a question is related to data science by augmenting a small training set with additional examples generated by GPT-3 itself. This study compares two classifiers: the GPT-3 Classification Endpoint with augmented examples, and the GPT-3 Completion Endpoint with an optimal training set chosen using a genetic algorithm. We find that while the augmented Completion Endpoint achieves upwards of 80 percent validation accuracy, using the augmented Classification Endpoint yields more consistent accuracy on unseen examples. In this way, giving large-scale machine learning models like GPT-3 the ability to propose their own additional training examples can result in improved classification performance."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722224531.6939, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "c7374c4ffde543f99eb8379b8225a12b", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "paper showing crosslingual generalization is possible", "0_output": [["paper showing crosslingual generalization is possible", "Title: Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization\n\nAbstract: Large language models (LLMs) have exhibited considerable cross-lingual generalization abilities, whereby they implicitly transfer knowledge across languages. However, the transfer is not equally successful for all languages, especially for low-resource ones, which poses an ongoing challenge. It is unclear whether we have reached the limits of implicit cross-lingual generalization and if explicit knowledge transfer is viable. In this paper, we investigate the potential for explicitly aligning conceptual correspondence between languages to enhance cross-lingual generalization. Using the syntactic aspect of language as a testbed, our analyses of 43 languages reveal a high degree of alignability among the spaces of structural concepts within each language for both encoder-only and decoder-only LLMs. We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon. Experiments on syntactic analysis tasks show that our approach achieves competitive results with state-of-the-art methods and narrows the performance gap between languages, particularly benefiting those with limited resources."]], "0_corpus": "arxiv", "1_conv_id": "3283162f3da548e08c51faf0101b6c31", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "paper showing crosslingual generalization is possible", "1_output": [["paper showing crosslingual generalization is possible", "Title: Asymptotic Exit Location Distributions in the Stochastic Exit Problem\n\nAbstract: Consider a two-dimensional continuous-time dynamical system, with an attracting fixed point $S$. If the deterministic dynamics are perturbed by white noise (random perturbations) of strength $\\epsilon$, the system state will eventually leave the domain of attraction $\\Omega$ of $S$. We analyse the case when, as $\\epsilon\\to0$, the exit location on the boundary $\\partial\\Omega$ is increasingly concentrated near a saddle point $H$ of the deterministic dynamics. We show that the asymptotic form of the exit location distribution on $\\partial\\Omega$ is generically non-Gaussian and asymmetric, and classify the possible limiting distributions. A key role is played by a parameter $\\mu$, equal to the ratio $|\\lambda_s(H)|/\\lambda_u(H)$ of the stable and unstable eigenvalues of the linearized deterministic flow at $H$. If $\\mu<1$ then the exit location distribution is generically asymptotic as $\\epsilon\\to0$ to a Weibull distribution with shape parameter $2/\\mu$, on the $O(\\epsilon^{\\mu/2})$ length scale near $H$. If $\\mu>1$ it is generically asymptotic to a distribution on the $O(\\epsilon^{1/2})$ length scale, whose moments we compute. 
The asymmetry of the asymptotic exit location distribution is attributable to the generic presence of a `classically forbidden' region: a wedge-shaped subset of $\\Omega$ with $H$ as vertex, which is reached from $S$, in the $\\epsilon\\to0$ limit, only via `bent' (non-smooth) fluctuational paths that first pass through the vicinity of $H$. We deduce from the presence of this forbidden region that the classical Eyring formula for the small-$\\epsilon$ exponential asymptotics of the mean first exit time is generically inapplicable."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722224559.6652, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "4a5116c2fe724927a11e34daeb171b0b", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "are all naruto shippuden episodes dubbed yet?", "0_output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "0_corpus": "stackexchange", "1_conv_id": "cb439cf0253b48a4bec18921c3e8c6db", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "are all naruto shippuden episodes dubbed yet?", "1_output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "1_corpus": "stackexchange"}
data/retrieval_individual-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl CHANGED
@@ -21,3 +21,19 @@
  {"tstamp": 1722224239.3833, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722224211.8761, "finish": 1722224239.3833, "ip": "", "conv_id": "9c20a131b0b0460caf8e9cc90271ad58", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
  {"tstamp": 1722224239.3833, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722224211.8761, "finish": 1722224239.3833, "ip": "", "conv_id": "8fbd7a439898404aa1b56dd7fd31b164", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
  {"tstamp": 1722224314.5447, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224297.936, "finish": 1722224314.5447, "ip": "", "conv_id": "0f3a6c21731d453084b96cd37936a511", "model_name": "GritLM/GritLM-7B", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
+ {"tstamp": 1722224324.8398, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224324.706, "finish": 1722224324.8398, "ip": "", "conv_id": "08b6460c6346418a838ff449ed4c08ef", "model_name": "GritLM/GritLM-7B", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
+ {"tstamp": 1722224330.1257, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224329.9681, "finish": 1722224330.1257, "ip": "", "conv_id": "909cb447bc7143aaab3a3aa9c3d760de", "model_name": "GritLM/GritLM-7B", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
+ {"tstamp": 1722224351.7536, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722224349.5273, "finish": 1722224351.7536, "ip": "", "conv_id": "daa307131487485b94995384043a8300", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "do crops respawn in skyrim?", "output": [["do crops respawn in skyrim?", "Meta Stackexchange\n\nQ: Google time and answer mistakes Take a look a this : \n\nOn the google search page I can see \n\n\n*\n\n*13 hours ago\n\n*1 answer\n\n\nWhen I actually posted the question :\n\n\n*\n\n*8 mins ago\n\n*0 answers\n\n\nThis should be fixed.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722224351.7536, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224349.5273, "finish": 1722224351.7536, "ip": "", "conv_id": "789b1e47056746a7a25daa4239fc0c83", "model_name": "GritLM/GritLM-7B", "prompt": "do crops respawn in skyrim?", "output": [["do crops respawn in skyrim?", "Gaming Stackexchange\n\nQ: Do Nirnroots regrow in Skyrim? In Oblivion, Nirnroots were a collectible - there was a finite number of them*, and they did not respawn.\nI've been noticing, however, that when I pick Nirnroots in Skyrim, the roots of the plant remain behind. This got me wondering...\nDo Nirnroots regrow in Skyrim?\nWhat about the Crimson Nirnroots of Blackreach?\n*discounting the respawning food-nirnroot in Frostcrag Spire\n\nA: I can confirm they grow back after 15 days also. I'm using the PS3 version and picked about 20 of them before doing the related quest. It does give you a negative number if you try dropping them and picking them back up. So, I was forced to wait 15 days to pick a total of 50 of them. \n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722224385.9413, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722224377.0352, "finish": 1722224385.9413, "ip": "", "conv_id": "5f396b9d69c1400891c654d8ca529105", "model_name": "BM25", "prompt": "why star trek tng ended?", "output": [["why star trek tng ended?", "Scifi Stackexchange\n\nQ: Why did Star Trek: The Next Generation season 2 only have 22 episodes? In the Star Trek franchise shows Star Trek: The Next Generation, Star Trek: Deep Space Nine and Star Trek: Voyager, each show almost always had 26 episodes per season. Besides Voyager's first season, the only exception in this is the second season of Star Trek: The Next Generation, which instead had 22. Why is this?\n\nA: There was a writers' strike in 1988.\nThe Writers\u00b4 Guild of America went out on strike in the spring and summer of 1988. They demanded more creative control and residuals for the shows they wrote.\nThe writers of ST:TNG at Paramount joined in despite their Sacred Duty.\nThe strike ended on August 7, but the production pipeline was adversely affected, and ST:TNG wasn't able to start airing until November 21. (This episode, \"The Child\", contained the first appearance of Whoopi Goldberg playing Guinan.)\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722224385.9413, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722224377.0352, "finish": 1722224385.9413, "ip": "", "conv_id": "f73a0f20a5ff46999df925bea5719cea", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "why star trek tng ended?", "output": [["why star trek tng ended?", "Mathematica Stackexchange\n\nQ: how to download Finite ELements tutorial as PDF For long tutorials, I prefer hard copy.\nIs it possible to download from somewhere this document\nhttps://reference.wolfram.com/language/FEMDocumentation/tutorial/FiniteElementOverview.html \nas PDF file? Few years ago WRI used to have PDF files for all its documentations, but I am not able to find one for the above now.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722224422.78, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722224422.6325, "finish": 1722224422.78, "ip": "", "conv_id": "ae3aeff4b1ee4843b902224782c54d18", "model_name": "BM25", "prompt": "Paper on SEACrowd", "output": [["Paper on SEACrowd", "Title: SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages\n\nAbstract: Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, we introduce SEACrowd, a collaborative initiative that consolidates a comprehensive resource hub that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in SEA."]], "corpus": "arxiv"}
+ {"tstamp": 1722224422.78, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722224422.6325, "finish": 1722224422.78, "ip": "", "conv_id": "92c627814732418a8538ae4e026e72c6", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Paper on SEACrowd", "output": [["Paper on SEACrowd", "Title: Carriers of \\emph{Sargassum} and mechanism for coastal inundation in the Caribbean Sea\n\nAbstract: We identify effective carriers of \\emph{Sargassum} in the Caribbean Sea and describe a mechanism for coastal choking. Revealed from satellite altimetry, the carriers of \\emph{Sargassum} are mesoscale eddies (vortices of 50-km radius or larger) with coherent material (i.e., fluid) boundaries. These are observer-independent -- \\emph{unlike} eddy boundaries identified with instantaneously closed streamlines of the altimetric sea-surface height field -- and furthermore harbor finite-time attractors for networks of elastically connected finite-size buoyant or \"inertial\" particles dragged by ocean currents and winds, a mathematical abstraction of \\emph{Sargassum} rafts. The mechanism of coastal inundation, identified using a minimal model of surface-intensified Caribbean Sea eddies, is thermal instability in the presence of bottom topography."]], "corpus": "arxiv"}
+ {"tstamp": 1722224472.5842, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722224452.2343, "finish": 1722224472.5842, "ip": "", "conv_id": "f0607dbb82af41dab851d3746f6f5f71", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Paper that shows multitask finetuned language models can generalize to unseen languages", "output": [["Paper that shows multitask finetuned language models can generalize to unseen languages", "Title: Constructive Complexity and Artificial Reality: An Introduction\n\nAbstract: Basic problems of complex systems are outlined with an emphasis on irreducibility and dynamic many-to-many correspondences. We discuss the importance of a constructive approach to artificial reality and the significance of an internal observer."]], "corpus": "arxiv"}
+ {"tstamp": 1722224472.5842, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722224452.2343, "finish": 1722224472.5842, "ip": "", "conv_id": "1f4529b8817f4f0b8077850d8dccbda8", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Paper that shows multitask finetuned language models can generalize to unseen languages", "output": [["Paper that shows multitask finetuned language models can generalize to unseen languages", "Title: The Absence of Positive Energy Bound States for a Class of Nonlocal Potentials\n\nAbstract: We generalize in this paper a theorem of Titchmarsh for the positivity of Fourier sine integrals. We apply then the theorem to derive simple conditions for the absence of positive energy bound states (bound states embedded in the continuum) for the radial Schr\\\"odinger equation with nonlocal potentials which are superposition of a local potential and separable potentials."]], "corpus": "arxiv"}
+ {"tstamp": 1722224493.6843, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722224493.4669, "finish": 1722224493.6843, "ip": "", "conv_id": "b0eecc5d9fd54f6594ba143e3606f4d2", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Paper that introduced GPT-3", "output": [["Paper that introduced GPT-3", "Title: a survey on GPT-3\n\nAbstract: This paper provides an introductory survey to GPT-3. We cover some of the historical development behind this technology, some of the key features of GPT-3, and discuss the machine learning model and the datasets used. We survey both academic and commercial efforts applying GPT-3 in diverse domains such as developing conversational AI chatbots, software development, creative work, domain knowledge, and business productivity. We discuss some of the challenges that GPT-3 faces such as the problems of training complexity, bias, and hallucination/incorrect answers. We also discuss the future research opportunities in this area."]], "corpus": "arxiv"}
+ {"tstamp": 1722224493.6843, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722224493.4669, "finish": 1722224493.6843, "ip": "", "conv_id": "a278bcf9cee34726ac3b8100de0e1d2a", "model_name": "embed-english-v3.0", "prompt": "Paper that introduced GPT-3", "output": [["Paper that introduced GPT-3", "Title: Improving Short Text Classification With Augmented Data Using GPT-3\n\nAbstract: GPT-3 is a large-scale natural language model developed by OpenAI that can perform many different tasks, including topic classification. Although researchers claim that it requires only a small number of in-context examples to learn a task, in practice GPT-3 requires these training examples to be either of exceptional quality or a higher quantity than easily created by hand. To address this issue, this study teaches GPT-3 to classify whether a question is related to data science by augmenting a small training set with additional examples generated by GPT-3 itself. This study compares two classifiers: the GPT-3 Classification Endpoint with augmented examples, and the GPT-3 Completion Endpoint with an optimal training set chosen using a genetic algorithm. We find that while the augmented Completion Endpoint achieves upwards of 80 percent validation accuracy, using the augmented Classification Endpoint yields more consistent accuracy on unseen examples. In this way, giving large-scale machine learning models like GPT-3 the ability to propose their own additional training examples can result in improved classification performance."]], "corpus": "arxiv"}
+ {"tstamp": 1722224521.4641, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722224519.3158, "finish": 1722224521.4641, "ip": "", "conv_id": "c7374c4ffde543f99eb8379b8225a12b", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "paper showing crosslingual generalization is possible", "output": [["paper showing crosslingual generalization is possible", "Title: Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization\n\nAbstract: Large language models (LLMs) have exhibited considerable cross-lingual generalization abilities, whereby they implicitly transfer knowledge across languages. However, the transfer is not equally successful for all languages, especially for low-resource ones, which poses an ongoing challenge. It is unclear whether we have reached the limits of implicit cross-lingual generalization and if explicit knowledge transfer is viable. In this paper, we investigate the potential for explicitly aligning conceptual correspondence between languages to enhance cross-lingual generalization. Using the syntactic aspect of language as a testbed, our analyses of 43 languages reveal a high degree of alignability among the spaces of structural concepts within each language for both encoder-only and decoder-only LLMs. We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon. Experiments on syntactic analysis tasks show that our approach achieves competitive results with state-of-the-art methods and narrows the performance gap between languages, particularly benefiting those with limited resources."]], "corpus": "arxiv"}
+ {"tstamp": 1722224521.4641, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722224519.3158, "finish": 1722224521.4641, "ip": "", "conv_id": "3283162f3da548e08c51faf0101b6c31", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "paper showing crosslingual generalization is possible", "output": [["paper showing crosslingual generalization is possible", "Title: Asymptotic Exit Location Distributions in the Stochastic Exit Problem\n\nAbstract: Consider a two-dimensional continuous-time dynamical system, with an attracting fixed point $S$. If the deterministic dynamics are perturbed by white noise (random perturbations) of strength $\\epsilon$, the system state will eventually leave the domain of attraction $\\Omega$ of $S$. We analyse the case when, as $\\epsilon\\to0$, the exit location on the boundary $\\partial\\Omega$ is increasingly concentrated near a saddle point $H$ of the deterministic dynamics. We show that the asymptotic form of the exit location distribution on $\\partial\\Omega$ is generically non-Gaussian and asymmetric, and classify the possible limiting distributions. A key role is played by a parameter $\\mu$, equal to the ratio $|\\lambda_s(H)|/\\lambda_u(H)$ of the stable and unstable eigenvalues of the linearized deterministic flow at $H$. If $\\mu<1$ then the exit location distribution is generically asymptotic as $\\epsilon\\to0$ to a Weibull distribution with shape parameter $2/\\mu$, on the $O(\\epsilon^{\\mu/2})$ length scale near $H$. If $\\mu>1$ it is generically asymptotic to a distribution on the $O(\\epsilon^{1/2})$ length scale, whose moments we compute. The asymmetry of the asymptotic exit location distribution is attributable to the generic presence of a `classically forbidden' region: a wedge-shaped subset of $\\Omega$ with $H$ as vertex, which is reached from $S$, in the $\\epsilon\\to0$ limit, only via `bent' (non-smooth) fluctuational paths that first pass through the vicinity of $H$. We deduce from the presence of this forbidden region that the classical Eyring formula for the small-$\\epsilon$ exponential asymptotics of the mean first exit time is generically inapplicable."]], "corpus": "arxiv"}
+ {"tstamp": 1722224554.1283, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224545.2204, "finish": 1722224554.1283, "ip": "", "conv_id": "4a5116c2fe724927a11e34daeb171b0b", "model_name": "GritLM/GritLM-7B", "prompt": "are all naruto shippuden episodes dubbed yet?", "output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722224554.1283, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722224545.2204, "finish": 1722224554.1283, "ip": "", "conv_id": "cb439cf0253b48a4bec18921c3e8c6db", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "are all naruto shippuden episodes dubbed yet?", "output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "corpus": "stackexchange"}