Dataset: mteb · Modalities: Tabular, Text · Formats: json · Libraries: Datasets, Dask
Scheduled Commit — committed by Muennighoff
Commit 965af29 · 1 parent: 08b6485
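Both files touched by this commit are JSON Lines logs: one arena record per line. As a minimal sketch of how they can be read, the snippet below loads the battle log with the Hugging Face datasets library (listed above); the file path is the one from this diff, and reading it with pandas.read_json(lines=True) would work the same way.

# Minimal sketch: load one JSONL log from this commit with the
# `datasets` library. The path is the first file in this diff.
from datasets import load_dataset

battles = load_dataset(
    "json",
    data_files="data/retrieval_battle-cc11c9ee-fc93-4cd6-a4b5-246a7f21038c.jsonl",
    split="train",
)
print(battles[0]["type"], battles[0]["0_model_name"], battles[0]["1_model_name"])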
data/retrieval_battle-cc11c9ee-fc93-4cd6-a4b5-246a7f21038c.jsonl CHANGED
@@ -24,3 +24,4 @@
  {"tstamp": 1722325274.4844, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "8fd0f41e8c554c77a868abae1d83248a", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Which programming paradigm supposedly eliminates side-effects?", "0_output": [["Which programming paradigm supposedly eliminates side-effects?", "Stackoverflow Stackexchange\n\nQ: I want to call it side-effect programming, but there's gotta be a better term for this What is the name of the following programming paradigm:\nCode is executed based on a set of logical tests resolving to true (the clause). The clause is comprised of operators and operands. Each operand is a value/object.\nInstead of evaluating the clause explicitly, as is done in imperative languages (e.g., normal flow control like if(){}), the clause is declared and bound to the resulting code. When the clause is satisfied, at any point in the future, the code will execute. \nSo, it's basically a dependency tree that re-evaluates whether the code should execute whenever a dependency changes state.\nE.g.,\nwhen(a && b && c < 3 && d.changes())\n{\n runThisCode();\n}\n\nI'm looking for a more formal name and definition, and I haven't come up with anything after searching for it. It's somewhere between declarative and imperative, but I've never seen a language or paradigm that lets one do this.\nThanks,\nSean\n\nA: Maybe it is dataflow programming? Or reactive programming?\n"]], "0_corpus": "stackexchange", "1_conv_id": "7a4eb2fc2e1c49adb1cf34025bab88e4", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Which programming paradigm supposedly eliminates side-effects?", "1_output": [["Which programming paradigm supposedly eliminates side-effects?", "Stackoverflow Stackexchange\n\nQ: I want to call it side-effect programming, but there's gotta be a better term for this What is the name of the following programming paradigm:\nCode is executed based on a set of logical tests resolving to true (the clause). The clause is comprised of operators and operands. Each operand is a value/object.\nInstead of evaluating the clause explicitly, as is done in imperative languages (e.g., normal flow control like if(){}), the clause is declared and bound to the resulting code. When the clause is satisfied, at any point in the future, the code will execute. \nSo, it's basically a dependency tree that re-evaluates whether the code should execute whenever a dependency changes state.\nE.g.,\nwhen(a && b && c < 3 && d.changes())\n{\n runThisCode();\n}\n\nI'm looking for a more formal name and definition, and I haven't come up with anything after searching for it. It's somewhere between declarative and imperative, but I've never seen a language or paradigm that lets one do this.\nThanks,\nSean\n\nA: Maybe it is dataflow programming? Or reactive programming?\n"]], "1_corpus": "stackexchange"}
  {"tstamp": 1722330580.3861, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "343e583202b94432845a84d92016e273", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "late interaction retrieval engine", "0_output": [["late interaction retrieval engine", "Title: Improving Out-of-Distribution Generalization of Neural Rerankers with Contextualized Late Interaction\n\nAbstract: Recent progress in information retrieval finds that embedding query and document representation into multi-vector yields a robust bi-encoder retriever on out-of-distribution datasets. In this paper, we explore whether late interaction, the simplest form of multi-vector, is also helpful to neural rerankers that only use the [CLS] vector to compute the similarity score. Although intuitively, the attention mechanism of rerankers at the previous layers already gathers the token-level information, we find adding late interaction still brings an extra 5% improvement in average on out-of-distribution datasets, with little increase in latency and no degradation in in-domain effectiveness. Through extensive experiments and analysis, we show that the finding is consistent across different model sizes and first-stage retrievers of diverse natures and that the improvement is more prominent on longer queries."]], "0_corpus": "arxiv", "1_conv_id": "5305603bc27e4f5e9c54451f4c2e4662", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "late interaction retrieval engine", "1_output": [["late interaction retrieval engine", "Title: PLAID: An Efficient Engine for Late Interaction Retrieval\n\nAbstract: Pre-trained language models are increasingly important components across multiple information retrieval (IR) paradigms. Late interaction, introduced with the ColBERT model and recently refined in ColBERTv2, is a popular paradigm that holds state-of-the-art status across many benchmarks. To dramatically speed up the search latency of late interaction, we introduce the Performance-optimized Late Interaction Driver (PLAID). Without impacting quality, PLAID swiftly eliminates low-scoring passages using a novel centroid interaction mechanism that treats every passage as a lightweight bag of centroids. PLAID uses centroid interaction as well as centroid pruning, a mechanism for sparsifying the bag of centroids, within a highly-optimized engine to reduce late interaction search latency by up to 7$\\times$ on a GPU and 45$\\times$ on a CPU against vanilla ColBERTv2, while continuing to deliver state-of-the-art retrieval quality. This allows the PLAID engine with ColBERTv2 to achieve latency of tens of milliseconds on a GPU and tens or just few hundreds of milliseconds on a CPU at large scale, even at the largest scales we evaluate with 140M passages."]], "1_corpus": "arxiv"}
  {"tstamp": 1722330643.9035, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "f57e49c925ec4e97b7cf34fa8fae1544", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "retrieval system that uses an inverted index with lexically matched dense token embeddings", "0_output": [["retrieval system that uses an inverted index with lexically matched dense token embeddings", "Title: COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List\n\nAbstract: Classical information retrieval systems such as BM25 rely on exact lexical match and carry out search efficiently with inverted list index. Recent neural IR models shifts towards soft semantic matching all query document terms, but they lose the computation efficiency of exact match systems. This paper presents COIL, a contextualized exact match retrieval architecture that brings semantic lexical matching. COIL scoring is based on overlapping query document tokens' contextualized representations. The new architecture stores contextualized token representations in inverted lists, bringing together the efficiency of exact match and the representation power of deep language models. Our experimental results show COIL outperforms classical lexical retrievers and state-of-the-art deep LM retrievers with similar or smaller latency."]], "0_corpus": "arxiv", "1_conv_id": "533774b5f05b4ce9a1513bb1b034d6cc", "1_model_name": "embed-english-v3.0", "1_prompt": "retrieval system that uses an inverted index with lexically matched dense token embeddings", "1_output": [["retrieval system that uses an inverted index with lexically matched dense token embeddings", "Title: Efficient Inverted Indexes for Approximate Retrieval over Learned Sparse Representations\n\nAbstract: Learned sparse representations form an attractive class of contextual embeddings for text retrieval. That is so because they are effective models of relevance and are interpretable by design. Despite their apparent compatibility with inverted indexes, however, retrieval over sparse embeddings remains challenging. That is due to the distributional differences between learned embeddings and term frequency-based lexical models of relevance such as BM25. Recognizing this challenge, a great deal of research has gone into, among other things, designing retrieval algorithms tailored to the properties of learned sparse representations, including approximate retrieval systems. In fact, this task featured prominently in the latest BigANN Challenge at NeurIPS 2023, where approximate algorithms were evaluated on a large benchmark dataset by throughput and recall. In this work, we propose a novel organization of the inverted index that enables fast yet effective approximate retrieval over learned sparse embeddings. Our approach organizes inverted lists into geometrically-cohesive blocks, each equipped with a summary vector. During query processing, we quickly determine if a block must be evaluated using the summaries. As we show experimentally, single-threaded query processing using our method, Seismic, reaches sub-millisecond per-query latency on various sparse embeddings of the MS MARCO dataset while maintaining high recall. Our results indicate that Seismic is one to two orders of magnitude faster than state-of-the-art inverted index-based solutions and further outperforms the winning (graph-based) submissions to the BigANN Challenge by a significant margin."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722330702.3173, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "46b6675f1fc1409db4ccac33e39d2053", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "What are gyros sandwiches commonly called in Berlin", "0_output": [["What are gyros sandwiches commonly called in Berlin", "Gyros\n\nGyros, sometimes anglicized as a gyro (; , ) in some regions, is meat cooked on a vertical rotisserie, then sliced and served wrapped or stuffed in pita bread, along with other ingredients such as tomato, onion, fried potatoes, and tzatziki. In Greece, it is normally made with pork or sometimes with chicken, whilst beef and lamb are also used in other countries.\nName\nThe name comes from the Greek (, 'circle' or 'turn'). It is a calque of the Turkish , from , also meaning \"turn\".\nIn Athens and other parts of southern Greece, the skewered meat dish elsewhere called souvlaki is known as kalamaki, while souvlaki is a term used generally for gyros, and similar dishes.\nIn Greek, \"gyros\" is a nominative singular noun, but the final 's' is often interpreted in English usage as plural, leading to the singular back-formation \"gyro\". The Greek pronunciation is , though some English speakers pronounce it .\nSimilar dishes\nGyros is made in a similar manner to other dishes such as the Arab shawarma, Canadian donair, Mexican al pastor, and the Turkish döner kebab."]], "0_corpus": "wikipedia", "1_conv_id": "c551a862edc9498984544ee6c52b863e", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "What are gyros sandwiches commonly called in Berlin", "1_output": [["What are gyros sandwiches commonly called in Berlin", "Berlin\n\nBerlin is the capital and largest city of Germany, both by area and by population. Its more than 3.85 million inhabitants make it the European Union's most populous city, as measured by population within city limits having gained this status after the United Kingdom's, and thus London's, departure from the European Union. Simultaneously, the city is one of the states of Germany, and is the third smallest state in the country in terms of area. Berlin is surrounded by the state of Brandenburg, and Brandenburg's capital Potsdam is nearby. The urban area of Berlin has a population of over 4.5 million and is therefore the most populous urban area in Germany. The Berlin-Brandenburg capital region has around 6.2 million inhabitants and is Germany's second-largest metropolitan region after the Rhine-Ruhr region, and the sixth-biggest metropolitan region by GDP in the European Union.\nBerlin was built along the banks of the Spree river, which flows into the Havel in the western borough of Spandau. The city incorporates lakes in the western and southeastern boroughs, the largest of which is Müggelsee. About one-third of the city's area is composed of forests, parks and gardens, rivers, canals, and lakes."]], "1_corpus": "wikipedia"}
data/retrieval_individual-cc11c9ee-fc93-4cd6-a4b5-246a7f21038c.jsonl CHANGED
@@ -68,3 +68,7 @@
  {"tstamp": 1722330571.4973, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722330571.2712, "finish": 1722330571.4973, "ip": "", "conv_id": "5305603bc27e4f5e9c54451f4c2e4662", "model_name": "GritLM/GritLM-7B", "prompt": "late interaction retrieval engine", "output": [["late interaction retrieval engine", "Title: PLAID: An Efficient Engine for Late Interaction Retrieval\n\nAbstract: Pre-trained language models are increasingly important components across multiple information retrieval (IR) paradigms. Late interaction, introduced with the ColBERT model and recently refined in ColBERTv2, is a popular paradigm that holds state-of-the-art status across many benchmarks. To dramatically speed up the search latency of late interaction, we introduce the Performance-optimized Late Interaction Driver (PLAID). Without impacting quality, PLAID swiftly eliminates low-scoring passages using a novel centroid interaction mechanism that treats every passage as a lightweight bag of centroids. PLAID uses centroid interaction as well as centroid pruning, a mechanism for sparsifying the bag of centroids, within a highly-optimized engine to reduce late interaction search latency by up to 7$\\times$ on a GPU and 45$\\times$ on a CPU against vanilla ColBERTv2, while continuing to deliver state-of-the-art retrieval quality. This allows the PLAID engine with ColBERTv2 to achieve latency of tens of milliseconds on a GPU and tens or just few hundreds of milliseconds on a CPU at large scale, even at the largest scales we evaluate with 140M passages."]], "corpus": "arxiv"}
  {"tstamp": 1722330633.1919, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722330632.9349, "finish": 1722330633.1919, "ip": "", "conv_id": "f57e49c925ec4e97b7cf34fa8fae1544", "model_name": "GritLM/GritLM-7B", "prompt": "retrieval system that uses an inverted index with lexically matched dense token embeddings", "output": [["retrieval system that uses an inverted index with lexically matched dense token embeddings", "Title: COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List\n\nAbstract: Classical information retrieval systems such as BM25 rely on exact lexical match and carry out search efficiently with inverted list index. Recent neural IR models shifts towards soft semantic matching all query document terms, but they lose the computation efficiency of exact match systems. This paper presents COIL, a contextualized exact match retrieval architecture that brings semantic lexical matching. COIL scoring is based on overlapping query document tokens' contextualized representations. The new architecture stores contextualized token representations in inverted lists, bringing together the efficiency of exact match and the representation power of deep language models. Our experimental results show COIL outperforms classical lexical retrievers and state-of-the-art deep LM retrievers with similar or smaller latency."]], "corpus": "arxiv"}
  {"tstamp": 1722330633.1919, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722330632.9349, "finish": 1722330633.1919, "ip": "", "conv_id": "533774b5f05b4ce9a1513bb1b034d6cc", "model_name": "embed-english-v3.0", "prompt": "retrieval system that uses an inverted index with lexically matched dense token embeddings", "output": [["retrieval system that uses an inverted index with lexically matched dense token embeddings", "Title: Efficient Inverted Indexes for Approximate Retrieval over Learned Sparse Representations\n\nAbstract: Learned sparse representations form an attractive class of contextual embeddings for text retrieval. That is so because they are effective models of relevance and are interpretable by design. Despite their apparent compatibility with inverted indexes, however, retrieval over sparse embeddings remains challenging. That is due to the distributional differences between learned embeddings and term frequency-based lexical models of relevance such as BM25. Recognizing this challenge, a great deal of research has gone into, among other things, designing retrieval algorithms tailored to the properties of learned sparse representations, including approximate retrieval systems. In fact, this task featured prominently in the latest BigANN Challenge at NeurIPS 2023, where approximate algorithms were evaluated on a large benchmark dataset by throughput and recall. In this work, we propose a novel organization of the inverted index that enables fast yet effective approximate retrieval over learned sparse embeddings. Our approach organizes inverted lists into geometrically-cohesive blocks, each equipped with a summary vector. During query processing, we quickly determine if a block must be evaluated using the summaries. As we show experimentally, single-threaded query processing using our method, Seismic, reaches sub-millisecond per-query latency on various sparse embeddings of the MS MARCO dataset while maintaining high recall. Our results indicate that Seismic is one to two orders of magnitude faster than state-of-the-art inverted index-based solutions and further outperforms the winning (graph-based) submissions to the BigANN Challenge by a significant margin."]], "corpus": "arxiv"}
+ {"tstamp": 1722330681.8431, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722330681.5247, "finish": 1722330681.8431, "ip": "", "conv_id": "46b6675f1fc1409db4ccac33e39d2053", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "What are gyros sandwiches commonly called in Berlin", "output": [["What are gyros sandwiches commonly called in Berlin", "Gyros\n\nGyros, sometimes anglicized as a gyro (; , ) in some regions, is meat cooked on a vertical rotisserie, then sliced and served wrapped or stuffed in pita bread, along with other ingredients such as tomato, onion, fried potatoes, and tzatziki. In Greece, it is normally made with pork or sometimes with chicken, whilst beef and lamb are also used in other countries.\nName\nThe name comes from the Greek (, 'circle' or 'turn'). It is a calque of the Turkish , from , also meaning \"turn\".\nIn Athens and other parts of southern Greece, the skewered meat dish elsewhere called souvlaki is known as kalamaki, while souvlaki is a term used generally for gyros, and similar dishes.\nIn Greek, \"gyros\" is a nominative singular noun, but the final 's' is often interpreted in English usage as plural, leading to the singular back-formation \"gyro\". The Greek pronunciation is , though some English speakers pronounce it .\nSimilar dishes\nGyros is made in a similar manner to other dishes such as the Arab shawarma, Canadian donair, Mexican al pastor, and the Turkish döner kebab."]], "corpus": "wikipedia"}
+ {"tstamp": 1722330681.8431, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722330681.5247, "finish": 1722330681.8431, "ip": "", "conv_id": "c551a862edc9498984544ee6c52b863e", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "What are gyros sandwiches commonly called in Berlin", "output": [["What are gyros sandwiches commonly called in Berlin", "Berlin\n\nBerlin is the capital and largest city of Germany, both by area and by population. Its more than 3.85 million inhabitants make it the European Union's most populous city, as measured by population within city limits having gained this status after the United Kingdom's, and thus London's, departure from the European Union. Simultaneously, the city is one of the states of Germany, and is the third smallest state in the country in terms of area. Berlin is surrounded by the state of Brandenburg, and Brandenburg's capital Potsdam is nearby. The urban area of Berlin has a population of over 4.5 million and is therefore the most populous urban area in Germany. The Berlin-Brandenburg capital region has around 6.2 million inhabitants and is Germany's second-largest metropolitan region after the Rhine-Ruhr region, and the sixth-biggest metropolitan region by GDP in the European Union.\nBerlin was built along the banks of the Spree river, which flows into the Havel in the western borough of Spandau. The city incorporates lakes in the western and southeastern boroughs, the largest of which is Müggelsee. About one-third of the city's area is composed of forests, parks and gardens, rivers, canals, and lakes."]], "corpus": "wikipedia"}
+ {"tstamp": 1722330729.6076, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722330729.4064, "finish": 1722330729.6076, "ip": "", "conv_id": "004df53316f44ed484e7cb2752ed51d7", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Who was the most recent ACM turing award winner?", "output": [["Who was the most recent ACM turing award winner?", "Turing Award\n\nThe ACM A. M. Turing Award is an annual prize given by the Association for Computing Machinery (ACM) for contributions of lasting and major technical importance to computer science. It is generally recognized as the highest distinction in the field of computer science and is often referred to as the \"Nobel Prize of Computing\".\nThe award is named after Alan Turing, who was a British mathematician and reader in mathematics at the University of Manchester. Turing is often credited as being the founder of theoretical computer science and artificial intelligence, and a key contributor to the Allied cryptanalysis of the Enigma cipher during World War II.\nFrom 2007 to 2013, the award was accompanied by a prize of , with financial support provided by Intel and Google. Since 2014, the award has been accompanied by a prize of  million, with financial support provided by Google.\nThe first recipient, in 1966, was Alan Perlis, of Carnegie Mellon University. The youngest recipient was Donald Knuth who won in 1974, at the age of 36, while the oldest recipient was Alfred Aho who won in 2020, at the age of 79. Only three women have been awarded the prize: Frances Allen (in 2006), Barbara Liskov (in 2008), and Shafi Goldwasser (in 2012). As of 2024, 77 people have been awarded the prize, with the most recent recipient, in 2023, being Avi Wigderson."]], "corpus": "wikipedia"}
+ {"tstamp": 1722330729.6076, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722330729.4064, "finish": 1722330729.6076, "ip": "", "conv_id": "65495c27f0584d28bdff79babacc3d0f", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Who was the most recent ACM turing award winner?", "output": [["Who was the most recent ACM turing award winner?", "Turing Award\n\nThe ACM A. M. Turing Award is an annual prize given by the Association for Computing Machinery (ACM) for contributions of lasting and major technical importance to computer science. It is generally recognized as the highest distinction in the field of computer science and is often referred to as the \"Nobel Prize of Computing\".\nThe award is named after Alan Turing, who was a British mathematician and reader in mathematics at the University of Manchester. Turing is often credited as being the founder of theoretical computer science and artificial intelligence, and a key contributor to the Allied cryptanalysis of the Enigma cipher during World War II.\nFrom 2007 to 2013, the award was accompanied by a prize of , with financial support provided by Intel and Google. Since 2014, the award has been accompanied by a prize of  million, with financial support provided by Google.\nThe first recipient, in 1966, was Alan Perlis, of Carnegie Mellon University. The youngest recipient was Donald Knuth who won in 1974, at the age of 36, while the oldest recipient was Alfred Aho who won in 2020, at the age of 79. Only three women have been awarded the prize: Frances Allen (in 2006), Barbara Liskov (in 2008), and Shafi Goldwasser (in 2012). As of 2024, 77 people have been awarded the prize, with the most recent recipient, in 2023, being Avi Wigderson."]], "corpus": "wikipedia"}