Dataset: mteb · Modalities: Tabular, Text · Formats: json · Libraries: Datasets, Dask
Commit edbc1bf (verified) by Muennighoff · 1 Parent(s): 3803413

Scheduled Commit
data/retrieval_individual-6da8b2cf-9395-4671-ad9c-18b0374353dc.jsonl CHANGED
@@ -6,3 +6,5 @@
  {"tstamp": 1722349153.9565, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722349153.4182, "finish": 1722349153.9565, "ip": "", "conv_id": "5d9c5311eab24ce3b4e8a86a4a95c56d", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "a great benchmark for text embeddings", "output": [["a great benchmark for text embeddings", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
  {"tstamp": 1722349221.6561, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722349221.2733, "finish": 1722349221.6561, "ip": "", "conv_id": "6d655f5306ab4af996d219e94482c453", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "a great benchmark for text embeddings", "output": [["a great benchmark for text embeddings", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
  {"tstamp": 1722349221.6561, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722349221.2733, "finish": 1722349221.6561, "ip": "", "conv_id": "7775a304716743ed9ffdd51a63c5f726", "model_name": "voyage-multilingual-2", "prompt": "a great benchmark for text embeddings", "output": [["a great benchmark for text embeddings", "Title: Word Emdeddings through Hellinger PCA\n\nAbstract: Word embeddings resulting from neural language models have been shown to be successful for a large variety of NLP tasks. However, such architecture might be difficult to train and time-consuming. Instead, we propose to drastically simplify the word embeddings computation through a Hellinger PCA of the word co-occurence matrix. We compare those new word embeddings with some well-known embeddings on NER and movie review tasks and show that we can reach similar or even better performance. Although deep learning is not really necessary for generating good word embeddings, we show that it can provide an easy way to adapt embeddings to specific tasks."]], "corpus": "arxiv"}
+ {"tstamp": 1722350209.3102, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722350208.796, "finish": 1722350209.3102, "ip": "", "conv_id": "bd4f4c99e889490da720274e28ba3ed1", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "the function of the excretory system is to help maintain homeostasis by", "output": [["the function of the excretory system is to help maintain homeostasis by", "Excretory system\n\nThe excretory system is a passive biological system that removes excess, unnecessary materials from the body fluids of an organism, so as to help maintain internal chemical homeostasis and prevent damage to the body. The dual function of excretory systems is the elimination of the waste products of metabolism and to drain the body of used up and broken down components in a liquid and gaseous state. In humans and other amniotes (mammals, birds and reptiles), most of these substances leave the body as urine and to some degree exhalation, mammals also expel them through sweating.\nOnly the organs specifically used for the excretion are considered a part of the excretory system. In the narrow sense, the term refers to the urinary system. However, as excretion involves several functions that are only superficially related, it is not usually used in more formal classifications of anatomy or function.\nAs most healthy functioning organs produce metabolic and other wastes, the entire organism depends on the function of the system. Breaking down of one of more of the systems is a serious health condition, for example kidney failure.\nSystems"]], "corpus": "wikipedia"}
+ {"tstamp": 1722350209.3102, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722350208.796, "finish": 1722350209.3102, "ip": "", "conv_id": "39ff287375224ca9bb8c3fc0faa9470b", "model_name": "text-embedding-3-large", "prompt": "the function of the excretory system is to help maintain homeostasis by", "output": [["the function of the excretory system is to help maintain homeostasis by", "Excretory system\n\nThe excretory system is a passive biological system that removes excess, unnecessary materials from the body fluids of an organism, so as to help maintain internal chemical homeostasis and prevent damage to the body. The dual function of excretory systems is the elimination of the waste products of metabolism and to drain the body of used up and broken down components in a liquid and gaseous state. In humans and other amniotes (mammals, birds and reptiles), most of these substances leave the body as urine and to some degree exhalation, mammals also expel them through sweating.\nOnly the organs specifically used for the excretion are considered a part of the excretory system. In the narrow sense, the term refers to the urinary system. However, as excretion involves several functions that are only superficially related, it is not usually used in more formal classifications of anatomy or function.\nAs most healthy functioning organs produce metabolic and other wastes, the entire organism depends on the function of the system. Breaking down of one of more of the systems is a serious health condition, for example kidney failure.\nSystems"]], "corpus": "wikipedia"}
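Each record in this shard is one retrieval event from the arena: a timestamp, the model queried, the user prompt, and the retrieved passages as [query, passage] pairs in the "output" field. A minimal sketch of reading the shard with the Python standard library (the path is the file named in this diff; the field names are assumed from the records shown above):

import json

path = "data/retrieval_individual-6da8b2cf-9395-4671-ad9c-18b0374353dc.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "output" holds [query, retrieved_passage] pairs; in the records
        # above, the passage's first line is its title or article name.
        for query, passage in record["output"]:
            title = passage.splitlines()[0]
            print(f'{record["model_name"]}: "{record["prompt"]}" -> {title} [{record["corpus"]}]')

Since the format is plain JSON Lines, the same shard should also load directly with the datasets library listed above, e.g. load_dataset("json", data_files=path).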