Dataset: mteb
Modalities: Tabular, Text · Formats: json · Libraries: Datasets, Dask
Commit c007ee7 (verified) · 1 parent: 2d3812e
Committed by Muennighoff

Scheduled Commit
data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl CHANGED
@@ -68,3 +68,13 @@
68
  {"tstamp": 1722227184.2501, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "df087591632840268d7cda35b4eee3d6", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "A model replacing RNN/CNN with self-attention for better HAR results ", "0_output": [["A model replacing RNN/CNN with self-attention for better HAR results ", "Title: Partial Representations and Amenable Fell Bundles over Free Groups\n\nAbstract: We show that a Fell bundle B = {B_t}_{t \\in F}, over an arbitrary free group F, is amenable, whenever it is orthogonal (in the sense that B_x^* B_y = 0, if x and y are distinct generators of F) and semi-saturated (in the sense that B_{ts} coincides with the closed linear span of B_t B_s, when the multiplication ``ts'' involves no cancelation)."]], "0_corpus": "arxiv", "1_conv_id": "39bbefcfdcb54d1fbfae7a726a6a9260", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "A model replacing RNN/CNN with self-attention for better HAR results ", "1_output": [["A model replacing RNN/CNN with self-attention for better HAR results ", "Title: Electromagnetic transitions with effective operators\n\nAbstract: In the no-core shell model formalism we compute effective one- and two-body operators, using the Lee-Suzuki procedure within the two-body cluster approximation. We evaluate the validity of the latter through calculations in reduced model spaces. In particular, we test the results for the two-body system and find that indeed the effective operators in the reduced space reproduce the expectation values or transition strengths computed in the full space. On the other hand, the renormalization for operators in the case of 6Li is very weak, suggesting the need for higher-body clusters in computing the effective interaction."]], "1_corpus": "arxiv"}
69
  {"tstamp": 1722227262.9897, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "06e0a8ed452a4140b64f3fc3ef65afc7", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "can you travel to kenya with an expired passport?", "0_output": [["can you travel to kenya with an expired passport?", "Gaming Stackexchange\n\nQ: Can I get a document back after I have returned it? After approving a migrant worker's passport and returning it to them, I realized that it was expired. I still have their work ticket, which appears valid. Is there any way to deny them entry to the country at this point?\n\nA: Nope.\nWell, you just made a big mistake: You approved before checking everything. The game is very harsh in this regard and does not allow you to roll back your decision, once you have given back the document.\nThis is part of the message - Think before you act, even if it takes valuable time ... Or just push your luck.\nAlso: You don't get fined immediately for a simple error - you get two chances every day to make mistakes (intentional or otherwise), and then the fines start kicking in.\n"]], "0_corpus": "stackexchange", "1_conv_id": "18595514e0f1447ea4ec341445ee2f8d", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "can you travel to kenya with an expired passport?", "1_output": [["can you travel to kenya with an expired passport?", "Gaming Stackexchange\n\nQ: Can I get a document back after I have returned it? After approving a migrant worker's passport and returning it to them, I realized that it was expired. I still have their work ticket, which appears valid. Is there any way to deny them entry to the country at this point?\n\nA: Nope.\nWell, you just made a big mistake: You approved before checking everything. The game is very harsh in this regard and does not allow you to roll back your decision, once you have given back the document.\nThis is part of the message - Think before you act, even if it takes valuable time ... Or just push your luck.\nAlso: You don't get fined immediately for a simple error - you get two chances every day to make mistakes (intentional or otherwise), and then the fines start kicking in.\n"]], "1_corpus": "stackexchange"}
70
  {"tstamp": 1722227306.3763, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "5dea31ed6e6640a48d84eccc072395b9", "0_model_name": "text-embedding-3-large", "0_prompt": "Paper on GPt models for embedding", "0_output": [["Paper on GPt models for embedding", "Title: Gaussian Process Topic Models\n\nAbstract: We introduce Gaussian Process Topic Models (GPTMs), a new family of topic models which can leverage a kernel among documents while extracting correlated topics. GPTMs can be considered a systematic generalization of the Correlated Topic Models (CTMs) using ideas from Gaussian Process (GP) based embedding. Since GPTMs work with both a topic covariance matrix and a document kernel matrix, learning GPTMs involves a novel component-solving a suitable Sylvester equation capturing both topic and document dependencies. The efficacy of GPTMs is demonstrated with experiments evaluating the quality of both topic modeling and embedding."]], "0_corpus": "arxiv", "1_conv_id": "64b61d37f88c4e7f90958d2817eca881", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Paper on GPt models for embedding", "1_output": [["Paper on GPt models for embedding", "Title: Nimble GNN Embedding with Tensor-Train Decomposition\n\nAbstract: This paper describes a new method for representing embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition. We consider the scenario where (a) the graph data that lack node features, thereby requiring the learning of embeddings during training; and (b) we wish to exploit GPU platforms, where smaller tables are needed to reduce host-to-GPU communication even for large-memory GPUs. The use of TT enables a compact parameterization of the embedding, rendering it small enough to fit entirely on modern GPUs even for massive graphs. When combined with judicious schemes for initialization and hierarchical graph partitioning, this approach can reduce the size of node embedding vectors by 1,659 times to 81,362 times on large publicly available benchmark datasets, achieving comparable or better accuracy and significant speedups on multi-GPU systems. In some cases, our model without explicit node features on input can even match the accuracy of models that use node features."]], "1_corpus": "arxiv"}
71
+ {"tstamp": 1722227318.8507, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "b47a04ca853a4603ab5765eabae923fb", "0_model_name": "voyage-multilingual-2", "0_prompt": "Work using GPT models for text embeddings", "0_output": [["Work using GPT models for text embeddings", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "0_corpus": "arxiv", "1_conv_id": "c086e98e80db4adfb1a1ffe9e6346a15", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Work using GPT models for text embeddings", "1_output": [["Work using GPT models for text embeddings", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "1_corpus": "arxiv"}
72
+ {"tstamp": 1722227337.5868, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "4972233d3fbe45d39f4c2244b64a0539", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: Subgroups of inertia groups arising from abelian varieties\n\nAbstract: Given an abelian variety over a field with a discrete valuation, Grothendieck defined a certain open normal subgroup of the absolute inertia group. This subgroup encodes information on the extensions over which the abelian variety acquires semistable reduction. We study this subgroup, and use it to obtain information on the extensions over which the abelian variety acquires semistable reduction."]], "0_corpus": "arxiv", "1_conv_id": "1ecdceaadad9423e960a4be24b139ff5", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: Polarizability Extraction for Waveguide-Fed Metasurfaces\n\nAbstract: We consider the design and modeling of metasurfaces that couple energy from guided waves to propagating wavefronts. This is a first step towards a comprehensive, multiscale modeling platform for metasurface antennas-large arrays of metamaterial elements embedded in a waveguide structure that radiates into free-space--in which the detailed electromagnetic responses of metamaterial elements are replaced by polarizable dipoles. We present two methods to extract the effective polarizability of a metamaterial element embedded in a one- or two-dimensional waveguide. The first method invokes surface equivalence principles, averaging over the effective surface currents and charges within an element to obtain the effective dipole moments; the second method is based on computing the coefficients of the scattered waves within the waveguide, from which the effective polarizability can be inferred. We demonstrate these methods on several variants of waveguide-fed metasurface elements, finding excellent agreement between the two, as well as with analytical expressions derived for irises with simpler geometries. Extending the polarizability extraction technique to higher order multipoles, we confirm the validity of the dipole approximation for common metamaterial elements. With the effective polarizabilities of the metamaterial elements accurately determined, the radiated fields generated by a metasurface antenna (inside and outside the antenna) can be found self-consistently by including the interactions between polarizable dipoles. The dipole description provides an alternative language and computational framework for engineering metasurface antennas, holograms, lenses, beam-forming arrays, and other electrically large, waveguide-fed metasurface structures."]], "1_corpus": "arxiv"}
73
+ {"tstamp": 1722227347.7109, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "e2c63cb9c4d74f3a86a2d5ce975990a6", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "0_corpus": "arxiv", "1_conv_id": "2e8d1e46b8694af8bcd2c4bb95e142c3", "1_model_name": "text-embedding-004", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: Improving Text Embeddings with Large Language Models\n\nAbstract: In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks."]], "1_corpus": "arxiv"}
74
+ {"tstamp": 1722227356.5243, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5bebb04b32774842adaca330c780388e", "0_model_name": "BM25", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "0_corpus": "arxiv", "1_conv_id": "17f56269b9cf4be3ab5bf8d23d304eb2", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: Polarizability Extraction for Waveguide-Fed Metasurfaces\n\nAbstract: We consider the design and modeling of metasurfaces that couple energy from guided waves to propagating wavefronts. This is a first step towards a comprehensive, multiscale modeling platform for metasurface antennas-large arrays of metamaterial elements embedded in a waveguide structure that radiates into free-space--in which the detailed electromagnetic responses of metamaterial elements are replaced by polarizable dipoles. We present two methods to extract the effective polarizability of a metamaterial element embedded in a one- or two-dimensional waveguide. The first method invokes surface equivalence principles, averaging over the effective surface currents and charges within an element to obtain the effective dipole moments; the second method is based on computing the coefficients of the scattered waves within the waveguide, from which the effective polarizability can be inferred. We demonstrate these methods on several variants of waveguide-fed metasurface elements, finding excellent agreement between the two, as well as with analytical expressions derived for irises with simpler geometries. Extending the polarizability extraction technique to higher order multipoles, we confirm the validity of the dipole approximation for common metamaterial elements. With the effective polarizabilities of the metamaterial elements accurately determined, the radiated fields generated by a metasurface antenna (inside and outside the antenna) can be found self-consistently by including the interactions between polarizable dipoles. The dipole description provides an alternative language and computational framework for engineering metasurface antennas, holograms, lenses, beam-forming arrays, and other electrically large, waveguide-fed metasurface structures."]], "1_corpus": "arxiv"}
75
+ {"tstamp": 1722227365.1903, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "17c7b6e61796401ea7d401cf890f2283", "0_model_name": "BM25", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "0_corpus": "arxiv", "1_conv_id": "54529cd6bc5349eebebfd19f27aa985b", "1_model_name": "text-embedding-3-large", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: How to use LLMs for Text Analysis\n\nAbstract: This guide introduces Large Language Models (LLM) as a highly versatile text analysis method within the social sciences. As LLMs are easy-to-use, cheap, fast, and applicable on a broad range of text analysis tasks, ranging from text annotation and classification to sentiment analysis and critical discourse analysis, many scholars believe that LLMs will transform how we do text analysis. This how-to guide is aimed at students and researchers with limited programming experience, and offers a simple introduction to how LLMs can be used for text analysis in your own research project, as well as advice on best practices. We will go through each of the steps of analyzing textual data with LLMs using Python: installing the software, setting up the API, loading the data, developing an analysis prompt, analyzing the text, and validating the results. As an illustrative example, we will use the challenging task of identifying populism in political texts, and show how LLMs move beyond the existing state-of-the-art."]], "1_corpus": "arxiv"}
76
+ {"tstamp": 1722227372.1059, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "7c4f641b6b16434ba11ddcb50efe7dd3", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "How to use LLMs for text embedding?", "0_output": [["How to use LLMs for text embedding?", "Title: A Robust Parser Based on Syntactic Information\n\nAbstract: In this paper, we propose a robust parser which can parse extragrammatical sentences. This parser can recover them using only syntactic information. It can be easily modified and extended because it utilize only syntactic information."]], "0_corpus": "arxiv", "1_conv_id": "4b44b05947b841cb9601128f35a74d27", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "How to use LLMs for text embedding?", "1_output": [["How to use LLMs for text embedding?", "Title: LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders\n\nAbstract: Large decoder-only language models (LLMs) are the state-of-the-art models on most of today's NLP tasks and benchmarks. Yet, the community is only slowly adopting these models for text embedding tasks, which require rich contextualized representations. In this work, we introduce LLM2Vec, a simple unsupervised approach that can transform any decoder-only LLM into a strong text encoder. LLM2Vec consists of three simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. We demonstrate the effectiveness of LLM2Vec by applying it to 3 popular LLMs ranging from 1.3B to 7B parameters and evaluate the transformed models on English word- and sequence-level tasks. We outperform encoder-only models by a large margin on word-level tasks and reach a new unsupervised state-of-the-art performance on the Massive Text Embeddings Benchmark (MTEB). Moreover, when combining LLM2Vec with supervised contrastive learning, we achieve state-of-the-art performance on MTEB among models that train only on publicly available data. Our strong empirical results and extensive analysis demonstrate that LLMs can be effectively transformed into universal text encoders in a parameter-efficient manner without the need for expensive adaptation or synthetic GPT-4 generated data."]], "1_corpus": "arxiv"}
77
+ {"tstamp": 1722227388.5972, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "3e693ba9880b4b2a96103949236f54eb", "0_model_name": "BM25", "0_prompt": "How to use GPT-3 for text embedding?", "0_output": [["How to use GPT-3 for text embedding?", "Title: Prior Art Search and Reranking for Generated Patent Text\n\nAbstract: Generative models, such as GPT-2, have demonstrated impressive results recently. A fundamental question we'd like to address is: where did the generated text come from? This work is our initial effort toward answering the question by using prior art search. The purpose of the prior art search is to find the most similar prior text in the training data of GPT-2. We take a reranking approach and apply it to the patent domain. Specifically, we pre-train GPT-2 models from scratch by using the patent data from the USPTO. The input for the prior art search is the patent text generated by the GPT-2 model. We also pre-trained BERT models from scratch for converting patent text to embeddings. The steps of reranking are: (1) search the most similar text in the training data of GPT-2 by taking a bag-of-word ranking approach (BM25), (2) convert the search results in text format to BERT embeddings, and (3) provide the final result by ranking the BERT embeddings based on their similarities with the patent text generated by GPT-2. The experiments in this work show that such reranking is better than ranking with embeddings alone. However, our mixed results also indicate that calculating the semantic similarities among long text spans is still challenging. To our knowledge, this work is the first to implement a reranking system to identify retrospectively the most similar inputs to a GPT model based on its output."]], "0_corpus": "arxiv", "1_conv_id": "0d5001addb9d4a18b601c3d872fd0520", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "How to use GPT-3 for text embedding?", "1_output": [["How to use GPT-3 for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "1_corpus": "arxiv"}
78
+ {"tstamp": 1722227396.7613, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "a588532639944ec6bf1492cbd64ad35d", "0_model_name": "voyage-multilingual-2", "0_prompt": "How to use GPT for text embedding?", "0_output": [["How to use GPT for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "0_corpus": "arxiv", "1_conv_id": "3b7155d235974f69a8757e75f4f9bb52", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "How to use GPT for text embedding?", "1_output": [["How to use GPT for text embedding?", "Title: Phase Space Derivation of a Variational Principle for One Dimensional Hamiltonian Systems\n\nAbstract: We consider the bifurcation problem u'' + \\lambda u = N(u) with two point boundary conditions where N(u) is a general nonlinear term which may also depend on the eigenvalue \\lambda. A new derivation of a variational principle for the lowest eigenvalue \\lambda is given. This derivation makes use only of simple algebraic inequalities and leads directly to a more explicit expression for the eigenvalue than what had been given previously."]], "1_corpus": "arxiv"}
79
+ {"tstamp": 1722227401.7211, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "2f09b604c1644cccabd56e80c3e71093", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "How to use GPT for text embedding?", "0_output": [["How to use GPT for text embedding?", "Title: Numerical Criteria for vey Ampleness of Divisors on Projective Bundles over an elliptic curve\n\nAbstract: In Butler, J.Differential Geom. 39 (1):1--34,1994, the author gives a sufficient condition for a line bundle associated with a divisor D to be normally generated on $X=P(E)$ where E is a vector bundle over a smooth curve C. A line bundle which is ample and normally generated is automatically very ample. Therefore the condition found in Butler's work, together with Miyaoka's well known ampleness criterion, give a sufficient condition for the very ampleness of D on X. This work is devoted to the study of numerical criteria for very ampleness of divisors D which do not satisfy the above criterion, in the case of C elliptic. Numerical conditions for the very ampleness of D are proved,improving existing results. In some cases a complete numerical characterization is found."]], "0_corpus": "arxiv", "1_conv_id": "66ad5195a32443cea82ad127a698d152", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "How to use GPT for text embedding?", "1_output": [["How to use GPT for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "1_corpus": "arxiv"}
80
+ {"tstamp": 1722227416.1574, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "e7d1f7ce30e2419d8edc849d80b6ca18", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "How to use GPT for embedding & search?", "0_output": [["How to use GPT for embedding & search?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "0_corpus": "arxiv", "1_conv_id": "a75a2008513a4c0e83f403f59cdd4efe", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "How to use GPT for embedding & search?", "1_output": [["How to use GPT for embedding & search?", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "1_corpus": "arxiv"}
data/retrieval_individual-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl CHANGED
@@ -161,3 +161,21 @@
161
  {"tstamp": 1722227294.6746, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722227294.2019, "finish": 1722227294.6746, "ip": "", "conv_id": "64b61d37f88c4e7f90958d2817eca881", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Paper on GPt models for embedding", "output": [["Paper on GPt models for embedding", "Title: Nimble GNN Embedding with Tensor-Train Decomposition\n\nAbstract: This paper describes a new method for representing embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition. We consider the scenario where (a) the graph data that lack node features, thereby requiring the learning of embeddings during training; and (b) we wish to exploit GPU platforms, where smaller tables are needed to reduce host-to-GPU communication even for large-memory GPUs. The use of TT enables a compact parameterization of the embedding, rendering it small enough to fit entirely on modern GPUs even for massive graphs. When combined with judicious schemes for initialization and hierarchical graph partitioning, this approach can reduce the size of node embedding vectors by 1,659 times to 81,362 times on large publicly available benchmark datasets, achieving comparable or better accuracy and significant speedups on multi-GPU systems. In some cases, our model without explicit node features on input can even match the accuracy of models that use node features."]], "corpus": "arxiv"}
162
  {"tstamp": 1722227314.6981, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722227314.3433, "finish": 1722227314.6981, "ip": "", "conv_id": "b47a04ca853a4603ab5765eabae923fb", "model_name": "voyage-multilingual-2", "prompt": "Work using GPT models for text embeddings", "output": [["Work using GPT models for text embeddings", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "corpus": "arxiv"}
163
  {"tstamp": 1722227314.6981, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722227314.3433, "finish": 1722227314.6981, "ip": "", "conv_id": "c086e98e80db4adfb1a1ffe9e6346a15", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Work using GPT models for text embeddings", "output": [["Work using GPT models for text embeddings", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "corpus": "arxiv"}
164
+ {"tstamp": 1722227335.0278, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722227334.9303, "finish": 1722227335.0278, "ip": "", "conv_id": "4972233d3fbe45d39f4c2244b64a0539", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: Subgroups of inertia groups arising from abelian varieties\n\nAbstract: Given an abelian variety over a field with a discrete valuation, Grothendieck defined a certain open normal subgroup of the absolute inertia group. This subgroup encodes information on the extensions over which the abelian variety acquires semistable reduction. We study this subgroup, and use it to obtain information on the extensions over which the abelian variety acquires semistable reduction."]], "corpus": "arxiv"}
165
+ {"tstamp": 1722227335.0278, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722227334.9303, "finish": 1722227335.0278, "ip": "", "conv_id": "1ecdceaadad9423e960a4be24b139ff5", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: Polarizability Extraction for Waveguide-Fed Metasurfaces\n\nAbstract: We consider the design and modeling of metasurfaces that couple energy from guided waves to propagating wavefronts. This is a first step towards a comprehensive, multiscale modeling platform for metasurface antennas-large arrays of metamaterial elements embedded in a waveguide structure that radiates into free-space--in which the detailed electromagnetic responses of metamaterial elements are replaced by polarizable dipoles. We present two methods to extract the effective polarizability of a metamaterial element embedded in a one- or two-dimensional waveguide. The first method invokes surface equivalence principles, averaging over the effective surface currents and charges within an element to obtain the effective dipole moments; the second method is based on computing the coefficients of the scattered waves within the waveguide, from which the effective polarizability can be inferred. We demonstrate these methods on several variants of waveguide-fed metasurface elements, finding excellent agreement between the two, as well as with analytical expressions derived for irises with simpler geometries. Extending the polarizability extraction technique to higher order multipoles, we confirm the validity of the dipole approximation for common metamaterial elements. With the effective polarizabilities of the metamaterial elements accurately determined, the radiated fields generated by a metasurface antenna (inside and outside the antenna) can be found self-consistently by including the interactions between polarizable dipoles. The dipole description provides an alternative language and computational framework for engineering metasurface antennas, holograms, lenses, beam-forming arrays, and other electrically large, waveguide-fed metasurface structures."]], "corpus": "arxiv"}
166
+ {"tstamp": 1722227340.7045, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722227340.0933, "finish": 1722227340.7045, "ip": "", "conv_id": "e2c63cb9c4d74f3a86a2d5ce975990a6", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "corpus": "arxiv"}
167
+ {"tstamp": 1722227340.7045, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722227340.0933, "finish": 1722227340.7045, "ip": "", "conv_id": "2e8d1e46b8694af8bcd2c4bb95e142c3", "model_name": "text-embedding-004", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: Improving Text Embeddings with Large Language Models\n\nAbstract: In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks."]], "corpus": "arxiv"}
+ {"tstamp": 1722227352.0786, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722227351.9669, "finish": 1722227352.0786, "ip": "", "conv_id": "5bebb04b32774842adaca330c780388e", "model_name": "BM25", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "corpus": "arxiv"}
+ {"tstamp": 1722227352.0786, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722227351.9669, "finish": 1722227352.0786, "ip": "", "conv_id": "17f56269b9cf4be3ab5bf8d23d304eb2", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: Polarizability Extraction for Waveguide-Fed Metasurfaces\n\nAbstract: We consider the design and modeling of metasurfaces that couple energy from guided waves to propagating wavefronts. This is a first step towards a comprehensive, multiscale modeling platform for metasurface antennas-large arrays of metamaterial elements embedded in a waveguide structure that radiates into free-space--in which the detailed electromagnetic responses of metamaterial elements are replaced by polarizable dipoles. We present two methods to extract the effective polarizability of a metamaterial element embedded in a one- or two-dimensional waveguide. The first method invokes surface equivalence principles, averaging over the effective surface currents and charges within an element to obtain the effective dipole moments; the second method is based on computing the coefficients of the scattered waves within the waveguide, from which the effective polarizability can be inferred. We demonstrate these methods on several variants of waveguide-fed metasurface elements, finding excellent agreement between the two, as well as with analytical expressions derived for irises with simpler geometries. Extending the polarizability extraction technique to higher order multipoles, we confirm the validity of the dipole approximation for common metamaterial elements. With the effective polarizabilities of the metamaterial elements accurately determined, the radiated fields generated by a metasurface antenna (inside and outside the antenna) can be found self-consistently by including the interactions between polarizable dipoles. The dipole description provides an alternative language and computational framework for engineering metasurface antennas, holograms, lenses, beam-forming arrays, and other electrically large, waveguide-fed metasurface structures."]], "corpus": "arxiv"}
+ {"tstamp": 1722227359.8827, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722227359.3298, "finish": 1722227359.8827, "ip": "", "conv_id": "17c7b6e61796401ea7d401cf890f2283", "model_name": "BM25", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: Text clustering with LLM embeddings\n\nAbstract: Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis."]], "corpus": "arxiv"}
+ {"tstamp": 1722227359.8827, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722227359.3298, "finish": 1722227359.8827, "ip": "", "conv_id": "54529cd6bc5349eebebfd19f27aa985b", "model_name": "text-embedding-3-large", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: How to use LLMs for Text Analysis\n\nAbstract: This guide introduces Large Language Models (LLM) as a highly versatile text analysis method within the social sciences. As LLMs are easy-to-use, cheap, fast, and applicable on a broad range of text analysis tasks, ranging from text annotation and classification to sentiment analysis and critical discourse analysis, many scholars believe that LLMs will transform how we do text analysis. This how-to guide is aimed at students and researchers with limited programming experience, and offers a simple introduction to how LLMs can be used for text analysis in your own research project, as well as advice on best practices. We will go through each of the steps of analyzing textual data with LLMs using Python: installing the software, setting up the API, loading the data, developing an analysis prompt, analyzing the text, and validating the results. As an illustrative example, we will use the challenging task of identifying populism in political texts, and show how LLMs move beyond the existing state-of-the-art."]], "corpus": "arxiv"}
+ {"tstamp": 1722227369.3743, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722227369.3003, "finish": 1722227369.3743, "ip": "", "conv_id": "7c4f641b6b16434ba11ddcb50efe7dd3", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: A Robust Parser Based on Syntactic Information\n\nAbstract: In this paper, we propose a robust parser which can parse extragrammatical sentences. This parser can recover them using only syntactic information. It can be easily modified and extended because it utilize only syntactic information."]], "corpus": "arxiv"}
+ {"tstamp": 1722227369.3743, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722227369.3003, "finish": 1722227369.3743, "ip": "", "conv_id": "4b44b05947b841cb9601128f35a74d27", "model_name": "GritLM/GritLM-7B", "prompt": "How to use LLMs for text embedding?", "output": [["How to use LLMs for text embedding?", "Title: LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders\n\nAbstract: Large decoder-only language models (LLMs) are the state-of-the-art models on most of today's NLP tasks and benchmarks. Yet, the community is only slowly adopting these models for text embedding tasks, which require rich contextualized representations. In this work, we introduce LLM2Vec, a simple unsupervised approach that can transform any decoder-only LLM into a strong text encoder. LLM2Vec consists of three simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. We demonstrate the effectiveness of LLM2Vec by applying it to 3 popular LLMs ranging from 1.3B to 7B parameters and evaluate the transformed models on English word- and sequence-level tasks. We outperform encoder-only models by a large margin on word-level tasks and reach a new unsupervised state-of-the-art performance on the Massive Text Embeddings Benchmark (MTEB). Moreover, when combining LLM2Vec with supervised contrastive learning, we achieve state-of-the-art performance on MTEB among models that train only on publicly available data. Our strong empirical results and extensive analysis demonstrate that LLMs can be effectively transformed into universal text encoders in a parameter-efficient manner without the need for expensive adaptation or synthetic GPT-4 generated data."]], "corpus": "arxiv"}
+ {"tstamp": 1722227382.0191, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722227381.8224, "finish": 1722227382.0191, "ip": "", "conv_id": "3e693ba9880b4b2a96103949236f54eb", "model_name": "BM25", "prompt": "How to use GPT-3 for text embedding?", "output": [["How to use GPT-3 for text embedding?", "Title: Prior Art Search and Reranking for Generated Patent Text\n\nAbstract: Generative models, such as GPT-2, have demonstrated impressive results recently. A fundamental question we'd like to address is: where did the generated text come from? This work is our initial effort toward answering the question by using prior art search. The purpose of the prior art search is to find the most similar prior text in the training data of GPT-2. We take a reranking approach and apply it to the patent domain. Specifically, we pre-train GPT-2 models from scratch by using the patent data from the USPTO. The input for the prior art search is the patent text generated by the GPT-2 model. We also pre-trained BERT models from scratch for converting patent text to embeddings. The steps of reranking are: (1) search the most similar text in the training data of GPT-2 by taking a bag-of-word ranking approach (BM25), (2) convert the search results in text format to BERT embeddings, and (3) provide the final result by ranking the BERT embeddings based on their similarities with the patent text generated by GPT-2. The experiments in this work show that such reranking is better than ranking with embeddings alone. However, our mixed results also indicate that calculating the semantic similarities among long text spans is still challenging. To our knowledge, this work is the first to implement a reranking system to identify retrospectively the most similar inputs to a GPT model based on its output."]], "corpus": "arxiv"}
+ {"tstamp": 1722227382.0191, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722227381.8224, "finish": 1722227382.0191, "ip": "", "conv_id": "0d5001addb9d4a18b601c3d872fd0520", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "How to use GPT-3 for text embedding?", "output": [["How to use GPT-3 for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "corpus": "arxiv"}
+ {"tstamp": 1722227394.1544, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722227393.8059, "finish": 1722227394.1544, "ip": "", "conv_id": "a588532639944ec6bf1492cbd64ad35d", "model_name": "voyage-multilingual-2", "prompt": "How to use GPT for text embedding?", "output": [["How to use GPT for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "corpus": "arxiv"}
+ {"tstamp": 1722227394.1544, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722227393.8059, "finish": 1722227394.1544, "ip": "", "conv_id": "3b7155d235974f69a8757e75f4f9bb52", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "How to use GPT for text embedding?", "output": [["How to use GPT for text embedding?", "Title: Phase Space Derivation of a Variational Principle for One Dimensional Hamiltonian Systems\n\nAbstract: We consider the bifurcation problem u'' + \\lambda u = N(u) with two point boundary conditions where N(u) is a general nonlinear term which may also depend on the eigenvalue \\lambda. A new derivation of a variational principle for the lowest eigenvalue \\lambda is given. This derivation makes use only of simple algebraic inequalities and leads directly to a more explicit expression for the eigenvalue than what had been given previously."]], "corpus": "arxiv"}
+ {"tstamp": 1722227399.2543, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722227399.1155, "finish": 1722227399.2543, "ip": "", "conv_id": "2f09b604c1644cccabd56e80c3e71093", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "How to use GPT for text embedding?", "output": [["How to use GPT for text embedding?", "Title: Numerical Criteria for vey Ampleness of Divisors on Projective Bundles over an elliptic curve\n\nAbstract: In Butler, J.Differential Geom. 39 (1):1--34,1994, the author gives a sufficient condition for a line bundle associated with a divisor D to be normally generated on $X=P(E)$ where E is a vector bundle over a smooth curve C. A line bundle which is ample and normally generated is automatically very ample. Therefore the condition found in Butler's work, together with Miyaoka's well known ampleness criterion, give a sufficient condition for the very ampleness of D on X. This work is devoted to the study of numerical criteria for very ampleness of divisors D which do not satisfy the above criterion, in the case of C elliptic. Numerical conditions for the very ampleness of D are proved,improving existing results. In some cases a complete numerical characterization is found."]], "corpus": "arxiv"}
+ {"tstamp": 1722227399.2543, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722227399.1155, "finish": 1722227399.2543, "ip": "", "conv_id": "66ad5195a32443cea82ad127a698d152", "model_name": "GritLM/GritLM-7B", "prompt": "How to use GPT for text embedding?", "output": [["How to use GPT for text embedding?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "corpus": "arxiv"}
+ {"tstamp": 1722227413.4478, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722227413.2492, "finish": 1722227413.4478, "ip": "", "conv_id": "e7d1f7ce30e2419d8edc849d80b6ca18", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "How to use GPT for embedding & search?", "output": [["How to use GPT for embedding & search?", "Title: Towards General Text Embeddings with Multi-stage Contrastive Learning\n\nAbstract: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. By significantly increasing the number of training data during both unsupervised pre-training and supervised fine-tuning stages, we achieve substantial performance gains over existing embedding models. Notably, even with a relatively modest parameter count of 110M, GTE$_\\text{base}$ outperforms the black-box embedding API provided by OpenAI and even surpasses 10x larger text embedding models on the massive text embedding benchmark. Furthermore, without additional fine-tuning on each programming language individually, our model outperforms previous best code retrievers of similar size by treating code as text. In summary, our model achieves impressive results by effectively harnessing multi-stage contrastive learning, offering a powerful and efficient text embedding model with broad applicability across various NLP and code-related tasks."]], "corpus": "arxiv"}
+ {"tstamp": 1722227413.4478, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722227413.2492, "finish": 1722227413.4478, "ip": "", "conv_id": "a75a2008513a4c0e83f403f59cdd4efe", "model_name": "GritLM/GritLM-7B", "prompt": "How to use GPT for embedding & search?", "output": [["How to use GPT for embedding & search?", "Title: SGPT: GPT Sentence Embeddings for Semantic Search\n\nAbstract: Decoder transformers have continued increasing in scale reaching hundreds of billions of parameters. Due to their scale the same decoder sets state-of-the-art results on various language tasks via prompting or fine-tuning. Yet, these large foundation models remain unusable for the related fields of semantic search and sentence embeddings. This prevents possibly new state-of-the-art results and forces organizations to train and maintain separate models. To this end, we propose SGPT to use decoders for sentence embeddings and semantic search via prompting or fine-tuning. At 5.8 billion parameters SGPT improves on the previously best sentence embeddings by a margin of 7% and outperforms a concurrent method with 175 billion parameters as measured on the BEIR search benchmark. Code, models and result files are freely available at https://github.com/Muennighoff/sgpt."]], "corpus": "arxiv"}