Muennighoff
committed on
Scheduled Commit
data/retrieval_battle-b9229914-47bc-4da8-a21b-89329fff8207.jsonl
CHANGED
@@ -7,3 +7,5 @@
{"tstamp": 1722310164.7577, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "fc308a6564e84da2b576fda8f5b0f500", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Study on improving computational efficiency and reducing parameters in recommender systems using prompt tuning and pruning techniques", "0_output": [["Study on improving computational efficiency and reducing parameters in recommender systems using prompt tuning and pruning techniques", "Title: Spatio-temporal dynamics and plastic flow of vortices in superconductors with periodic arrays of pinning sites\n\nAbstract: We present simulations of flux-gradient-driven superconducting rigid vortices interacting with square and triangular arrays of columnar pinning sites in an increasing external magnetic field. These simulations allow us to quantitatively relate spatio-temporal microscopic information of the vortex lattice with typically measured macroscopic quantities, such as the magnetization $M(H)$. The flux lattice does not become completely commensurate with the pinning sites throughout the sample at the magnetization matching peaks, but forms a commensurate lattice in a region close to the edge of the sample. Matching fields related to unstable vortex configurations do not produce peaks in $M(H)$. We observe a variety of evolving complex flux profiles, including flat terraces or plateaus separated by winding current-carrying strings and, near the peaks in $M(H)$, plateaus only in certain regions, which move through the sample as the field increases."]], "0_corpus": "arxiv", "1_conv_id": "659b5d7e65414017a44941dc4860d179", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Study on improving computational efficiency and reducing parameters in recommender systems using prompt tuning and pruning techniques", "1_output": [["Study on improving computational efficiency and reducing parameters in recommender systems using prompt tuning and pruning techniques", "Title: Local Hypercomplex Analyticity\n\nAbstract: The notion of analyticity is studied in the context of hypercomplex numbers. A critical review of the problems arising from the conventional approach is given. We describe a local analyticity condition which yields the desired type of hypercomplex solutions. The result is the definition of a generalized complex analyticity to hypercomplex space. 02.10.Tq/Vr, 02.30.-f/Dk, 02.90.+p"]], "1_corpus": "arxiv"}
{"tstamp": 1722310269.5459, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "356862c28de946ae8a434438c62a06e4", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "diablo 2 is not working on windows 7?", "0_output": [["diablo 2 is not working on windows 7?", "Gaming Stackexchange\n\nQ: Playing Diablo 1 on Windows 7 I just found an old copy of Diablo (the first game) in a box and whacked it into my laptop I purchased just yesterday.\nAfter the quick install I hit play and found that the screen remained black (nothing was rendering). There was however sound.\nI downloaded the patch but still nothing.\nWith it being a new laptop, I figure I might be missing some required software like Direct X maybe? I'm not really sure.\nIf anyone has any suggestions or knows how to fix this issue, that would be really helpful.\n\nA: This may sound weird (and I have actually done this to play Diablo on my Win 7 laptop), but right click on your desktop and click on Screen Resolution. Leave that open and then run Diablo.\n"]], "0_corpus": "stackexchange", "1_conv_id": "375b8963e38a4bd191128977a52d9b97", "1_model_name": "BM25", "1_prompt": "diablo 2 is not working on windows 7?", "1_output": [["diablo 2 is not working on windows 7?", "Gaming Stackexchange\n\nQ: How do I configure Diablo 2 in Wine to -skiptobnet and -w? I am using Ubuntu Gnome and Diablo 2 works ok with Wine. How do I add shortcut options to the game? I want the -skiptobnet and -w for windowed mode.\nPath is ~/.wine/drive_c/Program Files (x86)/Diablo II/Game.exe.\n\nA: I was able to get options to work using the windows .lnk file that was created in the wine start menu directory during installation.\nYou can launch Diablo II through terminal or create a new launcher using this command replacing PATH_TO_WINE and PATH_TO_SHORTCUT:\nenv WINEPREFIX=\"PATH_TO_WINE\" wine C:\\\\windows\\\\command\\\\start.exe PATH_TO_SHORTCUT -skiptobnet -w\n\nFor example:\nenv WINEPREFIX=\"/home/nonterrorist/.wine\" wine C:\\\\windows\\\\command\\\\start.exe /Unix /home/nonterrorist/.wine/dosdevices/c:/users/Public/Start\\ Menu/Programs/Diablo\\ II/Diablo\\ II\\ -\\ Lord\\ of\\ Destruction.lnk -skiptobnet -w\n\nNote in the example above, you will need to escape any spaces in the shortcut path using \\\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722310291.9285, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "9776843ed00d4cab89c0894cad832500", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "what are chapters in the bible called?", "0_output": [["what are chapters in the bible called?", "Stackoverflow Stackexchange\n\nQ: How to strip executables thoroughly I'd like to strip as much as I can - on Linux: an ELF. I only want in there the stuff I need to run it.\nI tried using strip:\nstrip --strip-all elf\n\nBut it doesn't seem to do a good job: nm still displays lots of stuff, and the binary is still big.\nWhat should I do?\n\nA: I would check out this great article that goes into depth on making an ELF executable as small as possible. Maybe it has info that could help!\nhttp://www.muppetlabs.com/~breadbox/software/tiny/teensy.html\n"]], "0_corpus": "stackexchange", "1_conv_id": "87810785bece4d6dab306293a64980f2", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "what are chapters in the bible called?", "1_output": [["what are chapters in the bible called?", "English Stackexchange\n\nQ: Why are the divisions of the Bible called \"verses\"? In common parlance, a verse is a writing arranged with a metrical rhythm, typically having a rhyme. Most of the books of Bible contain prose which do not follow metrical rhythm or rhyme. But their sentences are called verses. My question therefore is: Why are the divisions of the Bible called verses?\n\nA: A number of questions on Stack Exchange - English Language and Usage have commented on the words 'stanza' and 'verse' in relation to poetry and to scripture.\n'Verses' are also used in Shakespeare.\nMerriam Webster defines a 'verse' in three ways : metrical verse, a stanza or a biblical verse.\nIn any work which requires to be referenced and quoted as part of its function, stanzas or verses are numbered accordingly so that they can be cited.\n"]], "1_corpus": "stackexchange"}
+{"tstamp": 1722310355.5437, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "256e5bb6399a4ba7b599c9ba9a402e95", "0_model_name": "embed-english-v3.0", "0_prompt": "Learning word representations to improve IR tasks like query expansion and classification", "0_output": [["Learning word representations to improve IR tasks like query expansion and classification", "Title: Relevance-based Word Embedding\n\nAbstract: Learning a high-dimensional dense representation for vocabulary terms, also known as a word embedding, has recently attracted much attention in natural language processing and information retrieval tasks. The embedding vectors are typically learned based on term proximity in a large corpus. This means that the objective in well-known word embedding algorithms, e.g., word2vec, is to accurately predict adjacent word(s) for a given word or context. However, this objective is not necessarily equivalent to the goal of many information retrieval (IR) tasks. The primary objective in various IR tasks is to capture relevance instead of term proximity, syntactic, or even semantic similarity. This is the motivation for developing unsupervised relevance-based word embedding models that learn word representations based on query-document relevance information. In this paper, we propose two learning models with different objective functions; one learns a relevance distribution over the vocabulary set for each query, and the other classifies each term as belonging to the relevant or non-relevant class for each query. To train our models, we used over six million unique queries and the top ranked documents retrieved in response to each query, which are assumed to be relevant to the query. We extrinsically evaluate our learned word representation models using two IR tasks: query expansion and query classification. Both query expansion experiments on four TREC collections and query classification experiments on the KDD Cup 2005 dataset suggest that the relevance-based word embedding models significantly outperform state-of-the-art proximity-based embedding models, such as word2vec and GloVe."]], "0_corpus": "arxiv", "1_conv_id": "2f7dc574c8b94ae7a397af80c4cf05f5", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Learning word representations to improve IR tasks like query expansion and classification", "1_output": [["Learning word representations to improve IR tasks like query expansion and classification", "Title: Relevance-based Word Embedding\n\nAbstract: Learning a high-dimensional dense representation for vocabulary terms, also known as a word embedding, has recently attracted much attention in natural language processing and information retrieval tasks. The embedding vectors are typically learned based on term proximity in a large corpus. This means that the objective in well-known word embedding algorithms, e.g., word2vec, is to accurately predict adjacent word(s) for a given word or context. However, this objective is not necessarily equivalent to the goal of many information retrieval (IR) tasks. The primary objective in various IR tasks is to capture relevance instead of term proximity, syntactic, or even semantic similarity. This is the motivation for developing unsupervised relevance-based word embedding models that learn word representations based on query-document relevance information. In this paper, we propose two learning models with different objective functions; one learns a relevance distribution over the vocabulary set for each query, and the other classifies each term as belonging to the relevant or non-relevant class for each query. To train our models, we used over six million unique queries and the top ranked documents retrieved in response to each query, which are assumed to be relevant to the query. We extrinsically evaluate our learned word representation models using two IR tasks: query expansion and query classification. Both query expansion experiments on four TREC collections and query classification experiments on the KDD Cup 2005 dataset suggest that the relevance-based word embedding models significantly outperform state-of-the-art proximity-based embedding models, such as word2vec and GloVe."]], "1_corpus": "arxiv"}
+{"tstamp": 1722310388.2572, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "8c01a7990c7d433c8f9c9a38ccfd45b3", "0_model_name": "voyage-multilingual-2", "0_prompt": "who starred in the film far from the madding crowd", "0_output": [["who starred in the film far from the madding crowd", "Far from the Madding Crowd (2015 film)\n\nPlot The film is set in 1870 in Britain. While working on her aunt's farm in Dorset, Bathsheba Everdene (Carey Mulligan) meets neighbouring farmer, Gabriel Oak (Matthias Schoenaerts). He eventually proposes, but the headstrong Bathsheba declines, saying she is too independent. One night, Gabriel's new sheepdog herds his entire flock off a steep cliff. Penniless, Gabriel leaves to search for work. In contrast, Bathsheba inherits an uncle's farm and leaves to run it.\nWhile at a fair trying to find employment, Gabriel sees recruiting officers. A girl there, Fanny Robbin, points out one soldier, Sergeant Frank Troy, her sweetheart. She suggests Gabriel seek employment at a farm in Weatherbury. Upon Gabriel's arrival, several buildings are on fire. He saves the barn from destruction. At dawn the next day, he meets the farm's new mistress, Bathsheba, who hires him as a shepherd.\nWhile in town trading her seed, Bathsheba sees her neighbour William Boldwood (Michael Sheen), a prosperous and mature bachelor. Bathsheba sends Boldwood a Valentine as a joke. He is both offended and intrigued. In the meantime, Fanny goes to the wrong church for her wedding; Troy, believing he's been jilted, is devastated. Boldwood comes to speak to Bathsheba, sees her handling sheep, and is charmed. He invites her to his home, where he proposes marriage. Bathsheba delays giving him a final answer, and Gabriel admonishes her for toying with Boldwood's affections. Stung by the criticism, she fires him. The next day, a crisis arises with the sheep that only Gabriel can manage. She successfully persuades him to return."]], "0_corpus": "wikipedia", "1_conv_id": "2dafecf528e44c34a578dd2bad99f101", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "who starred in the film far from the madding crowd", "1_output": [["who starred in the film far from the madding crowd", "Far from the Madding Crowd (2015 film)\n\nFar from the Madding Crowd is a 2015 British romantic drama film directed by Thomas Vinterberg and starring Carey Mulligan, Matthias Schoenaerts, Tom Sturridge, Michael Sheen, and Juno Temple. An adaptation by David Nicholls of the 1874 novel Far from the Madding Crowd by Thomas Hardy, it is the fourth film adaptation of the novel."]], "1_corpus": "wikipedia"}
data/retrieval_individual-b9229914-47bc-4da8-a21b-89329fff8207.jsonl
CHANGED
@@ -55,3 +55,7 @@
{"tstamp": 1722310277.0391, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722310276.8952, "finish": 1722310277.0391, "ip": "", "conv_id": "87810785bece4d6dab306293a64980f2", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "what are chapters in the bible called?", "output": [["what are chapters in the bible called?", "English Stackexchange\n\nQ: Why are the divisions of the Bible called \"verses\"? In common parlance, a verse is a writing arranged with a metrical rhythm, typically having a rhyme. Most of the books of Bible contain prose which do not follow metrical rhythm or rhyme. But their sentences are called verses. My question therefore is: Why are the divisions of the Bible called verses?\n\nA: A number of questions on Stack Exchange - English Language and Usage have commented on the words 'stanza' and 'verse' in relation to poetry and to scripture.\n'Verses' are also used in Shakespeare.\nMerriam Webster defines a 'verse' in three ways : metrical verse, a stanza or a biblical verse.\nIn any work which requires to be referenced and quoted as part of its function, stanzas or verses are numbered accordingly so that they can be cited.\n"]], "corpus": "stackexchange"}
{"tstamp": 1722310317.0996, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722310316.7231, "finish": 1722310317.0996, "ip": "", "conv_id": "895db1e023ff4806920e918196f681a3", "model_name": "GritLM/GritLM-7B", "prompt": "who are the parents of the three fates", "output": [["who are the parents of the three fates", "Clotho\n\nAs one of the three fates her influence in Greek mythology was significant. Along with her sisters and Hermes, Clotho was given credit for creating the alphabet for their people. Even though Clotho and her sisters were worshiped as goddesses, their representation of fate is more central to their role in mythology. Thread represented human life and her decisions represented the fate of all people in society.\nOrigin\nAlthough there does not seem to be an epic tale in Classical Greek mythology in which the Fates are the main focus, they have played critical roles in the lives of deities and mortals. It is likely that the versions of myths about their primal origins were lost in the transitions from prehistoric Greek mythology to that of the Classical writings.\nAccording to Hesiod's Theogony, Clotho and her sisters (Atropos and Lachesis) were the daughters of Nyx (Night), without the assistance of a father. Later in the same work (ll. 901-906) they are said to have been born of Zeus and Themis. Clotho is mentioned in the tenth book of the Republic of Plato as the daughter of Necessity, as well.\nIn Roman mythology, it was believed that Clotho was the daughter of Uranus and Gaia.\nThe Ivory Shoulder\nAs one of the Three Fates, Clotho participated in creating the alphabet with Hermes, forced the goddess Aphrodite into making love with other gods, weakened the monster Typhon with poison fruit, persuaded Zeus to kill Asclepius with a bolt of lightning, and aided the deities in their war with the Giants by killing Agrius and Thoas with bronze clubs."]], "corpus": "wikipedia"}
{"tstamp": 1722310317.0996, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722310316.7231, "finish": 1722310317.0996, "ip": "", "conv_id": "190f215de1614b968c5d2a87d0375480", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "who are the parents of the three fates", "output": [["who are the parents of the three fates", "Fates\n\nThe Fates are a common motif in European polytheism, most frequently represented as a trio of goddesses. The Fates shape the destiny of each human, often expressed in textile metaphors such as spinning fibers into yarn, or weaving threads on a loom. The trio are generally conceived of as sisters and are often given the names Clotho, Lachesis, and Atropos, which are the names of the Moirai, the version of the Fates who appear in Greek mythology. These divine figures are often artistically depicted as beautiful maidens with consideration to their serious responsibility: the life of mortals. Poets, on the other hand, typically express the Fates as ugly and unwavering, representing the gravity of their role within the mythological and human worlds."]], "corpus": "wikipedia"}
+{"tstamp": 1722310335.4722, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722310335.2095, "finish": 1722310335.4722, "ip": "", "conv_id": "256e5bb6399a4ba7b599c9ba9a402e95", "model_name": "embed-english-v3.0", "prompt": "Learning word representations to improve IR tasks like query expansion and classification", "output": [["Learning word representations to improve IR tasks like query expansion and classification", "Title: Relevance-based Word Embedding\n\nAbstract: Learning a high-dimensional dense representation for vocabulary terms, also known as a word embedding, has recently attracted much attention in natural language processing and information retrieval tasks. The embedding vectors are typically learned based on term proximity in a large corpus. This means that the objective in well-known word embedding algorithms, e.g., word2vec, is to accurately predict adjacent word(s) for a given word or context. However, this objective is not necessarily equivalent to the goal of many information retrieval (IR) tasks. The primary objective in various IR tasks is to capture relevance instead of term proximity, syntactic, or even semantic similarity. This is the motivation for developing unsupervised relevance-based word embedding models that learn word representations based on query-document relevance information. In this paper, we propose two learning models with different objective functions; one learns a relevance distribution over the vocabulary set for each query, and the other classifies each term as belonging to the relevant or non-relevant class for each query. To train our models, we used over six million unique queries and the top ranked documents retrieved in response to each query, which are assumed to be relevant to the query. We extrinsically evaluate our learned word representation models using two IR tasks: query expansion and query classification. Both query expansion experiments on four TREC collections and query classification experiments on the KDD Cup 2005 dataset suggest that the relevance-based word embedding models significantly outperform state-of-the-art proximity-based embedding models, such as word2vec and GloVe."]], "corpus": "arxiv"}
+{"tstamp": 1722310335.4722, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722310335.2095, "finish": 1722310335.4722, "ip": "", "conv_id": "2f7dc574c8b94ae7a397af80c4cf05f5", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Learning word representations to improve IR tasks like query expansion and classification", "output": [["Learning word representations to improve IR tasks like query expansion and classification", "Title: Relevance-based Word Embedding\n\nAbstract: Learning a high-dimensional dense representation for vocabulary terms, also known as a word embedding, has recently attracted much attention in natural language processing and information retrieval tasks. The embedding vectors are typically learned based on term proximity in a large corpus. This means that the objective in well-known word embedding algorithms, e.g., word2vec, is to accurately predict adjacent word(s) for a given word or context. However, this objective is not necessarily equivalent to the goal of many information retrieval (IR) tasks. The primary objective in various IR tasks is to capture relevance instead of term proximity, syntactic, or even semantic similarity. This is the motivation for developing unsupervised relevance-based word embedding models that learn word representations based on query-document relevance information. In this paper, we propose two learning models with different objective functions; one learns a relevance distribution over the vocabulary set for each query, and the other classifies each term as belonging to the relevant or non-relevant class for each query. To train our models, we used over six million unique queries and the top ranked documents retrieved in response to each query, which are assumed to be relevant to the query. We extrinsically evaluate our learned word representation models using two IR tasks: query expansion and query classification. Both query expansion experiments on four TREC collections and query classification experiments on the KDD Cup 2005 dataset suggest that the relevance-based word embedding models significantly outperform state-of-the-art proximity-based embedding models, such as word2vec and GloVe."]], "corpus": "arxiv"}
{"tstamp": 1722310364.3913, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722310363.9154, "finish": 1722310364.3913, "ip": "", "conv_id": "8c01a7990c7d433c8f9c9a38ccfd45b3", "model_name": "voyage-multilingual-2", "prompt": "who starred in the film far from the madding crowd", "output": [["who starred in the film far from the madding crowd", "Far from the Madding Crowd (2015 film)\n\nPlot The film is set in 1870 in Britain. While working on her aunt's farm in Dorset, Bathsheba Everdene (Carey Mulligan) meets neighbouring farmer, Gabriel Oak (Matthias Schoenaerts). He eventually proposes, but the headstrong Bathsheba declines, saying she is too independent. One night, Gabriel's new sheepdog herds his entire flock off a steep cliff. Penniless, Gabriel leaves to search for work. In contrast, Bathsheba inherits an uncle's farm and leaves to run it.\nWhile at a fair trying to find employment, Gabriel sees recruiting officers. A girl there, Fanny Robbin, points out one soldier, Sergeant Frank Troy, her sweetheart. She suggests Gabriel seek employment at a farm in Weatherbury. Upon Gabriel's arrival, several buildings are on fire. He saves the barn from destruction. At dawn the next day, he meets the farm's new mistress, Bathsheba, who hires him as a shepherd.\nWhile in town trading her seed, Bathsheba sees her neighbour William Boldwood (Michael Sheen), a prosperous and mature bachelor. Bathsheba sends Boldwood a Valentine as a joke. He is both offended and intrigued. In the meantime, Fanny goes to the wrong church for her wedding; Troy, believing he's been jilted, is devastated. Boldwood comes to speak to Bathsheba, sees her handling sheep, and is charmed. He invites her to his home, where he proposes marriage. Bathsheba delays giving him a final answer, and Gabriel admonishes her for toying with Boldwood's affections. Stung by the criticism, she fires him. The next day, a crisis arises with the sheep that only Gabriel can manage. She successfully persuades him to return."]], "corpus": "wikipedia"}
{"tstamp": 1722310364.3913, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722310363.9154, "finish": 1722310364.3913, "ip": "", "conv_id": "2dafecf528e44c34a578dd2bad99f101", "model_name": "GritLM/GritLM-7B", "prompt": "who starred in the film far from the madding crowd", "output": [["who starred in the film far from the madding crowd", "Far from the Madding Crowd (2015 film)\n\nFar from the Madding Crowd is a 2015 British romantic drama film directed by Thomas Vinterberg and starring Carey Mulligan, Matthias Schoenaerts, Tom Sturridge, Michael Sheen, and Juno Temple. An adaptation by David Nicholls of the 1874 novel Far from the Madding Crowd by Thomas Hardy, it is the fourth film adaptation of the novel."]], "corpus": "wikipedia"}