Dataset: mteb
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Commit 1883c49 by Muennighoff (Parent(s): 3b390af)

Scheduled Commit
data/retrieval_battle-23cee96f-6768-4f6e-9a62-131dedf90618.jsonl CHANGED
@@ -4,3 +4,4 @@
  {"tstamp": 1732158854.6465, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "9d748c451543481b89a753c4d343b0cb", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "我的狗狗真的太恐怖!", "0_output": [["我的狗狗真的太恐怖!", "Takashi Shimizu\n\nJu-on: The Curse (呪怨) (2000) (Video)\nJu-on: The Curse 2 (呪怨2) (2000) (Video)\n首吊り気球 伊藤潤二恐怖Collection (Video)\n心霊ビデオV 本当にあった怖い話 恐怖心霊写真館 (Shin rei bideo V: Honto ni atta kowai hanashi - kyoufushin rei shashin-kan) (2000) (Video)\n心霊ビデオVI 本当にあった怖い話 恐怖タレント体験談 (Shin rei bideo VI: Honto ni atta kowai hanashi - kyoufu tarento taikendan) (2000) (Video)\nTomie: Re-birth (富江 rebirth) (2001)\nJu-on: The Grudge (呪怨) (2002)\nJu-on: The Grudge 2 (呪怨2) (2003)\nThe Grudge (2004)\nMarebito (2004)\nDark Tales of Japan, episode Blonde Kwaidan, (2004) (TV)\nThe Great Horror Family, TV Series (2004–2005)\nReincarnation, (輪廻 Rinne) (2005)\nThe Grudge 2 (2006)\nGhost vs. Alien 03 (2007)\nTen Nights of Dream (2006)\nThe Shock Labyrinth 3D (戦慄迷宮3D) (2009)\nTormented (2011)\nScared of the Dark (2013)\nFlight 7500 (2014)\nKiki's Delivery Service (2014), live-action adaptation of the novel of the same name\nNightCry (2015), live-action trailer for the video game NightCry, formerly known as Project Scissors\nResident Evil: Vendetta (Biohazard: Vendetta) (2017, executive producer)\nInnocent Curse (こどもつかい) (2017)\nHowling Village (犬鳴村) (2019)\nSuicide Forest Village (樹海村) (2021)\nHomunculus (2021)\nOx-Head Village (牛首村) (2022)\nImmersion (忌怪島) (2023)\nSana (ミンナのウタ) (2023)\nAnoko wa Daare? (2024)\nAccolades\n2022 — Star Asia Lifetime Achievement Award at 21st New York Asian Film Festival."]], "0_corpus": "wikipedia", "1_conv_id": "0a99924f7b7042c39c35e40546a3db10", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "我的狗狗真的太恐怖!", "1_output": [["我的狗狗真的太恐怖!", "Nǃxau ǂToma\n\nNǃxau ǂToma (short: Nǃxau, alternative spelling Gcao Tekene Çoma; 16 December 1944 – 5 July 2003) was a Namibian bush farmer and actor who starred in the 1980 film The Gods Must Be Crazy and its sequels, in which he played the Kalahari Bushman Xixo. The Namibian called him \"Namibia's most famous actor\".\nBiography\nNǃxau was a member of the ǃKung people, one of several peoples known as Bushmen. N!xau was a Namibian who starred in the 1980 movie The Gods Must Be Crazy and its sequels, in which he played the Kalahari Bushman Xixo."]], "1_corpus": "wikipedia"}
  {"tstamp": 1732170549.9185, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "cf7c6193c5d441069dfa74527920a48d", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "0_corpus": "stackexchange", "1_conv_id": "8c97e1b4eeec4115b1baa781642e1911", "1_model_name": "voyage-multilingual-2", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "1_corpus": "stackexchange"}
  {"tstamp": 1732170868.4077, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "9a7b66bbabe24510b8e7d79850439a17", "0_model_name": "BM25", "0_prompt": "why dp is not visible in whatsapp?", "0_output": [["why dp is not visible in whatsapp?", "Stackoverflow Stackexchange\n\nQ: Compose - AnimatedVisibility and offset When adding an enter and exit transition to AnimatedVisibility, if the composable animated has an offset modifier, the transition is not visible:\nAnimatedVisibility(\n visible = visible,\n enter = fadeIn(animationSpec = tween(1000)),\n exit = fadeOut(animationSpec = tween(1000))\n ) {\n Text(\n text = \"Hello, world!\", modifier = Modifier\n // This will make the transition not working\n //.offset(100.dp, 0.dp)\n )\n }\n Button(onClick = { visible = !visible }) {\n Text(\"Click Me\")\n }\n\n\nThis is the animation without the offset:\n\nand this is the animation with the offset:\n\nIs there a way to make it work?\n\nA: Don't out the offset modifier onto the Text. Put it onto AnimatedVisibility instead:\nAnimatedVisibility(\n visible = visible,\n enter = fadeIn(animationSpec = tween(1000)),\n exit = fadeOut(animationSpec = tween(1000)),\n modifier = Modifier.offset(100.dp, 0.dp)\n ) {\n Text(text = \"Hello, world!\")\n }\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "8455d9723ef843ab98b27ef65a1d235c", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "why dp is not visible in whatsapp?", "1_output": [["why dp is not visible in whatsapp?", "Stackoverflow Stackexchange\n\nQ: Picture in picture mode in Android API level lower than 24 If picture in picture mode is not supported on Android API version lower than 24, then how does whatsapp use it (in API version lower than 24)? Is there any way to use Picture in picture mode playing in API lower than 24?\n"]], "1_corpus": "stackexchange"}
 
 
+ {"tstamp": 1732195149.0384, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "e35b2012e72048918117849572ca6665", "0_model_name": "voyage-multilingual-2", "0_prompt": "I am investigating the vulnerabilities of vision-language models (VLM) agents to adversarial pop-ups and their impact on task performance. How do adversarial pop-ups influence the decision-making processes of VLM agents across various environments? What specific mechanisms contribute to the susceptibility of these agents to such attacks, and how do these vulnerabilities differ among various VLMs? What insights from recent literature can enhance the understanding of improving the robustness of VLM agents against visual input-based attacks?", "0_output": [["I am investigating the vulnerabilities of vision-language models (VLM) agents to adversarial pop-ups and their impact on task performance. How do adversarial pop-ups influence the decision-making processes of VLM agents across various environments? What specific mechanisms contribute to the susceptibility of these agents to such attacks, and how do these vulnerabilities differ among various VLMs? What insights from recent literature can enhance the understanding of improving the robustness of VLM agents against visual input-based attacks?", "Title: Adversarial Attacks on Multimodal Agents\n\nAbstract: Vision-enabled language models (VLMs) are now used to build autonomous multimodal agents capable of taking actions in real environments. In this paper, we show that multimodal agents raise new safety risks, even though attacking agents is more challenging than prior attacks due to limited access to and knowledge about the environment. Our attacks use adversarial text strings to guide gradient-based perturbation over one trigger image in the environment: (1) our captioner attack attacks white-box captioners if they are used to process images into captions as additional inputs to the VLM; (2) our CLIP attack attacks a set of CLIP models jointly, which can transfer to proprietary VLMs. To evaluate the attacks, we curated VisualWebArena-Adv, a set of adversarial tasks based on VisualWebArena, an environment for web-based multimodal agent tasks. Within an L-infinity norm of $16/256$ on a single image, the captioner attack can make a captioner-augmented GPT-4V agent execute the adversarial goals with a 75% success rate. When we remove the captioner or use GPT-4V to generate its own captions, the CLIP attack can achieve success rates of 21% and 43%, respectively. Experiments on agents based on other VLMs, such as Gemini-1.5, Claude-3, and GPT-4o, show interesting differences in their robustness. Further analysis reveals several key factors contributing to the attack's success, and we also discuss the implications for defenses as well. Project page: https://chenwu.io/attack-agent Code and data: https://github.com/ChenWu98/agent-attack"]], "0_corpus": "arxiv", "1_conv_id": "d6d2eae1b1534f8fbdae02d2571162de", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "I am investigating the vulnerabilities of vision-language models (VLM) agents to adversarial pop-ups and their impact on task performance. How do adversarial pop-ups influence the decision-making processes of VLM agents across various environments? What specific mechanisms contribute to the susceptibility of these agents to such attacks, and how do these vulnerabilities differ among various VLMs? What insights from recent literature can enhance the understanding of improving the robustness of VLM agents against visual input-based attacks?", "1_output": [["I am investigating the vulnerabilities of vision-language models (VLM) agents to adversarial pop-ups and their impact on task performance. How do adversarial pop-ups influence the decision-making processes of VLM agents across various environments? What specific mechanisms contribute to the susceptibility of these agents to such attacks, and how do these vulnerabilities differ among various VLMs? What insights from recent literature can enhance the understanding of improving the robustness of VLM agents against visual input-based attacks?", "Title: Travelling Wave Solutions in Nonlinear Diffusive and Dispersive Media\n\nAbstract: We investigate the presence of soliton solutions in some classes of nonlinear partial differential equations, namely generalized Korteweg-de Vries-Burgers, Korteveg-de Vries-Huxley, and Korteveg-de Vries-Burgers-Huxley equations, which combine effects of diffusion, dispersion, and nonlinearity. We emphasize the chiral behavior of the travelling solutions, whose velocities are determined by the parameters that define the equation. For some appropriate choices, we show that these equations can be mapped onto equations of motion of relativistic 1+1 dimensional phi^{4} and phi^{6} field theories of real scalar fields. We also study systems of two coupled nonlinear equations of the types mentioned."]], "1_corpus": "arxiv"}
data/retrieval_individual-23cee96f-6768-4f6e-9a62-131dedf90618.jsonl CHANGED
@@ -26,3 +26,5 @@
  {"tstamp": 1732192889.3601, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1732192889.0588, "finish": 1732192889.3601, "ip": "", "conv_id": "91966d131dcf4e1bbcb1aa62475a3958", "model_name": "text-embedding-004", "prompt": "when searching a legal document what should be looked out for", "output": [["when searching a legal document what should be looked out for", "Legal research\n\nThe processes of legal research vary according to the country and the legal system involved. Legal research involves tasks such as:\nFinding primary sources of law, or primary authority, in a given jurisdiction. The main primary sources of law include constitutions, case law, statutes, and regulations.\nSearching secondary authority for background information about legal topics. Secondary authorities can come in many forms, such as law reviews, legal dictionaries, legal treatises, and legal encyclopedias such as American Jurisprudence and Corpus Juris Secundum.\nSearching non-legal sources for investigative or supporting factual information.\nLegal research is performed by anyone with a need for legal information, including lawyers, law librarians, and paralegals. Sources of legal information can include printed books, free legal research websites (like Cornell Law School's Legal Information Institute, Findlaw.com, Martindale Hubbell, or CanLII), and websites or software providing paid access to legal research databases such as Wolters Kluwer, LexisNexis, Westlaw, Lex Intell, VLex, and Bloomberg Law. Law libraries around the world provide research services to help their patrons in law schools, law firms, and other research environments find the legal information they need. Many law libraries and institutions provide free access to legal information on the web, either individually or via collective action, such as with the Free Access to Law Movement.\nDatabases and software tools"]], "corpus": "wikipedia"}
  {"tstamp": 1732193119.8754, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1732193119.4342, "finish": 1732193119.8754, "ip": "", "conv_id": "7f7c8d116e1b4fd087ac26f6d9fe25fb", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}
  {"tstamp": 1732193119.8754, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1732193119.4342, "finish": 1732193119.8754, "ip": "", "conv_id": "0cde77fe053a49a28eb8ad047c631399", "model_name": "embed-english-v3.0", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}
 
 
 
+ {"tstamp": 1732195123.0391, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1732195122.5831, "finish": 1732195123.0391, "ip": "", "conv_id": "e35b2012e72048918117849572ca6665", "model_name": "voyage-multilingual-2", "prompt": "I am investigating the vulnerabilities of vision-language models (VLM) agents to adversarial pop-ups and their impact on task performance. How do adversarial pop-ups influence the decision-making processes of VLM agents across various environments? What specific mechanisms contribute to the susceptibility of these agents to such attacks, and how do these vulnerabilities differ among various VLMs? What insights from recent literature can enhance the understanding of improving the robustness of VLM agents against visual input-based attacks?", "output": [["I am investigating the vulnerabilities of vision-language models (VLM) agents to adversarial pop-ups and their impact on task performance. How do adversarial pop-ups influence the decision-making processes of VLM agents across various environments? What specific mechanisms contribute to the susceptibility of these agents to such attacks, and how do these vulnerabilities differ among various VLMs? What insights from recent literature can enhance the understanding of improving the robustness of VLM agents against visual input-based attacks?", "Title: Adversarial Attacks on Multimodal Agents\n\nAbstract: Vision-enabled language models (VLMs) are now used to build autonomous multimodal agents capable of taking actions in real environments. In this paper, we show that multimodal agents raise new safety risks, even though attacking agents is more challenging than prior attacks due to limited access to and knowledge about the environment. Our attacks use adversarial text strings to guide gradient-based perturbation over one trigger image in the environment: (1) our captioner attack attacks white-box captioners if they are used to process images into captions as additional inputs to the VLM; (2) our CLIP attack attacks a set of CLIP models jointly, which can transfer to proprietary VLMs. To evaluate the attacks, we curated VisualWebArena-Adv, a set of adversarial tasks based on VisualWebArena, an environment for web-based multimodal agent tasks. Within an L-infinity norm of $16/256$ on a single image, the captioner attack can make a captioner-augmented GPT-4V agent execute the adversarial goals with a 75% success rate. When we remove the captioner or use GPT-4V to generate its own captions, the CLIP attack can achieve success rates of 21% and 43%, respectively. Experiments on agents based on other VLMs, such as Gemini-1.5, Claude-3, and GPT-4o, show interesting differences in their robustness. Further analysis reveals several key factors contributing to the attack's success, and we also discuss the implications for defenses as well. Project page: https://chenwu.io/attack-agent Code and data: https://github.com/ChenWu98/agent-attack"]], "corpus": "arxiv"}
+ {"tstamp": 1732195123.0391, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1732195122.5831, "finish": 1732195123.0391, "ip": "", "conv_id": "d6d2eae1b1534f8fbdae02d2571162de", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "I am investigating the vulnerabilities of vision-language models (VLM) agents to adversarial pop-ups and their impact on task performance. How do adversarial pop-ups influence the decision-making processes of VLM agents across various environments? What specific mechanisms contribute to the susceptibility of these agents to such attacks, and how do these vulnerabilities differ among various VLMs? What insights from recent literature can enhance the understanding of improving the robustness of VLM agents against visual input-based attacks?", "output": [["I am investigating the vulnerabilities of vision-language models (VLM) agents to adversarial pop-ups and their impact on task performance. How do adversarial pop-ups influence the decision-making processes of VLM agents across various environments? What specific mechanisms contribute to the susceptibility of these agents to such attacks, and how do these vulnerabilities differ among various VLMs? What insights from recent literature can enhance the understanding of improving the robustness of VLM agents against visual input-based attacks?", "Title: Travelling Wave Solutions in Nonlinear Diffusive and Dispersive Media\n\nAbstract: We investigate the presence of soliton solutions in some classes of nonlinear partial differential equations, namely generalized Korteweg-de Vries-Burgers, Korteveg-de Vries-Huxley, and Korteveg-de Vries-Burgers-Huxley equations, which combine effects of diffusion, dispersion, and nonlinearity. We emphasize the chiral behavior of the travelling solutions, whose velocities are determined by the parameters that define the equation. For some appropriate choices, we show that these equations can be mapped onto equations of motion of relativistic 1+1 dimensional phi^{4} and phi^{6} field theories of real scalar fields. We also study systems of two coupled nonlinear equations of the types mentioned."]], "corpus": "arxiv"}