mteb/arena-results · data/retrieval_battle-02ddbc43-8a77-4d21-b3a5-485d76c2003b.jsonl
{"tstamp": 1722223022.6009, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "776f02bd82f74b80aff4960db5817568", "0_model_name": "BM25", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Stackoverflow Stackexchange\n\nQ: Controller Testing with Minitest and Devise failing In a Rails 3.28 application using Devise, I setup a controller that checks whether a user is logged in or not with if user_signed_in?. Once the tests hit this line, I always get a undefined method 'authenticate!' for nil:NilClass error. I wanted to test some code in a scenario that no user is logged in.\nI am thinking that I need to setup the tests according to the options specified in the Devise Wiki here, however, all the instructions pertain to rspec.\nIn particular, I think I need to have the following to work in minitest:\nRSpec.configure do |config|\n config.include Devise::TestHelpers, :type => :controller\n config.extend ControllerMacros, :type => :controller\nend\n\nHas anyone succeded in setting this up to have Devise work with minitest?\n\nA: Add the following to your minitest_helper.rb file:\nclass MiniTest::Rails::ActionController::TestCase\n include Devise::TestHelpers\nend\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "d88e38d4fcfa46b08b4d9c9a414d77c7", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Stackoverflow Stackexchange\n\nQ: android - getting nullpointerexception for no specific reason I am working on an android project and as far as I know, I don't make any specific mistakes in my code that would lead to an exception. \nI get these two error lines everytime I run my project, but the rest of my code runs without any issues.\nThe NullPointerException doesn't have a concrete occurance location, which confuses me.\nI have already searched for similar issues. A possible solution was adding the READ_EXTERNAL_STORAGE and WRITE_EXTERNAL_STORAGE as uses-permission in my Android Manifest file, but to no avail.\n2020-02-28 XX:XX:XX.333 14615-14651/com.example.package_name E/Perf: Fail to get file list com.example.package_name\n2020-02-28 XX:XX:XX.333 14615-14651/com.example.package_name E/Perf: getFolderSize() : Exception_1 = java.lang.NullPointerException: Attempt to get length of null array\n\n\nA: The error was thrown by my OnePlus, on the android emulator there was no error shown\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722223253.8945, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "42ce372770374b24ad9d88378f058800", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "4cce5456d4794da8bc4d8b54a94fa7a6", "1_model_name": "text-embedding-004", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}