Datasets: mteb /
Modalities: Tabular, Text · Formats: json · Size: < 1K · Libraries: Datasets, Dask
File size: 5,886 Bytes
{"tstamp": 1722285451.3737, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722285393.6024, "finish": 1722285451.3737, "ip": "", "conv_id": "7e44552270b344dea5521daf9ce93acd", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "When is AI2 founded? ", "output": [["When is AI2 founded? ", "XAI (company)\n\nX.AI Corp., doing business as xAI, is an American startup company working in the area of artificial intelligence (AI). Founded by Elon Musk in March 2023, its stated goal is \"to understand the true nature of the universe\".\nHistory\nIn 2015, Musk was the founding co-chairman at the non-profit AI research company OpenAI. He reportedly committed to invest as much as $1 billion in OpenAI, but stepped down from the board in 2018 after an unsuccessful bid to take over its management, the result of a disagreement over AI safety.\nxAI was founded by Musk in Nevada on March 9, 2023, and has since been headquartered in the San Francisco Bay Area in California.\nIgor Babuschkin, formerly associated with Google's DeepMind unit, was recruited by Musk to be Chief Engineer."]], "corpus": "wikipedia"}
{"tstamp": 1722285451.3737, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722285393.6024, "finish": 1722285451.3737, "ip": "", "conv_id": "a0b407bea4b14a948f8927488624e444", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "When is AI2 founded? ", "output": [["When is AI2 founded? ", "Aiways\n\nBlind Spot Detection \u2013 alerts the driver with a sound and a light if a vehicle or cyclist is passing in the vehicles blind spot\nForward Collision Warning \u2013 detects vehicles and pedestrians in front of the car and sounds an alarm if approaching at more than 10\u00a0km/h\nRear Cross Traffic Alert \u2013 warns the driver when a vehicle is about to pass behind the car\nProduction Vehicles\nAiways U5\nAiways' first vehicle is the 4.68m long electric CUV Aiways U5. It was presented at the Geneva Motor Show in March 2019 and was launched in China in December 2019. Exports from China began in 2020 and the U5 is now sold in Belgium, Denmark, France, Germany, Israel, Italy, La Reunion, Suisse (Garage Emplatures) and the Netherlands\nIn 2021 the U5 was given a \u2018facelift\u2019 which included new convenience technology features, new paint colors and a new 63kWh battery.\nAiways U6\nThe aerodynamic design of the U6 is heavily influenced by the U6 ion concept, its design sitting somewhere between coupe and more traditional crossover. The U6 will go on sale in China by the end of 2021 and in Europe at the beginning of 2022.\nConcept Vehicles"]], "corpus": "wikipedia"}
{"tstamp": 1722285610.6024, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722285509.9526, "finish": 1722285610.6024, "ip": "", "conv_id": "c220e2e9a633405693f0b6370f1a3737", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Paper analyzing modeling decisions for large language models with a limited GPU budget", "output": [["Paper analyzing modeling decisions for large language models with a limited GPU budget", "Title: LLMem: Estimating GPU Memory Usage for Fine-Tuning Pre-Trained LLMs\n\nAbstract: Fine-tuning pre-trained large language models (LLMs) with limited hardware presents challenges due to GPU memory constraints. Various distributed fine-tuning methods have been proposed to alleviate memory constraints on GPU. However, determining the most effective method for achieving rapid fine-tuning while preventing GPU out-of-memory issues in a given environment remains unclear. To address this challenge, we introduce LLMem, a solution that estimates the GPU memory consumption when applying distributed fine-tuning methods across multiple GPUs and identifies the optimal method. We conduct GPU memory usage estimation prior to fine-tuning, leveraging the fundamental structure of transformer-based decoder models and the memory usage distribution of each method. Experimental results show that LLMem accurately estimates peak GPU memory usage on a single GPU, with error rates of up to 1.6%. Additionally, it shows an average error rate of 3.0% when applying distributed fine-tuning methods to LLMs with more than a billion parameters on multi-GPU setups."]], "corpus": "arxiv"}
{"tstamp": 1722285610.6024, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722285509.9526, "finish": 1722285610.6024, "ip": "", "conv_id": "d013187c76bf44b28f057529b292e5d1", "model_name": "voyage-multilingual-2", "prompt": "Paper analyzing modeling decisions for large language models with a limited GPU budget", "output": [["Paper analyzing modeling decisions for large language models with a limited GPU budget", "Title: The Case for Co-Designing Model Architectures with Hardware\n\nAbstract: While GPUs are responsible for training the vast majority of state-of-the-art deep learning models, the implications of their architecture are often overlooked when designing new deep learning (DL) models. As a consequence, modifying a DL model to be more amenable to the target hardware can significantly improve the runtime performance of DL training and inference. In this paper, we provide a set of guidelines for users to maximize the runtime performance of their transformer models. These guidelines have been created by carefully considering the impact of various model hyperparameters controlling model shape on the efficiency of the underlying computation kernels executed on the GPU. We find the throughput of models with efficient model shapes is up to 39\\% higher while preserving accuracy compared to models with a similar number of parameters but with unoptimized shapes."]], "corpus": "arxiv"}