datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---
atmatechai/commonvoice_male_3_nld | atmatechai | "2024-11-18T07:45:54Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T07:45:50Z" | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: text
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
- name: text_description
dtype: string
splits:
- name: train
num_bytes: 508005
num_examples: 756
download_size: 169712
dataset_size: 508005
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mitake/cluster5_news | Mitake | "2024-11-18T08:25:43Z" | 3 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T08:24:43Z" | ---
license: mit
---
|
aiswariya24/DSA_interview_datasets | aiswariya24 | "2024-11-18T09:03:21Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T09:03:13Z" | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
- name: Correctness
dtype: string
- name: FollowUp Question
struct:
- name: Part1
dtype: string
- name: Part2
dtype: string
splits:
- name: train
num_bytes: 1272747.038841808
num_examples: 2973
- name: test
num_bytes: 545829.9611581921
num_examples: 1275
download_size: 853468
dataset_size: 1818577.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
LeviWarren/vangogh | LeviWarren | "2024-11-18T10:02:59Z" | 3 | 0 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"language:nl",
"language:de",
"license:llama2",
"size_categories:1K<n<10K",
"region:us",
"art"
] | [
"question-answering",
"text-generation"
] | "2024-11-18T09:59:08Z" | ---
license: llama2
task_categories:
- question-answering
- text-generation
language:
- en
- nl
- de
tags:
- art
pretty_name: mylittledataset
size_categories:
- 1K<n<10K
--- |
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_390711e7-4c98-4125-b927-d13c67f9c9f4 | argilla-internal-testing | "2024-11-18T10:29:04Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T10:29:04Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_c9884a70-fbaf-4d64-8b9f-6dd15a93e00b | argilla-internal-testing | "2024-11-18T10:29:05Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T10:29:04Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_578c909c-53cb-45f8-805d-e3284bedfcf1 | argilla-internal-testing | "2024-11-18T10:29:12Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T10:29:11Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_38048450-e255-4735-b7cc-a6a163161a4b | argilla-internal-testing | "2024-11-18T10:29:16Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T10:29:14Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_9492b135-c68e-4767-92de-ea06cf8829cc | argilla-internal-testing | "2024-11-18T10:29:21Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T10:29:20Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vanshp1232/Rakesh5553 | vanshp1232 | "2024-11-18T10:30:10Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-11-18T10:30:06Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for Rakesh5553
This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Using this dataset with Argilla
To load with Argilla, install it with `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.Dataset.from_hub("vanshp1232/Rakesh5553")
```
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
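If you are not using the default client, you can connect to a specific Argilla instance explicitly before loading. A minimal sketch, assuming a self-hosted Argilla 2.x server; the URL and API key below are placeholders:
```python
import argilla as rg

# Placeholder credentials -- replace with your own server URL and API key
client = rg.Argilla(api_url="http://localhost:6900", api_key="argilla.apikey")

# Pull the settings and records from the Hub into this Argilla instance
ds = rg.Dataset.from_hub("vanshp1232/Rakesh5553", client=client)
```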
## Using this dataset with `datasets`
To load the records of this dataset with `datasets`, install it with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("vanshp1232/Rakesh5553")
```
This will only load the records of the dataset, but not the Argilla settings.
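Once loaded this way, the records behave like any other `datasets.Dataset`. A small sketch of inspecting them; the column names are assumed from the flattened record shown later in this card:
```python
from datasets import load_dataset

ds = load_dataset("vanshp1232/Rakesh5553", split="train")

print(ds.column_names)             # expected to include 'instruction', 'conv_id', 'turn'
print(ds[0]["instruction"][:200])  # preview the opening of the first conversation
```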
## Dataset Structure
This dataset repo contains:
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
### Fields
The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| instruction | User instruction | text | True | True |
### Questions
The **questions** are presented to the annotators and can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking; a programmatic sketch of these settings follows the table below.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| relevance_score | How Relevant is the conversation based upon expert. Is the conversation highly curated for you or not. Please don't judge accuracy. | rating | True | N/A | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| accuracy_score | How accurate is the conversation based upon persona | rating | True | if | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| clarity_score | How clear is the conversation based upon persona | rating | True | Is the LLM getting confused | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| actionable_score | How actionable is the conversation based upon persona | rating | True | Is the LLM response to actionable for example, it shows comparison card on the right question. | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| engagement_score | How engaging is the conversation based upon persona | rating | True | Are there a lot of question that are being shown if yes, high score else low score | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| completeness_score | is the conversation complete based upon persona | rating | True | is the conversation complete based upon persona, not leaving any key aspect out | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| feedback | feedback | text | True | What do you think can be improved in the given conversation. How good was the conversation as per you? | N/A |
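For reference, settings like those documented above could be declared programmatically. This is a minimal, abridged sketch assuming the Argilla 2.x SDK; it covers the single field and two of the seven questions, with names and titles taken from the tables in this card:
```python
import argilla as rg

# Abridged sketch of this dataset's settings (Argilla 2.x SDK assumed);
# only the one field and two of the seven questions are shown
settings = rg.Settings(
    fields=[
        rg.TextField(name="instruction", title="User instruction", use_markdown=True),
    ],
    questions=[
        rg.RatingQuestion(name="relevance_score", values=list(range(1, 11))),
        rg.TextQuestion(name="feedback", title="feedback"),
    ],
    guidelines="Review the user interactions with the chatbot.",
)
```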
### Metadata
The **metadata** is a dictionary that can be used to provide additional information about the dataset record.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| conv_id | Conversation ID | | - | True |
| turn | Conversation Turn | | 0 - 100 | True |
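The two metadata properties above could likewise be declared in code. A sketch with assumed types, since the table leaves the Type column blank (`conv_id` as free terms, `turn` as an integer bounded by the 0 - 100 range shown):
```python
import argilla as rg

# Property types are assumptions: the card does not state them explicitly
metadata = [
    rg.TermsMetadataProperty(name="conv_id", title="Conversation ID"),
    rg.IntegerMetadataProperty(name="turn", title="Conversation Turn", min=0, max=100),
]
```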
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"_server_id": "1568a4cc-1731-4efa-b35b-e1d492269b1f",
"fields": {
"instruction": "**user**: Hi\n**assistant**: Hello Rakesh! How can I assist you today? If you have any questions about software or need recommendations, feel free to ask.\n**user**: Best tools in graphic designing\n**assistant**: Hey Rakesh! Graphic design tools are essential for creating stunning visuals. Here\u0027s a quick comparison of some top tools:\n\n| **Tool** | **Features** | **Best For** |\n|----------------|----------------------------------------|-------------------------------|\n| **Adobe Photoshop** | Advanced editing, photo manipulation | Professional designers |\n| **Canva** | User-friendly, templates | Beginners, social media posts |\n| **Sketch** | UI/UX design, vector editing | Web and app designers |\n| **CorelDRAW** | Vector illustration, layout | Print and advertising |\n| **Figma** | Collaborative design, prototyping | Team projects, UI/UX |\n\n*Which one are you most interested in exploring further?*\n**user**: Which is tool is better for graphic designing \n**assistant**: Hey Rakesh! Choosing the best graphic design tool really depends on your specific needs and skill level. Here\u2019s a quick overview to help you decide:\n\n| **Tool** | **Strengths** | **Ideal For** |\n|----------------|-----------------------------------------|--------------------------------|\n| **Adobe Photoshop** | Comprehensive photo editing and manipulation | Professionals and photographers |\n| **Canva** | Easy to use with a vast library of templates | Beginners and quick projects |\n| **Sketch** | Excellent for UI/UX design and prototyping | Web and mobile app designers |\n| **CorelDRAW** | Strong vector graphics capabilities | Print design and illustrations |\n| **Figma** | Real-time collaboration and prototyping | Teams working on UI/UX projects |\n\n*What type of graphic design work are you looking to do? This will help narrow down the best tool for you!*\n**user**: give me more\n**assistant**: Sure, Rakesh! Here\u2019s a deeper dive into each of the graphic design tools I mentioned earlier, including their pricing and some unique features:\n\n| **Tool** | **Description** | **Pricing** | **Unique Features** |\n|----------------|-------------------------------------------|-----------------------------------|---------------------------------------------|\n| **Adobe Photoshop** | Industry-standard for photo editing and graphic design. | Starting at \u20b9638.38/month (incl. GST) | Advanced retouching, extensive plugins, and AI features for image enhancement. |\n| **Canva** | A versatile design tool with a drag-and-drop interface. | Free version available; Pro at $12.99/month | Thousands of templates, easy social media integration, and collaborative features. |\n| **Sketch** | Focused on UI/UX design with vector editing capabilities. | Starting at $99/year | Symbols for reusable design elements, plugins for extended functionality. |\n| **CorelDRAW** | Powerful vector graphics software for illustrations and layouts. | Starting at $249/year | Non-destructive editing, advanced typography tools, and color management. |\n| **Figma** | A web-based design tool for collaborative projects. | Free for individual use; Pro at $12/month | Real-time collaboration, design systems, and prototyping capabilities. |\n\n*Do any of these tools resonate with your needs? Let me know if you want to explore a specific one further or if you have any other questions!*\n**feedback**: radar Chart didnot create. \n**user**: give me more tools\n**assistant**: Sure, Rakesh! 
Here are some additional graphic design tools that you might find useful:\n\n| **Tool** | **Description** | **Pricing** | **Unique Features** |\n|----------------------|-------------------------------------------------|---------------------------------|---------------------------------------------|\n| **Affinity Photo** | A powerful photo editing software for professionals. | One-time purchase of $54.99 | Advanced retouching, HDR merging, and extensive editing tools. |\n| **Inkscape** | Open-source vector graphics editor. | Free | Supports SVG format, great for illustrations and logos. |\n| **Gravit Designer** | A versatile design tool for web and print. | Free version available; Pro at $49/year | Cloud-based, cross-platform, and vector editing capabilities. |\n| **Pencil Project** | Open-source GUI prototyping tool. | Free | Built-in shapes collection for UI design and mockups. |\n| **Adobe Illustrator**| Industry-standard for vector graphics design. | Starting at \u20b9638.38/month (incl. GST) | Advanced vector editing, typography, and integration with other Adobe products. |\n\n*Do any of these tools catch your eye? Let me know if you want more details on a specific one or if you have other questions!*\n**feedback**: Chart has not been created\n**user**: compare then in user ratings\n**assistant**: Here\u0027s a concise analysis of the user ratings for the top graphic design tools based on the number of users who rated them:\n\n1. **Canva** stands out with the highest number of ratings (11,852), showcasing its popularity and user engagement. It has a strong ease of use and features rating, making it ideal for beginners and those looking for quick design solutions.\n\n2. **Figma** follows closely with 733 ratings, indicating a solid user base. It excels in features and ease of use, making it a favorite among UI/UX designers for collaborative projects.\n\n3. **Affinity Photo** has a respectable rating from 87 users, highlighting its value for money and support. It\u0027s a great choice for professionals looking for advanced photo editing capabilities.\n\n4. **CorelDRAW Graphics Suite** has a good rating from 67 users, but lacks detailed metrics in several areas. It is known for its vector graphics capabilities, making it suitable for print design.\n\n5. **Pencil Project** has the least number of ratings (21), but still offers a decent ease of use and value for money. 
It is more niche, focusing on prototyping and wireframing.\n\nNow, let\u0027s visualize this data in a radar chart:\n\nnull{\"graph_type\": \"radar\", \"x_label\": \"Tools\", \"y_label\": \"Ratings\", \"json_payload\": [{\"category\": \"Ease of Use\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.67}, {\"tool_name\": \"Figma\", \"rating\": 4.46}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.44}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.33}]}, {\"category\": \"Feature Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.53}, {\"tool_name\": \"Figma\", \"rating\": 4.63}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.53}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.1}]}, {\"category\": \"Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.72}, {\"tool_name\": \"Figma\", \"rating\": 4.73}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.77}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 4.8}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.33}]}, {\"category\": \"Support Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.39}, {\"tool_name\": \"Figma\", \"rating\": 4.31}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.58}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 3.67}]}, {\"category\": \"Value for Money\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.62}, {\"tool_name\": \"Figma\", \"rating\": 4.62}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.83}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.64}]}, {\"category\": \"Likelihood to Recommend\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.58}, {\"tool_name\": \"Figma\", \"rating\": 4.54}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.58}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.17}]}]}\n\n### Business Implications:\n- **Canva** and **Figma** are excellent choices for users seeking user-friendly interfaces and strong collaborative features, making them suitable for teams and individuals alike.\n- **Affinity Photo** is ideal for those who require advanced photo editing tools, while **CorelDRAW** may appeal to users focused on vector graphics.\n- **Pencil Project** serves a niche market, appealing to developers and designers working on prototypes.\n\nIf you\u0027re interested in exploring these tools further, consider booking a demo to see how they can fit your specific design needs!\n**user**: create it again\n**assistant**: Absolutely! Let\u0027s dive into the analysis of the top graphic design tools based on user ratings, focusing on their strengths and weaknesses.\n\n### Data-Driven Insights:\nThe analysis reveals that **Canva** and **Figma** are leading the pack in user ratings, showcasing their popularity and user engagement. Canva is particularly noted for its ease of use and extensive features, making it ideal for beginners and quick design tasks. Figma excels in collaborative features, making it a favorite among UI/UX designers. \n\n**Affinity Photo** stands out for its value for money and advanced photo editing capabilities, while **CorelDRAW** is recognized for its vector graphics expertise, although it has a smaller user base. 
**Pencil Project**, while less popular, serves a niche market focused on prototyping and wireframing.\n\n### Radar Chart Visualization:\nHere\u2019s the radar chart visualizing the ratings for these tools:\n\nnull{\"graph_type\": \"radar\", \"x_label\": \"Tools\", \"y_label\": \"Ratings\", \"json_payload\": [{\"category\": \"Ease of Use\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.67}, {\"tool_name\": \"Figma\", \"rating\": 4.46}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.44}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.17}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.33}]}, {\"category\": \"Feature Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.53}, {\"tool_name\": \"Figma\", \"rating\": 4.63}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.53}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.45}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.1}]}, {\"category\": \"Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.72}, {\"tool_name\": \"Figma\", \"rating\": 4.73}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.77}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.52}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.33}]}, {\"category\": \"Support Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.39}, {\"tool_name\": \"Figma\", \"rating\": 4.31}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.58}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.12}, {\"tool_name\": \"Pencil Project\", \"rating\": 3.67}]}, {\"category\": \"Value for Money\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.62}, {\"tool_name\": \"Figma\", \"rating\": 4.62}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.83}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.23}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.64}]}, {\"category\": \"Likelihood to Recommend\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.58}, {\"tool_name\": \"Figma\", \"rating\": 4.54}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.58}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.21}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.17}]}]}\n\n### Business Implications:\n- **Canva** and **Figma** are excellent choices for users seeking user-friendly interfaces and strong collaborative features, making them suitable for teams and individuals alike.\n- **Affinity Photo** is ideal for those who require advanced photo editing tools, while **CorelDRAW** may appeal to users focused on vector graphics.\n- **Pencil Project** serves a niche market, appealing to developers and designers working on prototypes.\n\nIf you\u0027re interested in exploring these tools further, consider booking a demo to see how they can fit your specific design needs! Let me know if you have any other questions or need more information!\n**user**: Give me their pricing card\n**assistant**: Absolutely, Rakesh! 
Here\u2019s a comparison card showcasing the pricing and features of the graphic design tools we discussed:\n\nnull{\"graph_type\": \"comparison card\", \"x_label\": \"Tools\", \"y_label\": \"Pricing\", \"json_payload\": [{\"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/974e79f0-c83c-4b9f-a75b-e3d3f66500f0.jpeg\", \"tool_name\": \"Canva\", \"tags\": {\"price\": \"12.99\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"12.34\", \"subscription_frequency\": \"MONTHLY\", \"plan_type\": \"Pro\"}, \"features\": [{\"description\": \"User-friendly interface\", \"availability\": \"true\"}, {\"description\": \"Extensive template library\", \"availability\": \"true\"}, {\"description\": \"Collaboration features\", \"availability\": \"true\"}], \"more_info\": \"https://www.canva.com/en_in/\"}, {\"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/18df3d15-0497-4661-b2e1-bfeba3a21cd2.jpeg\", \"tool_name\": \"Figma\", \"tags\": {\"price\": \"12.00\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"11.40\", \"subscription_frequency\": \"MONTHLY\", \"plan_type\": \"Professional\"}, \"features\": [{\"description\": \"Real-time collaboration\", \"availability\": \"true\"}, {\"description\": \"Prototyping capabilities\", \"availability\": \"true\"}, {\"description\": \"Version control\", \"availability\": \"true\"}], \"more_info\": \"https://www.figma.com/\"}, {\"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/b4a11aaf-afdc-4d1c-8cab-fe6594b96a6a.png\", \"tool_name\": \"Affinity Photo\", \"tags\": {\"price\": \"54.99\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"52.24\", \"subscription_frequency\": \"YEARLY\", \"plan_type\": \"One Time License\"}, \"features\": [{\"description\": \"Advanced photo editing tools\", \"availability\": \"true\"}, {\"description\": \"Non-destructive editing\", \"availability\": \"true\"}, {\"description\": \"Supports various file formats\", \"availability\": \"true\"}], \"more_info\": \"https://affinity.serif.com/en-us/photo/\"}, {\"icon\": \"none\", \"tool_name\": \"CorelDRAW Graphics Suite\", \"tags\": {\"price\": \"249.00\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"236.55\", \"subscription_frequency\": \"YEARLY\", \"plan_type\": \"One Time Fee\"}, \"features\": [{\"description\": \"Vector illustration tools\", \"availability\": \"true\"}, {\"description\": \"Page layout capabilities\", \"availability\": \"true\"}, {\"description\": \"Photo editing integration\", \"availability\": \"true\"}], \"more_info\": \"https://www.coreldraw.com/en/\"}, {\"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/1738a478-5c20-42e2-a5cc-a37a166f2ea8.png\", \"tool_name\": \"Pencil Project\", \"tags\": {\"price\": \"0.00\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"0.00\", \"subscription_frequency\": \"FREE\", \"plan_type\": \"Free\"}, \"features\": [{\"description\": \"Open-source prototyping tool\", \"availability\": \"true\"}, {\"description\": \"Built-in shape collections\", \"availability\": \"true\"}, {\"description\": \"Document export options\", \"availability\": \"true\"}], \"more_info\": \"https://pencil.evolus.vn/\"}]}.\n\n### Business Implications:\n- **Canva** and **Figma** are excellent for users looking for collaborative features and ease of use, making them suitable for teams and quick projects.\n- **Affinity Photo** offers advanced editing capabilities at a one-time cost, ideal for professionals focused on photo 
editing.\n- **CorelDRAW Graphics Suite** provides robust vector graphics tools, suitable for print design and illustrations.\n- **Pencil Project** is a free option for those focusing on prototyping and wireframing.\n\nIf you\u0027re interested in any of these tools, consider booking a demo to see how they can fit your specific design needs! Let me know if you have any other questions or need more information!\n**user**: Give me their Ratings\n**assistant**: ### Data-Driven Insights:\nThe ratings for the graphic design tools indicate that **Affinity Photo** holds the highest rating among the tools analyzed, showcasing its strong value for money and user satisfaction. **Figma** follows closely, particularly excelling in features and ease of use, making it a favorite for collaborative projects. **Canva** also has a high rating, with a significant user base, indicating its popularity and effectiveness for beginners and quick design tasks. **CorelDRAW** has a decent rating but with a smaller user base, while **Pencil Project** has the lowest rating, reflecting its niche appeal.\n\n### Bubble Chart Visualization:\nnull{\"graph_type\": \"bubble\", \"x_label\": \"Tools\", \"y_label\": \"Ratings\", \"json_payload\": [{\"id\": \"Canva\", \"data\": [{\"y\": 4.72, \"x\": \"Canva\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/974e79f0-c83c-4b9f-a75b-e3d3f66500f0.jpeg\", \"size\": 11852}]}, {\"id\": \"Figma\", \"data\": [{\"y\": 4.73, \"x\": \"Figma\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/18df3d15-0497-4661-b2e1-bfeba3a21cd2.jpeg\", \"size\": 733}]}, {\"id\": \"Affinity Photo\", \"data\": [{\"y\": 4.77, \"x\": \"Affinity Photo\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/b4a11aaf-afdc-4d1c-8cab-fe6594b96a6a.png\", \"size\": 87}]}, {\"id\": \"CorelDRAW\", \"data\": [{\"y\": 4.52, \"x\": \"CorelDRAW\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/c777fa4c-f314-481f-a60a-fa59bfb43c42.jpeg\", \"size\": 1494}]}, {\"id\": \"Pencil Project\", \"data\": [{\"y\": 4.33, \"x\": \"Pencil Project\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/1738a478-5c20-42e2-a5cc-a37a166f2ea8.png\", \"size\": 21}]}]}\n\n### Business Implications:\n- **Affinity Photo** is ideal for professionals seeking advanced photo editing capabilities, while **Figma** and **Canva** cater well to collaborative and beginner-friendly environments, respectively.\n- **CorelDRAW** is suitable for users focused on vector graphics, although it has a smaller user base.\n- **Pencil Project** serves a niche market, appealing to those focused on prototyping.\n\nIf you\u0027re interested in exploring these tools further, consider booking a demo to see how they can fit your specific design needs! Let me know if you have any other questions or need more information!\n**user**: give me features of each tool\n**assistant**: Sure, Rakesh! 
Here are the features of each graphic design tool we discussed:\n\n### Features of Graphic Design Tools\n\n| **Tool** | **Key Features** |\n|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| **Canva** | - User-friendly drag-and-drop interface\u003cbr\u003e- Extensive library of templates for various projects\u003cbr\u003e- Collaboration features for team projects\u003cbr\u003e- Ability to create social media graphics, presentations, and more\u003cbr\u003e- Free and paid plans available |\n| **Figma** | - Real-time collaboration for teams\u003cbr\u003e- Prototyping capabilities with interactive elements\u003cbr\u003e- Version control to track changes\u003cbr\u003e- Plugin support for extended functionality\u003cbr\u003e- Cloud-based for easy access across devices |\n| **Affinity Photo** | - Advanced photo editing tools\u003cbr\u003e- Non-destructive editing capabilities\u003cbr\u003e- Supports various file formats including PSD\u003cbr\u003e- Extensive retouching and color correction tools\u003cbr\u003e- One-time purchase with no subscription fees |\n| **CorelDRAW** | - Powerful vector illustration tools\u003cbr\u003e- Page layout capabilities for brochures and multi-page documents\u003cbr\u003e- Advanced typography tools\u003cbr\u003e- Color management for print accuracy\u003cbr\u003e- Supports various file formats for import/export |\n| **Pencil Project** | - Open-source GUI prototyping tool\u003cbr\u003e- Built-in shape collections for UI design\u003cbr\u003e- Document export options in multiple formats (PNG, PDF, SVG)\u003cbr\u003e- Inter-page linking for creating interactive prototypes\u003cbr\u003e- Free to use |\n\nIf you have any specific needs or projects in mind, let me know, and I can help you choose the best tool for your requirements!"
},
"id": "a5b83ba2-985c-4c70-8c1a-15cfd20707d6",
"metadata": {
"conv_id": "c67b65f2-fbe1-4027-b1e5-c6e95856a341",
"turn": 0
},
"responses": {},
"status": "pending",
"suggestions": {},
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"_server_id": "1568a4cc-1731-4efa-b35b-e1d492269b1f",
"conv_id": "c67b65f2-fbe1-4027-b1e5-c6e95856a341",
"id": "a5b83ba2-985c-4c70-8c1a-15cfd20707d6",
"instruction": "**user**: Hi\n**assistant**: Hello Rakesh! How can I assist you today? If you have any questions about software or need recommendations, feel free to ask.\n**user**: Best tools in graphic designing\n**assistant**: Hey Rakesh! Graphic design tools are essential for creating stunning visuals. Here\u0027s a quick comparison of some top tools:\n\n| **Tool** | **Features** | **Best For** |\n|----------------|----------------------------------------|-------------------------------|\n| **Adobe Photoshop** | Advanced editing, photo manipulation | Professional designers |\n| **Canva** | User-friendly, templates | Beginners, social media posts |\n| **Sketch** | UI/UX design, vector editing | Web and app designers |\n| **CorelDRAW** | Vector illustration, layout | Print and advertising |\n| **Figma** | Collaborative design, prototyping | Team projects, UI/UX |\n\n*Which one are you most interested in exploring further?*\n**user**: Which is tool is better for graphic designing \n**assistant**: Hey Rakesh! Choosing the best graphic design tool really depends on your specific needs and skill level. Here\u2019s a quick overview to help you decide:\n\n| **Tool** | **Strengths** | **Ideal For** |\n|----------------|-----------------------------------------|--------------------------------|\n| **Adobe Photoshop** | Comprehensive photo editing and manipulation | Professionals and photographers |\n| **Canva** | Easy to use with a vast library of templates | Beginners and quick projects |\n| **Sketch** | Excellent for UI/UX design and prototyping | Web and mobile app designers |\n| **CorelDRAW** | Strong vector graphics capabilities | Print design and illustrations |\n| **Figma** | Real-time collaboration and prototyping | Teams working on UI/UX projects |\n\n*What type of graphic design work are you looking to do? This will help narrow down the best tool for you!*\n**user**: give me more\n**assistant**: Sure, Rakesh! Here\u2019s a deeper dive into each of the graphic design tools I mentioned earlier, including their pricing and some unique features:\n\n| **Tool** | **Description** | **Pricing** | **Unique Features** |\n|----------------|-------------------------------------------|-----------------------------------|---------------------------------------------|\n| **Adobe Photoshop** | Industry-standard for photo editing and graphic design. | Starting at \u20b9638.38/month (incl. GST) | Advanced retouching, extensive plugins, and AI features for image enhancement. |\n| **Canva** | A versatile design tool with a drag-and-drop interface. | Free version available; Pro at $12.99/month | Thousands of templates, easy social media integration, and collaborative features. |\n| **Sketch** | Focused on UI/UX design with vector editing capabilities. | Starting at $99/year | Symbols for reusable design elements, plugins for extended functionality. |\n| **CorelDRAW** | Powerful vector graphics software for illustrations and layouts. | Starting at $249/year | Non-destructive editing, advanced typography tools, and color management. |\n| **Figma** | A web-based design tool for collaborative projects. | Free for individual use; Pro at $12/month | Real-time collaboration, design systems, and prototyping capabilities. |\n\n*Do any of these tools resonate with your needs? Let me know if you want to explore a specific one further or if you have any other questions!*\n**feedback**: radar Chart didnot create. \n**user**: give me more tools\n**assistant**: Sure, Rakesh! 
Here are some additional graphic design tools that you might find useful:\n\n| **Tool** | **Description** | **Pricing** | **Unique Features** |\n|----------------------|-------------------------------------------------|---------------------------------|---------------------------------------------|\n| **Affinity Photo** | A powerful photo editing software for professionals. | One-time purchase of $54.99 | Advanced retouching, HDR merging, and extensive editing tools. |\n| **Inkscape** | Open-source vector graphics editor. | Free | Supports SVG format, great for illustrations and logos. |\n| **Gravit Designer** | A versatile design tool for web and print. | Free version available; Pro at $49/year | Cloud-based, cross-platform, and vector editing capabilities. |\n| **Pencil Project** | Open-source GUI prototyping tool. | Free | Built-in shapes collection for UI design and mockups. |\n| **Adobe Illustrator**| Industry-standard for vector graphics design. | Starting at \u20b9638.38/month (incl. GST) | Advanced vector editing, typography, and integration with other Adobe products. |\n\n*Do any of these tools catch your eye? Let me know if you want more details on a specific one or if you have other questions!*\n**feedback**: Chart has not been created\n**user**: compare then in user ratings\n**assistant**: Here\u0027s a concise analysis of the user ratings for the top graphic design tools based on the number of users who rated them:\n\n1. **Canva** stands out with the highest number of ratings (11,852), showcasing its popularity and user engagement. It has a strong ease of use and features rating, making it ideal for beginners and those looking for quick design solutions.\n\n2. **Figma** follows closely with 733 ratings, indicating a solid user base. It excels in features and ease of use, making it a favorite among UI/UX designers for collaborative projects.\n\n3. **Affinity Photo** has a respectable rating from 87 users, highlighting its value for money and support. It\u0027s a great choice for professionals looking for advanced photo editing capabilities.\n\n4. **CorelDRAW Graphics Suite** has a good rating from 67 users, but lacks detailed metrics in several areas. It is known for its vector graphics capabilities, making it suitable for print design.\n\n5. **Pencil Project** has the least number of ratings (21), but still offers a decent ease of use and value for money. 
It is more niche, focusing on prototyping and wireframing.\n\nNow, let\u0027s visualize this data in a radar chart:\n\nnull{\"graph_type\": \"radar\", \"x_label\": \"Tools\", \"y_label\": \"Ratings\", \"json_payload\": [{\"category\": \"Ease of Use\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.67}, {\"tool_name\": \"Figma\", \"rating\": 4.46}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.44}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.33}]}, {\"category\": \"Feature Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.53}, {\"tool_name\": \"Figma\", \"rating\": 4.63}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.53}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.1}]}, {\"category\": \"Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.72}, {\"tool_name\": \"Figma\", \"rating\": 4.73}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.77}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 4.8}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.33}]}, {\"category\": \"Support Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.39}, {\"tool_name\": \"Figma\", \"rating\": 4.31}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.58}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 3.67}]}, {\"category\": \"Value for Money\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.62}, {\"tool_name\": \"Figma\", \"rating\": 4.62}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.83}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.64}]}, {\"category\": \"Likelihood to Recommend\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.58}, {\"tool_name\": \"Figma\", \"rating\": 4.54}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.58}, {\"tool_name\": \"CorelDRAW Graphics Suite\", \"rating\": 0}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.17}]}]}\n\n### Business Implications:\n- **Canva** and **Figma** are excellent choices for users seeking user-friendly interfaces and strong collaborative features, making them suitable for teams and individuals alike.\n- **Affinity Photo** is ideal for those who require advanced photo editing tools, while **CorelDRAW** may appeal to users focused on vector graphics.\n- **Pencil Project** serves a niche market, appealing to developers and designers working on prototypes.\n\nIf you\u0027re interested in exploring these tools further, consider booking a demo to see how they can fit your specific design needs!\n**user**: create it again\n**assistant**: Absolutely! Let\u0027s dive into the analysis of the top graphic design tools based on user ratings, focusing on their strengths and weaknesses.\n\n### Data-Driven Insights:\nThe analysis reveals that **Canva** and **Figma** are leading the pack in user ratings, showcasing their popularity and user engagement. Canva is particularly noted for its ease of use and extensive features, making it ideal for beginners and quick design tasks. Figma excels in collaborative features, making it a favorite among UI/UX designers. \n\n**Affinity Photo** stands out for its value for money and advanced photo editing capabilities, while **CorelDRAW** is recognized for its vector graphics expertise, although it has a smaller user base. 
**Pencil Project**, while less popular, serves a niche market focused on prototyping and wireframing.\n\n### Radar Chart Visualization:\nHere\u2019s the radar chart visualizing the ratings for these tools:\n\nnull{\"graph_type\": \"radar\", \"x_label\": \"Tools\", \"y_label\": \"Ratings\", \"json_payload\": [{\"category\": \"Ease of Use\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.67}, {\"tool_name\": \"Figma\", \"rating\": 4.46}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.44}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.17}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.33}]}, {\"category\": \"Feature Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.53}, {\"tool_name\": \"Figma\", \"rating\": 4.63}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.53}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.45}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.1}]}, {\"category\": \"Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.72}, {\"tool_name\": \"Figma\", \"rating\": 4.73}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.77}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.52}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.33}]}, {\"category\": \"Support Rating\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.39}, {\"tool_name\": \"Figma\", \"rating\": 4.31}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.58}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.12}, {\"tool_name\": \"Pencil Project\", \"rating\": 3.67}]}, {\"category\": \"Value for Money\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.62}, {\"tool_name\": \"Figma\", \"rating\": 4.62}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.83}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.23}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.64}]}, {\"category\": \"Likelihood to Recommend\", \"data\": [{\"tool_name\": \"Canva\", \"rating\": 4.58}, {\"tool_name\": \"Figma\", \"rating\": 4.54}, {\"tool_name\": \"Affinity Photo\", \"rating\": 4.58}, {\"tool_name\": \"CorelDRAW\", \"rating\": 4.21}, {\"tool_name\": \"Pencil Project\", \"rating\": 4.17}]}]}\n\n### Business Implications:\n- **Canva** and **Figma** are excellent choices for users seeking user-friendly interfaces and strong collaborative features, making them suitable for teams and individuals alike.\n- **Affinity Photo** is ideal for those who require advanced photo editing tools, while **CorelDRAW** may appeal to users focused on vector graphics.\n- **Pencil Project** serves a niche market, appealing to developers and designers working on prototypes.\n\nIf you\u0027re interested in exploring these tools further, consider booking a demo to see how they can fit your specific design needs! Let me know if you have any other questions or need more information!\n**user**: Give me their pricing card\n**assistant**: Absolutely, Rakesh! 
Here\u2019s a comparison card showcasing the pricing and features of the graphic design tools we discussed:\n\nnull{\"graph_type\": \"comparison card\", \"x_label\": \"Tools\", \"y_label\": \"Pricing\", \"json_payload\": [{\"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/974e79f0-c83c-4b9f-a75b-e3d3f66500f0.jpeg\", \"tool_name\": \"Canva\", \"tags\": {\"price\": \"12.99\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"12.34\", \"subscription_frequency\": \"MONTHLY\", \"plan_type\": \"Pro\"}, \"features\": [{\"description\": \"User-friendly interface\", \"availability\": \"true\"}, {\"description\": \"Extensive template library\", \"availability\": \"true\"}, {\"description\": \"Collaboration features\", \"availability\": \"true\"}], \"more_info\": \"https://www.canva.com/en_in/\"}, {\"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/18df3d15-0497-4661-b2e1-bfeba3a21cd2.jpeg\", \"tool_name\": \"Figma\", \"tags\": {\"price\": \"12.00\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"11.40\", \"subscription_frequency\": \"MONTHLY\", \"plan_type\": \"Professional\"}, \"features\": [{\"description\": \"Real-time collaboration\", \"availability\": \"true\"}, {\"description\": \"Prototyping capabilities\", \"availability\": \"true\"}, {\"description\": \"Version control\", \"availability\": \"true\"}], \"more_info\": \"https://www.figma.com/\"}, {\"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/b4a11aaf-afdc-4d1c-8cab-fe6594b96a6a.png\", \"tool_name\": \"Affinity Photo\", \"tags\": {\"price\": \"54.99\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"52.24\", \"subscription_frequency\": \"YEARLY\", \"plan_type\": \"One Time License\"}, \"features\": [{\"description\": \"Advanced photo editing tools\", \"availability\": \"true\"}, {\"description\": \"Non-destructive editing\", \"availability\": \"true\"}, {\"description\": \"Supports various file formats\", \"availability\": \"true\"}], \"more_info\": \"https://affinity.serif.com/en-us/photo/\"}, {\"icon\": \"none\", \"tool_name\": \"CorelDRAW Graphics Suite\", \"tags\": {\"price\": \"249.00\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"236.55\", \"subscription_frequency\": \"YEARLY\", \"plan_type\": \"One Time Fee\"}, \"features\": [{\"description\": \"Vector illustration tools\", \"availability\": \"true\"}, {\"description\": \"Page layout capabilities\", \"availability\": \"true\"}, {\"description\": \"Photo editing integration\", \"availability\": \"true\"}], \"more_info\": \"https://www.coreldraw.com/en/\"}, {\"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/1738a478-5c20-42e2-a5cc-a37a166f2ea8.png\", \"tool_name\": \"Pencil Project\", \"tags\": {\"price\": \"0.00\", \"currency\": \"$\", \"wyzard_discount\": 5, \"wyzard_pricing\": \"0.00\", \"subscription_frequency\": \"FREE\", \"plan_type\": \"Free\"}, \"features\": [{\"description\": \"Open-source prototyping tool\", \"availability\": \"true\"}, {\"description\": \"Built-in shape collections\", \"availability\": \"true\"}, {\"description\": \"Document export options\", \"availability\": \"true\"}], \"more_info\": \"https://pencil.evolus.vn/\"}]}.\n\n### Business Implications:\n- **Canva** and **Figma** are excellent for users looking for collaborative features and ease of use, making them suitable for teams and quick projects.\n- **Affinity Photo** offers advanced editing capabilities at a one-time cost, ideal for professionals focused on photo 
editing.\n- **CorelDRAW Graphics Suite** provides robust vector graphics tools, suitable for print design and illustrations.\n- **Pencil Project** is a free option for those focusing on prototyping and wireframing.\n\nIf you\u0027re interested in any of these tools, consider booking a demo to see how they can fit your specific design needs! Let me know if you have any other questions or need more information!\n**user**: Give me their Ratings\n**assistant**: ### Data-Driven Insights:\nThe ratings for the graphic design tools indicate that **Affinity Photo** holds the highest rating among the tools analyzed, showcasing its strong value for money and user satisfaction. **Figma** follows closely, particularly excelling in features and ease of use, making it a favorite for collaborative projects. **Canva** also has a high rating, with a significant user base, indicating its popularity and effectiveness for beginners and quick design tasks. **CorelDRAW** has a decent rating but with a smaller user base, while **Pencil Project** has the lowest rating, reflecting its niche appeal.\n\n### Bubble Chart Visualization:\nnull{\"graph_type\": \"bubble\", \"x_label\": \"Tools\", \"y_label\": \"Ratings\", \"json_payload\": [{\"id\": \"Canva\", \"data\": [{\"y\": 4.72, \"x\": \"Canva\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/974e79f0-c83c-4b9f-a75b-e3d3f66500f0.jpeg\", \"size\": 11852}]}, {\"id\": \"Figma\", \"data\": [{\"y\": 4.73, \"x\": \"Figma\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/18df3d15-0497-4661-b2e1-bfeba3a21cd2.jpeg\", \"size\": 733}]}, {\"id\": \"Affinity Photo\", \"data\": [{\"y\": 4.77, \"x\": \"Affinity Photo\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/b4a11aaf-afdc-4d1c-8cab-fe6594b96a6a.png\", \"size\": 87}]}, {\"id\": \"CorelDRAW\", \"data\": [{\"y\": 4.52, \"x\": \"CorelDRAW\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/c777fa4c-f314-481f-a60a-fa59bfb43c42.jpeg\", \"size\": 1494}]}, {\"id\": \"Pencil Project\", \"data\": [{\"y\": 4.33, \"x\": \"Pencil Project\", \"icon\": \"https://gdm-catalog-fmapi-prod.imgix.net/ProductLogo/1738a478-5c20-42e2-a5cc-a37a166f2ea8.png\", \"size\": 21}]}]}\n\n### Business Implications:\n- **Affinity Photo** is ideal for professionals seeking advanced photo editing capabilities, while **Figma** and **Canva** cater well to collaborative and beginner-friendly environments, respectively.\n- **CorelDRAW** is suitable for users focused on vector graphics, although it has a smaller user base.\n- **Pencil Project** serves a niche market, appealing to those focused on prototyping.\n\nIf you\u0027re interested in exploring these tools further, consider booking a demo to see how they can fit your specific design needs! Let me know if you have any other questions or need more information!\n**user**: give me features of each tool\n**assistant**: Sure, Rakesh! 
Here are the features of each graphic design tool we discussed:\n\n### Features of Graphic Design Tools\n\n| **Tool** | **Key Features** |\n|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| **Canva** | - User-friendly drag-and-drop interface\u003cbr\u003e- Extensive library of templates for various projects\u003cbr\u003e- Collaboration features for team projects\u003cbr\u003e- Ability to create social media graphics, presentations, and more\u003cbr\u003e- Free and paid plans available |\n| **Figma** | - Real-time collaboration for teams\u003cbr\u003e- Prototyping capabilities with interactive elements\u003cbr\u003e- Version control to track changes\u003cbr\u003e- Plugin support for extended functionality\u003cbr\u003e- Cloud-based for easy access across devices |\n| **Affinity Photo** | - Advanced photo editing tools\u003cbr\u003e- Non-destructive editing capabilities\u003cbr\u003e- Supports various file formats including PSD\u003cbr\u003e- Extensive retouching and color correction tools\u003cbr\u003e- One-time purchase with no subscription fees |\n| **CorelDRAW** | - Powerful vector illustration tools\u003cbr\u003e- Page layout capabilities for brochures and multi-page documents\u003cbr\u003e- Advanced typography tools\u003cbr\u003e- Color management for print accuracy\u003cbr\u003e- Supports various file formats for import/export |\n| **Pencil Project** | - Open-source GUI prototyping tool\u003cbr\u003e- Built-in shape collections for UI design\u003cbr\u003e- Document export options in multiple formats (PNG, PDF, SVG)\u003cbr\u003e- Inter-page linking for creating interactive prototypes\u003cbr\u003e- Free to use |\n\nIf you have any specific needs or projects in mind, let me know, and I can help you choose the best tool for your requirements!",
"status": "pending",
"turn": 0
}
```
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Review the user interactions with the chatbot.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
IlyasMoutawwakil/OpenVINO-Benchmarks | IlyasMoutawwakil | "2024-11-18T11:07:45Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-18T11:07:45Z" | ---
license: apache-2.0
---
|
wangyubo/hatefulM | wangyubo | "2024-11-18T11:53:54Z" | 3 | 0 | [
"license:unknown",
"region:us"
] | null | "2024-11-18T11:53:54Z" | ---
license: unknown
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_9ca7b7d7-54cc-4f0f-a3d7-5323cb3dce4d | argilla-internal-testing | "2024-11-18T12:30:11Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:30:10Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_f2ce2eba-f2fe-4fae-8e37-425915796892 | argilla-internal-testing | "2024-11-18T12:30:23Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:30:22Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_6112cd34-5fce-4c13-b0d7-676a2bb953f1 | argilla-internal-testing | "2024-11-18T12:30:23Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:30:22Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_4680180e-3fdb-4823-9c22-e7d9334c27d7 | argilla-internal-testing | "2024-11-18T12:30:31Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:30:30Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_dbc77f7f-3096-45a9-87ed-5c9e8613f4e9 | argilla-internal-testing | "2024-11-18T12:30:42Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:30:40Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_a8139a23-c25f-4bc4-ab16-ecf5e7c5eea9 | argilla-internal-testing | "2024-11-18T12:39:29Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:39:28Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_d58377ab-1c1c-4a73-8a34-933e5b46cc43 | argilla-internal-testing | "2024-11-18T12:39:29Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:39:29Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_d74754d1-9fc6-4aae-8803-95d8f11771a3 | argilla-internal-testing | "2024-11-18T12:39:31Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:39:31Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_a087f349-b4a2-465b-9394-71dbfcafa2fd | argilla-internal-testing | "2024-11-18T12:39:51Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:39:50Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_9b5fc76c-2660-4c38-8577-86dfe9b99ae7 | argilla-internal-testing | "2024-11-18T12:39:57Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T12:39:56Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_8b705105-ba12-420f-9d01-91e628c5a9d3 | argilla-internal-testing | "2024-11-18T14:11:25Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:11:22Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_bdb92b4f-3814-46c4-a25e-24b291aeae22 | argilla-internal-testing | "2024-11-18T14:11:26Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:11:25Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_8d487bc2-47bd-461f-b7a8-b1ddc67521f5 | argilla-internal-testing | "2024-11-18T14:11:28Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:11:28Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_eb58d6b2-282f-4bd6-8da4-100337288155 | argilla-internal-testing | "2024-11-18T14:11:30Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:11:29Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_7ee81ab6-4305-425d-b1d3-7c42675b0218 | argilla-internal-testing | "2024-11-18T14:11:51Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:11:49Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_556d8b4a-ea9f-4bf7-a1fa-21dac503257e | argilla-internal-testing | "2024-11-18T14:12:52Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:12:51Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_143add39-e304-4541-8699-70598629205d | argilla-internal-testing | "2024-11-18T14:12:55Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:12:54Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_3a3f1eb1-edfa-41ff-b19c-aa1916d8ae65 | argilla-internal-testing | "2024-11-18T14:13:02Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:13:01Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_17287d6f-46bb-4027-9feb-66dd191d7e9e | argilla-internal-testing | "2024-11-18T14:13:11Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:13:10Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_4b77bd13-0e97-4b94-98b8-b0b76cf25823 | argilla-internal-testing | "2024-11-18T14:13:35Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:13:33Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yaniokodf/alpaca_zh_demo | yaniokodf | "2024-11-18T14:37:36Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T14:26:30Z" | ---
license: apache-2.0
---
|
OpenScholar/OpenScholar-DataStore-V2 | OpenScholar | "2024-11-18T14:30:53Z" | 3 | 0 | [
"license:odc-by",
"size_categories:100M<n<1B",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-11-18T14:30:52Z" | ---
license: odc-by
---
This repository consists of two directories:
- pes2o-chunks-32-shards: Text chunks (split into 32 shards) for peS2o V2
- pes2o-embs-32-shards: Corresponding embeddings (split into 32 shards) for peS2o V2
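A minimal sketch of how the two directories might be loaded with the `datasets` library, assuming chunk shard *i* lines up row-for-row with embedding shard *i* (the glob patterns are assumptions about the shard file names, not verified paths):
```python
from datasets import load_dataset

# The globs below are assumed shard layouts, not verified paths;
# adjust them to the actual file names in the repository.
chunks = load_dataset(
    "OpenScholar/OpenScholar-DataStore-V2",
    data_files={"train": "pes2o-chunks-32-shards/*"},
    split="train",
    streaming=True,  # the full datastore is far too large to download casually
)
embeddings = load_dataset(
    "OpenScholar/OpenScholar-DataStore-V2",
    data_files={"train": "pes2o-embs-32-shards/*"},
    split="train",
    streaming=True,
)

# Pair each text chunk with its embedding, assuming matching shard and row order.
for chunk, emb in zip(chunks, embeddings):
    print(chunk, emb)
    break
```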
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_bb984135-06e3-444c-9b1f-a56e50c0ad37 | argilla-internal-testing | "2024-11-18T23:56:18Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T23:56:17Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_192f91bb-4f43-4c04-a584-27eece2ff77e | argilla-internal-testing | "2024-11-18T23:56:18Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T23:56:18Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_36fe9989-9ace-49a0-876a-b074b7e22dc5 | argilla-internal-testing | "2024-11-18T23:56:41Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T23:56:39Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_11eb5a74-eebe-4592-b1fb-b899a71790ad | argilla-internal-testing | "2024-11-18T23:56:54Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T23:56:52Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_2f8106af-14b2-4708-975c-688725cfd911 | argilla-internal-testing | "2024-11-18T23:57:26Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T23:57:25Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
corniclr25/stack-mined-python-v1 | corniclr25 | "2024-11-19T08:37:30Z" | 3 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T07:55:52Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: document
dtype: string
- name: negatives
sequence: string
- name: metadata
struct:
- name: objective
struct:
- name: paired
sequence: 'null'
- name: self
sequence: 'null'
- name: triplet
sequence:
sequence: string
splits:
- name: train
num_bytes: 107750038700
num_examples: 10000000
download_size: 41516082386
dataset_size: 107750038700
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
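A brief sketch of how the triplet-style rows declared above might be consumed, streamed rather than downloaded given the ~41.5 GB size (the field names come from the YAML; everything else is illustrative):
```python
from datasets import load_dataset

# Stream the 10M-row train split instead of downloading it outright.
ds = load_dataset("corniclr25/stack-mined-python-v1", split="train", streaming=True)

row = next(iter(ds))
query = row["query"]          # query text
positive = row["document"]    # the paired positive document
negatives = row["negatives"]  # list of mined negative documents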
|
corniclr25/stack-mined-javascript-v1 | corniclr25 | "2024-11-19T08:57:22Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T08:37:35Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: document
dtype: string
- name: negatives
sequence: string
- name: metadata
struct:
- name: objective
struct:
- name: paired
sequence: 'null'
- name: self
sequence: 'null'
- name: triplet
sequence:
sequence: string
splits:
- name: train
num_bytes: 29123533349
num_examples: 2334784
download_size: 11806323547
dataset_size: 29123533349
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kz-transformers/mmlu-translated-kk | kz-transformers | "2024-11-19T09:18:15Z" | 3 | 0 | [
"task_categories:multiple-choice",
"language:kk",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"multiple-choice"
] | "2024-11-19T09:06:23Z" | ---
language:
- kk
license: apache-2.0
task_categories:
- multiple-choice
pretty_name: mmlu-kk-translated
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: id
dtype: int64
splits:
- name: test
num_bytes: 11819526
num_examples: 14038
- name: validation
num_bytes: 1295921
num_examples: 1531
- name: dev
num_bytes: 212047
num_examples: 285
download_size: 5565517
dataset_size: 13327494
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- split: dev
path: data/dev-*
---
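A minimal usage sketch for the splits declared above, assuming the usual MMLU convention that `answer` indexes into `choices` (the field names `question`, `subject`, `choices`, and `answer` come from the YAML):
```python
from datasets import load_dataset

# Load the few-shot "dev" split (285 examples) declared in the card metadata.
dev = load_dataset("kz-transformers/mmlu-translated-kk", split="dev")

example = dev[0]
print(example["subject"])
print(example["question"])
print(example["choices"], "->", example["answer"])  # answer indexes into choices
```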
|
Lakshay1Dagar/marketing_prompts_v3 | Lakshay1Dagar | "2024-11-19T11:48:42Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T11:48:40Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 12784
num_examples: 19
download_size: 8681
dataset_size: 12784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vinisf/Vinirp | Vinisf | "2024-11-19T14:30:54Z" | 3 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-19T14:29:27Z" | ---
license: openrail
---
|
enjalot/ls-dadabase | enjalot | "2024-11-19T16:37:35Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"latent-scope"
] | null | "2024-11-19T16:37:19Z" |
---
tags:
- latent-scope
---
# ls-dadabase
This dataset contains the files necessary to view it in [latentscope](https://github.com/enjalot/latent-scope).
The files in the `latentscope` directory are used by the app for viewing. You can also preview the scope TODO
Total size of dataset files: 376.7 MB
TODO: download script inside latentscope
|
open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto-details | open-llm-leaderboard | "2024-11-19T21:29:20Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T21:26:09Z" | ---
pretty_name: Evaluation run of EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto](https://huggingface.co/EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto-details\"\
,\n\tname=\"EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-19T21-26-08.744302](https://huggingface.co/datasets/open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto-details/blob/main/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto/results_2024-11-19T21-26-08.744302.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each one in the results and in the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.41035120147874304,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.021167895542791835,\n \"\
inst_level_strict_acc,none\": 0.5203836930455635,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.3515625,\n \"acc_stderr,none\"\
: 0.00435295793176245,\n \"acc_norm,none\": 0.4471397068361655,\n \
\ \"acc_norm_stderr,none\": 0.0053604900103817995,\n \"inst_level_loose_acc,none\"\
: 0.5551558752997602,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"prompt_level_strict_acc,none\": 0.3659889094269871,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.02072934168450657,\n \"\
exact_match,none\": 0.1336858006042296,\n \"exact_match_stderr,none\"\
: 0.00888347241287297,\n \"alias\": \"leaderboard\"\n },\n \
\ \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.4804721402534282,\n\
\ \"acc_norm_stderr,none\": 0.006196076922023256,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5935828877005348,\n\
\ \"acc_norm_stderr,none\": 0.036013904358574385\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\":\
\ 0.03131803437491622\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.536,\n\
\ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\":\
\ 0.029752391824475363\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.612,\n \"acc_norm_stderr,none\": 0.030881038748993974\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\":\
\ 0.030821679117375447\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4726027397260274,\n \"acc_norm_stderr,none\": 0.04146035831053314\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.552,\n \
\ \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\":\
\ 0.03151438761115349\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.550561797752809,\n \"acc_norm_stderr,none\": 0.037389649660569645\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.164,\n\
\ \"acc_norm_stderr,none\": 0.02346526100207671\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\":\
\ 0.02925692860650181\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.31208053691275167,\n\
\ \"acc_norm_stderr,none\": 0.013434067707656962,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3333333333333333,\n \"acc_norm_stderr,none\": 0.033586181457325226\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.31135531135531136,\n\
\ \"acc_norm_stderr,none\": 0.019834784635900788\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.30357142857142855,\n \"acc_norm_stderr,none\"\
: 0.021747782232917543\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.3659889094269871,\n \"prompt_level_strict_acc_stderr,none\": 0.02072934168450657,\n\
\ \"inst_level_strict_acc,none\": 0.5203836930455635,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.41035120147874304,\n \"prompt_level_loose_acc_stderr,none\": 0.021167895542791835,\n\
\ \"inst_level_loose_acc,none\": 0.5551558752997602,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.1336858006042296,\n \"exact_match_stderr,none\"\
: 0.00888347241287297,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.28013029315960913,\n\
\ \"exact_match_stderr,none\": 0.025671206118429726\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.056910569105691054,\n \"exact_match_stderr,none\": 0.020974566219895126\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.015151515151515152,\n\
\ \"exact_match_stderr,none\": 0.01067276863717474\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.02857142857142857,\n \"exact_match_stderr,none\": 0.009973998820736053\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.12337662337662338,\n\
\ \"exact_match_stderr,none\": 0.026587484423674337\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.24870466321243523,\n \"exact_match_stderr,none\"\
: 0.03119584087770025\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.05185185185185185,\n \"exact_match_stderr,none\"\
: 0.019154368449050496\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.3515625,\n\
\ \"acc_stderr,none\": 0.00435295793176245\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4060846560846561,\n \"acc_norm_stderr,none\"\
: 0.017602041943777794,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.532,\n\
\ \"acc_norm_stderr,none\": 0.031621252575725574\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.3359375,\n \"acc_norm_stderr,none\"\
: 0.029577647634376425\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.41035120147874304,\n \"prompt_level_loose_acc_stderr,none\": 0.021167895542791835,\n\
\ \"inst_level_strict_acc,none\": 0.5203836930455635,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.3515625,\n \"acc_stderr,none\": 0.00435295793176245,\n\
\ \"acc_norm,none\": 0.4471397068361655,\n \"acc_norm_stderr,none\"\
: 0.0053604900103817995,\n \"inst_level_loose_acc,none\": 0.5551558752997602,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.3659889094269871,\n \"prompt_level_strict_acc_stderr,none\": 0.02072934168450657,\n\
\ \"exact_match,none\": 0.1336858006042296,\n \"exact_match_stderr,none\"\
: 0.00888347241287297,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.4804721402534282,\n \"acc_norm_stderr,none\"\
: 0.006196076922023256,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5935828877005348,\n \"acc_norm_stderr,none\"\
: 0.036013904358574385\n },\n \"leaderboard_bbh_date_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.612,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4726027397260274,\n\
\ \"acc_norm_stderr,none\": 0.04146035831053314\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.550561797752809,\n \"acc_norm_stderr,none\"\
: 0.037389649660569645\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.164,\n \"acc_norm_stderr,none\": 0.02346526100207671\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.31208053691275167,\n\
\ \"acc_norm_stderr,none\": 0.013434067707656962,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3333333333333333,\n\
\ \"acc_norm_stderr,none\": 0.033586181457325226\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.31135531135531136,\n \"acc_norm_stderr,none\": 0.019834784635900788\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.30357142857142855,\n \"acc_norm_stderr,none\"\
: 0.021747782232917543\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.3659889094269871,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02072934168450657,\n \
\ \"inst_level_strict_acc,none\": 0.5203836930455635,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.41035120147874304,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.021167895542791835,\n \"inst_level_loose_acc,none\"\
: 0.5551558752997602,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.1336858006042296,\n\
\ \"exact_match_stderr,none\": 0.00888347241287297,\n \"alias\": \"\
\ - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.28013029315960913,\n \"exact_match_stderr,none\": 0.025671206118429726\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.056910569105691054,\n \"exact_match_stderr,none\": 0.020974566219895126\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.015151515151515152,\n \"exact_match_stderr,none\"\
: 0.01067276863717474\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.02857142857142857,\n \"exact_match_stderr,none\"\
: 0.009973998820736053\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.12337662337662338,\n \"exact_match_stderr,none\": 0.026587484423674337\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.24870466321243523,\n \"exact_match_stderr,none\"\
: 0.03119584087770025\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.05185185185185185,\n \"exact_match_stderr,none\": 0.019154368449050496\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.3515625,\n \"acc_stderr,none\": 0.00435295793176245\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4060846560846561,\n\
\ \"acc_norm_stderr,none\": 0.017602041943777794,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.3359375,\n \"acc_norm_stderr,none\": 0.029577647634376425\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ }\n}\n```"
repo_url: https://huggingface.co/EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_navigate
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_snarks
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_gpqa_extended
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_gpqa_main
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_ifeval
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_ifeval_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_mmlu_pro
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_musr_object_placements
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T21-26-08.744302.jsonl'
- config_name: EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_19T21_26_08.744302
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T21-26-08.744302.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T21-26-08.744302.jsonl'
---
# Dataset Card for Evaluation run of EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto](https://huggingface.co/EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto-details",
name="EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
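To discover the other per-task configurations without copying their names by hand, the `datasets` helper below can enumerate them (a small sketch; the names it returns correspond to the `config_name` entries in the YAML header):
```python
from datasets import get_dataset_config_names

repo = "open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto-details"

# List every per-task configuration available in this details repo.
configs = get_dataset_config_names(repo)
print(len(configs), "configurations")
print(configs[:3])
```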
## Latest results
These are the [latest results from run 2024-11-19T21-26-08.744302](https://huggingface.co/datasets/open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto-details/blob/main/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto/results_2024-11-19T21-26-08.744302.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.41035120147874304,
"prompt_level_loose_acc_stderr,none": 0.021167895542791835,
"inst_level_strict_acc,none": 0.5203836930455635,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.3515625,
"acc_stderr,none": 0.00435295793176245,
"acc_norm,none": 0.4471397068361655,
"acc_norm_stderr,none": 0.0053604900103817995,
"inst_level_loose_acc,none": 0.5551558752997602,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.3659889094269871,
"prompt_level_strict_acc_stderr,none": 0.02072934168450657,
"exact_match,none": 0.1336858006042296,
"exact_match_stderr,none": 0.00888347241287297,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4804721402534282,
"acc_norm_stderr,none": 0.006196076922023256,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5935828877005348,
"acc_norm_stderr,none": 0.036013904358574385
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4726027397260274,
"acc_norm_stderr,none": 0.04146035831053314
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.550561797752809,
"acc_norm_stderr,none": 0.037389649660569645
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.164,
"acc_norm_stderr,none": 0.02346526100207671
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_gpqa": {
"acc_norm,none": 0.31208053691275167,
"acc_norm_stderr,none": 0.013434067707656962,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3333333333333333,
"acc_norm_stderr,none": 0.033586181457325226
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.31135531135531136,
"acc_norm_stderr,none": 0.019834784635900788
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.30357142857142855,
"acc_norm_stderr,none": 0.021747782232917543
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3659889094269871,
"prompt_level_strict_acc_stderr,none": 0.02072934168450657,
"inst_level_strict_acc,none": 0.5203836930455635,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.41035120147874304,
"prompt_level_loose_acc_stderr,none": 0.021167895542791835,
"inst_level_loose_acc,none": 0.5551558752997602,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.1336858006042296,
"exact_match_stderr,none": 0.00888347241287297,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.28013029315960913,
"exact_match_stderr,none": 0.025671206118429726
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.056910569105691054,
"exact_match_stderr,none": 0.020974566219895126
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02857142857142857,
"exact_match_stderr,none": 0.009973998820736053
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.12337662337662338,
"exact_match_stderr,none": 0.026587484423674337
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.24870466321243523,
"exact_match_stderr,none": 0.03119584087770025
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05185185185185185,
"exact_match_stderr,none": 0.019154368449050496
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3515625,
"acc_stderr,none": 0.00435295793176245
},
"leaderboard_musr": {
"acc_norm,none": 0.4060846560846561,
"acc_norm_stderr,none": 0.017602041943777794,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.3359375,
"acc_norm_stderr,none": 0.029577647634376425
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.41035120147874304,
"prompt_level_loose_acc_stderr,none": 0.021167895542791835,
"inst_level_strict_acc,none": 0.5203836930455635,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.3515625,
"acc_stderr,none": 0.00435295793176245,
"acc_norm,none": 0.4471397068361655,
"acc_norm_stderr,none": 0.0053604900103817995,
"inst_level_loose_acc,none": 0.5551558752997602,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.3659889094269871,
"prompt_level_strict_acc_stderr,none": 0.02072934168450657,
"exact_match,none": 0.1336858006042296,
"exact_match_stderr,none": 0.00888347241287297,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4804721402534282,
"acc_norm_stderr,none": 0.006196076922023256,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5935828877005348,
"acc_norm_stderr,none": 0.036013904358574385
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4726027397260274,
"acc_norm_stderr,none": 0.04146035831053314
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.550561797752809,
"acc_norm_stderr,none": 0.037389649660569645
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.164,
"acc_norm_stderr,none": 0.02346526100207671
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_gpqa": {
"acc_norm,none": 0.31208053691275167,
"acc_norm_stderr,none": 0.013434067707656962,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3333333333333333,
"acc_norm_stderr,none": 0.033586181457325226
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.31135531135531136,
"acc_norm_stderr,none": 0.019834784635900788
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.30357142857142855,
"acc_norm_stderr,none": 0.021747782232917543
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3659889094269871,
"prompt_level_strict_acc_stderr,none": 0.02072934168450657,
"inst_level_strict_acc,none": 0.5203836930455635,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.41035120147874304,
"prompt_level_loose_acc_stderr,none": 0.021167895542791835,
"inst_level_loose_acc,none": 0.5551558752997602,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.1336858006042296,
"exact_match_stderr,none": 0.00888347241287297,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.28013029315960913,
"exact_match_stderr,none": 0.025671206118429726
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.056910569105691054,
"exact_match_stderr,none": 0.020974566219895126
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02857142857142857,
"exact_match_stderr,none": 0.009973998820736053
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.12337662337662338,
"exact_match_stderr,none": 0.026587484423674337
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.24870466321243523,
"exact_match_stderr,none": 0.03119584087770025
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05185185185185185,
"exact_match_stderr,none": 0.019154368449050496
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3515625,
"acc_stderr,none": 0.00435295793176245
},
"leaderboard_musr": {
"acc_norm,none": 0.4060846560846561,
"acc_norm_stderr,none": 0.017602041943777794,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.3359375,
"acc_norm_stderr,none": 0.029577647634376425
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
}
}
```
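If you only need a headline number rather than the per-sample files, one option is to download the aggregated results JSON linked above and index into it. A minimal sketch, assuming the file path linked above and the nesting shown in the printed dict (the top-level key name is an assumption, so inspect `results.keys()` if the lookup fails):
```python
import json
from huggingface_hub import hf_hub_download

# Fetch the aggregated results file for this run (same path as linked above)
path = hf_hub_download(
    repo_id="open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto-details",
    filename="EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto/results_2024-11-19T21-26-08.744302.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Pull one headline metric; the nesting mirrors the dict printed above, but
# the top-level key is an assumption -- fall back defensively if it differs.
scores = results.get("results", results.get("all", results))
print(scores["leaderboard_bbh"]["acc_norm,none"])
```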
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo-details | open-llm-leaderboard | "2024-11-19T21:30:19Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T21:27:08Z" | ---
pretty_name: Evaluation run of cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-ipo
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-ipo](https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-ipo)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo-details\"\
,\n\tname=\"cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-19T21-27-07.770301](https://huggingface.co/datasets/open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo-details/blob/main/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo/results_2024-11-19T21-27-07.770301.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.3674925411856272,\n \"acc_norm_stderr,none\"\
: 0.005284553107918915,\n \"inst_level_strict_acc,none\": 0.18585131894484413,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"inst_level_loose_acc,none\"\
: 0.20503597122302158,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.006797583081570997,\n \"exact_match_stderr,none\"\
: 0.002250499312351689,\n \"prompt_level_loose_acc,none\": 0.09611829944547134,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.01268416714715648,\n \
\ \"acc,none\": 0.2590591755319149,\n \"acc_stderr,none\": 0.00399429731369927,\n\
\ \"prompt_level_strict_acc,none\": 0.07948243992606285,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.011640047542656398,\n \"\
alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \
\ \"acc_norm,none\": 0.3804894983509807,\n \"acc_norm_stderr,none\"\
: 0.006125798368883372,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.744,\n\
\ \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5347593582887701,\n \"acc_norm_stderr,none\"\
: 0.036573080985189216\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.46,\n\
\ \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\"\
: \" - leaderboard_bbh_geometric_shapes\",\n \"acc_norm,none\": 0.348,\n\
\ \"acc_norm_stderr,none\": 0.030186568464511673\n },\n \
\ \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\":\
\ 0.031621252575725574\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\":\
\ 0.02857695873043744\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.276,\n \"acc_norm_stderr,none\":\
\ 0.02832853727421142\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\":\
\ 0.03131803437491622\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.428,\n \
\ \"acc_norm_stderr,none\": 0.031355968923772626\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.21232876712328766,\n \"acc_norm_stderr,none\": 0.03396197282917473\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.276,\n \"acc_norm_stderr,none\": 0.02832853727421142\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.24,\n \
\ \"acc_norm_stderr,none\": 0.027065293652238982\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\":\
\ 0.027367497504863593\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.449438202247191,\n \"acc_norm_stderr,none\": 0.037389649660569645\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.252,\n\
\ \"acc_norm_stderr,none\": 0.027513851933031318\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.132,\n \"acc_norm_stderr,none\":\
\ 0.021450980824038166\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\":\
\ 0.028928939388379694\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2634228187919463,\n\
\ \"acc_norm_stderr,none\": 0.012770163430421852,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2727272727272727,\n \"acc_norm_stderr,none\": 0.03173071239071728\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2509157509157509,\n\
\ \"acc_norm_stderr,none\": 0.018570818425352315\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.27455357142857145,\n \"acc_norm_stderr,none\"\
: 0.021108747290633768\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.07948243992606285,\n \"prompt_level_strict_acc_stderr,none\": 0.011640047542656396,\n\
\ \"inst_level_strict_acc,none\": 0.18585131894484413,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.09611829944547134,\n \"prompt_level_loose_acc_stderr,none\": 0.01268416714715648,\n\
\ \"inst_level_loose_acc,none\": 0.20503597122302158,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.006797583081570997,\n \"exact_match_stderr,none\"\
: 0.002250499312351689,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.009771986970684038,\n\
\ \"exact_match_stderr,none\": 0.005623391633915856\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\
: 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.02962962962962963,\n \"exact_match_stderr,none\"\
: 0.014648038602753809\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.2590591755319149,\n\
\ \"acc_stderr,none\": 0.00399429731369927\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.43253968253968256,\n \"acc_norm_stderr,none\"\
: 0.017870703594488484,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.52,\n\
\ \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.34765625,\n \"acc_norm_stderr,none\"\
: 0.029822440677853123\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.3674925411856272,\n\
\ \"acc_norm_stderr,none\": 0.005284553107918915,\n \"inst_level_strict_acc,none\"\
: 0.18585131894484413,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_loose_acc,none\": 0.20503597122302158,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.006797583081570997,\n \"exact_match_stderr,none\"\
: 0.002250499312351689,\n \"prompt_level_loose_acc,none\": 0.09611829944547134,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.01268416714715648,\n \"\
acc,none\": 0.2590591755319149,\n \"acc_stderr,none\": 0.00399429731369927,\n\
\ \"prompt_level_strict_acc,none\": 0.07948243992606285,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.011640047542656398,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.3804894983509807,\n \"acc_norm_stderr,none\"\
: 0.006125798368883372,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.744,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5347593582887701,\n \"acc_norm_stderr,none\"\
: 0.036573080985189216\n },\n \"leaderboard_bbh_date_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.276,\n \"acc_norm_stderr,none\": 0.02832853727421142\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.428,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.21232876712328766,\n\
\ \"acc_norm_stderr,none\": 0.03396197282917473\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.276,\n \"acc_norm_stderr,none\": 0.02832853727421142\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.449438202247191,\n \"acc_norm_stderr,none\"\
: 0.037389649660569645\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.132,\n \"acc_norm_stderr,none\": 0.021450980824038166\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\": 0.028928939388379694\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2634228187919463,\n\
\ \"acc_norm_stderr,none\": 0.012770163430421852,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2727272727272727,\n\
\ \"acc_norm_stderr,none\": 0.03173071239071728\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2509157509157509,\n \"acc_norm_stderr,none\": 0.018570818425352315\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.27455357142857145,\n \"acc_norm_stderr,none\"\
: 0.021108747290633768\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.07948243992606285,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.011640047542656396,\n \
\ \"inst_level_strict_acc,none\": 0.18585131894484413,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.09611829944547134,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01268416714715648,\n \"inst_level_loose_acc,none\"\
: 0.20503597122302158,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.006797583081570997,\n\
\ \"exact_match_stderr,none\": 0.002250499312351689,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.009771986970684038,\n \"exact_match_stderr,none\": 0.005623391633915856\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\
: 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.02962962962962963,\n \"exact_match_stderr,none\": 0.014648038602753809\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.2590591755319149,\n \"acc_stderr,none\": 0.00399429731369927\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.43253968253968256,\n\
\ \"acc_norm_stderr,none\": 0.017870703594488484,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"leaderboard_musr_object_placements\"\
: {\n \"alias\": \" - leaderboard_musr_object_placements\",\n \"\
acc_norm,none\": 0.34765625,\n \"acc_norm_stderr,none\": 0.029822440677853123\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ }\n}\n```"
repo_url: https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-ipo
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_navigate
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_snarks
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_gpqa_extended
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_gpqa_main
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_ifeval
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_ifeval_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_mmlu_pro
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_musr_object_placements
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T21-27-07.770301.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_19T21_27_07.770301
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T21-27-07.770301.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T21-27-07.770301.jsonl'
---
# Dataset Card for Evaluation run of cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-ipo
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-ipo](https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-ipo).
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo-details",
name="cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
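Since this repository exposes 38 configurations, you can also enumerate the available task configurations programmatically rather than copying names from this card. A minimal sketch using the `datasets` helper:
```python
from datasets import get_dataset_config_names

# Enumerate the task configurations available in this details repository;
# expect one entry per evaluated task (plus any aggregate configs, if present)
configs = get_dataset_config_names(
    "open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo-details"
)
print(len(configs))
for name in configs:
    print(name)
```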
## Latest results
These are the [latest results from run 2024-11-19T21-27-07.770301](https://huggingface.co/datasets/open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo-details/blob/main/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo/results_2024-11-19T21-27-07.770301.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.3674925411856272,
"acc_norm_stderr,none": 0.005284553107918915,
"inst_level_strict_acc,none": 0.18585131894484413,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.20503597122302158,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.002250499312351689,
"prompt_level_loose_acc,none": 0.09611829944547134,
"prompt_level_loose_acc_stderr,none": 0.01268416714715648,
"acc,none": 0.2590591755319149,
"acc_stderr,none": 0.00399429731369927,
"prompt_level_strict_acc,none": 0.07948243992606285,
"prompt_level_strict_acc_stderr,none": 0.011640047542656398,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.3804894983509807,
"acc_norm_stderr,none": 0.006125798368883372,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.744,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5347593582887701,
"acc_norm_stderr,none": 0.036573080985189216
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.276,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.428,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.21232876712328766,
"acc_norm_stderr,none": 0.03396197282917473
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.276,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.449438202247191,
"acc_norm_stderr,none": 0.037389649660569645
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.132,
"acc_norm_stderr,none": 0.021450980824038166
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2634228187919463,
"acc_norm_stderr,none": 0.012770163430421852,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2727272727272727,
"acc_norm_stderr,none": 0.03173071239071728
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2509157509157509,
"acc_norm_stderr,none": 0.018570818425352315
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.27455357142857145,
"acc_norm_stderr,none": 0.021108747290633768
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.07948243992606285,
"prompt_level_strict_acc_stderr,none": 0.011640047542656396,
"inst_level_strict_acc,none": 0.18585131894484413,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.09611829944547134,
"prompt_level_loose_acc_stderr,none": 0.01268416714715648,
"inst_level_loose_acc,none": 0.20503597122302158,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.002250499312351689,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.009771986970684038,
"exact_match_stderr,none": 0.005623391633915856
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2590591755319149,
"acc_stderr,none": 0.00399429731369927
},
"leaderboard_musr": {
"acc_norm,none": 0.43253968253968256,
"acc_norm_stderr,none": 0.017870703594488484,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.34765625,
"acc_norm_stderr,none": 0.029822440677853123
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
},
"leaderboard": {
"acc_norm,none": 0.3674925411856272,
"acc_norm_stderr,none": 0.005284553107918915,
"inst_level_strict_acc,none": 0.18585131894484413,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.20503597122302158,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.002250499312351689,
"prompt_level_loose_acc,none": 0.09611829944547134,
"prompt_level_loose_acc_stderr,none": 0.01268416714715648,
"acc,none": 0.2590591755319149,
"acc_stderr,none": 0.00399429731369927,
"prompt_level_strict_acc,none": 0.07948243992606285,
"prompt_level_strict_acc_stderr,none": 0.011640047542656398,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.3804894983509807,
"acc_norm_stderr,none": 0.006125798368883372,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.744,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5347593582887701,
"acc_norm_stderr,none": 0.036573080985189216
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.276,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.428,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.21232876712328766,
"acc_norm_stderr,none": 0.03396197282917473
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.276,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.449438202247191,
"acc_norm_stderr,none": 0.037389649660569645
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.132,
"acc_norm_stderr,none": 0.021450980824038166
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2634228187919463,
"acc_norm_stderr,none": 0.012770163430421852,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2727272727272727,
"acc_norm_stderr,none": 0.03173071239071728
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2509157509157509,
"acc_norm_stderr,none": 0.018570818425352315
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.27455357142857145,
"acc_norm_stderr,none": 0.021108747290633768
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.07948243992606285,
"prompt_level_strict_acc_stderr,none": 0.011640047542656396,
"inst_level_strict_acc,none": 0.18585131894484413,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.09611829944547134,
"prompt_level_loose_acc_stderr,none": 0.01268416714715648,
"inst_level_loose_acc,none": 0.20503597122302158,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.002250499312351689,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.009771986970684038,
"exact_match_stderr,none": 0.005623391633915856
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2590591755319149,
"acc_stderr,none": 0.00399429731369927
},
"leaderboard_musr": {
"acc_norm,none": 0.43253968253968256,
"acc_norm_stderr,none": 0.017870703594488484,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.34765625,
"acc_norm_stderr,none": 0.029822440677853123
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
}
```
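If you only need the aggregated numbers above rather than the per-sample details, one option is to download the results JSON directly from the repository. The following is a minimal sketch using `huggingface_hub`, assuming the downloaded file mirrors the dictionary printed above; the file path is the one linked in the "Latest results" section:
```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results file for this run from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo-details",
    filename="cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-ipo/results_2024-11-19T21-27-07.770301.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Print the normalized accuracy for each BBH subtask,
# assuming the file has the same "all" structure shown above.
for task, metrics in results["all"].items():
    if task.startswith("leaderboard_bbh_"):
        print(task, metrics.get("acc_norm,none"))
```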
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc-details | open-llm-leaderboard | "2024-11-19T21:31:30Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T21:27:59Z" | ---
pretty_name: Evaluation run of cluebbers/Llama-3.1-8B-paraphrase-type-generation-etpc
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [cluebbers/Llama-3.1-8B-paraphrase-type-generation-etpc](https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-etpc)\n\
The dataset is composed of 38 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run. Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc-details\"\
,\n\tname=\"cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-19T21-27-59.247727](https://huggingface.co/datasets/open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc-details/blob/main/cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc/results_2024-11-19T21-27-59.247727.json)\
\ (note that there might be results for other tasks in the repository if successive evals\
\ didn't cover the same tasks; you can find each one in the \"results\" configuration and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.36619535607731224,\n \"acc_norm_stderr,none\"\
: 0.0052734901127333995,\n \"prompt_level_loose_acc,none\": 0.08317929759704251,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.01188373476400857,\n \
\ \"inst_level_strict_acc,none\": 0.17146282973621102,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.25556848404255317,\n \"acc_stderr,none\"\
: 0.003976629780041046,\n \"prompt_level_strict_acc,none\": 0.07024029574861368,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.010997187906692914,\n \
\ \"exact_match,none\": 0.004531722054380665,\n \"exact_match_stderr,none\"\
: 0.0018490874508892755,\n \"inst_level_loose_acc,none\": 0.19184652278177458,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.37858010762020483,\n \"acc_norm_stderr,none\": 0.006105180850940205,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.764,\n \"acc_norm_stderr,none\": 0.026909337594953852\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5294117647058824,\n\
\ \"acc_norm_stderr,none\": 0.03659829510813266\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.456,\n\
\ \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\":\
\ 0.03167708558254714\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.552,\n \
\ \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\":\
\ 0.02857695873043744\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\":\
\ 0.02857695873043744\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\":\
\ 0.031235856237014505\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.42,\n \
\ \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.2054794520547945,\n \"acc_norm_stderr,none\": 0.03355465401072847\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.248,\n \
\ \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.4550561797752809,\n \"acc_norm_stderr,none\"\
: 0.03743016495716991\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.256,\n\
\ \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.192,\n \"acc_norm_stderr,none\": 0.024960691989171963\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\":\
\ 0.021172081336336534\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.292,\n \"acc_norm_stderr,none\":\
\ 0.02881432040220563\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2651006711409396,\n\
\ \"acc_norm_stderr,none\": 0.012799282786329112,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2777777777777778,\n \"acc_norm_stderr,none\": 0.03191178226713548\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2600732600732601,\n\
\ \"acc_norm_stderr,none\": 0.018790743352015988\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.265625,\n \"acc_norm_stderr,none\"\
: 0.02089005840079951\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.07024029574861368,\n \"prompt_level_strict_acc_stderr,none\": 0.010997187906692914,\n\
\ \"inst_level_strict_acc,none\": 0.17146282973621102,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.08317929759704251,\n \"prompt_level_loose_acc_stderr,none\": 0.01188373476400857,\n\
\ \"inst_level_loose_acc,none\": 0.19184652278177458,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.004531722054380665,\n \"exact_match_stderr,none\"\
: 0.0018490874508892755,\n \"alias\": \" - leaderboard_math_hard\"\n\
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.003257328990228013,\n\
\ \"exact_match_stderr,none\": 0.003257328990228013\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.007575757575757576,\n\
\ \"exact_match_stderr,none\": 0.007575757575757577\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.006493506493506494,\n \"exact_match_stderr,none\"\
: 0.006493506493506494\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.0051813471502590676,\n \"exact_match_stderr,none\"\
: 0.0051813471502590676\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.007407407407407408,\n \"exact_match_stderr,none\"\
: 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.25556848404255317,\n\
\ \"acc_stderr,none\": 0.003976629780041046\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4312169312169312,\n \"acc_norm_stderr,none\"\
: 0.01788862311572941,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.512,\n\
\ \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.3515625,\n \"acc_norm_stderr,none\"\
: 0.029899590697818237\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.36619535607731224,\n\
\ \"acc_norm_stderr,none\": 0.0052734901127333995,\n \"prompt_level_loose_acc,none\"\
: 0.08317929759704251,\n \"prompt_level_loose_acc_stderr,none\": 0.01188373476400857,\n\
\ \"inst_level_strict_acc,none\": 0.17146282973621102,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.25556848404255317,\n \"acc_stderr,none\"\
: 0.003976629780041046,\n \"prompt_level_strict_acc,none\": 0.07024029574861368,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.010997187906692914,\n \
\ \"exact_match,none\": 0.004531722054380665,\n \"exact_match_stderr,none\"\
: 0.0018490874508892755,\n \"inst_level_loose_acc,none\": 0.19184652278177458,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.37858010762020483,\n\
\ \"acc_norm_stderr,none\": 0.006105180850940205,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.764,\n \"acc_norm_stderr,none\": 0.026909337594953852\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5294117647058824,\n \"acc_norm_stderr,none\"\
: 0.03659829510813266\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.2054794520547945,\n\
\ \"acc_norm_stderr,none\": 0.03355465401072847\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.4550561797752809,\n \"acc_norm_stderr,none\"\
: 0.03743016495716991\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.192,\n \"acc_norm_stderr,none\": 0.024960691989171963\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\": 0.021172081336336534\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.292,\n \"acc_norm_stderr,none\": 0.02881432040220563\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2651006711409396,\n\
\ \"acc_norm_stderr,none\": 0.012799282786329112,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2777777777777778,\n\
\ \"acc_norm_stderr,none\": 0.03191178226713548\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2600732600732601,\n \"acc_norm_stderr,none\": 0.018790743352015988\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.265625,\n \"acc_norm_stderr,none\": 0.02089005840079951\n\
\ },\n \"leaderboard_ifeval\": {\n \"alias\": \" - leaderboard_ifeval\"\
,\n \"prompt_level_strict_acc,none\": 0.07024029574861368,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.010997187906692914,\n \"inst_level_strict_acc,none\": 0.17146282973621102,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.08317929759704251,\n \"prompt_level_loose_acc_stderr,none\": 0.01188373476400857,\n\
\ \"inst_level_loose_acc,none\": 0.19184652278177458,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.004531722054380665,\n \"exact_match_stderr,none\": 0.0018490874508892755,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.003257328990228013,\n \"exact_match_stderr,none\": 0.003257328990228013\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.007575757575757576,\n \"exact_match_stderr,none\"\
: 0.007575757575757577\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.006493506493506494,\n \"exact_match_stderr,none\"\
: 0.006493506493506494\n },\n \"leaderboard_math_prealgebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.0051813471502590676,\n \"exact_match_stderr,none\": 0.0051813471502590676\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.007407407407407408,\n\
\ \"exact_match_stderr,none\": 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.25556848404255317,\n\
\ \"acc_stderr,none\": 0.003976629780041046\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4312169312169312,\n \"acc_norm_stderr,none\"\
: 0.01788862311572941,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.3515625,\n\
\ \"acc_norm_stderr,none\": 0.029899590697818237\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n }\n}\n```"
repo_url: https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-etpc
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_navigate
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_snarks
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_gpqa_extended
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_gpqa_main
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_ifeval
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_ifeval_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_mmlu_pro
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_musr_object_placements
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T21-27-59.247727.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_19T21_27_59.247727
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T21-27-59.247727.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T21-27-59.247727.jsonl'
---
# Dataset Card for Evaluation run of cluebbers/Llama-3.1-8B-paraphrase-type-generation-etpc
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [cluebbers/Llama-3.1-8B-paraphrase-type-generation-etpc](https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-etpc)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc-details",
name="cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
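Because this repository exposes 38 configurations (one per task), it can be handy to enumerate them programmatically before picking one (a small sketch using the standard `datasets` helper):
```python
from datasets import get_dataset_config_names

# List every per-task configuration available in this details repository.
configs = get_dataset_config_names(
    "open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc-details"
)
for name in configs:
    print(name)
```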
## Latest results
These are the [latest results from run 2024-11-19T21-27-59.247727](https://huggingface.co/datasets/open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc-details/blob/main/cluebbers__Llama-3.1-8B-paraphrase-type-generation-etpc/results_2024-11-19T21-27-59.247727.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.36619535607731224,
"acc_norm_stderr,none": 0.0052734901127333995,
"prompt_level_loose_acc,none": 0.08317929759704251,
"prompt_level_loose_acc_stderr,none": 0.01188373476400857,
"inst_level_strict_acc,none": 0.17146282973621102,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.25556848404255317,
"acc_stderr,none": 0.003976629780041046,
"prompt_level_strict_acc,none": 0.07024029574861368,
"prompt_level_strict_acc_stderr,none": 0.010997187906692914,
"exact_match,none": 0.004531722054380665,
"exact_match_stderr,none": 0.0018490874508892755,
"inst_level_loose_acc,none": 0.19184652278177458,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.37858010762020483,
"acc_norm_stderr,none": 0.006105180850940205,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.764,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5294117647058824,
"acc_norm_stderr,none": 0.03659829510813266
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2054794520547945,
"acc_norm_stderr,none": 0.03355465401072847
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.4550561797752809,
"acc_norm_stderr,none": 0.03743016495716991
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.192,
"acc_norm_stderr,none": 0.024960691989171963
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.292,
"acc_norm_stderr,none": 0.02881432040220563
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2651006711409396,
"acc_norm_stderr,none": 0.012799282786329112,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2777777777777778,
"acc_norm_stderr,none": 0.03191178226713548
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2600732600732601,
"acc_norm_stderr,none": 0.018790743352015988
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.265625,
"acc_norm_stderr,none": 0.02089005840079951
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.07024029574861368,
"prompt_level_strict_acc_stderr,none": 0.010997187906692914,
"inst_level_strict_acc,none": 0.17146282973621102,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.08317929759704251,
"prompt_level_loose_acc_stderr,none": 0.01188373476400857,
"inst_level_loose_acc,none": 0.19184652278177458,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.004531722054380665,
"exact_match_stderr,none": 0.0018490874508892755,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.003257328990228013,
"exact_match_stderr,none": 0.003257328990228013
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.006493506493506494,
"exact_match_stderr,none": 0.006493506493506494
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0051813471502590676,
"exact_match_stderr,none": 0.0051813471502590676
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.25556848404255317,
"acc_stderr,none": 0.003976629780041046
},
"leaderboard_musr": {
"acc_norm,none": 0.4312169312169312,
"acc_norm_stderr,none": 0.01788862311572941,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.3515625,
"acc_norm_stderr,none": 0.029899590697818237
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
},
"leaderboard": {
"acc_norm,none": 0.36619535607731224,
"acc_norm_stderr,none": 0.0052734901127333995,
"prompt_level_loose_acc,none": 0.08317929759704251,
"prompt_level_loose_acc_stderr,none": 0.01188373476400857,
"inst_level_strict_acc,none": 0.17146282973621102,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.25556848404255317,
"acc_stderr,none": 0.003976629780041046,
"prompt_level_strict_acc,none": 0.07024029574861368,
"prompt_level_strict_acc_stderr,none": 0.010997187906692914,
"exact_match,none": 0.004531722054380665,
"exact_match_stderr,none": 0.0018490874508892755,
"inst_level_loose_acc,none": 0.19184652278177458,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.37858010762020483,
"acc_norm_stderr,none": 0.006105180850940205,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.764,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5294117647058824,
"acc_norm_stderr,none": 0.03659829510813266
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2054794520547945,
"acc_norm_stderr,none": 0.03355465401072847
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.4550561797752809,
"acc_norm_stderr,none": 0.03743016495716991
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.192,
"acc_norm_stderr,none": 0.024960691989171963
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.292,
"acc_norm_stderr,none": 0.02881432040220563
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2651006711409396,
"acc_norm_stderr,none": 0.012799282786329112,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2777777777777778,
"acc_norm_stderr,none": 0.03191178226713548
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2600732600732601,
"acc_norm_stderr,none": 0.018790743352015988
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.265625,
"acc_norm_stderr,none": 0.02089005840079951
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.07024029574861368,
"prompt_level_strict_acc_stderr,none": 0.010997187906692914,
"inst_level_strict_acc,none": 0.17146282973621102,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.08317929759704251,
"prompt_level_loose_acc_stderr,none": 0.01188373476400857,
"inst_level_loose_acc,none": 0.19184652278177458,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.004531722054380665,
"exact_match_stderr,none": 0.0018490874508892755,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.003257328990228013,
"exact_match_stderr,none": 0.003257328990228013
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.006493506493506494,
"exact_match_stderr,none": 0.006493506493506494
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0051813471502590676,
"exact_match_stderr,none": 0.0051813471502590676
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.25556848404255317,
"acc_stderr,none": 0.003976629780041046
},
"leaderboard_musr": {
"acc_norm,none": 0.4312169312169312,
"acc_norm_stderr,none": 0.01788862311572941,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.3515625,
"acc_norm_stderr,none": 0.029899590697818237
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
}
```
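The dict above nests per-task scores under the "all" key, with each metric name carrying the filter used as a suffix (",none" here). A short sketch for pulling one task's numbers out, assuming the JSON has been loaded into `results` as in the earlier snippet:
```python
# Per-task scores live under "all"; metric keys carry a ",none" filter suffix.
bbh_bool = results["all"]["leaderboard_bbh_boolean_expressions"]
print(bbh_bool["acc_norm,none"])         # 0.764
print(bbh_bool["acc_norm_stderr,none"])  # 0.0269...
```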
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/vonjack__SmolLM2-1.7B-Merged-details | open-llm-leaderboard | "2024-11-19T21:48:08Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T21:44:27Z" | ---
pretty_name: Evaluation run of vonjack/SmolLM2-1.7B-Merged
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [vonjack/SmolLM2-1.7B-Merged](https://huggingface.co/vonjack/SmolLM2-1.7B-Merged)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/vonjack__SmolLM2-1.7B-Merged-details\"\
,\n\tname=\"vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-19T21-44-26.742480](https://huggingface.co/datasets/open-llm-leaderboard/vonjack__SmolLM2-1.7B-Merged-details/blob/main/vonjack__SmolLM2-1.7B-Merged/results_2024-11-19T21-44-26.742480.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks; you can find each one in the results and in the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_strict_acc,none\": 0.3031423290203327,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.019778732375985417,\n \"\
inst_level_loose_acc,none\": 0.4592326139088729,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.045317220543806644,\n \
\ \"exact_match_stderr,none\": 0.005596079941752634,\n \"inst_level_strict_acc,none\"\
: 0.4364508393285372,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"prompt_level_loose_acc,none\": 0.32162661737523107,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.020100807813645375,\n \"\
acc,none\": 0.2047872340425532,\n \"acc_stderr,none\": 0.0036791055578188313,\n\
\ \"acc_norm,none\": 0.34193799455182255,\n \"acc_norm_stderr,none\"\
: 0.00513143649230213,\n \"alias\": \"leaderboard\"\n },\n \
\ \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.3551466759243187,\n\
\ \"acc_norm_stderr,none\": 0.0059334827105570745,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5721925133689839,\n\
\ \"acc_norm_stderr,none\": 0.03627762136497335\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\":\
\ 0.031636489531544396\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.536,\n\
\ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\":\
\ 0.029752391824475363\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\":\
\ 0.03160397514522374\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.3219178082191781,\n \"acc_norm_stderr,none\": 0.038799816296271356\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.136,\n \
\ \"acc_norm_stderr,none\": 0.021723342617052086\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.5056179775280899,\n \"acc_norm_stderr,none\"\
: 0.03757992900475984\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.052,\n\
\ \"acc_norm_stderr,none\": 0.014070391025641678\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\": 0.021172081336336534\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.356,\n \"acc_norm_stderr,none\":\
\ 0.0303436806571532\n },\n \"leaderboard_bbh_web_of_lies\": {\n \
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.27936241610738255,\n\
\ \"acc_norm_stderr,none\": 0.01301112498383348,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2878787878787879,\n \"acc_norm_stderr,none\": 0.03225883512300998\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.28205128205128205,\n\
\ \"acc_norm_stderr,none\": 0.019275803929950375\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.27232142857142855,\n \"acc_norm_stderr,none\"\
: 0.02105508212932411\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.3031423290203327,\n \"prompt_level_strict_acc_stderr,none\": 0.019778732375985417,\n\
\ \"inst_level_strict_acc,none\": 0.4364508393285372,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.32162661737523107,\n \"prompt_level_loose_acc_stderr,none\": 0.020100807813645375,\n\
\ \"inst_level_loose_acc,none\": 0.4592326139088729,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.045317220543806644,\n \"exact_match_stderr,none\"\
: 0.005596079941752634,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.11074918566775244,\n\
\ \"exact_match_stderr,none\": 0.017939969209400983\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.007575757575757576,\n\
\ \"exact_match_stderr,none\": 0.007575757575757577\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.017857142857142856,\n \"exact_match_stderr,none\": 0.007928503387888855\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.012987012987012988,\n\
\ \"exact_match_stderr,none\": 0.009153145279150204\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.08808290155440414,\n \"exact_match_stderr,none\"\
: 0.020453746601601056\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.2047872340425532,\n \"acc_stderr,none\"\
: 0.0036791055578188313\n },\n \"leaderboard_musr\": {\n \
\ \"acc_norm,none\": 0.33994708994708994,\n \"acc_norm_stderr,none\"\
: 0.016513530548897988,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.536,\n\
\ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.234375,\n \"acc_norm_stderr,none\"\
: 0.02652733398834892\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_strict_acc,none\"\
: 0.3031423290203327,\n \"prompt_level_strict_acc_stderr,none\": 0.019778732375985417,\n\
\ \"inst_level_loose_acc,none\": 0.4592326139088729,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.045317220543806644,\n \"exact_match_stderr,none\"\
: 0.005596079941752634,\n \"inst_level_strict_acc,none\": 0.4364508393285372,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.32162661737523107,\n \"prompt_level_loose_acc_stderr,none\": 0.020100807813645375,\n\
\ \"acc,none\": 0.2047872340425532,\n \"acc_stderr,none\": 0.0036791055578188313,\n\
\ \"acc_norm,none\": 0.34193799455182255,\n \"acc_norm_stderr,none\"\
: 0.00513143649230213,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.3551466759243187,\n \"acc_norm_stderr,none\"\
: 0.0059334827105570745,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5721925133689839,\n \"acc_norm_stderr,none\"\
: 0.03627762136497335\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3219178082191781,\n\
\ \"acc_norm_stderr,none\": 0.038799816296271356\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.5056179775280899,\n \"acc_norm_stderr,none\"\
: 0.03757992900475984\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.052,\n \"acc_norm_stderr,none\": 0.014070391025641678\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\": 0.021172081336336534\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.356,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.27936241610738255,\n\
\ \"acc_norm_stderr,none\": 0.01301112498383348,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2878787878787879,\n\
\ \"acc_norm_stderr,none\": 0.03225883512300998\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.28205128205128205,\n \"acc_norm_stderr,none\": 0.019275803929950375\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.27232142857142855,\n \"acc_norm_stderr,none\"\
: 0.02105508212932411\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.3031423290203327,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.019778732375985417,\n \
\ \"inst_level_strict_acc,none\": 0.4364508393285372,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.32162661737523107,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.020100807813645375,\n \"inst_level_loose_acc,none\"\
: 0.4592326139088729,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.045317220543806644,\n\
\ \"exact_match_stderr,none\": 0.005596079941752634,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.11074918566775244,\n \"exact_match_stderr,none\": 0.017939969209400983\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.007575757575757576,\n \"exact_match_stderr,none\"\
: 0.007575757575757577\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.017857142857142856,\n \"exact_match_stderr,none\"\
: 0.007928503387888855\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.012987012987012988,\n \"exact_match_stderr,none\": 0.009153145279150204\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.08808290155440414,\n \"exact_match_stderr,none\"\
: 0.020453746601601056\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.2047872340425532,\n\
\ \"acc_stderr,none\": 0.0036791055578188313\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.33994708994708994,\n \"acc_norm_stderr,none\"\
: 0.016513530548897988,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.234375,\n\
\ \"acc_norm_stderr,none\": 0.02652733398834892\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n }\n}\n```"
repo_url: https://huggingface.co/vonjack/SmolLM2-1.7B-Merged
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_navigate
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_snarks
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_gpqa_extended
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_gpqa_main
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_ifeval
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_ifeval_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_mmlu_pro
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_musr_object_placements
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T21-44-26.742480.jsonl'
- config_name: vonjack__SmolLM2-1.7B-Merged__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_19T21_44_26.742480
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T21-44-26.742480.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T21-44-26.742480.jsonl'
---
# Dataset Card for Evaluation run of vonjack/SmolLM2-1.7B-Merged
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [vonjack/SmolLM2-1.7B-Merged](https://huggingface.co/vonjack/SmolLM2-1.7B-Merged)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/vonjack__SmolLM2-1.7B-Merged-details",
name="vonjack__SmolLM2-1.7B-Merged__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
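The per-sample schema differs from task to task, so it is worth inspecting a split before processing it. A small sketch continuing from the snippet above (the printed field names are whatever the eval harness logged, not assumptions made here):
```python
# `data` is the Dataset returned by load_dataset(..., split="latest") above.
print(len(data))          # number of evaluated samples for this task
print(data.column_names)  # per-sample fields logged by the harness
print(data[0])            # first record as a plain dict
```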
## Latest results
These are the [latest results from run 2024-11-19T21-44-26.742480](https://huggingface.co/datasets/open-llm-leaderboard/vonjack__SmolLM2-1.7B-Merged-details/blob/main/vonjack__SmolLM2-1.7B-Merged/results_2024-11-19T21-44-26.742480.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_strict_acc,none": 0.3031423290203327,
"prompt_level_strict_acc_stderr,none": 0.019778732375985417,
"inst_level_loose_acc,none": 0.4592326139088729,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.045317220543806644,
"exact_match_stderr,none": 0.005596079941752634,
"inst_level_strict_acc,none": 0.4364508393285372,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.32162661737523107,
"prompt_level_loose_acc_stderr,none": 0.020100807813645375,
"acc,none": 0.2047872340425532,
"acc_stderr,none": 0.0036791055578188313,
"acc_norm,none": 0.34193799455182255,
"acc_norm_stderr,none": 0.00513143649230213,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.3551466759243187,
"acc_norm_stderr,none": 0.0059334827105570745,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5721925133689839,
"acc_norm_stderr,none": 0.03627762136497335
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3219178082191781,
"acc_norm_stderr,none": 0.038799816296271356
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5056179775280899,
"acc_norm_stderr,none": 0.03757992900475984
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.052,
"acc_norm_stderr,none": 0.014070391025641678
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.27936241610738255,
"acc_norm_stderr,none": 0.01301112498383348,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2878787878787879,
"acc_norm_stderr,none": 0.03225883512300998
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.27232142857142855,
"acc_norm_stderr,none": 0.02105508212932411
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3031423290203327,
"prompt_level_strict_acc_stderr,none": 0.019778732375985417,
"inst_level_strict_acc,none": 0.4364508393285372,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.32162661737523107,
"prompt_level_loose_acc_stderr,none": 0.020100807813645375,
"inst_level_loose_acc,none": 0.4592326139088729,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.045317220543806644,
"exact_match_stderr,none": 0.005596079941752634,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.11074918566775244,
"exact_match_stderr,none": 0.017939969209400983
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.017857142857142856,
"exact_match_stderr,none": 0.007928503387888855
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.012987012987012988,
"exact_match_stderr,none": 0.009153145279150204
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.08808290155440414,
"exact_match_stderr,none": 0.020453746601601056
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2047872340425532,
"acc_stderr,none": 0.0036791055578188313
},
"leaderboard_musr": {
"acc_norm,none": 0.33994708994708994,
"acc_norm_stderr,none": 0.016513530548897988,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.234375,
"acc_norm_stderr,none": 0.02652733398834892
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
}
},
"leaderboard": {
"prompt_level_strict_acc,none": 0.3031423290203327,
"prompt_level_strict_acc_stderr,none": 0.019778732375985417,
"inst_level_loose_acc,none": 0.4592326139088729,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.045317220543806644,
"exact_match_stderr,none": 0.005596079941752634,
"inst_level_strict_acc,none": 0.4364508393285372,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.32162661737523107,
"prompt_level_loose_acc_stderr,none": 0.020100807813645375,
"acc,none": 0.2047872340425532,
"acc_stderr,none": 0.0036791055578188313,
"acc_norm,none": 0.34193799455182255,
"acc_norm_stderr,none": 0.00513143649230213,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.3551466759243187,
"acc_norm_stderr,none": 0.0059334827105570745,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5721925133689839,
"acc_norm_stderr,none": 0.03627762136497335
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3219178082191781,
"acc_norm_stderr,none": 0.038799816296271356
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5056179775280899,
"acc_norm_stderr,none": 0.03757992900475984
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.052,
"acc_norm_stderr,none": 0.014070391025641678
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.27936241610738255,
"acc_norm_stderr,none": 0.01301112498383348,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2878787878787879,
"acc_norm_stderr,none": 0.03225883512300998
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.27232142857142855,
"acc_norm_stderr,none": 0.02105508212932411
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3031423290203327,
"prompt_level_strict_acc_stderr,none": 0.019778732375985417,
"inst_level_strict_acc,none": 0.4364508393285372,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.32162661737523107,
"prompt_level_loose_acc_stderr,none": 0.020100807813645375,
"inst_level_loose_acc,none": 0.4592326139088729,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.045317220543806644,
"exact_match_stderr,none": 0.005596079941752634,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.11074918566775244,
"exact_match_stderr,none": 0.017939969209400983
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.017857142857142856,
"exact_match_stderr,none": 0.007928503387888855
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.012987012987012988,
"exact_match_stderr,none": 0.009153145279150204
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.08808290155440414,
"exact_match_stderr,none": 0.020453746601601056
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2047872340425532,
"acc_stderr,none": 0.0036791055578188313
},
"leaderboard_musr": {
"acc_norm,none": 0.33994708994708994,
"acc_norm_stderr,none": 0.016513530548897988,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.234375,
"acc_norm_stderr,none": 0.02652733398834892
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
}
}
```
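For reference, here is a minimal sketch of how per-subtask scores in a results dict shaped like the JSON above could be summarized. It assumes the dict has been saved locally as `results.json` (a hypothetical filename) and follows the key layout shown here; it is an illustration, not part of the evaluation tooling.
```python
import json

# Load a results dict shaped like the JSON above (hypothetical local copy).
with open("results.json") as f:
    results = json.load(f)

# Average normalized accuracy over all BBH subtasks; the "leaderboard_bbh"
# entry itself (no trailing underscore) is the precomputed aggregate.
bbh_scores = {
    name: metrics["acc_norm,none"]
    for name, metrics in results.items()
    if name.startswith("leaderboard_bbh_") and "acc_norm,none" in metrics
}
print(f"BBH subtasks: {len(bbh_scores)}, "
      f"mean acc_norm: {sum(bbh_scores.values()) / len(bbh_scores):.4f}")
```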
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid-details | open-llm-leaderboard | "2024-11-19T22:06:04Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T22:02:56Z" | ---
pretty_name: Evaluation run of cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid](https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid-details\"\
,\n\tname=\"cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-19T22-02-56.232495](https://huggingface.co/datasets/open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid-details/blob/main/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid/results_2024-11-19T22-02-56.232495.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks. You can find each in the results and in the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.09242144177449169,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.012463258511767319,\n \"\
inst_level_loose_acc,none\": 0.19664268585131894,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.2562333776595745,\n \"acc_stderr,none\"\
: 0.003980020690139148,\n \"exact_match,none\": 0.006797583081570997,\n\
\ \"exact_match_stderr,none\": 0.0022605202818857095,\n \"\
acc_norm,none\": 0.3673628226747957,\n \"acc_norm_stderr,none\": 0.005279715115717286,\n\
\ \"prompt_level_strict_acc,none\": 0.08502772643253234,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.012002932679468982,\n \"\
inst_level_strict_acc,none\": 0.17865707434052758,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.37962159347335533,\n \"acc_norm_stderr,none\"\
: 0.006111875587492996,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.752,\n\
\ \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5240641711229946,\n \"acc_norm_stderr,none\"\
: 0.03661929361528698\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.464,\n\
\ \"acc_norm_stderr,none\": 0.03160397514522374\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\"\
: \" - leaderboard_bbh_geometric_shapes\",\n \"acc_norm,none\": 0.324,\n\
\ \"acc_norm_stderr,none\": 0.029658294924545567\n },\n \
\ \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\":\
\ 0.03139181076542941\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n\
\ \"acc_norm,none\": 0.272,\n \"acc_norm_stderr,none\": 0.028200088296309975\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.376,\n\
\ \"acc_norm_stderr,none\": 0.03069633626739458\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.2054794520547945,\n \"acc_norm_stderr,none\"\
: 0.03355465401072847\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.28,\n \"acc_norm_stderr,none\": 0.02845414827783231\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.228,\n \
\ \"acc_norm_stderr,none\": 0.026587432487268498\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.4438202247191011,\n \"acc_norm_stderr,none\"\
: 0.03734431584194247\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.24,\n\
\ \"acc_norm_stderr,none\": 0.027065293652238982\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\":\
\ 0.022249407735450245\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.488,\n\
\ \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2684563758389262,\n\
\ \"acc_norm_stderr,none\": 0.01284985608403652,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2727272727272727,\n \"acc_norm_stderr,none\": 0.03173071239071728\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2600732600732601,\n\
\ \"acc_norm_stderr,none\": 0.018790743352015988\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2767857142857143,\n \"acc_norm_stderr,none\"\
: 0.021161749643954904\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.08502772643253234,\n \"prompt_level_strict_acc_stderr,none\": 0.012002932679468982,\n\
\ \"inst_level_strict_acc,none\": 0.17865707434052758,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.09242144177449169,\n \"prompt_level_loose_acc_stderr,none\": 0.012463258511767319,\n\
\ \"inst_level_loose_acc,none\": 0.19664268585131894,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.006797583081570997,\n \"exact_match_stderr,none\"\
: 0.0022605202818857095,\n \"alias\": \" - leaderboard_math_hard\"\n\
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.009771986970684038,\n\
\ \"exact_match_stderr,none\": 0.005623391633915856\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0035714285714285713,\n \"exact_match_stderr,none\"\
: 0.0035714285714285713\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.006493506493506494,\n \"exact_match_stderr,none\"\
: 0.006493506493506494\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\
: 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.014814814814814815,\n \"exact_match_stderr,none\"\
: 0.010436494549594376\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.2562333776595745,\n\
\ \"acc_stderr,none\": 0.003980020690139148\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4298941798941799,\n \"acc_norm_stderr,none\"\
: 0.017856543173457542,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.52,\n\
\ \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.34765625,\n \"acc_norm_stderr,none\"\
: 0.029822440677853123\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.09242144177449169,\n \"prompt_level_loose_acc_stderr,none\": 0.012463258511767319,\n\
\ \"inst_level_loose_acc,none\": 0.19664268585131894,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.2562333776595745,\n \"acc_stderr,none\"\
: 0.003980020690139148,\n \"exact_match,none\": 0.006797583081570997,\n \
\ \"exact_match_stderr,none\": 0.0022605202818857095,\n \"acc_norm,none\"\
: 0.3673628226747957,\n \"acc_norm_stderr,none\": 0.005279715115717286,\n\
\ \"prompt_level_strict_acc,none\": 0.08502772643253234,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.012002932679468982,\n \"inst_level_strict_acc,none\": 0.17865707434052758,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\": \"\
leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.37962159347335533,\n\
\ \"acc_norm_stderr,none\": 0.006111875587492996,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.752,\n \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5240641711229946,\n \"acc_norm_stderr,none\"\
: 0.03661929361528698\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.272,\n \"acc_norm_stderr,none\": 0.028200088296309975\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.2054794520547945,\n\
\ \"acc_norm_stderr,none\": 0.03355465401072847\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.28,\n \"acc_norm_stderr,none\": 0.02845414827783231\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.228,\n \"acc_norm_stderr,none\": 0.026587432487268498\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.4438202247191011,\n \"acc_norm_stderr,none\"\
: 0.03734431584194247\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2684563758389262,\n\
\ \"acc_norm_stderr,none\": 0.01284985608403652,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2727272727272727,\n\
\ \"acc_norm_stderr,none\": 0.03173071239071728\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2600732600732601,\n \"acc_norm_stderr,none\": 0.018790743352015988\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2767857142857143,\n \"acc_norm_stderr,none\"\
: 0.021161749643954904\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.08502772643253234,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.012002932679468982,\n \
\ \"inst_level_strict_acc,none\": 0.17865707434052758,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.09242144177449169,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.012463258511767319,\n \"inst_level_loose_acc,none\"\
: 0.19664268585131894,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.006797583081570997,\n\
\ \"exact_match_stderr,none\": 0.0022605202818857095,\n \"alias\"\
: \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.009771986970684038,\n \"exact_match_stderr,none\": 0.005623391633915856\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0035714285714285713,\n \"exact_match_stderr,none\"\
: 0.0035714285714285713\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.006493506493506494,\n \"exact_match_stderr,none\": 0.006493506493506494\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\
: 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.014814814814814815,\n \"exact_match_stderr,none\": 0.010436494549594376\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.2562333776595745,\n \"acc_stderr,none\": 0.003980020690139148\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4298941798941799,\n\
\ \"acc_norm_stderr,none\": 0.017856543173457542,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"leaderboard_musr_object_placements\"\
: {\n \"alias\": \" - leaderboard_musr_object_placements\",\n \"\
acc_norm,none\": 0.34765625,\n \"acc_norm_stderr,none\": 0.029822440677853123\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ }\n}\n```"
repo_url: https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_navigate
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_snarks
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_gpqa_extended
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_gpqa_main
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_ifeval
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_ifeval_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_mmlu_pro
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_musr_object_placements
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T22-02-56.232495.jsonl'
- config_name: cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_19T22_02_56.232495
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T22-02-56.232495.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T22-02-56.232495.jsonl'
---
# Dataset Card for Evaluation run of cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid](https://huggingface.co/cluebbers/Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid-details",
name="cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
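As a quick sanity check, the returned `data` object is a regular `datasets.Dataset` and can be inspected directly (a minimal sketch; the exact per-sample fields depend on the harness version, so treat the printed keys as informative rather than fixed):
```python
# `data` is the Dataset loaded above.
print(len(data))       # number of evaluated samples in the "latest" split
print(data[0].keys())  # per-sample fields (harness-dependent)
```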
## Latest results
These are the [latest results from run 2024-11-19T22-02-56.232495](https://huggingface.co/datasets/open-llm-leaderboard/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid-details/blob/main/cluebbers__Llama-3.1-8B-paraphrase-type-generation-apty-sigmoid/results_2024-11-19T22-02-56.232495.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.09242144177449169,
"prompt_level_loose_acc_stderr,none": 0.012463258511767319,
"inst_level_loose_acc,none": 0.19664268585131894,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.2562333776595745,
"acc_stderr,none": 0.003980020690139148,
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.0022605202818857095,
"acc_norm,none": 0.3673628226747957,
"acc_norm_stderr,none": 0.005279715115717286,
"prompt_level_strict_acc,none": 0.08502772643253234,
"prompt_level_strict_acc_stderr,none": 0.012002932679468982,
"inst_level_strict_acc,none": 0.17865707434052758,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.37962159347335533,
"acc_norm_stderr,none": 0.006111875587492996,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5240641711229946,
"acc_norm_stderr,none": 0.03661929361528698
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.272,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2054794520547945,
"acc_norm_stderr,none": 0.03355465401072847
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.28,
"acc_norm_stderr,none": 0.02845414827783231
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.228,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.4438202247191011,
"acc_norm_stderr,none": 0.03734431584194247
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2684563758389262,
"acc_norm_stderr,none": 0.01284985608403652,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2727272727272727,
"acc_norm_stderr,none": 0.03173071239071728
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2600732600732601,
"acc_norm_stderr,none": 0.018790743352015988
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2767857142857143,
"acc_norm_stderr,none": 0.021161749643954904
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.08502772643253234,
"prompt_level_strict_acc_stderr,none": 0.012002932679468982,
"inst_level_strict_acc,none": 0.17865707434052758,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.09242144177449169,
"prompt_level_loose_acc_stderr,none": 0.012463258511767319,
"inst_level_loose_acc,none": 0.19664268585131894,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.0022605202818857095,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.009771986970684038,
"exact_match_stderr,none": 0.005623391633915856
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.006493506493506494,
"exact_match_stderr,none": 0.006493506493506494
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.014814814814814815,
"exact_match_stderr,none": 0.010436494549594376
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2562333776595745,
"acc_stderr,none": 0.003980020690139148
},
"leaderboard_musr": {
"acc_norm,none": 0.4298941798941799,
"acc_norm_stderr,none": 0.017856543173457542,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.34765625,
"acc_norm_stderr,none": 0.029822440677853123
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.09242144177449169,
"prompt_level_loose_acc_stderr,none": 0.012463258511767319,
"inst_level_loose_acc,none": 0.19664268585131894,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.2562333776595745,
"acc_stderr,none": 0.003980020690139148,
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.0022605202818857095,
"acc_norm,none": 0.3673628226747957,
"acc_norm_stderr,none": 0.005279715115717286,
"prompt_level_strict_acc,none": 0.08502772643253234,
"prompt_level_strict_acc_stderr,none": 0.012002932679468982,
"inst_level_strict_acc,none": 0.17865707434052758,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.37962159347335533,
"acc_norm_stderr,none": 0.006111875587492996,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5240641711229946,
"acc_norm_stderr,none": 0.03661929361528698
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.272,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2054794520547945,
"acc_norm_stderr,none": 0.03355465401072847
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.28,
"acc_norm_stderr,none": 0.02845414827783231
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.228,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.4438202247191011,
"acc_norm_stderr,none": 0.03734431584194247
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2684563758389262,
"acc_norm_stderr,none": 0.01284985608403652,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2727272727272727,
"acc_norm_stderr,none": 0.03173071239071728
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2600732600732601,
"acc_norm_stderr,none": 0.018790743352015988
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2767857142857143,
"acc_norm_stderr,none": 0.021161749643954904
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.08502772643253234,
"prompt_level_strict_acc_stderr,none": 0.012002932679468982,
"inst_level_strict_acc,none": 0.17865707434052758,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.09242144177449169,
"prompt_level_loose_acc_stderr,none": 0.012463258511767319,
"inst_level_loose_acc,none": 0.19664268585131894,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.0022605202818857095,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.009771986970684038,
"exact_match_stderr,none": 0.005623391633915856
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.006493506493506494,
"exact_match_stderr,none": 0.006493506493506494
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.014814814814814815,
"exact_match_stderr,none": 0.010436494549594376
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2562333776595745,
"acc_stderr,none": 0.003980020690139148
},
"leaderboard_musr": {
"acc_norm,none": 0.4298941798941799,
"acc_norm_stderr,none": 0.017856543173457542,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.34765625,
"acc_norm_stderr,none": 0.029822440677853123
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
}
```
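As a convenience, here is a minimal sketch (not part of the original card) of how one might flatten the per-task scores shown above into a quick summary table. It assumes the results dict has been saved locally as `results.json`; that file name is illustrative, not from the card.
```python
import json

# Minimal sketch, assuming the results dict shown above was saved
# locally as results.json (illustrative file name).
with open("results.json") as f:
    results = json.load(f)

# Top-level keys other than "all" are individual task entries; print
# every task that reports a normalized accuracy.
for task, metrics in results.items():
    if task == "all" or not isinstance(metrics, dict):
        continue
    score = metrics.get("acc_norm,none")
    if score is not None:
        print(f"{task:<60} {score:.3f}")
```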
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
YangZhoumill/factor_medium_64k | YangZhoumill | "2024-11-19T22:10:20Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T22:06:22Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: question
dtype: string
- name: solution
dtype: string
- name: op
dtype: int64
- name: id
dtype: int64
- name: template
dtype: string
- name: mode
dtype: string
- name: length
dtype: string
- name: d
dtype: int64
splits:
- name: ops_2
num_bytes: 138096441
num_examples: 480
- name: ops_3
num_bytes: 136357405
num_examples: 480
- name: ops_4
num_bytes: 132803657
num_examples: 480
- name: ops_5
num_bytes: 128621002
num_examples: 480
- name: ops_6
num_bytes: 317065486
num_examples: 1159
- name: ops_7
num_bytes: 127027447
num_examples: 492
- name: ops_8
num_bytes: 258260134
num_examples: 1003
- name: ops_9
num_bytes: 215930558
num_examples: 826
- name: ops_10
num_bytes: 186097498
num_examples: 759
- name: ops_11
num_bytes: 183021458
num_examples: 711
- name: ops_12
num_bytes: 180492543
num_examples: 692
- name: ops_13
num_bytes: 168159764
num_examples: 646
- name: ops_14
num_bytes: 151505080
num_examples: 571
- name: ops_15
num_bytes: 172408278
num_examples: 719
- name: ops_16
num_bytes: 164422053
num_examples: 678
- name: ops_17
num_bytes: 164290988
num_examples: 649
- name: ops_18
num_bytes: 156514421
num_examples: 610
- name: ops_19
num_bytes: 133023791
num_examples: 586
- name: ops_20
num_bytes: 125417848
num_examples: 538
download_size: 862021789
dataset_size: 3239515852
configs:
- config_name: default
data_files:
- split: ops_2
path: data/ops_2-*
- split: ops_3
path: data/ops_3-*
- split: ops_4
path: data/ops_4-*
- split: ops_5
path: data/ops_5-*
- split: ops_6
path: data/ops_6-*
- split: ops_7
path: data/ops_7-*
- split: ops_8
path: data/ops_8-*
- split: ops_9
path: data/ops_9-*
- split: ops_10
path: data/ops_10-*
- split: ops_11
path: data/ops_11-*
- split: ops_12
path: data/ops_12-*
- split: ops_13
path: data/ops_13-*
- split: ops_14
path: data/ops_14-*
- split: ops_15
path: data/ops_15-*
- split: ops_16
path: data/ops_16-*
- split: ops_17
path: data/ops_17-*
- split: ops_18
path: data/ops_18-*
- split: ops_19
path: data/ops_19-*
- split: ops_20
path: data/ops_20-*
---
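A minimal usage sketch (an assumption, not part of the original card): each `ops_<n>` split declared above can be loaded directly by name, and rows expose the fields listed under `dataset_info.features`.
```python
from datasets import load_dataset

# Minimal sketch: load a single operator-count split by one of the
# "ops_<n>" names declared in the YAML config above.
data = load_dataset("YangZhoumill/factor_medium_64k", split="ops_2")

row = data[0]
# Field names come from the dataset_info features listed above.
print(row["op"], row["length"], row["question"][:120])
```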
|
open-llm-leaderboard/icefog72__Ice0.37-18.11-RP-details | open-llm-leaderboard | "2024-11-19T23:59:42Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T23:56:46Z" | ---
pretty_name: Evaluation run of icefog72/Ice0.37-18.11-RP
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [icefog72/Ice0.37-18.11-RP](https://huggingface.co/icefog72/Ice0.37-18.11-RP)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/icefog72__Ice0.37-18.11-RP-details\"\
,\n\tname=\"icefog72__Ice0.37-18.11-RP__leaderboard_bbh_boolean_expressions\",\n\
\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-19T23-56-46.205594](https://huggingface.co/datasets/open-llm-leaderboard/icefog72__Ice0.37-18.11-RP-details/blob/main/icefog72__Ice0.37-18.11-RP/results_2024-11-19T23-56-46.205594.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks; you can find each one in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc,none\": 0.3143284574468085,\n \"acc_stderr,none\"\
: 0.004232520301337217,\n \"exact_match,none\": 0.06419939577039276,\n\
\ \"exact_match_stderr,none\": 0.0066144283283954206,\n \"\
prompt_level_strict_acc,none\": 0.43807763401109057,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.021350931135490865,\n \"acc_norm,none\": 0.46984044623167726,\n \
\ \"acc_norm_stderr,none\": 0.005333659102069773,\n \"prompt_level_loose_acc,none\"\
: 0.4713493530499076,\n \"prompt_level_loose_acc_stderr,none\": 0.02148122093008049,\n\
\ \"inst_level_strict_acc,none\": 0.5563549160671463,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"inst_level_loose_acc,none\"\
: 0.5911270983213429,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.507203610484291,\n \"acc_norm_stderr,none\"\
: 0.0061443884276944525,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.828,\n\
\ \"acc_norm_stderr,none\": 0.02391551394448624\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6096256684491979,\n \"acc_norm_stderr,none\"\
: 0.03576973947986408\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.692,\n\
\ \"acc_norm_stderr,none\": 0.02925692860650181\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\"\
: \" - leaderboard_bbh_geometric_shapes\",\n \"acc_norm,none\": 0.464,\n\
\ \"acc_norm_stderr,none\": 0.03160397514522374\n },\n \
\ \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\":\
\ 0.025721398901416368\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\":\
\ 0.03153986449255664\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\":\
\ 0.03131803437491622\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.356,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4315068493150685,\n \"acc_norm_stderr,none\": 0.041131302645371945\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.504,\n \
\ \"acc_norm_stderr,none\": 0.0316851985511992\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.6292134831460674,\n\
\ \"acc_norm_stderr,none\": 0.03630566062020178\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.828,\n \"acc_norm_stderr,none\":\
\ 0.02391551394448624\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.192,\n \"acc_norm_stderr,none\":\
\ 0.024960691989171963\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\":\
\ 0.022249407735450245\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\":\
\ 0.028928939388379694\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.31208053691275167,\n\
\ \"acc_norm_stderr,none\": 0.013434267788620318,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.29797979797979796,\n \"acc_norm_stderr,none\": 0.03258630383836556\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.32234432234432236,\n\
\ \"acc_norm_stderr,none\": 0.020020102750045735\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.30580357142857145,\n \"acc_norm_stderr,none\"\
: 0.021792582688757046\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.43807763401109057,\n \"prompt_level_strict_acc_stderr,none\": 0.021350931135490865,\n\
\ \"inst_level_strict_acc,none\": 0.5563549160671463,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.4713493530499076,\n \"prompt_level_loose_acc_stderr,none\": 0.02148122093008049,\n\
\ \"inst_level_loose_acc,none\": 0.5911270983213429,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.06419939577039276,\n \"exact_match_stderr,none\"\
: 0.0066144283283954206,\n \"alias\": \" - leaderboard_math_hard\"\n\
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.09120521172638436,\n\
\ \"exact_match_stderr,none\": 0.016458188366731106\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.024390243902439025,\n \"exact_match_stderr,none\": 0.013965813032045565\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.022727272727272728,\n\
\ \"exact_match_stderr,none\": 0.0130210469090637\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.02857142857142857,\n \"exact_match_stderr,none\": 0.009973998820736053\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.05194805194805195,\n\
\ \"exact_match_stderr,none\": 0.017941344490765\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.16580310880829016,\n \"exact_match_stderr,none\"\
: 0.026839845022314426\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.022222222222222223,\n \"exact_match_stderr,none\"\
: 0.01273389971505968\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.3143284574468085,\n\
\ \"acc_stderr,none\": 0.004232520301337217\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.43386243386243384,\n \"acc_norm_stderr,none\"\
: 0.017804882924407392,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.54,\n\
\ \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.42578125,\n \"acc_norm_stderr,none\"\
: 0.030964342373467638\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.336,\n \"acc_norm_stderr,none\": 0.02993325909419153\n\
\ }\n },\n \"leaderboard\": {\n \"acc,none\": 0.3143284574468085,\n\
\ \"acc_stderr,none\": 0.004232520301337217,\n \"exact_match,none\"\
: 0.06419939577039276,\n \"exact_match_stderr,none\": 0.0066144283283954206,\n\
\ \"prompt_level_strict_acc,none\": 0.43807763401109057,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.021350931135490865,\n \"acc_norm,none\": 0.46984044623167726,\n \
\ \"acc_norm_stderr,none\": 0.005333659102069773,\n \"prompt_level_loose_acc,none\"\
: 0.4713493530499076,\n \"prompt_level_loose_acc_stderr,none\": 0.02148122093008049,\n\
\ \"inst_level_strict_acc,none\": 0.5563549160671463,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"inst_level_loose_acc,none\": 0.5911270983213429,\n \
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.507203610484291,\n\
\ \"acc_norm_stderr,none\": 0.0061443884276944525,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.828,\n \"acc_norm_stderr,none\": 0.02391551394448624\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6096256684491979,\n \"acc_norm_stderr,none\"\
: 0.03576973947986408\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\": 0.025721398901416368\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.452,\n \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.356,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4315068493150685,\n\
\ \"acc_norm_stderr,none\": 0.041131302645371945\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6292134831460674,\n \"acc_norm_stderr,none\"\
: 0.03630566062020178\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.828,\n \"acc_norm_stderr,none\": 0.02391551394448624\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.192,\n \"acc_norm_stderr,none\": 0.024960691989171963\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\": 0.028928939388379694\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.31208053691275167,\n\
\ \"acc_norm_stderr,none\": 0.013434267788620318,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.29797979797979796,\n\
\ \"acc_norm_stderr,none\": 0.03258630383836556\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.32234432234432236,\n \"acc_norm_stderr,none\": 0.020020102750045735\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.30580357142857145,\n \"acc_norm_stderr,none\"\
: 0.021792582688757046\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.43807763401109057,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021350931135490865,\n \
\ \"inst_level_strict_acc,none\": 0.5563549160671463,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.4713493530499076,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02148122093008049,\n \"inst_level_loose_acc,none\"\
: 0.5911270983213429,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.06419939577039276,\n\
\ \"exact_match_stderr,none\": 0.0066144283283954206,\n \"alias\"\
: \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.09120521172638436,\n \"exact_match_stderr,none\": 0.016458188366731106\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.024390243902439025,\n \"exact_match_stderr,none\": 0.013965813032045565\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.022727272727272728,\n \"exact_match_stderr,none\"\
: 0.0130210469090637\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.02857142857142857,\n \"exact_match_stderr,none\"\
: 0.009973998820736053\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.05194805194805195,\n \"exact_match_stderr,none\": 0.017941344490765\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.16580310880829016,\n \"exact_match_stderr,none\"\
: 0.026839845022314426\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.022222222222222223,\n \"exact_match_stderr,none\": 0.01273389971505968\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.3143284574468085,\n \"acc_stderr,none\": 0.004232520301337217\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.43386243386243384,\n\
\ \"acc_norm_stderr,none\": 0.017804882924407392,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.42578125,\n \"acc_norm_stderr,none\": 0.030964342373467638\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.336,\n \"acc_norm_stderr,none\": 0.02993325909419153\n\
\ }\n}\n```"
repo_url: https://huggingface.co/icefog72/Ice0.37-18.11-RP
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_navigate
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_snarks
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_gpqa_extended
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_gpqa_main
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_ifeval
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_ifeval_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_mmlu_pro
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_musr_object_placements
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-19T23-56-46.205594.jsonl'
- config_name: icefog72__Ice0.37-18.11-RP__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_19T23_56_46.205594
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T23-56-46.205594.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-19T23-56-46.205594.jsonl'
---
# Dataset Card for Evaluation run of icefog72/Ice0.37-18.11-RP
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [icefog72/Ice0.37-18.11-RP](https://huggingface.co/icefog72/Ice0.37-18.11-RP)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/icefog72__Ice0.37-18.11-RP-details",
name="icefog72__Ice0.37-18.11-RP__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
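To discover the other configuration names without reading through the YAML above, one option (a sketch, not from the original card) is the `get_dataset_config_names` helper from `datasets`:
```python
from datasets import get_dataset_config_names

# Sketch: list all 38 task configurations so you can pick a value
# for the `name=` argument in the snippet above.
configs = get_dataset_config_names(
    "open-llm-leaderboard/icefog72__Ice0.37-18.11-RP-details"
)
print(len(configs), "configurations available")
for name in configs[:5]:
    print(name)
```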
## Latest results
These are the [latest results from run 2024-11-19T23-56-46.205594](https://huggingface.co/datasets/open-llm-leaderboard/icefog72__Ice0.37-18.11-RP-details/blob/main/icefog72__Ice0.37-18.11-RP/results_2024-11-19T23-56-46.205594.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc,none": 0.3143284574468085,
"acc_stderr,none": 0.004232520301337217,
"exact_match,none": 0.06419939577039276,
"exact_match_stderr,none": 0.0066144283283954206,
"prompt_level_strict_acc,none": 0.43807763401109057,
"prompt_level_strict_acc_stderr,none": 0.021350931135490865,
"acc_norm,none": 0.46984044623167726,
"acc_norm_stderr,none": 0.005333659102069773,
"prompt_level_loose_acc,none": 0.4713493530499076,
"prompt_level_loose_acc_stderr,none": 0.02148122093008049,
"inst_level_strict_acc,none": 0.5563549160671463,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.5911270983213429,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.507203610484291,
"acc_norm_stderr,none": 0.0061443884276944525,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.828,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6096256684491979,
"acc_norm_stderr,none": 0.03576973947986408
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4315068493150685,
"acc_norm_stderr,none": 0.041131302645371945
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6292134831460674,
"acc_norm_stderr,none": 0.03630566062020178
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.828,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.192,
"acc_norm_stderr,none": 0.024960691989171963
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_gpqa": {
"acc_norm,none": 0.31208053691275167,
"acc_norm_stderr,none": 0.013434267788620318,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.29797979797979796,
"acc_norm_stderr,none": 0.03258630383836556
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.32234432234432236,
"acc_norm_stderr,none": 0.020020102750045735
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.30580357142857145,
"acc_norm_stderr,none": 0.021792582688757046
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.43807763401109057,
"prompt_level_strict_acc_stderr,none": 0.021350931135490865,
"inst_level_strict_acc,none": 0.5563549160671463,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.4713493530499076,
"prompt_level_loose_acc_stderr,none": 0.02148122093008049,
"inst_level_loose_acc,none": 0.5911270983213429,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.06419939577039276,
"exact_match_stderr,none": 0.0066144283283954206,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.09120521172638436,
"exact_match_stderr,none": 0.016458188366731106
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.024390243902439025,
"exact_match_stderr,none": 0.013965813032045565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02857142857142857,
"exact_match_stderr,none": 0.009973998820736053
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05194805194805195,
"exact_match_stderr,none": 0.017941344490765
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.16580310880829016,
"exact_match_stderr,none": 0.026839845022314426
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3143284574468085,
"acc_stderr,none": 0.004232520301337217
},
"leaderboard_musr": {
"acc_norm,none": 0.43386243386243384,
"acc_norm_stderr,none": 0.017804882924407392,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.42578125,
"acc_norm_stderr,none": 0.030964342373467638
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.336,
"acc_norm_stderr,none": 0.02993325909419153
}
}
}
```
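A minimal sketch of how these results could be consumed programmatically (assumptions: the JSON above has been saved locally as `results.json`, and the per-task entries sit either at the top level or one level down under a wrapper key):
```python
import json

# Load the evaluation results shown above (the file name is an assumption).
with open("results.json") as f:
    data = json.load(f)

# Unwrap one nesting level if the tasks sit under a wrapper key (e.g. "all").
scores = data if "leaderboard_bbh" in data else next(iter(data.values()))

# Print the headline metric for each BBH subtask.
for task, metrics in sorted(scores.items()):
    if task.startswith("leaderboard_bbh_"):
        print(f'{task}: {metrics["acc_norm,none"]:.3f} '
              f'(+/- {metrics["acc_norm_stderr,none"]:.3f})')
```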
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
eliasfiz/rlhf-raw | eliasfiz | "2024-11-19T23:59:39Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T23:59:33Z" | ---
dataset_info:
features:
- name: audioId
dtype: string
- name: audio_bits
dtype: audio
- name: rank
dtype: int64
- name: batchId
dtype: string
- name: prompt
dtype: string
- name: emotion
dtype: string
splits:
- name: train
num_bytes: 96468490.0
num_examples: 750
download_size: 87971561
dataset_size: 96468490.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MahmoudIbrahim/Arabic_NVIDIA | MahmoudIbrahim | "2024-11-20T01:04:33Z" | 3 | 0 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:ar",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finance"
] | [
"question-answering",
"text-generation"
] | "2024-11-20T00:36:01Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 63028674
num_examples: 20300
download_size: 22579566
dataset_size: 63028674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
- text-generation
language:
- ar
tags:
- finance
pretty_name: 'n'
size_categories:
- 10K<n<100K
---
## Dataset Overview
This dataset contains Arabic translations of NVIDIA's nvidia/HelpSteer2 dataset, produced with Google Translate to obtain accurate translations.
The data is designed for question answering and text generation tasks.
## Dataset Details
- **Original Dataset**: NVIDIA's **HelpSteer2** dataset.
- **Language**: Translated into **Arabic**.
- **Use Case**: question answering and text generation tasks.
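## Usage
A minimal usage sketch (assumes the Hugging Face `datasets` library; the repository id and field names are taken from this card):
```python
from datasets import load_dataset

# Load the Arabic translation of HelpSteer2 and inspect one prompt/response pair.
ds = load_dataset("MahmoudIbrahim/Arabic_NVIDIA", split="train")
print(ds[0]["prompt"])
print(ds[0]["response"])
```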
|
amuvarma/mls-train-500 | amuvarma | "2024-11-20T10:50:55Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T03:06:37Z" | ---
dataset_info:
features:
- name: audio_values
sequence: float64
- name: transcript_ids
sequence: int64
- name: labels
sequence: int64
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 1200822000000
num_examples: 500000
download_size: 738443026079
dataset_size: 1200822000000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nuprl-staging/training_classifier_intermediate_depth1 | nuprl-staging | "2024-11-20T04:23:56Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T04:23:55Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: answer
dtype: string
- name: pythoncode
dtype: string
- name: depth0
dtype: string
- name: depth1
dtype: string
- name: depth2
dtype: string
- name: depth3
dtype: string
- name: depth4
dtype: string
- name: depth5
dtype: string
- name: depth6
dtype: string
- name: depth7
dtype: string
- name: depthn0
dtype: string
- name: depthn1
dtype: string
- name: depthn2
dtype: string
- name: depthn3
dtype: string
- name: depthn4
dtype: string
- name: depthn5
dtype: string
- name: depthn6
dtype: string
- name: depthn7
dtype: string
- name: num_nodes
dtype: int64
- name: num_edges
dtype: int64
- name: num_classes
dtype: int64
- name: path_length
dtype: int64
- name: num_cycle
dtype: int64
- name: correctness
dtype: bool
- name: one_correct
dtype: bool
splits:
- name: train
num_bytes: 4201926
num_examples: 3000
- name: test
num_bytes: 703852
num_examples: 500
download_size: 1085393
dataset_size: 4905778
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
StormblessedKal/torgo_imperative_train | StormblessedKal | "2024-11-20T08:44:33Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T08:44:28Z" | ---
dataset_info:
features:
- name: Filename
dtype: string
- name: Transcription
dtype: string
- name: Pathname
dtype: string
- name: Audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 165075014.0
num_examples: 854
download_size: 164460754
dataset_size: 165075014.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-sk-unrevised_NoQuant_32_16_0.05_32_BestF1 | ferrazzipietro | "2024-11-20T09:33:33Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T09:33:30Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 140757
num_examples: 97
- name: test
num_bytes: 1213941
num_examples: 743
download_size: 277939
dataset_size: 1354698
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
paolordls/crosslg-sft-qa-sm | paolordls | "2024-11-20T09:49:23Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T09:49:20Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: scenario_id
dtype: int64
- name: example_id
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 106775
num_examples: 200
download_size: 52100
dataset_size: 106775
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AmitIsraeli/pops_10k_depth | AmitIsraeli | "2024-11-20T11:05:51Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T10:59:26Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: depth_predicted
dtype: image
splits:
- name: train
num_bytes: 3197717112.0
num_examples: 10000
download_size: 3196879201
dataset_size: 3197717112.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
youseon/pakdd_table_merge_data_task_1 | youseon | "2024-11-20T13:20:41Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T13:20:35Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 103509254
num_examples: 9196
download_size: 20719831
dataset_size: 103509254
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CNX-PathLLM/ECCV-2024-close | CNX-PathLLM | "2024-11-20T14:46:07Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T14:46:01Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: slide_id
dtype: string
- name: project
dtype: string
- name: f1024
dtype: string
- name: f2048
dtype: string
- name: f4096
dtype: string
splits:
- name: train
num_bytes: 1863837.034026465
num_examples: 2947
- name: test
num_bytes: 396033.3316892725
num_examples: 632
download_size: 429899
dataset_size: 2259870.3657157375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Hotpot-Killer/instructpg-dataset | Hotpot-Killer | "2024-11-20T15:08:43Z" | 3 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-20T15:08:43Z" | ---
license: mit
---
|
PbOE1209/IndicPublicSpeech_kn | PbOE1209 | "2024-11-20T17:59:44Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T15:11:14Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: original_text
dtype: string
- name: translation
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 34591589576.048
num_examples: 11504
download_size: 30860056732
dataset_size: 34591589576.048
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
asadfgglie/BanBan_2024-10-17-facial_expressions-nli | asadfgglie | "2024-11-20T15:40:10Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T15:40:06Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: train
num_bytes: 645894
num_examples: 4726
download_size: 94897
dataset_size: 645894
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "BanBan_2024-10-17-facial_expressions-nli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rashid0784/common_voice_audio_quality_enhancement_v3 | rashid0784 | "2024-11-20T16:28:17Z" | 3 | 1 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T16:06:39Z" | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: original file
dtype: audio
- name: generated noisy audio
dtype: audio
- name: accent
dtype: string
splits:
- name: train
num_bytes: 9932306058.536
num_examples: 115104
- name: test
num_bytes: 1236121698.6
num_examples: 15980
download_size: 6803617592
dataset_size: 11168427757.136
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
neurograce/SubstationDataset | neurograce | "2024-11-20T19:15:22Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-20T19:15:22Z" | ---
license: apache-2.0
---
|
Jhonatan321/Datasets_Jhona | Jhonatan321 | "2024-11-20T20:04:22Z" | 3 | 0 | [
"task_categories:text-generation",
"language:es",
"license:mit",
"region:us",
"code"
] | [
"text-generation"
] | "2024-11-20T20:03:11Z" | ---
license: mit
task_categories:
- text-generation
language:
- es
tags:
- code
--- |
allenai/tulu-3-sft-reused-off-policy | allenai | "2024-11-21T16:53:27Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T22:31:43Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 584556391
num_examples: 96911
download_size: 301118275
dataset_size: 584556391
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-3/Tulu3-logo.png" alt="Tulu3 banner" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Llama 3.1 Tulu 3 SFT reused (off-policy)
*Note that this collection is licensed under the ODC-BY-1.0 license; different licenses apply to subsets of the data. Some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
This preference dataset is part of our Tulu 3 preference mixture:
it contains prompts from our [SFT mixture](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture) and 96,911 generation pairs obtained using the following models:
- [Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) (Apache 2.0)
- [Mistral Nemo Instruct 2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) (Apache 2.0)
- [Tulu 2 7B](https://huggingface.co/allenai/tulu-2-7b) (Ai2 ImpACT Low Risk License)
- [Tulu 2 13B](https://huggingface.co/allenai/tulu-2-13b) (Ai2 ImpACT Low Risk License)
- [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) (Apache 2.0)
- [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) (Apache 2.0)
- [MPT 30B Chat](https://huggingface.co/mosaicml/mpt-30b-chat) (CC-BY-SA-4.0)
- [MPT 7B 8k Chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat) (CC-BY-SA-4.0)
- [Google Gemma 2 27B it](https://huggingface.co/google/gemma-2-27b-it) (Gemma is provided under and subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms))
- [Google Gemma 2 9B it](https://huggingface.co/google/gemma-2-9b-it) (Gemma is provided under and subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms))
- [InternLM2.5 20B](https://huggingface.co/internlm/internlm2_5-20b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [InternLM2.5 7B](https://huggingface.co/internlm/internlm2_5-7b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [InternLM2.5 1.8B](https://huggingface.co/internlm/internlm2_5-1_8b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b-instruct) (Apache 2.0)
- [Qwen2.5 72B Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) (Qwen is licensed under the Qwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved.)
- [Qwen2.5 32B Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) (Apache 2.0)
- [Qwen2.5 14B Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) (Apache 2.0)
- [Qwen2.5 7B Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) (Apache 2.0)
- [Llama 3.1 8B Instruct ](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) (this dataset was partially "Built with Llama" and is thus subject to the Llama 3.1 License)
- [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) (this dataset was partially "Built with Llama" and is thus subject to the Llama 3.1 License)
- [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B) (this dataset was partially "Built with Meta Llama 3" and is thus subject to the Llama 3 License)
- [GPT-4 Turbo](https://openai.com/index/new-models-and-developer-products-announced-at-devday/) and [GPT-4o](https://openai.com/index/hello-gpt-4o/) (Outputs produced by GPT-4 are subject to OpenAI's [terms of use](https://openai.com/policies/row-terms-of-use))
- [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) (Outputs produced by Claude are subject to Anthropic [terms of service](https://www.anthropic.com/legal/commercial-terms) and [usage policy](https://www.anthropic.com/legal/aup))
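## Usage
A minimal sketch for inspecting the preference pairs (assumes the Hugging Face `datasets` library; field names follow the schema above):
```python
from datasets import load_dataset

ds = load_dataset("allenai/tulu-3-sft-reused-off-policy", split="train")

example = ds[0]
print(example["prompt"])

# `chosen` and `rejected` are lists of chat messages with `role`/`content` keys;
# the last message holds the model completion being compared.
print(example["chosen"][-1]["content"][:200])
print(example["rejected"][-1]["content"][:200])
```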
## License
This dataset is licensed under ODC-BY. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). This dataset includes output data generated from third party models that are subject to separate terms governing their use.
|
allenai/tulu-3-wildchat-reused-on-policy-8b | allenai | "2024-11-21T16:50:25Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T22:32:24Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 141797370
num_examples: 17207
download_size: 84322720
dataset_size: 141797370
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-3/Tulu3-logo.png" alt="Tulu3 banner" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Llama 3.1 Tulu 3 Wildchat reused (on-policy 8B)
*Note that this collection is licensed under the ODC-BY-1.0 license; different licenses apply to subsets of the data. Some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
This preference dataset is part of our Tulu 3 preference mixture:
it contains prompts from [WildChat](https://huggingface.co/datasets/allenai/WildChat-1M) and 17,207 generation pairs (some of which are on-policy completions from https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) obtained using the following models:
- [Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) (Apache 2.0)
- [Mistral Nemo Instruct 2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) (Apache 2.0)
- [Tulu 2 7B](https://huggingface.co/allenai/tulu-2-7b) (Ai2 ImpACT Low Risk License)
- [Tulu 2 13B](https://huggingface.co/allenai/tulu-2-13b) (Ai2 ImpACT Low Risk License)
- [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) (Apache 2.0)
- [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) (Apache 2.0)
- [MPT 30B Chat](https://huggingface.co/mosaicml/mpt-30b-chat) (CC-BY-SA-4.0)
- [MPT 7B 8k Chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat) (CC-BY-SA-4.0)
- [Google Gemma 2 27B it](https://huggingface.co/google/gemma-2-27b-it) (Gemma is provided under and subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms))
- [Google Gemma 2 9B it](https://huggingface.co/google/gemma-2-9b-it) (Gemma is provided under and subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms))
- [InternLM2.5 20B](https://huggingface.co/internlm/internlm2_5-20b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [InternLM2.5 7B](https://huggingface.co/internlm/internlm2_5-7b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [InternLM2.5 1.8B](https://huggingface.co/internlm/internlm2_5-1_8b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b-instruct) (Apache 2.0)
- [Qwen2.5 72B Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) (Qwen is licensed under the Qwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved.)
- [Qwen2.5 32B Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) (Apache 2.0)
- [Qwen2.5 14B Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) (Apache 2.0)
- [Qwen2.5 7B Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) (Apache 2.0)
- [Llama 3.1 8B Instruct ](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) (this dataset was partially "Built with Llama" and is thus subject to the Llama 3.1 License)
- [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) (this dataset was partially "Built with Llama" and is thus subject to the Llama 3.1 License)
- [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B) (this dataset was partially "Built with Meta Llama 3" and is thus subject to the Llama 3 License)
- [GPT-4 Turbo](https://openai.com/index/new-models-and-developer-products-announced-at-devday/) and [GPT-4o](https://openai.com/index/hello-gpt-4o/) (Outputs produced by GPT-4 are subject to OpenAI's [terms of use](https://openai.com/policies/row-terms-of-use))
- [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) (Outputs produced by Claude are subject to Anthropic [terms of service](https://www.anthropic.com/legal/commercial-terms) and [usage policy](https://www.anthropic.com/legal/aup))
## Completion Generation Approach:
Given a set of prompts, we generated the completions and preferences using a synthetic pipeline that combines both on-policy and off-policy data, and obtained the preference annotations on four different aspects using the Ultrafeedback template and an LLM judge. The code for the synthetic generation pipeline can be found in the `scripts/synth_pref` directory of [open-instruct](https://github.com/allenai/open-instruct/).
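As a rough illustration of the pairing step described above (this is not the actual `scripts/synth_pref` implementation; the aspect names and the mean-score aggregation rule are assumptions):
```python
def build_preference_pair(completions):
    """completions: list of dicts holding a completion 'text' plus per-aspect judge scores."""
    aspects = ["helpfulness", "truthfulness", "honesty", "instruction_following"]

    def mean_score(completion):
        return sum(completion[aspect] for aspect in aspects) / len(aspects)

    ranked = sorted(completions, key=mean_score, reverse=True)
    # The highest-rated completion becomes `chosen`, the lowest `rejected`.
    return ranked[0]["text"], ranked[-1]["text"]
```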
## License
This dataset is licensed under ODC-BY. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). This dataset includes output data generated from third party models that are subject to separate terms governing their use.
|
allenai/llama-3.1-tulu-3-70b-preference-mixture | allenai | "2024-11-21T16:02:31Z" | 3 | 3 | [
"license:odc-by",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T22:51:26Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2282537650
num_examples: 334367
download_size: 1229520687
dataset_size: 2282537650
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: odc-by
size_categories:
- 100K<n<1M
---
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-3/Tulu3-logo.png" alt="Tulu3 banner" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Llama 3.1 Tulu 3 70B Preference Mixture
*Note that this collection is licensed under the ODC-BY-1.0 license; different licenses apply to subsets of the data. Some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
This preference mixture was used for DPO on our [Llama 3.1 Tulu 3 70B SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B-SFT) checkpoint to obtain [Llama 3.1 Tulu 3 70B DPO](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B-DPO).
This mix is made up from the following preference datasets:
- https://huggingface.co/datasets/allenai/tulu-3-sft-reused-off-policy
- https://huggingface.co/datasets/allenai/tulu-3-sft-reused-on-policy-70b
- https://huggingface.co/datasets/allenai/tulu-3-wildchat-if-on-policy-70b
- https://huggingface.co/datasets/allenai/tulu-3-IF-augmented-on-policy-70b
- https://huggingface.co/datasets/allenai/tulu-3-wildchat-unused
- https://huggingface.co/datasets/allenai/tulu-3-wildchat-reused-on-policy-70b
- https://huggingface.co/datasets/allenai/tulu-3-ultrafeedback-cleaned-on-policy-70b
It contains 334,367 generation pairs obtained using the following models:
- [Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) (Apache 2.0)
- [Mistral Nemo Instruct 2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) (Apache 2.0)
- [Tulu 2 7B](https://huggingface.co/allenai/tulu-2-7b) (Ai2 ImpACT Low Risk License)
- [Tulu 2 13B](https://huggingface.co/allenai/tulu-2-13b) (Ai2 ImpACT Low Risk License)
- [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) (Apache 2.0)
- [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) (Apache 2.0)
- [MPT 30B Chat](https://huggingface.co/mosaicml/mpt-30b-chat) (CC-BY-SA-4.0)
- [MPT 7B 8k Chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat) (CC-BY-SA-4.0)
- [Google Gemma 2 27B it](https://huggingface.co/google/gemma-2-27b-it) (Gemma is provided under and subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms))
- [Google Gemma 2 9B it](https://huggingface.co/google/gemma-2-9b-it) (Gemma is provided under and subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms))
- [InternLM2.5 20B](https://huggingface.co/internlm/internlm2_5-20b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [InternLM2.5 7B](https://huggingface.co/internlm/internlm2_5-7b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [InternLM2.5 1.8B](https://huggingface.co/internlm/internlm2_5-1_8b-chat) (InternLM weights are fully open for academic research and also allow free commercial usage. A commercial license can be obtained as instructed in the model card.)
- [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b-instruct) (Apache 2.0)
- [Qwen2.5 72B Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) (Qwen is licensed under the Qwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved.)
- [Qwen2.5 32B Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) (Apache 2.0)
- [Qwen2.5 14B Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) (Apache 2.0)
- [Qwen2.5 7B Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) (Apache 2.0)
- [Llama 3.1 8B Instruct ](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) (this dataset was partially "Built with Llama" and is thus subject to the Llama 3.1 License)
- [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) (this dataset was partially "Built with Llama" and is thus subject to the Llama 3.1 License)
- [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B) (this dataset was partially "Built with Meta Llama 3" and is thus subject to the Llama 3 License)
- [GPT-4 Turbo](https://openai.com/index/new-models-and-developer-products-announced-at-devday/) and [GPT-4o](https://openai.com/index/hello-gpt-4o/) (Outputs produced by GPT-4 are subject to OpenAI's [terms of use](https://openai.com/policies/row-terms-of-use))
- [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) (Outputs produced by Claude are subject to Anthropic [terms of service](https://www.anthropic.com/legal/commercial-terms) and [usage policy](https://www.anthropic.com/legal/aup))
### Model Family
| **Stage** | **Llama 3.1 8B** | **Llama 3.1 70B** |
|----------------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| **Base Model** | [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B) |
| **SFT** | [allenai/Llama-3.1-Tulu-3-8B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT) | [allenai/Llama-3.1-Tulu-3-70B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B-SFT) |
| **DPO** | [allenai/Llama-3.1-Tulu-3-8B-DPO](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-DPO) | [allenai/Llama-3.1-Tulu-3-70B-DPO](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B-DPO) |
| **Final Models (RLVR)** | [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) | [allenai/Llama-3.1-Tulu-3-70B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B) |
| **Reward Model (RM)**| [allenai/Llama-3.1-Tulu-3-8B-RM](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-RM) | (Same as 8B) |
## License
This dataset is licensed under ODC-BY. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). This dataset includes output data generated from third party models that are subject to separate terms governing their use.
## Citation
If Tülu3 or any of the related materials were helpful to your work, please cite:
```
@article{lambert2024tulu3,
title = {Tülu 3: Pushing Frontiers in Open Language Model Post-Training},
author = {
Nathan Lambert and
Jacob Morrison and
Valentina Pyatkin and
Shengyi Huang and
Hamish Ivison and
Faeze Brahman and
Lester James V. Miranda and
Alisa Liu and
Nouha Dziri and
Shane Lyu and
Yuling Gu and
Saumya Malik and
Victoria Graf and
Jena D. Hwang and
Jiangjiang Yang and
Ronan Le Bras and
Oyvind Tafjord and
Chris Wilhelm and
Luca Soldaini and
Noah A. Smith and
Yizhong Wang and
Pradeep Dasigi and
Hannaneh Hajishirzi
},
year = {2024},
email = {tulu@allenai.org}
}
```
|
lamblamb/from_news_to_forecast | lamblamb | "2024-11-21T07:59:18Z" | 3 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T00:50:38Z" | ---
license: mit
---
```
@inproceedings{wang2024newsforecast,
title={From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection},
author={Wang, Xinlei and Feng, Maike and Qiu, Jing and Gu, Jinjin and Zhao, Junhua},
booktitle={Neural Information Processing Systems},
year={2024}
}
``` |
jamesnatulan/cuelang | jamesnatulan | "2024-11-21T01:36:51Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T01:31:54Z" | ---
dataset_info:
features:
- name: repo_path
dtype: string
- name: repo_name
dtype: string
- name: url
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 88645676
num_examples: 13235
download_size: 20801533
dataset_size: 88645676
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
STL-Yonsei/SWIR-Fruit_and_Vegetable_Dataset | STL-Yonsei | "2024-11-21T04:54:12Z" | 3 | 0 | [
"license:cc-by-nc-4.0",
"modality:image",
"region:us"
] | null | "2024-11-21T02:19:27Z" | ---
license: cc-by-nc-4.0
---
|
1231czx/test_ver2_rebuttal_af_rm_bon64_005 | 1231czx | "2024-11-21T03:18:28Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T03:18:27Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: generator
dtype: string
splits:
- name: train
num_bytes: 1288815
num_examples: 805
download_size: 775756
dataset_size: 1288815
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/test_ver2_rebuttal_af_rrm_bon8_005 | 1231czx | "2024-11-21T03:19:05Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T03:19:04Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: generator
dtype: string
splits:
- name: train
num_bytes: 1476291
num_examples: 805
download_size: 879535
dataset_size: 1476291
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
spow12/llava_instruct_mix_jp | spow12 | "2024-11-21T05:36:14Z" | 3 | 0 | [
"task_categories:visual-question-answering",
"language:ja",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"visual-question-answering"
] | "2024-11-21T04:20:41Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
- name: en_conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 11259562724.229902
num_examples: 258968
download_size: 10767717705
dataset_size: 11259562724.229902
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- visual-question-answering
language:
- ja
---
Translated using ChatWaifu_12B_v2.2 (private).
# Prompt
```python
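# Few-shot translation prompt: five user/assistant example pairs demonstrate the
# expected output format, then a final user turn supplies the sentence to
# translate via the {text} placeholder.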
prompt = [
{
'role': 'user',
'content': [
{
'type': 'text',
'text': f"""Translate the sentece to japanese.
If there is any structure like markdown table or chart, using original format.
Here is the sentence to translate: 36.76"""
},
]
},
{
'role': 'assistant',
'content': [
{
'type': 'text',
'text': f"""36.76"""
},
]
},
{
'role': 'user',
'content': [
{
'type': 'text',
'text': f"""Translate the sentece to japanese.
If there is any structure like markdown table or chart, using original format.
Here is the sentence to translate: Sure! Here's the extracted results written in markdown\n| Characteristic | Forecast spend in billion GBP |\n|:----------------------|--------------------------------:|\n| Food and grocery | 148.3 |\n| Clothing and footwear | 46.4 |\n| Home | 37.4 |\n| Electricals | 21.2 |\n| Health and beauty | 18.5 |"""
},
]
},
{
'role': 'assistant',
'content': [
{
'type': 'text',
'text': f"""以下はマークダウン形式で書かれた抽出された結果です。
特徴 | 予測支出(十億GBP)
-------------------|-----------------------------
食品と日用雑貨 | 148.3
衣服と靴 | 46.4
住宅 | 37.4
電気製品 | 21.2
健康と美容 | 18.5"""
},
]
},
{
'role': 'user',
'content': [
{
'type': 'text',
'text': f"""Translate the sentece to japanese.
If there is any structure like markdown table or chart, using original format.
Here is the sentence to translate: Yes"""
},
]
},
{
'role': 'assistant',
'content': [
{
'type': 'text',
'text': f"""はい"""
},
]
},
{
'role': 'user',
'content': [
{
'type': 'text',
'text': f"""Translate the sentece to japanese.
If there is any structure like markdown table or chart, using original format.
Here is the sentence to translate: Maxime Barbier"""
},
]
},
{
'role': 'assistant',
'content': [
{
'type': 'text',
'text': f"""マキシム・バルビエ"""
},
]
},
{
'role': 'user',
'content': [
{
'type': 'text',
'text': f"""Translate the sentece to japanese.
If there is any structure like markdown table or chart, using original format.
Here is the sentence to translate: Can you extract the full data and reformat it as a markdown table?"""
},
]
},
{
'role': 'assistant',
'content': [
{
'type': 'text',
'text': f"""全データを抽出して、マークダウン・テーブルとして再フォーマットできますか?"""
},
]
},
{
'role': 'user',
'content': [
{
'type': 'text',
'text': f"""Translate the sentece to japanese.
If there is any structure like markdown table or chart, using original format.
Here is the sentence to translate {text}"""
},
]
},
]
```
# Caution!
This dataset may contain errors or incorrect translations, even after filtering.
# Original Dataset
[theblackcat102/llava-instruct-mix](https://huggingface.co/datasets/theblackcat102/llava-instruct-mix)
|
deepklarity/top_chrome_extensions | deepklarity | "2024-11-21T05:27:58Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T05:21:45Z" | ---
license: apache-2.0
---
### Top Chrome extensions dataset
This dataset contains around 4,000 rows of data on the top Chrome extensions available on the [Chrome Web Store](https://chromewebstore.google.com/?pli=1).
The dataset was scraped in `October-2024`.
We aim to use this dataset to perform analysis, identify trends, get a bird's-eye view of the Chrome extension ecosystem, and build an extension recommender.
#### Maintainers
- [Nishritha Damera](https://www.linkedin.com/in/nishrithadamera/)
|
JayKr/Dog_Eye_Research_Dataset | JayKr | "2024-11-21T05:35:49Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T05:35:45Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2106105
num_examples: 10410
download_size: 844470
dataset_size: 2106105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SHASWATSINGH3101/key_info_bail_para_prompts_TRAIN_V1 | SHASWATSINGH3101 | "2024-11-21T05:39:09Z" | 3 | 0 | [
"license:cc-by-nc-nd-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-11-21T05:36:38Z" | ---
license: cc-by-nc-nd-4.0
---
|
cyberorigin/alita | cyberorigin | "2024-11-21T06:14:31Z" | 3 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-11-21T05:41:55Z" | ---
license: mit
---
## CyberOrigin Dataset
Our data includes information from home services, the logistics industry, and laboratory scenarios.
For more details, please refer to our [Official Data Website](https://cyberorigin2077.github.io/).
Contents of the dataset:
```
alita # dataset root path
└── data/
├── metadata_Traj01.json
├── segment_ids_Traj01.bin # for each frame i, segment_ids[i] gives the index of the segment that frame came from. You may want to use this to separate non-contiguous frames from different videos (transitions).
├── videos_Traj01.bin # 16x16 image patches at 15 Hz; each patch is vector-quantized into 2^18 possible integer values. These can be decoded into 256x256 RGB images using the provided magvit2.ckpt weights.
├── ...
└── ...
```
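A minimal reading sketch (assumptions: numpy is available, `segment_ids` shares the uint32 token dtype given in the metadata below, and each frame is a flat 16x16 patch grid):
```python
import numpy as np

# Tokens are stored flat; reshape into one 16x16 patch grid per frame.
tokens = np.fromfile("data/videos_Traj01.bin", dtype=np.uint32).reshape(-1, 16, 16)
segment_ids = np.fromfile("data/segment_ids_Traj01.bin", dtype=np.uint32)

# Split frames wherever the segment id changes, so non-contiguous
# frames from different videos are never mixed.
boundaries = np.where(np.diff(segment_ids) != 0)[0] + 1
segments = np.split(tokens, boundaries)
print(len(segments), "contiguous segments")
```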
```json
{
"task": "ALITA",
"total_episodes": ,
"total_frames": ,
"token_dtype": "uint32",
"vocab_size": 262144,
"fps": 15,
"language_annotation": "None",
}
``` |
bkk21/triper-gemma-10-dataset | bkk21 | "2024-11-21T06:32:34Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T06:32:30Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: korean_v1
num_bytes: 54303611
num_examples: 53070
download_size: 1252931
dataset_size: 54303611
configs:
- config_name: default
data_files:
- split: korean_v1
path: data/korean_v1-*
---
|
hyunsoow/custom_drug_dataset | hyunsoow | "2024-11-21T06:44:26Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T06:44:13Z" | ---
dataset_info:
features:
- name: patient_id
dtype: int64
- name: drugName
dtype: string
- name: condition
dtype: string
- name: review
dtype: string
- name: rating
dtype: float64
- name: date
dtype: string
- name: usefulCount
dtype: int64
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 65975578
num_examples: 110811
- name: validation
num_bytes: 16422578
num_examples: 27703
- name: test
num_bytes: 27430466
num_examples: 46108
download_size: 63886981
dataset_size: 109828622
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
piporica/custom_drug_dataset | piporica | "2024-11-21T06:44:31Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T06:44:16Z" | ---
dataset_info:
features:
- name: patient_id
dtype: int64
- name: drugName
dtype: string
- name: condition
dtype: string
- name: review
dtype: string
- name: rating
dtype: float64
- name: date
dtype: string
- name: usefulCount
dtype: int64
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 65975578
num_examples: 110811
- name: validation
num_bytes: 16422578
num_examples: 27703
- name: test
num_bytes: 27430466
num_examples: 46108
download_size: 63886981
dataset_size: 109828622
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Lancelot53/temp_qgen_dataset | Lancelot53 | "2024-11-21T22:13:19Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T07:29:54Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: question
dtype: string
- name: output
dtype: string
- name: generator
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 219083
num_examples: 195
- name: validation
num_bytes: 52562
num_examples: 49
- name: test
num_bytes: 51947
num_examples: 49
download_size: 131936
dataset_size: 323592
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
bizb0630/hamza_1.0 | bizb0630 | "2023-12-18T01:18:36Z" | 2 | 0 | [
"task_categories:translation",
"language:uz",
"language:ru",
"license:mit",
"region:us"
] | [
"translation"
] | "2023-12-18T00:59:36Z" | ---
license: mit
task_categories:
- translation
language:
- uz
- ru
pretty_name: uzbek-russian_parallel_corpora
---
# Hamza - Uzbek-Russian parallel corpus.
## Overview
**Hamza** is a parallel corpus containing over 15,000 aligned sentences in Uzbek and Russian.
## Creation
Created using [lingtrain](https://github.com/averkij/lingtrain-aligner). The text was mined from various websites and Telegram channels.
### Format
The dataset is provided in TMX (Translation Memory eXchange) format.
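A minimal sketch for reading the aligned pairs with the Python standard library (the file name and the `uz`/`ru` language codes are assumptions; TMX stores the language in the `xml:lang` attribute, and this sketch ignores any inline markup inside `<seg>`):
```python
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

tree = ET.parse("hamza.tmx")  # hypothetical file name
pairs = []
for tu in tree.getroot().iter("tu"):
    # Map each translation unit variant to its language code.
    segs = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
    if segs.get("uz") and segs.get("ru"):
        pairs.append((segs["uz"], segs["ru"]))
```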
|
Tristepin/quakec-v1 | Tristepin | "2024-01-25T08:30:26Z" | 2 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-25T08:29:47Z" | ---
license: mit
---
|
bew/medium-chillhop-tokenized | bew | "2024-02-02T21:57:43Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-02T21:56:57Z" | ---
dataset_info:
features:
- name: tokens
sequence:
sequence: int16
- name: mask
sequence:
sequence: bool
splits:
- name: train
num_bytes: 579449352
num_examples: 8313
download_size: 375690299
dataset_size: 579449352
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
data-is-better-together/10k-prompt-collective-argilla-format | data-is-better-together | "2024-02-26T07:57:49Z" | 2 | 1 | [
"language:en",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-02-22T10:33:31Z" | ---
language:
- en
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for 10k-prompt-collective-argilla
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("DIBT/10k-prompt-collective-argilla")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/10k-prompt-collective-argilla")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| prompt | Prompt | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| quality | Rate the quality of the prompt | label_selection | True | N/A | ['0', '1', '2', '3', '4'] |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"prompt": "Provide step-by-step instructions on how to make a safe and effective homemade all-purpose cleaner from common household ingredients. The guide should include measurements, tips for storing the cleaner, and additional variations or scents that can be added. Additionally, the guide should be written in clear and concise language, with helpful visuals or photographs to aid in the process."
},
"metadata": {
"evolved_from": null,
"kind": "synthetic",
"source": "ultrachat"
},
"responses": [
{
"status": "submitted",
"user_id": "d23b12c2-b601-490e-b5b3-2040eb393a00",
"values": {
"quality": {
"value": "4"
}
}
},
{
"status": "submitted",
"user_id": "e2bdd868-f28e-46fc-9254-a6ec1e291889",
"values": {
"quality": {
"value": "4"
}
}
}
],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"metadata": "{\"source\": \"ultrachat\", \"kind\": \"synthetic\", \"evolved_from\": null}",
"prompt": "Provide step-by-step instructions on how to make a safe and effective homemade all-purpose cleaner from common household ingredients. The guide should include measurements, tips for storing the cleaner, and additional variations or scents that can be added. Additionally, the guide should be written in clear and concise language, with helpful visuals or photographs to aid in the process.",
"quality": [
{
"status": "submitted",
"user_id": "d23b12c2-b601-490e-b5b3-2040eb393a00",
"value": "4"
},
{
"status": "submitted",
"user_id": "e2bdd868-f28e-46fc-9254-a6ec1e291889",
"value": "4"
}
],
"quality-suggestion": null,
"quality-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
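Since each record carries a list of per-annotator ratings, a common first step is to aggregate them. A minimal sketch using the `datasets` layout shown above (the `avg_quality` column name is just an example):
```python
from datasets import load_dataset
import numpy as np

ds = load_dataset("DIBT/10k-prompt-collective-argilla", split="train")

def mean_quality(row):
    # Each entry in "quality" is one annotator's response; values are
    # the string labels '0'..'4'.
    ratings = [int(r["value"]) for r in row["quality"] if r["status"] == "submitted"]
    row["avg_quality"] = float(np.mean(ratings)) if ratings else None
    return row

ds = ds.map(mean_quality)
```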
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **prompt** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **quality** is of type `label_selection` with the following allowed values ['0', '1', '2', '3', '4'].
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **quality-suggestion** is of type `label_selection` with the following allowed values ['0', '1', '2', '3', '4'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
# Task
We are collaboratively creating a database of prompts in English. The annotation guidelines below should help you get started but you can also ask questions in the [Discord Channel](https://discord.com/channels/879548962464493619/1205128865735770142).
Our aim is to identify effective prompts and understand the interaction between AI-generated and human-generated prompts.
The focus is on rating prompts that are clear, interesting and complex for fine-tuning open source LLMs.
What makes a prompt good?
That's a difficult question but here are some aspects:
- The intent of the user is clear.
- The question, instruction or task for the assistant is challenging or interesting because it involves solving a complex problem, reasoning, being creative, etc.
In other words, first of all the intent (what the user asks) should be clear. Then we can look at how interesting and complex the task is. The more interesting the prompt is, the higher the rating should be.
## Guidelines
You need to assign a rating to each prompt, thinking about its complexity for an assistant and whether the intent is clear. A very good prompt is one that is challenging but also very clear about the intent of the user.
You can use keyboard shortcuts (the numbers) to quickly rate the examples.
If you find some pattern, you can also use the search box and filters, as well as the bulk labelling mode. Please use this with care and only when you find a clear pattern (e.g., prompts that are completely incorrect and share a common issue).
If you are unsure about your answer, you can click on the tag and then “Save as a draft” to save it for later. If you feel unequipped to rate a specific prompt, you can use the “Discard” button.
## Ratings
### 1. Very Bad:
The prompt doesn't communicate its purpose, is nonsensical, or is in a language other than English.
The prompt assumes the usage of tools or capabilities that don’t apply to this model, like generating an image or scraping a website.
*Examples:*
>"Do the thing."
>“Hello!”
>"asdajflajfada”
>“Quiero que redactes una entrada de blog.”
>"Extract data from a website.”
>“Tell me how you feel when someone insults you.”
### 2. Bad:
Suggests a goal but lacks clarity and coherence.
*Examples:*
>"Find me stuff about that thing, you know?"
>“Write something.”
>"Tell me about this thing."
>"Can you help with this?"
>"I need to know more."
### 3. Ok:
The intent is understandable, but it's missing information to complete the task.
*Examples:*
>"I need information on something important."
>“Write a blogpost.”
### 4. Good:
Presents a clear goal and necessary information, effectively directing the AI, but the prompt could be more specific.
*Examples:*
>"Provide a summary of renewable energy sources."
>“Tell me about Sean Connery.”
>"Explain global warming."
### 5. Very Good:
Comprehensive and explicit, leaving no room for ambiguity. Perfectly guides the AI and includes details.
*Examples:*
>"Compare the efficiency and environmental impact of solar and wind energy, including recent advancements and case studies from 2023."
>“Make a list of 5 plant-based recipes that I can try that don’t have red peppers as an ingredient.”
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
data-is-better-together/dibt-prompt-translation-for-arabic | data-is-better-together | "2024-03-21T12:42:04Z" | 2 | 3 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-21T12:42:02Z" | ---
dataset_info:
features:
- name: target
dtype: string
- name: quality
list:
- name: status
dtype: string
- name: user_id
dtype: string
- name: value
dtype: string
- name: metadata
dtype: string
- name: avg_rating
dtype: float64
- name: num_responses
dtype: int64
- name: agreement_ratio
dtype: float64
- name: raw_responses
sequence: int64
- name: kind
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 824861
num_examples: 501
download_size: 391793
dataset_size: 824861
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
data-is-better-together/MPEP_DUTCH | data-is-better-together | "2024-04-29T16:04:18Z" | 2 | 2 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-04-04T10:15:40Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for MPEP_DUTCH
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("DIBT/MPEP_DUTCH")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/MPEP_DUTCH")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| source | Source | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| target | Vertaling | text | True | Vertaal de prompt naar het Nederlands. | N/A |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "1788",
"fields": {
"source": "As an AI enthusiast, you love creating programs that can understand human language. Your latest project involves creating a program that can identify and replace words with their antonyms in a given text. \nTo demonstrate the effectiveness of your program, you decide to test it on a news article about a recent political event. However, to make it more challenging, you also want your program to differentiate between homonyms and use context clues to correctly replace them.\nHere\u0027s a step-by-step explanation of how your program works:\n1. The program reads the input text and identifies all the words that have antonyms.\n2. For each of these words, the program identifies the context in which it appears to determine the correct antonym to use.\n3. The program then replaces the original word with its antonym in the text.\n4. If the original word has multiple meanings, the program uses the context to determine which meaning is intended and replace it with the appropriate antonym.\n5. Finally, the program outputs the modified text with the replaced words.\nCan you put your natural language processing skills to the test and try to identify the antonyms used in the modified text?"
},
"metadata": {
"evolved_from": null,
"kind": "synthetic",
"source": "evol_instruct"
},
"responses": [
{
"status": "submitted",
"user_id": "315be0ef-8df8-48df-8ba1-ff3339ef016a",
"values": {
"target": {
"value": "Als een AI-enthousiasteling, houd je ervan om programma\u0027s te maken die de menselijke taal begrijpen. Je nieuwste project is een programma dat woorden kan herkennen en vervangen door hun antoniemen in een stuk tekst.\nOm de effectiviteit van je programma aan te tonen, besluit je het te testen op een nieuwsartikel over een recent politiek evenement. Om het uitdagender te maken, wil je ook dat je programma onderscheid maakt tussen homoniemen, en daarnaast contextuele aanwijzingen gebruikt woorden correct te vervangen.\nHier is een stapsgewijze uitleg van hoe je programma werkt:\n1. Het programma leest de inputtekst en identificeert alle woorden die antoniemen hebben.\n2. Voor elk van deze woorden identificeert het programma de context waarin het verschijnt om het juiste antoniem te bepalen.\n3. Het programma vervangt vervolgens het originele woord door zijn antoniem in de tekst.\n4. Als het originele woord meerdere betekenissen heeft, gebruikt het programma de context om te bepalen welke betekenis bedoeld is en vervangt het door het passende antoniem.\n5. Uiteindelijk geeft het programma de aangepaste tekst uit met de vervangen woorden.\nKun je je vaardigheden in natuurlijke taalverwerking op de proef stellen en proberen de antoniemen te identificeren die in de gewijzigde tekst worden gebruikt?"
}
}
}
],
"suggestions": [
{
"agent": null,
"question_name": "target",
"score": null,
"type": null,
"value": "Als een AI-enthousiasteling, houd je ervan om programma\u0027s te maken die de menselijke taal begrijpen. Je nieuwste project houdt in dat je een programma maakt dat woorden kan identificeren en vervangen door hun antoniemen in een gegeven tekst.\nOm de effectiviteit van je programma aan te tonen, besluit je het te testen op een nieuwsartikel over een recent politiek evenement. Om het echter uitdagender te maken, wil je ook dat je programma onderscheid maakt tussen homoniemen en contextuele aanwijzingen gebruikt om ze correct te vervangen.\nHier is een stapsgewijze uitleg van hoe je programma werkt:\n1. Het programma leest de inputtekst en identificeert alle woorden die antoniemen hebben.\n2. Voor elk van deze woorden identificeert het programma de context waarin het verschijnt om het juiste antoniem te bepalen.\n3. Het programma vervangt vervolgens het originele woord door zijn antoniem in de tekst.\n4. Als het originele woord meerdere betekenissen heeft, gebruikt het programma de context om te bepalen welke betekenis bedoeld is en vervangt het door het passende antoniem.\n5. Uiteindelijk geeft het programma de aangepaste tekst uit met de vervangen woorden.\nKun je je vaardigheden in natuurlijke taalverwerking op de proef stellen en proberen de antoniemen te identificeren die in de gewijzigde tekst worden gebruikt?"
}
],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "1788",
"metadata": "{\"evolved_from\": null, \"kind\": \"synthetic\", \"source\": \"evol_instruct\"}",
"source": "As an AI enthusiast, you love creating programs that can understand human language. Your latest project involves creating a program that can identify and replace words with their antonyms in a given text. \nTo demonstrate the effectiveness of your program, you decide to test it on a news article about a recent political event. However, to make it more challenging, you also want your program to differentiate between homonyms and use context clues to correctly replace them.\nHere\u0027s a step-by-step explanation of how your program works:\n1. The program reads the input text and identifies all the words that have antonyms.\n2. For each of these words, the program identifies the context in which it appears to determine the correct antonym to use.\n3. The program then replaces the original word with its antonym in the text.\n4. If the original word has multiple meanings, the program uses the context to determine which meaning is intended and replace it with the appropriate antonym.\n5. Finally, the program outputs the modified text with the replaced words.\nCan you put your natural language processing skills to the test and try to identify the antonyms used in the modified text?",
"target": [
{
"status": "submitted",
"user_id": "315be0ef-8df8-48df-8ba1-ff3339ef016a",
"value": "Als een AI-enthousiasteling, houd je ervan om programma\u0027s te maken die de menselijke taal begrijpen. Je nieuwste project is een programma dat woorden kan herkennen en vervangen door hun antoniemen in een stuk tekst.\nOm de effectiviteit van je programma aan te tonen, besluit je het te testen op een nieuwsartikel over een recent politiek evenement. Om het uitdagender te maken, wil je ook dat je programma onderscheid maakt tussen homoniemen, en daarnaast contextuele aanwijzingen gebruikt woorden correct te vervangen.\nHier is een stapsgewijze uitleg van hoe je programma werkt:\n1. Het programma leest de inputtekst en identificeert alle woorden die antoniemen hebben.\n2. Voor elk van deze woorden identificeert het programma de context waarin het verschijnt om het juiste antoniem te bepalen.\n3. Het programma vervangt vervolgens het originele woord door zijn antoniem in de tekst.\n4. Als het originele woord meerdere betekenissen heeft, gebruikt het programma de context om te bepalen welke betekenis bedoeld is en vervangt het door het passende antoniem.\n5. Uiteindelijk geeft het programma de aangepaste tekst uit met de vervangen woorden.\nKun je je vaardigheden in natuurlijke taalverwerking op de proef stellen en proberen de antoniemen te identificeren die in de gewijzigde tekst worden gebruikt?"
}
],
"target-suggestion": "Als een AI-enthousiasteling, houd je ervan om programma\u0027s te maken die de menselijke taal begrijpen. Je nieuwste project houdt in dat je een programma maakt dat woorden kan identificeren en vervangen door hun antoniemen in een gegeven tekst.\nOm de effectiviteit van je programma aan te tonen, besluit je het te testen op een nieuwsartikel over een recent politiek evenement. Om het echter uitdagender te maken, wil je ook dat je programma onderscheid maakt tussen homoniemen en contextuele aanwijzingen gebruikt om ze correct te vervangen.\nHier is een stapsgewijze uitleg van hoe je programma werkt:\n1. Het programma leest de inputtekst en identificeert alle woorden die antoniemen hebben.\n2. Voor elk van deze woorden identificeert het programma de context waarin het verschijnt om het juiste antoniem te bepalen.\n3. Het programma vervangt vervolgens het originele woord door zijn antoniem in de tekst.\n4. Als het originele woord meerdere betekenissen heeft, gebruikt het programma de context om te bepalen welke betekenis bedoeld is en vervangt het door het passende antoniem.\n5. Uiteindelijk geeft het programma de aangepaste tekst uit met de vervangen woorden.\nKun je je vaardigheden in natuurlijke taalverwerking op de proef stellen en proberen de antoniemen te identificeren die in de gewijzigde tekst worden gebruikt?",
"target-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
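Because each record pairs a GPT-4 suggestion with the annotator's final translation, one simple analysis is to count how often annotators edited the suggestion. A minimal sketch against the `datasets` layout shown above:
```python
from datasets import load_dataset

ds = load_dataset("DIBT/MPEP_DUTCH", split="train")
edited, submitted = 0, 0
for row in ds:
    for resp in row["target"]:
        if resp["status"] == "submitted":
            submitted += 1
            # A response that differs from the suggestion was hand-edited.
            if resp["value"] != row["target-suggestion"]:
                edited += 1
print(f"{edited}/{submitted} submitted translations differ from the GPT-4 suggestion")
```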
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **source** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **target** is of type `text`, and description "Vertaal de prompt naar het Nederlands.".
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **target-suggestion** is of type `text`.
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
## Leaderboard
See the progress on the [Leaderboard](https://huggingface.co/spaces/DIBT-Dutch/prompt-translation).
## Help with translating
- A translation suggestion generated by GPT-4 is already provided. You may adapt it as much as you like, or remove it and translate entirely yourself.
- You are encouraged to adjust the wording to make the prompt sound more natural to you.
- Prompts that ask very specific questions about non-Dutch-speaking regions may be localized. For example: instead of American states you can use Dutch or Belgian provinces.
- Use your own judgment on whether and how to translate certain technical terms (for example scientific or programming terms). Dutch scientific terms are often easiest to find by looking up the English Wikipedia page and clicking through to the Dutch-language version.
- Sometimes GPT-4 has also executed the prompt instead of only translating it. In that case you may simply delete the output.
## Discord
Questions or anything unclear? Come find us in the [Discord channel](https://discord.com/channels/879548962464493619/1217585262595408034)!
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
data-is-better-together/MPEP_GERMAN | data-is-better-together | "2024-05-07T13:04:41Z" | 2 | 2 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-04-09T15:15:53Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for MPEP_GERMAN
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("DIBT/MPEP_GERMAN")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/MPEP_GERMAN")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| source | Prompt | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| target | Target | text | True | Überprüfe die Übersetzung und passe sie an, falls notwendig. Weitere Hinweise in den Annotation Guidelines. | N/A |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "5996",
"fields": {
"source": "Can you provide two examples of metaphors using an Excel table format? \nHere is an Excel table with two examples of metaphors:\n| Metaphor | Meaning |\n| ------------- |:-------------:|\n| Life is a journey | Life can be compared to a journey with ups and downs |\n| Love is a rose | Love can be compared to a delicate and beautiful rose |\nCan you provide two more examples of metaphors using a MATLAB code format?"
},
"metadata": {
"evolved_from": null,
"kind": "synthetic",
"source": "evol_instruct"
},
"responses": [
{
"status": "submitted",
"user_id": "a26ea56e-a953-4474-ad7c-5d66cd0d35f8",
"values": {
"target": {
"value": "Kannst du zwei Beispiele f\u00fcr Metaphern im Excel-Tabellenformat liefern?\n\nHier ist eine Excel-Tabelle mit zwei Beispielen f\u00fcr Metaphern:\n\n| Metapher | Bedeutung |\n\n| ------------- |:-------------:|\n\n| Das Leben ist eine Reise | Das Leben kann mit einer Reise mit H\u00f6hen und Tiefen verglichen werden |\n\n| Liebe ist wie eine Rose | Liebe kann mit einer zarten und sch\u00f6nen Rose verglichen werden |\n\nKannst du zwei weitere Beispiele f\u00fcr Metaphern in einem MATLAB-Codeformat liefern?"
}
}
}
],
"suggestions": [
{
"agent": null,
"question_name": "target",
"score": null,
"type": null,
"value": "Kannst du zwei Beispiele f\u00fcr Metaphern in einem Excel-Tabellenformat liefern?\n\nHier ist eine Excel-Tabelle mit zwei Beispielen f\u00fcr Metaphern:\n\n| Metapher | Bedeutung |\n\n| ------------- |:-------------:|\n\n| Das Leben ist eine Reise | Das Leben kann mit einer Reise mit H\u00f6hen und Tiefen verglichen werden |\n\n| Liebe ist eine Rose | Liebe kann mit einer zarten und sch\u00f6nen Rose verglichen werden |\n\nKannst du zwei weitere Beispiele f\u00fcr Metaphern in einem MATLAB-Codeformat liefern?"
}
],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "5996",
"metadata": "{\"source\": \"evol_instruct\", \"kind\": \"synthetic\", \"evolved_from\": null}",
"source": "Can you provide two examples of metaphors using an Excel table format? \nHere is an Excel table with two examples of metaphors:\n| Metaphor | Meaning |\n| ------------- |:-------------:|\n| Life is a journey | Life can be compared to a journey with ups and downs |\n| Love is a rose | Love can be compared to a delicate and beautiful rose |\nCan you provide two more examples of metaphors using a MATLAB code format?",
"target": [
{
"status": "submitted",
"user_id": "a26ea56e-a953-4474-ad7c-5d66cd0d35f8",
"value": "Kannst du zwei Beispiele f\u00fcr Metaphern im Excel-Tabellenformat liefern?\n\nHier ist eine Excel-Tabelle mit zwei Beispielen f\u00fcr Metaphern:\n\n| Metapher | Bedeutung |\n\n| ------------- |:-------------:|\n\n| Das Leben ist eine Reise | Das Leben kann mit einer Reise mit H\u00f6hen und Tiefen verglichen werden |\n\n| Liebe ist wie eine Rose | Liebe kann mit einer zarten und sch\u00f6nen Rose verglichen werden |\n\nKannst du zwei weitere Beispiele f\u00fcr Metaphern in einem MATLAB-Codeformat liefern?"
}
],
"target-suggestion": "Kannst du zwei Beispiele f\u00fcr Metaphern in einem Excel-Tabellenformat liefern?\n\nHier ist eine Excel-Tabelle mit zwei Beispielen f\u00fcr Metaphern:\n\n| Metapher | Bedeutung |\n\n| ------------- |:-------------:|\n\n| Das Leben ist eine Reise | Das Leben kann mit einer Reise mit H\u00f6hen und Tiefen verglichen werden |\n\n| Liebe ist eine Rose | Liebe kann mit einer zarten und sch\u00f6nen Rose verglichen werden |\n\nKannst du zwei weitere Beispiele f\u00fcr Metaphern in einem MATLAB-Codeformat liefern?",
"target-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
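To reuse the submitted translations downstream, the records can be flattened into (English, German) pairs. A minimal sketch against the `datasets` layout shown above; the output path is hypothetical:
```python
import json
from datasets import load_dataset

ds = load_dataset("DIBT/MPEP_GERMAN", split="train")
with open("mpep_german_pairs.jsonl", "w", encoding="utf-8") as f:
    for row in ds:
        for resp in row["target"]:
            if resp["status"] == "submitted":
                pair = {"en": row["source"], "de": resp["value"]}
                f.write(json.dumps(pair, ensure_ascii=False) + "\n")
```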
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **source** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **target** is of type `text`, and description "Überprüfe die Übersetzung und passe sie an, falls notwendig. Weitere Hinweise in den Annotation Guidelines.".
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **target-suggestion** is of type `text`.
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
# Leaderboard
Track the progress on our [Leaderboard](https://huggingface.co/spaces/DIBT-German/DIBT-German-Dashboard).
# Notes on the translation
- For every prompt there is a translation suggestion generated by GPT-4. You can edit and adapt this suggestion, or remove it and translate it yourself.
- You can adjust the wording so that the prompt sounds more natural.
- Prompts that make very specific references to non-German-speaking regions can be localized rather than translated literally. For example, you can use German federal states instead of US states.
- Decide for yourself whether and how you want to translate certain technical terms (e.g., scientific or programming terms). German scientific terms are often easiest to find by opening the English Wikipedia page and clicking through to the German version from there.
- Sometimes the GPT-4 translation suggestions also contain answers to the English prompts instead of just translating them. In that case you can simply delete any output that goes beyond the translation.
# Learn more
Want to learn more about the Multilingual Prompt Evaluation Project? Visit the Data is Better Together repository on [Github](https://github.com/huggingface/data-is-better-together/tree/main). There you will also find information about other languages that are already being worked on.
# Contact
Do you have questions or requests? Feel free to reach out to us on [Discord](https://discord.com/channels/879548962464493619/1217179426002047076).
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
data-is-better-together/MPEP_ARABIC | data-is-better-together | "2024-07-18T02:17:24Z" | 2 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-04-09T15:34:13Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for MPEP_ARABIC
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("DIBT/MPEP_ARABIC")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/MPEP_ARABIC")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| source | Source | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| target | Target | text | True | Translate the text. | N/A |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"source": "If a recipe calls for 2 1/2 cups of sugar and you want to make a half portion of it, calculate the exact amount of sugar needed."
},
"metadata": {
"evolved_from": null,
"kind": "synthetic",
"source": "argilla/distilabel-reasoning-prompts"
},
"responses": [
{
"status": "submitted",
"user_id": "6e3edb87-0ccc-47ef-bd61-3ed0e68b20de",
"values": {
"target": {
"value": "\u0625\u0630\u0627 \u0643\u0627\u0646\u062a \u0627\u0644\u0648\u0635\u0641\u0629 \u062a\u062a\u0637\u0644\u0628 \u0643\u0648\u0628\u064a\u0646 \u0648\u0646\u0635\u0641 \u0645\u0646 \u0627\u0644\u0633\u0643\u0631 \u0648\u062a\u0631\u064a\u062f \u062a\u062d\u0636\u064a\u0631 \u0646\u0635\u0641 \u0647\u0630\u0647 \u0627\u0644\u0643\u0645\u064a\u0629\u060c \u0641\u0627\u062d\u0633\u0628 \u0643\u0645\u064a\u0629 \u0627\u0644\u0633\u0643\u0631 \u0627\u0644\u0645\u0637\u0644\u0648\u0628\u0629 \u0628\u0627\u0644\u0636\u0628\u0637."
}
}
}
],
"suggestions": [
{
"agent": null,
"question_name": "target",
"score": null,
"type": null,
"value": "\u0625\u0630\u0627 \u0643\u0627\u0646\u062a \u0627\u0644\u0648\u0635\u0641\u0629 \u062a\u062a\u0637\u0644\u0628 \u0643\u0648\u0628\u064a\u0646 \u0648\u0646\u0635\u0641 \u0645\u0646 \u0627\u0644\u0633\u0643\u0631 \u0648\u062a\u0631\u064a\u062f \u062a\u062d\u0636\u064a\u0631 \u0646\u0635\u0641 \u0627\u0644\u0643\u0645\u064a\u0629\u060c \u0641\u0627\u062d\u0633\u0628 \u0627\u0644\u0643\u0645\u064a\u0629 \u0627\u0644\u062f\u0642\u064a\u0642\u0629 \u0645\u0646 \u0627\u0644\u0633\u0643\u0631 \u0627\u0644\u0645\u0637\u0644\u0648\u0628\u0629."
}
],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"metadata": "{\"source\": \"argilla/distilabel-reasoning-prompts\", \"kind\": \"synthetic\", \"evolved_from\": null}",
"source": "If a recipe calls for 2 1/2 cups of sugar and you want to make a half portion of it, calculate the exact amount of sugar needed.",
"target": [
{
"status": "submitted",
"user_id": "6e3edb87-0ccc-47ef-bd61-3ed0e68b20de",
"value": "\u0625\u0630\u0627 \u0643\u0627\u0646\u062a \u0627\u0644\u0648\u0635\u0641\u0629 \u062a\u062a\u0637\u0644\u0628 \u0643\u0648\u0628\u064a\u0646 \u0648\u0646\u0635\u0641 \u0645\u0646 \u0627\u0644\u0633\u0643\u0631 \u0648\u062a\u0631\u064a\u062f \u062a\u062d\u0636\u064a\u0631 \u0646\u0635\u0641 \u0647\u0630\u0647 \u0627\u0644\u0643\u0645\u064a\u0629\u060c \u0641\u0627\u062d\u0633\u0628 \u0643\u0645\u064a\u0629 \u0627\u0644\u0633\u0643\u0631 \u0627\u0644\u0645\u0637\u0644\u0648\u0628\u0629 \u0628\u0627\u0644\u0636\u0628\u0637."
}
],
"target-suggestion": "\u0625\u0630\u0627 \u0643\u0627\u0646\u062a \u0627\u0644\u0648\u0635\u0641\u0629 \u062a\u062a\u0637\u0644\u0628 \u0643\u0648\u0628\u064a\u0646 \u0648\u0646\u0635\u0641 \u0645\u0646 \u0627\u0644\u0633\u0643\u0631 \u0648\u062a\u0631\u064a\u062f \u062a\u062d\u0636\u064a\u0631 \u0646\u0635\u0641 \u0627\u0644\u0643\u0645\u064a\u0629\u060c \u0641\u0627\u062d\u0633\u0628 \u0627\u0644\u0643\u0645\u064a\u0629 \u0627\u0644\u062f\u0642\u064a\u0642\u0629 \u0645\u0646 \u0627\u0644\u0633\u0643\u0631 \u0627\u0644\u0645\u0637\u0644\u0648\u0628\u0629.",
"target-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
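The submitted responses can likewise be collected into an English-Arabic parallel list. A minimal sketch against the `datasets` layout shown above:
```python
from datasets import load_dataset

ds = load_dataset("DIBT/MPEP_ARABIC", split="train")
pairs = [
    (row["source"], resp["value"])
    for row in ds
    for resp in row["target"]
    if resp["status"] == "submitted"
]
```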
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **source** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **target** is of type `text`, and description "Translate the text.".
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **target-suggestion** is of type `text`.
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. It is useful for giving annotators extra context, or for recording provenance details such as the author, the date, or the original source of the record. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`. A decoding sketch follows this list.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
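Note that in the `datasets` view the metadata travels as a JSON-encoded string (as in the record above), so it has to be decoded before you can filter on it. A minimal sketch, assuming the card's dataset has been loaded with `load_dataset`; the repo id below is a placeholder for this dataset's actual id:
```python
import json

from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual id.
ds = load_dataset("DIBT/MPEP_ARABIC", split="train")

# `metadata` is a JSON-encoded string, e.g.
# '{"source": "argilla/distilabel-reasoning-prompts", "kind": "synthetic", "evolved_from": null}'
def is_synthetic(record):
    meta = json.loads(record["metadata"]) if record["metadata"] else {}
    return meta.get("kind") == "synthetic"

synthetic = ds.filter(is_synthetic)
print(f"{len(synthetic)} of {len(ds)} records are synthetic")
```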
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
This is a translation dataset that contains texts. Please translate the text in the text field.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
data-is-better-together/MPEP_CZECH | data-is-better-together | "2024-06-27T20:04:09Z" | 2 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-04-16T09:27:22Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for MPEP_CZECH
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, install Argilla with `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("DIBT/MPEP_CZECH")
```
### Load with `datasets`
To load this dataset with `datasets`, install it with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/MPEP_CZECH")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| source | Source | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| target | Target | text | True | Translate the text. | N/A |
The **suggestions** are human- or machine-generated recommendations for each question, provided to assist the annotator during the annotation process. They are always linked to an existing question and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, holding the suggested value(s) and their metadata, respectively. The possible values are therefore the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
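For example, for the `target` question defined above, the `datasets` view exposes `target-suggestion` and `target-suggestion-metadata` columns alongside the `target` responses. A minimal sketch of inspecting them, using the same repo id as the load examples above:
```python
from datasets import load_dataset

ds = load_dataset("DIBT/MPEP_CZECH", split="train")

record = ds[0]
print(record["source"])                      # the field shown to annotators
print(record["target-suggestion"])           # the suggested translation, if any
print(record["target-suggestion-metadata"])  # {"agent": ..., "score": ..., "type": ...}
```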
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This is useful for giving annotators extra context, or for recording provenance details such as the author, the date, or the original source of the record. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are optional as well; they are just a plain string that can be used to provide instructions to the annotators. Find them in the [annotation guidelines](#annotation-guidelines) section.
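When the dataset is loaded through Argilla as shown above, the guidelines travel with the dataset configuration and can be read back from the object. A short sketch; the `guidelines` attribute name follows Argilla 1.x and is an assumption here:
```python
import argilla as rg

ds = rg.FeedbackDataset.from_huggingface("DIBT/MPEP_CZECH")

# Plain-string instructions defined when the dataset was configured.
print(ds.guidelines)
```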
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "1804",
"fields": {
"source": "What is the effect of climate change on the polar ice caps and how is it impacting global sea levels?"
},
"metadata": {
"evolved_from": null,
"kind": "human",
"source": "OpenAssistant/oasst2"
},
"responses": [
{
"status": "submitted",
"user_id": "818c9e76-9b4f-40c6-8efb-2dcd6d5d5ddb",
"values": {
"target": {
"value": "Jak\u00fd je vliv zm\u011bny klimatu na pol\u00e1rn\u00ed ledov\u00e9 \u010depice a jak ovliv\u0148uje glob\u00e1ln\u00ed hladinu mo\u0159\u00ed?"
}
}
}
],
"suggestions": [
{
"agent": null,
"question_name": "target",
"score": null,
"type": null,
"value": "Jak\u00fd je vliv zm\u011bny klimatu na pol\u00e1rn\u00ed ledov\u00e9 \u010depice a jak ovliv\u0148uje glob\u00e1ln\u00ed hladinu mo\u0159\u00ed?"
}
],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "1804",
"metadata": "{\"evolved_from\": null, \"kind\": \"human\", \"source\": \"OpenAssistant/oasst2\"}",
"source": "What is the effect of climate change on the polar ice caps and how is it impacting global sea levels?",
"target": [
{
"status": "submitted",
"user_id": "818c9e76-9b4f-40c6-8efb-2dcd6d5d5ddb",
"value": "Jak\u00fd je vliv zm\u011bny klimatu na pol\u00e1rn\u00ed ledov\u00e9 \u010depice a jak ovliv\u0148uje glob\u00e1ln\u00ed hladinu mo\u0159\u00ed?"
}
],
"target-suggestion": "Jak\u00fd je vliv zm\u011bny klimatu na pol\u00e1rn\u00ed ledov\u00e9 \u010depice a jak ovliv\u0148uje glob\u00e1ln\u00ed hladinu mo\u0159\u00ed?",
"target-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **source** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **target** is of type `text`, with the description "Translate the text.".
* **Suggestions:** As of Argilla 1.13.0, suggestions are included to ease and speed up the annotation process. Each suggestion is linked to an existing question, is always optional, and contains not just the suggested value itself but also any metadata attached to it.
* (optional) **target-suggestion** is of type `text`.
Additionally, there are two more optional fields:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. It is useful for giving annotators extra context, or for recording provenance details such as the author, the date, or the original source of the record. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
This is a translation dataset that contains texts. Please translate the text in the text field.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
data-is-better-together/MPEP_HUNGARIAN | data-is-better-together | "2024-04-22T21:04:14Z" | 2 | 2 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-04-19T07:58:02Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for MPEP_HUNGARIAN
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, install Argilla with `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("DIBT/MPEP_HUNGARIAN")
```
### Load with `datasets`
To load this dataset with `datasets`, install it with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/MPEP_HUNGARIAN")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| source | Source | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| target | Target | text | True | Translate the text. | N/A |
The **suggestions** are human- or machine-generated recommendations for each question, provided to assist the annotator during the annotation process. They are always linked to an existing question and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, holding the suggested value(s) and their metadata, respectively. The possible values are therefore the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
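Since every suggestion is tied to a question, one practical use of these columns is measuring how heavily annotators edited the machine suggestion before submitting. A minimal sketch over the `datasets` view, assuming `target` loads as a list of response dicts; the similarity heuristic is our own post-processing, not part of Argilla:
```python
import difflib

from datasets import load_dataset

ds = load_dataset("DIBT/MPEP_HUNGARIAN", split="train")

def edit_ratio(record):
    responses = record["target"] or []
    suggestion = record["target-suggestion"] or ""
    if not responses or not suggestion:
        return None
    submitted = responses[0]["value"]
    # 1.0 means the annotator kept the suggestion verbatim.
    return difflib.SequenceMatcher(None, suggestion, submitted).ratio()

ratios = [r for r in (edit_ratio(rec) for rec in ds) if r is not None]
if ratios:
    print(f"mean similarity to suggestion: {sum(ratios) / len(ratios):.2f}")
```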
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This is useful for giving annotators extra context, or for recording provenance details such as the author, the date, or the original source of the record. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are optional as well; they are just a plain string that can be used to provide instructions to the annotators. Find them in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "9999",
"fields": {
"source": "As an AI enthusiast, you love creating programs that can understand human language. Your latest project involves creating a program that can identify and replace words with their antonyms in a given text. \nTo demonstrate the effectiveness of your program, you decide to test it on a news article about a recent political event. However, to make it more challenging, you also want your program to differentiate between homonyms and use context clues to correctly replace them.\nHere\u0027s a step-by-step explanation of how your program works:\n1. The program reads the input text and identifies all the words that have antonyms.\n2. For each of these words, the program identifies the context in which it appears to determine the correct antonym to use.\n3. The program then replaces the original word with its antonym in the text.\n4. If the original word has multiple meanings, the program uses the context to determine which meaning is intended and replace it with the appropriate antonym.\n5. Finally, the program outputs the modified text with the replaced words.\nCan you put your natural language processing skills to the test and try to identify the antonyms used in the modified text?"
},
"metadata": {
"evolved_from": null,
"kind": "synthetic",
"source": "evol_instruct"
},
"responses": [
{
"status": "submitted",
"user_id": "d2d0ad53-12ef-4d3f-84c5-df3afb2b2699",
"values": {
"target": {
"value": "Mint mesters\u00e9ges intelligencia rajong\u00f3, szeretsz olyan programokat k\u00e9sz\u00edteni, amelyek k\u00e9pesek meg\u00e9rteni az emberi nyelvet. Leg\u00fajabb projekted egy olyan program fejleszt\u00e9se, amely k\u00e9pes felismerni \u00e9s kicser\u00e9lni a szavakat azok ellent\u00e9teire egy adott sz\u00f6vegben.\nAnnak \u00e9rdek\u00e9ben, hogy bemutasd a program hat\u00e9konys\u00e1g\u00e1t, \u00fagy d\u00f6ntesz, hogy teszteled azt egy \u00fajs\u00e1gcikken, amely egy nemr\u00e9giben t\u00f6rt\u00e9nt politikai esem\u00e9nyr\u0151l sz\u00f3l. Azonban, hogy m\u00e9g nagyobb kih\u00edv\u00e1st jelentsen, azt is szeretn\u00e9d, ha a program megk\u00fcl\u00f6nb\u00f6ztetn\u00e9 a homonim\u00e1kat \u00e9s a kontextus alapj\u00e1n helyesen cser\u00e9ln\u00e9 ki azokat.\n\u00cdme, egy l\u00e9p\u00e9sr\u0151l-l\u00e9p\u00e9sre le\u00edr\u00e1s a program m\u0171k\u00f6d\u00e9s\u00e9r\u0151l:\n1. A program beolvassa a bemeneti sz\u00f6veget \u00e9s azonos\u00edt minden olyan sz\u00f3t, amelynek van ellent\u00e9te.\n2. Minden ilyen sz\u00f3 eset\u00e9n a program azonos\u00edtja a kontextust, amelyben megjelennek, hogy meghat\u00e1rozza a helyes ellent\u00e9tes sz\u00f3t, amit haszn\u00e1lni kell.\n3. A program ezut\u00e1n kicser\u00e9li az eredeti sz\u00f3t annak ellentetj\u00e9re a sz\u00f6vegben.\n4. Ha az eredeti sz\u00f3nak t\u00f6bb jelent\u00e9se is van, a program a kontextust haszn\u00e1lja annak meghat\u00e1roz\u00e1s\u00e1ra, hogy melyik jelent\u00e9s \u00e9rtend\u0151, \u00e9s kicser\u00e9li a megfelel\u0151 ellent\u00e9tes sz\u00f3ra.\n5. V\u00e9g\u00fcl a program visszaadja a m\u00f3dos\u00edtott sz\u00f6veget a kicser\u00e9lt szavakkal.\nPr\u00f3b\u00e1ra tenn\u00e9d a term\u00e9szetes nyelvfeldolgoz\u00e1si k\u00e9pess\u00e9geidet, hogy azonos\u00edtsd a m\u00f3dos\u00edtott sz\u00f6vegben haszn\u00e1lt ellent\u00e9teket?"
}
}
}
],
"suggestions": [
{
"agent": null,
"question_name": "target",
"score": null,
"type": null,
"value": "A programot az emberi nyelv meg\u00e9rt\u00e9s\u00e9re alkalmas programok l\u00e9trehoz\u00e1sa szereti. Az utols\u00f3 projektje olyan program l\u00e9trehoz\u00e1sa, amely k\u00e9pes azonos\u00edtani \u00e9s helyettes\u00edteni a szavakat az ant\u00f3nimusaival egy adott sz\u00f6vegben. A program hat\u00e9konys\u00e1g\u00e1nak bemutat\u00e1sa \u00e9rdek\u00e9ben \u00fagy d\u00f6nt, hogy egy k\u00f6zelm\u00faltbeli politikai esem\u00e9nyr\u0151l sz\u00f3l\u00f3 h\u00edr cikkben tesztelj\u00fck. De hogy kih\u00edv\u00e1st jelentse, azt is szeretn\u00e9, hogy a program hom\u00f3nimokat k\u00fcl\u00f6nb\u00f6ztesse meg \u00e9s \u00f6sszef\u00fcgg\u00e9si nyomokat haszn\u00e1ljon, hogy helyesen helyettes\u00edts\u00e9k \u0151ket. Itt egy l\u00e9p\u00e9sr\u0151l l\u00e9p\u00e9sre magyar\u00e1zata annak, hogyan m\u0171k\u00f6dik a program: 1. A program olvasja a beutazott sz\u00f6veget \u00e9s azonos\u00edtja az \u00f6sszes ant\u00f3nimusaival rendelkez\u0151 sz\u00f3t. 2. Mindegyik sz\u00f3 eset\u00e9ben a program azonos\u00edtja azt a kontextust, amelyben \u00fagy t\u0171nik, hogy meghat\u00e1rozza az eredeti ant\u00f3nim\u00e1t, amelyet haszn\u00e1lni kell. 3. A program a sz\u00f6vegben"
}
],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "9999",
"metadata": "{\"evolved_from\": null, \"kind\": \"synthetic\", \"source\": \"evol_instruct\"}",
"source": "As an AI enthusiast, you love creating programs that can understand human language. Your latest project involves creating a program that can identify and replace words with their antonyms in a given text. \nTo demonstrate the effectiveness of your program, you decide to test it on a news article about a recent political event. However, to make it more challenging, you also want your program to differentiate between homonyms and use context clues to correctly replace them.\nHere\u0027s a step-by-step explanation of how your program works:\n1. The program reads the input text and identifies all the words that have antonyms.\n2. For each of these words, the program identifies the context in which it appears to determine the correct antonym to use.\n3. The program then replaces the original word with its antonym in the text.\n4. If the original word has multiple meanings, the program uses the context to determine which meaning is intended and replace it with the appropriate antonym.\n5. Finally, the program outputs the modified text with the replaced words.\nCan you put your natural language processing skills to the test and try to identify the antonyms used in the modified text?",
"target": [
{
"status": "submitted",
"user_id": "d2d0ad53-12ef-4d3f-84c5-df3afb2b2699",
"value": "Mint mesters\u00e9ges intelligencia rajong\u00f3, szeretsz olyan programokat k\u00e9sz\u00edteni, amelyek k\u00e9pesek meg\u00e9rteni az emberi nyelvet. Leg\u00fajabb projekted egy olyan program fejleszt\u00e9se, amely k\u00e9pes felismerni \u00e9s kicser\u00e9lni a szavakat azok ellent\u00e9teire egy adott sz\u00f6vegben.\nAnnak \u00e9rdek\u00e9ben, hogy bemutasd a program hat\u00e9konys\u00e1g\u00e1t, \u00fagy d\u00f6ntesz, hogy teszteled azt egy \u00fajs\u00e1gcikken, amely egy nemr\u00e9giben t\u00f6rt\u00e9nt politikai esem\u00e9nyr\u0151l sz\u00f3l. Azonban, hogy m\u00e9g nagyobb kih\u00edv\u00e1st jelentsen, azt is szeretn\u00e9d, ha a program megk\u00fcl\u00f6nb\u00f6ztetn\u00e9 a homonim\u00e1kat \u00e9s a kontextus alapj\u00e1n helyesen cser\u00e9ln\u00e9 ki azokat.\n\u00cdme, egy l\u00e9p\u00e9sr\u0151l-l\u00e9p\u00e9sre le\u00edr\u00e1s a program m\u0171k\u00f6d\u00e9s\u00e9r\u0151l:\n1. A program beolvassa a bemeneti sz\u00f6veget \u00e9s azonos\u00edt minden olyan sz\u00f3t, amelynek van ellent\u00e9te.\n2. Minden ilyen sz\u00f3 eset\u00e9n a program azonos\u00edtja a kontextust, amelyben megjelennek, hogy meghat\u00e1rozza a helyes ellent\u00e9tes sz\u00f3t, amit haszn\u00e1lni kell.\n3. A program ezut\u00e1n kicser\u00e9li az eredeti sz\u00f3t annak ellentetj\u00e9re a sz\u00f6vegben.\n4. Ha az eredeti sz\u00f3nak t\u00f6bb jelent\u00e9se is van, a program a kontextust haszn\u00e1lja annak meghat\u00e1roz\u00e1s\u00e1ra, hogy melyik jelent\u00e9s \u00e9rtend\u0151, \u00e9s kicser\u00e9li a megfelel\u0151 ellent\u00e9tes sz\u00f3ra.\n5. V\u00e9g\u00fcl a program visszaadja a m\u00f3dos\u00edtott sz\u00f6veget a kicser\u00e9lt szavakkal.\nPr\u00f3b\u00e1ra tenn\u00e9d a term\u00e9szetes nyelvfeldolgoz\u00e1si k\u00e9pess\u00e9geidet, hogy azonos\u00edtsd a m\u00f3dos\u00edtott sz\u00f6vegben haszn\u00e1lt ellent\u00e9teket?"
}
],
"target-suggestion": "A programot az emberi nyelv meg\u00e9rt\u00e9s\u00e9re alkalmas programok l\u00e9trehoz\u00e1sa szereti. Az utols\u00f3 projektje olyan program l\u00e9trehoz\u00e1sa, amely k\u00e9pes azonos\u00edtani \u00e9s helyettes\u00edteni a szavakat az ant\u00f3nimusaival egy adott sz\u00f6vegben. A program hat\u00e9konys\u00e1g\u00e1nak bemutat\u00e1sa \u00e9rdek\u00e9ben \u00fagy d\u00f6nt, hogy egy k\u00f6zelm\u00faltbeli politikai esem\u00e9nyr\u0151l sz\u00f3l\u00f3 h\u00edr cikkben tesztelj\u00fck. De hogy kih\u00edv\u00e1st jelentse, azt is szeretn\u00e9, hogy a program hom\u00f3nimokat k\u00fcl\u00f6nb\u00f6ztesse meg \u00e9s \u00f6sszef\u00fcgg\u00e9si nyomokat haszn\u00e1ljon, hogy helyesen helyettes\u00edts\u00e9k \u0151ket. Itt egy l\u00e9p\u00e9sr\u0151l l\u00e9p\u00e9sre magyar\u00e1zata annak, hogyan m\u0171k\u00f6dik a program: 1. A program olvasja a beutazott sz\u00f6veget \u00e9s azonos\u00edtja az \u00f6sszes ant\u00f3nimusaival rendelkez\u0151 sz\u00f3t. 2. Mindegyik sz\u00f3 eset\u00e9ben a program azonos\u00edtja azt a kontextust, amelyben \u00fagy t\u0171nik, hogy meghat\u00e1rozza az eredeti ant\u00f3nim\u00e1t, amelyet haszn\u00e1lni kell. 3. A program a sz\u00f6vegben",
"target-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **source** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **target** is of type `text`, with the description "Translate the text.".
* **Suggestions:** As of Argilla 1.13.0, suggestions are included to ease and speed up the annotation process. Each suggestion is linked to an existing question, is always optional, and contains not just the suggested value itself but also any metadata attached to it.
* (optional) **target-suggestion** is of type `text`.
Additionally, there are two more optional fields:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. It is useful for giving annotators extra context, or for recording provenance details such as the author, the date, or the original source of the record. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
This is a translation dataset that contains texts. Please translate the text in the text field.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
data-is-better-together/aya_dataset_dutch_example | data-is-better-together | "2024-05-03T10:08:05Z" | 2 | 0 | [
"task_categories:text-generation",
"language:nl",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"sft"
] | [
"text-generation"
] | "2024-04-19T09:51:46Z" | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 2054479.0656496773
num_examples: 1633
- name: test
num_bytes: 125810.10812306659
num_examples: 100
download_size: 672315
dataset_size: 2180289.173772744
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: apache-2.0
task_categories:
- text-generation
language:
- nl
tags:
- sft
size_categories:
- 1K<n<10K
--- |
Tristepin/quakec-raw1 | Tristepin | "2024-05-01T18:57:24Z" | 2 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-05-01T18:56:59Z" | ---
license: mit
---
|
data-is-better-together/aya_dutch_dpo_raw | data-is-better-together | "2024-05-02T20:15:13Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | null | "2024-05-02T13:17:08Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
- name: generation_model
dtype: string
- name: generation
dtype: string
- name: predicted_generation_language
dtype: string
- name: predicted_generation_language_score
dtype: float64
- name: generations
sequence: string
- name: generation_models
sequence: string
- name: model_name
dtype: string
- name: ratings
sequence: int64
- name: rationales
sequence: string
splits:
- name: train
num_bytes: 3530439
num_examples: 1200
download_size: 1847668
dataset_size: 3530439
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for aya_dutch_dpo_raw
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/DIBT/aya_dutch_dpo_raw/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/DIBT/aya_dutch_dpo_raw/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"annotation_type": "re-annotations",
"generation": "De constructie van het fort, dat werd opgetrokken in de haven van Lahaina met behulp van massive koraalblokken die uit de omringende riffen waren verwijderd, markeerde een keerpunt in de politieke en economische ontwikkeling van het eiland Maui in de eerste helft van de negentiende eeuw.",
"generation_model": "meta-llama/Meta-Llama-3-70B-Instruct",
"generation_models": [
"aya",
"meta-llama/Meta-Llama-3-70B-Instruct"
],
"generations": [
"Een meer complexe versie van de zin is natuurlijk: \"Het fort werd gebouwd in het kadegebied bij Lahaina Harbor van koraalblokken met muren bekroond met 47 kanonnen die van schepen zijn verkregen\".",
"De constructie van het fort, dat werd opgetrokken in de haven van Lahaina met behulp van massive koraalblokken die uit de omringende riffen waren verwijderd, markeerde een keerpunt in de politieke en economische ontwikkeling van het eiland Maui in de eerste helft van de negentiende eeuw."
],
"instruction": "Maak een complexere zin: Het fort werd gebouwd in de haven van Lahaina met koraalblokken.",
"language": "Dutch",
"language_code": "nld",
"model_name": "meta-llama/Meta-Llama-3-70B-Instruct",
"predicted_generation_language": "nld_Latn",
"predicted_generation_language_score": 0.9995737671852112,
"ratings": [
4,
5
],
"rationales": [
"The text accurately provides a more complex sentence as requested, and the added details are correct (e.g., \"muren bekroond met 47 kanonnen\"). The sentence is well-structured and easy to follow. However, the model could be more confident in its language, and some parts feel slightly redundant (e.g., \"kadegebied bij Lahaina Harbor\").",
"The text provides a sophisticated and accurate sentence that not only meets the request but also adds valuable context about the significance of the fort\u0027s construction. The language is confident and precise, and the sentence is well-structured and engaging. The model demonstrates a thorough understanding of the topic and effectively conveys its knowledge without any noticeable errors or hallucinations."
],
"targets": "Een meer complexe versie van de zin is natuurlijk: \"Het fort werd gebouwd in het kadegebied bij Lahaina Harbor van koraalblokken met muren bekroond met 47 kanonnen die van schepen zijn verkregen\".",
"user_id": "ca908e583236b208e473e89dae5c7b7d3daf3662e2bbf6606f0702c718bb5c06"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_dutch_dpo_raw", "default")
```
Or simply as follows, since there is only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_dutch_dpo_raw")
```
</details>
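The `instruction`, `generations`, and `ratings` columns carry everything needed to derive preference pairs from this raw dump. A minimal sketch, assuming each record holds the two rated generations shown above; the chosen/rejected convention is our own post-processing, not something the pipeline emits:
```python
from datasets import load_dataset

ds = load_dataset("DIBT/aya_dutch_dpo_raw", split="train")

def to_preference_pair(record):
    # Pair each generation with its rating and sort ascending by rating.
    scored = sorted(zip(record["ratings"], record["generations"]))
    return {
        "prompt": record["instruction"],
        "chosen": scored[-1][1],   # highest-rated generation
        "rejected": scored[0][1],  # lowest-rated generation
    }

pairs = ds.map(to_preference_pair, remove_columns=ds.column_names)
print(pairs[0]["prompt"])
```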
|
data-is-better-together/aya_dataset_english_example | data-is-better-together | "2024-05-03T10:08:35Z" | 2 | 0 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"sft"
] | [
"text-generation"
] | "2024-05-03T09:55:27Z" | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 4836140.556250679
num_examples: 3844
- name: test
num_bytes: 125810.10812306659
num_examples: 100
download_size: 1606520
dataset_size: 4961950.664373746
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- sft
size_categories:
- 1K<n<10K
--- |
data-is-better-together/aya_english_dpo_raw | data-is-better-together | "2024-05-03T10:07:52Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | null | "2024-05-03T10:07:49Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
- name: generation_model
dtype: string
- name: generation
dtype: string
- name: predicted_generation_language
dtype: string
- name: predicted_generation_language_score
dtype: float64
- name: generations
sequence: string
- name: generation_models
sequence: string
- name: model_name
dtype: string
- name: ratings
sequence: int64
- name: rationales
sequence: string
splits:
- name: train
num_bytes: 567777
num_examples: 100
download_size: 320089
dataset_size: 567777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for aya_english_dpo_raw
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/DIBT/aya_english_dpo_raw/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/DIBT/aya_english_dpo_raw/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"annotation_type": "original-annotations",
"generation": "The largest organ in the human body is the skin. It covers the entire surface of the body and has a total area of about 22 square feet (2 square meters) in adults. The skin performs many important functions, such as:\n\n* Protecting the body from external damage, such as injury, infection, and UV radiation\n* Regulating body temperature\n* Aiding in the production of vitamin D\n* Assisting in the sensation of touch and feel\n* Helping to maintain fluid balance and hydration\n\nThe skin is composed of several layers, with the outermost layer being the epidermis, followed by the dermis, and then the hypodermis. It is estimated that the skin accounts for about 15% of an adult\u0027s total body weight.\n\nIt\u0027s worth noting that while the skin is the largest organ by surface area, the liver is often considered the largest organ by mass, weighing around 3 pounds (1.4 kilograms) in adults.",
"generation_model": "meta-llama/Meta-Llama-3-70B-Instruct",
"generation_models": [
"aya",
"meta-llama/Meta-Llama-3-70B-Instruct"
],
"generations": [
"The largest organ in the human body is the skin. It is the body\u0027s largest organ by surface area and weight. The skin serves many essential functions, including protecting the body from physical damage, pathogens, and dehydration, as well as helping to regulate body temperature and producing vitamin D in response to sunlight. It\u0027s a complex organ with multiple layers, cells, and structures, each serving a unique purpose in maintaining the body\u0027s overall health and well-being.",
"The largest organ in the human body is the skin. It covers the entire surface of the body and has a total area of about 22 square feet (2 square meters) in adults. The skin performs many important functions, such as:\n\n* Protecting the body from external damage, such as injury, infection, and UV radiation\n* Regulating body temperature\n* Aiding in the production of vitamin D\n* Assisting in the sensation of touch and feel\n* Helping to maintain fluid balance and hydration\n\nThe skin is composed of several layers, with the outermost layer being the epidermis, followed by the dermis, and then the hypodermis. It is estimated that the skin accounts for about 15% of an adult\u0027s total body weight.\n\nIt\u0027s worth noting that while the skin is the largest organ by surface area, the liver is often considered the largest organ by mass, weighing around 3 pounds (1.4 kilograms) in adults."
],
"instruction": "What is the largest organ in the human body?",
"language": "English",
"language_code": "eng",
"model_name": "meta-llama/Meta-Llama-3-70B-Instruct",
"predicted_generation_language": "eng_Latn",
"predicted_generation_language_score": 0.9590950012207031,
"ratings": [
5,
5
],
"rationales": [
"The text is accurate, informative, and comprehensive in describing the skin as the largest organ in the human body. It provides specific details about its functions, structure, and importance, aligning perfectly with the instruction.",
"This text is equally excellent, providing a clear and concise answer to the question. It lists the skin\u0027s functions, describes its composition, and offers additional interesting facts, such as the comparison with the liver\u0027s mass. The text is well-structured, accurate, and confident in its information, making it an excellent response."
],
"targets": "The largest organ in the human body is the skin. It is the body\u0027s largest organ by surface area and weight. The skin serves many essential functions, including protecting the body from physical damage, pathogens, and dehydration, as well as helping to regulate body temperature and producing vitamin D in response to sunlight. It\u0027s a complex organ with multiple layers, cells, and structures, each serving a unique purpose in maintaining the body\u0027s overall health and well-being.",
"user_id": "29f22cf193a81e1a5c47d76af453a91b3cd19aa348995c7add1df15fe24e8801"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_english_dpo_raw", "default")
```
Or simply as follows, since there is only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_english_dpo_raw")
```
</details>
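Because every record carries `predicted_generation_language` and `predicted_generation_language_score`, a natural first cleaning step is dropping rows whose generation drifted away from English. A minimal sketch; the 0.9 threshold is an arbitrary illustrative choice:
```python
from datasets import load_dataset

ds = load_dataset("DIBT/aya_english_dpo_raw", split="train")

clean = ds.filter(
    lambda r: r["predicted_generation_language"] == "eng_Latn"
    and r["predicted_generation_language_score"] >= 0.9
)
print(f"kept {len(clean)} of {len(ds)} records")
```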
|