datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---
jlbaker361/blender_animals | jlbaker361 | "2024-11-22T16:21:08Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T16:19:59Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: character
dtype: string
- name: x
dtype: string
- name: y
dtype: string
- name: x_1
dtype: string
- name: y_1
dtype: string
- name: angle
dtype: string
splits:
- name: train
num_bytes: 528387.0
num_examples: 4
download_size: 522511
dataset_size: 528387.0
---
# Dataset Card for "blender_animals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bmmm/Equilibrium-Traffic-Networks | bmmm | "2024-11-22T16:33:30Z" | 6 | 1 | [
"license:mit",
"region:us"
] | null | "2024-11-22T16:24:13Z" | ---
license: mit
---
# Equilibrium-Traffic-Networks
This document describes the datasets generated for and used in the study "A hybrid deep-learning-metaheuristic framework for bi-level network design problems" by Bahman Madadi and Gonçalo H. de Almeida Correia, published in Expert Systems with Applications. The datasets were generated to train and evaluate models for solving the User Equilibrium (UE) problem on three transportation networks (Sioux-Falls, Eastern-Massachusetts, and Anaheim) from the well-known "transport networks for research" repository. The datasets, together with comprehensive metadata, can be downloaded from [this repository](https://doi.org/10.6084/m9.figshare.27889251.v1).
Dataset citation: Madadi, Bahman (2024). Equilibrium-Traffic-Networks. figshare. Dataset. https://doi.org/10.6084/m9.figshare.27889251.v1
## Metadata
| Network | Nodes | Edges | OD Pairs | Train Samples | Val Samples | Test Samples | Dataset Size | Solvers | Algorithm |
|-----------------------|-------|-------|----------|---------------|-------------|--------------|--------------|---------|-----------|
| SiouxFalls | 24 | 76 | 576 | 18000 | 1000 | 1000 | 20,000 | Aeq, Ipp| BFW |
| Eastern-Massachusetts | 74 | 258 | 5476 | 4000 | 500 | 500 | 5,000 | Aeq, Ipp| BFW |
| Anaheim | 416 | 914 | 1444 | 4000 | 500 | 500 | 5,000 | Aeq, Ipp| BFW |
### Features and Data Fields
| Field | Type | Description |
|----------------------|--------|--------------------------------------------------|
| Node Features | Array | Represent origin-destination (OD) demand matrices for travel. Each OD pair specifies travel demand between zones. |
| Edge Features | Array | Include: Free-flow travel time (FFTT) and Capacity. |
| Edge Labels | Array | Optimal link flows derived from solving the DUE problem. |
| Number of Links | Int | Number of links in the network. |
| Number of Nodes | Int | Number of nodes in the network. |
| Number of OD Pairs | Int | Number of origin-destination pairs in the network. |
| Train Split | Int | Number of samples in the training set. |
| Validation Split | Int | Number of samples in the validation set. |
| Test Split | Int | Number of samples in the test set. |
| Dataset Size | Int | Total number of samples in the dataset. |
| Solvers | String | Solvers used for generating the dataset. |
| Algorithm | String | Algorithm used for generating the dataset. |
## Datasets
The datasets are generated using the scripts `data_due_generate.py` and `data_dataset_prep.py` from the GitHub repository. Each dataset corresponds to a different transportation network and contains solved instances of the DUE problem.
### Dataset Structure
Each dataset is stored as a pickle file and contains three splits: train, validation, and test. Each split is a list of DGLGraph objects with node and edge features, along with edge labels.
- **Node Features**: Represent origin-destination (OD) demand matrices for travel. Each OD pair specifies travel demand between zones. Stored in the `feat` field of the DGLGraph.
- **Edge Features**: Include Free-flow travel time (FFTT) and Capacity. Stored in the `feat` field of the DGLGraph.
- **Edge Labels**: Optimal link flows derived from solving the DUE problem. A list of labels for each edge in the DGLGraph.
### Available Networks
- SiouxFalls
- Eastern-Massachusetts
- Anaheim
## Usage
To load a dataset, use the following code:
```python
import pickle
# Imported so pickle can resolve the dataset class if the file references it.
from data_dataset_prep import DUEDatasetDGL

case = 'SiouxFalls'  # or 'Eastern-Massachusetts', 'Anaheim'
data_dir = 'DatasetsDUE'
with open(f'{data_dir}/{case}/{case}.pkl', 'rb') as f:
    train, val, test = pickle.load(f)
# Example: Accessing the first graph and its edge labels in the training set
graph, edge_labels = train[0]
print(graph)
print(edge_labels)
```
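Continuing from the snippet above, the node and edge features described under Dataset Structure can be read through DGL's standard accessors. A minimal sketch, assuming the features are stored under the `feat` key as noted above:
```python
# Sketch: reading the feature fields of the first training graph.
node_feats = graph.ndata['feat']   # OD demand features per node
edge_feats = graph.edata['feat']   # free-flow travel time (FFTT) and capacity
print(node_feats.shape, edge_feats.shape, len(edge_labels))
```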
## Data Generation Steps
1. **Define Parameters**: Set the parameters for dataset generation in the `parameters()` function.
2. **Solve DUE Problem**: Use the `data_due_generate.py` script to solve the DUE problem for each network.
3. **Store Results**: Save the results as CSV files and clean up the data.
4. **Create DGL Dataset**: Convert the data into DGL format using the `data_dataset_prep.py` script and save it as pickle files (sketched below).
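To illustrate step 4, here is a minimal, hypothetical sketch of packing one solved instance into a DGLGraph and pickling the splits; the real logic lives in `data_dataset_prep.py`, and the tensors below are placeholders:
```python
import pickle

import dgl
import torch

# Hypothetical sketch of step 4: build one placeholder graph per solved
# DUE instance, attach features and labels, and pickle the splits.
src, dst = torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])
g = dgl.graph((src, dst))
g.ndata['feat'] = torch.rand(g.num_nodes(), 4)   # OD demand features
g.edata['feat'] = torch.rand(g.num_edges(), 2)   # FFTT and capacity
edge_labels = torch.rand(g.num_edges())          # optimal link flows

train, val, test = [(g, edge_labels)], [], []
with open('DatasetsDUE/SiouxFalls/SiouxFalls.pkl', 'wb') as f:
    pickle.dump((train, val, test), f)
```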
## Scripts
### `data_due_generate.py` (from GitHub repository)
This script generates the datasets by solving the DUE problem for each network in the benchmark networks. The parameters for dataset generation are defined in the `parameters()` function.
### `data_dataset_prep.py` (from GitHub repository)
This script contains the classes and functions for preparing the datasets and converting them into DGL format.
## References
- [A hybrid deep-learning-metaheuristic framework for bi-level network design problems](https://doi.org/10.1016/j.eswa.2023.122814)
- [GitHub Repository: HDLMF_GIN-GA](https://github.com/bahmanmdd/HDLMF_GIN-GA)
- [Dataset repository](https://doi.org/10.6084/m9.figshare.27889251.v1)
## Citation
If you use these datasets in your research, please cite the following paper and dataset:
Paper citation:
Madadi B, de Almeida Correia GH. A hybrid deep-learning-metaheuristic framework for bi-level network design problems. Expert Systems with Applications. 2024 Jun 1;243:122814. https://doi.org/10.1016/j.eswa.2023.122814
Dataset citation:
Madadi, Bahman (2024). Equilibrium-Traffic-Networks. figshare. Dataset. https://doi.org/10.6084/m9.figshare.27889251.v1
---
### Metadata
```json
{
"datasets": [
{
"name": "SiouxFalls",
"description": "DGL dataset for the SiouxFalls transportation network with solved instances of the DUE problem.",
"num_samples": 20000,
"features": {
"node_features": "Represent origin-destination (OD) demand matrices for travel. Each OD pair specifies travel demand between zones.",
"edge_features": "Include Free-flow travel time (FFTT) and Capacity.",
"edge_labels": "Optimal link flows derived from solving the DUE problem."
},
"splits": ["train", "val", "test"]
},
{
"name": "Eastern-Massachusetts",
"description": "DGL dataset for the Eastern-Massachusetts transportation network with solved instances of the DUE problem.",
"num_samples": 5000,
"features": {
"node_features": "Represent origin-destination (OD) demand matrices for travel. Each OD pair specifies travel demand between zones.",
"edge_features": "Include Free-flow travel time (FFTT) and Capacity.",
"edge_labels": "Optimal link flows derived from solving the DUE problem."
},
"splits": ["train", "val", "test"]
},
{
"name": "Anaheim",
"description": "DGL dataset for the Anaheim transportation network with solved instances of the DUE problem.",
"num_samples": 5000,
"features": {
"node_features": "Represent origin-destination (OD) demand matrices for travel. Each OD pair specifies travel demand between zones.",
"edge_features": "Include Free-flow travel time (FFTT) and Capacity.",
"edge_labels": "Optimal link flows derived from solving the DUE problem."
},
"splits": ["train", "val", "test"]
}
],
"references": [
{
"title": "A hybrid deep-learning-metaheuristic framework for bi-level network design problems",
"doi": "10.1016/j.eswa.2023.122814",
"url": "https://doi.org/10.1016/j.eswa.2023.122814",
"authors": ["Bahman Madadi", "Gonçalo H. de Almeida Correia"],
"journal": "Expert Systems with Applications",
"year": 2024,
"volume": 243,
"pages": "122814"
},
{
"title": "GitHub Repository: HDLMF_GIN-GA",
"url": "https://github.com/bahmanmdd/HDLMF_GIN-GA"
}
]
}
```
|
musashihinck/rpp_step3_qwen25-72b | musashihinck | "2024-11-22T20:21:13Z" | 6 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T16:25:42Z" | ---
dataset_info:
features:
- name: common_id
dtype: string
- name: template_id
dtype: string
- name: topic_id
dtype: int64
- name: topic_polarity
dtype: string
- name: call_idx
dtype: int64
- name: generation_model
dtype: string
- name: eval_model
dtype: string
- name: prompt_text
dtype: string
- name: response_text
dtype: string
- name: eval_text
dtype: string
- name: template_text
dtype: string
- name: topic_text
dtype: string
splits:
- name: train
num_bytes: 10090784708
num_examples: 3180000
download_size: 4888086079
dataset_size: 10090784708
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Westcott/dior_text_to_image | Westcott | "2024-11-23T02:26:14Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T16:30:08Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2037786974.0
num_examples: 5862
- name: validation
num_bytes: 1998869913.0
num_examples: 5863
download_size: 4000336494
dataset_size: 4036656887.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
dgambettaphd/D_gen6_run0_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-11-22T16:48:03Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T16:47:59Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 579858
num_examples: 1000
download_size: 352374
dataset_size: 579858
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shahradmz/ca_constitution_2 | Shahradmz | "2024-11-23T18:04:31Z" | 6 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T17:07:08Z" | ---
license: mit
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 10469526
num_examples: 28673
- name: test
num_bytes: 1847119
num_examples: 5061
download_size: 6567019
dataset_size: 12316645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dgambettaphd/D_gen7_run0_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-11-22T18:00:49Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T18:00:46Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 579807
num_examples: 1000
download_size: 352435
dataset_size: 579807
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RyanYr/self-reflect_mini8Bit-om2-sft-t2_mg_t0 | RyanYr | "2024-11-22T18:13:02Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T18:13:01Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: response@0
sequence: string
splits:
- name: train
num_bytes: 35175150
num_examples: 19473
download_size: 16878778
dataset_size: 35175150
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rudra-ai/ai-responses-helpful-70b-update-dpo | Rudra-ai | "2024-11-22T18:14:50Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T18:14:45Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 2572055
num_examples: 1000
download_size: 1259394
dataset_size: 2572055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
infinite-dataset-hub/PregnancyBotConvo | infinite-dataset-hub | "2024-11-22T19:00:27Z" | 6 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | null | "2024-11-22T19:00:27Z" | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# PregnancyBotConvo
tags: ConversationalAI, Pregnancy, Healthcare
_Note: This is an AI-generated dataset, so its content may be inaccurate or false_
**Dataset Description:**
The 'PregnancyBotConvo' dataset is a curated collection of simulated conversations between pregnant individuals and a conversational AI designed to provide guidance and support throughout the pregnancy journey. Each row in the dataset represents a conversation snippet with associated metadata to assist in training conversational models to handle queries related to pregnancy.
**CSV Content Preview:**
```
index,conversation,label
1,"User: I'm 12 weeks pregnant and I'm feeling very tired lately. Bot: It's quite common to feel more fatigued as your pregnancy progresses. Ensuring you're getting enough rest and staying hydrated can help. How have you been sleeping?",Supportive_Response
2,"User: My doctor mentioned I might need to start iron supplements. Bot: Iron is crucial during pregnancy to support your baby's growth and prevent anemia. Please discuss with your healthcare provider to get a prescription tailored for you.",Educational_Response
3,"User: I'm worried about morning sickness. Bot: Morning sickness typically subsides as the pregnancy advances. Eating small, frequent meals and avoiding certain smells can help manage the symptoms. Remember, it's normal, and you're not alone.",Reassuring_Response
4,"User: Can I continue exercising during my third trimester? Bot: Gentle exercise like walking or swimming is beneficial, but it's best to consult with your healthcare provider for personalized advice on exercise during the third trimester.",Advisory_Response
5,"User: How can I make sure I'm gaining the right amount of weight during pregnancy? Bot: Weight gain varies for each pregnancy. It's essential to follow your doctor's advice on nutrition and monitoring your weight to ensure it's within the recommended range. Shall we schedule a check-up?",Proactive_Response
```
This dataset serves as a basis for training AI models to provide empathetic, informative, and supportive interactions for pregnant individuals seeking advice or information on their pregnancy.
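A minimal sketch of loading the CSV-backed dataset with the `datasets` library (column names follow the preview above):
```python
from datasets import load_dataset

# Sketch: load the CSV-backed dataset; columns follow the preview above.
ds = load_dataset("infinite-dataset-hub/PregnancyBotConvo", split="train")
print(ds[0]["conversation"], ds[0]["label"])
```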
**Source of the data:**
The dataset was generated with the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct, using the query 'complete pregnancy chatbot':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=complete+pregnancy+chatbot&dataset=PregnancyBotConvo&tags=ConversationalAI,+Pregnancy,+Healthcare
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
heyyjudes/lmsys-en-prompt-only-dedup | heyyjudes | "2024-11-22T23:09:27Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T19:42:22Z" | ---
dataset_info:
features:
- name: user_prompt
dtype: string
splits:
- name: train
num_bytes: 170450055.3380735
num_examples: 422952
download_size: 99351483
dataset_size: 170450055.3380735
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen9_run0_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-11-22T20:24:56Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T20:24:54Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 579795
num_examples: 1000
download_size: 352338
dataset_size: 579795
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Evgy23/Alice_dataset | Evgy23 | "2024-11-22T20:38:50Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T20:38:48Z" | ---
dataset_info:
features:
- name: Original Text
dtype: string
- name: Lemmatized Text
sequence: string
- name: "POS Annotation\t"
sequence:
sequence: string
- name: ner
sequence:
sequence: string
splits:
- name: train
num_bytes: 683
num_examples: 1
download_size: 3549
dataset_size: 683
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mmtg/ui_design_image_text | mmtg | "2024-11-22T20:50:17Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T20:48:55Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 34427370.0
num_examples: 1000
download_size: 28749395
dataset_size: 34427370.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_9 | ZixuanKe | "2024-11-22T21:25:40Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T21:25:38Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5959798
num_examples: 1032
download_size: 424528
dataset_size: 5959798
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_4 | ZixuanKe | "2024-11-22T21:29:55Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T21:29:54Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5601360
num_examples: 947
download_size: 410078
dataset_size: 5601360
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_25 | ZixuanKe | "2024-11-22T21:30:18Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T21:30:17Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 257977
num_examples: 46
download_size: 30543
dataset_size: 257977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_20 | ZixuanKe | "2024-11-22T21:30:50Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T21:30:48Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 284735
num_examples: 40
download_size: 24644
dataset_size: 284735
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen10_run0_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-11-22T21:36:10Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T21:36:07Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 579816
num_examples: 1000
download_size: 352322
dataset_size: 579816
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_9 | ZixuanKe | "2024-11-22T21:37:29Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T21:37:27Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 191175
num_examples: 35
download_size: 25297
dataset_size: 191175
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_4 | ZixuanKe | "2024-11-22T21:41:01Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T21:41:00Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 316391
num_examples: 55
download_size: 26700
dataset_size: 316391
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RickBrannan/categorize_bib_lang_grammar | RickBrannan | "2024-11-22T22:24:09Z" | 6 | 0 | [
"task_categories:text-classification",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"modality:text",
"region:us"
] | [
"text-classification"
] | "2024-11-22T22:11:22Z" | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- en
pretty_name: Categorize Sentences with Biblical language grammar
---
# Dataset Card for categorize_bib_lang_grammar
## Dataset Details
2,700+ sentences categorized as `0` (`NOT-GRAMMAR`) or `1` (`GRAMMAR`), intended for use with encoder classifiers such as DistilBERT. The categorization is human-curated.
* **License:** CC-BY-SA 4.0
### Dataset Sources
Most of the sentences (~2,000) in this training data come from:
* **unfoldingWord Greek Grammar**: https://git.door43.org/unfoldingWord/en_ugg, licensed CC-BY-SA 4.0
* **unfoldingWord Hebrew Grammar**: https://git.door43.org/unfoldingWord/en_uhg, licensed CC-BY-SA 4.0
A smaller portion of the sentences (~800) come from:
* **SIL Open Translators Notes**: https://opentn.bible, licensed CC-BY-SA 4.0
## Uses
The original intent of the data is to locate sentences in resources like the SIL Open Translators Notes that use grammatical terminology.
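As a sketch of that intended use with a DistilBERT-style classifier (the `text` and `label` column names are assumptions; verify against the actual files):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Sketch: prepare the sentences for a DistilBERT-style binary classifier.
# The "text" and "label" column names are assumptions; verify before use.
ds = load_dataset("RickBrannan/categorize_bib_lang_grammar", split="train")
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoded = ds.map(lambda batch: tok(batch["text"], truncation=True), batched=True)
```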
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_30 | ZixuanKe | "2024-11-22T23:32:48Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T23:32:47Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5746968
num_examples: 934
download_size: 399415
dataset_size: 5746968
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Finnish-NLP/ds_nemotron_klm_lt_filt_dpo_rejected_orig_col_pairRM_com_les_10_hhrlhf_added | Finnish-NLP | "2024-11-22T23:35:06Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T23:35:03Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: response_rejected
dtype: string
splits:
- name: train
num_bytes: 23181248.76174307
num_examples: 10086
download_size: 11669693
dataset_size: 23181248.76174307
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Gramacho/complete_pira_test_corpus2_ptbr_llama3_alpaca_46 | Gramacho | "2024-11-22T23:38:03Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T23:38:02Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: id
dtype: string
- name: context
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 143257
num_examples: 46
download_size: 78830
dataset_size: 143257
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fracapuano/moss-pen | fracapuano | "2024-11-23T00:43:21Z" | 6 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"moss",
"tutorial",
"eval"
] | [
"robotics"
] | "2024-11-23T00:42:36Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- moss
- tutorial
- eval
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_26 | ZixuanKe | "2024-11-23T01:27:25Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T01:27:24Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 272952
num_examples: 49
download_size: 42683
dataset_size: 272952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
violetxi/NUMINA-V2-Clean-Blocks-6800_7000-0_41 | violetxi | "2024-11-23T05:52:16Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T01:57:07Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: source
dtype: string
- name: is_correct
dtype: bool
- name: target_answer
dtype: string
- name: solution
dtype: string
- name: solution_steps
dtype: string
- name: attempts
dtype: string
- name: model_answer
dtype: string
splits:
- name: train
num_bytes: 825110849
num_examples: 122816
download_size: 83109097
dataset_size: 825110849
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
preetam8/accent_correction_dataset_en_speaker_7 | preetam8 | "2024-11-23T02:32:09Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T02:30:22Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: speaker
dtype: string
- name: waveform
sequence: float16
splits:
- name: train
num_bytes: 2065771314
num_examples: 3000
download_size: 1939098830
dataset_size: 2065771314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
capemox/ecthr-b-preprocessed | capemox | "2024-11-23T04:26:29Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T04:26:22Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 88986061
num_examples: 9000
- name: test
num_bytes: 11802744
num_examples: 1000
- name: validation
num_bytes: 10909180
num_examples: 1000
download_size: 51535941
dataset_size: 111697985
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_11 | ZixuanKe | "2024-11-23T05:52:46Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T05:52:45Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5957724
num_examples: 1043
download_size: 434109
dataset_size: 5957724
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_11 | ZixuanKe | "2024-11-23T06:01:29Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T06:01:28Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 336231
num_examples: 53
download_size: 34048
dataset_size: 336231
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JimmyHuang790624/lele_example | JimmyHuang790624 | "2024-11-23T06:31:52Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-23T06:30:39Z" | ---
license: apache-2.0
---
|
depression-detect/english-merged | depression-detect | "2024-11-23T07:50:07Z" | 6 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T07:27:59Z" | ---
license: mit
---
# Dataset Overview
This dataset is composed of two separate smaller datasets obtained from Kaggle:
1. **`dataset.csv`**
[Link to Kaggle](https://www.kaggle.com/datasets/mdforiduzzamanzihad/preprocessed-depression-dataset)
2. **`depression_dataset_reddit_cleaned.csv`**
[Link to Kaggle](https://www.kaggle.com/datasets/infamouscoder/depression-reddit-cleaned)
Each dataset contains preprocessed text data related to depression, useful for tasks like sentiment analysis, mental health studies, or NLP modeling.
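A minimal sketch of loading the merged data with the `datasets` library (assuming the CSV files resolve directly from this repository; inspect the columns before modeling):
```python
from datasets import load_dataset

# Sketch: load the merged CSV data and check its schema.
ds = load_dataset("depression-detect/english-merged", split="train")
print(ds.column_names, ds.num_rows)
```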
|
InsultedByMathematics/rebel-ultrafeedback-test-evaluation-update-401 | InsultedByMathematics | "2024-11-23T08:45:04Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T08:45:02Z" | ---
dataset_info:
features:
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: llama_prompt
dtype: string
- name: llama_prompt_tokens
sequence: int64
- name: response_0_reward
dtype: float64
- name: response_1_reward
dtype: float64
- name: response_2_reward
dtype: float64
- name: response_3_reward
dtype: float64
- name: response_4_reward
dtype: float64
- name: chosen
dtype: string
- name: chosen_reward
dtype: float64
- name: llama_chosen
dtype: string
- name: llama_chosen_tokens
sequence: int64
- name: reject
dtype: string
- name: reject_reward
dtype: float64
- name: llama_reject
dtype: string
- name: llama_reject_tokens
sequence: int64
- name: chosen_logprob
dtype: float64
- name: reject_logprob
dtype: float64
- name: finetuned_response_0
dtype: string
- name: finetuned_response_1
dtype: string
- name: finetuned_response_2
dtype: string
- name: finetuned_response_3
dtype: string
- name: finetuned_response_4
dtype: string
splits:
- name: test_prefs
num_bytes: 105679213
num_examples: 1801
download_size: 27898623
dataset_size: 105679213
configs:
- config_name: default
data_files:
- split: test_prefs
path: data/test_prefs-*
---
|
yuvraj17/temp2 | yuvraj17 | "2024-11-23T09:02:05Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T09:02:03Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10319402.839161336
num_examples: 6231
download_size: 8302252
dataset_size: 10319402.839161336
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pruhtopia/rsc-5k-1 | pruhtopia | "2024-11-23T19:13:13Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T10:01:01Z" | ---
dataset_info:
features:
- name: image_id
dtype: string
- name: description
dtype: string
- name: image
dtype: binary
splits:
- name: train
num_bytes: 199381852
num_examples: 5718
download_size: 198959477
dataset_size: 199381852
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Suchinthana/Test_Cricket_Data | Suchinthana | "2024-11-23T19:09:59Z" | 6 | 0 | [
"language:en",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T10:17:33Z" | ---
dataset_info:
features:
- name: BATTING_Tests_Mat
dtype: float64
- name: BATTING_Tests_Runs
dtype: float64
- name: BATTING_First-class_Mat
dtype: float64
- name: BATTING_List A_Mat
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 144200
num_examples: 3605
download_size: 61522
dataset_size: 144200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc0-1.0
language:
- en
size_categories:
- 1K<n<10K
---
Source dataset: [ESPN Cricket Players Data](https://www.kaggle.com/datasets/raghav333/espn-cricket-players-data). |
yuvraj17/temp4 | yuvraj17 | "2024-11-23T10:59:12Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T10:59:11Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2299394
num_examples: 829
download_size: 1071987
dataset_size: 2299394
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ADHIZ/image_vicky | ADHIZ | "2024-11-23T11:26:50Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T11:26:44Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 1678972.0
num_examples: 2
download_size: 1681322
dataset_size: 1678972.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rbaradaran/RAG_multihop_data | rbaradaran | "2024-11-23T12:10:07Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T12:07:05Z" | ---
dataset_info:
features:
- name: q_id
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: documents
list:
- name: ds_id
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 96376567
num_examples: 6857
download_size: 37427259
dataset_size: 96376567
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rbaradaran/RAG_all_data | rbaradaran | "2024-11-23T18:07:30Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T12:10:20Z" | ---
dataset_info:
features:
- name: q_id
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: documents
list:
- name: ds_id
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 134252200
num_examples: 13604
- name: test
num_bytes: 10037639
num_examples: 1025
download_size: 61235276
dataset_size: 144289839
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Rudra-ai/ai-responses-gsm8k-70b-update-dpo | Rudra-ai | "2024-11-23T13:04:06Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T13:04:04Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 3352021
num_examples: 2000
download_size: 1417949
dataset_size: 3352021
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yurifacanha/DPOquotes | yurifacanha | "2024-11-23T13:35:33Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T13:35:22Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 8917431
num_examples: 1300
download_size: 4519360
dataset_size: 8917431
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
herteltm/UTQA_v1 | herteltm | "2024-11-23T16:24:30Z" | 6 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"physchem",
"benchmark",
"LLM"
] | null | "2024-11-23T14:28:22Z" | ---
license: cc-by-4.0
tags:
- physchem
- benchmark
- LLM
---
# Dataset Card for UTQA (An Undergraduate Thermodynamics Q&A Benchmark)
<!-- Provide a quick summary of the dataset. -->
UTQA is a multiple-choice question-and-answer dataset of simple yet challenging thermodynamics questions, created and validated by physical chemists.
We ask that you **do not publish examples from this dataset online, in plain text or as images**, to minimize the risk of their being incorporated into training corpora for foundation models.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
We present UTQA, a challenging dataset of 41 multiple-choice questions created by subject-matter experts in physical chemistry. We ensure that the questions are high-quality and demanding: non-experts achieved only 34% accuracy, even though they spent an average of over ?? minutes with unrestricted access to the internet. The questions are also difficult for state-of-the-art AI systems: our strongest GPT-4o-based baseline achieved ??% accuracy. If we want to use future AI systems to help us answer very hard questions, for example to develop new scientific insights, we need to build scalable oversight procedures that allow humans to supervise their outputs. This can be difficult even when the supervisors themselves are experienced and competent. The difficulty of UTQA for both skilled non-experts and advanced AI systems should enable realistic experiments on scalable oversight. We hope this will help develop methods by which human experts can reliably obtain truthful information from AI systems that surpass human capabilities.
- **Curated by:** Tobias Hertel and Sophie-Luca Bien
- **License:** CC BY 4.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/ TBA
- **Paper:** https://arxiv.org/abs/ TBA
## Uses
The dataset is primarily intended to be used for scalable oversight experiments, although it can also be used for more general LLM capabilities benchmarking.
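A minimal sketch of loading the questions for benchmarking (assuming the CSV files load directly from the Hub; the column names are not documented here, so inspect them first):
```python
from datasets import load_dataset

# Sketch: load the CSV-backed benchmark and inspect its schema first.
utqa = load_dataset("herteltm/UTQA_v1", split="train")
print(utqa.column_names)
```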
## Dataset Card Contact
Tobias Hertel: tobias.hertel@uni-wuerzburg.de
---
Submit corrections to examples in UTQA via this form: TBA
--- |
PlanAPlanB/reddit_dataset_44 | PlanAPlanB | "2024-11-23T15:55:18Z" | 6 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2024-11-23T15:30:58Z" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** PlanAPlanB/reddit_dataset_44
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 0
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: mostly English, though the data can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
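A minimal sketch of loading the dataset and splitting posts from comments via the `dataType` field (the exact label strings are an assumption; check the data):
```python
from datasets import load_dataset

# Sketch: load the parquet-backed dataset and filter by entry type.
# The exact `dataType` values ("post" vs. "comment") are assumptions.
ds = load_dataset("PlanAPlanB/reddit_dataset_44", split="train")
posts = ds.filter(lambda row: row["dataType"] == "post")
print(posts.num_rows)
```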
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of social media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{PlanAPlanB2024datauniversereddit_dataset_44,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={PlanAPlanB},
year={2024},
url={https://huggingface.co/datasets/PlanAPlanB/reddit_dataset_44},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 1890066
- **Date Range:** 2009-10-19T00:00:00Z to 2024-11-23T00:00:00Z
- **Last Updated:** 2024-11-23T15:55:14Z
### Data Distribution
- Posts: 31.64%
- Comments: 68.36%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Subreddit | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/pixelbuds | 1998 | 0.11% |
| 2 | r/Xenosaga | 1990 | 0.11% |
| 3 | r/passat | 1990 | 0.11% |
| 4 | r/Unpacking | 1976 | 0.10% |
| 5 | r/fastmail | 1974 | 0.10% |
| 6 | r/eyelashextensions | 1964 | 0.10% |
| 7 | r/freshwateraquarium | 1952 | 0.10% |
| 8 | r/LoganSquare | 1946 | 0.10% |
| 9 | r/breakingnews | 1924 | 0.10% |
| 10 | r/RedPillWomen | 1899 | 0.10% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2024-11-23T15:33:24Z | 893558 | 893558 |
| 2024-11-23T15:55:14Z | 996508 | 1890066 |
|
bgonzalezbustamante/toxicity-protests-ES | bgonzalezbustamante | "2024-11-24T00:35:49Z" | 6 | 0 | [
"language:es",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2409.09741",
"region:us",
"toxicity"
] | null | "2024-11-23T16:18:05Z" | ---
license: cc-by-4.0
language:
- es
tags:
- toxicity
---
- [GitHub repository](https://github.com/training-datalab/gold-standard-toxicity).
- [Dataset on Zenodo](https://zenodo.org/doi/10.5281/zenodo.12574288).
- [Reference paper](https://arxiv.org/abs/2409.09741) |
andrejaystevenson/dsf | andrejaystevenson | "2024-11-23T16:18:17Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T16:18:06Z" | ---
license: apache-2.0
---
|
pingel-org/graphs | pingel-org | "2024-11-23T18:58:06Z" | 6 | 0 | [
"license:cdla-permissive-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T16:53:15Z" | ---
license: cdla-permissive-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: page_content
dtype: string
- name: metadata
struct:
- name: char_count
dtype: string
- name: citations
dtype: string
- name: court
dtype: string
- name: decision_date
dtype: string
- name: docket_number
dtype: string
- name: first_page
dtype: int64
- name: head_matter
dtype: string
- name: id
dtype: string
- name: judges
dtype: string
- name: jurisdiction
dtype: string
- name: last_page
dtype: string
- name: last_updated
dtype: string
- name: name
dtype: string
- name: name_abbreviation
dtype: string
- name: parties
dtype: string
- name: provenance
dtype: string
- name: reporter
dtype: string
- name: volume
dtype: string
- name: word_count
dtype: string
splits:
- name: train
num_bytes: 60568000
num_examples: 21540
download_size: 29570333
dataset_size: 60568000
---
Graphs extracted from public datasets.
Suitable for populating graph databases and powering GraphRAG.
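A minimal sketch of loading the records and reading the nested metadata (field names follow the schema above):
```python
from datasets import load_dataset

# Sketch: load the graph records and inspect one case's metadata struct.
graphs = load_dataset("pingel-org/graphs", split="train")
row = graphs[0]
print(row["id"], row["metadata"]["court"], row["metadata"]["decision_date"])
```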
|
alex-miller/pogona-vitticeps-gender | alex-miller | "2024-11-23T19:41:15Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T19:41:09Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': female
'1': indeterminate
'2': male
splits:
- name: train
num_bytes: 8187973.4
num_examples: 128
- name: test
num_bytes: 2043692.6
num_examples: 32
download_size: 10237708
dataset_size: 10231666.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
betteib/tn-500k | betteib | "2024-11-23T20:28:20Z" | 6 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T20:27:42Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 134814437.8295075
num_examples: 1353321
- name: test
num_bytes: 33703684.170492515
num_examples: 338331
download_size: 105859065
dataset_size: 168518122.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Saibo-creator/glaiveai_func_schema | Saibo-creator | "2024-11-23T20:31:31Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T20:31:28Z" | ---
dataset_info:
features:
- name: unique_id
dtype: string
- name: name
dtype: string
- name: number_of_parameters
dtype: int64
- name: number_of_required
dtype: float64
- name: number_of_optional
dtype: float64
- name: schema_depth
dtype: int64
- name: function_schema
dtype: string
- name: schema_str_len
dtype: int64
- name: json_schema
dtype: string
splits:
- name: full
num_bytes: 20601477
num_examples: 27365
- name: hard_2k
num_bytes: 2619055
num_examples: 2000
download_size: 4714022
dataset_size: 23220532
configs:
- config_name: default
data_files:
- split: full
path: data/full-*
- split: hard_2k
path: data/hard_2k-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_18 | ZixuanKe | "2024-11-23T20:51:54Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T20:51:53Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8628119
num_examples: 1414
download_size: 1438643
dataset_size: 8628119
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_12 | ZixuanKe | "2024-11-23T20:58:46Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T20:58:44Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8778077
num_examples: 1414
download_size: 1488706
dataset_size: 8778077
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_7 | ZixuanKe | "2024-11-23T20:59:11Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T20:59:09Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8716364
num_examples: 1414
download_size: 1459486
dataset_size: 8716364
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_3 | ZixuanKe | "2024-11-23T21:11:03Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:11:02Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8155333
num_examples: 1414
download_size: 1349153
dataset_size: 8155333
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_11 | ZixuanKe | "2024-11-23T21:14:45Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:14:43Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8255890
num_examples: 1414
download_size: 1374842
dataset_size: 8255890
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pablopertusa/ft-llama | pablopertusa | "2024-11-23T21:17:52Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:17:51Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 10984
num_examples: 15
download_size: 6431
dataset_size: 10984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_4 | ZixuanKe | "2024-11-23T21:27:01Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:26:58Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8469198
num_examples: 1414
download_size: 1417162
dataset_size: 8469198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_5 | ZixuanKe | "2024-11-23T21:28:11Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:28:09Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8778104
num_examples: 1414
download_size: 1448219
dataset_size: 8778104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_20 | ZixuanKe | "2024-11-23T21:34:35Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:34:34Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 30879
num_examples: 6
download_size: 37890
dataset_size: 30879
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/luna-4days-combined-clean-wcaps | amuvarma | "2024-11-23T21:38:24Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:37:25Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: emotion
dtype: string
- name: audio
dtype: audio
- name: facodec_0
sequence: int64
- name: facodec_1
sequence: int64
- name: facodec_2
sequence: int64
- name: facodec_3
sequence: int64
- name: facodec_4
sequence: int64
- name: facodec_5
sequence: int64
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 1196705702.38
num_examples: 5674
download_size: 1011230479
dataset_size: 1196705702.38
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_15 | ZixuanKe | "2024-11-23T21:47:07Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:47:05Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8723947
num_examples: 1414
download_size: 1464860
dataset_size: 8723947
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_chunk_16 | ZixuanKe | "2024-11-23T21:49:14Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:49:12Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
splits:
- name: train
num_bytes: 8639062
num_examples: 1414
download_size: 1454230
dataset_size: 8639062
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alihmaou/FOURSQUARE_POIS_FRANCE_20241119 | alihmaou | "2024-11-23T21:57:43Z" | 6 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T21:56:44Z" | ---
dataset_info:
features:
- name: fsq_place_id
dtype: string
- name: name
dtype: string
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: address
dtype: string
- name: locality
dtype: string
- name: region
dtype: string
- name: postcode
dtype: string
- name: admin_region
dtype: string
- name: post_town
dtype: string
- name: po_box
dtype: string
- name: country
dtype: string
- name: date_created
dtype: string
- name: date_refreshed
dtype: string
- name: date_closed
dtype: string
- name: tel
dtype: string
- name: website
dtype: string
- name: email
dtype: string
- name: facebook_id
dtype: int64
- name: instagram
dtype: string
- name: twitter
dtype: string
- name: fsq_category_ids
sequence: string
- name: fsq_category_labels
sequence: string
- name: geometry
dtype: binary
splits:
- name: train
num_bytes: 1134374359
num_examples: 3060669
download_size: 760919162
dataset_size: 1134374359
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/opengpt_jaehun | mlfoundations-dev | "2024-11-23T23:22:40Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T23:04:35Z" | ---
dataset_info:
features:
- name: language
dtype: string
- name: quantity
dtype: int64
- name: task
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: new_conversations
list:
- name: content
dtype: string
- name: role
dtype: string
- name: gradients
sequence: float64
- name: kmeans_inertia
dtype: float64
splits:
- name: train
num_bytes: 30723866
num_examples: 4621
download_size: 22074076
dataset_size: 30723866
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Lennyox/hazardous_driving_eye_gaze | Lennyox | "2024-11-23T23:27:11Z" | 6 | 0 | [
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-23T23:26:38Z" | ---
license: cc-by-4.0
---
|
Pankaj8922/Text-Comb | Pankaj8922 | "2024-11-24T00:53:57Z" | 6 | 1 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-11-24T00:08:06Z" | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
---
As the name suggests, this dataset is a combination of several sources.
Datasets combined:
* tiny-textbooks by nampdn-ai
* only the validation set of tiny-webtext by nampdn-ai
* Text-corp by Pankaj8922 |
violetxi/NUMINA-V2-Clean-Blocks-8000_8500-200_-1 | violetxi | "2024-11-25T01:12:50Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-24T07:19:10Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: source
dtype: string
- name: is_correct
dtype: bool
- name: target_answer
dtype: string
- name: solution
dtype: string
- name: solution_steps
dtype: string
- name: attempts
dtype: string
- name: model_answer
dtype: string
splits:
- name: train
num_bytes: 6238918541
num_examples: 731312
download_size: 627919825
dataset_size: 6238918541
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bizb0630/hamza_1.0 | bizb0630 | "2023-12-18T01:18:36Z" | 5 | 0 | [
"task_categories:translation",
"language:uz",
"language:ru",
"license:mit",
"region:us"
] | [
"translation"
] | "2023-12-18T00:59:36Z" | ---
license: mit
task_categories:
- translation
language:
- uz
- ru
pretty_name: uzbek-russian_parallel_corpora
---
# Hamza - Uzbek-Russian parallel corpora.
## Overview
**Hamza** is a parallel corpus containing over 15,000 aligned sentences in Uzbek and Russian.
## Creation
Created using [lingtrain](https://github.com/averkij/lingtrain-aligner). The text was mined from various websites and Telegram channels.
### Format
The dataset is distributed in the TMX (Translation Memory eXchange) format.
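If you just need the raw sentence pairs, a TMX file can be parsed with the Python standard library; a minimal sketch, assuming the file is named `hamza.tmx` and uses `uz`/`ru` language codes:
```python
# Minimal TMX reader sketch; the filename and language codes are assumptions.
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"  # namespaced xml:lang attribute

tree = ET.parse("hamza.tmx")
pairs = []
for tu in tree.getroot().iter("tu"):  # one <tu> per aligned sentence pair
    segs = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
    if "uz" in segs and "ru" in segs:
        pairs.append((segs["uz"], segs["ru"]))

print(len(pairs), pairs[:2])
```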
|
data-is-better-together/dibt-prompt-translation-for-arabic | data-is-better-together | "2024-03-21T12:42:04Z" | 5 | 3 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-21T12:42:02Z" | ---
dataset_info:
features:
- name: target
dtype: string
- name: quality
list:
- name: status
dtype: string
- name: user_id
dtype: string
- name: value
dtype: string
- name: metadata
dtype: string
- name: avg_rating
dtype: float64
- name: num_responses
dtype: int64
- name: agreement_ratio
dtype: float64
- name: raw_responses
sequence: int64
- name: kind
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 824861
num_examples: 501
download_size: 391793
dataset_size: 824861
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tristepin/quakec-raw1 | Tristepin | "2024-05-01T18:57:24Z" | 5 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-05-01T18:56:59Z" | ---
license: mit
---
|
data-is-better-together/aya_dutch_dpo_raw | data-is-better-together | "2024-05-02T20:15:13Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | null | "2024-05-02T13:17:08Z" | ---
size_categories: 1K<n<10K
dataset_info:
features:
- name: instruction
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
- name: generation_model
dtype: string
- name: generation
dtype: string
- name: predicted_generation_language
dtype: string
- name: predicted_generation_language_score
dtype: float64
- name: generations
sequence: string
- name: generation_models
sequence: string
- name: model_name
dtype: string
- name: ratings
sequence: int64
- name: rationales
sequence: string
splits:
- name: train
num_bytes: 3530439
num_examples: 1200
download_size: 1847668
dataset_size: 3530439
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for aya_dutch_dpo_raw
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/DIBT/aya_dutch_dpo_raw/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/DIBT/aya_dutch_dpo_raw/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"annotation_type": "re-annotations",
"generation": "De constructie van het fort, dat werd opgetrokken in de haven van Lahaina met behulp van massive koraalblokken die uit de omringende riffen waren verwijderd, markeerde een keerpunt in de politieke en economische ontwikkeling van het eiland Maui in de eerste helft van de negentiende eeuw.",
"generation_model": "meta-llama/Meta-Llama-3-70B-Instruct",
"generation_models": [
"aya",
"meta-llama/Meta-Llama-3-70B-Instruct"
],
"generations": [
"Een meer complexe versie van de zin is natuurlijk: \"Het fort werd gebouwd in het kadegebied bij Lahaina Harbor van koraalblokken met muren bekroond met 47 kanonnen die van schepen zijn verkregen\".",
"De constructie van het fort, dat werd opgetrokken in de haven van Lahaina met behulp van massive koraalblokken die uit de omringende riffen waren verwijderd, markeerde een keerpunt in de politieke en economische ontwikkeling van het eiland Maui in de eerste helft van de negentiende eeuw."
],
"instruction": "Maak een complexere zin: Het fort werd gebouwd in de haven van Lahaina met koraalblokken.",
"language": "Dutch",
"language_code": "nld",
"model_name": "meta-llama/Meta-Llama-3-70B-Instruct",
"predicted_generation_language": "nld_Latn",
"predicted_generation_language_score": 0.9995737671852112,
"ratings": [
4,
5
],
"rationales": [
"The text accurately provides a more complex sentence as requested, and the added details are correct (e.g., \"muren bekroond met 47 kanonnen\"). The sentence is well-structured and easy to follow. However, the model could be more confident in its language, and some parts feel slightly redundant (e.g., \"kadegebied bij Lahaina Harbor\").",
"The text provides a sophisticated and accurate sentence that not only meets the request but also adds valuable context about the significance of the fort\u0027s construction. The language is confident and precise, and the sentence is well-structured and engaging. The model demonstrates a thorough understanding of the topic and effectively conveys its knowledge without any noticeable errors or hallucinations."
],
"targets": "Een meer complexe versie van de zin is natuurlijk: \"Het fort werd gebouwd in het kadegebied bij Lahaina Harbor van koraalblokken met muren bekroond met 47 kanonnen die van schepen zijn verkregen\".",
"user_id": "ca908e583236b208e473e89dae5c7b7d3daf3662e2bbf6606f0702c718bb5c06"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_dutch_dpo_raw", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_dutch_dpo_raw")
```
</details>
|
data-is-better-together/aya_english_dpo_raw | data-is-better-together | "2024-05-03T10:07:52Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | null | "2024-05-03T10:07:49Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
- name: generation_model
dtype: string
- name: generation
dtype: string
- name: predicted_generation_language
dtype: string
- name: predicted_generation_language_score
dtype: float64
- name: generations
sequence: string
- name: generation_models
sequence: string
- name: model_name
dtype: string
- name: ratings
sequence: int64
- name: rationales
sequence: string
splits:
- name: train
num_bytes: 567777
num_examples: 100
download_size: 320089
dataset_size: 567777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for aya_english_dpo_raw
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/DIBT/aya_english_dpo_raw/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/DIBT/aya_english_dpo_raw/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"annotation_type": "original-annotations",
"generation": "The largest organ in the human body is the skin. It covers the entire surface of the body and has a total area of about 22 square feet (2 square meters) in adults. The skin performs many important functions, such as:\n\n* Protecting the body from external damage, such as injury, infection, and UV radiation\n* Regulating body temperature\n* Aiding in the production of vitamin D\n* Assisting in the sensation of touch and feel\n* Helping to maintain fluid balance and hydration\n\nThe skin is composed of several layers, with the outermost layer being the epidermis, followed by the dermis, and then the hypodermis. It is estimated that the skin accounts for about 15% of an adult\u0027s total body weight.\n\nIt\u0027s worth noting that while the skin is the largest organ by surface area, the liver is often considered the largest organ by mass, weighing around 3 pounds (1.4 kilograms) in adults.",
"generation_model": "meta-llama/Meta-Llama-3-70B-Instruct",
"generation_models": [
"aya",
"meta-llama/Meta-Llama-3-70B-Instruct"
],
"generations": [
"The largest organ in the human body is the skin. It is the body\u0027s largest organ by surface area and weight. The skin serves many essential functions, including protecting the body from physical damage, pathogens, and dehydration, as well as helping to regulate body temperature and producing vitamin D in response to sunlight. It\u0027s a complex organ with multiple layers, cells, and structures, each serving a unique purpose in maintaining the body\u0027s overall health and well-being.",
"The largest organ in the human body is the skin. It covers the entire surface of the body and has a total area of about 22 square feet (2 square meters) in adults. The skin performs many important functions, such as:\n\n* Protecting the body from external damage, such as injury, infection, and UV radiation\n* Regulating body temperature\n* Aiding in the production of vitamin D\n* Assisting in the sensation of touch and feel\n* Helping to maintain fluid balance and hydration\n\nThe skin is composed of several layers, with the outermost layer being the epidermis, followed by the dermis, and then the hypodermis. It is estimated that the skin accounts for about 15% of an adult\u0027s total body weight.\n\nIt\u0027s worth noting that while the skin is the largest organ by surface area, the liver is often considered the largest organ by mass, weighing around 3 pounds (1.4 kilograms) in adults."
],
"instruction": "What is the largest organ in the human body?",
"language": "English",
"language_code": "eng",
"model_name": "meta-llama/Meta-Llama-3-70B-Instruct",
"predicted_generation_language": "eng_Latn",
"predicted_generation_language_score": 0.9590950012207031,
"ratings": [
5,
5
],
"rationales": [
"The text is accurate, informative, and comprehensive in describing the skin as the largest organ in the human body. It provides specific details about its functions, structure, and importance, aligning perfectly with the instruction.",
"This text is equally excellent, providing a clear and concise answer to the question. It lists the skin\u0027s functions, describes its composition, and offers additional interesting facts, such as the comparison with the liver\u0027s mass. The text is well-structured, accurate, and confident in its information, making it an excellent response."
],
"targets": "The largest organ in the human body is the skin. It is the body\u0027s largest organ by surface area and weight. The skin serves many essential functions, including protecting the body from physical damage, pathogens, and dehydration, as well as helping to regulate body temperature and producing vitamin D in response to sunlight. It\u0027s a complex organ with multiple layers, cells, and structures, each serving a unique purpose in maintaining the body\u0027s overall health and well-being.",
"user_id": "29f22cf193a81e1a5c47d76af453a91b3cd19aa348995c7add1df15fe24e8801"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_english_dpo_raw", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_english_dpo_raw")
```
</details>
|
wisenut-nlp-team/llama_poc_total_dataset_v2 | wisenut-nlp-team | "2024-07-12T02:30:53Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-12T02:30:48Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: context_with_negative
dtype: string
- name: answer
dtype: string
- name: answer_with_origin
dtype: string
- name: answer_rejected
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 7368945
num_examples: 653
download_size: 1956252
dataset_size: 7368945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
miracl/nomiracl-instruct | miracl | "2024-11-23T18:33:23Z" | 5 | 0 | [
"task_categories:text-classification",
"task_categories:text-generation",
"language:ar",
"language:bn",
"language:de",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:hi",
"language:id",
"language:ja",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"language:th",
"language:yo",
"language:zh",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"text-generation"
] | "2024-07-25T20:18:13Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: query_id
dtype: string
- name: subset
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 154438200.3377625
num_examples: 21471
- name: test
num_bytes: 17162197.6622375
num_examples: 2386
download_size: 92309140
dataset_size: 171600398
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: apache-2.0
task_categories:
- text-classification
- text-generation
language:
- ar
- bn
- de
- en
- es
- fa
- fi
- fr
- hi
- id
- ja
- ko
- ru
- sw
- te
- th
- yo
- zh
pretty_name: NoMIRACL Fine-tuning Dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for NoMIRACL (EMNLP 2024 Findings Track)
## Quick Overview
This repository contains the fine-tuning splits (training & development) of the NoMIRACL instruct dataset, used for fine-tuning LLMs on multilingual relevance assessment.
The training dataset frames relevance assessment as a binary classification task; the model must explicitly output either `Yes, answer is present` or `I don't know`.
The dataset contains training pairs from all 18 languages for both subsets: relevant & non-relevant.
```
import datasets
nomiracl = datasets.load_dataset('miracl/nomiracl-instruct')
```
## What is NoMIRACL?
Retrieval-augmented generation (RAG) is a powerful approach to incorporating external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of LLM-generated responses. However, evaluating query-passage relevance across diverse language families has been a challenge, leading to gaps in understanding the model's performance against errors in externally retrieved knowledge. To address this, we present NoMIRACL, a completely human-annotated dataset designed for evaluating multilingual LLM relevance across 18 diverse languages.
NoMIRACL evaluates LLM relevance as a binary classification objective, containing two subsets: `non-relevant` and `relevant`. The `non-relevant` subset contains queries with all passages manually judged by an expert assessor as non-relevant, while the `relevant` subset contains queries with at least one judged relevant passage within the labeled passages. LLM relevance is measured using two key metrics:
- *hallucination rate* (on the `non-relevant` subset), measuring the model's tendency to hallucinate an answer when none of the provided passages are relevant for a given question (non-answerable).
- *error rate* (on the `relevant` subset), measuring the model's tendency to fail to identify relevant passages when they are provided for a given question (answerable). A minimal sketch of both metrics follows below.
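Given per-query binary verdicts on each subset (`True` meaning the model output `Yes, answer is present`), both metrics reduce to simple rates; the prediction lists below are hypothetical:
```python
# Sketch of the two NoMIRACL metrics; prediction lists are hypothetical.
def hallucination_rate(non_relevant_preds):
    # Fraction of non-answerable queries where the model still claims an answer exists.
    return sum(non_relevant_preds) / len(non_relevant_preds)

def error_rate(relevant_preds):
    # Fraction of answerable queries where the model misses the relevant passage.
    return sum(not p for p in relevant_preds) / len(relevant_preds)

print(hallucination_rate([True, True, False, True]))  # 0.75
print(error_rate([True, False, True, True]))          # 0.25
```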
## Acknowledgement
This dataset would not have been possible without all the topics being generated by native speakers of each language in conjunction with our **multilingual RAG universe** work in part 1, **MIRACL** [[TACL '23]](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering). The queries with all non-relevant passages are used to create the `non-relevant` subset, whereas queries with at least a single relevant passage (i.e., MIRACL dev and test splits) are used to create the `relevant` subset.
This repository contains the topics, qrels, and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).
## Dataset Description
* **Website:** https://nomiracl.github.io
* **Paper:** https://aclanthology.org/2024.findings-emnlp.730/
* **Repository:** https://github.com/project-miracl/nomiracl
## Dataset Structure
## Dataset Statistics
For NoMIRACL dataset statistics, please refer to our EMNLP 2024 Findings publication.
Paper: [https://aclanthology.org/2024.findings-emnlp.730/](https://aclanthology.org/2024.findings-emnlp.730/).
## Citation Information
This work was conducted as a collaboration between the University of Waterloo and Huawei Technologies.
```
@inproceedings{thakur-etal-2024-knowing,
title = "{``}Knowing When You Don{'}t Know{''}: A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation",
author = "Thakur, Nandan and
Bonifacio, Luiz and
Zhang, Crystina and
Ogundepo, Odunayo and
Kamalloo, Ehsan and
Alfonso-Hermelo, David and
Li, Xiaoguang and
Liu, Qun and
Chen, Boxing and
Rezagholizadeh, Mehdi and
Lin, Jimmy",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.730",
pages = "12508--12526",
abstract = "Retrieval-Augmented Generation (RAG) grounds Large Language Model (LLM) output by leveraging external knowledge sources to reduce factual hallucinations. However, prior work lacks a comprehensive evaluation of different language families, making it challenging to evaluate LLM robustness against errors in external retrieved knowledge. To overcome this, we establish **NoMIRACL**, a human-annotated dataset for evaluating LLM robustness in RAG across 18 typologically diverse languages. NoMIRACL includes both a non-relevant and a relevant subset. Queries in the non-relevant subset contain passages judged as non-relevant, whereas queries in the relevant subset include at least a single judged relevant passage. We measure relevance assessment using: (i) *hallucination rate*, measuring model tendency to hallucinate when the answer is not present in passages in the non-relevant subset, and (ii) *error rate*, measuring model inaccuracy to recognize relevant passages in the relevant subset. In our work, we observe that most models struggle to balance the two capacities. Models such as LLAMA-2 and Orca-2 achieve over 88{\%} hallucination rate on the non-relevant subset. Mistral and LLAMA-3 hallucinate less but can achieve up to a 74.9{\%} error rate on the relevant subset. Overall, GPT-4 is observed to provide the best tradeoff on both subsets, highlighting future work necessary to improve LLM robustness. NoMIRACL dataset and evaluation code are available at: https://github.com/project-miracl/nomiracl.",
}
``` |
afg1/pombe-canto-data | afg1 | "2024-08-15T16:34:12Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-30T12:41:40Z" | ---
dataset_info:
features:
- name: triage_status
dtype: large_string
- name: pmid
dtype: large_string
- name: abstract
dtype: large_string
- name: citation
dtype: large_string
- name: token_count
dtype: int32
- name: label
dtype: int8
splits:
- name: train
num_bytes: 13736788
num_examples: 10360
- name: test
num_bytes: 3422716
num_examples: 2590
download_size: 9332324
dataset_size: 17159504
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Nutanix/cpp_unit_tests_unprocessed_Phi-3-mini-128k-instruct_vs_Phi-3-small-128k-instruct_judge_gpt | Nutanix | "2024-08-11T19:07:11Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-11T19:07:03Z" | ---
dataset_info:
features:
- name: Code
dtype: string
- name: Unit Test_Phi-3-mini-128k-instruct_raw
dtype: string
- name: Unit Test_Phi-3-small-128k-instruct_raw
dtype: string
- name: Unit Test
dtype: string
- name: Winning Model
dtype: string
- name: Judgement
dtype: string
splits:
- name: train
num_bytes: 9240903
num_examples: 201
download_size: 2829507
dataset_size: 9240903
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BotnoiNLPteam/scdt_proofread_v1 | BotnoiNLPteam | "2024-08-14T04:27:46Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-14T04:27:43Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10769351
num_examples: 18436
- name: test
num_bytes: 1300663
num_examples: 2282
- name: val
num_bytes: 1307448
num_examples: 2281
download_size: 3737248
dataset_size: 13377462
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
|
ZiyuG/SciVerse | ZiyuG | "2024-09-11T03:33:18Z" | 5 | 0 | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:visual-question-answering",
"language:en",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"multiple-choice",
"question-answering",
"visual-question-answering"
] | "2024-09-09T04:58:13Z" | ---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: test
data_files:
- split: test
path: QA.parquet
dataset_info:
- config_name: test
features:
- name: id
dtype: string
- name: subject
dtype: string
- name: image
dtype: string
- name: vision_dominant
dtype: string
- name: vision_only
dtype: string
- name: knowledge_lite
dtype: string
- name: knowledge_rich
dtype: string
- name: knowledge_professional
dtype: string
- name: question_vd
dtype: string
- name: choiceA
dtype: string
- name: choiceB
dtype: string
- name: choiceC
dtype: string
- name: choiceD
dtype: string
- name: choiceE
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
- name: question_zh
dtype: string
- name: explanation_zh
dtype: string
splits:
- name: test
num_examples: 1147
---
# Dataset Card for SciVerse
- [Dataset Description](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#leaderboard)
- [Citation](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#citation)
## Dataset Description
SciVerse is a multi-modal scientific benchmark introduced to evaluate the professional scientific reasoning abilities of multi-modal large language models (MLLMs) across various disciplines. This benchmark contains **5,735** annotated multi-modal Q&A samples covering key science subjects including **physics**, **chemistry**, and **biology**. It contains six distinct subsets designed to test varying degrees of knowledge and visual-text interpretation, i.e., **Knowledge Lite, Knowledge Rich, Knowledge Professional, Vision Dominant, Text Only** and **Vision Only**.
- **Knowledge Lite**: basic problems with minimal necessary contextual information.
- **Knowledge Rich**: problems with scientific background information.
- **Knowledge Professional**: problems with advanced, professional-level scientific information.
- **Vision Dominant**: problems that prioritize visual cues over textual content to evaluate visual comprehension.
- **Text Only**: problems with only textual information.
- **Vision Only**: problems with only visual information, where the textual problems are rendered within the images.
SciVerse aims to evaluate MLLMs' scientific reasoning over pre-existing scientific knowledge, as well as their sensitivity to the knowledge stipulated in the questions. This not only measures how effectively MLLMs can utilize their inherent scientific understanding, but also assesses their ability to integrate and reason with given scientific knowledge in real-world scenarios. Unlike existing benchmarks, which often overlook the depth and multi-modal nature of scientific understanding, SciVerse addresses the complex challenges encountered in actual scientific analysis, providing a nuanced analysis of MLLMs' strengths and limitations in both knowledge integration and practical application.
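A minimal loading sketch, assuming the `test` config and the column names declared in the YAML header of this card:
```python
from datasets import load_dataset

ds = load_dataset("ZiyuG/SciVerse", "test", split="test")
sample = ds[0]
print(sample["subject"], sample["answer"])
# Each question comes in several knowledge variants:
for key in ("knowledge_lite", "knowledge_rich", "knowledge_professional"):
    print(key, "->", sample[key][:80])
```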
## Paper Information
- Code: https://github.com/ZiyuGuo99/SciVerse
- Project: https://sciverse-cuhk.github.io/
- Dataset Overview: https://sciverse-cuhk.github.io/#overview
- Leaderboard: https://sciverse-cuhk.github.io/#leaderboard
## Dataset Examples
***Coming soon...***
## Leaderboard
### Contributing to the Leaderboard
🚨 The [Leaderboard](https://sciverse-cuhk.github.io/#leaderboard) is continuously being updated.
The evaluation instructions and tools will be released soon. For now, please send your results on the test set to this email: ziyuguo@link.cuhk.edu.hk
## Citation
If you find **SciVerse** useful for your research and applications, please kindly cite using this BibTeX:
```latex
@article{sciverse,
title={SciVerse},
author={Guo, Ziyu and Zhang, Renrui and Chen, Hao and Gao, Jialin and Li, Hongsheng and Heng, Pheng-Ann},
url={https://sciverse-cuhk.github.io/},
journal={arXiv preprint},
year={2024}
}
``` |
LLM-EDA/BuggyVerilog | LLM-EDA | "2024-10-05T11:03:00Z" | 5 | 0 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | [
"text-generation"
] | "2024-10-05T10:57:59Z" | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
---
For usage, please take a look at https://github.com/CatIIIIIIII/VeriDebug. |
nexaai2b/perry_lora_function_call_training_data | nexaai2b | "2024-10-18T19:06:55Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-18T18:21:30Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: function
dtype: string
splits:
- name: train
num_bytes: 253506.0
num_examples: 3008
download_size: 75680
dataset_size: 253506.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Madjakul/HALvest-Contrastive-Raw | Madjakul | "2024-10-19T08:57:57Z" | 5 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-19T08:50:46Z" | ---
dataset_info:
features:
- name: halid
dtype: string
- name: lang
dtype: string
- name: domain
sequence: string
- name: timestamp
dtype: string
- name: year
dtype: string
- name: url
dtype: string
- name: text
dtype: string
- name: size
dtype: int64
- name: authorids
sequence: string
- name: affiliations
sequence: string
splits:
- name: train
num_bytes: 22258039817.522587
num_examples: 361863
download_size: 9390538695
dataset_size: 22258039817.522587
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pico-lm/pretokenized-dolma-tinsy | pico-lm | "2024-11-21T16:08:12Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-21T18:50:54Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 57564000
num_examples: 7020
download_size: 28095401
dataset_size: 57564000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
## The ensy-tinsy Pico Dataset
A tiny little baby-version of the Pico Dataset.
Meant to be used in a Jupyter notebook to test things out or to quickly look at the structure of the data.
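For example, a quick look from a notebook cell might be:
```python
from datasets import load_dataset

ds = load_dataset("pico-lm/pretokenized-dolma-tinsy", split="train")
print(ds)                       # features: input_ids (sequences of int32)
print(len(ds[0]["input_ids"]))  # token count of the first pretokenized example
```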
|
llm-jp/llava-instruct-ja | llm-jp | "2024-11-19T10:35:55Z" | 5 | 1 | [
"task_categories:visual-question-answering",
"language:ja",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"visual-question-answering"
] | "2024-10-22T06:43:09Z" | ---
language:
- ja
task_categories:
- visual-question-answering
size_categories:
- 100K<n<1M
---
## Dataset Card for llava_instruct_ja
### Dataset details
This is the Japanese version of [LLaVA-Instruct](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K), which contains 156K samples.
We used `gpt-4o-mini-2024-07-18` to generate the data via the Azure OpenAI API.
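A minimal sketch of the kind of call involved, using the `openai` Python SDK's Azure client; the endpoint variables, deployment name, and prompt below are illustrative assumptions, not the actual generation pipeline:
```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # Azure deployment name (assumed)
    messages=[
        {"role": "system", "content": "Translate the following instruction-response pair into natural Japanese."},
        {"role": "user", "content": "What is unusual about this image?"},
    ],
)
print(resp.choices[0].message.content)
```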
### License
Creative Commons Attribution 4.0 License; usage should also abide by [the OpenAI terms of use](https://openai.com/policies/terms-of-use) |
Turbo-AI/data-train | Turbo-AI | "2024-10-28T02:02:49Z" | 5 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-28T02:02:33Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: relevant
list:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 224299664
num_examples: 118956
download_size: 83963181
dataset_size: 224299664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
selmaXI/cnn_dailymail-llama2-1k | selmaXI | "2024-11-04T15:26:10Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-04T15:26:09Z" | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8848392
num_examples: 1000
download_size: 5364766
dataset_size: 8848392
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rudra-ai/ai-responses-dataset-math-modified-v3 | Rudra-ai | "2024-11-05T14:30:51Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-05T14:30:47Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2382443
num_examples: 1000
download_size: 1058765
dataset_size: 2382443
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
data-is-better-together/image_preferences_results | data-is-better-together | "2024-11-10T21:42:07Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-11-10T21:42:05Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for image_preferences_results
This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.Dataset.from_hub("DIBT/image_preferences_results")
```
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
## Using this dataset with `datasets`
To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/image_preferences_results")
```
This will only load the records of the dataset, but not the Argilla settings.
## Dataset Structure
This dataset repo contains:
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
### Fields
The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| images | images | custom | True | |
### Questions
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| preference | preference | label_selection | True | Which image do you prefer given the prompt? | ['image_1', 'image_2', 'both_good', 'both_bad'] |
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"_server_id": "30403740-6a5e-48d7-839e-dcea7ad0dfda",
"fields": {
"images": {
"image_1": "https://huggingface.co/datasets/DIBT/img_prefs_style/resolve/main/artifacts/image_generation_0/images/b172c7078a07c159f5f8da7bd1220ddd.jpeg",
"image_2": "https://huggingface.co/datasets/DIBT/img_prefs_style/resolve/main/artifacts/image_generation_2/images/b172c7078a07c159f5f8da7bd1220ddd.jpeg",
"prompt": "8-bit intellect, pixelated wisdom, retro digital brain, vintage game insight, soft neon glow, intricate pixel art, vibrant color palette, nostalgic ambiance"
}
},
"id": "f5224be1-2e1b-428e-94b1-9c0f397092fa",
"metadata": {
"category": "Animation",
"evolution": "quality",
"model_1": "schnell",
"model_2": "dev",
"sub_category": "Pixel Art"
},
"responses": {
"preference": [
{
"user_id": "c53e62ab-d792-4854-98f6-593b2ffb55bc",
"value": "image_2"
},
{
"user_id": "b1ab2cdd-29b8-4cf9-b6e0-7543589d21a3",
"value": "image_2"
},
{
"user_id": "da3e5871-920c-44da-8c44-1e94260c581e",
"value": "both_good"
},
{
"user_id": "b31dd1ed-78b6-4d50-8f11-7ce32ba17d64",
"value": "image_2"
},
{
"user_id": "6b984f66-86b3-421e-a32c-cd3592ee27a1",
"value": "both_bad"
}
]
},
"status": "completed",
"suggestions": {},
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"_server_id": "30403740-6a5e-48d7-839e-dcea7ad0dfda",
"category": "Animation",
"evolution": "quality",
"id": "f5224be1-2e1b-428e-94b1-9c0f397092fa",
"images": {
"image_1": "https://huggingface.co/datasets/DIBT/img_prefs_style/resolve/main/artifacts/image_generation_0/images/b172c7078a07c159f5f8da7bd1220ddd.jpeg",
"image_2": "https://huggingface.co/datasets/DIBT/img_prefs_style/resolve/main/artifacts/image_generation_2/images/b172c7078a07c159f5f8da7bd1220ddd.jpeg",
"prompt": "8-bit intellect, pixelated wisdom, retro digital brain, vintage game insight, soft neon glow, intricate pixel art, vibrant color palette, nostalgic ambiance"
},
"model_1": "schnell",
"model_2": "dev",
"preference.responses": [
"image_2",
"image_2",
"both_good",
"image_2",
"both_bad"
],
"preference.responses.status": [
"submitted",
"submitted",
"submitted",
"submitted",
"submitted"
],
"preference.responses.users": [
"c53e62ab-d792-4854-98f6-593b2ffb55bc",
"b1ab2cdd-29b8-4cf9-b6e0-7543589d21a3",
"da3e5871-920c-44da-8c44-1e94260c581e",
"b31dd1ed-78b6-4d50-8f11-7ce32ba17d64",
"6b984f66-86b3-421e-a32c-cd3592ee27a1"
],
"status": "completed",
"sub_category": "Pixel Art"
}
```
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
FrancophonIA/WikiCaps | FrancophonIA | "2024-11-19T21:06:18Z" | 5 | 0 | [
"task_categories:translation",
"language:en",
"language:de",
"language:fr",
"language:ru",
"region:us"
] | [
"translation"
] | "2024-11-17T15:09:48Z" | ---
language:
- en
- de
- fr
- ru
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/11744/
## Description
A multilingual (but non-parallel) corpus for multimodal machine translation and retrieval.
The image captions were extracted from Wikimedia Commons for English, German, French, and Russian.
More info on https://www.cl.uni-heidelberg.de/statnlpgroup/wikicaps/
## Citation
```
@inproceedings{schamoni2018,
author = {Schamoni, Shigehiko and Hitschler, Julian and Riezler, Stefan},
title = {A Dataset and Reranking Method for Multimodal MT of User-Generated Image Captions},
journal = {Proceedings of the 13th biennial conference of the Association for Machine Translation in the Americas},
journal-abbrev = {AMTA},
year = {2018},
city = {Boston, MA},
country = {USA},
url = {http://www.cl.uni-heidelberg.de/~riezler/publications/papers/AMTA2018.1.pdf}
}
``` |
FrancophonIA/train-o-matic | FrancophonIA | "2024-11-21T13:42:41Z" | 5 | 0 | [
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:zh",
"region:us"
] | null | "2024-11-17T15:10:19Z" | ---
language:
- de
- en
- es
- fr
- it
- zh
multilinguality:
- multilingual
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/5110/
## Description
We present Train-O-Matic, a language-independent method for generating millions of sense-annotated training instances for virtually all meanings of words in a language's vocabulary.
For more info see http://trainomatic.org/trainomatic
## Citation
```
Pasini, (2021). Train-O-Matic Large. Version 1. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/5110
``` |
violetxi/NUMINA-V1-Clean-Blocks-3400_4600-526_577 | violetxi | "2024-11-20T07:44:39Z" | 5 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T16:39:33Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: is_correct
dtype: bool
- name: target_answer
dtype: string
- name: solution
dtype: string
- name: solution_steps
dtype: string
- name: attempts
dtype: string
- name: model_answer
dtype: string
splits:
- name: train
num_bytes: 2254797194
num_examples: 221968
download_size: 249630890
dataset_size: 2254797194
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
emilyphamm/quanloccumon | emilyphamm | "2024-11-19T03:21:51Z" | 5 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-19T03:21:51Z" | ---
license: mit
---
|
aonmao/hcj_videos | aonmao | "2024-11-19T07:00:26Z" | 5 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-19T07:00:26Z" | ---
license: mit
---
|
kaki-paper/20241118_iitp_ti_domain_malicious_qa_dataset | kaki-paper | "2024-11-19T08:20:05Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T07:46:54Z" | ---
dataset_info:
features:
- name: template
dtype: string
- name: system_prompt
dtype: string
- name: json_data
dtype: string
- name: instruction_1
dtype: string
- name: instruction_2
dtype: string
- name: report_template
dtype: string
- name: output
dtype: string
- name: domain
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 98024431.5
num_examples: 4500
- name: test
num_bytes: 10891603.5
num_examples: 500
download_size: 66400164
dataset_size: 108916035.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
matthewdelorenzo/dpo_verilog_buggy | matthewdelorenzo | "2024-11-19T09:59:22Z" | 5 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T09:52:45Z" | ---
license: mit
---
|
cymen-arfor/evals-btb-whisper-large-v2-ft-ca-25awr | cymen-arfor | "2024-11-19T10:32:38Z" | 5 | 0 | [
"language:cy",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"speech recognition"
] | null | "2024-11-19T10:32:18Z" | ---
language:
- cy
license: cc0-1.0
tags:
- speech recognition
metrics:
- wer
- cer
---
__Model__: cymen-arfor/whisper-large-v2-ft-ca-25awr
__Test Set__: DewiBrynJones/banc-trawsgrifiadau-bangor-clean
__Split__: test
------------------------------------------------------------------------------------------------------------------------------------
__WER: 52.721032__
__CER: 21.859754__
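A minimal sketch of how scores like these can be reproduced with the `evaluate` library; the reference/prediction strings are illustrative only, and note that the library returns a fraction whereas the figures above appear to be percentages:
```python
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

references = ["mae'r tywydd yn braf heddiw"]
predictions = ["mae tywydd yn braf heddi"]

print(wer.compute(references=references, predictions=predictions))
print(cer.compute(references=references, predictions=predictions))
```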
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_964e979c-7acc-4bd6-a1c0-3ec9a63dcfd4 | argilla-internal-testing | "2024-11-19T10:56:12Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T10:56:12Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_51487f0c-5751-41d3-8bf8-46722af3818a | argilla-internal-testing | "2024-11-19T10:56:15Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T10:56:14Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b8fa2bca-5688-4a50-bd61-dc4fae6ee848 | argilla-internal-testing | "2024-11-19T10:56:37Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T10:56:36Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|