| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card |
|---|---|---|---|---|---|---|---|---|
data-is-better-together/MPEP_GREEK | data-is-better-together | "2024-06-26T06:29:52Z" | 2 | 1 | [
"language:el",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-06-21T08:22:29Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
language:
- el
---
# Dataset Card for MPEP_GREEK
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, install Argilla with `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("DIBT/MPEP_GREEK")
```
### Load with `datasets`
To load this dataset with `datasets`, install `datasets` with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/MPEP_GREEK")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves; at the moment, only text fields are supported. These are the fields for which annotators provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| source | Source | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| target | Target | text | True | Translate the text. | N/A |
The **suggestions** are human- or machine-generated recommendations for each question, intended to assist the annotator during the annotation process. Each suggestion is linked to an existing question and named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
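This naming convention is easy to express in code. A minimal sketch (the `suggestion_columns` helper is illustrative, not part of the Argilla API):

```python
def suggestion_columns(question_name: str) -> tuple:
    """Return the suggestion column names derived from a question name,
    following the "-suggestion" / "-suggestion-metadata" convention."""
    return (f"{question_name}-suggestion",
            f"{question_name}-suggestion-metadata")

print(suggestion_columns("target"))
# ('target-suggestion', 'target-suggestion-metadata')
```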
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful for giving annotators extra context, or for recording details about the record itself, such as the author, the date, or a link to the original source. The metadata is always optional and can be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are also optional: a plain string used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "888",
"fields": {
"source": "Given the text: An experienced and enthusiastic innovator...you want on your team.\nMargaret Hines is the founder and Principal Consultant of Inspire Marketing, LLC, investing in local businesses, serving the community with business brokerage and marketing consulting. She has an undergraduate degree from Washington University in St. Louis, MO, and an MBA from the University of Wisconsin-Milwaukee.\nMargaret offers consulting in marketing, business sales and turnarounds and franchising. She is also an investor in local businesses.\nPrior to founding Inspire Marketing in 2003, Margaret gained her business acumen, sales and marketing expertise while working at respected Fortune 1000 companies.\nSummarize the background and expertise of Margaret Hines, the founder of Inspire Marketing."
},
"metadata": {
"evolved_from": null,
"kind": "synthetic",
"source": "ultrachat"
},
"responses": [
{
"status": "submitted",
"user_id": "f4d8878d-e378-4087-a99b-c31dad5f0609",
"values": {
"target": {
"value": "\u0392\u03ac\u03c3\u03b5\u03b9 \u03c4\u03bf\u03c5 \u03ba\u03b5\u03b9\u03bc\u03ad\u03bd\u03bf\u03c5: \u039c\u03af\u03b1 \u03ad\u03bc\u03c0\u03b5\u03b9\u03c1\u03b7 \u03ba\u03b1\u03b9 \u03b5\u03bd\u03b8\u03bf\u03c5\u03c3\u03b9\u03ce\u03b4\u03b7\u03c2 \u03ba\u03b1\u03b9\u03bd\u03bf\u03c4\u03cc\u03bc\u03bf\u03c2... \u03c0\u03bf\u03c5 \u03b8\u03ad\u03bb\u03b5\u03c4\u03b5 \u03c3\u03c4\u03b7\u03bd \u03bf\u03bc\u03ac\u03b4\u03b1 \u03c3\u03b1\u03c2.\n\u0397 Margaret Hines \u03b5\u03af\u03bd\u03b1\u03b9 \u03b7 \u03b9\u03b4\u03c1\u03cd\u03c4\u03c1\u03b9\u03b1 \u03ba\u03b1\u03b9 \u03b7 \u03ba\u03cd\u03c1\u03b9\u03b1 \u03c3\u03cd\u03bc\u03b2\u03bf\u03c5\u03bb\u03bf\u03c2 \u03c4\u03b7\u03c2 Inspire Marketing, LLC, \u03ad\u03c7\u03bf\u03bd\u03c4\u03b1\u03c2 \u03b5\u03c0\u03b5\u03bd\u03b4\u03cd\u03c3\u03b5\u03b9 \u03c3\u03b5 \u03c4\u03bf\u03c0\u03b9\u03ba\u03ad\u03c2 \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03ae\u03c3\u03b5\u03b9\u03c2, \u03b5\u03be\u03c5\u03c0\u03b7\u03c1\u03b5\u03c4\u03ce\u03bd\u03c4\u03b1\u03c2 \u03c4\u03b7\u03bd \u03ba\u03bf\u03b9\u03bd\u03cc\u03c4\u03b7\u03c4\u03b1 \u03bc\u03ad\u03c3\u03c9 \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03b7\u03bc\u03b1\u03c4\u03b9\u03ba\u03ae\u03c2 \u03bc\u03b5\u03c3\u03b9\u03c4\u03b5\u03af\u03b1\u03c2 \u03ba\u03b1\u03b9 \u03c3\u03c5\u03bc\u03b2\u03bf\u03c5\u03bb\u03ce\u03bd \u03bc\u03ac\u03c1\u03ba\u03b5\u03c4\u03b9\u03bd\u03b3\u03ba. \u0388\u03c7\u03b5\u03b9 \u03c0\u03c4\u03c5\u03c7\u03af\u03bf \u03b1\u03c0\u03cc \u03c4\u03bf \u03a0\u03b1\u03bd\u03b5\u03c0\u03b9\u03c3\u03c4\u03ae\u03bc\u03b9\u03bf \u03c4\u03b7\u03c2 \u039f\u03c5\u03ac\u03c3\u03b9\u03b3\u03ba\u03c4\u03bf\u03bd \u03c3\u03c4\u03bf St. 
Louis, MO, \u03ba\u03b1\u03b9 MBA \u03b1\u03c0\u03cc \u03c4\u03bf \u03a0\u03b1\u03bd\u03b5\u03c0\u03b9\u03c3\u03c4\u03ae\u03bc\u03b9\u03bf \u03c4\u03bf\u03c5 Wisconsin-Milwaukee.\n\u0397 Margaret \u03c0\u03c1\u03bf\u03c3\u03c6\u03ad\u03c1\u03b5\u03b9 \u03c3\u03c5\u03bc\u03b2\u03bf\u03c5\u03bb\u03ad\u03c2 \u03c3\u03b5 \u03b8\u03ad\u03bc\u03b1\u03c4\u03b1 \u03bc\u03ac\u03c1\u03ba\u03b5\u03c4\u03b9\u03bd\u03b3\u03ba, \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03b7\u03bc\u03b1\u03c4\u03b9\u03ba\u03ce\u03bd \u03c0\u03c9\u03bb\u03ae\u03c3\u03b5\u03c9\u03bd \u03ba\u03b1\u03b9 \u03b1\u03bd\u03b1\u03ba\u03b1\u03c4\u03b1\u03c3\u03ba\u03b5\u03c5\u03ce\u03bd \u03ba\u03b1\u03b9 franchising. \u0395\u03af\u03bd\u03b1\u03b9 \u03b5\u03c0\u03af\u03c3\u03b7\u03c2 \u03b5\u03c0\u03b5\u03bd\u03b4\u03cd\u03c4\u03c1\u03b9\u03b1 \u03c3\u03b5 \u03c4\u03bf\u03c0\u03b9\u03ba\u03ad\u03c2 \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03ae\u03c3\u03b5\u03b9\u03c2.\n\u03a0\u03c1\u03b9\u03bd \u03b1\u03c0\u03cc \u03c4\u03b7\u03bd \u03af\u03b4\u03c1\u03c5\u03c3\u03b7 \u03c4\u03b7\u03c2 Inspire Marketing \u03c4\u03bf 2003, \u03b7 Margaret \u03b1\u03c0\u03ad\u03ba\u03c4\u03b7\u03c3\u03b5 \u03c4\u03b7\u03bd \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03b7\u03bc\u03b1\u03c4\u03b9\u03ba\u03ae \u03c4\u03b7\u03c2 \u03bf\u03be\u03c5\u03b4\u03ad\u03c1\u03ba\u03b5\u03b9\u03b1, \u03ba\u03b1\u03b9 \u03c4\u03b7\u03bd \u03c4\u03b5\u03c7\u03bd\u03bf\u03b3\u03bd\u03c9\u03c3\u03af\u03b1 \u03c4\u03b7\u03c2 \u03c3\u03c4\u03b9\u03c2 \u03c0\u03c9\u03bb\u03ae\u03c3\u03b5\u03b9\u03c2 \u03ba\u03b1\u03b9 \u03c4\u03bf \u03bc\u03ac\u03c1\u03ba\u03b5\u03c4\u03b9\u03bd\u03b3\u03ba \u03cc\u03c3\u03bf \u03b5\u03c1\u03b3\u03b1\u03b6\u03cc\u03c4\u03b1\u03bd \u03c3\u03b5 \u03b1\u03bd\u03b1\u03b3\u03bd\u03c9\u03c1\u03b9\u03c3\u03bc\u03ad\u03bd\u03b5\u03c2 \u03b5\u03c4\u03b1\u03b9\u03c1\u03b5\u03af\u03b5\u03c2 \u03c4\u03bf\u03c5 Fortune 1000.\n\u03a3\u03c5\u03bd\u03cc\u03c8\u03b9\u03c3\u03b5 \u03c4\u03bf 
\u03b9\u03c3\u03c4\u03bf\u03c1\u03b9\u03ba\u03cc \u03ba\u03b1\u03b9 \u03c4\u03b7\u03bd \u03c4\u03b5\u03c7\u03bd\u03bf\u03b3\u03bd\u03c9\u03c3\u03af\u03b1 \u03c4\u03b7\u03c2 Margaret Hines, \u03c4\u03b7\u03c2 \u03b9\u03b4\u03c1\u03cd\u03c4\u03c1\u03b9\u03b1\u03c2 \u03c4\u03bf\u03c5 Inspire Marketing."
}
}
}
],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "888",
"metadata": "{\"source\": \"ultrachat\", \"kind\": \"synthetic\", \"evolved_from\": null}",
"source": "Given the text: An experienced and enthusiastic innovator...you want on your team.\nMargaret Hines is the founder and Principal Consultant of Inspire Marketing, LLC, investing in local businesses, serving the community with business brokerage and marketing consulting. She has an undergraduate degree from Washington University in St. Louis, MO, and an MBA from the University of Wisconsin-Milwaukee.\nMargaret offers consulting in marketing, business sales and turnarounds and franchising. She is also an investor in local businesses.\nPrior to founding Inspire Marketing in 2003, Margaret gained her business acumen, sales and marketing expertise while working at respected Fortune 1000 companies.\nSummarize the background and expertise of Margaret Hines, the founder of Inspire Marketing.",
"target": [
{
"status": "submitted",
"user_id": "f4d8878d-e378-4087-a99b-c31dad5f0609",
"value": "\u0392\u03ac\u03c3\u03b5\u03b9 \u03c4\u03bf\u03c5 \u03ba\u03b5\u03b9\u03bc\u03ad\u03bd\u03bf\u03c5: \u039c\u03af\u03b1 \u03ad\u03bc\u03c0\u03b5\u03b9\u03c1\u03b7 \u03ba\u03b1\u03b9 \u03b5\u03bd\u03b8\u03bf\u03c5\u03c3\u03b9\u03ce\u03b4\u03b7\u03c2 \u03ba\u03b1\u03b9\u03bd\u03bf\u03c4\u03cc\u03bc\u03bf\u03c2... \u03c0\u03bf\u03c5 \u03b8\u03ad\u03bb\u03b5\u03c4\u03b5 \u03c3\u03c4\u03b7\u03bd \u03bf\u03bc\u03ac\u03b4\u03b1 \u03c3\u03b1\u03c2.\n\u0397 Margaret Hines \u03b5\u03af\u03bd\u03b1\u03b9 \u03b7 \u03b9\u03b4\u03c1\u03cd\u03c4\u03c1\u03b9\u03b1 \u03ba\u03b1\u03b9 \u03b7 \u03ba\u03cd\u03c1\u03b9\u03b1 \u03c3\u03cd\u03bc\u03b2\u03bf\u03c5\u03bb\u03bf\u03c2 \u03c4\u03b7\u03c2 Inspire Marketing, LLC, \u03ad\u03c7\u03bf\u03bd\u03c4\u03b1\u03c2 \u03b5\u03c0\u03b5\u03bd\u03b4\u03cd\u03c3\u03b5\u03b9 \u03c3\u03b5 \u03c4\u03bf\u03c0\u03b9\u03ba\u03ad\u03c2 \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03ae\u03c3\u03b5\u03b9\u03c2, \u03b5\u03be\u03c5\u03c0\u03b7\u03c1\u03b5\u03c4\u03ce\u03bd\u03c4\u03b1\u03c2 \u03c4\u03b7\u03bd \u03ba\u03bf\u03b9\u03bd\u03cc\u03c4\u03b7\u03c4\u03b1 \u03bc\u03ad\u03c3\u03c9 \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03b7\u03bc\u03b1\u03c4\u03b9\u03ba\u03ae\u03c2 \u03bc\u03b5\u03c3\u03b9\u03c4\u03b5\u03af\u03b1\u03c2 \u03ba\u03b1\u03b9 \u03c3\u03c5\u03bc\u03b2\u03bf\u03c5\u03bb\u03ce\u03bd \u03bc\u03ac\u03c1\u03ba\u03b5\u03c4\u03b9\u03bd\u03b3\u03ba. \u0388\u03c7\u03b5\u03b9 \u03c0\u03c4\u03c5\u03c7\u03af\u03bf \u03b1\u03c0\u03cc \u03c4\u03bf \u03a0\u03b1\u03bd\u03b5\u03c0\u03b9\u03c3\u03c4\u03ae\u03bc\u03b9\u03bf \u03c4\u03b7\u03c2 \u039f\u03c5\u03ac\u03c3\u03b9\u03b3\u03ba\u03c4\u03bf\u03bd \u03c3\u03c4\u03bf St. 
Louis, MO, \u03ba\u03b1\u03b9 MBA \u03b1\u03c0\u03cc \u03c4\u03bf \u03a0\u03b1\u03bd\u03b5\u03c0\u03b9\u03c3\u03c4\u03ae\u03bc\u03b9\u03bf \u03c4\u03bf\u03c5 Wisconsin-Milwaukee.\n\u0397 Margaret \u03c0\u03c1\u03bf\u03c3\u03c6\u03ad\u03c1\u03b5\u03b9 \u03c3\u03c5\u03bc\u03b2\u03bf\u03c5\u03bb\u03ad\u03c2 \u03c3\u03b5 \u03b8\u03ad\u03bc\u03b1\u03c4\u03b1 \u03bc\u03ac\u03c1\u03ba\u03b5\u03c4\u03b9\u03bd\u03b3\u03ba, \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03b7\u03bc\u03b1\u03c4\u03b9\u03ba\u03ce\u03bd \u03c0\u03c9\u03bb\u03ae\u03c3\u03b5\u03c9\u03bd \u03ba\u03b1\u03b9 \u03b1\u03bd\u03b1\u03ba\u03b1\u03c4\u03b1\u03c3\u03ba\u03b5\u03c5\u03ce\u03bd \u03ba\u03b1\u03b9 franchising. \u0395\u03af\u03bd\u03b1\u03b9 \u03b5\u03c0\u03af\u03c3\u03b7\u03c2 \u03b5\u03c0\u03b5\u03bd\u03b4\u03cd\u03c4\u03c1\u03b9\u03b1 \u03c3\u03b5 \u03c4\u03bf\u03c0\u03b9\u03ba\u03ad\u03c2 \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03ae\u03c3\u03b5\u03b9\u03c2.\n\u03a0\u03c1\u03b9\u03bd \u03b1\u03c0\u03cc \u03c4\u03b7\u03bd \u03af\u03b4\u03c1\u03c5\u03c3\u03b7 \u03c4\u03b7\u03c2 Inspire Marketing \u03c4\u03bf 2003, \u03b7 Margaret \u03b1\u03c0\u03ad\u03ba\u03c4\u03b7\u03c3\u03b5 \u03c4\u03b7\u03bd \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03b7\u03bc\u03b1\u03c4\u03b9\u03ba\u03ae \u03c4\u03b7\u03c2 \u03bf\u03be\u03c5\u03b4\u03ad\u03c1\u03ba\u03b5\u03b9\u03b1, \u03ba\u03b1\u03b9 \u03c4\u03b7\u03bd \u03c4\u03b5\u03c7\u03bd\u03bf\u03b3\u03bd\u03c9\u03c3\u03af\u03b1 \u03c4\u03b7\u03c2 \u03c3\u03c4\u03b9\u03c2 \u03c0\u03c9\u03bb\u03ae\u03c3\u03b5\u03b9\u03c2 \u03ba\u03b1\u03b9 \u03c4\u03bf \u03bc\u03ac\u03c1\u03ba\u03b5\u03c4\u03b9\u03bd\u03b3\u03ba \u03cc\u03c3\u03bf \u03b5\u03c1\u03b3\u03b1\u03b6\u03cc\u03c4\u03b1\u03bd \u03c3\u03b5 \u03b1\u03bd\u03b1\u03b3\u03bd\u03c9\u03c1\u03b9\u03c3\u03bc\u03ad\u03bd\u03b5\u03c2 \u03b5\u03c4\u03b1\u03b9\u03c1\u03b5\u03af\u03b5\u03c2 \u03c4\u03bf\u03c5 Fortune 1000.\n\u03a3\u03c5\u03bd\u03cc\u03c8\u03b9\u03c3\u03b5 \u03c4\u03bf 
\u03b9\u03c3\u03c4\u03bf\u03c1\u03b9\u03ba\u03cc \u03ba\u03b1\u03b9 \u03c4\u03b7\u03bd \u03c4\u03b5\u03c7\u03bd\u03bf\u03b3\u03bd\u03c9\u03c3\u03af\u03b1 \u03c4\u03b7\u03c2 Margaret Hines, \u03c4\u03b7\u03c2 \u03b9\u03b4\u03c1\u03cd\u03c4\u03c1\u03b9\u03b1\u03c2 \u03c4\u03bf\u03c5 Inspire Marketing."
}
],
"target-suggestion": null,
"target-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
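Since this is a translation task, a record in the flattened `datasets` format shown above can be post-processed into a plain (source, target) pair. A minimal sketch (the `to_translation_pair` helper is illustrative, not part of any library):

```python
def to_translation_pair(record):
    """Extract a (source, target) pair from a flattened record,
    keeping only the first submitted response; None if there is none."""
    submitted = [r["value"] for r in record.get("target", [])
                 if r.get("status") == "submitted"]
    if not submitted:
        return None
    return record["source"], submitted[0]

# Toy record in the same shape as the example above:
record = {
    "source": "Hello",
    "target": [{"status": "submitted", "value": "Γεια"}],
}
print(to_translation_pair(record))  # ('Hello', 'Γεια')
```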
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; at the moment, only text fields are supported. These are the fields for which annotators provide responses to the questions.
* **source** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **target** is of type `text`, with description "Translate the text.".
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **target-suggestion** is of type `text`.
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful for giving annotators extra context, or for recording details about the record itself, such as the author, the date, or a link to the original source. The metadata is always optional and can be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
This is a translation dataset that contains texts. Please translate the text in the text field.
#### Annotation process
The translators were native Greeks. Each prompt was initially translated via Google Translate, then refined by human annotators.
Prompts containing information not relevant to the Greek context were not altered in any way before translation.
Words with no direct equivalent in Greek were not translated.
#### Who are the annotators?
Initial annotation of the entire dataset was done by [Marios Mamalis](https://huggingface.co/Mario00000).
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Nutanix/cpp_unittests_llama8b_vs_llama70b_judge_llama70 | Nutanix | "2024-07-25T06:00:53Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-18T00:24:27Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Code
dtype: string
- name: Unit Test - Llama8b
dtype: string
- name: Unit Test - Llama70b
dtype: string
- name: Unit Test - (Ground Truth)
dtype: string
- name: Winning Model
dtype: string
- name: Judgement
dtype: string
splits:
- name: train
num_bytes: 104875856
num_examples: 2013
download_size: 26796977
dataset_size: 104875856
---
# Unit Test Evaluation Results
This repository details the evaluation of unit tests generated by LLAMA3 models. It compares the unit tests produced by two models, LLAMA3-8B-Instruct and LLAMA3-70B-Instruct, against the [ground truth data](https://huggingface.co/datasets/Nutanix/cpp-unit-test-benchmarking-dataset). In this evaluation, the LLAMA3-70B-Instruct model served as the judge, assessing how well the unit tests from both models aligned with the ground truth.
## Models Used
### [LLAMA3-70B-Instruct](https://huggingface.co/hdadlani/Llama-3-128k-70B-Instruct-awq)
- **HuggingFace Link**: [LLAMA3-70B-Instruct](https://huggingface.co/hdadlani/Llama-3-128k-70B-Instruct-awq)
- **Precision**: AWQ Quantized, 4-Bit Precision
- **Description**: A large-scale model used as the judge for evaluating unit tests.
### [LLAMA3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **HuggingFace Link**: [LLAMA3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Precision**: BF16 Precision
- **Description**: A smaller model whose unit tests were compared against those generated by LLAMA3-70B-Instruct.
## Dataset
The evaluation utilized the [cpp-unit-test-benchmarking-dataset](https://huggingface.co/datasets/Nutanix/cpp-unit-test-benchmarking-dataset) as the ground truth.
### Dataset Structure
The dataset was loaded using the following structure:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Nutanix/cpp_unittests_llama8b_vs_llama70b_judge_llama70")

# View dataset structure
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: [
#             'Code',
#             'Unit Test - Llama8b',
#             'Unit Test - Llama70b',
#             'Unit Test - (Ground Truth)',
#             'Winning Model',
#             'Judgement'
#         ],
#         num_rows: 2013
#     })
# })
```
## Features:
- **Code**: The source code for which the unit tests are written.
- **Unit Test - Llama8b**: Unit test generated by the LLAMA3-8B-Instruct model.
- **Unit Test - Llama70b**: Unit test generated by the LLAMA3-70B-Instruct model.
- **Unit Test - (Ground Truth)**: The benchmark or ground truth unit test.
- **Winning Model**: The model whose unit test is closer to the ground truth.
- **Judgement**: The evaluation results comparing the unit tests.
The results are summarized in the table below:
## Unit Test Evaluation Results
| Outcome | Count |
|----------------------|-------|
| LLAMA3-70B-Instruct | 1060 |
| LLAMA3-8B-Instruct | 277 |
| Error | 277 |
| Tie | 399 |
### Explanation
1. LLAMA3-70B-Instruct Wins: The LLAMA3-70B-Instruct model aligned more closely with the ground truth in 1060 cases.
2. LLAMA3-8B-Instruct Wins: The LLAMA3-8B-Instruct model aligned more closely with the ground truth in 277 cases.
3. Error: 277 instances where errors occurred, often due to context length exceeding 32,000 characters.
4. Tie: 399 instances where results were tied between the models.
### Win Rates
- LLAMA3-70B-Instruct Win Percentage: 52.66%
- LLAMA3-8B-Instruct Win Percentage: 13.76%
- Error Percentage: 13.76%
- Tie Percentage: 19.82%
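The percentages above follow directly from the outcome counts; a quick sketch to reproduce them (in a real run they could instead be tallied from the `Winning Model` column):

```python
from collections import Counter

# Outcome counts from the results table above
outcomes = Counter({
    "LLAMA3-70B-Instruct": 1060,
    "LLAMA3-8B-Instruct": 277,
    "Error": 277,
    "Tie": 399,
})

total = sum(outcomes.values())  # 2013
for outcome, count in outcomes.items():
    print(f"{outcome}: {count / total:.2%}")
```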
### Framework to generate unit test
<img src="https://cdn-uploads.huggingface.co/production/uploads/6658bb3acf5fc31e3a0bd24a/nFUDNtFeAukk_qLZL24F6.png" alt="image/png" width="600" height="400"/>
### Evaluation Approach
The LLAMA3-70B-Instruct model, with its quantized 4-bit precision, was used as the judge to evaluate which unit test (from LLAMA3-8B-Instruct or LLAMA3-70B-Instruct) was closer to the ground truth provided by the benchmark dataset. This evaluation highlights the performance differences between the two models and indicates a higher alignment of LLAMA3-70B-Instruct with the benchmarked unit tests.
Prompt used for evaluation: [Evaluation Prompt](https://huggingface.co/datasets/Nutanix/cpp_unittests_llama8b_vs_llama70b_judge_llama70/blob/main/config_evaluator.yaml)
|
israel/ProverbEval | israel | "2024-10-10T07:46:51Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-18T12:15:04Z" | ---
configs:
- config_name: amh
data_files:
- split: train
path: amh/amh_all.csv
- config_name: amh_fill_blank
data_files:
- split: train_1
path: amh/amh_fill_1.csv
- split: train_2
path: amh/amh_fill_2.csv
- split: train_3
path: amh/amh_fill_3.csv
- split: valid
path: amh/amh_fill_valid.csv
- config_name: amh_choice_english
data_files:
- split: english_1
path: amh/amh_english_test_1.csv
- split: english_2
path: amh/amh_english_test_2.csv
- split: english_3
path: amh/amh_english_test_3.csv
- split: english_4
path: amh/amh_english_test_4.csv
- split: english_5
path: amh/amh_english_test_5.csv
- config_name: translate_amh_choice_english
data_files:
- split: english_1
path: translate-test/amh/amh_english_test_1.csv
- split: english_2
path: translate-test/amh/amh_english_test_2.csv
- split: english_3
path: translate-test/amh/amh_english_test_3.csv
- config_name: amh_choice_native
data_files:
- split: native_1
path: amh/amh_native_test_1.csv
- split: native_2
path: amh/amh_native_test_2.csv
- split: native_3
path: amh/amh_native_test_3.csv
- split: native_4
path: amh/amh_native_test_4.csv
- split: native_5
path: amh/amh_native_test_5.csv
- config_name: translate_amh_choice_native
data_files:
- split: native_1
path: translate-test/amh/amh_native_test_1.csv
- split: native_2
path: translate-test/amh/amh_native_test_2.csv
- split: native_3
path: translate-test/amh/amh_native_test_3.csv
- config_name: amh_generation
data_files:
- split: native
path: amh/amh_meaining_generation_native.csv
- split: english
path: amh/amh_meaining_generation_english.csv
- config_name: eng
data_files:
- split: train
path: eng/eng_all.csv
- config_name: eng_fill_blank
data_files:
- split: train_1
path: eng/eng_fill_1.csv
- split: train_2
path: eng/eng_fill_2.csv
- split: train_3
path: eng/eng_fill_3.csv
- split: valid
path: eng/eng_fill_valid.csv
- config_name: eng_generation
data_files:
- split: native
path: eng/eng_meaining_generation_native.csv
- config_name: eng_choice_native
data_files:
- split: native_1
path: eng/eng_native_test_1.csv
- split: native_2
path: eng/eng_native_test_2.csv
- split: native_3
path: eng/eng_native_test_3.csv
- split: native_4
path: eng/eng_native_test_4.csv
- split: native_5
path: eng/eng_native_test_5.csv
- config_name: gez_fill_blank
data_files:
- split: train_1
path: geez/geez_fill_1.csv
- split: train_2
path: geez/geez_fill_2.csv
- split: train_3
path: geez/geez_fill_3.csv
- split: valid
path: geez/gez_fill_valid.csv
- config_name: gez_choice_english
data_files:
- split: english_1
path: geez/geez_english_test_1.csv
- split: english_2
path: geez/geez_english_test_2.csv
- split: english_3
path: geez/geez_english_test_3.csv
- split: english_4
path: geez/geez_english_test_4.csv
- split: english_5
path: geez/geez_english_test_5.csv
- config_name: gez_choice_native
data_files:
- split: native_1
path: geez/geez_native_test_1.csv
- split: native_2
path: geez/geez_native_test_2.csv
- split: native_3
path: geez/geez_native_test_3.csv
- split: native_4
path: geez/geez_native_test_4.csv
- split: native_5
path: geez/geez_native_test_5.csv
- config_name: gez_generation
data_files:
- split: native
path: geez/gez-native-description.csv
- split: english
path: geez/geez_meaining_generation_english.csv
- config_name: orm
data_files:
- split: train
path: orm/orm_all.csv
- config_name: orm_choice_english
data_files:
- split: english_1
path: orm/orm_english_test_1.csv
- split: english_2
path: orm/orm_english_test_2.csv
- split: english_3
path: orm/orm_english_test_3.csv
- split: english_4
path: orm/orm_english_test_4.csv
- split: english_5
path: orm/orm_english_test_5.csv
- config_name: translate_orm_choice_english
data_files:
- split: english_1
path: translate-test/orm/orm_english_test_1.csv
- split: english_2
path: translate-test/orm/orm_english_test_2.csv
- split: english_3
path: translate-test/orm/orm_english_test_3.csv
- config_name: orm_choice_native
data_files:
- split: native_1
path: orm/orm_native_test_1.csv
- split: native_2
path: orm/orm_native_test_2.csv
- split: native_3
path: orm/orm_native_test_3.csv
- split: native_4
path: orm/orm_native_test_4.csv
- split: native_5
path: orm/orm_native_test_5.csv
- config_name: translate_orm_choice_native
data_files:
- split: native_1
path: translate-test/orm/orm_native_test_1.csv
- split: native_2
path: translate-test/orm/orm_native_test_2.csv
- split: native_3
path: translate-test/orm/orm_native_test_3.csv
- config_name: orm_generation
data_files:
- split: native
path: orm/orm_meaining_generation_native.csv
- split: english
path: orm/orm_meaining_generation_english.csv
- config_name: orm_fill_blank
data_files:
- split: train_1
path: orm/orm_fill_1.csv
- split: train_2
path: orm/orm_fill_2.csv
- split: train_3
path: orm/orm_fill_3.csv
- split: valid
path: orm/orm_fill_valid.csv
- config_name: tir
data_files:
- split: train
path: tir/tir_all.csv
- config_name: tir_fill_blank
data_files:
- split: train_1
path: tir/tir_fill_1.csv
- split: train_2
path: tir/tir_fill_2.csv
- split: train_3
path: tir/tir_fill_3.csv
- split: valid
path: tir/tir_fill_valid.csv
- config_name: tir_generation
data_files:
- split: native
path: tir/tir_meaining_generation_native.csv
- split: english
path: tir/tir_meaining_generation_english.csv
- config_name: tir_choice_english
data_files:
- split: english_1
path: tir/tir_english_test_1.csv
- split: english_2
path: tir/tir_english_test_2.csv
- split: english_3
path: tir/tir_english_test_3.csv
- split: english_4
path: tir/tir_english_test_4.csv
- split: english_5
path: tir/tir_english_test_5.csv
- config_name: tir_choice_native
data_files:
- split: native_1
path: tir/tir_native_test_1.csv
- split: native_2
path: tir/tir_native_test_2.csv
- split: native_3
path: tir/tir_native_test_3.csv
- split: native_4
path: tir/tir_native_test_4.csv
- split: native_5
path: tir/tir_native_test_5.csv
- config_name: translate_tir_choice_english
data_files:
- split: english_1
path: translate-test/tir/tir_english_test_1.csv
- split: english_2
path: translate-test/tir/tir_english_test_2.csv
- split: english_3
path: translate-test/tir/tir_english_test_3.csv
- config_name: translate_tir_choice_native
data_files:
- split: native_1
path: translate-test/tir/tir_native_test_1.csv
- split: native_2
path: translate-test/tir/tir_native_test_2.csv
- split: native_3
path: translate-test/tir/tir_native_test_3.csv
---
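The config names above follow a regular `{lang}_{task}` pattern, with a `translate_` prefix for the translate-test variants. A small helper (illustrative only) can build names to pass to `datasets.load_dataset`:

```python
def config_name(lang: str, task: str, translated: bool = False) -> str:
    """Build a ProverbEval config name, e.g. "amh_fill_blank" or
    "translate_tir_choice_native"."""
    name = f"{lang}_{task}"
    return f"translate_{name}" if translated else name

# e.g. load_dataset("israel/ProverbEval", config_name("amh", "fill_blank"),
#                   split="train_1")
print(config_name("tir", "choice_native", translated=True))
# translate_tir_choice_native
```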
```
.
├── amh
│   ├── amharic-fill_test.csv
│   ├── amh_english_test_1.csv
│   ├── amh_english_test_2.csv
│   ├── amh_english_test_3.csv
│   ├── amh_fill_1.csv
│   ├── amh_fill_2.csv
│   ├── amh_fill_3.csv
│   ├── amh_meaining_generation_english.csv
│   ├── amh_meaining_generation_native.csv
│   ├── amh_native_test_1.csv
│   ├── amh_native_test_2.csv
│   └── amh_native_test_3.csv
├── eng
│   ├── eng_fill_test.csv
│   ├── eng_meaining_generation_native.csv
│   ├── eng_native_test_1.csv
│   ├── eng_native_test_2.csv
│   └── eng_native_test_3.csv
├── geez
│   ├── geez_english_test_1.csv
│   ├── geez_english_test_2.csv
│   ├── geez_english_test_3.csv
│   ├── geez_fill_1.csv
│   ├── geez_fill_2.csv
│   ├── geez_fill_3.csv
│   └── geez_meaining_generation_english.csv
├── orm
│   ├── orm_english_test_1.csv
│   ├── orm_english_test_2.csv
│   ├── orm_english_test_3.csv
│   ├── orm_fill_1.csv
│   ├── orm_fill_2.csv
│   ├── orm_fill_3.csv
│   ├── orm_meaining_generation_english.csv
│   ├── orm_meaining_generation_native.csv
│   ├── orm_native_test_1.csv
│   ├── orm_native_test_2.csv
│   ├── orm_native_test_3.csv
│   └── oromo_fill_test.csv
└── tir
    ├── tir_fill_1.csv
    ├── tir_fill_2.csv
    └── tir_fill_3.csv
``` |
afg1/pombe-canto-data | afg1 | "2024-08-15T16:34:12Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-30T12:41:40Z" | ---
dataset_info:
features:
- name: triage_status
dtype: large_string
- name: pmid
dtype: large_string
- name: abstract
dtype: large_string
- name: citation
dtype: large_string
- name: token_count
dtype: int32
- name: label
dtype: int8
splits:
- name: train
num_bytes: 13736788
num_examples: 10360
- name: test
num_bytes: 3422716
num_examples: 2590
download_size: 9332324
dataset_size: 17159504
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Nutanix/cpp_unit_tests_unprocessed_Phi-3-mini-128k-instruct_vs_Phi-3-small-128k-instruct_judge_gpt | Nutanix | "2024-08-11T19:07:11Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-11T19:07:03Z" | ---
dataset_info:
features:
- name: Code
dtype: string
- name: Unit Test_Phi-3-mini-128k-instruct_raw
dtype: string
- name: Unit Test_Phi-3-small-128k-instruct_raw
dtype: string
- name: Unit Test
dtype: string
- name: Winning Model
dtype: string
- name: Judgement
dtype: string
splits:
- name: train
num_bytes: 9240903
num_examples: 201
download_size: 2829507
dataset_size: 9240903
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BotnoiNLPteam/scdt_proofread_v1 | BotnoiNLPteam | "2024-08-14T04:27:46Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-14T04:27:43Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10769351
num_examples: 18436
- name: test
num_bytes: 1300663
num_examples: 2282
- name: val
num_bytes: 1307448
num_examples: 2281
download_size: 3737248
dataset_size: 13377462
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
|
Nutanix/CPP-UNITTEST-BENCH | Nutanix | "2024-08-28T05:42:31Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-28T04:36:56Z" | ---
dataset_info:
features:
- name: ID
dtype: int64
- name: Language
dtype: string
- name: Repository Name
dtype: string
- name: File Name
dtype: string
- name: File Path in Repository
dtype: string
- name: File Path for Unit Test
dtype: string
- name: Code
dtype: string
- name: Unit Test - (Ground Truth)
dtype: string
splits:
- name: train
num_bytes: 52934692
num_examples: 2653
download_size: 13965160
dataset_size: 52934692
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Open Source Code and Unit Tests
## Dataset Details
### Dataset Description
This dataset contains C++ code snippets and their corresponding ground-truth unit tests collected from various open-source GitHub repositories. The primary purpose of this dataset is to aid in the development and evaluation of automated testing tools, code quality analysis, and LLMs for test generation.
- **Curated by:** Vaishnavi Bhargava
- **Language(s):** C++
<img src="https://cdn-uploads.huggingface.co/production/uploads/6658bb3acf5fc31e3a0bd24a/hyIhFHmrjUzypFgNPU2UX.png" alt="image/png" width="800" height="600"/>
## Dataset Structure
```python
from datasets import Dataset, load_dataset
# Load the dataset
dataset = load_dataset("Nutanix/cpp_unit_tests_benchmark_dataset")
# View dataset structure
DatasetDict({
train: Dataset({
features: ['ID', 'Language', 'Repository Name', 'File Name', 'File Path in Repository', 'File Path for Unit Test', 'Code', 'Unit Test - (Ground Truth)'],
num_rows: 2653
})
})
```
The dataset consists of the following columns:
- `ID`: A unique identifier for each entry in the dataset. [Example: "0"]
- `Language`: The programming language of the file. [Example: "cpp"]
- `Repository Name`: The name of the GitHub repository, formatted as organisation/repository. [Example: "google/googletest"]
- `File Name`: The base name of the file (without extension) where the code or test is located. [Example: "sample1"]
- `File Path in Repository`: The relative path to the file within the GitHub repository. [Example: "googletest/samples/sample1.cc"]
- `File Path for Unit Test`: The relative path to the unit test file, if applicable. [Example: "googletest/samples/sample1_unittest.cc"]
- `Code`: The code content of the file, excluding any documentation or comments.
- `Unit Test - (Ground Truth)`: The content of the unit test file that tests the code.
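As a quick illustration of how a record with this schema can be consumed, the sketch below pairs a code snippet with its reference unit test, as a test-generation evaluator would. The row is a toy stand-in, not an actual dataset entry:

```python
# Toy record mimicking the dataset schema (not a real entry).
row = {
    "ID": 0,
    "Language": "cpp",
    "Repository Name": "google/googletest",
    "File Name": "sample1",
    "File Path in Repository": "googletest/samples/sample1.cc",
    "File Path for Unit Test": "googletest/samples/sample1_unittest.cc",
    "Code": "int Factorial(int n) { return n <= 1 ? 1 : n * Factorial(n - 1); }",
    "Unit Test - (Ground Truth)": "TEST(FactorialTest, HandlesZero) { EXPECT_EQ(Factorial(0), 1); }",
}

def pair_for_generation(record):
    """Return the (code, reference test) pair used to benchmark a test generator."""
    return record["Code"], record["Unit Test - (Ground Truth)"]

code, reference_test = pair_for_generation(row)
print(row["Repository Name"], "->", row["File Path for Unit Test"])
```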
### Dataset Sources
<img src="https://cdn-uploads.huggingface.co/production/uploads/6658bb3acf5fc31e3a0bd24a/jE8b8wf1uV_boMaHxsmnP.png" width="800" height="600" />
- **Repository:** The dataset is sourced from the following GitHub repositories: [Latest Commit before 2 July 24]
- [Pytorch](https://github.com/pytorch/pytorch)
- [Abseil Absl](https://github.com/abseil/abseil-cpp)
- [Google Test](https://github.com/google/googletest)
- [Libphonenumber](https://github.com/google/libphonenumber)
- [Tensorstore](https://github.com/google/tensorstore)
- [TensorFlow](https://github.com/tensorflow/tensorflow)
- [Glog](https://github.com/google/glog/tree/master/src/glog)
- [Cel-cpp](https://github.com/google/cel-cpp/tree/master)
- [LevelDB](https://github.com/google/leveldb)
- [Libaddressinput](https://github.com/google/libaddressinput/tree/master)
- [Langsvr](https://github.com/google/langsvr/tree/main)
- [tsl](https://github.com/google/tsl.git)
- [cel-cpp](https://github.com/google/cel-cpp.git)
- [quiche](https://github.com/google/quiche.git)
### Some analysis of the dataset:
The box plots below depict the number of Code and Unit Test lines across the different repositories:
<img src="https://cdn-uploads.huggingface.co/production/uploads/6658bb3acf5fc31e3a0bd24a/E7aoKCvyRBjBR89sbetrR.png" width="800" height="600" />
<!-- The histogram visualizes the distribution of the number of lines in the "Code" and "Unit Test-(Ground Truth)" column of the dataset.
<div style="display: flex;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6658bb3acf5fc31e3a0bd24a/pm9VHIoIJgSBTWcmfXPOO.png" width="300" height="300" style="margin-right: 10px;" />
<img src="https://cdn-uploads.huggingface.co/production/uploads/6658bb3acf5fc31e3a0bd24a/Fo48OZiHeiVLQZ9yA5qch.png" width="300" height="300" />
</div>
The histogram visualizes the distribution of the number of tokens in the "Code" and "Unit Test-(Ground Truth)" column of the dataset.
<div style="display: flex;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6658bb3acf5fc31e3a0bd24a/UWb5i1bh5keq8hd7NdT6E.png" width="300" height="300" style="margin-right: 10px;" />
<img src="https://cdn-uploads.huggingface.co/production/uploads/6658bb3acf5fc31e3a0bd24a/bAgGzQGmrVrMxm-uHxffv.png" width="300" height="300" />
</div>
-->
## Uses
### Direct Use
This dataset is suitable for:
- Developing and evaluating automated testing tools.
- Analyzing code quality by comparing code with its corresponding unit tests.
- Training and testing LLMs for automated unit test generation.
## Dataset Creation
### Curation Rationale
The motivation for creating this dataset is to provide a comprehensive collection of code and unit tests from various reputable open-source projects. This can facilitate research and development in automated testing, code quality analysis, and LLMs for software engineering.
### Source Data
#### Data Collection and Processing
The data was collected from public GitHub repositories. The selection criteria included repositories with well-documented code and corresponding unit tests. The data was filtered and normalized to ensure consistency.
#### Who are the source data producers?
The source data producers are the contributors to the respective open-source GitHub repositories.
## Bias, Risks, and Limitations
The dataset may have biases based on the coding practices and testing methodologies of the included repositories. It may not cover all possible scenarios and edge cases in software testing.
## Citation
|
ZiyuG/SciVerse | ZiyuG | "2024-09-11T03:33:18Z" | 2 | 0 | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:visual-question-answering",
"language:en",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"multiple-choice",
"question-answering",
"visual-question-answering"
] | "2024-09-09T04:58:13Z" | ---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: test
data_files:
- split: test
path: QA.parquet
dataset_info:
- config_name: test
features:
- name: id
dtype: string
- name: subject
dtype: string
- name: image
dtype: string
- name: vision_dominant
dtype: string
- name: vision_only
dtype: string
- name: knowledge_lite
dtype: string
- name: knowledge_rich
dtype: string
- name: knowledge_professional
dtype: string
- name: question_vd
dtype: string
- name: choiceA
dtype: string
- name: choiceB
dtype: string
- name: choiceC
dtype: string
- name: choiceD
dtype: string
- name: choiceE
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
- name: question_zh
dtype: string
- name: explanation_zh
dtype: string
splits:
- name: test
num_examples: 1147
---
# Dataset Card for SciVerse
- [Dataset Description](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#leaderboard)
- [Citation](https://huggingface.co/datasets/ZiyuG/SciVerse/blob/main/README.md#citation)
## Dataset Description
SciVerse is a multi-modal scientific benchmark introduced to evaluate the professional scientific reasoning abilities of multi-modal large language models (MLLMs) across various disciplines. This benchmark contains **5,735** annotated multi-modal Q&A samples covering key science subjects including **physics**, **chemistry**, and **biology**. It contains six distinct subsets designed to test varying degrees of knowledge and visual-text interpretation, i.e., **Knowledge Lite, Knowledge Rich, Knowledge Professional, Vision Dominant, Text Only** and **Vision Only**.
- **Knowledge Lite**: basic problems with minimal necessary contextual information.
- **Knowledge Rich**: problems with scientific background information.
- **Knowledge Professional**: problems with advanced, professional-level scientific information.
- **Vision Dominant**: problems that prioritize visual cues over textual content, to evaluate visual comprehension.
- **Text Only**: problems with only textual information.
- **Vision Only**: problems with only visual information, with the textual problem rendered within the image.
SciVerse aims to evaluate both MLLMs' ability to reason with pre-existing scientific knowledge and their sensitivity to the knowledge content stipulated in the questions. This not only measures how effectively MLLMs can utilize their inherent scientific understanding, but also assesses their ability to integrate and reason over given scientific knowledge in real-world scenarios. Unlike existing benchmarks, which often overlook the depth and multi-modal nature of scientific understanding, SciVerse addresses the complex challenges encountered in actual scientific analysis, providing a nuanced analysis of MLLMs' strengths and limitations in both knowledge integration and practical application.
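Per the schema above, each record carries one problem variant per subset column (`knowledge_lite`, `vision_dominant`, and so on), so selecting a subset reduces to picking the matching column. A minimal sketch with a toy record — the question texts here are invented for illustration:

```python
# Map subset display names to the card's column names.
SUBSET_COLUMNS = {
    "Knowledge Lite": "knowledge_lite",
    "Knowledge Rich": "knowledge_rich",
    "Knowledge Professional": "knowledge_professional",
    "Vision Dominant": "vision_dominant",
    "Vision Only": "vision_only",
}

def problem_for_subset(record, subset):
    """Return the problem statement of one record for the chosen subset."""
    return record[SUBSET_COLUMNS[subset]]

# Toy record with invented question texts (real records come from QA.parquet).
record = {
    "knowledge_lite": "What force acts on the block?",
    "knowledge_rich": "Given Newton's second law, what force acts on the block?",
    "knowledge_professional": "Using F = ma and the friction coefficient, find the net force.",
    "vision_dominant": "From the diagram, what force acts on the block?",
    "vision_only": "<question rendered inside the image>",
}
print(problem_for_subset(record, "Knowledge Lite"))
```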
## Paper Information
- Code: https://github.com/ZiyuGuo99/SciVerse
- Project: https://sciverse-cuhk.github.io/
- Dataset Overview: https://sciverse-cuhk.github.io/#overview
- Leaderboard: https://sciverse-cuhk.github.io/#leaderboard
## Dataset Examples
***Coming soon...***
## Leaderboard
### Contributing to the Leaderboard
π¨ The [Leaderboard](https://sciverse-cuhk.github.io/#leaderboard) is continuously being updated.
The evaluation instructions and tools will be released soon. For now, please send your results on the test set to this email: ziyuguo@link.cuhk.edu.hk
## Citation
If you find **SciVerse** useful for your research and applications, please kindly cite using this BibTeX:
```latex
@article{sciverse,
title={SciVerse},
author={Guo, Ziyu and Zhang, Renrui and Chen, Hao and Gao, Jialin and Li, Hongsheng and Heng, Pheng-Ann},
url={https://sciverse-cuhk.github.io/},
journal={arXiv preprint},
year={2024}
}
``` |
argilla-warehouse/apigen-smollm-trl-FC | argilla-warehouse | "2024-11-21T12:25:31Z" | 2 | 0 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"arxiv:2406.18518",
"region:us",
"synthetic",
"function-calling",
"code",
"distilabel"
] | [
"text-generation"
] | "2024-10-17T08:23:19Z" | ---
dataset_info:
features:
- name: answers
dtype: string
- name: query
dtype: string
- name: id
dtype: int64
- name: tools
dtype: string
- name: func_name
dtype: string
- name: func_desc
dtype: string
- name: hash_id
dtype: string
- name: model_name
dtype: string
- name: origin
dtype: string
splits:
- name: train
num_bytes: 165059162
num_examples: 109402
download_size: 60235594
dataset_size: 165059162
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
tags:
- synthetic
- function-calling
- code
- distilabel
size_categories:
- 100K<n<1M
---
# Dataset card for argilla-warehouse/apigen-smollm-trl-FC
This dataset is a merge of [argilla/Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1)
and [Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), and was prepared for training using the script
`prepare_for_sft.py` that can be found in the repository files.
## References
```
@article{liu2024apigen,
title={APIGen: Automated Pipeline for Generating Verifiable and Diverse Function-Calling Datasets},
author={Liu, Zuxin and Hoang, Thai and Zhang, Jianguo and Zhu, Ming and Lan, Tian and Kokane, Shirley and Tan, Juntao and Yao, Weiran and Liu, Zhiwei and Feng, Yihao and others},
journal={arXiv preprint arXiv:2406.18518},
year={2024}
}
``` |
Madjakul/HALvest-Contrastive-Raw | Madjakul | "2024-10-19T08:57:57Z" | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-19T08:50:46Z" | ---
dataset_info:
features:
- name: halid
dtype: string
- name: lang
dtype: string
- name: domain
sequence: string
- name: timestamp
dtype: string
- name: year
dtype: string
- name: url
dtype: string
- name: text
dtype: string
- name: size
dtype: int64
- name: authorids
sequence: string
- name: affiliations
sequence: string
splits:
- name: train
num_bytes: 22258039817.522587
num_examples: 361863
download_size: 9390538695
dataset_size: 22258039817.522587
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
selmaXI/cnn_dailymail-llama2-1k | selmaXI | "2024-11-04T15:26:10Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-04T15:26:09Z" | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8848392
num_examples: 1000
download_size: 5364766
dataset_size: 8848392
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
5CD-AI/Viet-docvqa_test_subsampled-Gemini | 5CD-AI | "2024-11-21T15:42:32Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-09T16:53:03Z" | ---
dataset_info:
features:
- name: questionId
dtype: string
- name: query
dtype: string
- name: question_types
dtype: string
- name: image
dtype: image
- name: docId
dtype: int64
- name: image_filename
dtype: string
- name: page
dtype: string
- name: answer
dtype: string
- name: data_split
dtype: string
- name: source
dtype: string
- name: vi_image
dtype: image
- name: original_text
dtype: string
- name: translated_text
dtype: string
splits:
- name: test
num_bytes: 262558877.0
num_examples: 500
download_size: 247108892
dataset_size: 262558877.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
data-is-better-together/image_preferences_results | data-is-better-together | "2024-11-10T21:42:07Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-11-10T21:42:05Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for image_preferences_results
This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.Dataset.from_hub("DIBT/image_preferences_results")
```
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
## Using this dataset with `datasets`
To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/image_preferences_results")
```
This will only load the records of the dataset, but not the Argilla settings.
## Dataset Structure
This dataset repo contains:
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
### Fields
The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| images | images | custom | True | |
### Questions
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| preference | preference | label_selection | True | Which image do you prefer given the prompt? | ['image_1', 'image_2', 'both_good', 'both_bad'] |
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"_server_id": "30403740-6a5e-48d7-839e-dcea7ad0dfda",
"fields": {
"images": {
"image_1": "https://huggingface.co/datasets/DIBT/img_prefs_style/resolve/main/artifacts/image_generation_0/images/b172c7078a07c159f5f8da7bd1220ddd.jpeg",
"image_2": "https://huggingface.co/datasets/DIBT/img_prefs_style/resolve/main/artifacts/image_generation_2/images/b172c7078a07c159f5f8da7bd1220ddd.jpeg",
"prompt": "8-bit intellect, pixelated wisdom, retro digital brain, vintage game insight, soft neon glow, intricate pixel art, vibrant color palette, nostalgic ambiance"
}
},
"id": "f5224be1-2e1b-428e-94b1-9c0f397092fa",
"metadata": {
"category": "Animation",
"evolution": "quality",
"model_1": "schnell",
"model_2": "dev",
"sub_category": "Pixel Art"
},
"responses": {
"preference": [
{
"user_id": "c53e62ab-d792-4854-98f6-593b2ffb55bc",
"value": "image_2"
},
{
"user_id": "b1ab2cdd-29b8-4cf9-b6e0-7543589d21a3",
"value": "image_2"
},
{
"user_id": "da3e5871-920c-44da-8c44-1e94260c581e",
"value": "both_good"
},
{
"user_id": "b31dd1ed-78b6-4d50-8f11-7ce32ba17d64",
"value": "image_2"
},
{
"user_id": "6b984f66-86b3-421e-a32c-cd3592ee27a1",
"value": "both_bad"
}
]
},
"status": "completed",
"suggestions": {},
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"_server_id": "30403740-6a5e-48d7-839e-dcea7ad0dfda",
"category": "Animation",
"evolution": "quality",
"id": "f5224be1-2e1b-428e-94b1-9c0f397092fa",
"images": {
"image_1": "https://huggingface.co/datasets/DIBT/img_prefs_style/resolve/main/artifacts/image_generation_0/images/b172c7078a07c159f5f8da7bd1220ddd.jpeg",
"image_2": "https://huggingface.co/datasets/DIBT/img_prefs_style/resolve/main/artifacts/image_generation_2/images/b172c7078a07c159f5f8da7bd1220ddd.jpeg",
"prompt": "8-bit intellect, pixelated wisdom, retro digital brain, vintage game insight, soft neon glow, intricate pixel art, vibrant color palette, nostalgic ambiance"
},
"model_1": "schnell",
"model_2": "dev",
"preference.responses": [
"image_2",
"image_2",
"both_good",
"image_2",
"both_bad"
],
"preference.responses.status": [
"submitted",
"submitted",
"submitted",
"submitted",
"submitted"
],
"preference.responses.users": [
"c53e62ab-d792-4854-98f6-593b2ffb55bc",
"b1ab2cdd-29b8-4cf9-b6e0-7543589d21a3",
"da3e5871-920c-44da-8c44-1e94260c581e",
"b31dd1ed-78b6-4d50-8f11-7ce32ba17d64",
"6b984f66-86b3-421e-a32c-cd3592ee27a1"
],
"status": "completed",
"sub_category": "Pixel Art"
}
```
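The `preference.responses` list carries one label per annotator. A common way to collapse it into a single record-level label is a majority vote — a minimal sketch, using the five responses from the record above (the tie-breaking rule here is an assumption, not something Argilla prescribes):

```python
from collections import Counter

def majority_vote(responses):
    """Return the most common label; ties resolve to the label seen first."""
    counts = Counter(responses)
    return counts.most_common(1)[0][0]

# The annotator responses from the example record above.
votes = ["image_2", "image_2", "both_good", "image_2", "both_bad"]
print(majority_vote(votes))  # image_2 (3 of 5 annotators)
```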
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
tattabio/OG_prot90 | tattabio | "2024-11-18T23:05:04Z" | 2 | 1 | [
"license:cc-by-sa-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-11T19:50:40Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
splits:
- name: train
num_bytes: 31071554280
num_examples: 85007726
download_size: 27610510142
dataset_size: 31071554280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-4.0
---
# OG_prot90: An Open Genomic Protein Dataset
The `OG_prot90` dataset is a protein-only dataset, created by clustering the Open Genomic dataset ([`OG`](https://huggingface.co/datasets/tattabio/OG)) at 90% sequence identity.
MMseqs2 linclust (Steinegger and Söding 2018) was used to cluster all 400M protein sequences from the OG dataset, resulting in 85M protein sequences.
Sequences were clustered at 90% sequence identity and 90% sequence coverage.
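MMseqs2 handles the clustering at this scale; purely as a toy illustration of what a 90% identity threshold means (not of the actual clustering algorithm), here is pairwise identity over two equal-length sequences:

```python
def percent_identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("this toy version only compares equal-length sequences")
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

# Two 10-residue sequences differing at one position -> 90% identity.
print(percent_identity("MKTAYIAKQR", "MKTAYIAKQQ"))  # 0.9
```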
## Use
```python
import datasets
ds = datasets.load_dataset('tattabio/OG_prot90')
```
To preview the dataset without downloading, load in streaming mode:
```python
import datasets
ds = datasets.load_dataset('tattabio/OG_prot90', streaming=True)['train']
print(next(iter(ds)))
```
## Citation
**BibTeX:**
```
@article{Cornman2024,
title = {The OMG dataset: An Open MetaGenomic corpus for mixed-modality genomic language modeling},
url = {https://www.biorxiv.org/content/early/2024/08/17/2024.08.14.607850},
DOI = {10.1101/2024.08.14.607850},
publisher = {Cold Spring Harbor Laboratory},
author = {Cornman, Andre and West-Roberts, Jacob and Camargo, Antonio Pedro and Roux, Simon and Beracochea, Martin and Mirdita, Milot and Ovchinnikov, Sergey and Hwang, Yunha},
year = {2024},
}
``` |
iszhaoxin/MCEval8K | iszhaoxin | "2024-11-18T07:00:00Z" | 2 | 0 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-11-18T07:00:00Z" | ---
license: cc-by-4.0
---
|
bizb0630/alpaca-cleaned_uz | bizb0630 | "2024-11-18T18:51:56Z" | 2 | 0 | [
"task_categories:text-generation",
"language:uz",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"instruction-finetuning"
] | [
"text-generation"
] | "2024-11-18T18:45:34Z" | ---
license: cc-by-4.0
language:
- uz
tags:
- instruction-finetuning
pretty_name: Alpaca-Cleaned-Uz
task_categories:
- text-generation
---
### Dataset Summary
This dataset is a translation of the [alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset into Uzbek (Latin), using the GPT-4o mini API. |
kaiwenw/nov18_oasst_pref_jdpo_llama8b_0.9_n_9_temp_0.9 | kaiwenw | "2024-11-19T02:16:13Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T02:16:09Z" | ---
dataset_info:
features:
- name: judge_prompt
dtype: string
- name: judge_responses
sequence: string
- name: prefs
sequence: string
- name: vote
dtype: string
- name: user_prompt
dtype: string
- name: chosen_response
dtype: string
- name: reject_response
dtype: string
splits:
- name: train_chosen_first
num_bytes: 90657202
num_examples: 7019
- name: train_reject_first
num_bytes: 90468846
num_examples: 7019
- name: validation_chosen_first
num_bytes: 4573021
num_examples: 355
- name: validation_reject_first
num_bytes: 4548444
num_examples: 355
download_size: 71185658
dataset_size: 190247513
configs:
- config_name: default
data_files:
- split: train_chosen_first
path: data/train_chosen_first-*
- split: train_reject_first
path: data/train_reject_first-*
- split: validation_chosen_first
path: data/validation_chosen_first-*
- split: validation_reject_first
path: data/validation_reject_first-*
---
|
emilyphamm/quanloccumon | emilyphamm | "2024-11-19T03:21:51Z" | 2 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-19T03:21:51Z" | ---
license: mit
---
|
Yotofu/so100_shoes | Yotofu | "2024-11-19T04:36:12Z" | 2 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"so100_stereo",
"tutorial"
] | [
"robotics"
] | "2024-11-19T04:35:53Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- so100_stereo
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
corniclr25/stack-mined-go-v1 | corniclr25 | "2024-11-19T07:18:23Z" | 2 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T06:59:31Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: document
dtype: string
- name: negatives
sequence: string
- name: metadata
struct:
- name: objective
struct:
- name: paired
sequence: 'null'
- name: self
sequence: 'null'
- name: triplet
sequence:
sequence: string
splits:
- name: train
num_bytes: 67930273171
num_examples: 7000000
download_size: 24368886917
dataset_size: 67930273171
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aonmao/hcj_videos | aonmao | "2024-11-19T07:00:26Z" | 2 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-19T07:00:26Z" | ---
license: mit
---
|
corniclr25/stack-mined-java-v1 | corniclr25 | "2024-11-19T07:55:44Z" | 2 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T07:18:29Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: document
dtype: string
- name: negatives
sequence: string
- name: metadata
struct:
- name: objective
struct:
- name: paired
sequence: 'null'
- name: self
sequence: 'null'
- name: triplet
sequence:
sequence: string
splits:
- name: train
num_bytes: 142451950689
num_examples: 16000000
download_size: 50443007078
dataset_size: 142451950689
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
corniclr25/stack-mined-ruby-v1 | corniclr25 | "2024-11-19T09:00:33Z" | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T08:57:24Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: document
dtype: string
- name: negatives
sequence: string
- name: metadata
struct:
- name: objective
struct:
- name: paired
sequence: 'null'
- name: self
sequence: 'null'
- name: triplet
sequence:
sequence: string
splits:
- name: train
num_bytes: 5576970404
num_examples: 890029
download_size: 2027859709
dataset_size: 5576970404
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
corniclr25/stack-mined-php-v1 | corniclr25 | "2024-11-19T09:19:22Z" | 2 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T09:00:35Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: document
dtype: string
- name: negatives
sequence: string
- name: metadata
struct:
- name: objective
struct:
- name: paired
sequence: 'null'
- name: self
sequence: 'null'
- name: triplet
sequence:
sequence: string
splits:
- name: train
num_bytes: 31467350418
num_examples: 3000000
download_size: 11980582978
dataset_size: 31467350418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
matthewdelorenzo/dpo_verilog_buggy | matthewdelorenzo | "2024-11-19T09:59:22Z" | 2 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T09:52:45Z" | ---
license: mit
---
|
cymen-arfor/evals-btb-whisper-large-v2-ft-ca-25awr | cymen-arfor | "2024-11-19T10:32:38Z" | 2 | 0 | [
"language:cy",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"speech recognition"
] | null | "2024-11-19T10:32:18Z" | ---
language:
- cy
license: cc0-1.0
tags:
- speech recognition
metrics:
- wer
- cer
---
__Model__: cymen-arfor/whisper-large-v2-ft-ca-25awr
__Test Set__: DewiBrynJones/banc-trawsgrifiadau-bangor-clean
__Split__: test
------------------------------------------------------------------------------------------------------------------------------------
__WER: 52.721032__
__CER: 21.859754__
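For reference, WER is the word-level edit distance between hypothesis and reference divided by the reference word count, and CER is the analogous character-level ratio. A minimal, dependency-free sketch — real toolkits such as jiwer also normalize the text before scoring, which this does not:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences, via dynamic programming."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

print(wer("mae hi yn braf", "mae yn braf"))  # 0.25: one deletion over four words
```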
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_964e979c-7acc-4bd6-a1c0-3ec9a63dcfd4 | argilla-internal-testing | "2024-11-19T10:56:12Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T10:56:12Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_51487f0c-5751-41d3-8bf8-46722af3818a | argilla-internal-testing | "2024-11-19T10:56:15Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T10:56:14Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b8fa2bca-5688-4a50-bd61-dc4fae6ee848 | argilla-internal-testing | "2024-11-19T10:56:37Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T10:56:36Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_1a5b216b-24ae-49da-bc66-c35add8803fe | argilla-internal-testing | "2024-11-19T10:56:48Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T10:56:46Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_798766d7-b28c-4087-9686-cef17204655a | argilla-internal-testing | "2024-11-19T10:57:54Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T10:57:53Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CGIAR/AgricultureVideosQnA2 | CGIAR | "2024-11-19T11:36:15Z" | 2 | 0 | [
"task_categories:question-answering",
"language:or",
"language:hi",
"license:apache-2.0",
"size_categories:1K<n<10K",
"region:us",
"agriculture",
"videoqna",
"videos"
] | [
"question-answering"
] | "2024-11-19T11:35:25Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- or
- hi
tags:
- agriculture
- videoqna
- videos
size_categories:
- 1K<n<10K
---
The dataset is in XLS format, with one sheet per language. It is primarily used for training and as ground truth for answers that can be generated for agriculture-related queries from the videos.
Each sheet lists video URLs (YouTube links) together with the questions that can be asked, the corresponding answers that can be generated from the videos, the source of the information in each answer, and time stamps.
The source of the information can be one of:
- Transcript: based on what one hears
- Object: based on an object shown in the video
- Scene description: based on what is described
- Text overlay: based on a text overlay shown in the video
Corresponding time stamps are also provided.
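Since the XLS file stores one sheet per language, a typical first step is to read all sheets and tag each row with its language. A minimal sketch with pandas (the filename and column names here are assumptions, not part of this card):

```python
import pandas as pd

def combine_sheets(sheets):
    """Combine per-language sheet DataFrames into one table.

    `sheets` maps sheet name -> DataFrame, which is the shape returned by
    pd.read_excel(path, sheet_name=None). Each row is tagged with the
    language taken from its sheet name.
    """
    frames = []
    for lang, df in sheets.items():
        tagged = df.copy()
        tagged["language"] = lang  # sheet name doubles as the language label
        frames.append(tagged)
    return pd.concat(frames, ignore_index=True)

# In practice (hypothetical filename):
# combined = combine_sheets(pd.read_excel("agriculture_videos_qna.xls", sheet_name=None))
```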
The videos are in the following languages:
Hindi
Oriya |
gowtham28/math_qa_pairs8thclass1 | gowtham28 | "2024-11-19T11:48:59Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T11:48:58Z" | ---
dataset_info:
features:
- name: conversations
list:
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 43387
num_examples: 1
download_size: 22039
dataset_size: 43387
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_3ea49f74-55d7-487f-800e-bc166419c15e | argilla-internal-testing | "2024-11-19T11:58:04Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T11:58:03Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_27e84950-dff9-478a-b577-b2a84c96d4b7 | argilla-internal-testing | "2024-11-19T11:58:05Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T11:58:04Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b1efc119-f3d6-45fe-ace0-14ad51d929ba | argilla-internal-testing | "2024-11-19T11:58:21Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T11:58:19Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_5efac0fb-a660-49d2-89a9-573165071686 | argilla-internal-testing | "2024-11-19T11:58:23Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T11:58:22Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_c07d0b4f-39b0-4ff5-a784-78ad9f7636fa | argilla-internal-testing | "2024-11-19T12:18:08Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:18:07Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_42b70cd6-af21-4747-bd92-c4b4d9e34dd5 | argilla-internal-testing | "2024-11-19T12:18:19Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:18:18Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_70cd6bd2-35b6-408b-bea4-b94d5195bd39 | argilla-internal-testing | "2024-11-19T12:18:33Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:18:33Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_e02aa976-e1e3-400c-8d69-d9d481328c98 | argilla-internal-testing | "2024-11-19T12:18:56Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:18:55Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_dcafd96c-5608-4314-ad7e-d7290d47f7a9 | argilla-internal-testing | "2024-11-19T12:47:10Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:47:10Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_f8763579-32dc-4d6d-b39d-8841dc999678 | argilla-internal-testing | "2024-11-19T12:47:13Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:47:12Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b8692b64-f8cf-4ee1-a0bf-021748469668 | argilla-internal-testing | "2024-11-19T12:47:20Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:47:19Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_0bb11bf2-5689-456e-83a2-f9ce282dd46d | argilla-internal-testing | "2024-11-19T12:47:30Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:47:28Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liyucheng/annotation | liyucheng | "2024-11-19T12:51:31Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:51:26Z" | ---
dataset_info:
features:
- name: entry_id
dtype: string
- name: published
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: primary_category
dtype: string
- name: categories
sequence: string
- name: text
dtype: string
- name: instruction_type
dtype: string
- name: section
dtype: string
splits:
- name: train
num_bytes: 134401099
num_examples: 3084
download_size: 53208021
dataset_size: 134401099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_0159f511-b815-456f-aa4a-3d68742ef322 | argilla-internal-testing | "2024-11-19T12:57:28Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:57:27Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_3b010dac-2ebd-4cb6-a4c7-0eb31cadf39c | argilla-internal-testing | "2024-11-19T12:57:37Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:57:36Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_8a69bc58-ed75-4f0e-ae07-ce474106a6e5 | argilla-internal-testing | "2024-11-19T12:57:43Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:57:42Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_1338483e-61c7-4a06-8df2-8c21becc9944 | argilla-internal-testing | "2024-11-19T12:57:51Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:57:50Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_997d3bfe-d3eb-4005-ad5b-00a79f0f89e1 | argilla-internal-testing | "2024-11-19T12:57:57Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T12:57:56Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_5ecedb73-3c75-4788-afa8-2f78b5cf9189 | argilla-internal-testing | "2024-11-19T13:33:30Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:33:29Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_ff4ff1fd-78f6-4516-9c81-efb9f84e5f3e | argilla-internal-testing | "2024-11-19T13:33:38Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:33:38Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_e5f613f2-d469-4b5c-8c17-2a10a64f17af | argilla-internal-testing | "2024-11-19T13:33:41Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:33:40Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_783950c7-fec9-4516-9874-68e5c5a46c09 | argilla-internal-testing | "2024-11-19T13:33:42Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:33:42Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b5dc1623-9301-461a-b19d-1cc4ccfeb26f | argilla-internal-testing | "2024-11-19T13:33:47Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:33:46Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_838e8dd4-886e-4b23-8bb7-3fe54ab39854 | argilla-internal-testing | "2024-11-19T13:33:46Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:33:46Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_ce8310bc-8477-4d21-a7f4-950d50909c80 | argilla-internal-testing | "2024-11-19T13:33:48Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:33:48Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_fc7da037-3876-4784-9f18-f03187e0569f | argilla-internal-testing | "2024-11-19T13:33:50Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:33:49Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_654d21f1-543c-499f-80a5-361bbd4a6e85 | argilla-internal-testing | "2024-11-19T13:34:08Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:34:07Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_cf38bfa3-3135-4a77-b1e8-77c0a2456a58 | argilla-internal-testing | "2024-11-19T13:34:10Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:34:09Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_55934b45-b3a6-4524-bcfd-ff7fc2af1c2c | argilla-internal-testing | "2024-11-19T13:49:54Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:49:53Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_d90528e4-dcf3-4671-b983-7a2f1f254652 | argilla-internal-testing | "2024-11-19T13:50:01Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:50:00Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_1943d7a5-3db0-4782-af81-91e8a47a6285 | argilla-internal-testing | "2024-11-19T13:50:17Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:50:15Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_c622e757-8090-482a-862c-8a5f067da2c6 | argilla-internal-testing | "2024-11-19T13:50:16Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:50:15Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_0d8d96d1-a920-4a67-8714-c3b01e3c6833 | argilla-internal-testing | "2024-11-19T13:50:34Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T13:50:32Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_f1984007-58f6-4742-af65-ee04b38336de | argilla-internal-testing | "2024-11-19T14:23:33Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T14:23:31Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_9ad7ed6e-23a6-4db1-90dd-8a272002471d | argilla-internal-testing | "2024-11-19T14:23:34Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T14:23:33Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_f00bea88-b7df-4a4e-afb8-cddac9996326 | argilla-internal-testing | "2024-11-19T14:23:36Z" | 2 | 0 | [
"region:us"
] | null | "2024-11-19T14:23:35Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_dfc39812-9224-4ad9-b014-20a216ef62a4 | argilla-internal-testing | "2024-11-19T14:23:44Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T14:23:43Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VargheseP/palgo_ellipse_new_test | VargheseP | "2024-11-19T14:43:04Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T14:41:59Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption_basic
dtype: string
- name: caption_artsy
dtype: string
- name: caption_wt_parts
dtype: string
- name: conditioning_image
dtype: image
- name: mask_image
dtype: image
splits:
- name: test
num_bytes: 193616843.125
num_examples: 4655
download_size: 117620079
dataset_size: 193616843.125
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
VargheseP/palgo_ellipse_new_validation | VargheseP | "2024-11-19T14:43:43Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T14:43:08Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption_basic
dtype: string
- name: caption_artsy
dtype: string
- name: caption_wt_parts
dtype: string
- name: conditioning_image
dtype: image
- name: mask_image
dtype: image
splits:
- name: validation
num_bytes: 125880976.15
num_examples: 3025
download_size: 78553180
dataset_size: 125880976.15
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_8ae4d5df-2c89-44f0-96f5-19e833ebbb48 | argilla-internal-testing | "2024-11-19T16:04:05Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:03:48Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_2a23a0de-e95c-4f2d-af09-a631ec1bf2f7 | argilla-internal-testing | "2024-11-19T16:04:17Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:04:13Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_4e82c9d5-3a71-4675-89dc-2d62527bc666 | argilla-internal-testing | "2024-11-19T16:04:48Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:04:47Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_6e0860cd-b565-4736-8d49-63483b8184b9 | argilla-internal-testing | "2024-11-19T16:05:02Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:05:00Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_c437a5a5-86cc-47ef-8903-58bc65f488b3 | argilla-internal-testing | "2024-11-19T16:05:01Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:05:00Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_d56f0769-a2bc-4a89-b16a-d96db7f5f2dc | argilla-internal-testing | "2024-11-19T16:38:59Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:38:59Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_e156eb23-9eb5-41cb-a911-35a2fe965b84 | argilla-internal-testing | "2024-11-19T16:39:03Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:39:02Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_f870bc22-b5f0-4e66-99a5-69bf8d1af066 | argilla-internal-testing | "2024-11-19T16:39:04Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:39:02Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_c5e53f92-f34a-4538-bc10-e3c4da555333 | argilla-internal-testing | "2024-11-19T16:39:07Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:39:06Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_88fd0de0-b7d0-4004-9c77-49a2ebf44139 | argilla-internal-testing | "2024-11-19T16:39:13Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T16:39:12Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
enjalot/ls-fineweb-edu-100k | enjalot | "2024-11-19T16:41:02Z" | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"latent-scope"
] | null | "2024-11-19T16:40:14Z" |
---
tags:
- latent-scope
---
# ls-fineweb-edu-100k
This dataset contains the files necessary to view it in [latentscope](https://github.com/enjalot/latent-scope).
The files in `latentscope` are used by the app for viewing. You can also preview the scope TODO
Total size of dataset files: 1.3 GB
TODO: download script inside latentscope
 |
maranovak3/joe-small | maranovak3 | "2024-11-19T17:40:19Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T17:40:18Z" | ---
dataset_info:
features:
- name: text
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 48435
num_examples: 31
download_size: 28693
dataset_size: 48435
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
2203A51529/Indian_language_community_chatbot.csv | 2203A51529 | "2024-11-19T18:03:58Z" | 2 | 0 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-generation",
"language:te",
"language:hi",
"language:ta",
"language:ml",
"size_categories:1K<n<10K",
"region:us"
] | [
"question-answering",
"text-classification",
"text-generation"
] | "2024-11-19T18:00:08Z" | ---
task_categories:
- question-answering
- text-classification
- text-generation
language:
- te
- hi
- ta
- ml
pretty_name: sony
size_categories:
- 1K<n<10K
--- |
dooder35/whereisterminal | dooder35 | "2024-11-19T18:10:19Z" | 2 | 0 | [
"license:other",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-19T18:06:58Z" | ---
license: other
license_name: question
license_link: https://huggingface.co/new-dataset
---
|
mlfoundations-dev/airoboros_trivia_instructions | mlfoundations-dev | "2024-11-19T20:59:59Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T20:59:55Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2755950
num_examples: 20009
download_size: 1320909
dataset_size: 2755950
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vinesmsuic/SwissProtCLAP_random_10k_gpt4o | vinesmsuic | "2024-11-19T21:11:59Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T21:11:58Z" | ---
dataset_info:
features:
- name: UniProt ID
dtype: string
- name: Protein Sequence
dtype: string
- name: gt_desc
dtype: string
- name: structure_info
dtype: string
- name: functional_info
dtype: string
splits:
- name: train
num_bytes: 17074568
num_examples: 10000
download_size: 10103847
dataset_size: 17074568
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/airoboros_joke_instructions | mlfoundations-dev | "2024-11-19T21:46:12Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-19T21:46:10Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 6908746
num_examples: 64328
download_size: 1599912
dataset_size: 6908746
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
self-generate/ds_coder_pos_reflct_adamw_iter1_sppo_hard_new_cn_mining_oj_iter1-binarized | self-generate | "2024-11-20T00:24:17Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T00:24:15Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: chosen_probs
dtype: float64
- name: chosen_probs_win
dtype: float64
- name: chosen_probs_lose
dtype: float64
splits:
- name: train
num_bytes: 13578497
num_examples: 4029
download_size: 5584385
dataset_size: 13578497
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_pos_reflct_adamw_iter1_sppo_hard_new_cn_mining_oj_iter1-binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
self-generate/ds_coder_pos_reflct_adamw_iter1_sppo_hard_new_cn_mining_oj_iter1-full_response_traceback | self-generate | "2024-11-20T00:24:19Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T00:24:17Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: test
dtype: string
- name: tag
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text_prompt
dtype: string
- name: text_chosen
dtype: string
- name: text_rejected
dtype: string
- name: generate_0
dtype: string
- name: generate_0_score
dtype: int64
- name: traceback_0
dtype: string
- name: generate_1
dtype: string
- name: generate_1_score
dtype: int64
- name: traceback_1
dtype: string
- name: generate_2
dtype: string
- name: generate_2_score
dtype: int64
- name: traceback_2
dtype: string
- name: generate_3
dtype: string
- name: generate_3_score
dtype: int64
- name: traceback_3
dtype: string
- name: generate_4
dtype: string
- name: generate_4_score
dtype: int64
- name: traceback_4
dtype: string
- name: generate_5
dtype: string
- name: generate_5_score
dtype: int64
- name: traceback_5
dtype: string
- name: generate_6
dtype: string
- name: generate_6_score
dtype: int64
- name: traceback_6
dtype: string
- name: generate_7
dtype: string
- name: generate_7_score
dtype: int64
- name: traceback_7
dtype: string
- name: probability
sequence:
sequence: float64
- name: rm_scores
sequence: int64
splits:
- name: train
num_bytes: 59285191
num_examples: 4029
download_size: 22114021
dataset_size: 59285191
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_pos_reflct_adamw_iter1_sppo_hard_new_cn_mining_oj_iter1-full_response_traceback"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
self-generate/ds_coder_pos_reflct_adamw_iter1_sppo_hard_new_cn_mining_oj_iter1-binarized_all_pairs | self-generate | "2024-11-20T00:24:20Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T00:24:19Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 43956401
num_examples: 13284
download_size: 10857438
dataset_size: 43956401
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_pos_reflct_adamw_iter1_sppo_hard_new_cn_mining_oj_iter1-binarized_all_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RyanYr/reflect_llama318Bit_math-test_t2 | RyanYr | "2024-11-20T01:46:47Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T01:46:46Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
splits:
- name: train
num_bytes: 3404290
num_examples: 500
download_size: 1174961
dataset_size: 3404290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
8803-DML-Upscaling/wawa_manifold | 8803-DML-Upscaling | "2024-11-20T02:10:19Z" | 2 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-20T02:08:44Z" | ---
license: mit
---
|
PROCIT-SANDBOX/training_dataset_ner_0.2 | PROCIT-SANDBOX | "2024-11-20T04:00:01Z" | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T03:59:57Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
- name: chunk_tags
sequence:
class_label:
names:
'0': O
'1': B-ADJP
'2': I-ADJP
'3': B-ADVP
'4': I-ADVP
'5': B-CONJP
'6': I-CONJP
'7': B-INTJ
'8': I-INTJ
'9': B-LST
'10': I-LST
'11': B-NP
'12': I-NP
'13': B-PP
'14': I-PP
'15': B-PRT
'16': I-PRT
'17': B-SBAR
'18': I-SBAR
'19': B-UCP
'20': I-UCP
'21': B-VP
'22': I-VP
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 28408069
num_examples: 158225
- name: validation
num_bytes: 4417979
num_examples: 21273
- name: test
num_bytes: 4267858
num_examples: 21480
download_size: 5832528
dataset_size: 37093906
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_1sec_PERFECT_chunk_5 | HamdanXI | "2024-11-21T10:07:40Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T09:19:54Z" | ---
dataset_info:
features:
- name: audio_clip
sequence: float64
- name: layer0_prediction
sequence: float64
- name: predicted_text
dtype: string
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 238160498
num_examples: 18
download_size: 185149686
dataset_size: 238160498
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_1sec_PERFECT_chunk_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
oserikov/pmi | oserikov | "2024-11-20T09:21:21Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T09:21:18Z" | ---
dataset_info:
features:
- name: all
struct:
- name: interlinear-text
list:
- name: item
struct:
- name: source
dtype: string
- name: paragraph
list:
- name: item
struct:
- name: speaker
dtype: string
- name: phrase
list:
- name: item
struct:
- name: ft
dtype: string
- name: id
dtype: string
- name: participant
dtype: string
- name: timestamp
sequence: string
- name: word
list:
list:
- name: item
struct:
- name: grammar_tags
sequence: string
- name: translation
sequence: string
- name: txt
dtype: string
- name: morph
list:
- name: item
struct:
- name: gls
dtype: string
- name: id
dtype: string
- name: txt
dtype: string
- name: item
dtype: 'null'
splits:
- name: train
num_bytes: 29025
num_examples: 1
download_size: 23237
dataset_size: 29025
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jangkimo/searchdata_small | jangkimo | "2024-11-20T10:00:36Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T10:00:21Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 89202417
num_examples: 52814
download_size: 45132961
dataset_size: 89202417
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RAG23/dataset_FAQ | RAG23 | "2024-11-20T10:22:14Z" | 2 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T10:21:28Z" | ---
license: mit
---
|
knkrn5/wealthpsychology-raw-data | knkrn5 | "2024-11-20T10:28:25Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-20T10:28:09Z" | ---
dataset_info:
features:
- name: wp nav
dtype: string
- name: wp nav_link
dtype: string
splits:
- name: wp_pages
num_bytes: 413
num_examples: 7
- name: wp_home
num_bytes: 1351
num_examples: 10
- name: blog_categories
num_bytes: 96
num_examples: 12
- name: fin_calculators
num_bytes: 48
num_examples: 6
- name: fin_quizzes
num_bytes: 64
num_examples: 8
- name: contact_info
num_bytes: 40
num_examples: 5
- name: about_us
num_bytes: 24
num_examples: 3
- name: our_team
num_bytes: 40
num_examples: 5
- name: our_plan
num_bytes: 128
num_examples: 16
download_size: 11115
dataset_size: 2204
configs:
- config_name: default
data_files:
- split: wp_pages
path: data/wp_pages-*
- split: wp_home
path: data/wp_home-*
- split: blog_categories
path: data/blog_categories-*
- split: fin_calculators
path: data/fin_calculators-*
- split: fin_quizzes
path: data/fin_quizzes-*
- split: contact_info
path: data/contact_info-*
- split: about_us
path: data/about_us-*
- split: our_team
path: data/our_team-*
- split: our_plan
path: data/our_plan-*
---
|