parquet-converter committed on
Commit 73a5d66 · 1 Parent(s): 505fd27

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,201 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - found
- language:
- - en
- license:
- - unknown
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - text-classification
- task_ids:
- - sentiment-classification
- paperswithcode_id: sst
- pretty_name: Stanford Sentiment Treebank v2
- dataset_info:
-   features:
-   - name: idx
-     dtype: int32
-   - name: sentence
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: negative
-           1: positive
-   splits:
-   - name: train
-     num_bytes: 4690022
-     num_examples: 67349
-   - name: validation
-     num_bytes: 106361
-     num_examples: 872
-   - name: test
-     num_bytes: 216868
-     num_examples: 1821
-   download_size: 7439277
-   dataset_size: 5013251
- ---
-
- # Dataset Card for Stanford Sentiment Treebank v2
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://nlp.stanford.edu/sentiment/
- - **Repository:**
- - **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://www.aclweb.org/anthology/D13-1170/)
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the
- compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005)
- and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and
- includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.
-
- Binary classification experiments on full sentences (negative or somewhat negative vs. somewhat positive or positive,
- with neutral sentences discarded) refer to the dataset as SST-2 or SST binary.
-
- ### Supported Tasks and Leaderboards
-
- - `sentiment-classification`
-
- ### Languages
-
- The text in the dataset is in English (`en`).
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {'idx': 0,
-  'sentence': 'hide new secretions from the parental units ',
-  'label': 0}
- ```
-
- ### Data Fields
-
- - `idx`: Monotonically increasing index ID.
- - `sentence`: Complete sentence expressing an opinion about a film.
- - `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1).
-
- ### Data Splits
-
- |                    |  train | validation | test |
- |--------------------|-------:|-----------:|-----:|
- | Number of examples |  67349 |        872 | 1821 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- Rotten Tomatoes reviewers.
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- Unknown.
-
- ### Citation Information
-
- ```bibtex
- @inproceedings{socher-etal-2013-recursive,
-     title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
-     author = "Socher, Richard and
-       Perelygin, Alex and
-       Wu, Jean and
-       Chuang, Jason and
-       Manning, Christopher D. and
-       Ng, Andrew and
-       Potts, Christopher",
-     booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
-     month = oct,
-     year = "2013",
-     address = "Seattle, Washington, USA",
-     publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/D13-1170",
-     pages = "1631--1642",
- }
- ```
-
- ### Contributions
-
- Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "The Stanford Sentiment Treebank consists of sentences from movie reviews and\nhuman annotations of their sentiment. The task is to predict the sentiment of a\ngiven sentence. We use the two-way (positive/negative) class split, and use only\nsentence-level labels.\n", "citation": "@inproceedings{socher2013recursive,\n  title={Recursive deep models for semantic compositionality over a sentiment treebank},\n  author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},\n  booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing},\n  pages={1631--1642},\n  year={2013}\n}\n", "homepage": "https://nlp.stanford.edu/sentiment/", "license": "Unknown", "features": {"idx": {"dtype": "int32", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["negative", "positive"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "sst2", "config_name": "default", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4690022, "num_examples": 67349, "dataset_name": "sst2"}, "validation": {"name": "validation", "num_bytes": 106361, "num_examples": 872, "dataset_name": "sst2"}, "test": {"name": "test", "num_bytes": 216868, "num_examples": 1821, "dataset_name": "sst2"}}, "download_checksums": {"https://dl.fbaipublicfiles.com/glue/data/SST-2.zip": {"num_bytes": 7439277, "checksum": "d67e16fb55739c1b32cdce9877596db1c127dc322d93c082281f64057c16deaa"}}, "download_size": 7439277, "post_processing_size": null, "dataset_size": 5013251, "size_in_bytes": 12452528}}
default/sst2-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3daad2315d7a8ec2f91db0a6d032e6d277d8f49405fd60ef6c86049b371ca47b
+ size 147786
default/sst2-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3811f7223cbabd449b2ce95e8aa4ef9ebe3d27627f9dd0383d3497f2338c003
+ size 3110457
default/sst2-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98f971906bc299d17edbb001cbf48b6afab96b6dd9fb3da4879a852ee40f4386
+ size 72812
sst2.py DELETED
@@ -1,105 +0,0 @@
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """SST-2 (Stanford Sentiment Treebank v2) dataset."""
-
-
- import csv
- import os
-
- import datasets
-
-
- _CITATION = """\
- @inproceedings{socher2013recursive,
-   title={Recursive deep models for semantic compositionality over a sentiment treebank},
-   author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},
-   booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing},
-   pages={1631--1642},
-   year={2013}
- }
- """
-
- _DESCRIPTION = """\
- The Stanford Sentiment Treebank consists of sentences from movie reviews and
- human annotations of their sentiment. The task is to predict the sentiment of a
- given sentence. We use the two-way (positive/negative) class split, and use only
- sentence-level labels.
- """
-
- _HOMEPAGE = "https://nlp.stanford.edu/sentiment/"
-
- _LICENSE = "Unknown"
-
- _URL = "https://dl.fbaipublicfiles.com/glue/data/SST-2.zip"
-
-
- class Sst2(datasets.GeneratorBasedBuilder):
-     """SST-2 dataset."""
-
-     VERSION = datasets.Version("2.0.0")
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "idx": datasets.Value("int32"),
-                 "sentence": datasets.Value("string"),
-                 "label": datasets.features.ClassLabel(names=["negative", "positive"]),
-             }
-         )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         dl_dir = dl_manager.download_and_extract(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "file_paths": dl_manager.iter_files(dl_dir),
-                     "data_filename": "train.tsv",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "file_paths": dl_manager.iter_files(dl_dir),
-                     "data_filename": "dev.tsv",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "file_paths": dl_manager.iter_files(dl_dir),
-                     "data_filename": "test.tsv",
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, file_paths, data_filename):
-         for file_path in file_paths:
-             filename = os.path.basename(file_path)
-             if filename == data_filename:
-                 with open(file_path, encoding="utf8") as f:
-                     reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
-                     for idx, row in enumerate(reader):
-                         yield idx, {
-                             "idx": row["index"] if "index" in row else idx,
-                             "sentence": row["sentence"],
-                             "label": int(row["label"]) if "label" in row else -1,
-                         }
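The deleted loading script parsed GLUE's TSV files with `csv.DictReader` before yielding examples. The row-handling core of `_generate_examples` can be exercised standalone; this is a sketch with inline sample data (the two rows are invented, the parsing logic mirrors the script above):

```python
import csv
import io

# Two rows in the format of GLUE's SST-2 train.tsv: tab-separated, header row,
# no quoting (hence csv.QUOTE_NONE in the original script).
SAMPLE_TSV = (
    "sentence\tlabel\n"
    "hide new secretions from the parental units \t0\n"
    "a gripping film\t1\n"
)

def generate_examples(f):
    """Mirror the deleted script's row handling: fall back to the enumeration
    index when there is no 'index' column, and to -1 for unlabeled test rows."""
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for idx, row in enumerate(reader):
        yield idx, {
            "idx": row["index"] if "index" in row else idx,
            "sentence": row["sentence"],
            "label": int(row["label"]) if "label" in row else -1,
        }

examples = list(generate_examples(io.StringIO(SAMPLE_TSV)))
print(examples[0][1]["label"])  # 0
```

After this commit, no script runs at load time: the Hub serves the Parquet shards directly, so this TSV parsing (and the checksum-verified download it depended on) is no longer needed.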