parquet-converter committed
Commit 0fec0e5
Parent: 9d62f0a

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
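Each deleted rule maps a path pattern to Git attributes; the `filter=lfs diff=lfs merge=lfs -text` entries route matching files (including `*.parquet`) through Git LFS instead of storing them directly in the repository. As a rough, hedged sketch (the helper name `parse_gitattributes` is illustrative, and real Git attribute matching has extra rules not handled here), lines of this form can be split into `(pattern, attributes)` pairs:

```python
# Minimal sketch: split .gitattributes-style lines into (pattern, attributes) pairs.
# Assumes the simple "pattern attr1 attr2 ..." layout shown above; negation,
# quoting, and other Git attribute features are not handled.
def parse_gitattributes(text: str) -> list[tuple[str, list[str]]]:
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments such as "# Audio files - uncompressed"
        pattern, *attrs = line.split()
        rules.append((pattern, attrs))
    return rules


print(parse_gitattributes("*.parquet filter=lfs diff=lfs merge=lfs -text"))
# [('*.parquet', ['filter=lfs', 'diff=lfs', 'merge=lfs', '-text'])]
```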
README.md DELETED
@@ -1,179 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - cc-by-4.0
- - gfdl
- multilinguality:
- - monolingual
- size_categories:
- - 100M<n<200M
- source_datasets:
- - https://github.com/shibing624/code-autocomplete
- - https://github.com/bharathgs/Awesome-pytorch-list
- - https://github.com/akullpp/awesome-java
- - https://github.com/fffaraz/awesome-cpp
- task_categories:
- - text-generation
- task_ids:
- - language-modeling
- ---
- # Dataset Card for "SourceCode"
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
- - **Repository:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
- - **Leaderboard:** [leaderboard](https://github.com/shibing624/code-autocomplete) (located on the homepage)
- - **Size of downloaded dataset files:** 105 MB
- - **Total amount of disk used:** 570 MB
-
- ### Dataset Summary
-
- The source code dataset is a collection of GitHub "awesome" repos; it contains Python, Java, C++, and other programming languages.
- This dataset can be used for NLP tasks such as language modeling and text generation.
-
- Data sources:
-
- - PYTHON_CODE: https://github.com/bharathgs/Awesome-pytorch-list
- - JAVA_CODE: https://github.com/akullpp/awesome-java
- - CPP_CODE: https://github.com/fffaraz/awesome-cpp
-
-
- ### Supported Tasks and Leaderboards
- - language modeling
- - code generation tasks, **Leaderboard:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
-
- ### Languages
-
- - programming languages: Python, Java, C++
- - natural language: English
-
- ## Dataset Structure
- ### Data Instances
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "text": """
- import json
- import argparse
-
-
- def _parse_args():
-     parser = argparse.ArgumentParser(
-         description=__doc__,
-         formatter_class=argparse.RawTextHelpFormatter,
-     )
-     parser.add_argument(
-         '--model-file',
-         required=True,
-         help=(
-             'A pt file from '
-             'https://github.com/pytorch/fairseq/tree/main/examples/hubert'
-         )
-     )
-     return parser.parse_args()
- """
- }
- ```
- ### Data Fields
- The data fields are the same among all splits.
- - `text`: a `string` feature.
- ### Data Splits
- #### python
- ```shell
- $ wc -l python/*
- 10000 python/test.txt
- 5215412 python/train.txt
- 10000 python/valid.txt
- 5235412 total
- ```
- #### java
- ```shell
- $ wc -l java/*
- 950083 java/test.txt
- 2802880 java/train.txt
- 940803 java/valid.txt
- 4693766 total
- ```
- #### cpp
- ```shell
- $ wc -l cpp/*
- 1060014 cpp/test.txt
- 3119241 cpp/train.txt
- 1099124 cpp/valid.txt
- 5278379 total
- ```
- ## Dataset Creation
- ### Curation Rationale
- As a code generation dataset, it was uploaded to Hugging Face Datasets.
- ### Source Data
- #### Initial Data Collection and Normalization
- #### Who are the source language producers?
- Citation:
-
- APA:
- ```latex
- Xu, M. code-autocomplete: Code AutoComplete with GPT2 model (Version 0.0.4) [Computer software]. https://github.com/shibing624/code-autocomplete
- ```
-
- BibTeX:
- ```latex
- @software{Xu_code-autocomplete_Code_AutoComplete,
- author = {Xu, Ming},
- title = {code-autocomplete: Code AutoComplete with GPT2 model},
- url = {https://github.com/shibing624/code-autocomplete},
- version = {0.0.4}
- }
- ```
-
- ### Annotations
- #### Annotation process
- #### Who are the annotators?
- nobody
- ### Personal and Sensitive Information
- ## Considerations for Using the Data
- ### Social Impact of Dataset
- This dataset was developed as a benchmark for evaluating code generation models.
- ### Discussion of Biases
- ### Other Known Limitations
- ## Additional Information
- ### Dataset Curators
-
- GitHub awesome programming code repos.
-
- ### Licensing Information
-
- GNU Free Documentation License v1.3 or later.
-
- For research use only.
-
- ### Contributions
- Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.
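The deleted card documents a single `text` field and three configs (python, java, cpp), each with train/validation/test splits. A minimal, hedged loading sketch; the repo id `shibing624/source_code` is an assumption based on the card's title and author, so substitute the actual repository id if it differs:

```python
# Minimal sketch: load one config of the dataset described in the card above.
# The repo id "shibing624/source_code" is assumed; the parquet-backed revision
# is handled transparently by the `datasets` library.
from datasets import load_dataset

dataset = load_dataset("shibing624/source_code", "python")
print(dataset)                      # DatasetDict with train/validation/test splits
print(dataset["train"][0]["text"])  # each example is a single `text` string
```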
cpp/source_code-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ed8fc92d8a90f8fbb6c21b2d092b9a26a1a3f628ea1442e7c70ddbd7f65a6d5
+ size 14777918
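Each added parquet file is stored as a Git LFS pointer: a three-line text stub giving the spec version, the SHA-256 of the real content, and its size in bytes. A hedged parsing sketch (the function name is illustrative only):

```python
# Minimal sketch: parse a Git LFS pointer stub of the form shown above into a dict.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])                    # byte count of the real file
    fields["oid"] = fields["oid"].removeprefix("sha256:")   # strip the hash-algorithm prefix
    return fields


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3ed8fc92d8a90f8fbb6c21b2d092b9a26a1a3f628ea1442e7c70ddbd7f65a6d5
size 14777918"""
print(parse_lfs_pointer(pointer))
```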
cpp/source_code-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24b94f1059f6693185d3cf2cac2efea62619c0f9cd85ccabacb444634b829b2a
+ size 43310554
cpp/source_code-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec29da474598e23dfa838e72328827a4c544a7c936955e7370670395339d4cfa
+ size 13398188
java/source_code-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01cf04b41b788bc1e43084549d3e4d0671fb66ab956d0e03b6f480d2aa5466aa
+ size 13125149
java/source_code-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55321d53ebad853c2830782a990bca28ab6cda05d551fb45f5160f95df5c4262
+ size 39000045
java/source_code-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94f929eeb467866baa79991e4ee18ecde722c635d038546953344dadf46ab042
+ size 12826887
python/source_code-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b61f40a604c9d09ae58629002e0dbf4079a5e26c319b0ef7b6f3c8f624e1420f
+ size 154538
python/source_code-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:587fe15486fa942f14a16c58fe8b96cc5fb6eda765faff3402644d115a8ad987
+ size 84203586
python/source_code-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e7acf2a0e299dd21ec8b48750aca9043cea7c1d1b327715abd0ce0d6b47e945
+ size 151687
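Once the LFS objects are checked out (for example via `git lfs pull`), the pointers above are replaced by real parquet files and the splits can be read directly. A hedged sketch using pandas (requires a parquet engine such as pyarrow); the relative paths mirror the files added in this commit:

```python
# Minimal sketch: read one of the parquet splits added by this commit.
# Works only on the real LFS content, not on the three-line pointer stub.
import pandas as pd

df = pd.read_parquet("python/source_code-test.parquet")
print(df.shape)                   # expected: a single "text" column
print(df["text"].iloc[0][:200])   # first 200 characters of the first snippet
```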
source_code.py DELETED
@@ -1,117 +0,0 @@
- # -*- coding: utf-8 -*-
- """
- @author:XuMing(xuming624@qq.com)
- @description:
- """
-
- """Code AutoComplete Python dataset Corpus.(code_autocomplete)"""
-
- import os
-
- import datasets
-
- _DESCRIPTION = """Plain-text data: high-quality programming source code, including Python, Java, and C++ source code."""
-
- PYTHON_HOME = "https://github.com/bharathgs/Awesome-pytorch-list"
- JAVA_HOME = "https://github.com/akullpp/awesome-java"
- CPP_HOME = "https://github.com/fffaraz/awesome-cpp"
-
- _CITATION = "https://github.com/shibing624/code-autocomplete"
-
- _DATA_URL = "https://github.com/shibing624/code-autocomplete/releases/download/0.0.4/source_code.zip"
-
-
- class SourceCodeConfig(datasets.BuilderConfig):
-     """BuilderConfig for SourceCode."""
-
-     def __init__(self, features, data_url, citation, url, **kwargs):
-         """BuilderConfig for SourceCode.
-         Args:
-             features: `list[string]`, list of the features that will appear in the
-                 feature dict. Should not include "label".
-             data_url: `string`, url to download the zip file from.
-             citation: `string`, citation for the data set.
-             url: `string`, url for information about the data set.
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super().__init__(version=datasets.Version("1.0.0"), **kwargs)
-         self.features = features
-         self.data_url = data_url
-         self.citation = citation
-         self.url = url
-
-
- class SourceCode(datasets.GeneratorBasedBuilder):
-     """The SourceCode corpus of Python, Java, and C++ source code."""
-
-     BUILDER_CONFIGS = [
-         SourceCodeConfig(
-             name="python",
-             description=_DESCRIPTION,
-             features=["text"],
-             data_url=_DATA_URL,
-             citation=_CITATION,
-             url=PYTHON_HOME,
-         ),
-         SourceCodeConfig(
-             name="java",
-             description=_DESCRIPTION,
-             features=["text"],
-             data_url=_DATA_URL,
-             citation=_CITATION,
-             url=JAVA_HOME,
-         ),
-         SourceCodeConfig(
-             name="cpp",
-             description=_DESCRIPTION,
-             features=["text"],
-             data_url=_DATA_URL,
-             citation=_CITATION,
-             url=CPP_HOME,
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=self.config.description,
-             features=datasets.Features(
-                 {
-                     "text": datasets.Value("string"),
-                 }
-             ),
-             homepage=self.config.url,
-             citation=self.config.citation,
-         )
-
-     def _split_generators(self, dl_manager):
-         dl_dir = dl_manager.download_and_extract(self.config.data_url) or ""
-         dl_dir = os.path.join(dl_dir, f"source_code/{self.config.name}")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepath": os.path.join(dl_dir, "train.txt"),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepath": os.path.join(dl_dir, "valid.txt"),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepath": os.path.join(dl_dir, "test.txt"),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """This function returns the examples in the raw (text) form."""
-         with open(filepath, 'r', encoding="utf-8") as f:
-             for idx, row in enumerate(f):
-                 if row.strip():
-                     yield idx, {"text": row}
-                 else:
-                     yield idx, {"text": ""}
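The deleted script streamed each line of `train.txt`/`valid.txt`/`test.txt` into a single `text` feature; the parquet files added above now replace that loader. As a rough, hedged sketch of what such a conversion could look like (input paths mirror the deleted script, output paths mirror the added files; this is not necessarily the converter's actual code):

```python
# Minimal sketch: convert one config's text splits into the parquet layout added
# by this commit. Assumes an extracted source_code/<config>/ directory as produced
# by the deleted loading script; not necessarily what parquet-converter actually ran.
import pandas as pd

SPLITS = {"train": "train.txt", "validation": "valid.txt", "test": "test.txt"}


def convert_config(config: str = "python") -> None:
    for split, filename in SPLITS.items():
        with open(f"source_code/{config}/{filename}", encoding="utf-8") as f:
            # mirror _generate_examples: one example per line, blank lines become ""
            rows = [{"text": line if line.strip() else ""} for line in f]
        pd.DataFrame(rows).to_parquet(f"{config}/source_code-{split}.parquet", index=False)


if __name__ == "__main__":
    convert_config("python")
```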