update namespace
README.md CHANGED
@@ -30,7 +30,7 @@ The GitHub Code dataset is a very large dataset so for most use cases it is reco
```python
from datasets import load_dataset

-ds = load_dataset("
+ds = load_dataset("codeparrot/github-code", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
@@ -47,7 +47,7 @@ print(next(iter(ds)))
You can see that besides the code, repo name, and path, the programming language, license, and the size of the file are also part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below): just pass the languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles, use the following configuration:

```python
-ds = load_dataset("
+ds = load_dataset("codeparrot/github-code", streaming=True, split="train", languages=["Dockerfile"])
print(next(iter(ds))["code"])

#OUTPUT:
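As a small aside on the hunk above: since `languages` takes a list, you can just as well stream a subset of several languages at once. A minimal sketch, assuming the spellings "Python" and "Java" match the language table further down the README, and that field names like `"language"` and `"path"` follow the prose above:

```python
from datasets import load_dataset

# Stream only Python and Java files; the language spellings are assumed
# to match the entries in the README's language table.
ds = load_dataset(
    "codeparrot/github-code",
    streaming=True,
    split="train",
    languages=["Python", "Java"],
)

# Peek at one example; the "language" and "path" field names are assumed
# from the prose above (language, repo name, and path are part of each example).
example = next(iter(ds))
print(example["language"], example["path"])
```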
@@ -63,7 +63,7 @@ ENV DEBIAN_FRONTEND="noninteractive" \
We also have access to the license of a file's origin repo, so we can filter for licenses in the same way we filtered for languages:

```python
-ds = load_dataset("
+ds = load_dataset("codeparrot/github-code", streaming=True, split="train", licenses=["mit", "isc"])

licenses = []
for element in iter(ds).take(10_000):
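One note on the context lines in the hunk above: `iter(ds)` returns a plain Python iterator, which has no `.take()` method, so the loop as written may fail depending on the `datasets` version. A minimal, runnable sketch of the same license count using `itertools.islice` instead; the `"license"` field name is assumed from the surrounding README, and the expected counts are taken from the next hunk header:

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

# Stream only MIT- and ISC-licensed files, as in the hunk above.
ds = load_dataset(
    "codeparrot/github-code",
    streaming=True,
    split="train",
    licenses=["mit", "isc"],
)

# Count licenses over the first 10,000 streamed examples.
# islice works on any iterator, so it does not rely on a .take() method.
licenses = [example["license"] for example in islice(iter(ds), 10_000)]
print(Counter(licenses))
# Expected (per the README): Counter({'mit': 9896, 'isc': 104})
```

Recent `datasets` releases also expose `ds.take(10_000)` directly on the streaming dataset itself, which returns another iterable dataset and avoids the issue as well.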
@@ -76,7 +76,7 @@ Counter({'mit': 9896, 'isc': 104})

Naturally, you can also download the full dataset. Note that this will download ~300GB of compressed text data, and the uncompressed dataset will take up ~1TB of storage:
```python
-ds = load_dataset("
+ds = load_dataset("codeparrot/github-code", split="train")
```

## Data Structure
@@ -174,7 +174,7 @@ Each example is also annotated with the license of the associated repository. Th

The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:

-![dataset-statistics](https://huggingface.co/datasets/
+![dataset-statistics](https://huggingface.co/datasets/codeparrot/github-code/resolve/main/github-code-stats-alpha.png)

| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
@@ -213,8 +213,8 @@ The dataset contains 115M files and the sum of all the source code file sizes is
## Dataset Creation

The dataset was created in two steps:
-1. Files of with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/
-2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co/datasets/
+1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/query.sql)). The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_.
+2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/github_preprocessing.py)).

## Considerations for Using the Data

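For step 2 in the hunk above, the actual `github_preprocessing.py` script is not shown in this diff, so the following is only a rough sketch of what a line-length filter combined with whitespace-insensitive exact deduplication could look like; every name and detail here is an assumption, not the script's real implementation:

```python
import hashlib
from typing import Dict, Iterable, Iterator


def keep_file(code: str, max_line_length: int = 1000) -> bool:
    """Keep only files in which no single line exceeds max_line_length characters."""
    return all(len(line) <= max_line_length for line in code.splitlines())


def content_key(code: str) -> str:
    """Hash the code with all whitespace removed, so files that differ only in
    whitespace count as exact duplicates."""
    return hashlib.sha256("".join(code.split()).encode("utf-8")).hexdigest()


def preprocess(examples: Iterable[Dict]) -> Iterator[Dict]:
    """Drop files with over-long lines and whitespace-insensitive exact duplicates."""
    seen = set()
    for example in examples:
        code = example["code"]
        if not keep_file(code):
            continue
        key = content_key(code)
        if key in seen:
            continue
        seen.add(key)
        yield example
```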
@@ -225,7 +225,7 @@ The dataset consists of source code from a wide range of repositories. As such t
You can load any older version of the dataset with the `revision` argument:

```Python
-ds = load_dataset("
+ds = load_dataset("codeparrot/github-code", revision="v1.0")
```

### v1.0