# GitHub Code Dataset

## What is it?

The GitHub Code dataset consists of 115M code files from GitHub in 30 programming languages with over 60 extensions, totalling 1TB of text data. The dataset was created from the public GitHub dataset on Google BigQuery.

## How to use it

The GitHub Code dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with just a few lines of code:

```python
from datasets import load_dataset

ds = load_dataset("github-code", streaming=True, split="train")
print(next(iter(ds)))

{
  'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
  'repo_name': 'MirekSz/webpack-es6-ts',
  'path': 'app/mods/mod190.js',
  'language': 'JavaScript',
  'license': 'isc',
  'size': 73
}
```
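
If you want to look at more than the first example, `itertools.islice` from the standard library is a convenient way to pull a small sample from the streaming iterator (a minimal sketch; the sample size of 3 is arbitrary):

```python
from itertools import islice

# Peek at the first three examples without downloading the full dataset
for example in islice(iter(ds), 3):
    print(example["repo_name"], example["path"], example["size"])
```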

Besides the code, repo name, and path, each example also contains the programming language, the license, and the size of the file. You can also filter the dataset for any subset of the 30 included languages (see the full list below) by passing them as a list:

```python
ds = load_dataset("github-code", streaming=True, split="train", languages=["Dockerfile"])
print(next(iter(ds))["code"])

"""\
FROM rockyluke/ubuntu:precise

ENV DEBIAN_FRONTEND="noninteractive" \
    TZ="Europe/Amsterdam"
...
"""
```
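
Since `languages` takes a list, you can also stream several languages at once (the values here are just illustrative):

```python
# Stream only Python and Java files
ds = load_dataset("github-code", streaming=True, split="train", languages=["Python", "Java"])
print(next(iter(ds))["language"])
```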

We also have access to the license of the repository a file comes from, so we can filter for licenses in the same way we filtered for languages:

```python
from collections import Counter

ds = load_dataset("github-code", streaming=True, split="train", licenses=["mit", "isc"])

licenses = []
iterable = iter(ds)
for i in range(10_000):
    element = next(iterable)
    licenses.append(element["license"])
print(Counter(licenses))

Counter({'mit': 9896, 'isc': 104})
```
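
The `languages` and `licenses` filters can also be combined (again, the values are just illustrative):

```python
# Stream only MIT-licensed Python files
ds = load_dataset("github-code", streaming=True, split="train",
                  languages=["Python"], licenses=["mit"])
print(next(iter(ds))["license"])  # 'mit'
```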

Naturally, you can also download the full dataset. Note that this will download ~300GB of compressed text data and the uncompressed dataset will take up ~1TB of storage:

```python
ds = load_dataset("github-code", split="train")
```
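
If the default Hugging Face cache directory sits on a small disk, you can point the download somewhere else with the standard `cache_dir` argument of `load_dataset` (the path below is a placeholder):

```python
# Hypothetical path; pick a disk with at least ~1TB of free space
ds = load_dataset("github-code", split="train", cache_dir="/mnt/bigdisk/hf_cache")
```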

## Languages

The dataset contains 30 programming languages with over 60 extensions:

```python
{
    "Assembly": [".asm"],
    "Batchfile": [".bat", ".cmd"],
    "C": [".c", ".h"],
    "C#": [".cs"],
    "C++": [".cpp", ".hpp", ".c++", ".h++", ".cc", ".hh", ".C", ".H"],
    "CMake": [".cmake"],
    "CSS": [".css"],
    "Dockerfile": [".dockerfile", "Dockerfile"],
    "FORTRAN": ['.f90', '.f', '.f03', '.f08', '.f77', '.f95', '.for', '.fpp'],
    "GO": [".go"],
    "Haskell": [".hs"],
    "HTML": [".html"],
    "Java": [".java"],
    "JavaScript": [".js"],
    "Julia": [".jl"],
    "Lua": [".lua"],
    "Makefile": ["Makefile"],
    "Markdown": [".md", ".markdown"],
    "PHP": [".php", ".php3", ".php4", ".php5", ".phps", ".phpt"],
    "Perl": [".pl", ".pm", ".pod", ".perl"],
    "PowerShell": ['.ps1', '.psd1', '.psm1'],
    "Python": [".py"],
    "Ruby": [".rb"],
    "Rust": [".rs"],
    "SQL": [".sql"],
    "Scala": [".scala"],
    "Shell": [".sh", ".bash", ".command", ".zsh"],
    "TypeScript": [".ts", ".tsx"],
    "TeX": [".tex"],
    "Visual Basic": [".vb"]
}
```

## Licenses

Each example is also annotated with the license of the associated repository. There are 15 licenses in total:

```python
[
    'mit',
    'apache-2.0',
    'gpl-3.0',
    'gpl-2.0',
    'bsd-3-clause',
    'agpl-3.0',
    'lgpl-3.0',
    'lgpl-2.1',
    'bsd-2-clause',
    'cc0-1.0',
    'epl-1.0',
    'mpl-2.0',
    'unlicense',
    'isc',
    'artistic-2.0'
]
```

## Dataset creation

The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/lvwerra/github-code/blob/main/query.sql)). The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_.
2. Files with lines longer than 1000 characters were dropped, as were exact duplicates (ignoring whitespace); a sketch of this filtering logic is shown below (full preprocessing script [here](https://huggingface.co/datasets/lvwerra/github-code/blob/main/github_preprocessing.py)).
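
For illustration, the second step could be implemented along these lines (a minimal sketch of the described filters, not the actual preprocessing script linked above; `keep_file` is a hypothetical helper):

```python
import hashlib

def keep_file(code: str, seen_hashes: set, max_line_length: int = 1000) -> bool:
    """Return True if a file passes the line-length and deduplication filters."""
    # Drop files that contain any line longer than max_line_length characters
    if any(len(line) > max_line_length for line in code.splitlines()):
        return False
    # Drop exact duplicates, comparing content with all whitespace removed
    digest = hashlib.md5("".join(code.split()).encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True

seen = set()
files = [" x = 1\n", "x=1", "y" * 1001]
print([f for f in files if keep_file(f, seen)])  # keeps only the first file
```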
|