script to download the data #7
by eminorhan - opened

For posterity: the following script successfully downloads the data:

import boto3
import gzip
from botocore import UNSIGNED
from botocore.config import Config
from datasets import load_dataset
from botocore.exceptions import ClientError


# anonymous (unsigned) client for the public Software Heritage S3 bucket
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
bucket_name = "softwareheritage"


def download_contents(files):
    # fetch and decompress each blob referenced by one row from the public S3 bucket
    download_success = True
    for file in files:
        try:
            key = f"content/{file['blob_id']}"
            obj = s3.get_object(Bucket=bucket_name, Key=key)
            # blobs are stored gzip-compressed; decompress and decode on the fly
            with gzip.GzipFile(fileobj=obj['Body']) as fin:
                file["text"] = fin.read().decode("utf-8", errors="ignore")
        except ClientError as e:
            if e.response['Error']['Code'] == 'NoSuchKey':
                print(f"File not found: {key}")
                file["text"] = ""
                download_success = False
            else:
                # re-raise anything other than a missing key so failures are not silently swallowed
                raise
    return {"files": files, "download_success": download_success}


num_proc = 1000  # adjust this number based on your setup
ds = load_dataset("bigcode/the-stack-v2-train-smol-ids", split="train", num_proc=num_proc, trust_remote_code=True)
ds = ds.map(lambda row: download_contents(row["files"]), num_proc=num_proc)
ds = ds.filter(lambda x: x['download_success'], num_proc=num_proc)  # filter out failed downloads

# print the first example to verify the data
print(ds[0])

# optionally, save the preprocessed data to disk
ds.save_to_disk('LOCAL_PATH', num_shards=3000)
print('Done!')

The download speed slows down toward the end, but the script finishes successfully (in my experience, with num_proc=1000, it takes about 11 hours to download ~99% of the data and another 15 hours for the remaining ~1%!). Make sure to adjust the number of processes (num_proc) and the optional local save path ('LOCAL_PATH') for your setup. The final dataset takes up ~1.5 TB of disk space, and you need another ~1.6 TB for the cache files (which you can delete once you have verified that the full dataset downloaded successfully), so make sure you have enough disk space.
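If it helps, here is a minimal sketch of one way to check the saved copy before deleting the cache files. It only uses standard datasets-library calls (load_from_disk and the configured cache path), and 'LOCAL_PATH' is the same placeholder used above:

from datasets import load_from_disk
import datasets

# reload the saved copy to confirm it is complete and readable
reloaded = load_from_disk('LOCAL_PATH')  # same placeholder path passed to save_to_disk above
print(reloaded)                                # row count and column names
print(reloaded[0]['files'][0]['text'][:200])   # spot-check the contents of one blob

# the intermediate map/filter results live in the datasets cache directory;
# once the reload above looks right, it can be cleared to reclaim the ~1.6 TB
print(datasets.config.HF_DATASETS_CACHE)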
