Script to download all files of 1B sample data locally
#13
by
ivanzhouyq
Thanks to the Together team for preparing RPV2 as well as the 1B sample dataset!!
I need to download the sample data locally to debug and test processing scripts in Apache Beam. `load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")` downloads the files as Arrow instead of in their original formats, which doesn't work for my case.
So I made a script that downloads all sample files locally via `hf_hub_download`. Thought it might be helpful for others with similar needs.
```python
from huggingface_hub import hf_hub_download

repo_id = "togethercomputer/RedPajama-Data-V2"
sample_list_name = "sample/sample_listings.txt"
directories_postfix_map = {
    "documents": ".json.gz",
    "quality_signals": ".signals.json.gz",
    "minhash": ".minhash.parquet",
    "duplicates": ".duplicates.parquet",
}

# download and read sample_listings.txt
filepath = hf_hub_download(repo_id=repo_id, filename=sample_list_name, repo_type="dataset")
with open(filepath, "r") as f:
    sample_listings = f.read().splitlines()

# download all files in sample_listings.txt
for sample_listing in sample_listings:
    for directory, postfix in directories_postfix_map.items():
        filename = f"sample/{directory}/{sample_listing}{postfix}"
        cache_file_path = hf_hub_download(
            repo_id=repo_id,
            filename=filename,
            repo_type="dataset",
        )
        print(f"Downloaded {filename} to {cache_file_path}")
```
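As an alternative (a sketch I haven't run against the full repo), `huggingface_hub.snapshot_download` accepts `allow_patterns`, so the four sample directories can be fetched with a single call instead of looping over individual files:

```python
from huggingface_hub import snapshot_download

repo_id = "togethercomputer/RedPajama-Data-V2"

# Glob patterns covering the four sample directories and their file suffixes.
allow_patterns = [
    "sample/documents/*.json.gz",
    "sample/quality_signals/*.signals.json.gz",
    "sample/minhash/*.minhash.parquet",
    "sample/duplicates/*.duplicates.parquet",
]

def download_sample():
    """Fetch everything matching the patterns; returns the local snapshot dir."""
    return snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        allow_patterns=allow_patterns,
    )
```

One difference in behavior: `snapshot_download` mirrors the repo's directory layout under a single local folder, rather than returning one cache path per file.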
really nice, thanks @ivanzhouyq!
FWIW, here's a parallel implementation in Spark: https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2/discussions/22
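For a pure-Python parallel variant without Spark (a sketch; concurrent `hf_hub_download` calls for distinct files are assumed safe since each file resolves to its own cache path), a thread pool works too:

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

from huggingface_hub import hf_hub_download

repo_id = "togethercomputer/RedPajama-Data-V2"
postfix_map = {
    "documents": ".json.gz",
    "quality_signals": ".signals.json.gz",
    "minhash": ".minhash.parquet",
    "duplicates": ".duplicates.parquet",
}

def build_filenames(listings):
    """Expand each sample listing into its four per-directory filenames."""
    return [
        f"sample/{directory}/{listing}{postfix}"
        for listing, (directory, postfix) in itertools.product(listings, postfix_map.items())
    ]

def download_all(listings, max_workers=8):
    """Download all files for the given listings concurrently; returns cache paths."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(
            lambda fn: hf_hub_download(repo_id=repo_id, filename=fn, repo_type="dataset"),
            build_filenames(listings),
        ))
```

Since the work is I/O-bound, threads (rather than processes) are enough to keep several downloads in flight.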