Help needed!

#16
opened by WatsonOverHere

I’ve been attempting to download the C4 dataset in its unfiltered English (en.noblocklist) and Spanish (c4-es) variants using the Hugging Face datasets library in streaming mode (streaming=True). I've tried multiple approaches, but I keep running into persistent issues at different stages of the process and end up with unusable datasets. I primarily rely on AI models to assist me with these tasks, but on this one I'm just hitting a wall.
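For reference, my scripts start from roughly this pattern (a sketch, not my exact code; I'm assuming "en.noblocklist" and "es" are the right config names on the allenai/c4 repo):

```python
from datasets import load_dataset

# Stream the two C4 variants instead of downloading everything up front.
en_stream = load_dataset("allenai/c4", "en.noblocklist", split="train", streaming=True)
es_stream = load_dataset("allenai/c4", "es", split="train", streaming=True)

# Each record is a dict with "text", "url", and "timestamp" fields.
first = next(iter(en_stream))
print(first["url"], first["text"][:200])
```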

My Goal:

Complete Datasets: Full download of the uncensored English dataset (en.noblocklist) and Spanish dataset (c4-es), without partial or truncated results.
Clean Data: Files that are free from corrupted encoding, malformed JSON, or garbled characters (e.g., ni├▒a instead of niña).
Efficient Process: Avoid wasting hours on failed attempts due to incomplete downloads or errors during validation.

Challenges:
Partial Downloads:

In some cases, my scripts save only 1–2 entries instead of the entire dataset.
This appears to result from loops that fail to iterate all the way through the dataset iterator, even though every data file is resolved (e.g., 1024/1024 resolved). The kind of loop I'm aiming for is sketched below.
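This is the shape of the loop I'm trying to get working (the output filename is a placeholder): iterate the streaming split to the end, write one JSON object per line, and log progress so a silent early exit would be obvious.

```python
import json
from datasets import load_dataset

stream = load_dataset("allenai/c4", "en.noblocklist", split="train", streaming=True)

out_path = r"C:\local_ai\c4\en_cleaned\c4-en.noblocklist.jsonl"  # placeholder filename
count = 0
with open(out_path, "w", encoding="utf-8") as f:
    for example in stream:  # no break/return inside the loop
        # default=str guards against any non-string field (e.g. a datetime timestamp)
        f.write(json.dumps(example, ensure_ascii=False, default=str) + "\n")
        count += 1
        if count % 100_000 == 0:
            print(f"{count:,} records written", flush=True)

print(f"Done: {count:,} records written in total")
```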

Data Corruption:

When the full dataset is streamed and saved, the resulting files often contain improperly encoded or garbled characters.
Example: ni├▒a instead of niña.
Despite specifying UTF-8 encoding during saving, these issues persist, particularly with non-English datasets like c4-es.
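One thing I keep trying to confirm is whether the saved bytes are actually wrong or whether the Windows console is just misrendering them: ni├▒a is exactly what UTF-8 niña looks like when decoded as CP437. This is the kind of read-back check I run (the filename is a placeholder):

```python
# Inspect a saved line without trusting the console's rendering.
with open(r"C:\local_ai\c4\es_cleaned\sample.jsonl", encoding="utf-8") as f:  # placeholder file
    line = f.readline()

print(repr(line[:120]))  # escapes non-ASCII so the active code page can't garble it

# The classic mojibake: UTF-8 "niña" decoded as CP437 prints "ni├▒a".
print("niña".encode("utf-8").decode("cp437"))
```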
Validation Errors After Long Downloads:

Even after 5–12 hours, the final file either doesn't save correctly or validation scripts fail afterwards due to corrupted entries or invalid JSON structure. When that happens it means a full start-over, because my "pick up where I left off" scripts have not been successful (roughly what I'm trying to do there is sketched below).
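The resume logic I've been attempting looks roughly like this (the path is a placeholder, and it assumes the last line on disk is complete; a torn final line would need to be truncated first). The stream still has to read past the skipped records, so resuming isn't free, but at least nothing gets rewritten:

```python
import itertools
import json
from datasets import load_dataset

out_path = r"C:\local_ai\c4\es_cleaned\c4-es.jsonl"  # placeholder filename

# Count complete lines already on disk so a crashed run can pick up where it left off.
try:
    with open(out_path, encoding="utf-8") as f:
        already_done = sum(1 for _ in f)
except FileNotFoundError:
    already_done = 0

stream = load_dataset("allenai/c4", "es", split="train", streaming=True)

with open(out_path, "a", encoding="utf-8") as f:
    # Skip the records saved before the crash, then continue appending.
    for example in itertools.islice(stream, already_done, None):
        f.write(json.dumps(example, ensure_ascii=False, default=str) + "\n")
```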

Excessive Fix Attempts:

I’ve tried dozens of AI-generated Python scripts to clean and fix the corrupted files, using tools like ftfy and unicodedata.normalize, but the root issues persist.

Environment

Base Directory: C:\local_ai\c4
Output Paths:
C:\local_ai\c4\en_cleaned for the unfiltered English (en.noblocklist) data.
C:\local_ai\c4\es_cleaned for the Spanish (c4-es) data.

Execution Environment:
Windows 11 Pro with Conda environment pytorch-env.
All scripts saved in UTF-8 encoding.

- How can I ensure the entire dataset streams successfully without loops terminating prematurely or silently skipping entries?

- How do I address or prevent data corruption and garbled characters during streaming or saving?

- Are there any best practices for handling raw web-scraped datasets like C4, especially when streaming non-English subsets?

Any insights or advice would be greatly appreciated! This issue has been ongoing for days, and I'm pretty bummed.

Thank you for any help.

Ai2 org

I can't help you with your code, but when downloading data of this size, you should probably use git to download the raw files.
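If a plain git clone of the dataset repo is awkward on Windows, huggingface_hub.snapshot_download does roughly the same thing (it pulls the raw .json.gz files to disk). A sketch of that, where the folder patterns are my guess from the repo's file listing and the destination path is a placeholder:

```python
from huggingface_hub import snapshot_download

# Download the raw compressed JSON files for just these two subsets.
# The allow_patterns folder names are assumptions; adjust them to the repo listing.
snapshot_download(
    repo_id="allenai/c4",
    repo_type="dataset",
    local_dir=r"C:\local_ai\c4\raw",  # placeholder destination
    allow_patterns=["en.noblocklist/*", "multilingual/c4-es.*"],
)
```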

The encoding issues are most likely already present in the source data. The easiest thing to do is to run ftfy over the whole text, but it'll take some time to complete.
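A minimal sketch of what that could look like, assuming the data has been saved as JSON lines (both paths are placeholders):

```python
import json
from ftfy import fix_text

src = r"C:\local_ai\c4\es_cleaned\c4-es.jsonl"        # placeholder input
dst = r"C:\local_ai\c4\es_cleaned\c4-es.fixed.jsonl"  # placeholder output

# Repair mojibake in the "text" field of every record and write a fixed copy.
with open(src, encoding="utf-8") as fin, open(dst, "w", encoding="utf-8") as fout:
    for line in fin:
        record = json.loads(line)
        record["text"] = fix_text(record["text"])
        fout.write(json.dumps(record, ensure_ascii=False) + "\n")
```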

Thanks Dirk. The instructions, however, present the data as if it were already cleaned, when in fact it is not. ftfy wouldn't even resolve. I'll keep trying. Thanks for your response!
