Tasks: Text Generation • Modalities: Text • Formats: parquet • Languages: English • Size: 10M - 100M • Tags: ocr
Pclanglais committed • Commit 0ead5fe • 1 Parent(s): 409b8eb
Update README.md

README.md CHANGED
@@ -14,11 +14,11 @@ pretty_name: United States-Public Domain-Newspapers
 With nearly 100 billion words, it is one of the largest open corpora in the United States. All the materials are now part of the public domain and have no intellectual property rights remaining.
 
 ## Content
-As of January 2024, the collection contains nearly 21 million unique newspaper and periodical editions published from the
+As of January 2024, the collection contains nearly 21 million unique newspaper and periodical editions published from 1690 to 1963 (98,742,987,471 words).
 
 The collection was compiled by Pierre-Carl Langlais based on the [dumps](https://chroniclingamerica.loc.gov/data/ocr/) made available by the Library of Congress. Each parquet file matches one of the 2618 original dump files, including their code name. It has the full text of a few thousand editions selected at random and a few core metadata fields (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.
 
-The [American Stories dataset](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) is a curated and enhanced version of the same resource, with significant progress with regard to text quality and documentation. It currently retains about
+The [American Stories dataset](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) is a curated and enhanced version of the same resource, with significant progress with regard to text quality and documentation. It currently retains about 20% of the original material.
 
 ## Language
 
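The updated README above describes each parquet dump file as carrying the full text of the editions together with a few core metadata fields (edition id, date, word counts). Below is a minimal sketch of how one such file could be inspected locally with pandas; the file name and the `word_count` column name are illustrative assumptions rather than confirmed names from the dataset.

```python
# Minimal sketch: inspect one of the parquet dump files locally.
# The file name is illustrative; real files carry the LOC dump code names.
import pandas as pd

df = pd.read_parquet("batch_example_ver01.parquet")  # hypothetical file name

# List whichever columns are actually present (edition id, date, word counts, text, ...)
print(df.columns.tolist())
print(f"{len(df)} editions in this dump file")

# Rough size distribution, assuming a word-count column is exposed under this name.
if "word_count" in df.columns:
    print(df["word_count"].describe())
```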
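The README also notes that the core metadata can be expanded through the LOC APIs. Below is a hedged sketch against the Chronicling America JSON endpoints; the LCCN value, and the idea of deriving it from a stored edition id, are assumptions for illustration.

```python
# Hedged sketch: pull title-level metadata from the Chronicling America JSON API.
import requests

lccn = "sn86069873"  # hypothetical LCCN, in practice derived from an edition id
resp = requests.get(f"https://chroniclingamerica.loc.gov/lccn/{lccn}.json", timeout=30)
resp.raise_for_status()
title = resp.json()

# Title-level fields not shipped in the parquet dumps (name, place of publication, ...)
print(title.get("name"), "|", title.get("place_of_publication"))
```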