---
license: cc0-1.0
task_categories:
- text-generation
language:
- en
tags:
- ocr
pretty_name: United States-Public Domain-Newspapers
---

# 🇺🇸 US Public Domain Newspapers 🇺🇸

**US-PD-Newspapers** is an aggregation of all the archives of US newspapers digitized by the Library of Congress for the Chronicling America digital library.

With nearly 100 billion words, it is one of the largest open corpora in the United States. All the materials are now part of the public domain and have no intellectual property rights remaining.

## Content
As of January 2024, the collection contains nearly 21 million unique newspaper and periodical editions published from 1690 to 1963 (98,742,987,471 words).

The collection was compiled by Pierre-Carl Langlais based on the [dumps](https://chroniclingamerica.loc.gov/data/ocr/) made available by the Library of Congress. Each parquet file matches one of the 2,618 original dump files and keeps its code name. It contains the full text of a few thousand editions selected at random along with a few core metadata fields (edition id, date, word counts…). The metadata can easily be expanded thanks to the LOC APIs and other data services.
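
A minimal sketch of how the records might be inspected with the `datasets` library. The repository id (`PleIAs/US-PD-Newspapers`) and the presence of a `text` column are assumptions based on this card rather than a documented schema, so check the field names against your own copy:

```python
# Minimal sketch: stream a few records to see which columns are present.
# Assumptions (not documented on this card): the repository id
# "PleIAs/US-PD-Newspapers" and a "text" column holding the OCR transcription.
from datasets import load_dataset

ds = load_dataset("PleIAs/US-PD-Newspapers", split="train", streaming=True)

for i, record in enumerate(ds):
    print(sorted(record.keys()))              # available metadata fields
    print(str(record.get("text", ""))[:300])  # preview of the OCR text
    if i >= 2:
        break
```

Individual parquet files can also be read directly with `pandas.read_parquet` or `pyarrow` when only one of the original dumps is needed.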

The [American Stories dataset](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) is a curated and enhanced version of the same resource, with significant improvements in text quality and documentation. It currently retains about 20% of the original material.

## Language

While most of the collection is in English, it also covers a variety of other European languages, especially German (600k editions) and Spanish (400k editions).

## Uses
The primary use of the collection is for cultural analytics on a wide scale. It has been instrumental in major digital humanities projects like [Viral Texts](https://viraltexts.org/).

The collection also aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.

## License
The composition of the dataset adheres to the US criteria for the public domain (any publication without a copyright renewal). In accordance with the rule of the shorter term, the dataset is in the public domain in all countries with a Berne author-right model.

The Library of Congress does not claim any additional rights: "As a publicly supported institution, we generally do not own the rights to materials in our collections. You should determine for yourself whether or not an item is protected by copyright or in the public domain, and then satisfy any copyright or use restrictions when publishing or distributing materials from our collections."

## Future developments
This dataset is not a one-time work but will continue to evolve significantly in several directions:
* Correction of computer-generated errors in the text. All the texts have been transcribed automatically through the use of Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s).
* Enhancement of the structure/editorial presentation of the original text. Some parts of the original documents are likely unwanted for large-scale analysis or model training (headers, page counts…). Additionally, some advanced document structures like tables or multi-column layouts are unlikely to be well formatted. Major enhancements could be expected from applying new SOTA layout-recognition models to the original PDF files.
* Expansion of the collection to other cultural heritage holdings, especially coming from Hathi Trust, Internet Archive and Google Books.

The American Stories dataset already includes some of these features (especially better OCR and article-level segmentation) and may be a preferable solution if text quality is a concern.

## Acknowledgements
The corpus was stored and processed with the generous support of [OpenLLM France](https://www.openllm-france.fr/) and Scaleway. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC).

Corpus collection has been largely facilitated by the insights and cooperation of the open science LLM community (Occiglot, Eleuther AI, Allen AI).

<div style="text-align: center;">
  <img src="https://github.com/mch-dd/datasetlogo/blob/main/scaleway.jpeg?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://github.com/mch-dd/datasetlogo/blob/main/ministere.png?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://github.com/mch-dd/datasetlogo/blob/main/occiglot.jpg?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/>
</div>